Dataset column summary: id (int64, 39 to 79M); url (string, lengths 31 to 227); text (string, lengths 6 to 334k); source (string, lengths 1 to 150); categories (list, lengths 1 to 6); token_count (int64, 3 to 71.8k); subcategories (list, lengths 0 to 30).
2,166,803
https://en.wikipedia.org/wiki/Class-D%20amplifier
A class-D amplifier or switching amplifier is an electronic amplifier in which the amplifying devices (transistors, usually MOSFETs) operate as electronic switches, and not as linear gain devices as in other amplifiers. They operate by rapidly switching back and forth between the supply rails, using pulse-width modulation, pulse-density modulation, or related techniques to produce a pulse train output. A simple low-pass filter may be used to attenuate their high-frequency content to provide analog output current and voltage. Little energy is dissipated in the amplifying transistors because they are always either fully on or fully off, so efficiency can exceed 90%. History The first class-D amplifier was invented by British scientist Alec Reeves in the 1950s and was first called by that name in 1955. The first commercial product was a kit module called the X-10 released by Sinclair Radionics in 1964. However, it had an output power of only 2.5 watts. The Sinclair X-20 in 1966 produced 20 watts but suffered from the inconsistencies and limitations of the germanium-based bipolar junction transistors available at the time. As a result, these early class-D amplifiers were impractical and unsuccessful. Practical class-D amplifiers were enabled by the development of silicon-based MOSFET (metal–oxide–semiconductor field-effect transistor) technology. In 1978, Sony introduced the TA-N88, the first class-D unit to employ power MOSFETs and a switched-mode power supply. There were subsequently rapid developments in MOSFET technology between 1979 and 1985. The availability of low-cost, fast-switching MOSFETs led to class-D amplifiers becoming successful in the mid-1980s. The first class-D amplifier based integrated circuit was released by Tripath in 1996, and it saw widespread use. Basic operation Class-D amplifiers work by generating a train of rectangular pulses of fixed amplitude but varying width and separation. This modulation represents the amplitude variations of the analog audio input signal. In some implementations, the pulses are synchronized with an incoming digital audio signal removing the necessity to convert the signal to analog. The output of the modulator is then used to turn the output transistors on and off alternately. Since the transistors are either fully on or fully off, they dissipate very little power. A simple low-pass filter consisting of an inductor and a capacitor provides a path for the low frequencies of the audio signal, leaving the high-frequency pulses behind. The structure of a class-D power stage is comparable to that of a synchronously rectified buck converter, a type of non-isolated switched-mode power supply (SMPS). Whereas buck converters usually function as voltage regulators, delivering a constant DC voltage into a variable load, and can only source current, a class-D amplifier delivers a constantly changing voltage into a fixed load. A switching amplifier may use any type of power supply (e.g., a car battery or an internal SMPS), but the defining characteristic is that the amplification process itself operates by switching. The theoretical power efficiency of class-D amplifiers is 100%. That is to say, all of the power supplied to it is delivered to the load and none is turned to heat. This is because an ideal switch in its on state would encounter no resistance and conduct all the current with no voltage drop across it, hence no power would be dissipated as heat. 
And when it is off, it would have the full supply voltage across it but no leakage current flowing through it, and again no power would be dissipated. Real-world power MOSFETs are not ideal switches, but practical efficiencies well over 90% are common for class-D amplifiers. By contrast, linear class-AB amplifiers are always operated with both current flowing through and voltage standing across the power devices. An ideal class-B amplifier has a theoretical maximum efficiency of 78%. Class-A amplifiers (purely linear, with the devices always at least partially on) have a theoretical maximum efficiency of 50% and some designs have efficiencies below 20%. Signal modulation The 2-level waveform is derived using pulse-width modulation (PWM), pulse-density modulation (sometimes referred to as pulse frequency modulation), sliding mode control (more commonly called self-oscillating modulation), or discrete-time forms of modulation such as delta-sigma modulation. A simple means of creating the PWM signal is to use a high-speed comparator ("C" in the block diagram above) that compares a high-frequency triangular wave with the audio input. This generates a series of pulses whose duty cycle is directly proportional to the instantaneous value of the audio signal. The comparator then drives a MOS gate driver which in turn drives a pair of high-power switching transistors (usually MOSFETs). This produces an amplified replica of the comparator's PWM signal. The output filter removes the high-frequency switching components of the PWM signal and reconstructs audio information that the speaker can use. DSP-based amplifiers that generate a PWM signal directly from a digital audio signal (e.g. SPDIF) either use a counter to time the pulse length or implement a digital equivalent of the triangle-based modulator. In either case, the time resolution afforded by practical clock frequencies is only a few hundredths of a switching period, which is not enough to ensure low noise. In effect, the pulse length gets quantized, resulting in quantization distortion. In both cases, negative feedback is applied inside the digital domain, forming a noise shaper which results in lower noise in the audible frequency range. Design challenges Switching speed Two significant design challenges for MOSFET driver circuits in class-D amplifiers are keeping dead times and linear mode operation as short as possible. Dead time is the period during a switching transition when both output MOSFETs are driven into cut-off mode and both are off. Dead times need to be as short as possible to maintain an accurate low-distortion output signal, but dead times that are too short cause the MOSFET that is switching on to start conducting before the MOSFET that is switching off has stopped conducting, and the MOSFETs then effectively short the output power supply through themselves in a condition known as shoot-through. The controlling circuitry also needs to switch the MOSFETs as quickly as possible to minimize the amount of time a MOSFET is in linear mode—the state between cut-off mode and saturation mode where the MOSFET is neither fully on nor fully off and conducts current with significant resistance, creating significant heat. Failures that allow shoot-through or too much linear mode operation result in excessive losses and sometimes catastrophic failure of the MOSFETs.
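The triangle-comparator PWM scheme described under Signal modulation above can be illustrated numerically. The following Python sketch is a simplified, idealized model, not a description of any particular amplifier: the sample rate, carrier frequency, test-tone frequency, and rail values are arbitrary choices for the example. It compares a sine-wave "audio" signal against a triangular carrier to produce a two-level pulse train, then shows that the duty cycle of the pulses tracks the instantaneous input value, which is what the LC output filter ultimately averages into the reconstructed audio.

import numpy as np

fs = 1_000_000          # simulation sample rate in Hz (illustrative value)
f_carrier = 50_000      # triangular carrier frequency in Hz (illustrative value)
f_audio = 1_000         # test-tone frequency in Hz (illustrative value)
t = np.arange(0, 0.002, 1 / fs)

audio = 0.8 * np.sin(2 * np.pi * f_audio * t)      # input signal, kept within +/-1

# Triangular carrier sweeping between -1 and +1 at f_carrier
phase = (t * f_carrier) % 1.0
carrier = 2.0 * np.abs(2.0 * phase - 1.0) - 1.0

# Comparator: the output sits at the +1 "rail" when the audio exceeds the carrier, else at -1
pwm = np.where(audio > carrier, 1.0, -1.0)

# Averaging over one carrier period recovers the audio: the duty cycle of the
# pulse train is proportional to the instantaneous input value.
samples_per_period = fs // f_carrier
for k in (10, 30, 60, 90):
    seg = slice(k * samples_per_period, (k + 1) * samples_per_period)
    duty = np.mean(pwm[seg] > 0)
    expected = (np.mean(audio[seg]) + 1.0) / 2.0
    print(f"carrier period {k}: duty cycle = {duty:.3f}, (audio+1)/2 = {expected:.3f}")

The printed duty cycles agree with (audio + 1)/2 to within the resolution of the 20-sample carrier period, which is the sense in which the pulse train "represents the amplitude variations of the analog audio input signal".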
With fixed-frequency PWM modulation, as the (peak) output voltage approaches either of the supply rails, the pulse width can get so narrow as to challenge the ability of the driver circuit and the MOSFET to respond. These pulses can be as short as a few nanoseconds and can result in shoot through and heating due to linear mode operation. Other modulation techniques such as pulse-density modulation can achieve higher peak output voltages, as well as greater efficiency compared to fixed-frequency PWM. Power supply design Class-D amplifiers place an additional requirement on their power supply, namely that it be able to sink energy returning from the load. Reactive (capacitive or inductive) loads store energy during part of a cycle and release some of this energy back later. Linear amplifiers will dissipate this energy, class-D amplifiers return it to the power supply which should somehow be able to store it. In addition, half-bridge class-D amplifiers transfer energy from one supply rail (e.g. the positive rail) to the other (e.g. the negative) depending on the sign of the output current. This happens with both resistive and reactive loads. The supply should either have enough capacitive storage on both rails, or be able to transfer this energy to the other rail. Active device selection The active devices in a class-D amplifier need only act as controllable switches and need not have a particularly linear response to the control input. MOSFETs are usually used. Error control The actual output of the amplifier is not just dependent on the content of the modulated PWM signal. A number of sources may introduce errors. Any variation in power supply voltage directly amplitude-modulates the output voltage. Dead time errors make the output impedance non-linear. The output filter has a strongly load-dependent frequency response. An effective way to combat errors, regardless of their source, is negative feedback. A feedback loop including the output stage can be made using a simple integrator. To include the output filter, a PID controller is used, sometimes with additional integrating terms. The need to feed the actual output signal back into the modulator makes the direct generation of PWM from a SPDIF source unattractive. Mitigating the same issues in an amplifier without feedback requires addressing each separately at the source. Power supply modulation can be partially canceled by measuring the supply voltage to adjust signal gain as part of PWM conversion. Distortion can be reduced by switching faster. The output impedance cannot be controlled other than through feedback. Advantages The major advantage of a class-D amplifier is that it can be more efficient than a linear amplifier by dissipating less power as heat in the active devices. Given that large heat sinks are not required, class-D amplifiers are much lighter weight than class-A, -B, or -AB amplifiers, an important consideration with portable sound reinforcement system equipment and bass amplifiers. Uses Home theater in a box systems. These economical home cinema systems are almost universally equipped with class-D amplifiers. On account of modest performance requirements and straightforward design, direct conversion from digital audio to PWM without feedback is most common. Mobile phones. The internal loudspeaker is driven by up to 1 W. Class D is used to preserve battery lifetime. Hearing aids. 
The miniature loudspeaker (known as the receiver) is directly driven by a class-D amplifier to maximize battery life and can provide levels of 130 dB SPL or more. Powered speakers and active subwoofers High-end audio is generally conservative with regards to adopting new technologies but class-D amplifiers have made an appearance Sound reinforcement systems. For very high power amplification the power loss of class-AB amplifiers is unacceptable. Amplifiers with several kilowatts of output power are available as class D. Class-D power amplifiers are available that are rated at 3000 W total output, yet weigh only 3.6 kg (8 lb). Bass instrument amplification Radio frequency amplifiers may use class D or other switch-mode classes to provide high-efficiency RF power amplification in communications systems. See also Power amplifier classes References External links Haber, Eric Designing With class-D amplifier ICs some IC-oriented Class D design considerations Harden, Paul an article on basic digital RF amplifier design intended for ham radio operators but applicable to audio class-D amplifiers Electronic amplifiers Switching amplifiers D
Class-D amplifier
[ "Technology" ]
2,317
[ "Electronic amplifiers", "Amplifiers" ]
2,166,926
https://en.wikipedia.org/wiki/Fano%20resonance
In physics, a Fano resonance is a type of resonant scattering phenomenon that gives rise to an asymmetric line-shape. Interference between a background and a resonant scattering process produces the asymmetric line-shape. It is named after Italian-American physicist Ugo Fano, who in 1961 gave a theoretical explanation for the scattering line-shape of inelastic scattering of electrons from helium; however, Ettore Majorana was the first to discover this phenomenon. The Fano resonance is a weak-coupling effect, meaning that the decay rate is so high that no hybridization occurs. The coupling modifies the resonance properties, such as the spectral position and width, and the line-shape takes on the distinctive asymmetric Fano profile. Because it is a general wave phenomenon, examples can be found across many areas of physics and engineering. History The explanation of the Fano line-shape first appeared in the context of inelastic electron scattering by helium and autoionization. The incident electron doubly excites the atom to the 2s2p state, a sort of shape resonance. The doubly excited atom spontaneously decays by ejecting one of the excited electrons. Fano showed that interference between the amplitude to simply scatter the incident electron and the amplitude to scatter via autoionization creates an asymmetric scattering line-shape around the autoionization energy with a line-width very close to the inverse of the autoionization lifetime. Explanation The Fano resonance line-shape is due to interference between two scattering amplitudes, one due to scattering within a continuum of states (the background process) and the second due to an excitation of a discrete state (the resonant process). The energy of the resonant state must lie in the energy range of the continuum (background) states for the effect to occur. Near the resonant energy, the background scattering amplitude typically varies slowly with energy while the resonant scattering amplitude changes both in magnitude and phase quickly. It is this variation that creates the asymmetric profile. For energies far from the resonant energy the background scattering process dominates. Within the linewidth Γ of the resonant energy E₀, the phase of the resonant scattering amplitude changes by π. It is this rapid variation in phase that creates the asymmetric line-shape. Fano showed that the total scattering cross-section assumes the form σ(E) = (qΓ/2 + E − E₀)² / ((Γ/2)² + (E − E₀)²), where Γ describes the line width of the resonant energy E₀ and q, the Fano parameter, measures the ratio of resonant scattering to the direct (background) scattering amplitude. This is consistent with the interpretation within the Feshbach–Fano partitioning theory. In the case where the direct scattering amplitude vanishes, the q parameter becomes zero and the Fano formula becomes σ(E) = (E − E₀)² / ((Γ/2)² + (E − E₀)²) = 1 − (Γ/2)² / ((Γ/2)² + (E − E₀)²). Looking at the transmission, 1 − σ(E) = (Γ/2)² / ((Γ/2)² + (E − E₀)²), shows that this last expression boils down to the expected Breit–Wigner (Lorentzian) formula, the three-parameter Lorentzian function (note that it is not a density function and does not integrate to 1, as its amplitude is 1 and not 2/(πΓ)). Examples Examples of Fano resonances can be found in atomic physics, nuclear physics, condensed matter physics, electrical circuits, microwave engineering, nonlinear optics, nanophotonics, magnetic metamaterials, and in mechanical waves. Fano resonances can be observed with photoelectron spectroscopy and Raman spectroscopy.
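As a minimal numerical illustration of the line shape just given (the parameter values below are arbitrary choices, not values from the article), the following Python sketch evaluates the Fano cross-section for a few values of q and reports where its maximum and minimum fall, showing the change from a symmetric dip to an asymmetric profile to a near-Lorentzian peak.

import numpy as np

def fano_cross_section(E, E0, gamma, q):
    """Fano line shape sigma(E) = (q*gamma/2 + E - E0)^2 / ((gamma/2)^2 + (E - E0)^2)."""
    return (q * gamma / 2 + E - E0) ** 2 / ((gamma / 2) ** 2 + (E - E0) ** 2)

E = np.linspace(-5.0, 5.0, 2001)   # energy axis in units where E0 = 0 (illustrative)
E0, gamma = 0.0, 1.0

for q in (0.0, 1.0, 5.0):
    sigma = fano_cross_section(E, E0, gamma, q)
    i_max, i_min = np.argmax(sigma), np.argmin(sigma)
    # q = 0 gives a symmetric dip (window resonance); a small finite q gives the
    # characteristic asymmetric profile with a zero on one side of a maximum;
    # large q approaches a symmetric Breit-Wigner (Lorentzian) peak.
    print(f"q = {q}: max {sigma[i_max]:.2f} at E = {E[i_max]:+.2f}, "
          f"min {sigma[i_min]:.2f} at E = {E[i_min]:+.2f}")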
The phenomenon can also be observed at visible frequencies using simple glass microspheres, which may allow the magnetic field of light (which is typically small) to be enhanced by a few orders of magnitude. See also Resonance (particle physics) Core-excited shape resonance Antiresonance References Quantum mechanics Scattering
Fano resonance
[ "Physics", "Chemistry", "Materials_science" ]
741
[ "Theoretical physics", "Quantum mechanics", "Scattering", "Particle physics", "Condensed matter physics", "Nuclear physics" ]
2,166,952
https://en.wikipedia.org/wiki/Authentication%2C%20authorization%2C%20and%20accounting
Authentication, authorization, and accounting (AAA) is a framework used to control and track access within a computer network. Authentication is concerned with proving identity, authorization with granting permissions, accounting with maintaining a continuous and robust audit trail via logging. Common network protocols providing this functionality include TACACS+, RADIUS, and Diameter. Disambiguation In some related but distinct contexts, the term AAA has been used to refer to protocol-specific information. For example, Diameter uses the URI scheme AAA, which also stands for "Authentication, Authorization and Accounting", as well as the Diameter-based Protocol AAAS, which stands for "Authentication, Authorization and Accounting with Secure Transport". These protocols were defined by the Internet Engineering Task Force in RFC 6733 and are intended to provide an AAA framework for applications, such as network access or IP mobility in both local and roaming situations. However, the AAA paradigm is used more widely in the computer security industry. Usage of AAA servers in CDMA networks AAA servers in CDMA data networks are entities that provide Internet Protocol (IP) functionality to support the functions of authentication, authorization and accounting. The AAA server in the CDMA wireless data network architecture is similar to the HLR in the CDMA wireless voice network architecture. Types of AAA servers include the following: Access Network AAA (AN-AAA): Communicates with the RNC in the Access Network (AN) to enable authentication and authorization functions to be performed at the AN. The interface between AN and AN-AAA is known as the A12 interface. Broker AAA (B-AAA): Acts as an intermediary to proxy AAA traffic between roaming partner networks (i.e., between the H-AAA server in the home network and V-AAA server in the serving network). B-AAA servers are used in CRX networks to enable CRX providers to offer billing settlement functions. Home AAA (H-AAA): The AAA server in the roamer's home network. The H-AAA is similar to the HLR in voice. The H-AAA stores user profile information, responds to authentication requests, and collects accounting information. Visited AAA (V-AAA): The AAA server in the visited network from which a roamer is receiving service. The V-AAA in the serving network communicates with the H-AAA in a roamer's home network. Authentication requests and accounting information are forwarded by the V-AAA to the H-AAA, either directly or through a B-AAA. Current AAA servers communicate using the RADIUS protocol. As such, TIA specifications refer to AAA servers as RADIUS servers. While at one point it was expected that Diameter was to replace RADIUS, that has not happened. Diameter is largely used only in the mobile (3G/4G/5G) space, and RADIUS is used everywhere else. The behavior of AAA servers (radius servers) in the CDMA2000 wireless IP network is specified in TIA-835. See also Layer 8 Computer access control References Code division multiple access Computer security procedures
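As a rough illustration of how the three concerns described above separate in practice, the following Python sketch models an AAA decision path in plain code. It is a conceptual toy, not an implementation of TACACS+, RADIUS, or Diameter; the user store, permission table, and log format are invented for the example.

import hashlib
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
accounting_log = logging.getLogger("aaa.accounting")

# Toy data stores standing in for a real credential and policy backend (hypothetical).
USERS = {"alice": hashlib.sha256(b"correct horse").hexdigest()}
PERMISSIONS = {"alice": {"vpn", "wiki"}}

def authenticate(user: str, password: str) -> bool:
    """Authentication: prove the identity of the requester."""
    digest = hashlib.sha256(password.encode()).hexdigest()
    return USERS.get(user) == digest

def authorize(user: str, resource: str) -> bool:
    """Authorization: decide whether the authenticated identity may use the resource."""
    return resource in PERMISSIONS.get(user, set())

def account(user: str, resource: str, allowed: bool) -> None:
    """Accounting: keep an audit trail of every access decision."""
    accounting_log.info("%s user=%s resource=%s allowed=%s",
                        datetime.now(timezone.utc).isoformat(), user, resource, allowed)

def access(user: str, password: str, resource: str) -> bool:
    ok = authenticate(user, password) and authorize(user, resource)
    account(user, resource, ok)
    return ok

print(access("alice", "correct horse", "vpn"))    # True, and an accounting record is written
print(access("alice", "wrong password", "vpn"))   # False, and the denial is still logged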
Authentication, authorization, and accounting
[ "Engineering" ]
623
[ "Cybersecurity engineering", "Computer security procedures" ]
2,167,047
https://en.wikipedia.org/wiki/Shell%20Eco-marathon
Shell Eco-marathon is a world-wide energy efficiency competition sponsored by Shell. Participants build automotive vehicles to achieve the highest possible fuel efficiency. There are two vehicle classes within Shell Eco-marathon: Prototype and UrbanConcept. There are three energy categories within Shell Eco-marathon: battery-electric, hydrogen fuel cell, and internal combustion engine (gasoline, ethanol, or diesel). Prizes are awarded separately for each vehicle class and energy category. The pinnacle of the competition is the Shell Eco-marathon Drivers' World Championship, where the most energy-efficient UrbanConcept vehicles compete in a race with a limited amount of energy. Shell Eco-marathon competitions are held around the world, with nine events as of 2018. The 2018 competition season includes events held in Singapore, California, Paris, London, Istanbul, Johannesburg, Rio de Janeiro, India, and China. Participants are students from various academic backgrounds, including university teams such as past finalists University of British Columbia, Duke University, University of Toronto, and University of California, Los Angeles. In 2018, over 5,000 students from over 700 universities in 52 countries participated in Shell Eco-marathon. The digital reach of Shell Eco-marathon is several million. History In 1939, a group of Shell scientists based in a research laboratory in Wood River, Illinois, USA, had a friendly bet to see who could drive their own car furthest on one gallon of fuel. The winner managed a fuel economy of . A repeat of the challenge yielded dramatically improved results over the years: with a 1947 Studebaker in 1949, with a 1959 Fiat 600 in 1968, and with a 1959 Opel in 1973. During the 1980s, a Canadian version of the competition was called the 'Shell Fuelathon', with competitions in Oakville, Ontario, Canada. The current record is , set in 2011 by the Polytechnic University of Milan's prototype Apollo. The world record in Diesel efficiency was achieved by a team from the Universitat Politècnica de Valencia (Polytechnic University of Valencia, Spain) in 2010 with 1396.8 kilometres per litre. In contrast, the most efficient production Diesel passenger cars achieve , and some high-powered sports cars achieve as little as . The current European Shell Eco-marathon record for a combustion engine entry was set in 2004 by the team from Lycée La Joliverie (France) at 3,410 km on the equivalent of a single litre of fuel. Prototype vehicles using fuel cells are capable of greater energy efficiency. In 2005, a hydrogen-powered vehicle built by Swiss team ETH Zurich achieved a projected 3,836 km on the equivalent of a single litre of fuel. This is equivalent to the distance between Paris and Moscow. In 2013, the ethanol efficiency world record was set by Toulouse Ingenerie Multidisciplinarie with 3,100 km on a single litre of ethanol. This is equivalent to the distance between Toulouse and Istanbul. In 2009, the entry from the Technical School at La Joliverie College, a car named "Microjoule," achieved 3,771 km per litre, or 0.02652 L/100 km. Microjoule also won the 2023 competition, but with a significantly lower efficiency of 2507.15 km/l. Event The Eco-marathon has different classes of competition, according to the energy source used: fuel cells, solar cells, gasoline, Diesel fuel, and LPG. During the competition, cars must attain an average speed of at least 15 mph (23 km/h) over a distance of 10 miles (16 km).
The course is typically a motor racing track or closed-off city streets. The fuel is precisely measured out for each entrant at the start and at the end of the course, and the difference is used to calculate the vehicle's average fuel consumption. Solar-powered vehicles are not eligible for the grand prize for fuel efficiency. In 2017, more than 100 student teams from many countries across the Americas competed in the Shell Eco-marathon Americas before a crowd of over 20,000 throughout the competitions at the Cobo Center in Detroit, Michigan. Entrants The top-performing vehicles are purpose-designed for high efficiency. Some vehicles use a coast-and-burn technique whereby they briefly accelerate from 10 to 20 mph (from 16 to 32 km/h) and then switch the engine off and coast until the speed drops back down to 10 mph (16 km/h). This process is repeated, resulting in an average speed of 15 mph over the course. Typically the vehicles have: automobile drag coefficients (Cd) below 0.1; rolling resistance coefficients less than 0.0015; weight without driver under 45 kg; and engine specific fuel consumption under 200 cc/bhp/hr. The vehicles are highly specialized and optimized for the event and are not intended for everyday use. The designs represent what can be achieved with current technology and offer a glimpse into the future of car design based on minimal environmental impact in a world with reduced oil reserves. The work of the participants can be used to show ways manufacturers could redesign their products. References External links 2013 Archive of Official Shell Eco-Marathon website Archive of Shell Eco-Marathon 2010 Rules 2016 Archive of Shell Eco-Marathon Europe Energy technology competitions Green racing Science competitions Sustainable transport 1939 establishments in Illinois
Shell Eco-marathon
[ "Physics", "Technology" ]
1,067
[ "Science competitions", "Physical systems", "Transport", "Sustainable transport", "Science and technology awards" ]
2,167,172
https://en.wikipedia.org/wiki/HP%20FOCUS
The Hewlett-Packard FOCUS microprocessor, launched in 1982, was the first commercial, single chip, fully 32-bit microprocessor available on the market. At this time, all 32-bit competitors (DEC, IBM, Prime Computer, etc.) used multi-chip bit-slice-CPU designs, while single-chip designs like the Motorola 68000 were a mix of 32 and 16-bit. Introduced in the Hewlett-Packard HP 9000 Series 500 workstations and servers (originally launched as the HP 9020 and also, unofficially, called HP 9000 Series 600), the single-chip CPU was used alongside the I/O Processor (IOP), Memory Controller (MMU), Clock, and a number of 128-kilobit dynamic RAM devices as the basis of the HP 9000 system architecture. It was a 32-bit implementation of the 16-bit HP 3000 computer's stack architecture, with over 220 instructions (some 32 bits wide, some 16 bits wide), a segmented memory model, and no general purpose programmer-visible registers. The design of the FOCUS CPU was richly inspired by the custom silicon on sapphire (SOS) chip design HP used in their HP 3000 series machines. Because of the high density of HP's NMOS-III IC process, heat dissipation was a problem. Therefore, the chips were mounted on special printed circuit boards, with a ~1 mm copper sheet at its core, called "finstrates". The Focus CPU is microcoded with a 9,216 by 38-bit microcode control store. Internal data paths and registers are all 32-bit wide. The Focus CPU has a transistor count of 450,000 FETs. References See for HP Journal articles. FOCUS Stack machines 32-bit microprocessors
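The article notes that FOCUS implements a stack architecture with no general-purpose programmer-visible registers. As a generic illustration of how such a machine evaluates expressions through an operand stack rather than named registers (this is not a model of the actual FOCUS instruction set; the opcode names below are invented for the example), a minimal stack-machine interpreter in Python might look like this:

# Minimal stack-machine sketch: operands live on a stack, not in named registers.
# The instruction names (PUSH/ADD/MUL/DUP) are invented for illustration.

def run(program):
    stack = []
    for op, *args in program:
        if op == "PUSH":
            stack.append(args[0])
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "DUP":
            stack.append(stack[-1])
        else:
            raise ValueError(f"unknown opcode {op}")
    return stack

# Computes (2 + 3) * 4 entirely through stack operations.
print(run([("PUSH", 2), ("PUSH", 3), ("ADD",), ("PUSH", 4), ("MUL",)]))  # -> [20]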
HP FOCUS
[ "Technology" ]
382
[ "Computing stubs", "Computer hardware stubs" ]
2,167,199
https://en.wikipedia.org/wiki/Warning%20system
A warning system is any system of biological or technical nature deployed by an individual or group to inform of a future danger. Its purpose is to enable the deployer of the warning system to prepare for the danger and act accordingly to mitigate or avoid it. Warnings cannot be effective unless people react to them. People are more likely to ignore a system that regularly produces false warnings (the cry-wolf effect), but reducing the number of false warnings generally also increases the risk of not giving a warning when it is needed. Some warnings are non-specific: for instance, the probability of an earthquake of a certain magnitude in a certain area over the next decade. Such warnings cannot be used to guide short-term precautions such as evacuation. Opportunities to take long-term precautions, such as better building codes and disaster preparedness, may be ignored. Biological warning systems Aposematism (e.g. warning coloration) Climate canary Fear Miner's canary Pain Man-made warning systems Emergency population warning Civilian warning systems Alberta Emergency Alert Alberta Emergency Public Warning System (replaced by Alberta Emergency Alert) Alert Ready (Canada) Automatic Warning System Child abduction alert system Dam safety system Earthquake warning system Emergency Alert System (EAS) (United States) Famine Early Warning Systems Network Federal Civil Defense Authority Fire alarm system Gale warning Ground proximity warning system Indian Ocean Tsunami Warning System International Early Warning Programme J-Alert (Japan) Lane departure warning system National Severe Weather Warning Service N.E.A.R. (National Emergency Alarm Repeater) North Warning System Standard Emergency Warning Signal (Australia) Traffic Collision Avoidance System Train Protection & Warning System Tsunami warning system Military warning systems Historical beacon-based systems: Byzantine beacon system in Asia Minor during the 9th century Space-based missile early warning systems: Defense Support Program (United States, to be succeeded by the "Space-Based Infrared System") Space-Based Infrared System (SBIRS) (United States) Oko, also known as "SPRN" (Russia) Airborne early warning systems: Airborne Early Warning and Control ("AWACS" for NATO, many countries have developed their own AEW&C systems) Ground-based early warning radar systems: Ballistic Missile Early Warning System and PAVE PAWS (United States) Duga radar, also known as the "Russian Woodpecker" (Russia) Dnestr radar (1st generation Russian) Daryal radar (2nd generation Russian) Voronezh radar (3rd and current generation Russian) Chain Home (British, now defunct) Chain Home Low (British, now defunct) ROTOR (British, now defunct) Optical sensors: Bomb Alarm System Emergency broadcasting: CONELRAD (United States, succeeded by the Emergency Broadcast System) Emergency Broadcast System (EBS) (United States, succeeded by the Emergency Alert System) See also Alarm (disambiguation) Notes and references Emergency communication pl:System wczesnego ostrzegania
Warning system
[ "Technology", "Engineering" ]
590
[ "Warning systems", "Safety engineering", "Measuring instruments" ]
2,167,227
https://en.wikipedia.org/wiki/Feshbach%E2%80%93Fano%20partitioning
In quantum mechanics, and in particular in scattering theory, the Feshbach–Fano method, named after Herman Feshbach and Ugo Fano, separates (partitions) the resonant and the background components of the wave function and therefore of the associated quantities like cross sections or phase shifts. This approach allows us to define rigorously the concept of resonance in quantum mechanics. In general, the partitioning formalism is based on the definition of two complementary projectors P and Q such that P + Q = 1. The subspaces onto which P and Q project are sets of states obeying the continuum and the bound state boundary conditions respectively. P and Q are interpreted as the projectors on the background and the resonant subspaces respectively. The projectors P and Q are not defined within the Feshbach–Fano method. This is its major power as well as its major weakness. On the one hand, this makes the method very general and, on the other hand, it introduces some arbitrariness which is difficult to control. Some authors first define the P space as an approximation to the background scattering, but most authors first define the Q space as an approximation to the resonance. This step always relies on some physical intuition which is not easy to quantify. In practice, P or Q should be chosen such that the resulting background scattering phase or cross-section depends only slowly on the scattering energy in the neighbourhood of the resonances (this is the so-called flat continuum hypothesis). If one succeeds in translating the flat continuum hypothesis into a mathematical form, it is possible to generate a set of equations defining P and Q on a less arbitrary basis. The aim of the Feshbach–Fano method is to solve the Schrödinger equation governing a scattering process (defined by the Hamiltonian H) in two steps: First, by solving the scattering problem ruled by the background Hamiltonian PHP. It is often supposed that the solution of this problem is trivial, or at least that it fulfils some standard hypotheses which allow its full resolution to be skipped. Second, by solving the resonant scattering problem governed by the effective complex (energy-dependent) Hamiltonian H_eff(E) = QHQ + QHP (E + i0 − PHP)⁻¹ PHQ, whose dimension is equal to the number of interacting resonances and which depends parametrically on the scattering energy E. The resonance parameters, the position E_res and the width Γ, are obtained by solving the so-called implicit equation det[z − H_eff(z)] = 0 for z in the lower half of the complex plane. The solution z₀ = E_res − iΓ/2 is the resonance pole. If z₀ is close to the real axis it gives rise to a Breit–Wigner or a Fano profile in the corresponding cross section. Both resulting T matrices have to be added in order to obtain the T matrix corresponding to the full scattering problem: T = T_P + T_Q, where T_P describes the background scattering and T_Q the resonant scattering. References See also Resonance (particle physics) Resonances in scattering from potentials Feshbach resonance Fano resonance Scattering theory
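To make the two-step structure concrete, the following Python sketch sets up a deliberately simple toy model (all numbers are arbitrary illustrative choices, not taken from the article): one discrete state, playing the role of the Q space, is coupled with a uniform matrix element to a band of equally spaced discretized continuum states, playing the role of the P space. The script evaluates the energy-dependent effective Hamiltonian for that single discrete state and compares the resulting resonance width with the golden-rule estimate 2π|v|²ρ.

import numpy as np

# Q space: one discrete state at energy E_d; P space: N equally spaced continuum states.
E_d = 0.0
N = 2001
band = np.linspace(-10.0, 10.0, N)      # discretized continuum energies e_k
rho = 1.0 / (band[1] - band[0])         # density of states of the discretization
v = 0.02                                # coupling strength <d|H|k> (illustrative)
eta = 0.05                              # small imaginary part for outgoing boundary conditions,
                                        # chosen a few times the level spacing so the sum mimics a continuum

def h_eff(E):
    """Effective Q-space Hamiltonian for one discrete state: E_d + sum_k |v|^2 / (E + i*eta - e_k)."""
    return E_d + np.sum(v**2 / (E + 1j * eta - band))

# A common first-order approximation to the resonance pole: iterate E -> Re h_eff(E)
# on the real axis; the imaginary part at the solution then gives -Gamma/2.
E = E_d
for _ in range(50):
    E = h_eff(E).real
gamma_numeric = -2.0 * h_eff(E).imag
gamma_golden_rule = 2.0 * np.pi * v**2 * rho

print(f"resonance position ~ {E:.4f}")
print(f"width (numeric)        ~ {gamma_numeric:.4f}")
print(f"width (2*pi*v^2*rho)   ~ {gamma_golden_rule:.4f}")

The two width estimates agree closely, which is the expected behaviour for a flat continuum: the P-space background carries no structure of its own, and all of the resonant behaviour is captured by the small effective Hamiltonian.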
Feshbach–Fano partitioning
[ "Chemistry" ]
567
[ "Scattering", "Scattering theory" ]
2,167,401
https://en.wikipedia.org/wiki/Longest%20common%20substring
In computer science, a longest common substring of two or more strings is a longest string that is a substring of all of them. There may be more than one longest common substring. Applications include data deduplication and plagiarism detection. Examples The picture shows two strings where the problem has multiple solutions. Although the substring occurrences always overlap, it is impossible to obtain a longer common substring by "uniting" them. The strings "ABABC", "BABCA" and "ABCBA" have only one longest common substring, viz. "ABC" of length 3. Other common substrings are "A", "AB", "B", "BA", "BC" and "C".

ABABC
  |||
 BABCA
  |||
  ABCBA

Problem definition Given two strings, S of length r and T of length n, find a longest string which is a substring of both S and T. A generalization is the k-common substring problem. Given the set of strings S = {S1, ..., SK}, where |Si| = ni and Σni = N, find for each k, 2 ≤ k ≤ K, a longest string which occurs as a substring of at least k of the strings. Algorithms One can find the lengths and starting positions of the longest common substrings of S and T in Θ(r + n) time with the help of a generalized suffix tree. A faster algorithm can be achieved in the word RAM model of computation if the size of the input alphabet is in . In particular, this algorithm runs in time using space. Solving the problem by dynamic programming costs Θ(r·n). The solutions to the generalized problem take space and time with dynamic programming and take time with a generalized suffix tree. Suffix tree The longest common substrings of a set of strings can be found by building a generalized suffix tree for the strings, and then finding the deepest internal nodes which have leaf nodes from all the strings in the subtree below it. The figure on the right is the suffix tree for the strings "ABAB", "BABA" and "ABBA", padded with unique string terminators, to become "ABAB$0", "BABA$1" and "ABBA$2". The nodes representing "A", "B", "AB" and "BA" all have descendant leaves from all of the strings, numbered 0, 1 and 2. Building the suffix tree takes Θ(N) time (if the size of the alphabet is constant). If the tree is traversed from the bottom up with a bit vector telling which strings are seen below each node, the k-common substring problem can be solved in Θ(N·K) time. If the suffix tree is prepared for constant time lowest common ancestor retrieval, it can be solved in Θ(N) time. Dynamic programming The following pseudocode finds the set of longest common substrings between two strings with dynamic programming:

function LongestCommonSubstring(S[1..r], T[1..n])
    L := array(1..r, 1..n)
    z := 0
    ret := {}
    for i := 1..r
        for j := 1..n
            if S[i] = T[j]
                if i = 1 or j = 1
                    L[i, j] := 1
                else
                    L[i, j] := L[i − 1, j − 1] + 1
                if L[i, j] > z
                    z := L[i, j]
                    ret := {S[(i − z + 1)..i]}
                else if L[i, j] = z
                    ret := ret ∪ {S[(i − z + 1)..i]}
            else
                L[i, j] := 0
    return ret

This algorithm runs in Θ(r·n) time. The array L stores the length of the longest common suffix of the prefixes S[1..i] and T[1..j] which end at position i and j, respectively. The variable z is used to hold the length of the longest common substring found so far. The set ret is used to hold the set of strings which are of length z. The set ret can be saved efficiently by just storing the index i, which is the last character of the longest common substring (of size z), instead of S[(i − z + 1)..i]. Thus all the longest common substrings would be, for each index i in ret, S[(i − z + 1)..i].
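As a concrete, runnable counterpart to the pseudocode above, here is a direct Python translation using 0-based indexing; the function name and the example call are the only additions.

def longest_common_substrings(s: str, t: str) -> set[str]:
    """Return the set of longest common substrings of s and t (dynamic programming)."""
    rows, cols = len(s), len(t)
    # L[i][j] = length of the longest common suffix of s[:i+1] and t[:j+1]
    L = [[0] * cols for _ in range(rows)]
    z = 0                      # length of the longest common substring found so far
    ret: set[str] = set()
    for i in range(rows):
        for j in range(cols):
            if s[i] == t[j]:
                L[i][j] = 1 if i == 0 or j == 0 else L[i - 1][j - 1] + 1
                if L[i][j] > z:
                    z = L[i][j]
                    ret = {s[i - z + 1:i + 1]}
                elif L[i][j] == z:
                    ret.add(s[i - z + 1:i + 1])
            # else: L[i][j] stays 0
    return ret

print(longest_common_substrings("ABABC", "BABCA"))   # -> {'BABC'}

Called on the pair "ABABC" and "BABCA" alone it returns {'BABC'}; it is only when the third string "ABCBA" from the Examples section must also contain the substring that the answer shrinks to "ABC".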
The following tricks can be used to reduce the memory usage of an implementation: Keep only the last and current row of the DP table to save memory (O(min(r, n)) instead of O(r·n)) The last and current row can be stored on the same 1D array by traversing the inner loop backwards Store only non-zero values in the rows. This can be done using hash-tables instead of arrays. This is useful for large alphabets. See also Longest palindromic substring n-gram, all the possible substrings of length n that are contained in a string References External links Dictionary of Algorithms and Data Structures: longest common substring Perl/XS implementation of the dynamic programming algorithm Perl/XS implementation of the suffix tree algorithm Dynamic programming implementations in various languages on wikibooks working AS3 implementation of the dynamic programming algorithm Suffix Tree based C implementation of Longest common substring for two strings Problems on strings Dynamic programming Articles with example pseudocode
Longest common substring
[ "Mathematics" ]
1,134
[ "Problems on strings", "Mathematical problems", "Computational problems" ]
2,167,447
https://en.wikipedia.org/wiki/Pentode
A pentode is an electronic device having five electrodes. The term most commonly applies to a three-grid amplifying vacuum tube or thermionic valve that was invented by Gilles Holst and Bernhard D.H. Tellegen in 1926. The pentode (called a triple-grid amplifier in some literature) was developed from the screen-grid tube or shield-grid tube (a type of tetrode tube) by the addition of a grid between the screen grid and the plate. The screen-grid tube was limited in performance as an amplifier due to secondary emission of electrons from the plate. The additional grid is called the suppressor grid. The suppressor grid is usually operated at or near the potential of the cathode and prevents secondary emission electrons from the plate from reaching the screen grid. The addition of the suppressor grid permits much greater output signal amplitude to be obtained from the plate of the pentode in amplifier operation than from the plate of the screen-grid tube at the same plate supply voltage. Pentodes were widely manufactured and used in electronic equipment until the 1960s to 1970s, during which time transistors replaced tubes in new designs. During the first quarter of the 21st century, a few pentode tubes have been in production for high power radio frequency applications, musical instrument amplifiers (especially guitars), home audio and niche markets. Types of pentodes Ordinary pentodes are referred to as sharp-cutoff or high-slope pentodes and have uniform aperture size in the control grid. The uniform construction of the control grid results in the amplification factor (mu or μ) and transconductance changing very little with increasingly negative grid voltage, resulting in fairly abrupt cutoff of plate current. These pentodes are suitable for application in amplifier designs that operate over limited ranges of signal and bias on the control grid. Examples include: EF37A, EF86/6267, 1N5GT, 6AU6A, 6J7GT. Often, but not always, in the European valve naming scheme for pentodes an even number indicated a sharp-cutoff device while odd indicated remote-cutoff; the EF37 was an exception to this general trend, perhaps due to its history as an update to the EF36 ("The Mullard EF36, EF37 and EF37A" at the National Valve Museum). Remote-cutoff, variable-mu, super-control or variable slope pentodes handle much greater signal and bias voltages on the control grid than ordinary pentodes, without cutting off the anode current. The control grid of the variable-mu pentode is constructed so as to result in a given incremental change of control grid voltage having less effect on change of anode current as the control grid voltage increases negatively relative to the cathode. The control grid often has the form of a helix of varying pitch. As the control grid voltage becomes more negative, the amplification factor of the tube becomes smaller. Variable-mu pentodes reduce distortion and cross-modulation (intermodulation) and permit much larger amplifier dynamic range than ordinary pentodes. Variable-mu pentodes were first applied in radio frequency amplifier stages of radio receivers, typically with automatic volume control, and are applied in other applications requiring the ability to operate over large variations of signal and control voltages. The first commercially available variable-mu pentodes were the RCA 239 in 1932 and the Mullard VP4 in 1933. Power pentodes or power-amplifier pentodes. 
Power pentodes are designed to operate at higher currents, higher temperatures and higher voltages than ordinary pentodes. The cathode of the power pentode is designed to be capable of sufficient electron emission to give the required current through the tube to produce the desired power in the load impedance. The plate or anode of a power pentode is designed to be capable of dissipating more power than that of an ordinary pentode. The EL34, EL84, 6CL6, 6F6, 6G6, SY4307A and 6K6GT are some examples of pentodes designed for power amplification. Some power pentodes for specific television requirements were: video output pentodes, e.g. 15A6/PL83, PL802 frame output or vertical deflection pentodes, such as the PL84 and the pentode sections of the 18GV8/PCL85. line output or horizontal deflection pentodes, such as the PL36, 27GB5/PL500, PL505 etc. A "triode-pentode" is a single envelope containing both a triode and a pentode, such as an ECF80 or ECL86. Advantages over the tetrode The simple tetrode or screen-grid tube offered a larger amplification factor, more power and a higher frequency capability than the earlier triode. However, in the tetrode secondary electrons knocked out of the anode (plate) by the electrons from the cathode striking it (a process called secondary emission) can flow to the screen grid due to its relatively high potential. This current of electrons leaving the anode reduces the net anode current Ia. As the anode voltage Va is increased, the electrons from the cathode hit the anode with more energy, knocking out more secondary electrons, increasing this current of electrons leaving the anode. The result is that in the tetrode the anode current Ia is found to decrease with increasing anode voltage Va, over part of the characteristic curve. This property (ΔVa/ΔIa < 0) is called negative resistance. It can cause the tetrode to become unstable, leading to parasitic oscillations in the output, called dynatron oscillations in some circumstances. The pentode, as introduced by Tellegen, has an additional electrode, or third grid, called the suppressor grid, located between the screen grid and the anode, which solves the problem of secondary emission. The suppressor grid is given a low potential—it is usually either grounded or connected to the cathode. Secondary emission electrons from the anode are repelled by the negative potential on the suppressor grid, so they can't reach the screen grid but return to the anode. The primary electrons from the cathode have a higher kinetic energy, so they can still pass through the suppressor grid and reach the anode. Pentodes, therefore, can have higher current outputs and a wider output voltage swing; the anode/plate can even be at a lower voltage than the screen grid yet still amplify well. Comparisons with the triode Pentodes (and tetrodes) tend to have a much lower feedback capacitance, due to the screening effect of the second grid. Pentodes tend to have higher noise (partition noise) because of the random splitting of the cathode current between the screen grid and the anode, Triodes have a lower internal anode resistance, and hence higher damping factor when used in audio output circuits, compared with pentodes, when negative feedback is absent. That also reduces the potential voltage amplification obtainable from a triode compared with a pentode of the same transconductance, and usually means a more efficient output stage can be made using pentodes, with a lower power drive signal. 
Pentodes are almost unaffected by changes in supply voltage, and can thus operate with more poorly stabilized supplies than triodes. Pentodes and triodes (and tetrodes) do have essentially similar relationships between grid (one) input voltage and anode output current when the anode voltage is held constant, i.e. close to a square-law relationship. Usage Pentode tubes were first used in consumer-type radio receivers. A well-known pentode type, the EF50, was designed before the start of World War II, and was extensively used in radar sets and other military electronic equipment. The pentode contributed to the electronic preponderance of the Allies. The Colossus computer and the Manchester Baby used large numbers of EF36 pentode tubes. Later on, the 7AK7 tube was expressly developed for use in computer equipment. After World War II, pentodes were widely used in TV receivers, particularly the successor to the EF50, the EF80. Vacuum tubes were replaced by transistors during the 1960s. However, they continue to be used in certain applications, including high-power radio transmitters and (because of their well-known valve sound) in high-end and professional audio applications, microphone preamplifiers and electric guitar amplifiers. Large stockpiles in countries of the former Soviet Union have provided a continuing supply of such devices, some designed for other purposes but adapted to audio use, such as the GU-50 transmitter tube. Triode-strapped pentode circuits A pentode can have its screen grid (grid 2) connected to the anode (plate), in which case it reverts to an ordinary triode with commensurate characteristics (lower anode resistance, lower mu, lower noise, more drive voltage required). The device is then said to be "triode-strapped" or "triode-connected". This is sometimes provided as an option in audiophile pentode amplifier circuits, to give the sought-after "sonic qualities" of a triode power amplifier. A resistor may be included in series with the screen grid to avoid exceeding the screen grid's power or voltage rating, and to prevent local oscillation. Triode-connection is a useful option for audiophiles who wish to avoid the expense of 'true' power triodes. See also Beam tetrode Pentode transistor Valve audio amplifier technical specification List of vacuum tubes References Dutch inventions Vacuum tubes
Pentode
[ "Physics" ]
2,083
[ "Vacuum tubes", "Vacuum", "Matter" ]
2,167,518
https://en.wikipedia.org/wiki/Kamptulicon
Kamptulicon, whose name was derived from the Greek kamptos ("flexible") + oulos ("thick"), was a floor covering made from powdered cork and natural rubber. Patented by Elijah Galloway in 1843, kamptulicon was first launched in public at the 1862 International Exhibition in London, where it caused a sensation. Its promoters compared it to thick, soft leather, and lauded its ease of cleaning, water resistance, warmth, and sound-deadening qualities. Critics, however, pointed out that its grey-brown colour was unattractive. Attempts were made to brighten it up by stencilling patterns on it with oil paint, but these suffered from a lack of durability. Kamptulicon was manufactured by sprinkling powdered cork on to thin bands of rubber, which was then rolled and rerolled until thoroughly mixed. It was then coated on one or both sides with linseed oil varnish or oil paint. Powdered sulphur was also sometimes mixed in, and the material then heated to produce a form of vulcanized kamptulicon. As well as a floor covering, kamptulicon was also used as cushions in stamping presses, and as polishing wheels for metals. Within a few years, faced by stiff competition from the manufacturers of oilcloth coupled with huge increases in the price of natural rubber, kamptulicon production ceased. See also Linoleum Notes Composite materials Floors
Kamptulicon
[ "Physics", "Engineering" ]
302
[ "Structural engineering", "Floors", "Composite materials", "Materials", "Matter" ]
2,167,598
https://en.wikipedia.org/wiki/Lexicon%20JamMan
The JamMan is an audio looping device manufactured by Lexicon in the mid-1990s. The idea for the JamMan began with modifications Gary Hall had devised for the Lexicon PCM-42 that allowed him to play into a long, looping delay whose clock could be synchronized to an external source. (Hall, who worked for Lexicon in two different periods, was the primary architect of the PCM41 and PCM42, as well as the non-reverberation effects that first appeared in the 224X and became better known in the PCM70.) Bob Sellon extended the concept considerably, starting with elaborate PCM42 modifications and eventually working with several others at Lexicon to arrive at the JamMan. The product allowed musicians to record musical phrases at the touch of a button; the phrases were then played back, looping indefinitely. The musician would typically use the looping audio as a backing track, providing a virtual backing band. The device also allowed MIDI drum machines and sequencers to be synchronized to it, providing additional accompaniment. When the musician presses a button on the floor pedal, the device begins recording a rhythm (8 seconds of memory comes standard; a 32-second memory chip is a common upgrade). When the musician has finished playing the part to be looped, pressing the tap button again makes the machine do two things immediately: it replays the part from the beginning, looping it indefinitely, and it sends a MIDI clock signal to a drum machine, which comes in right on time, synchronized to the rhythm part. The JamMan is a 1U rack-mounted unit that is controlled using one or two footswitches or via MIDI. Users Daft Punk Peter Gabriel Junior Vasquez Joseph Arthur Chet Atkins David Torn Alessandro Batazzi Radio Chongqing Blixa Bargeld Boyd Rice Nick Robinson Keller Williams Phil Keaggy Tristan 'Z' Zand Jacob Moon Kelly Burnette Matthieu Chedid Michael Brecker Keith Marquis Nick McCabe See also DigiTech JamMan References External links Looper's Delight Jam Man page with info and links Effects units Sound recording technology Sound production technology Sampling (music)
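The length of the recorded loop determines the tempo that the MIDI clock must carry so that an external drum machine stays locked to it. The following Python sketch is a back-of-the-envelope illustration of that relationship, not a description of the JamMan's actual firmware; the loop length and bar count are invented example values, and MIDI clock's 24 pulses per quarter note is a property of the MIDI standard.

# Sketch: the tempo implied by a recorded loop, and the MIDI clock rate needed to
# keep a drum machine locked to it. MIDI clock runs at 24 pulses per quarter note.
# The loop length and bar count are made-up example values.

loop_seconds = 3.2          # measured length of the recorded phrase (assumed)
bars = 2                    # how many 4/4 bars the player intended (assumed)
beats = bars * 4

bpm = beats * 60.0 / loop_seconds
clock_interval = 60.0 / (bpm * 24)      # seconds between MIDI clock pulses
pulses_per_loop = beats * 24

print(f"implied tempo: {bpm:.1f} BPM")
print(f"MIDI clock pulse every {clock_interval * 1000:.2f} ms "
      f"({pulses_per_loop} pulses per loop)")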
Lexicon JamMan
[ "Technology" ]
437
[ "Recording devices", "Sound recording technology" ]
2,167,630
https://en.wikipedia.org/wiki/James%20Till
James Edgar Till (born August 25, 1931) is a University of Toronto biophysicist, best known for demonstrating – in a partnership with Ernest McCulloch – the existence of stem cells. Early work Till was born in Lloydminster, which is located on the border between Saskatchewan and Alberta. The family farm was located north of Lloydminster, in Alberta; the eastern margin of the farm was the Alberta–Saskatchewan boundary. He attended the University of Saskatchewan with scholarships awarded by the Standard Oil Company and the National Research Council, graduating with a B.Sc. in 1952 and a M.Sc. in physics in 1954. Some of his early work was conducted with Harold E. Johns, a pioneer in cobalt-60 radiotherapy. Till proceeded to Yale University, where he received a Ph.D. in biophysics in 1957. He then became a post-doctoral fellow at the University of Toronto. Stem cells Harold E. Johns recruited Till to the Ontario Cancer Institute at Princess Margaret Hospital shortly after he completed his work at Yale. Subsequently, Till chose to work with Ernest McCulloch at the University of Toronto. Thus, the older physician's insight was combined with the younger physicist's rigorous and thorough nature. In the early 1960s, McCulloch and Till started a series of experiments that involved injecting bone marrow cells into irradiated mice. They observed that small raised lumps grew on the spleens of the mice, in proportion to the number of bone marrow cells injected. Till and McCulloch dubbed the lumps 'spleen colonies', and speculated that each lump arose from a single marrow cell: perhaps a stem cell. In later work, Till & McCulloch were joined by graduate student Andy Becker. They cemented their stem cell theory and in 1963 published their results in Nature. In the same year, in collaboration with Lou Siminovitch, a trailblazer for molecular biology in Canada, they obtained evidence that these same marrow cells were capable of self-renewal, a crucial aspect of the functional definition of stem cells that they had formulated. In 1969, Till became a Fellow of the Royal Society of Canada. Later career In the 1980s Till's focus shifted, moving gradually into evaluation of cancer therapies, quality of life issues, and Internet research, including Internet research ethics and the ethics of List mining. Till holds the distinguished title of University Professor Emeritus at the University of Toronto. Recently, Till has been a vocal proponent of open access to scientific publications. Until 2019, Till was an editorial member of the open access journal Journal of Medical Internet Research. Till was a founding member of the Board of Directors of the Canadian Stem Cell Foundation (no longer active). Honours 1969, he and Ernest A. McCulloch were awarded the Canada Gairdner International Award 1993, awarded Robert L. Noble Prize by the National Cancer Institute of Canada, now the research arm of the Canadian Cancer Society 1994, made an Officer of the Order of Canada 2000, made a Fellow of the Royal Society of London 2004, inducted into the Canadian Medical Hall of Fame 2005, he and Ernest A. McCulloch were awarded the Albert Lasker Award for Basic Medical Research 2006, made a member of Order of Ontario 2018, awarded Edogawa-NICHE Prize Selected publications External links Canadian Medical Hall of Fame entry James Till CV, Community of Science Joint publications by Till and McCulloch, 1961-1969; full text courtesy University of Toronto Follow Jim Till on twitter James E. 
Till archival papers held at the University of Toronto Archives and Records Management Services U of Toronto researcher James Till receives International Honour Inaugural Edogawa NICHE Prize awarded to Prof James Till 1931 births Living people Canadian cancer researchers Canadian fellows of the Royal Society Fellows of the Royal Society of Canada Members of the Order of Ontario Officers of the Order of Canada Stem cell researchers University of Saskatchewan alumni People from Lloydminster Recipients of the Albert Lasker Award for Basic Medical Research Yale University alumni Canadian biophysicists 20th-century Canadian scientists 21st-century Canadian scientists Scientists from Saskatchewan
James Till
[ "Biology" ]
815
[ "Stem cell researchers", "Stem cell research" ]
2,168,393
https://en.wikipedia.org/wiki/Lanthanide%20contraction
The lanthanide contraction is the greater-than-expected decrease in atomic radii and ionic radii of the elements in the lanthanide series, from left to right. It is caused by the poor shielding effect of nuclear charge by the 4f electrons along with the expected periodic trend of increasing electronegativity and nuclear charge on moving from left to right. About 10% of the lanthanide contraction has been attributed to relativistic effects. A decrease in atomic radii can be observed across the 4f elements from atomic number 57, lanthanum, to 70, ytterbium. This results in smaller than otherwise expected atomic radii and ionic radii for the subsequent d-block elements starting with 71, lutetium. This effect causes the radii of the period-5 and period-6 transition metals of the same group to become unusually similar, as the expected increase in radius on going down a group is nearly cancelled out by the f-block insertion, and it has many other far-ranging consequences in the post-lanthanide elements. The decrease in ionic radii (Ln3+) is much more uniform compared to the decrease in atomic radii. The term was coined by the Norwegian geochemist Victor Goldschmidt in his series "Geochemische Verteilungsgesetze der Elemente" (Geochemical distribution laws of the elements). Cause The effect results from poor shielding of nuclear charge (nuclear attractive force on electrons) by 4f electrons; the 6s electrons are drawn towards the nucleus, thus resulting in a smaller atomic radius. In single-electron atoms, the average separation of an electron from the nucleus is determined by the subshell it belongs to, and decreases with increasing charge on the nucleus; this, in turn, leads to a decrease in atomic radius. In multi-electron atoms, the decrease in radius brought about by an increase in nuclear charge is partially offset by increasing electrostatic repulsion among electrons. In particular, a "shielding effect" operates: i.e., as electrons are added in outer shells, electrons already present shield the outer electrons from nuclear charge, making them experience a lower effective charge on the nucleus. The shielding effect exerted by the inner electrons decreases in the order s > p > d > f. Usually, as a particular subshell is filled in a period, the atomic radius decreases. This effect is particularly pronounced in the case of lanthanides, as the 4f subshell which is filled across these elements is not very effective at shielding the outer shell (n=5 and n=6) electrons. Thus the shielding effect is less able to counter the decrease in radius caused by increasing nuclear charge. This leads to "lanthanide contraction". The ionic radius drops from 103 pm for lanthanum(III) to 86.1 pm for lutetium(III). About 10% of the lanthanide contraction has been attributed to relativistic effects. The lanthanide contraction was experimentally observed in aqueous solutions of lanthanides, including radioactive promethium, through X-ray absorption spectroscopy measurements. Effects The results of the increased attraction of the outer shell electrons across the lanthanide period may be divided into effects on the lanthanide series itself, including the decrease in ionic radii, and influences on the following or post-lanthanide elements. Properties of the lanthanides The ionic radii of the lanthanides decrease from 103 pm (La3+) to 86 pm (Lu3+) across the lanthanide series as electrons are added to the 4f shell.
This first f shell is inside the full 5s and 5p shells (as well as the 6s shell in the neutral atom); the 4f shell is well-localized near the atomic nucleus and has little effect on chemical bonding. The decrease in atomic and ionic radii does affect their chemistry, however. Without the lanthanide contraction, a chemical separation of lanthanides would be extremely difficult. However, this contraction makes the chemical separation of period 5 and period 6 transition metals of the same group rather difficult. Even when the mass of an atomic nucleus is the same, a decrease in the atomic volume produces a corresponding increase in the density, as illustrated by alpha crystals of cerium (at 77 Kelvin) and gamma crystals of cerium (near room temperature), where the atomic volume of the latter is 120.3% of the former and the density of the former is 120.5% of the latter (i.e., 20.696 vs 17.2 cm3/mol and 8.16 vs 6.770 g/cm3, respectively). As expected, when more mass (protons & neutrons) is packed into a space that is subject to "contraction", the density increases consistently with atomic number for the lanthanides (excluding the atypical 2nd, 7th, and 14th), culminating in the value for the last lanthanide (Lu) being 160% of that for the first lanthanide (La). Melting points (in Kelvin) also increase consistently across these 12 lanthanides, culminating in the value for the last being 161% of the first. This density-melting point association does not depend upon just a comparison between these two lanthanides, because the correlation coefficient (Pearson product-moment) between density and melting point is 0.982 for these 12 lanthanides and 0.946 for all 15 lanthanides. There is a general trend of increasing Vickers hardness, Brinell hardness, density and melting point from lanthanum to lutetium (with europium and ytterbium being the most notable exceptions; in the metallic state, they are divalent rather than trivalent). Cerium, along with europium and ytterbium, is atypical when its properties are compared with those of the other 12 lanthanides, as evidenced by the clearly lower values (than either adjacent element) for melting points (lower by 10–43%), Vickers hardness (lower by 32–82%), and densities (lower by 26–33% for Eu and Yb; the density of Ce actually increases by 10% relative to lanthanum). The lower densities for europium and ytterbium (than their adjacent lanthanides) are associated with larger atomic volumes, at 148% and 128% of the average volume for the typical 12 lanthanides (i.e., 28.979, 25.067, and 19.629 cm3/mol, respectively). Because the atomic volume of Yb is 21% more than that of Ce, it is understandable that the density for Ce (the 2nd lanthanide) is 98% of that of ytterbium (the 14th lanthanide) when there is a 24% increase in atomic weight for the latter, and the melting point for Ce (1068 K) is nearly the same as the 1097 K for ytterbium and the 1099 K for europium. These three elements are the only lanthanides with melting points below the lowest for the other twelve, which is 1193 K for lanthanum. Europium has a half-filled 4f subshell, which may account for its atypical values when compared with the data for the typical 12 lanthanides. Lutetium is the hardest and densest lanthanide and has the highest melting point, 1925 K (a number that happens to coincide with the year in which Goldschmidt published the term "Die Lanthaniden-Kontraktion"). Unlike the m. p. data for the lanthanides (where the values increase consistently when the 2nd, 7th & 14th are excluded), the b. p. 
temperatures show a repeated pattern at 162% and 165% for the 8th lanthanide relative to the 6th and the 15th relative to the 13th (which ignores the atypical 7th and 14th). The 8th and 15th are among the four lanthanides with one electron in the 5d shell (the others being the 1st and 2nd), and the b. p. values for these four lie within ±2.6% of 3642 K. See the post-lanthanides section for more comments on the 5d-shell electrons. There is also a repeated b. p. pattern at 66% and 71% for the 6th and 13th lanthanides (relative to the preceding elements) that differ by one electron in the 4f shell, i.e., 5 to 6 and 12 to 13. Magnetism of the lanthanides It has been shown that lanthanide contraction plays a crucial role in determining the magnetic phase diagram of the heavy rare-earth elements, i.e. those from gadolinium onwards. Influence on the post-lanthanides The elements following the lanthanides in the periodic table are influenced by the lanthanide contraction. When the first three post-lanthanide elements (Hf, Ta, and W) are combined with the 12 lanthanides, the Pearson correlation coefficient increases from 0.982 to 0.997. On average for the 12 lanthanides, the ratio of melting point (in kelvins) to density (in g/cm^3) is about 192, while the three elements following the lanthanides have similar ratios of 188, 197, and 192, before the densities continue to increase but the melting points decrease for the next 2 elements, followed by both properties decreasing (at different rates) for the next 8 elements. Hafnium is unusual in that not only do its density and m. p. temperature change proportionally (relative to lutetium, the last lanthanide) at 135% and 130%, but so does its b. p. temperature at 133%. The elements with 2, 3, & 4 electrons in the 5d shell (post-lanthanides Hf, Ta, W) have increasing b. p. values such that the b. p. value for W (wolfram, aka tungsten) is 169% of that for the element with one 5d electron (Lu). The high melting point and two other properties of tungsten originate from strong covalent bonds formed between tungsten atoms by the 5d electrons. The elements with 5 to 10 electrons in the 5d shell (Re to Hg) have progressively lower b. p. values, such that the element with ten 5d electrons (Hg) has a b. p. value at 52% of tungsten's (with four 5d electrons). The radii of the period-6 transition metals are smaller than would be expected if there were no lanthanides, and are in fact very similar to the radii of the period-5 transition metals, since the effect of the additional electron shell is almost entirely offset by the lanthanide contraction. For example, the atomic radius of the metal zirconium, Zr (a period-5 transition element), is 155 pm (empirical value) and that of hafnium, Hf (the corresponding period-6 element), is 159 pm. The ionic radius of Zr4+ is 84 pm and that of Hf4+ is 83 pm. The radii are very similar even though the number of electrons increases from 40 to 72 and the atomic mass increases from 91.22 to 178.49 g/mol. The increase in mass and the unchanged radii lead to a steep increase in density from 6.51 to 13.35 g/cm3. Zirconium and hafnium, therefore, have very similar chemical behavior, having closely similar radii and electron configurations. Radius-dependent properties such as lattice energies, solvation energies, and stability constants of complexes are also similar. Because of this similarity, hafnium is found only in association with zirconium, which is much more abundant. 
This also meant that hafnium was discovered as a separate element in 1923, 134 years after zirconium was discovered in 1789. Titanium, on the other hand, is in the same group, but differs enough from those two metals that it is seldom found with them. See also D-block contraction (or scandide contraction) Lanthanide References External links Reference Page, See Figure 2 for details Complex Definition Chemical bonding Lanthanides Atomic radius
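The Pearson product-moment coefficient cited repeatedly above can be reproduced approximately from handbook density and melting-point values. The sketch below does so for the 12 "typical" lanthanides (excluding Ce, Eu, and Yb); the tabulated numbers are rounded literature values chosen purely for illustration, so the computed coefficient will only approximate the 0.982 figure quoted in the text.

```python
from math import sqrt

# Approximate, rounded handbook values (element, density in g/cm3, melting point in K);
# Ce, Eu and Yb are excluded as the atypical lanthanides, following the text above.
lanthanides = [
    ("La", 6.16, 1193), ("Pr", 6.77, 1208), ("Nd", 7.01, 1297), ("Pm", 7.26, 1315),
    ("Sm", 7.52, 1345), ("Gd", 7.90, 1585), ("Tb", 8.23, 1629), ("Dy", 8.55, 1680),
    ("Ho", 8.80, 1734), ("Er", 9.07, 1802), ("Tm", 9.32, 1818), ("Lu", 9.84, 1925),
]

def pearson(xs, ys):
    """Pearson product-moment correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(var_x * var_y)

densities = [d for _, d, _ in lanthanides]
melting_points = [mp for _, _, mp in lanthanides]
print(f"r(density, melting point) ≈ {pearson(densities, melting_points):.3f}")
```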
Lanthanide contraction
[ "Physics", "Chemistry", "Materials_science" ]
2,545
[ "Atomic radius", "Condensed matter physics", "nan", "Chemical bonding", "Atoms", "Matter" ]
2,168,513
https://en.wikipedia.org/wiki/Vital%20rates
Vital rates refer to how fast vital statistics change in a population (usually measured per 1,000 individuals). There are two categories of vital rates: crude rates and refined rates. Crude rates measure vital statistics in the general population (the overall number of births and deaths per 1,000). Refined rates measure the change in vital statistics within a specific demographic group (defined by age, sex, race, etc.). Marriage rates In the US, the national marriage rate has fallen by almost 50% since 1972, to about six per 1,000 people. According to the Iran Index and the National Organization for Civil Registration of Iran, the Iranian divorce rate is at its highest recorded level since 1979, and divorce quotas were introduced in an attempt to curb it. References Ecology
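As a minimal illustration of the crude-versus-refined distinction above, the sketch below computes both kinds of rate per 1,000; every population and event count is invented for the example.

```python
# Hypothetical counts for one year; all figures are made up for illustration.
population_total = 50_000
births_total = 600

# Crude birth rate: events per 1,000 individuals in the whole population.
crude_birth_rate = births_total / population_total * 1_000
print(f"crude birth rate: {crude_birth_rate:.1f} per 1,000")        # 12.0

# Refined (demographic-specific) rate: restrict both numerator and
# denominator to one group, e.g. women aged 20-29.
women_20_29 = 4_000
births_to_women_20_29 = 320
refined_birth_rate = births_to_women_20_29 / women_20_29 * 1_000
print(f"birth rate, women 20-29: {refined_birth_rate:.1f} per 1,000")  # 80.0
```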
Vital rates
[ "Biology" ]
147
[ "Ecology" ]
2,168,700
https://en.wikipedia.org/wiki/F-algebra
In mathematics, specifically in category theory, F-algebras generalize the notion of algebraic structure. Rewriting the algebraic laws in terms of morphisms eliminates all references to quantified elements from the axioms, and these algebraic laws may then be glued together in terms of a single functor F, the signature. F-algebras can also be used to represent data structures used in programming, such as lists and trees. The main related concepts are initial F-algebras, which may serve to encapsulate the induction principle, and the dual construction, F-coalgebras. Definition If C is a category, and F : C → C is an endofunctor of C, then an F-algebra is a tuple (A, α), where A is an object of C and α is a C-morphism F(A) → A. The object A is called the carrier of the algebra. When it is permissible from context, algebras are often referred to by their carrier only instead of the tuple. A homomorphism from an F-algebra (A, α) to an F-algebra (B, β) is a C-morphism f : A → B such that f∘α = β∘F(f), a condition usually drawn as a commutative diagram. Equipped with these morphisms, F-algebras constitute a category. The dual construction is that of F-coalgebras, which are objects A together with a morphism α : A → F(A). Examples Groups Classically, a group is a set G with a group law m : G×G → G, with m(x,y) = x∗y, satisfying three axioms: the existence of an identity element, the existence of an inverse for each element of the group, and associativity. To put this in a categorical framework, first define the identity and inverse as functions (morphisms of the set G) by e : 1 → G with e(∗) equal to the identity element of G, and inv : G → G with inv(x) = x−1. Here 1 denotes the set with one element {∗}, which allows one to identify elements x of G with morphisms x : 1 → G. It is then possible to write the axioms of a group in terms of functions (note how the existential quantifier is absent): m(m(x,y),z) = m(x,m(y,z)), m(e(∗),x) = m(x,e(∗)) = x, and m(inv(x),x) = m(x,inv(x)) = e(∗). Then this can be expressed with commutative diagrams. Now use the coproduct (the disjoint union of sets) to glue the three morphisms into one map α = e + inv + m : 1 + G + G×G → G. Thus a group is an F-algebra where F is the functor F(G) = 1 + G + G×G. However, the reverse is not necessarily true: some F-algebras for the functor F(G) = 1 + G + G×G are not groups. The above construction is used to define group objects over an arbitrary category with finite products and a terminal object 1. When the category admits finite coproducts, the group objects are F-algebras. For example, finite groups are F-algebras in the category of finite sets and Lie groups are F-algebras in the category of smooth manifolds with smooth maps. Algebraic structures Going one step ahead of universal algebra, most algebraic structures are F-algebras. For example, abelian groups are F-algebras for the same functor F(G) = 1 + G + G×G as for groups, with an additional axiom for commutativity: m∘t = m, where t(x,y) = (y,x) is the transpose on G×G. Monoids are F-algebras of signature F(M) = 1 + M×M. In the same vein, semigroups are F-algebras of signature F(S) = S×S. Rings, domains and fields are also F-algebras with a signature involving two laws +,•: R×R → R, an additive identity 0: 1 → R, a multiplicative identity 1: 1 → R, and an additive inverse for each element -: R → R. As all these functions share the same codomain R they can be glued into a single signature function 1 + 1 + R + R×R + R×R → R, with axioms to express associativity, distributivity, and so on. This makes rings F-algebras on the category of sets with signature 1 + 1 + R + R×R + R×R. Alternatively, we can look at the functor F(R) = 1 + R×R in the category of abelian groups. In that context, the multiplication is a homomorphism, meaning m(x + y, z) = m(x,z) + m(y,z) and m(x,y + z) = m(x,y) + m(x,z), which are precisely the distributivity conditions. 
Therefore, a ring is an F-algebra of signature 1 + R×R over the category of abelian groups which satisfies two axioms (associativity and identity for the multiplication). When we come to vector spaces and modules, the signature functor includes a scalar multiplication k×E → E, and the signature F(E) = 1 + E + k×E is parametrized by k over the category of fields, or rings. Algebras over a field can be viewed as F-algebras of signature 1 + 1 + A + A×A + A×A + k×A over the category of sets, of signature 1 + A×A over the category of modules (a module with an internal multiplication), and of signature k×A over the category of rings (a ring with a scalar multiplication), when they are associative and unitary. Lattice Not all mathematical structures are F-algebras. For example, a poset P may be defined in categorical terms with a morphism s : P×P → Ω, on a subobject classifier (Ω = {0,1} in the category of sets and s(x,y) = 1 precisely when x≤y). The axioms restricting the morphism s to define a poset can be rewritten in terms of morphisms. However, as the codomain of s is Ω and not P, it is not an F-algebra. However, lattices, which are partial orders in which every two elements have a supremum and an infimum, and in particular total orders, are F-algebras. This is because they can equivalently be defined in terms of the algebraic operations x∨y = sup(x,y) and x∧y = inf(x,y), subject to certain axioms (commutativity, associativity, absorption and idempotency). Thus they are F-algebras of signature P×P + P×P. It is often said that lattice theory draws on both order theory and universal algebra. Recurrence Consider the functor F : Set → Set that sends a set X to 1 + X. Here, Set denotes the category of sets, + denotes the usual coproduct given by the disjoint union, and 1 is a terminal object (i.e. any singleton set). Then, the set N of natural numbers together with the function [zero, succ] : 1 + N → N, the coproduct of the functions zero : 1 → N (picking out the number 0) and succ : N → N (sending n to n + 1), is an F-algebra. Initial F-algebra If the category of F-algebras for a given endofunctor F has an initial object, it is called an initial algebra. The algebra (N, [zero, succ]) in the above example is an initial algebra. Various finite data structures used in programming, such as lists and trees, can be obtained as initial algebras of specific endofunctors. Types defined by using the least fixed point construct with functor F can be regarded as initial F-algebras, provided that parametricity holds for the type. See also Universal algebra. Terminal F-coalgebra In a dual way, a similar relationship exists between notions of greatest fixed point and terminal F-coalgebra. These can be used for allowing potentially infinite objects while maintaining the strong normalization property. In the strongly normalizing Charity programming language (i.e. each program terminates in it), coinductive data types can be used to achieve surprising results, enabling the definition of lookup constructs to implement such "strong" functions as the Ackermann function. See also Algebras for a monad Algebraic data type Catamorphism Dialgebra Notes References External links Categorical programming with inductive and coinductive types by Varmo Vene Philip Wadler: Recursive types for free! University of Glasgow, June 1990. Draft. Algebra and coalgebra from CLiki B. Jacobs, J. Rutten: A Tutorial on (Co)Algebras and (Co)Induction. Bulletin of the European Association for Theoretical Computer Science, vol. 62, 1997. Understanding F-Algebras by Bartosz Milewski Category theory Functional programming
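As a concrete illustration of the Recurrence and Initial F-algebra material above, here is a small sketch in Python of the functor F(X) = 1 + X: the natural numbers carry its initial algebra, and the unique homomorphism into any other F-algebra, the fold (catamorphism), evaluates a numeral in that algebra. The names (Zero, Succ, fold) are invented for the example and do not come from any particular library.

```python
from dataclasses import dataclass
from typing import Callable, TypeVar, Union

A = TypeVar("A")

# Carrier of the initial algebra of F(X) = 1 + X: the natural numbers,
# built from the two summands of the functor.
@dataclass
class Zero:                 # the "1" summand
    pass

@dataclass
class Succ:                 # the "X" summand
    pred: "Nat"

Nat = Union[Zero, Succ]

def fold(zero_case: A, succ_case: Callable[[A], A], n: Nat) -> A:
    """The unique homomorphism from (Nat, [zero, succ]) to the F-algebra (A, [zero_case, succ_case])."""
    if isinstance(n, Zero):
        return zero_case
    return succ_case(fold(zero_case, succ_case, n.pred))

three = Succ(Succ(Succ(Zero())))
print(fold(0, lambda x: x + 1, three))      # interpret in (int, [0, +1])       -> 3
print(fold("", lambda s: s + "|", three))   # interpret in (str, ["", append])  -> '|||'
```

Choosing a different target algebra (a different carrier and structure map) changes the interpretation of the same numeral, which is exactly the sense in which the initial algebra encapsulates the induction principle.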
F-algebra
[ "Mathematics" ]
1,771
[ "Functions and mappings", "Mathematical structures", "Mathematical objects", "Fields of abstract algebra", "Mathematical relations", "Category theory" ]
2,168,731
https://en.wikipedia.org/wiki/Acropora
Acropora is a genus of small polyp stony coral in the phylum Cnidaria. Some of its species are known as table coral, elkhorn coral, and staghorn coral. Over 149 species are described. Acropora species are some of the major reef corals responsible for building the immense calcium carbonate substructure that supports the thin living skin of a reef. Anatomy and distribution Depending on the species and location, Acropora species may grow as plates or slender or broad branches. Like other corals, Acropora corals are colonies of individual polyps, which are about 2 mm across and share tissue and a nerve net. The polyps can withdraw back into the coral in response to movement or disturbance by potential predators, but when undisturbed, they protrude slightly. The polyps typically extend further at night to help capture plankton and organic matter from the water. The species are distributed in the Indo-Pacific (over 100 species) and Caribbean (3 species). However, the true number of species is unknown: firstly, the validity of many of these species is questioned as some have been shown to represent hybrids, for example Acropora prolifera; and secondly, some species have been shown to represent cryptic species complexes. Threats Symbiodinium, symbiotic algae, live in the corals' cells and produce energy for the animals through photosynthesis. Environmental destruction has led to a dwindling of populations of Acropora, along with other coral species. Acropora is especially susceptible to bleaching when stressed. Bleaching is due to the loss of the coral's zooxanthellae, which are a golden-brown color. Bleached corals are stark white and may die if new Symbiodinium cells cannot be assimilated. Common causes of bleaching and coral death include pollution, abnormally warm water temperatures, increased ocean acidification, sedimentation, and eutrophication. In 2014 the U.S. Fish and Wildlife Service listed ten Acropora species as 'threatened'. Reef-keeping Most Acropora species are brown or green, but a few are brightly colored, and those rare corals are prized by aquarists. Captive propagation of Acropora is widespread in the reef-keeping community. Given the right conditions, many Acropora species grow quickly, and individual colonies can exceed a meter across in the wild. In a well-maintained reef aquarium, finger-sized fragments can grow into medicine ball-sized colonies in one to two years. Captive specimens are steadily undergoing changes due to selection which enable them to thrive in the home aquarium. In some cases, fragments of captive specimens are used to repopulate barren reefs in the wild. Acropora species are challenging to keep in a home aquarium. They require bright light, stable temperatures, regular addition of calcium and alkalinity supplements, and clean, turbulent water. Common parasites of colonies in reef aquariums are "Acropora-eating flatworms" Amakusaplana acroporae, and "red bugs" (Tegastes acroporanus). 
Species The following species are recognised in the genus Acropora: Acropora abrolhosensis Veron, 1985 Acropora abrotanoides (Lamarck, 1816) Acropora acervata (Dana, 1846) Acropora aculeus (Dana, 1846) Acropora acuminata (Verrill, 1864) Acropora alvarezi† Wallace, 2008 Acropora anglica† (Duncan, 1866) Acropora anthocercis (Brook, 1893) Acropora arabensis Hodgson and Carpenter, 1995 Acropora arafura Wallace, Done & Muir, 2012 Acropora aspera (Dana, 1846) Acropora austera (Dana, 1846) Acropora awi Wallace and Wolstenholme, 1998 Acropora bartonensis† Wallace, 2008 Acropora batunai Wallace, 1997 Acropora borneoensis† (Felix, 1921) Acropora branchi Riegl, 1995 Acropora britannica† Wallace, 2008 Acropora bushyensis Veron and Wallace, 1984 Acropora capillaris (Klunzinger, 1879) Acropora cardenae Wells, 1986 Acropora carduus (Dana, 1846) Acropora caroliniana Nemenzo, 1976 Acropora cerealis (Dana, 1846) Acropora cervicornis (Lamarck, 1816) - staghorn coral Acropora chesterfieldensis Veron and Wallace, 1984 Acropora clathrata (Brook, 1891) Acropora cytherea (Dana, 1846) Acropora darrellae† Santodomingo, Wallace & Johnson, 2015 Acropora deformis† (Michelin, 1840) Acropora dendrum (Bassett-Smith, 1890) Acropora derawaensis Wallace, 1997 Acropora desalwii Wallace, 1994 Acropora digitifera (Dana, 1846) Acropora divaricata (Dana, 1846) Acropora donei Veron and Wallace, 1984 Acropora downingi Wallace, 1999 Acropora duncani† (Reuss, 1866) Acropora echinata (Dana, 1846) Acropora elegans (Milne-Edwards and Haime, 1860) Acropora elenae† Santodomingo, Wallace & Johnson, 2015 Acropora elseyi (Brook, 1892) Acropora emanuelae† Santodomingo, Wallace & Johnson, 2015 Acropora eurystoma (Klunzinger, 1879) Acropora fastigata Nemenzo, 1967 Acropora fennemai† (Gerth, 1921) Acropora fenneri Veron, 2000 Acropora filiformis Veron, 2000 Acropora florida (Dana, 1846) Acropora gemmifera (Brook, 1892) Acropora glauca (Brook, 1893) Acropora globiceps (Dana, 1846) Acropora gomezi Veron, 2000 Acropora grandis (Brook, 1892) Acropora granulosa (Milne-Edwards and Haime, 1860) Acropora haidingeri† (Reuss, 1864) Acropora halmaherae Wallace and Wolstenholme, 1998 Acropora hasibuani† Santodomingo, Wallace & Johnson, 2015 Acropora hemprichii (Ehrenberg, 1834) Acropora herklotsi† (Reuss, 1866) Acropora hoeksemai Wallace, 1997 Acropora horrida (Dana, 1846) Acropora humilis (Dana, 1846) Acropora hyacinthus (Dana, 1846) Acropora indonesia Wallace, 1997 Acropora intermedia (Brook, 1891) Acropora jacquelineae Wallace, 1994 Acropora japonica Veron, 2000 Acropora kimbeensis Wallace, 1999 Acropora kirstyae Veron and Wallace, 1984 Acropora kosurini Wallace, 1994 Acropora lamarcki Veron, 2000 Acropora latistella (Brook, 1892) Acropora laurae† Santodomingo, Wallace & Johnson, 2015 Acropora lavandulina† (Michelin, 1840) Acropora listeri (Brook, 1893) Acropora loisetteae Wallace, 1994 Acropora lokani Wallace, 1994 Acropora longicyathus (Milne-Edwards and Haime, 1860) Acropora loripes (Brook, 1892) Acropora lovelli Veron and Wallace, 1984 Acropora lutkeni Crossland, 1952 Acropora macrocalyx† Wallace & Bosselini, 2015 Acropora microclados (Ehrenberg, 1834) Acropora microphthalma (Verrill, 1869) Acropora millepora (Ehrenberg, 1834) Acropora monticulosa (Brueggemann, 1879) Acropora mossambica Riegl, 1995 Acropora multiacuta Nemenzo, 1967 Acropora muricata (Linnaeus, 1758) Acropora nana (Studer, 1878) Acropora nasuta (Dana, 1846) Acropora natalensis Riegl, 1995 Acropora ornata† (DeFrance, 1828) Acropora palmata (Lamarck, 1816) - elkhorn coral Acropora 
palmerae Wells, 1954 Acropora paniculata Verrill, 1902 Acropora papillare Latypov, 1992 Acropora pectinata Veron, 2000 Acropora pharaonis (Milne-Edwards and Haime, 1860) Acropora pichoni Wallace, 1999 Acropora piedmontensis† Wallace & Bosselini, 2015 Acropora plumosa Wallace and Wolstenholme, 1998 Acropora polystoma (Brook, 1891) Acropora prolifera (Lamarck, 1816) - fused staghorn coral Acropora proteacea† Wallace, 2008 Acropora proximalis Veron, 2000 Acropora pulchra (Brook, 1891) Acropora renemai† Santodomingo, Wallace & Johnson, 2015 Acropora retusa (Dana, 1846) Acropora ridzwani Ditlev, 2003 Acropora robusta (Dana, 1846) Acropora roemeri† (Duncan, 1866) Acropora rongelapensis Richards & Wallace, 2004 Acropora roseni Wallace, 1999 Acropora rudis (Rehberg, 1892) Acropora rufa Veron, 2000 Acropora russelli Wallace, 1994 Acropora salentina† Wallace & Bosselini, 2015 Acropora samoensis (Brook, 1891) Acropora sarmentosa (Brook, 1892) Acropora secale (Studer, 1878) Acropora selago (Studer, 1878) Acropora seriata (Ehrenberg, 1834) Acropora serrata Lamarck Acropora simplex Wallace and Wolstenholme, 1998 Acropora sirikitiae Wallace, Phongsuwan & Muir, 2012 Acropora slovenica† Wallace & Bosselini, 2015 Acropora solanderi† (Defrance, 1828) Acropora solitaryensis Veron and Wallace, 1984 Acropora sordiensis Riegl, 1995 Acropora spathulata (Brook, 1891) Acropora speciosa (Quelch, 1886) Acropora spicifera (Dana, 1846) Acropora squarrosa (Ehrenberg, 1834) Acropora striata (Verrill, 1866) Acropora subglabra (Brook, 1891) Acropora subulata (Dana, 1846) Acropora suharsonoi Wallace, 1994 Acropora sukarnoi Wallace, 1997 Acropora tanegashimensis Veron, 1990 Acropora tenella (Brook, 1892) Acropora tenuis (Dana, 1846) Acropora torihalimeda Wallace, 1994 Acropora tortuosa (Dana, 1846) Acropora tuberculosa (Milne Edwards, 1860) Acropora turaki Wallace, 1994 Acropora valenciennesi (Milne-Edwards and Haime, 1860) Acropora valida (Dana, 1846) Acropora variolosa (Klunzinger, 1879) Acropora vaughani Wells, 1954 Acropora verweyi Veron and Wallace, 1984 Acropora walindii Wallace, 1999 Acropora willisae Veron and Wallace, 1984 Acropora wilsonae† Wallace, 2008 Acropora yongei Veron and Wallace, 1984 References Further reading External links Acroporidae Coral reefs Articles containing video clips Scleractinia genera
Acropora
[ "Biology" ]
2,502
[ "Biogeomorphology", "Coral reefs" ]
2,168,763
https://en.wikipedia.org/wiki/Reyn
In fluid dynamics, the reyn is a British unit of dynamic viscosity, named in honour of Osborne Reynolds, for whom the Reynolds number is also named. Conversions By definition, 1 reyn = 1 lbf s in−2. It follows that the relation between the reyn and the poise is approximately 1 reyn = 6.89476 × 10^4 P. In SI units, viscosity is expressed in newton-seconds per square meter, or equivalently in pascal-seconds. The conversion factor between the two is approximately 1 reyn = 6890 Pa s. References External links Reyn History of the unit Fluid dynamics Units of dynamic viscosity
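The conversion factors quoted above follow directly from the definitions of the pound-force and the inch; the short sketch below rederives them (printed figures are rounded).

```python
# Derive 1 reyn (= 1 lbf·s/in^2) in SI and CGS units from exact definitions.
LBF_IN_NEWTONS = 4.4482216152605   # 1 pound-force in newtons (exact by definition)
INCH_IN_METRES = 0.0254            # 1 inch in metres (exact by definition)

reyn_in_pascal_seconds = LBF_IN_NEWTONS / INCH_IN_METRES ** 2
reyn_in_poise = reyn_in_pascal_seconds * 10            # 1 Pa·s = 10 P

print(f"1 reyn ≈ {reyn_in_pascal_seconds:.2f} Pa·s")   # ≈ 6894.76 Pa·s
print(f"1 reyn ≈ {reyn_in_poise:.4g} P")               # ≈ 6.895e+04 P
```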
Reyn
[ "Chemistry", "Mathematics", "Engineering" ]
139
[ "Fluid dynamics stubs", "Chemical engineering", "Quantity", "Units of dynamic viscosity", "Piping", "Units of measurement", "Fluid dynamics" ]
2,168,837
https://en.wikipedia.org/wiki/Password-based%20cryptography
Password-based cryptography is the study of password-based key encryption, decryption, and authorization. It generally refers to two distinct classes of methods: Single-party methods Multi-party methods Single party methods Some systems attempt to derive a cryptographic key directly from a password. However, such practice is generally ill-advised when there is a threat of brute-force attack. Techniques to mitigate such attacks include passphrases and iterated (deliberately slow) password-based key derivation functions such as PBKDF2 (RFC 2898). Multi-party methods Password-authenticated key agreement systems allow two or more parties that agree on a password (or password-related data) to derive shared keys without exposing the password or keys to network attack. Earlier generations of challenge–response authentication systems have also been used with passwords, but these have generally been subject to eavesdropping and/or brute-force attacks on the password. See also Password Passphrase Password-authenticated key agreement References Further reading https://link.springer.com/chapter/10.1007/978-3-642-32009-5_19 https://link.springer.com/chapter/10.1007/978-3-662-46447-2_14 Cryptography
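As an illustration of the slow, iterated key derivation mentioned above, the following sketch uses Python's standard-library PBKDF2 implementation. The salt handling and the iteration count are illustrative choices for the example, not recommendations drawn from the text.

```python
import hashlib
import hmac
import os

def derive_key(password, salt=None, iterations=600_000):
    """Derive a 32-byte key from a password with PBKDF2-HMAC-SHA256 (the scheme of RFC 2898)."""
    if salt is None:
        salt = os.urandom(16)   # fresh random salt, stored alongside the derived key
    key = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations, dklen=32)
    return key, salt

key, salt = derive_key("correct horse battery staple")

# Later: re-derive with the stored salt and compare in constant time.
candidate, _ = derive_key("correct horse battery staple", salt=salt)
print(hmac.compare_digest(key, candidate))   # True
```

The deliberately large iteration count is what makes each brute-force guess expensive for an attacker while remaining a one-off cost for the legitimate user.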
Password-based cryptography
[ "Mathematics", "Engineering" ]
274
[ "Applied mathematics", "Cryptography", "Cybersecurity engineering" ]
2,168,889
https://en.wikipedia.org/wiki/Asset%20allocation
Asset allocation is the implementation of an investment strategy that attempts to balance risk versus reward by adjusting the percentage of each asset in an investment portfolio according to the investor's risk tolerance, goals and investment time frame. The focus is on the characteristics of the overall portfolio. Such a strategy contrasts with an approach that focuses on individual assets. Description Many financial experts argue that asset allocation is an important factor in determining returns for an investment portfolio. Asset allocation is based on the principle that different assets perform differently in different market and economic conditions. A fundamental justification for asset allocation is the notion that different asset classes offer returns that are not perfectly correlated, hence diversification reduces the overall risk in terms of the variability of returns for a given level of expected return. Asset diversification has been described as "the only free lunch you will find in the investment game". Academic research has painstakingly explained the importance and benefits of asset allocation and the problems of active management (see academic studies section below). Although the risk is reduced as long as correlations are not perfect, it is typically forecast (wholly or in part) based on statistical relationships (like correlation and variance) that existed over some past period. Expectations for return are often derived in the same way. Studies of these forecasting methods constitute an important direction of academic research. When such backward-looking approaches are used to forecast future returns or risks using the traditional mean-variance optimization approach to the asset allocation of modern portfolio theory (MPT), the strategy is, in fact, predicting future risks and returns based on history. As there is no guarantee that past relationships will continue in the future, this is one of the "weak links" in traditional asset allocation strategies as derived from MPT. Other, more subtle weaknesses include seemingly minor errors in forecasting leading to recommended allocations that are grossly skewed from investment mandates and/or impractical; often even violating an investment manager's "common sense" understanding of a tenable portfolio-allocation strategy. Asset classes An asset class is a group of economic resources sharing similar characteristics, such as riskiness and return. There are many types of assets that may or may not be included in an asset allocation strategy. Traditional assets The "traditional" asset classes are stocks, bonds, and cash: Stocks: value, dividend, growth, or sector-specific (or a "blend" of any two or more of the preceding); large-cap versus mid-cap, small-cap or micro-cap; domestic, foreign (developed), emerging or frontier markets Bonds (fixed income securities more generally): investment-grade or junk (high-yield); government or corporate; short-term, intermediate, long-term; domestic, foreign, emerging markets Cash and cash equivalents (e.g., deposit account, money market fund) Allocation among these three provides a starting point. Usually included are hybrid instruments such as convertible bonds and preferred stocks, counting as a mixture of bonds and stocks. Alternative assets Other alternative assets that may be considered include: Valuable economic goods and consumer goods such as precious metals and other valuable tangible goods. 
Commercial or residential real estate (also REITs) Collectibles such as art, coins, or stamps Insurance products (annuity, life settlements, catastrophe bonds, personal life insurance products, etc.) Derivatives such as options, collateralized debt, and futures Foreign currency Venture capital Private equity Distressed securities Infrastructure Allocation strategy There are several types of asset allocation strategies based on investment goals, risk tolerance, time frames and diversification. The most common forms of asset allocation are: strategic, dynamic, tactical, and core-satellite. Strategic asset allocation The primary goal of strategic asset allocation is to create an asset mix that seeks to provide the optimal balance between expected risk and return for a long-term investment horizon. Generally speaking, strategic asset allocation strategies are agnostic to economic environments, i.e., they do not change their allocation postures relative to changing market or economic conditions. Dynamic asset allocation Dynamic asset allocation is similar to strategic asset allocation in that portfolios are built by allocating to an asset mix that seeks to provide the optimal balance between expected risk and return for a long-term investment horizon. Like strategic allocation strategies, dynamic strategies largely retain exposure to their original asset classes; however, unlike strategic strategies, dynamic asset allocation portfolios will adjust their postures over time relative to changes in the economic environment. Tactical asset allocation Tactical asset allocation is a strategy in which an investor takes a more active approach that tries to position a portfolio into those assets, sectors, or individual stocks that show the most potential for perceived gains. While an original asset mix is formulated much like strategic and dynamic portfolio, tactical strategies are often traded more actively and are free to move entirely in and out of their core asset classes. Core-satellite asset allocation Core-satellite allocation strategies generally contain a 'core' strategic element making up the most significant portion of the portfolio, while applying a dynamic or tactical 'satellite' strategy that makes up a smaller part of the portfolio. In this way, core-satellite allocation strategies are a hybrid of the strategic and dynamic/tactical allocation strategies mentioned above. Classification Industry sectors may be classified according to an industry classification taxonomy (such as the Industry Classification Benchmark). The top-level sectors may be grouped as below: Morningstar X-ray Defensive Consumer Staples Health Care Utilities Sensitive Energy Industrials Technology Telecommunications Cyclical Consumer Discretionary Basic Materials Financials Real Estate Per the Tactical asset allocation strategy above, an investor may allocate more to cyclical sectors when the economy is showing gains, and more to defensive when it is not. Academic studies In 1986, Gary P. Brinson, L. Randolph Hood, and SEI's Gilbert L. Beebower (BHB) published a study about asset allocation of 91 large pension funds measured from 1974 to 1983. They replaced the pension funds' stock, bond, and cash selections with corresponding market indexes. The indexed quarterly return was found to be higher than the pension plan's actual quarterly return. The two quarterly return series' linear correlation was measured at 96.7%, with shared variance of 93.6%. 
A 1991 follow-up study by Brinson, Singer, and Beebower measured variance of 91.5%. The conclusion of the study was that replacing active choices with simple asset classes worked just as well as, if not even better than, professional pension managers. Also, a small number of asset classes was sufficient for financial planning. Financial advisors often pointed to this study to support the idea that asset allocation is more important than all other concerns, which the BHB study is incorrectly thought to have lumped together as "market timing" but was actually policy selection. One problem with the Brinson study was that the cost factor in the two return series was not clearly discussed. However, in response to a letter to the editor, Hood noted that the returns series were gross of management fees. In 1997, William Jahnke initiated a debate on this topic, attacking the BHB study in a paper titled "The Asset Allocation Hoax". The Jahnke discussion appeared in the Journal of Financial Planning as an opinion piece, not a peer reviewed article. Jahnke's main criticism, still undisputed, was that BHB's use of quarterly data dampens the impact of compounding slight portfolio disparities over time, relative to the benchmark. One could compound 2% and 2.15% quarterly over 20 years and see the sizable difference in cumulative return. However, the difference is still 15 basis points (hundredths of a percent) per quarter; the difference is one of perception, not fact. In 2000, Ibbotson and Kaplan used five asset classes in their study "Does Asset Allocation Policy Explain 40, 90, or 100 Percent of Performance?". The asset classes included were large-cap US stock, small-cap US stock, non-US stock, US bonds, and cash. Ibbotson and Kaplan examined the 10-year return of 94 US balanced mutual funds versus the corresponding indexed returns. This time, after properly adjusting for the cost of running index funds, the actual returns again failed to beat index returns. The linear correlation between monthly index return series and the actual monthly actual return series was measured at 90.2%, with shared variance of 81.4%. Ibbotson concluded 1) that asset allocation explained 40% of the variation of returns across funds, and 2) that it explained virtually 100% of the level of fund returns. Gary Brinson has expressed his general agreement with the Ibbotson-Kaplan conclusions. In both studies, it is misleading to make statements such as "asset allocation explains 93.6% of investment return". Even "asset allocation explains 93.6% of quarterly performance variance" leaves much to be desired, because the shared variance could be from pension funds' operating structure. Hood, however, rejects this interpretation on the grounds that pension plans, in particular, cannot cross-share risks and that they are explicitly singular entities, rendering shared variance irrelevant. The statistics were most helpful when used to demonstrate the similarity of the index return series and the actual return series. A 2000 paper by Meir Statman found that using the same parameters that explained BHB's 93.6% variance result, a hypothetical financial advisor with perfect foresight in tactical asset allocation performed 8.1% better per year, yet the strategic asset allocation still explained 89.4% of the variance. Thus, explaining variance does not explain performance. Statman says that strategic asset allocation is movement along the efficient frontier, whereas tactical asset allocation involves movement of the efficient frontier. 
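A quick way to see the compounding point raised in the Jahnke discussion above is to work the arithmetic directly; the sketch below compares 2.00% and 2.15% quarterly returns (a 15-basis-point gap) over 20 years, i.e. 80 quarters.

```python
# Compound a 15-basis-point quarterly difference over 20 years (80 quarters).
quarters = 20 * 4
low = 1.0200 ** quarters     # 2.00% per quarter
high = 1.0215 ** quarters    # 2.15% per quarter

print(f"growth of $1 at 2.00%/quarter: ${low:.2f}")      # ≈ $4.88
print(f"growth of $1 at 2.15%/quarter: ${high:.2f}")     # ≈ $5.48
print(f"cumulative gap: {(high / low - 1) * 100:.1f}%")  # ≈ 12.5%
```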
A more common-sense explanation of the Brinson, Hood, and Beebower study is that asset allocation explains more than 90% of the volatility of returns of an overall portfolio, but will not explain the ending results of your portfolio over long periods of time. Hood notes in his review of the material over 20 years, however, that explaining performance over time is possible with the BHB approach but was not the focus of the original paper. Bekkers, Doeswijk and Lam (2009) investigate the diversification benefits for a portfolio by distinguishing ten different investment categories simultaneously in a mean-variance analysis as well as a market portfolio approach. The results suggest that real estate, commodities, and high yield add the most value to the traditional asset mix of stocks, bonds, and cash. A study with such broad coverage of asset classes had not been conducted before, either in the context of determining capital market expectations and performing a mean-variance analysis, or in assessing the global market portfolio. Doeswijk, Lam and Swinkels (2014) argue that the portfolio of the average investor contains important information for strategic asset allocation purposes. This portfolio shows the relative value of all assets according to the market crowd, which one could interpret as a benchmark or the optimal portfolio for the average investor. The authors determine the market values of equities, private equity, real estate, high yield bonds, emerging debt, non-government bonds, government bonds, inflation linked bonds, commodities, and hedge funds. For this range of assets, they estimate the invested global market portfolio for the period 1990 to 2012. For the main asset categories (equities, real estate, non-government bonds, and government bonds) they extend the period back to 1959 and through 2012. Doeswijk, Lam and Swinkels (2019) show that the global market portfolio realizes a compounded real return of 4.45% per year with a standard deviation of 11.2% from 1960 until 2017. In the inflationary period from 1960 to 1979, the compounded real return of the global market portfolio is 3.24% per year, while this is 6.01% per year in the disinflationary period from 1980 to 2017. The average return during recessions was -1.96% per year, versus 7.72% per year during expansions. The reward for the average investor over the period 1960 to 2017 is a compounded return of 3.39 percentage points above the risk-less rate earned by savers. Historically, since the 20th century, US equities have outperformed equities of other countries because of the competitive advantage the US has due to its large GDP. Performance indicators McGuigan described an examination of funds that were in the top quartile of performance during 1983 to 1993. During the second measurement period of 1993 to 2003, only 28.57% of the funds remained in the top quartile. 33.33% of the funds dropped to the second quartile. The rest of the funds dropped to the third or fourth quartile. In fact, low cost was a more reliable indicator of performance. Bogle noted that an examination of five-year performance data of large-cap blend funds revealed that the lowest cost quartile funds had the best performance, and the highest cost quartile funds had the worst performance. Return versus risk trade-off In asset allocation planning, the decision on the proportion of stocks versus bonds in one's portfolio is a very important one. Simply buying stocks without regard to a possible bear market can result in panic selling later. 
One's true risk tolerance can be hard to gauge until one has experienced a real bear market with money invested in the market. Finding the proper balance is key. The tables show why asset allocation is important. It determines an investor's future return, as well as the bear market burden that he or she will have to carry successfully to realize the returns. Problems with asset allocation There are various reasons why asset allocation fails to work. Investor behavior is inherently biased. Even when an investor chooses an asset allocation, implementation is a challenge. Investors agree to asset allocation, but after some good returns, they decide that they really wanted more risk. Investors agree to asset allocation, but after some bad returns, they decide that they really wanted less risk. Investors' risk tolerance is not knowable ahead of time. Security selection within asset classes will not necessarily produce a risk profile equal to the asset class. The long-run behavior of asset classes does not guarantee their shorter-term behavior. Different assets are subject to distinct tax treatments and regulatory considerations, which can make asset allocation decisions more complex. Frequent asset class rebalancing and maintaining a diversified portfolio can lead to substantial costs and fees, which may reduce overall returns. Accurately predicting the optimal times to invest in or sell out of various asset classes is difficult, and poor timing can adversely affect returns. See also Asset location Economic capital Efficient-market hypothesis Performance attribution References External links Asset allocation performance Model portfolios for buy and hold index investors Calculator for determining allocation of retirement assets, and related risk questionnaire Calculator which determines future asset mix based on differing growth rates and contributions Investment management Actuarial science Corporate development
Asset allocation
[ "Mathematics" ]
3,046
[ "Applied mathematics", "Actuarial science" ]
2,168,924
https://en.wikipedia.org/wiki/Mercury%20cadmium%20telluride
Hg1−xCdxTe or mercury cadmium telluride (also cadmium mercury telluride, MCT, MerCad Telluride, MerCadTel, MerCaT or CMT) is a chemical compound of cadmium telluride (CdTe) and mercury telluride (HgTe) with a tunable bandgap spanning the shortwave infrared to the very long wave infrared regions. The amount of cadmium (Cd) in the alloy can be chosen so as to tune the optical absorption of the material to the desired infrared wavelength. CdTe is a semiconductor with a bandgap of approximately at room temperature. HgTe is a semimetal, which means that its bandgap energy is zero. Mixing these two substances allows one to obtain any bandgap between 0 and 1.5 eV. Properties Physical Hg1−xCdxTe has a zincblende structure with two interpenetrating face-centered cubic lattices offset by (1/4,1/4,1/4)ao in the primitive cell. The cations Cd and Hg are statistically mixed on the yellow sublattice while the Te anions form the grey sublattice in the image. Electronic The electron mobility of HgCdTe with a large Hg content is very high. Among common semiconductors used for infrared detection, only InSb and InAs surpass electron mobility of HgCdTe at room temperature. At 80 K, the electron mobility of Hg0.8Cd0.2Te can be several hundred thousand cm2/(V·s). Electrons also have a long ballistic length at this temperature; their mean free path can be several micrometres. The intrinsic carrier concentration is given by where k is the Boltzmann constant, q is the elementary electric charge, T is the material temperature, x is the percentage of cadmium concentration, and Eg is the bandgap given by Using the relationship , where λ is in μm and Eg. is in electron volts, one can also obtain the cutoff wavelength as a function of x and t: Minority carrier lifetime Auger recombination Two types of Auger recombination affect HgCdTe: Auger 1 and Auger 7 recombination. Auger 1 recombination involves two electrons and one hole, where an electron and a hole combine and the remaining electron receives energy equal to or greater than the band gap. Auger 7 recombination is similar to Auger 1, but involves one electron and two holes. The Auger 1 minority carrier lifetime for intrinsic (undoped) HgCdTe is given by where FF is the overlap integral (approximately 0.221). The Auger 1 minority carrier lifetime for doped HgCdTe is given by where n is the equilibrium electron concentration. The Auger 7 minority carrier lifetime for intrinsic HgCdTe is approximately 10 times longer than the Auger 1 minority carrier lifetime: The Auger 7 minority carrier lifetime for doped HgCdTe is given by The total contribution of Auger 1 and Auger 7 recombination to the minority carrier lifetime is computed as Mechanical HgCdTe is a soft material due to the weak bonds Hg forms with tellurium. It is a softer material than any common III–V semiconductor. The Mohs hardness of HgTe is 1.9, CdTe is 2.9 and Hg0.5Cd0.5Te is 4. The hardness of lead salts is lower still. Thermal The thermal conductivity of HgCdTe is low; at low cadmium concentrations it is as low as 0.2 W·K−1⋅m−1. This means that it is unsuitable for high power devices. Although infrared light-emitting diodes and lasers have been made in HgCdTe, they must be operated cold to be efficient. The specific heat capacity is 150 J·kg−1⋅K−1. Optical HgCdTe is transparent in the infrared at photon energies below the energy gap. The refractive index is high, reaching nearly 4 for HgCdTe with high Hg content. 
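The cutoff-wavelength relation referred to above (λ in μm, Eg in eV) is the usual photon-energy conversion λ ≈ 1.24/Eg. The sketch below applies it to two illustrative bandgaps; the composition- and temperature-dependent expression for Eg(x, T) itself is not reproduced here, so the bandgap values are assumptions chosen only to land in the two atmospheric windows discussed below.

```python
# Convert a bandgap in eV to a cutoff wavelength in micrometres: lambda = hc / Eg.
HC_EV_UM = 1.2398   # Planck constant times speed of light, in eV·µm

def cutoff_wavelength_um(bandgap_ev):
    return HC_EV_UM / bandgap_ev

# Illustrative bandgaps only; real values depend on the Cd fraction x and on temperature.
for eg in (0.25, 0.10):
    print(f"Eg = {eg:.2f} eV  ->  cutoff ≈ {cutoff_wavelength_um(eg):.1f} µm")
# 0.25 eV gives roughly 5 µm (MWIR window); 0.10 eV gives roughly 12.4 µm (LWIR window).
```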
Infrared detection HgCdTe is the only common material that can detect infrared radiation in both of the accessible atmospheric windows. These are from 3 to 5 μm (the mid-wave infrared window, abbreviated MWIR) and from 8 to 12 μm (the long-wave window, LWIR). Detection in the MWIR and LWIR windows is obtained using 30% [(Hg0.7Cd0.3)Te] and 20% [(Hg0.8Cd0.2)Te] cadmium respectively. HgCdTe can also detect in the short wave infrared SWIR atmospheric windows of 2.2 to 2.4 μm and 1.5 to 1.8 μm. HgCdTe is a common material in photodetectors of Fourier transform infrared spectrometers. This is because of the large spectral range of HgCdTe detectors and also the high quantum efficiency. It is also found in military field, remote sensing and infrared astronomy research. Military technology has depended on HgCdTe for night vision. In particular, the US air force makes extensive use of HgCdTe on all aircraft, and to equip airborne smart bombs. A variety of heat-seeking missiles are also equipped with HgCdTe detectors. HgCdTe detector arrays can also be found at most of the worlds major research telescopes including several satellites. Many HgCdTe detectors (such as Hawaii and NICMOS detectors) are named after the astronomical observatories or instruments for which they were originally developed. The main limitation of LWIR HgCdTe-based detectors is that they need cooling to temperatures near that of liquid nitrogen (77 K), to reduce noise due to thermally excited current carriers (see cooled infrared camera). MWIR HgCdTe cameras can be operated at temperatures accessible to thermoelectric coolers with a small performance penalty. Hence, HgCdTe detectors are relatively heavy compared to bolometers and require maintenance. On the other side, HgCdTe enjoys much higher speed of detection (frame rate) and is significantly more sensitive than some of its more economical competitors. HgCdTe can be used as a heterodyne detector, in which the interference between a local source and returned laser light is detected. In this case it can detect sources such as CO2 lasers. In heterodyne detection mode HgCdTe can be uncooled, although greater sensitivity is achieved by cooling. Photodiodes, photoconductors or photoelectromagnetic (PEM) modes can be used. A bandwidth well in excess of 1 GHz can be achieved with photodiode detectors. The main competitors of HgCdTe are less sensitive Si-based bolometers (see uncooled infrared camera), InSb and photon-counting superconducting tunnel junction (STJ) arrays. Quantum well infrared photodetectors (QWIP), manufactured from III–V semiconductor materials such as GaAs and AlGaAs, are another possible alternative, although their theoretical performance limits are inferior to HgCdTe arrays at comparable temperatures and they require the use of complicated reflection/diffraction gratings to overcome certain polarization exclusion effects which impact array responsivity. In the future, the primary competitor to HgCdTe detectors may emerge in the form of Quantum Dot Infrared Photodetectors (QDIP), based on either a colloidal or type-II superlattice structure. Unique 3-D quantum confinement effects, combined with the unipolar (non-exciton based photoelectric behavior) nature of quantum dots could allow comparable performance to HgCdTe at significantly higher operating temperatures. Initial laboratory work has shown promising results in this regard and QDIPs may be one of the first significant nanotechnology products to emerge. 
In HgCdTe, detection occurs when an infrared photon of sufficient energy kicks an electron from the valence band to the conduction band. Such an electron is collected by a suitable external readout integrated circuits (ROIC) and transformed into an electric signal. The physical mating of the HgCdTe detector array to the ROIC is often referred to as a "focal plane array". In contrast, in a bolometer, light heats up a tiny piece of material. The temperature change of the bolometer results in a change in resistance which is measured and transformed into an electric signal. Mercury zinc telluride has better chemical, thermal, and mechanical stability characteristics than HgCdTe. It has a steeper change of energy gap with mercury composition than HgCdTe, making compositional control harder. HgCdTe growth techniques Bulk crystal growth The first large scale growth method was bulk recrystallization of a liquid melt. This was the main growth method from the late 1950s to the early 1970s. Epitaxial growth Highly pure and crystalline HgCdTe is fabricated by epitaxy on either CdTe or CdZnTe substrates. CdZnTe is a compound semiconductor, the lattice parameter of which can be exactly matched to that of HgCdTe. This eliminates most defects from the epilayer of HgCdTe. CdTe was developed as an alternative substrate in the '90s. It is not lattice-matched to HgCdTe, but is much cheaper, as it can be grown by epitaxy on silicon (Si) or germanium (Ge) substrates. Liquid phase epitaxy (LPE), in which a CdZnTe substrate is lowered and spinning on top of the surface of a slowly cooling liquid HgCdTe melt. This gives the best results in terms of crystalline quality, and is still a common technique of choice for industrial production. In recent years, molecular beam epitaxy (MBE) has become widespread because of its ability to stack up layers of different alloy composition. This allows simultaneous detection at several wavelengths. Furthermore, MBE, and also MOVPE, allow growth on large area substrates such as CdTe on Si or Ge, whereas LPE does not allow such substrates to be used. Toxicity Mercury cadmium telluride is known to be a toxic material, with additional danger from the high vapor pressure of mercury at the material's melting point; in spite of this, it continues to be developed and used in its applications. See also Related materials Mercury telluride, Cadmium telluride, Mercury zinc telluride. Other infrared detection materials Indium antimonide, Indium arsenide, Indium arsenide antimonide, Lead selenide, QWIP Other Infrared, thermography. References Notes Bibliography . (Earliest known reference) Properties of Narrow-Gap Cadmium-Based Compounds, Ed. P. Capper (INSPEC, IEE, London, UK, 1994) HgCdTe Infrared Detectors, P. Norton, Opto-Electronics Review vol. 10(3), 159–174 (2002) . . Semiconductor Quantum Wells and Superlattices for Long-Wavelength Infrared Detectors M.O. Manasreh, Editor (Artech House, Norwood, MA), (1993). External links National Pollutant Inventory – Mercury and compounds Fact Sheet Korea i3system in Daejeon Mercury(II) compounds Cadmium compounds Tellurides II-VI semiconductors Infrared sensor materials
Mercury cadmium telluride
[ "Chemistry" ]
2,396
[ "Semiconductor materials", "II-VI semiconductors", "Inorganic compounds" ]
2,169,038
https://en.wikipedia.org/wiki/Homochirality
Homochirality is a uniformity of chirality, or handedness. Objects are chiral when they cannot be superposed on their mirror images. For example, the left and right hands of a human are approximately mirror images of each other but are not their own mirror images, so they are chiral. In biology, 19 of the 20 natural amino acids are homochiral, being L-chiral (left-handed), while sugars are D-chiral (right-handed). Homochirality can also refer to enantiopure substances in which all the constituents are the same enantiomer (a right-handed or left-handed version of an atom or molecule), but some sources discourage this use of the term. It is unclear whether homochirality has a purpose; however, it appears to be a form of information storage. One suggestion is that it reduces entropy barriers in the formation of large organized molecules. It has been experimentally verified that amino acids form large aggregates in larger abundance from enantiopure samples of the amino acid than from racemic (enantiomerically mixed) ones. It is not clear whether homochirality emerged before or after life, and many mechanisms for its origin have been proposed. Some of these models propose three distinct steps: mirror-symmetry breaking creates a minute enantiomeric imbalance, chiral amplification builds on this imbalance, and chiral transmission is the transfer of chirality from one set of molecules to another. In biology Amino acids are the building blocks of peptides and enzymes while sugar-phosphate chains are the backbone of RNA and DNA. In biological organisms, amino acids appear almost exclusively in the left-handed form (L-amino acids) and sugars in the right-handed form (D-sugars). Since the enzymes catalyze reactions, they enforce homochirality on a great variety of other chemicals, including hormones, toxins, fragrances and food flavors. Glycine is achiral; some non-proteinogenic amino acids are likewise achiral (such as dimethylglycine) or occur in the D enantiomeric form. Biological organisms easily discriminate between molecules with different chiralities. This can affect physiological reactions such as smell and taste. Carvone, a terpenoid found in essential oils, smells like spearmint in its R-form and like caraway in its S-form. Limonene tastes like citrus when right-handed and pine when left-handed. Homochirality also affects the response to drugs. One enantiomer of thalidomide was used to treat morning sickness, while the other causes birth defects. Unfortunately, even if the therapeutic enantiomer is administered alone, some of it can convert to the other form in the patient. Many drugs are available as both a racemic mixture (equal amounts of both chiralities) and an enantiopure drug (only one chirality). Depending on the manufacturing process, enantiopure forms can be more expensive to produce than stereochemical mixtures. Chiral preferences can also be found at a macroscopic level. Snail shells can be right-turning or left-turning helices, but one form or the other is strongly preferred in a given species. In the edible snail Helix pomatia, only one out of 20,000 is left-helical. The coiling of plants can have a preferred chirality and even the chewing motion of cows has a 10% excess in one direction. Origins Symmetry breaking Theories for the origin of homochirality in the molecules of life can be classified as deterministic or based on chance depending on their proposed mechanism. 
If there is a relationship between cause and effect — that is, a specific chiral field or influence causing the mirror symmetry breaking — the theory is classified as deterministic; otherwise it is classified as a theory based on chance (in the sense of randomness) mechanisms. Another classification for the different theories of the origin of biological homochirality could be made depending on whether life emerged before the enantiodiscrimination step (biotic theories) or afterwards (abiotic theories). Biotic theories claim that homochirality is simply a result of the natural autoamplification process of life—that either the formation of life as preferring one chirality or the other was a chance rare event which happened to occur with the chiralities we observe, or that all chiralities of life emerged rapidly but due to catastrophic events and strong competition, the other unobserved chiral preferences were wiped out by the preponderance and metabolic, enantiomeric enrichment from the 'winning' chirality choices. If this was the case, remains of the extinct chirality sign should be found. Since this is not the case, nowadays biotic theories are no longer supported. The emergence of chirality consensus as a natural autoamplification process has also been associated with the 2nd law of thermodynamics. Deterministic theories Deterministic theories can be divided into two subgroups: if the initial chiral influence took place in a specific space or time location (averaging zero over large enough areas of observation or periods of time), the theory is classified as local deterministic; if the chiral influence is permanent at the time the chiral selection occurred, then it is classified as universal deterministic. The classification groups for local determinist theories and theories based on chance mechanisms can overlap. Even if an external chiral influence produced the initial chiral imbalance in a deterministic way, the outcome sign could be random since the external chiral influence has its enantiomeric counterpart elsewhere. In deterministic theories, the enantiomeric imbalance is created due to an external chiral field or influence, and the ultimate sign imprinted in biomolecules will be due to it. Deterministic mechanisms for the production of non-racemic mixtures from racemic starting materials include: asymmetric physical laws, such as the electroweak interaction (via cosmic rays) or asymmetric environments, such as those caused by circularly polarized light, quartz crystals, or the Earth's rotation, β-Radiolysis or the magnetochiral effect. The most accepted universal deterministic theory is the electroweak interaction. Once established, chirality would be selected for. One supposition is that the discovery of an enantiomeric imbalance in molecules in the Murchison meteorite supports an extraterrestrial origin of homochirality: there is evidence for the existence of circularly polarized light originating from Mie scattering on aligned interstellar dust particles which may trigger the formation of an enantiomeric excess within chiral material in space. Interstellar and near-stellar magnetic fields can align dust particles in this fashion. Another speculation (the Vester-Ulbricht hypothesis) suggests that fundamental chirality of physical processes such as that of the beta decay (see Parity violation) leads to slightly different half-lives of biologically relevant molecules. 
Chance theories Chance theories are based on the assumption that "Absolute asymmetric synthesis, i.e., the formation of enantiomerically enriched products from achiral precursors without the intervention of chiral chemical reagents or catalysts, is in practice unavoidable on statistical grounds alone". Consider the racemic state as a macroscopic property described by a binomial distribution; the experiment of tossing a coin, where the two possible outcomes are the two enantiomers, is a good analogy. The discrete probability distribution of obtaining $n$ successes out of $N$ Bernoulli trials, where the result of each Bernoulli trial occurs with probability $p$ and the opposite occurs with probability $q = 1 - p$, is given by: $P(n) = \binom{N}{n} p^n q^{N-n}$. The discrete probability distribution of having exactly $L$ molecules of one chirality and $D = N - L$ of the other is given by: $P(L) = \binom{N}{L} p^L q^{N-L}$. As in the experiment of tossing a coin, in this case we assume both events ($L$ or $D$) to be equiprobable, $p = q = 1/2$. The probability of having exactly the same amount of both enantiomers is inversely proportional to the square root of the total number of molecules $N$: $P(L = N/2) \approx \sqrt{2/(\pi N)}$. For one mole of a racemic compound, $N \approx 6 \times 10^{23}$ molecules, this probability becomes $P \approx 10^{-12}$. The probability of finding the racemic state is so small that we can consider it negligible. In this scenario, the initial stochastic enantiomeric excess must be amplified through some efficient mechanism of amplification. The most likely path for this amplification step is asymmetric autocatalysis. An autocatalytic chemical reaction is one in which the reaction product is itself a reactant; in other words, a chemical reaction is autocatalytic if the reaction product is itself the catalyst of the reaction. In asymmetric autocatalysis the catalyst is a chiral molecule, which means that a chiral molecule catalyses its own production. An initial enantiomeric excess, such as can be produced by polarized light, then allows the more abundant enantiomer to outcompete the other. Amplification Theory In 1953, Charles Frank proposed a model to demonstrate that homochirality is a consequence of autocatalysis. In his model the L and D enantiomers of a chiral molecule are autocatalytically produced from an achiral molecule A (A + L → 2L, A + D → 2D) while suppressing each other through a reaction that he called mutual antagonism (L + D → inert products). In this model the racemic state is unstable, in the sense that the slightest enantiomeric excess will be amplified to a completely homochiral state. This can be shown by computing the reaction rates from the law of mass action: $\frac{d[L]}{dt} = k_a [A][L] - k_i [L][D]$ and $\frac{d[D]}{dt} = k_a [A][D] - k_i [L][D]$, where $k_a$ is the rate constant for the autocatalytic reactions, $k_i$ is the rate constant for the mutual antagonism reaction, and the concentration of A is kept constant for simplicity. Subtracting the two equations gives $[L] - [D] = ([L]_0 - [D]_0)\, e^{k_a [A] t}$, while $\frac{d}{dt}\ln\frac{[L]}{[D]} = k_i([L] - [D])$, so the ratio $[L]/[D]$ increases at a more than exponential rate if $[L]_0 - [D]_0$ is positive (and vice versa). Every starting condition different from $[L]_0 = [D]_0$ leads to one of the asymptotes $[L]/[D] \to \infty$ or $[L]/[D] \to 0$. Thus the equality of $[L]_0$ and $[D]_0$, and so of $[L]$ and $[D]$, represents a condition of unstable equilibrium, this result depending on the presence of the term representing mutual antagonism. By defining the enantiomeric excess as $ee = \frac{[L]-[D]}{[L]+[D]}$, the rate of change of the enantiomeric excess can be calculated using the chain rule from the rates of change of the concentrations of the enantiomers L and D. Linear stability analysis of this equation shows that the racemic state $ee = 0$ is unstable: starting from almost everywhere in the concentration space, the system evolves to a homochiral state. 
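The instability derived above can be checked numerically. The following is a minimal sketch of Frank's rate equations; the rate constants k_a and k_i, the fixed feedstock concentration [A], the time span, and the initial 0.1% excess of L over D are all illustrative assumptions invented for the demonstration, not values from the literature.

```python
# Minimal numerical sketch of Frank's (1953) model: autocatalysis plus
# mutual antagonism. All rate constants and initial concentrations are
# illustrative assumptions chosen only to display the instability of
# the racemic state.
from scipy.integrate import solve_ivp

k_a = 1.0  # rate constant of the autocatalytic steps A + L -> 2L, A + D -> 2D
k_i = 1.0  # rate constant of the mutual-antagonism step L + D -> inert
A = 1.0    # concentration of the achiral feedstock, held constant

def frank(t, y):
    L, D = y
    dL = k_a * A * L - k_i * L * D
    dD = k_a * A * D - k_i * L * D
    return [dL, dD]

# Start from a tiny stochastic excess of L over D (0.1%).
y0 = [1.001e-3, 1.000e-3]
sol = solve_ivp(frank, (0.0, 20.0), y0, max_step=0.1)

L, D = sol.y[0], sol.y[1]
ee = (L - D) / (L + D)  # enantiomeric excess
print(f"initial ee = {ee[0]:.4f}, final ee = {ee[-1]:.4f}")
# The tiny imbalance is amplified toward ee -> 1 (homochirality); an
# exactly racemic start (L = D) sits on the unstable equilibrium.
```

Starting from an enantiomeric excess of only 0.0005, the integration drives ee essentially to 1, which is the amplification behaviour described above.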
It is generally understood that autocatalysis alone does not yield homochirality, and that the mutually antagonistic relationship between the two enantiomers is necessary for the instability of the racemic mixture. However, recent studies show that homochirality could be achieved from autocatalysis in the absence of the mutually antagonistic relationship, although the underlying mechanism for symmetry breaking is different. Experiments There are several laboratory experiments that demonstrate how a small amount of one enantiomer at the start of a reaction can lead to a large excess of a single enantiomer as the product. For example, the Soai reaction is autocatalytic. If the reaction is started with some of one of the product enantiomers already present, the product acts as an enantioselective catalyst for production of more of that same enantiomer. The initial presence of just 0.2 equivalents of one enantiomer can lead to up to 93% enantiomeric excess of the product. Another study concerns the proline-catalyzed aminoxylation of propionaldehyde by nitrosobenzene. In this system, a small enantiomeric excess of catalyst leads to a large enantiomeric excess of product. Serine octamer clusters are also contenders. These clusters of eight serine molecules appear in mass spectrometry with an unusual homochiral preference; however, there is no evidence that such clusters exist under non-ionizing conditions, and amino acid phase behavior is far more prebiotically relevant. The recent observation that partial sublimation of a 10% enantioenriched sample of leucine results in up to 82% enrichment in the sublimate shows that enantioenrichment of amino acids could occur in space. Partial sublimation processes can take place on the surfaces of meteors, where large variations in temperature exist. This finding may have consequences for the development of the Mars Organic Detector, scheduled for launch in 2013, which aims to recover trace amounts of amino acids from the Martian surface by exactly such a sublimation technique. A high asymmetric amplification of the enantiomeric excess of sugars is also present in the amino acid catalyzed asymmetric formation of carbohydrates. One classic study involves a crystallization experiment. When sodium chlorate is allowed to crystallize from water and the collected crystals are examined in a polarimeter, each crystal turns out to be chiral, either the L form or the D form. In an ordinary experiment the amount of L crystals collected equals the amount of D crystals (corrected for statistical effects). However, when the sodium chlorate solution is stirred during the crystallization process, the crystals are either exclusively L or exclusively D. In 32 consecutive crystallization experiments, 14 delivered D-crystals and the other 18 L-crystals. The explanation for this symmetry breaking is unclear but is related to autocatalysis taking place in the nucleation process. In a related experiment, a continuously stirred crystal suspension of a racemic amino acid derivative results in a 100% crystal phase of one of the enantiomers, because the enantiomeric pair is able to equilibrate in solution (compare with dynamic kinetic resolution). Transmission Once a significant enantiomeric enrichment has been produced in a system, chirality is commonly transferred through the entire system. This last step is known as the chiral transmission step. Many strategies in asymmetric synthesis are built on chiral transmission. 
Especially important is the organocatalysis of organic reactions by proline, for example in Mannich reactions. Some proposed models for the transmission of chiral asymmetry are polymerization, epimerization and copolymerization. Optical resolution in racemic amino acids There exists no theory elucidating correlations among L-amino acids. If one takes, for example, alanine, which has a small methyl group, and phenylalanine, which has a larger benzyl group, a simple question is in what respect L-alanine resembles L-phenylalanine more than D-phenylalanine, and what kind of mechanism causes the selection of all L-amino acids; it might equally have been possible for alanine to be L and phenylalanine D. It was reported in 2004 that excess racemic D,L-asparagine (Asn), which spontaneously forms crystals of either isomer during recrystallization, induces asymmetric resolution of a co-existing racemic amino acid such as arginine (Arg), aspartic acid (Asp), glutamine (Gln), histidine (His), leucine (Leu), methionine (Met), phenylalanine (Phe), serine (Ser), valine (Val), tyrosine (Tyr), and tryptophan (Trp). The enantiomeric excess of these amino acids was correlated almost linearly with that of the inducer, i.e., Asn. When recrystallizations from a mixture of 12 D,L-amino acids (Ala, Asp, Arg, Glu, Gln, His, Leu, Met, Ser, Val, Phe, and Tyr) and excess D,L-Asn were made, all amino acids with the same configuration as Asn were preferentially co-crystallized. Whether the enrichment took place in L- or D-Asn was incidental; however, once the selection was made, the co-existing amino acid with the same configuration at the α-carbon was preferentially involved because of the thermodynamic stability of the crystal formation. The maximal ee was reported to be 100%. Based on these results, it is proposed that a mixture of racemic amino acids causes spontaneous and effective optical resolution, even if asymmetric synthesis of a single amino acid does not occur without the aid of an optically active molecule. This is the first study reasonably elucidating the formation of chirality from racemic amino acids with experimental evidence. History of term The term was introduced by Kelvin in 1904, the year he published his Baltimore Lecture of 1884. Kelvin used the term homochirality as a relationship between two molecules, i.e., two molecules are homochiral if they have the same chirality. Recently, however, homochiral has been used in the same sense as enantiomerically pure. This is permitted in some journals (but not encouraged); in these journals its meaning changes into the preference of a process or system for a single optical isomer in a pair of isomers. See also Chiral life concept - of artificially synthesizing a chiral-mirror version of life CIP system Stereochemistry Pfeiffer Effect Unsolved problems in chemistry References Further reading External links Observations Support Homochirality Theory. Photonics TechnologyWorld November 1998. Origins of Homochirality. Conference in Nordita Stockholm, February 2008. Origin of life Stereochemistry Pharmacology
Homochirality
[ "Physics", "Chemistry", "Biology" ]
3,714
[ "Pharmacology", "Origin of life", "Biochemistry", "Stereochemistry", "Chirality", "Space", "Medicinal chemistry", "Asymmetry", "nan", "Spacetime", "Symmetry", "Biological hypotheses" ]
2,169,417
https://en.wikipedia.org/wiki/Walter%20M.%20Elsasser
Walter Maurice Elsasser (March 20, 1904 – October 14, 1991) was a German-born American physicist, a developer of the presently accepted dynamo theory as an explanation of the Earth's magnetism. He proposed that this magnetic field resulted from electric currents induced in the fluid outer core of the Earth. He revealed the history of the Earth's magnetic field by the study of the magnetic orientation of minerals in rocks. He is also noted for his unpublished proposal of the wave-like diffraction of electron particles by a crystal. The subsequent Davisson–Germer experiment showing this effect led to a Nobel Prize in Physics. Between 1962 and 1968 he was a Professor of Geophysics at Princeton University. Between 1975 and 1991 he was an adjunct Professor of Geophysics at Johns Hopkins University. Olin Hall at Johns Hopkins University has a Walter Elsasser Memorial in the lobby. Biography Elsasser was born in 1904 to a Jewish family in Mannheim, Germany. Before he became known for his geodynamo theory, while in Göttingen during the 1920s, he had suggested the experiment to test the wave aspect of electrons. This suggestion of Elsasser's was later communicated by his senior colleague from Göttingen (Nobel Prize recipient Max Born) to physicists in England. It explained the results of the Davisson–Germer and Thomson experiments, which were later awarded the Nobel Prize in Physics. In 1935, while working in Paris, Elsasser calculated the binding energies of protons and neutrons in heavy radioactive nuclei. Eugene Wigner, J. Hans D. Jensen and Maria Goeppert-Mayer received the Nobel Prize in 1963 for work developing out of Elsasser's initial formulation. Elsasser therefore came quite close to a Nobel Prize on two occasions. During 1946–47, Elsasser published papers describing the first mathematical model for the origin of the Earth's magnetic field. He conjectured that it could be a self-sustaining dynamo, powered by convection in the liquid outer core, and described a possible feedback mechanism between flows having two different geometries, toroidal and poloidal (indeed, inventing the terms). This had been developed from about 1941 onwards, partly in his spare time during his scientific war service with the U.S. Army Signal Corps. During his later years, Elsasser became interested in what is now called systems biology and contributed a series of articles to the Journal of Theoretical Biology. The final version of his thoughts on this subject can be found in his book Reflections on a Theory of Organisms, published in 1987 and again posthumously with a new foreword by Harry Rubin in 1998. Elsasser died in 1991 in Baltimore, Maryland, US. Biotonic laws A biotonic law, a phrase invented by Elsasser, is a principle of nature which is not contained in the principles of physics. Biotonic laws may also be considered as local instances of global organismic or organismal principles, such as the Organismic Principle of Natural Selection. Some, but not all, of Elsasser's theoretical biology work is still quite controversial, and in fact may disagree with several of the basic tenets of current systems biology that he may have helped to develop. Basic to Elsasser's biological thought is the notion of the great complexity of the cell. Elsasser deduced from this that any investigation of a causative chain of events in a biological system will reach a "terminal point", where the number of possible inputs into the chain will overwhelm the capacity of the scientist to make predictions, even with the most powerful computers. 
This might seem like a counsel of despair, but in fact Elsasser was not suggesting the abandonment of biology as a worthwhile research topic; rather, he was arguing for a different kind of biology in which molecular causal chains are no longer the main focus of study, and correlations between supra-molecular events become the main data source. Moreover, the heterogeneity of logical classes encompassed by all biological organisms without exception is an important part of Elsasser's legacy to both Complex systems biology and Relational Biology. Awards Elsasser was elected to the National Academy of Sciences in 1957. From the American Geophysical Union he received the William Bowie Medal, its highest honor, in 1959, and the John Adam Fleming Medal (for contributions to geomagnetism) in 1971. He received the Penrose Medal from the Geological Society of America in 1979 and the Gauss Medal from Germany in 1977. In 1987, he was awarded the USA's National Medal of Science "for his fundamental and lasting contributions to physics, meteorology, and geophysics in establishing quantum mechanics, atmospheric radiation transfer, planetary magnetism and plate tectonics." See also Complex system biology List of geophysicists Mathematical and theoretical biology References Further reading Beyler R & Gatherer D (2007) Walter Elsasser (biography). In: Dictionary of Scientific Biography, new ed. New York: Charles Scribner's Sons Inc. External links Oral history interview transcript with Walter Elsasser on 29 May 1962, American Institute of Physics, Niels Bohr Library & Archives Oral history interview transcript with Walter Elsasser on 21 November 1985, American Institute of Physics, Niels Bohr Library & Archives Elsasser's photo Biography 1904 births 1991 deaths 20th-century American physicists 20th-century German physicists Jewish emigrants from Nazi Germany to the United States Scientists from Mannheim People from the Grand Duchy of Baden Systems biologists Theoretical biologists Heidelberg University alumni Ludwig Maximilian University of Munich alumni University of Göttingen alumni National Medal of Science laureates Scientists from Baltimore Jewish American physicists Fellows of the American Physical Society Jewish German physicists
Walter M. Elsasser
[ "Biology" ]
1,136
[ "Bioinformatics", "Theoretical biologists" ]
2,169,754
https://en.wikipedia.org/wiki/Charge%20pump
A charge pump is a kind of DC-to-DC converter that uses capacitors for energy storage and transfer to raise or lower voltage. Charge-pump circuits are capable of high efficiencies, sometimes as high as 90–95%, while being electrically simple circuits. Description Charge pumps use some form of switching device to control the connection of a supply voltage across a load through a capacitor, in a two-stage cycle. In the first stage a capacitor is connected across the supply, charging it to that same voltage. In the second stage the circuit is reconfigured so that the capacitor is in series with the supply and the load. This doubles the voltage across the load - the sum of the original supply and the capacitor voltages (an idealized numerical sketch of this cycle is given at the end of this article). The pulsing nature of the higher voltage switched output is often smoothed by the use of an output capacitor. An external or secondary circuit drives the switching, typically at tens of kilohertz up to several megahertz. The high frequency minimizes the amount of capacitance required, as less charge needs to be stored and dumped in a shorter cycle. Charge pumps can double voltages, triple voltages, halve voltages, invert voltages, fractionally multiply or scale voltages (by ratios such as 3/2 or 2/3) and generate arbitrary voltages by quickly alternating between modes, depending on the controller and circuit topology. They are commonly used in low-power electronics (such as mobile phones) to raise and lower voltages for different parts of the circuitry - minimizing power consumption by controlling supply voltages carefully. Terminology for PLL The term charge pump is also commonly used in phase-locked loop (PLL) circuits, even though there is no pumping action involved, unlike in the circuit discussed above. A PLL charge pump is merely a bipolar switched current source. This means that it can output positive and negative current pulses into the loop filter of the PLL. It cannot produce higher or lower voltages than its power and ground supply levels. Applications A common application for charge-pump circuits is in RS-232 level shifters, where they are used to derive positive and negative voltages (often +10 V and −10 V) from a single 5 V or 3 V power supply rail. Charge pumps can also be used as LCD or white-LED drivers, generating high bias voltages from a single low-voltage supply, such as a battery. Charge pumps are extensively used in NMOS memories and microprocessors to generate a negative voltage "VBB" (about −3 V), which is connected to the substrate. This guarantees that all N+-to-substrate junctions are reverse-biased by 3 V or more, decreasing junction capacitance and increasing circuit speed. A charge pump providing a negative voltage spike has been used in NES-compatible games not licensed by Nintendo in order to stun the Nintendo Entertainment System lockout chip. As of 2007, charge pumps are integrated into nearly all EEPROM and flash-memory integrated circuits. These devices require a high-voltage pulse to "clean out" any existing data in a particular memory cell before it can be written with a new value. Early EEPROM and flash-memory devices required two power supplies: +5 V (for reading) and +12 V (for erasing). More recent commercially available flash memory and EEPROM memory require only one external power supply – generally 1.8 V or 3.3 V. A higher voltage, used to erase cells, is generated internally by an on-chip charge pump. Charge pumps are used in H bridges in high-side drivers for gate-driving high-side n-channel power MOSFETs and IGBTs. 
When the centre of a half bridge goes low, the capacitor is charged through a diode, and this charge is later used to drive the gate of the high-side FET a few volts above the source voltage so as to switch it on. This strategy works well provided the bridge is switched regularly; it avoids the complexity of running a separate power supply and permits the more efficient n-channel devices to be used for both switches. This circuit (requiring the periodic switching of the high-side FET) may also be called a "bootstrap" circuit, and some would differentiate between that and a charge pump (which would not require such switching). High-voltage vertical deflection signal generation for cathode-ray tube (CRT) monitors is done for example with the TDA1670A integrated circuit. To achieve maximum deflection, a CRT coil needs around 50 V. Using a charge pump voltage doubler from an existing 24 V supply eliminates the need for another supply voltage. Higher-power fast charge solutions for mobile devices rely on a charge pump instead of a buck converter to reduce the voltage, as higher efficiency reduces heat generation. The Samsung Galaxy S23, which takes an input current of 3 A, can charge its internal battery packs at 6 A thanks to a 2:1 charge pump. Oppo's 240 W SUPERVOOC goes further and uses three charge pumps in parallel (98% claimed efficiency) to go from 24 V/10 A to 10 V/24 A, which is then taken by two parallel battery packs. See also Cockcroft–Walton generator Voltage multiplier Switched capacitor Charge transfer switch Voltage doubler References Applying the equivalent resistor concept to calculating the power losses in the charge pumps Charge pumps where the voltages across the capacitors follow the binary number system External links Charge Pump, inductorless, Voltage Regulators On-chip High-Voltage Generator Design Charge Pump DC/DC Converters. Applications, circuits and solutions using inductorless (charge pump) dc/dc converters. DC/DC Conversion without Inductors. General description of charge pump operation; example applications using Maxim controllers. Charge pump circuits overview. https://picture.iczhiku.com/resource/eetop/wYkRpwFlHrpHWnNB.pdf Tutorial by G. Palumbo and D. Pappalardo Electric power conversion Voltage regulation
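The two-stage doubling cycle described in this article can be illustrated with a small discrete-time model. This is a minimal sketch under idealized assumptions (lossless switches, no diode drops); every component value below is invented for illustration rather than taken from any datasheet.

```python
# Idealized discrete-time sketch of the two-stage charge-pump cycle
# described above, configured as a voltage doubler. Switches are treated
# as lossless and diode drops are ignored; all component values are
# illustrative assumptions.
import math

V_S = 5.0        # supply voltage (V)
C_FLY = 1e-6     # flying (pump) capacitor (F)
C_OUT = 10e-6    # output smoothing capacitor (F)
R_LOAD = 10e3    # load resistance (ohm)
F_SW = 50e3      # switching frequency (Hz)
T = 1.0 / F_SW   # duration of one full two-stage cycle (s)

v_out = 0.0
for _ in range(2000):
    # Stage 1: flying capacitor connected across the supply.
    v_fly = V_S
    # Stage 2: flying capacitor stacked in series with the supply and
    # connected to the output node; charge is shared with C_OUT.
    v_out = (C_FLY * (V_S + v_fly) + C_OUT * v_out) / (C_FLY + C_OUT)
    # The load then drains the output capacitor for the rest of the
    # cycle (simple exponential discharge through R_LOAD).
    v_out *= math.exp(-T / (R_LOAD * C_OUT))

print(f"steady-state output ~ {v_out:.2f} V (ideal doubler target {2 * V_S:.1f} V)")
```

With these values the output settles just below the ideal 10 V; a heavier load or a smaller flying capacitor increases the droop, which is why practical charge pumps are characterized by an equivalent output resistance.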
Charge pump
[ "Physics" ]
1,280
[ "Voltage", "Physical quantities", "Voltage regulation" ]
2,169,841
https://en.wikipedia.org/wiki/British%20Plant%20Communities
British Plant Communities is a five-volume work, edited by John S. Rodwell and published by Cambridge University Press, which describes the plant communities that comprise the British National Vegetation Classification. Its coverage includes all native vegetation communities, and some artificial ones, of Great Britain, excluding Northern Ireland. The series is a major contribution to plant conservation in Great Britain and, as such, covers material appropriate for professionals and amateurs interested in the conservation of native plant communities. Each book begins with an introduction to the techniques used to survey the particular vegetation types within its scope, discussing sampling, the type of data collected, organization of the data, and analysis of the data. Each community is discussed with an overall emphasis on the ecology of the community, so that users can consider the relationships of various plant communities to each other as a function of, for example, climatic or soil conditions. The five volumes are: British Plant Communities Volume 1 – Woodlands and Scrub This volume was first published in 1991 in hardback () and in 1998 in paperback () British Plant Communities Volume 2 – Mires and Heaths This volume was first published in 1991 in hardback () and in 1998 in paperback () British Plant Communities Volume 3 – Grasslands and Montane Communities This volume was first published in 1992 in hardback () and in 1998 in paperback () British Plant Communities Volume 4 – Aquatic Communities, Swamps and Tall-herb Fens This volume was first published in 1995 in hardback () and in 1998 in paperback () British Plant Communities Volume 5 – Maritime Communities and Vegetation of Open Habitats This volume was first published in 2000 in both hardback () and paperback () Errors The following is a list of errors found in the published books: In Volume 1, on pages 38–39, the branches leading from couplets 22 and 23 should read W12, not W14 In Volume 3, on page 117, the first branch in couplet 3 should lead to couplet 4, to enable the subcommunities of CG3 to be discerned In Volume 3, page 284, the Juncus trifidus – Racomitrium lanuginosum community is referred to as community U11, whereas it is in fact community U9. In Volume 4, page 19, the "free-floating or rooted and submerged pondweed vegetation" group of aquatic communities is listed as having seven constituent communities, when in fact it has eight. In Volume 5, page 20, the "middle saltmarsh" grouping is listed as having eight constituent communities, when in fact it has nine. In Volume 5, pages 142–143, there is a discrepancy between the stated distribution of community SD3 ("confined to Scotland") and the mapped distribution, which shows the community also occurring in Lancashire and Cumbria. In Volume 5, page 482, Sedum anglicum is listed as a constituent species of community OV33; however, it is not present in the floristic table for this community on page 437 External links Cambridge University Press Ecology of the British Isles Florae (publication) Botany books British non-fiction books Books about the United Kingdom
British Plant Communities
[ "Biology" ]
641
[ "Flora", "British National Vegetation Classification communities", "Florae (publication)", "British National Vegetation Classification" ]
2,169,855
https://en.wikipedia.org/wiki/Carbon%20dioxide%20%28data%20page%29
This page provides supplementary chemical data on carbon dioxide. Material Safety Data Sheet The handling of this chemical may incur notable safety precautions. It is highly recommended that you seek the Material Safety Datasheet (MSDS) for this chemical from a reliable source such as SIRI, and follow its directions. MSDS for solid carbon dioxide is available from Pacific Dry Ice, Inc. Structure and properties Thermodynamic properties Solubility in water at various temperatures ‡Second column of table indicates solubility at each given temperature in volume of CO2 as it would be measured at 101.3 kPa and 0 °C per volume of water. The solubility is given for "pure water", i.e., water which contains only CO2. Such water is acidic; for example, at 25 °C a pH of 3.9 is expected (see carbonic acid). At less acidic pH values, the solubility will increase because of the pH-dependent speciation of CO2. Vapor pressure of solid and liquid Table data obtained from CRC Handbook of Chemistry and Physics, 44th ed. Annotation "(s)" indicates equilibrium temperature of vapor over solid; otherwise the temperature is the equilibrium of vapor over liquid. For kPa values, where the datum is a whole number of atmospheres, exact kPa values are given; elsewhere, 2 significant figures are derived from mm Hg data. Phase diagram Liquid/vapor equilibrium thermodynamic data The table below gives thermodynamic data of liquid CO2 in equilibrium with its vapor at various temperatures. Heat content data, heat of vaporization, and entropy values are relative to the liquid state at 0 °C temperature and 3483 kPa pressure. To convert heat values to joules per mole, multiply by 44.095 g/mol. To convert densities to moles per liter, multiply by 22.678 cm3·mol/(L·g). Data obtained from CRC Handbook of Chemistry and Physics, 44th ed., pages 2560–2561, except for the critical temperature line (31.1 °C) and temperatures −30 °C and below, which are taken from Lange's Handbook of Chemistry, 10th ed., page 1463. Spectral data Notes References Carbon dioxide Chemical data pages Chemical data pages cleanup
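The two conversion factors quoted above can be wrapped in a pair of one-line helpers. This is a minimal sketch; the sample inputs are placeholders, not values copied from the tables.

```python
# Sketch of the unit conversions stated above for the liquid/vapor
# equilibrium table. The sample input values are placeholders, not
# entries taken from the table itself.

M_CO2 = 44.095  # molar mass of CO2 (g/mol), as given above

def heat_per_gram_to_per_mole(value_j_per_g):
    """Convert a heat value from J/g to J/mol (multiply by 44.095 g/mol)."""
    return value_j_per_g * M_CO2

def density_to_molarity(density_g_per_cm3):
    """Convert a density in g/cm3 to mol/L (multiply by 1000/44.095)."""
    return density_g_per_cm3 * 1000.0 / M_CO2  # = 22.678 cm3*mol/(L*g)

print(heat_per_gram_to_per_mole(350.0))  # e.g. 350 J/g  -> ~15,433 J/mol
print(density_to_molarity(0.77))         # e.g. 0.77 g/cm3 -> ~17.5 mol/L
```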
Carbon dioxide (data page)
[ "Chemistry" ]
482
[ "Greenhouse gases", "Carbon dioxide", "Chemical data pages", "nan" ]
2,170,017
https://en.wikipedia.org/wiki/Energy%20Policy%20Act%20of%202005
The Energy Policy Act of 2005 () is a federal law signed by President George W. Bush on August 8, 2005, at Sandia National Laboratories in Albuquerque, New Mexico. The act, described by proponents as an attempt to combat growing energy problems, changed US energy policy by providing tax incentives and loan guarantees for energy production of various types. The most consequential aspect of the law was to greatly increase ethanol production to be blended with gasoline. The law also repealed the Public Utility Holding Company Act of 1935, effective February 2006. Provisions General provisions The Act increases the amount of biofuel (usually ethanol) that must be mixed with gasoline sold in the United States to 4 billion US gallons by 2006, 6.1 billion US gallons by 2009 and 7.5 billion US gallons by 2012; two years later, the Energy Independence and Security Act of 2007 extended the target to 36 billion US gallons by 2022. Under an amendment in the American Recovery and Reinvestment Act of 2009, Section 406, the Energy Policy Act of 2005 authorizes loan guarantees for innovative technologies that avoid greenhouse gases, which might include advanced nuclear reactor designs, such as pebble bed modular reactors (PBMRs), as well as carbon capture and storage and renewable energy; It seeks to increase coal as an energy source while also reducing air pollution, through authorizing $200 million annually for clean coal initiatives, repealing the current cap on coal leases, allowing the advanced payment of royalties from coal mines and requiring an assessment of coal resources on federal lands that are not national parks; It authorizes tax credits for wind and other alternative energy producers; It adds ocean energy sources, including wave and tidal power, for the first time as separately identified renewable technologies; It authorizes $50 million annually over the life of the law for biomass grants; It includes provisions aimed at making geothermal energy more competitive with fossil fuels in generating electricity; It requires the Department of Energy to: Study and report on existing natural energy resources including wind, solar, waves and tides; Study and report on national benefits of demand response and make a recommendation on achieving specific levels of benefits and encourages time-based pricing and other forms of demand response as a policy decision; Designate National Interest Electric Transmission Corridors where there are significant transmission limitations adversely affecting the public (the Federal Energy Regulatory Commission may authorize federal permits for transmission projects in these regions); Report in one year on how to dispose of high-level nuclear waste; It authorizes the Department of the Interior to grant leases for activity that involves the production, transportation or transmission of energy on the Outer Continental Shelf lands from sources other than gas and oil (Section 388); It requires all public electric utilities to offer net metering on request to their customers; It prohibits the manufacture and importation of mercury-vapor lamp ballasts after January 1, 2008; It provides tax breaks for those making energy conservation improvements to their homes; It provides incentives to companies to drill for oil in the Gulf of Mexico; It exempts oil and gas producers from certain requirements of the Safe Drinking Water Act; It extends daylight saving time by four to five weeks, depending upon the year (see below); It requires that no drilling for gas or oil may be done in or underneath the Great Lakes; It requires that the Federal Fleet vehicles capable of 
operating on alternative fuels be operated on these fuels exclusively (Section 701); It sets federal reliability standards regulating the electrical grid (done in response to the 2003 North America blackout); It includes nuclear-specific provisions; It extends the Price-Anderson Nuclear Industries Indemnity Act through 2025; It authorizes cost-overrun support of up to $2 billion total for up to six new nuclear power plants; It authorizes a production tax credit of up to $125 million total a year, estimated at 1.8 US¢/kWh during the first eight years of operation for the first 6,000 MW of capacity, consistent with renewables; It authorizes loan guarantees of up to 80% of project cost, to be repaid within 30 years or 90% of the project's life; It authorizes $2.95 billion for R&D and the building of an advanced hydrogen cogeneration reactor at Idaho National Laboratory; It authorizes 'standby support' for new reactor delays that offsets the financial impact of delays beyond the industry's control for the first six reactors, including 100% coverage of the first two plants with up to $500 million each and 50% of the cost of delays for plants three through six with up to $350 million each; It allows nuclear plant employees and certain contractors to carry firearms; It prohibits the sale, export or transfer of nuclear materials and "sensitive nuclear technology" to any state sponsor of terrorist activities; It updates tax treatment of decommissioning funds; The law exempted fluids used in the natural gas extraction process of hydraulic fracturing (fracking) from protections under the Clean Air Act, Clean Water Act, Safe Drinking Water Act, and CERCLA ("Superfund"). It directs the Secretary of the Interior to complete a programmatic environmental impact statement for a commercial leasing program for oil shale and tar sands resources on public lands with an emphasis on the most geologically prospective lands within each of the states of Colorado, Utah, and Wyoming. Tax reductions by subject area $4.3 billion for nuclear power $2.8 billion for fossil fuel production $2.7 billion to extend the renewable electricity production credit $1.6 billion in tax incentives for investments in "clean coal" facilities $1.3 billion for energy conservation and efficiency $1.3 billion for alternative fuel vehicles and fuels (bioethanol, biomethane, liquified natural gas, propane) $500 million Clean Renewable Energy Bonds (CREBS) for government agencies for renewable energy projects. Change to daylight saving time The law amended the Uniform Time Act of 1966 by changing the start and end dates of daylight saving time, beginning in 2007. Clocks were set ahead one hour on the second Sunday of March (March 11, 2007) instead of on the first Sunday of April (April 1, 2007). Clocks were set back one hour on the first Sunday of November (November 4, 2007), rather than on the last Sunday of October (October 28, 2007). This had the net effect of slightly lengthening the duration of daylight saving time. Lobbyists for this provision included the Sporting Goods Manufacturers Association, the National Association of Convenience Stores, and the National Retinitis Pigmentosa Foundation Fighting Blindness. Lobbyists against this provision included the U.S. Conference of Catholic Bishops, the United Synagogue of Conservative Judaism, the National Parent-Teacher Association, the Calendaring and Scheduling Consortium, the Edison Electric Institute, and the Air Transport Association. 
This section of the act is controversial; some have questioned whether daylight saving results in net energy savings. Commercial building deduction The Act created the Energy Efficient Commercial Buildings Tax Deduction, a special financial incentive designed to reduce the initial cost of investing in energy-efficient building systems via an accelerated tax deduction under section §179D of the Internal Revenue Code (IRC). Many building owners are unaware that the Energy Policy Act of 2005 includes a tax deduction (§179D) for investments in "energy efficient commercial building property" designed to significantly reduce the heating, cooling, water heating and interior lighting cost of new or existing commercial buildings placed into service between January 1, 2006 and December 31, 2013. §179D includes full and partial tax deductions for investments in energy efficient commercial buildings that are designed to increase the efficiency of energy-consuming functions: up to $0.60 per square foot for lighting, $0.60 for HVAC and $0.60 for the building envelope, creating a potential deduction of $1.80 per square foot. Interior lighting may also be improved using the Interim Lighting Rule, which provides a simplified process to earn the deduction, capped at $0.30–$0.60 per square foot. Improvements are compared to a baseline of ASHRAE 2001 standards. To obtain these benefits, the facilities/energy division of a business, its tax department, and a firm specializing in EPAct 179D deductions needed to cooperate. IRS-mandated software had to be used and an independent third party had to certify the qualification. For municipal buildings, benefits were passed through to the primary designers/architects in an attempt to encourage innovative municipal design. The Commercial Buildings Tax Deduction expiration date had been extended twice, last by the Energy Improvement and Extension Act of 2008. With this extension, the CBTD could be claimed for qualifying projects completed before January 1, 2014. Energy management The commercial building tax deductions could be used to improve the payback period of a prospective energy improvement investment. The deductions could be combined by participating in demand response programs, where building owners agree to curtail usage at peak times for a premium. The most common qualifying projects were in the area of lighting. Energy savings Summary of Energy Savings Percentages Provided by IRS Guidance Percentages permitted under Notice 2006-52 (effective for property placed in service January 1, 2006 – December 31, 2008): Interior Lighting Systems 16⅔%, Heating, Cooling, Ventilation, and Hot Water Systems 16⅔%, Building Envelope 16⅔%. Percentages permitted under Notice 2008-40 (effective for property placed in service January 1, 2006 – December 31, 2013): Interior Lighting Systems 20%, Heating, Cooling, Ventilation, and Hot Water Systems 20%, Building Envelope 10%. Percentages permitted under Notice 2012-22: Interior Lighting Systems 25%, Heating, Cooling, Ventilation, and Hot Water Systems 15%, Building Envelope 10%. Effective date of Notice 2012-22 – December 31, 2013; if §179D is extended beyond December 31, 2013, it is also effective (except as otherwise provided in an amendment of §179D or the guidance thereunder) during the period of the extension. Cost estimate The Congressional Budget Office (CBO) review of the conference version of the bill estimated the Act would increase direct spending by $2.2 billion over the 2006–2010 period, and by $1.6 billion over the 2006–2015 period. 
The CBO did not attempt to estimate additional effects on discretionary spending. The CBO and the Joint Committee on Taxation estimated that the legislation would reduce revenues by $7.9 billion over the 2005–2010 period and by $12.3 billion over the 2005–2015 period. Support The collective reduction in national consumption of energy (gas and electricity) is significant for home heating. The Act provided tangible financial incentives (tax credits) for average homeowners to make environmentally positive changes to their homes. It made improvements to home energy use more affordable for walls, doors, windows, roofs, water heaters, etc. Consumer spending, and hence the national economy, was abetted. Industry grew for the manufacture of these environmentally positive improvements. These positive improvements have been near- and long-term in effect. The collective reduction in national consumption of oil is significant for automotive vehicles. The Act provided tangible financial incentives (tax credits) for operators of hybrid vehicles. It helped fuel competition among auto makers to meet rising demand for fuel-efficient vehicles. Consumer spending, and hence the national economy, was abetted. Dependence on imported oil was reduced. The national trade deficit was improved. Industry grew for the manufacture of these environmentally positive improvements. These positive improvements have been near- and long-term in effect. Criticism The Washington Post contended that the spending bill was a broad collection of subsidies for United States energy companies; in particular, the nuclear and oil industries. Speaking for the National Republicans for Environmental Protection Association, President Martha Marks said that the organization was disappointed in the law because it did not support conservation enough, and continued to subsidize the well-established oil and gas industries that did not require subsidizing. The law did not include provisions for drilling in the Arctic National Wildlife Refuge (ANWR); some Republicans claimed "access to the abundant oil reserves in ANWR would strengthen America's energy independence without harming the environment." Senator Hillary Clinton criticized Senator Barack Obama's vote for the bill in the 2008 Democratic Primary. Legislative history The Act was voted on and passed twice by the United States Senate, once prior to conference committee, and once after. In both cases, there were numerous senators who voted against the bill. John McCain, the Republican Party nominee for President of the United States in the 2008 election, voted against the bill. Democrat Barack Obama, President of the United States from January 2009 to January 2017, voted in favor of the bill. Provisions in the original bill that were not in the act Limited liability for producers of MTBE. Drilling for oil in the Arctic National Wildlife Refuge (ANWR). Increasing vehicle efficiency standards (CAFE). Requiring increased reliance on non-greenhouse gas-emitting energy sources similar to the Kyoto Protocol. To remove from 18 CFR Part 366.1 the definitions of "electric utility company" and exempt wholesale generator (EWG), that an EWG is not an electric utility company. Preliminary Senate vote June 28, 2005, 10:00 a.m. Yeas - 85, Nays - 12 Conference committee The bill's conference committee included 14 Senators and 51 House members. The senators on the committee were: Republicans Domenici, Craig, Thomas, Alexander, Murkowski, Burr, Grassley and Democrats Bingaman, Akaka, Dorgan, Wyden, Johnson, and Baucus. 
Final Senate vote July 29, 2005, 12:50 p.m. Yeas - 74, Nays - 26 See also Energy Policy Act of 1992 Public Utility Regulatory Policies Act (PURPA) of 1978 Demand response Energy crisis FutureGen, zero-emissions coal-fired power plant Hydrogen economy Internal Revenue Service Loan guarantee Nuclear Power 2010 Program Oil depletion Oil industry Power plant Price-Anderson Nuclear Industries Indemnity Act Public Utility Holding Company Act of 1935 Renewable energy in the United States Synthetic Liquid Fuels Act Energy policy of the United States References External links Government Energy Policy Act of 2005 as amended (PDF/details) in the GPO Statute Compilations collection Energy Policy Act of 2005 as enacted (details) in the US Statutes at Large on Congress.gov Department of Energy spotlight on the bill - listing consumer savings (tax breaks). Official News release and Allocution Bush / Albuquerque / 2005-08-08 Congressional Budget Office Cost Estimate for the bill conference agreement, July 27, 2005 Research Service summary Events GovEnergy Workshop and Trade Show News Christian Science Monitor: How Much New Oil? Not a Lot Boston Herald: Editorial Reuters: brief summary MSNBC: news story TaxPayer.net: How the Bill Passed – a view of the reasons for the bill's passage and its costs to taxpayers. See also: TaxPayer.net on Subsidies Yahoo! News: bill signing CNN: Bush: Energy bill effects will be long-term WashingtonWatch.com page on P.L. 109-58: The Energy Policy Act of 2005 InfoWorld.com Sustainable IT blog: New daylight saving time not so bright an idea – a criticism of the change to daylight saving time Non-profit Clean Fuels Ohio - This site focuses on alternative fuels as well as alt-fuels incentives created by the Energy Policy Act of 2005. 2005 in the environment United States federal energy legislation United States federal taxation legislation Energy policy Renewable energy law Acts of the 109th United States Congress Daylight saving time in the United States
Energy Policy Act of 2005
[ "Environmental_science" ]
3,139
[ "Environmental social science", "Energy policy" ]
2,170,476
https://en.wikipedia.org/wiki/George%20Wetherill
George Wetherill (August 12, 1925, Philadelphia, Pennsylvania – July 19, 2006, Washington, D.C.) was a physicist and geologist and the director emeritus of the Department of Terrestrial Magnetism at the Carnegie Institution of Washington, DC, US. In 2000, Wetherill received the J. Lawrence Smith Medal from the National Academy of Sciences "For his unique contributions to the cosmochronology of the planets and meteorites and to the orbital dynamics and formation of solar system bodies." In 2003, Wetherill received the Henry Norris Russell Lectureship, the highest honor bestowed by the American Astronomical Society, "For pioneering the application of modern physics and numerical simulations to the formation and evolution of terrestrial planets." Early life and education George Wetherill was born on August 12, 1925, in Philadelphia, Pennsylvania. Wetherill benefited from the G.I. Bill to receive four degrees, the Ph.B. (1948), S.B. (1949), S.M. (1951), and Ph.D. in physics (1953), all from the University of Chicago. He did his thesis research on the spontaneous fission of uranium, as well as nuclear processes in nature, as a U.S. Atomic Energy Commission Predoctoral Fellow. Career and achievements Department of Terrestrial Magnetism, 1953-1960 After receiving his Ph.D., Wetherill became a staff member at Carnegie's Department of Terrestrial Magnetism (DTM) in Washington, D.C. There, he joined an interdepartmental group who were working to date rocks using geochemical methods that measured natural radioactive decay. This involved determining the concentration and isotopic composition of inert gases such as argon, as well as the isotopes of strontium and lead. Wetherill originated the concept of the Concordia diagram for the uranium-lead isotopic system; this diagram became the standard means for determining precise ages of rocks, and of detecting the possibility of metamorphism. It provides a basis for high-precision geochronology of rocks dating back through the history of the planet Earth. Wetherill was also a member of the Carnegie group that accurately determined the decay constants of potassium and rubidium, an effort that has also become fundamental to the measurement of geological time. University of California, Los Angeles Wetherill left DTM in 1960 to become a professor of geophysics and geology at the University of California, Los Angeles. There, he served as chairman of the interdepartmental curriculum in geochemistry (1964-1968), and as chairman of the Department of Planetary and Space Sciences (1968-1972). At UCLA, Wetherill further explored techniques for age-dating, applying radiometric chronology techniques to meteorite and lunar samples. At the same time, he began to theorize about the origin of meteorites. His studies concentrated on collisions between objects in the asteroid belt together with resonances between their motions and those of planets. He computed how these events could move material into Earth-crossing orbits to become meteorites or larger Earth-impacting bodies responsible for the devastating impacts that caused mass extinctions of the majority of living species, including the dinosaurs. Later, Wetherill, along with scientists elsewhere, proposed that a certain unusual class of meteorites was not asteroidal in origin but instead came from the planet Mars. This was later confirmed by laboratory work elsewhere and is now well accepted. 
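The Concordia diagram mentioned above follows from the two uranium decay laws: for a closed system, the radiogenic ratios are 206Pb*/238U = e^(λ238·t) − 1 and 207Pb*/235U = e^(λ235·t) − 1. The sketch below traces the curve numerically; the decay constants are the standard values used in modern geochronology, while the code itself is only an illustration, not drawn from Wetherill's papers.

```python
# Sketch of Wetherill's Concordia curve for the uranium-lead system.
# Samples whose U-Pb ratios evolved undisturbed plot on this curve;
# the decay constants are the standard modern reference values.
import math

LAMBDA_238 = 1.55125e-10  # decay constant of 238U (per year)
LAMBDA_235 = 9.8485e-10   # decay constant of 235U (per year)

def concordia_point(age_years):
    """Radiogenic 206Pb/238U and 207Pb/235U ratios for a closed system."""
    pb206_u238 = math.exp(LAMBDA_238 * age_years) - 1.0
    pb207_u235 = math.exp(LAMBDA_235 * age_years) - 1.0
    return pb206_u238, pb207_u235

# Trace the curve from 0 to 4.5 billion years.
for age_ga in (0.5, 1.0, 2.0, 3.0, 4.0, 4.5):
    r206, r207 = concordia_point(age_ga * 1.0e9)
    print(f"{age_ga:.1f} Ga: 206Pb/238U = {r206:.3f}, 207Pb/235U = {r207:.3f}")
# Discordant samples (e.g. after lead loss during metamorphism) fall
# off this curve, which is what makes the diagram useful for detecting
# disturbance as well as for dating.
```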
Department of Terrestrial Magnetism, 1975- In 1975, Wetherill returned to Carnegie's Department of Terrestrial Magnetism as director. He remained director until 1991, when he became a staff member. At DTM, he began extending his research efforts into questions concerning the origin of the terrestrial planets (Mercury, Venus, Earth, and Mars). He was stimulated by earlier studies by Victor Safronov (O. Yu. Schmidt Institute, Moscow), who showed that as a swarm of planetesimals coagulated into large bodies the swarm could evolve to produce a few terrestrial planets. Wetherill developed a technique to calculate numerically the orbital evolution and accumulation of planetesimal swarms, and he used the technique to reach specific predictions of the physical and orbital properties of terrestrial planets. His results agreed well with present observations. In addition to showing how the inner solar system formed, Wetherill's work provided the basis for a model of a giant-impact origin for the Moon and the core of Mercury. It also led to explanations for the isotopic abundances of present-day planetary atmospheres. Wetherill showed that Jupiter plays an important role in the evolution of the Solar System; by ejecting comets from the solar system, it offers a protective presence to the inner planets. Wetherill's theoretical work supports discussions on the origins of the Solar System as well as on extrasolar planets. Community engagement Wetherill provided leadership in the scientific community by serving on advisory committees for NASA, the National Academy of Sciences, and the National Science Foundation. For 15 years, he was editor of the Annual Review of Earth and Planetary Sciences. He served as president of the Meteoritical Society, the Geochemical Society, the Planetology Section of the American Geophysical Union, and the International Association of Geochemistry and Cosmochemistry, and was a member of the American Philosophical Society. Wetherill died at his home in Washington, D.C., on Wednesday, July 19, 2006, after a long illness. Awards 1974, Member, National Academy of Sciences 1977, National Medal of Science, National Science Foundation 1981, Leonard Medal, Meteoritical Society 1984, G. K. Gilbert Award, Geological Society of America 1986, G. P. Kuiper Prize of the Division of Planetary Sciences of the American Astronomical Society 1991, Harry H. Hess Medal of the American Geophysical Union 1997, National Medal of Science awarded by President Clinton 2000, J. Lawrence Smith Medal, National Academy of Sciences 2003, Henry Norris Russell Lectureship, American Astronomical Society External links Washington Post obituary NASA Carnegie Institution Bio Publications International Center for Scientific research Obituary in Nature References 1925 births 2006 deaths 20th-century American physicists American nuclear physicists Members of the United States National Academy of Sciences National Medal of Science laureates University of California, Los Angeles faculty Scientists from Philadelphia Annual Reviews (publisher) editors Members of the American Philosophical Society Presidents of the Geochemical Society
George Wetherill
[ "Chemistry" ]
1,311
[ "Geochemists", "Presidents of the Geochemical Society" ]
4,107,511
https://en.wikipedia.org/wiki/Gum%20Nebula
The Gum Nebula (Gum 12) is an emission nebula that extends across 36° in the southern constellations Vela and Puppis. It lies approximately 450 parsecs from the Earth. Hard to distinguish, it was widely believed to be the greatly expanded (and still expanding) remains of a supernova that took place about a million years ago. More recent research suggests it may be an evolved H II region. It contains the 11,000-year-old Vela Supernova Remnant, along with the Vela Pulsar. The Gum Nebula contains about 32 cometary globules. These dense cloud cores are subject to such strong radiation from the O-type stars γ2 Vel and ζ Pup, and formerly from the progenitor of the Vela Supernova Remnant, that the cloud cores evaporate away from the hot stars into comet-like shapes. Like ordinary Bok globules, cometary globules are believed to be associated with star formation. A notable object inside one of these cometary globules is the Herbig-Haro object HH 46/47. The nebula is named after its discoverer, the Australian astronomer Colin Stanley Gum (1924–1960). Gum had published his findings in 1955 in a work called A study of diffuse southern H-alpha nebulae (see Gum catalog). He also published the discovery of the Gum Nebula in 1952 in the journal The Observatory. The observations were made at the Commonwealth Observatory. The Gum Nebula was photographed during Apollo 16 while the command module was in the double umbra of the Sun and Earth, using high-speed Kodak film. Popular culture The Gum Nebula is explored by the crew of the Starship Titan in the Star Trek novel Orion's Hounds. Gallery See also CG 4 Barnard's Loop References External links APOD: Gum Nebula, with mouse over (2009.08.22) Galaxy Map: Entry for Gum 12 in the Gum Catalog Galaxy Map: Detail chart for the Gould Belt (showing the location of Gum 12 relative to the Sun) Encyclopedia of Science: Entry for the Gum Nebula (erroneously called Gum 56) SouthernSkyPhoto.com Emission nebulae Puppis Vela (constellation)
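For scale, the two figures quoted above (a 36° span at roughly 450 parsecs) imply a physical extent of a few hundred parsecs. A small back-of-the-envelope computation, treating the angular span as a simple chord at that distance:

```python
# Rough physical size of the Gum Nebula from the figures quoted above:
# an angular span of 36 degrees at a distance of about 450 parsecs.
import math

distance_pc = 450.0
angular_span_deg = 36.0

# Chord subtended by the full angle at that distance.
extent_pc = 2.0 * distance_pc * math.tan(math.radians(angular_span_deg / 2.0))
print(f"approximate extent: {extent_pc:.0f} parsecs")  # ~290 pc
```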
Gum Nebula
[ "Astronomy" ]
449
[ "Vela (constellation)", "Puppis", "Constellations" ]
4,107,858
https://en.wikipedia.org/wiki/Semiconductor%20fabrication%20plant
In the microelectronics industry, a semiconductor fabrication plant, also called a fab or a foundry, is a factory where integrated circuits (ICs) are manufactured. The cleanroom is where all fabrication takes place and contains the machinery for integrated circuit production such as steppers and/or scanners for photolithography, etching, cleaning, and doping. All these devices are extremely precise and thus extremely expensive. Prices for pieces of equipment for the processing of 300 mm wafers range upward of $4,000,000 each, with a few pieces of equipment (e.g. EUV scanners) reaching as high as $340,000,000. A typical fab will have several hundred equipment items. Semiconductor fabrication therefore requires a large outlay: estimates put the cost of building a new fab at over one billion U.S. dollars, with values as high as $3–4 billion not being uncommon. For example, TSMC invested $9.3 billion in its Fab15 in Taiwan. The same company's estimates suggest that a future fab might cost $20 billion. A foundry model emerged in the 1990s: companies owning fabs that produced their own designs were known as integrated device manufacturers (IDMs). Companies that outsourced manufacturing of their designs were termed fabless semiconductor companies. Those foundries which did not create their own designs were called pure-play semiconductor foundries. In the cleanroom, the environment is controlled to eliminate all dust, since even a single speck can ruin a microcircuit, which has nanoscale features much smaller than dust particles. The cleanroom must also be damped against vibration to enable nanometer-scale alignment of photolithography machines, and must be kept within narrow bands of temperature and humidity. Vibration control may be achieved by using deep piles in the cleanroom's foundation that anchor the cleanroom to the bedrock, careful selection of the construction site, and/or using vibration dampers. Controlling temperature and humidity is critical for minimizing static electricity. Corona discharge sources can also be used to reduce static electricity. Often, a fab will be constructed in the following manner (from top to bottom): the roof, which may contain air handling equipment that draws, purifies and cools outside air, an air plenum for distributing the air to several floor-mounted fan filter units, which are also part of the cleanroom's ceiling, the cleanroom itself, which may or may not have more than one story, a return air plenum, the clean subfab that may contain support equipment for the machines in the cleanroom such as chemical delivery, purification, recycling and destruction systems, and the ground floor, that may contain electrical equipment. Fabs also often have some office space. History Typically an advance in chip-making technology requires a completely new fab to be built. In the past, the equipment to outfit a fab was not very expensive and there were a huge number of smaller fabs producing chips in small quantities. However, the cost of the most up-to-date equipment has since grown to the point where a new fab can cost several billion dollars. Another side effect of the cost has been the challenge to make use of older fabs. For many companies these older fabs are useful for producing designs for unique markets, such as embedded processors, flash memory, and microcontrollers. However, for companies with more limited product lines, it is often best to either rent out the fab, or close it entirely. 
This is due to the tendency of the cost of upgrading an existing fab to produce devices requiring newer technology to exceed the cost of a completely new fab. There has been a trend to produce ever larger wafers, so each process step is performed on more and more chips at once. The goal is to spread production costs (chemicals, fab time) over a larger number of saleable chips (a rough dies-per-wafer estimate is sketched at the end of this article). It is impossible (or at least impracticable) to retrofit machinery to handle larger wafers. This is not to say that foundries using smaller wafers are necessarily obsolete; older foundries can be cheaper to operate, have higher yields for simple chips and still be productive. The industry was aiming to move from the state-of-the-art wafer size of 300 mm (12 in) to 450 mm by 2018. In March 2014, Intel expected 450 mm deployment by 2020. But in 2016, the corresponding joint research efforts were stopped. Additionally, there is a large push to completely automate the production of semiconductor chips from beginning to end. This is often referred to as the "lights-out fab" concept. The International Sematech Manufacturing Initiative (ISMI), an extension of the US consortium SEMATECH, is sponsoring the "300 mm Prime" initiative. An important goal of this initiative is to enable fabs to produce greater quantities of smaller chips as a response to shorter lifecycles seen in consumer electronics. The logic is that such a fab can produce smaller lots more easily and can efficiently switch its production to supply chips for a variety of new electronic devices. Another important goal is to reduce the waiting time between processing steps. See also Foundry model for the business aspects of foundries and fabless companies List of semiconductor fabrication plants Rock's law Semiconductor consolidation Semiconductor device fabrication for the process of manufacturing devices Notes References Handbook of Semiconductor Manufacturing Technology, Second Edition by Robert Doering and Yoshio Nishi (hardcover – Jul 9, 2007) Semiconductor Manufacturing Technology by Michael Quirk and Julian Serda (paperback – Nov 19, 2000) Fundamentals of Semiconductor Manufacturing and Process Control by Gary S. May and Costas J. Spanos (hardcover – May 22, 2006) The Essential Guide to Semiconductors (Essential Guide Series) by Jim Turley (paperback – Dec 29, 2002) Semiconductor Manufacturing Handbook (McGraw–Hill Handbooks) by Hwaiyu Geng (hardcover – April 27, 2005) Further reading "Chip Makers Watch Their Waste", The Wall Street Journal, July 19, 2007, p.B3 Semiconductor device fabrication Manufacturing plants
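The cost-spreading argument above can be made concrete with the standard back-of-the-envelope dies-per-wafer approximation. The formula is a common industry rule of thumb, not something stated in this article, and the 100 mm² die size is an assumption chosen for illustration.

```python
# Rough dies-per-wafer estimate (standard rule-of-thumb approximation),
# illustrating why moving from 300 mm to 450 mm wafers spreads per-wafer
# processing cost over more chips.
import math

def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    """Usable dies per wafer, correcting for partial dies at the edge."""
    r = wafer_diameter_mm / 2.0
    return int(math.pi * r ** 2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2.0 * die_area_mm2))

die = 100.0  # assumed die size in mm^2
for d in (200, 300, 450):
    print(f"{d} mm wafer: ~{dies_per_wafer(d, die)} dies of {die:.0f} mm^2")
```

With this assumed die size, a 450 mm wafer would hold roughly 2.3 times the dies of a 300 mm wafer, while many per-wafer process steps cost about the same to run.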
Semiconductor fabrication plant
[ "Materials_science" ]
1,252
[ "Semiconductor device fabrication", "Microtechnology" ]
4,108,991
https://en.wikipedia.org/wiki/L.%20Bruce%20Archer
Leonard Bruce Archer CBE (22 November 1922 – 16 May 2005) was a British chartered mechanical engineer and Professor of Design Research at the Royal College of Art (RCA) who championed research in design, and helped to establish design as an academic discipline. Archer spent most of his working life in schools of art and design, including more than 25 years at the RCA. He promoted the use of systems-level analysis, evidence-based design, and evaluation through field testing within industrial design, and led a multi-disciplinary team which employed these methods in practice, including most notably their application to the design of what became the standard British hospital bed. He went on to become head of a postgraduate research and teaching department where he identified that scholarly inquiry in design was just as vital as it was in the arts, the humanities, and the sciences, and argued that design warranted its own body of scholarship and knowledge no less than conventional academic disciplines. He proposed that modelling be recognised as the fundamental competence of design, just as numeracy underpins mathematics (and literacy, the humanities) and he believed that – like both literacy and numeracy – it should be widely taught. Archer trained a generation of design researchers, showing them how the procedures of scholarly research based on well-founded evidence and systematic analysis were as applicable in design as in the more traditional academic subjects. For design practice he argued there was a need for method and rigour, and for decisions to be recorded and explained so they could, if necessary, be defended. In the modern day, practitioners are familiar with these issues through the requirements of quality assurance, while in academia the Research Assessment Exercise has pushed even the art and design community into taking research seriously. Archer's ideas were radical and pioneering, and the very existence of his research department – in an art college – controversial. It was his own force of character and his persuasive ability to argue his case with absolute clarity and conviction that ensured the department's survival, and provided him with the opportunity to demonstrate that design is not just a craft skill but a knowledge-based discipline in its own right. Early life Leonard Bruce Archer (known primarily as Bruce Archer or L. Bruce Archer) was born in 1922. His father, Leonard Castella Archer (1898–1983), was a Regimental Sergeant Major in the Scots Guards and his mother, Ivy Hilda Hulett (1897-1974), was a dressmaker and a trained amateur artist. During his schooldays, at Henry Thornton Grammar School, he wanted to be a painter, but he was academically bright and not allowed to continue with art beyond fifteen. His school certificates were in entirely scientific subjects. The Second World War intervened before he could go to art school or university and he joined his father's regiment. He saw service in Italy but left after three years (1941–44) on medical grounds. Career Archer worked as an engineering designer in manufacturing, designing jigs and tools and later process plant. He attended evening classes for years at Northampton College, London (now City University) where he trained as a mechanical engineering designer, eventually gaining his Higher National Certificate in mechanical engineering. He became a member of the Institution of Engineering Designers in 1950, and in 1951 was awarded its national prize for the best thesis on design. 
He joined the Institution of Mechanical Engineers in the same year.

Consultant
Archer later said that he had been inspired by the Festival of Britain, which took place in 1951. In 1953 he left full-time employment in industry to set up his own consultancy – the Scientists' and Technologists' Engineering Partnership – and started teaching evening classes at the Central School of Art and Design, becoming a full-time lecturer there in 1957. He began writing articles for Design magazine, promoting what he called 'a rational approach to design'.

At a party given by a colleague from the Central School, he was approached by Tomás Maldonado, Director of the Ulm School of Design, and offered a job acting as a bridge between two rival factions at the school: the 'scientists' and the 'artists'. On moving there in 1960, as a guest professor, he discovered two opposing belief systems: the ergonomists and psychologists believed in analysis and experiment as the basis for design, whereas the stylists were mostly concerned with form, and had evolved design rules about proportion, colour, and texture which they thought of as a logical system for creating the cool, minimalist look for which Ulm became famous. Archer tried to convey each side's belief systems across the divide, but each group thought he had aligned himself with the other. Maldonado had left Ulm even before Archer arrived, and Archer found himself isolated. Later he said that learning how the two cultures thought was a highly formative experience.

Designing hospital equipment
In 1961 Misha Black was appointed head of industrial design at the Royal College of Art and asked Archer to lead a research project called Studies in the function and design of non-surgical hospital equipment, funded by the Nuffield Foundation. Archer returned in the summer of 1962 and, with a small multi-disciplinary team, identified four urgent design problems: a receptacle for soiled dressings, a means of reducing incorrect dispensing of medicines to ward patients, the need for a standard design for hospital beds, and a way to prevent smoke control doors being routinely propped open. They presented their report at the end of the first year to the Nuffield Foundation. Unfortunately, the response was not what they had hoped for, and Archer and Black were both stunned.

Undaunted, Archer took a job at the Eldorado ice cream factory in Southwark, loading ice cream into refrigerated vans every night and working at the college unpaid during the day. Eventually, commercial funding was found for the soiled dressings receptacle, and in 1963 he gave up his evening job when support was obtained from the King Edward's Hospital Fund for London to study the medicine-dispensing problem. A radical solution was devised: a medicine trolley on wheels that could be securely padlocked to a wall when not in use.

The hospital bed problem was also re-examined. The King Edward's Hospital Fund became the King's Fund and was seeking a major exercise to promote its new nationwide role. It took on the standardisation of the hospital bed. Archer was appointed to a Working Party, and in due course won a contract for a standard specification and a prototype design. After widespread consultation, evidence gathering through direct observations, and extensive field trials using mock-ups and test devices, the specification was adopted by the King's Fund and became a British Standard; a successful prototype was also developed by Kenneth Agnew at the college for a commercial bed manufacturer. The hospital bed project has been documented by an historian.
The fire door problem was solved by the use of electro-magnetic door-holders wired to the fire alarm, which released the doors when the alarm was triggered. So solutions to all four of the original projects were delivered. In the process, Archer had demonstrated that work study, systems analysis, and ergonomics, were proper tools for use by designers, and that systematic methods were not inimical to creativity in design, but essential contributors to it. Professor Generalizing from his experiences in these and other design projects undertaken by what became the Industrial Design (Engineering) Research Unit, Archer presented his ideas at design conferences and prepared his paper 'Systematic method for designers', which was published by the Council for Industrial Design in 1965 after a series in Design magazine. A photocopied version of his 1968 doctoral dissertation, The Structure of Design Processes, was published by the National Technical Information Service of the U.S. Department of Commerce in 1969. Both items were translated into several languages, and he continued to receive requests for reprints for a decade or more afterwards. He was awarded the Kaufmann International Design Research Award in 1964. In 1967 he helped to found the cross-disciplinary Design Research Society, and was awarded a doctorate by the Royal College in 1968. Many of his ideas were brought together in 'Technological innovation: a methodology', a paper published by the Science Policy Foundation in 1971. In that same year the Rector of the college, Sir Robin Darwin, called him into his office and as Archer said later: Soon his Department of Design Research had a complement of more than thirty researchers. As they marched daily into the college's Senior Common Room they represented quite a large body of people, and were not entirely welcomed by staff from other departments. Archer himself reluctantly became what he described as a traveling salesman to ensure a steady flow of research contracts. After two or three years, there was a change of direction following a college decision to turn the Department of Design Research into a post-graduate teaching department like every other. Funding was won from the Science Research Council to study design processes, and postgraduates were recruited to undertake masters and doctoral studies. Design graduates arrived to learn how to conduct research, while others from disciplines like psychology and mathematics learned to apply their skills to the discipline of design. Archer's own lectures ranged widely across the philosophy of science, ethics, aesthetics, economics, innovation, measurement, and value theory, and were delivered with directness and enthusiasm. The department itself was organized in a highly systematic way, with procedural memoranda setting out agendas for every type of meeting including highly structured progress reviews for students. Every event was meticulously recorded in his daily log. From his belief that design was just as important an academic topic as the arts, the humanities and the sciences, Archer was instrumental in the move to see it taught as part of the school curriculum. He campaigned to influence the Department for Education and Science, and ran short courses at the college for school teachers. He launched a Department for Design Education at the college, giving teachers the opportunity to undertake masters level research into design. He was made a CBE in 1976. 
Director of Research In 1984, Jocelyn Stevens was appointed as Rector of the Royal College of Art, and he peremptorily closed the Department of Design Research. It had operated successfully for exactly 25 years. Archer himself was appointed Director of Research with college-wide responsibilities. Though approaching retirement age, his knowledge of the workings of the college and his academic credibility placed him in great demand, and Stevens thought nothing of contacting him at any time of day or night for advice. Retirement After retiring in 1988, Archer ran in-service training courses on research for art and design institutes and was active as Chair of the Design Research Society from 1988 to 1990, and later as its first President from 1992 to 2000. In March 2004, a dinner was held at the Royal College of Art organised by the Society at which he was presented with its Lifetime Achievement Award. Archer himself, though frail, made a typically forceful and eloquent acceptance speech in which he acknowledged the contributions of his many co-workers, and contrasted the skills of decision making and advocacy which typify design, with those of inquiry and analysis which are essential in research. Family Archer was married to Joan Henrietta Allen (1926-2001) for fifty years. They had one daughter, Miranda, who trained as an architect before becoming a high school teacher in design technology – the very subject that her father had done so much to see established in secondary education. See also References 1922 births 2005 deaths Academics of the Royal College of Art Academics of the Central School of Art and Design British Army personnel of World War II British industrial designers Commanders of the Order of the British Empire Design researchers Scots Guards soldiers Design studies Military personnel from London
L. Bruce Archer
[ "Engineering" ]
2,334
[ "Design", "Design studies" ]
4,109,042
https://en.wikipedia.org/wiki/Cell%20signaling
In biology, cell signaling (cell signalling in British English) is the process by which a cell interacts with itself, other cells, and the environment. Cell signaling is a fundamental property of all cellular life in prokaryotes and eukaryotes. Typically, the signaling process involves three components: the signal, the receptor, and the effector.

In biology, signals are mostly chemical in nature, but can also be physical cues such as pressure, voltage, temperature, or light. Chemical signals are molecules with the ability to bind and activate a specific receptor. These molecules, also referred to as ligands, are chemically diverse, including ions (e.g. Na+, K+, Ca2+, etc.), lipids (e.g. steroid, prostaglandin), peptides (e.g. insulin, ACTH), carbohydrates, glycosylated proteins (proteoglycans), nucleic acids, etc. Peptide and lipid ligands are particularly important, as most hormones belong to these classes of chemicals. Peptides are usually polar, hydrophilic molecules. As such they are unable to diffuse freely across the lipid bilayer of the plasma membrane, so their action is mediated by a cell membrane bound receptor. On the other hand, liposoluble chemicals such as steroid hormones can diffuse passively across the plasma membrane and interact with intracellular receptors.

Cell signaling can occur over short or long distances, and can be further classified as autocrine, intracrine, juxtacrine, paracrine, or endocrine. Autocrine signaling occurs when the chemical signal acts on the same cell that produced it. Intracrine signaling occurs when the chemical signal produced by a cell acts on receptors located in the cytoplasm or nucleus of the same cell. Juxtacrine signaling occurs between physically adjacent cells. Paracrine signaling occurs between nearby cells. Endocrine interaction occurs between distant cells, with the chemical signal usually carried by the blood.

Receptors are complex proteins or tightly bound multimers of proteins, located in the plasma membrane or within the interior of the cell, such as in the cytoplasm, organelles, and nucleus. Receptors have the ability to detect a signal either by binding to a specific chemical or by undergoing a conformational change when interacting with physical agents. It is the specificity of the chemical interaction between a given ligand and its receptor that confers the ability to trigger a specific cellular response. Receptors can be broadly classified into cell membrane receptors and intracellular receptors. Cell membrane receptors can be further classified into ion channel linked receptors, G protein-coupled receptors, and enzyme linked receptors.

Ion channel receptors are large transmembrane proteins with a ligand activated gate function. When these receptors are activated, they may allow or block passage of specific ions across the cell membrane. Most receptors activated by physical stimuli such as pressure or temperature belong to this category. G-protein receptors are multimeric proteins embedded within the plasma membrane. These receptors have extracellular, trans-membrane and intracellular domains. The extracellular domain is responsible for the interaction with a specific ligand. The intracellular domain is responsible for the initiation of a cascade of chemical reactions which ultimately triggers the specific cellular function controlled by the receptor.
Enzyme-linked receptors are transmembrane proteins with an extracellular domain responsible for binding a specific ligand and an intracellular domain with enzymatic or catalytic activity. Upon activation, the enzymatic portion is responsible for promoting specific intracellular chemical reactions.

Intracellular receptors have a different mechanism of action. They usually bind to lipid soluble ligands that diffuse passively through the plasma membrane, such as steroid hormones. These ligands bind to specific cytoplasmic transporters that shuttle the hormone-transporter complex into the nucleus, where specific genes are activated and the synthesis of specific proteins is promoted.

The effector component of the signaling pathway begins with signal transduction. In this process, the signal, by interacting with the receptor, starts a series of molecular events within the cell leading to the final effect of the signaling process. Typically, the final effect consists of the activation of an ion channel (ligand-gated ion channel) or the initiation of a second messenger system cascade that propagates the signal through the cell. Second messenger systems can amplify or modulate a signal, in which activation of a few receptors results in multiple secondary messengers being activated, thereby amplifying the initial signal (the first messenger). The downstream effects of these signaling pathways may include additional enzymatic activities such as proteolytic cleavage, phosphorylation, methylation, and ubiquitinylation.

Signaling molecules can be synthesized by various biosynthetic pathways and released through passive or active transport, or even as a result of cell damage. Each cell is programmed to respond to specific extracellular signal molecules, and this responsiveness is the basis of development, tissue repair, immunity, and homeostasis. Errors in signaling interactions may cause diseases such as cancer, autoimmunity, and diabetes.

Taxonomic range
In many small organisms such as bacteria, quorum sensing enables individuals to begin an activity only when the population is sufficiently large. This signaling between cells was first observed in the marine bacterium Aliivibrio fischeri, which produces light when the population is dense enough. The mechanism involves the production and detection of a signaling molecule, and the regulation of gene transcription in response. Quorum sensing operates in both gram-positive and gram-negative bacteria, and both within and between species.

In slime molds, individual cells aggregate together to form fruiting bodies and eventually spores, under the influence of a chemical signal, known as an acrasin. The individuals move by chemotaxis, i.e. they are attracted by the chemical gradient. Some species use cyclic AMP as the signal; others such as Polysphondylium violaceum use a dipeptide known as glorin.

In plants and animals, signaling between cells occurs either through release into the extracellular space, divided into paracrine signaling (over short distances) and endocrine signaling (over long distances), or by direct contact, known as juxtacrine signaling, such as notch signaling. Autocrine signaling is a special case of paracrine signaling where the secreting cell has the ability to respond to the secreted signaling molecule. Synaptic signaling is a special case of paracrine signaling (for chemical synapses) or juxtacrine signaling (for electrical synapses) between neurons and target cells.
Extracellular signal

Synthesis and release
Many cell signals are carried by molecules that are released by one cell and move to make contact with another cell. Signaling molecules can belong to several chemical classes: lipids, phospholipids, amino acids, monoamines, proteins, glycoproteins, or gases. Signaling molecules binding surface receptors are generally large and hydrophilic (e.g. TRH, vasopressin, acetylcholine), while those entering the cell are generally small and hydrophobic (e.g. glucocorticoids, thyroid hormones, cholecalciferol, retinoic acid), but important exceptions to both are numerous, and the same molecule can act either via surface receptors or in an intracrine manner, with different effects. In animal cells, specialized cells release these hormones and send them through the circulatory system to other parts of the body. They then reach target cells, which can recognize and respond to the hormones and produce a result. This is also known as endocrine signaling. Plant growth regulators, or plant hormones, move through cells or by diffusing through the air as a gas to reach their targets. Hydrogen sulfide is produced in small amounts by some cells of the human body and has a number of biological signaling functions. Only two other such gases are currently known to act as signaling molecules in the human body: nitric oxide and carbon monoxide.

Exocytosis
Exocytosis is the process by which a cell transports molecules such as neurotransmitters and proteins out of the cell. As an active transport mechanism, exocytosis requires the use of energy to transport material. Exocytosis and its counterpart, endocytosis, the process that brings substances into the cell, are used by all cells because most chemical substances important to them are large polar molecules that cannot pass through the hydrophobic portion of the cell membrane by passive transport. Exocytosis is the process by which a large amount of molecules are released; thus it is a form of bulk transport. Exocytosis occurs via secretory portals at the cell plasma membrane called porosomes. Porosomes are permanent cup-shaped lipoprotein structures at the cell plasma membrane, where secretory vesicles transiently dock and fuse to release intra-vesicular contents from the cell. In exocytosis, membrane-bound secretory vesicles are carried to the cell membrane, where they dock and fuse at porosomes and their contents (i.e., water-soluble molecules) are secreted into the extracellular environment. This secretion is possible because the vesicle transiently fuses with the plasma membrane. In the context of neurotransmission, neurotransmitters are typically released from synaptic vesicles into the synaptic cleft via exocytosis; however, neurotransmitters can also be released via reverse transport through membrane transport proteins.

Forms of Cell Signaling

Autocrine
Autocrine signaling involves a cell secreting a hormone or chemical messenger (called the autocrine agent) that binds to autocrine receptors on that same cell, leading to changes in the cell itself. This can be contrasted with paracrine signaling, intracrine signaling, or classical endocrine signaling.

Intracrine
In intracrine signaling, the signaling chemicals are produced inside the cell and bind to cytosolic or nuclear receptors without ever being secreted from the cell.
The fact that intracrine signals are not secreted outside of the cell is what sets intracrine signaling apart from other cell signaling mechanisms such as autocrine signaling. In both autocrine and intracrine signaling, the signal has an effect on the cell that produced it.

Juxtacrine
Juxtacrine signaling is a type of cell–cell or cell–extracellular matrix signaling in multicellular organisms that requires close contact. There are three types: a membrane ligand (protein, oligosaccharide, lipid) and a membrane protein of two adjacent cells interact; a communicating junction links the intracellular compartments of two adjacent cells, allowing transit of relatively small molecules; or an extracellular matrix glycoprotein and a membrane protein interact. Additionally, in unicellular organisms such as bacteria, juxtacrine signaling means interactions by membrane contact. Juxtacrine signaling has been observed for some growth factors, cytokine and chemokine cellular signals, playing an important role in the immune response. Juxtacrine signaling via direct membrane contacts is also present between neuronal cell bodies and motile processes of microglia, both during development and in the adult brain.

Paracrine
In paracrine signaling, a cell produces a signal to induce changes in nearby cells, altering the behaviour of those cells. Signaling molecules known as paracrine factors diffuse over a relatively short distance (local action), as opposed to cell signaling by endocrine factors, hormones which travel considerably longer distances via the circulatory system; juxtacrine interactions; and autocrine signaling. Cells that produce paracrine factors secrete them into the immediate extracellular environment. Factors then travel to nearby cells, in which the gradient of factor received determines the outcome. However, the exact distance that paracrine factors can travel is not certain.

Paracrine signals such as retinoic acid target only cells in the vicinity of the emitting cell. Neurotransmitters represent another example of a paracrine signal. Some signaling molecules can function as both a hormone and a neurotransmitter. For example, epinephrine and norepinephrine can function as hormones when released from the adrenal gland and are transported to the heart by way of the blood stream. Norepinephrine can also be produced by neurons to function as a neurotransmitter within the brain. Estrogen can be released by the ovary and function as a hormone or act locally via paracrine or autocrine signaling.

Although paracrine signaling elicits a diverse array of responses in the induced cells, most paracrine factors utilize a relatively streamlined set of receptors and pathways. In fact, different organs in the body, even between different species, are known to utilize similar sets of paracrine factors in differential development. The highly conserved receptors and pathways can be organized into four major families based on similar structures: the fibroblast growth factor (FGF) family, the Hedgehog family, the Wnt family, and the TGF-β superfamily. Binding of a paracrine factor to its respective receptor initiates signal transduction cascades, eliciting different responses.

Endocrine
Endocrine signals are called hormones. Hormones are produced by endocrine cells and they travel through the blood to reach all parts of the body. Specificity of signaling can be controlled if only some cells can respond to a particular hormone.
Endocrine signaling involves the release of hormones by internal glands of an organism directly into the circulatory system, regulating distant target organs. In vertebrates, the hypothalamus is the neural control center for all endocrine systems. In humans, the major endocrine glands are the thyroid gland and the adrenal glands. The study of the endocrine system and its disorders is known as endocrinology.

Receptors
Cells receive information from their neighbors through a class of proteins known as receptors. Receptors may bind specific molecules (ligands) or may interact with physical agents like light, temperature, or mechanical pressure. Reception occurs when the target cell (any cell with a receptor protein specific to the signal molecule) detects a signal, usually in the form of a small, water-soluble molecule, via binding to a receptor protein on the cell surface. Alternatively, once inside the cell, the signaling molecule can bind to intracellular receptors or other elements, or stimulate enzyme activity (e.g. gases), as in intracrine signaling.

Signaling molecules interact with a target cell as a ligand to cell surface receptors, and/or by entering into the cell through its membrane or endocytosis for intracrine signaling. This generally results in the activation of second messengers, leading to various physiological effects. In many mammals, early embryo cells exchange signals with cells of the uterus. In the human gastrointestinal tract, bacteria exchange signals with each other and with human epithelial and immune system cells. For the yeast Saccharomyces cerevisiae during mating, some cells send a peptide signal (mating factor pheromones) into their environment. The mating factor peptide may bind to a cell surface receptor on other yeast cells and induce them to prepare for mating.

Cell surface receptors
Cell surface receptors play an essential role in the biological systems of single- and multi-cellular organisms, and malfunction or damage to these proteins is associated with cancer, heart disease, and asthma. These trans-membrane receptors are able to transmit information from outside the cell to the inside because they change conformation when a specific ligand binds to them. There are three major types: ion channel linked receptors, G protein–coupled receptors, and enzyme-linked receptors.

Ion channel linked receptors
Ion channel linked receptors are a group of transmembrane ion-channel proteins which open to allow ions such as Na+, K+, Ca2+, and/or Cl− to pass through the membrane in response to the binding of a chemical messenger (i.e. a ligand), such as a neurotransmitter. When a presynaptic neuron is excited, it releases a neurotransmitter from vesicles into the synaptic cleft. The neurotransmitter then binds to receptors located on the postsynaptic neuron. If these receptors are ligand-gated ion channels, a resulting conformational change opens the ion channels, which leads to a flow of ions across the cell membrane. This, in turn, results in either a depolarization, for an excitatory receptor response, or a hyperpolarization, for an inhibitory response.

These receptor proteins are typically composed of at least two different domains: a transmembrane domain which includes the ion pore, and an extracellular domain which includes the ligand binding location (an allosteric binding site). This modularity has enabled a 'divide and conquer' approach to finding the structure of the proteins (crystallising each domain separately).
The function of such receptors located at synapses is to convert the chemical signal of presynaptically released neurotransmitter directly and very quickly into a postsynaptic electrical signal. Many LICs are additionally modulated by allosteric ligands, by channel blockers, ions, or the membrane potential. LICs are classified into three superfamilies which lack evolutionary relationship: cys-loop receptors, ionotropic glutamate receptors and ATP-gated channels.

G protein–coupled receptors
G protein-coupled receptors are a large group of evolutionarily related cell surface receptors that detect molecules outside the cell and activate cellular responses. Because they couple with G proteins and pass through the cell membrane seven times, they are also called seven-transmembrane receptors. The G protein acts as a "middle man", transferring the signal from its activated receptor to its target and therefore indirectly regulating that target protein. Ligands can bind either to the extracellular N-terminus and loops (e.g. glutamate receptors) or to the binding site within transmembrane helices (Rhodopsin-like family). They are all activated by agonists, although a spontaneous auto-activation of an empty receptor can also be observed.

G protein-coupled receptors are found only in eukaryotes, including yeast, choanoflagellates, and animals. The ligands that bind and activate these receptors include light-sensitive compounds, odors, pheromones, hormones, and neurotransmitters, and vary in size from small molecules to peptides to large proteins. G protein-coupled receptors are involved in many diseases. There are two principal signal transduction pathways involving the G protein-coupled receptors: the cAMP signal pathway and the phosphatidylinositol signal pathway. When a ligand binds to the GPCR, it causes a conformational change in the GPCR, which allows it to act as a guanine nucleotide exchange factor (GEF). The GPCR can then activate an associated G protein by exchanging the GDP bound to the G protein for a GTP. The G protein's α subunit, together with the bound GTP, can then dissociate from the β and γ subunits to further affect intracellular signaling proteins or target functional proteins directly, depending on the α subunit type (Gαs, Gαi/o, Gαq/11, Gα12/13).

G protein-coupled receptors are an important drug target, and approximately 34% of all Food and Drug Administration (FDA) approved drugs target 108 members of this family. The global sales volume for these drugs is estimated to be 180 billion US dollars. It is estimated that GPCRs are targets for about 50% of drugs currently on the market, mainly due to their involvement in signaling pathways related to many diseases, i.e. mental, metabolic (including endocrinological), immunological (including viral infections), cardiovascular, inflammatory, and sensory disorders, and cancer. The long-established association between GPCRs and many endogenous and exogenous substances, resulting in e.g. analgesia, is another dynamically developing field of pharmaceutical research.

Enzyme-linked receptors
Enzyme-linked receptors (or catalytic receptors) are transmembrane receptors that, upon activation by an extracellular ligand, cause enzymatic activity on the intracellular side. Hence a catalytic receptor is an integral membrane protein possessing both enzymatic (catalytic) and receptor functions.
They have an extracellular ligand-binding domain, an intracellular domain with a catalytic function, and a single transmembrane helix connecting the two. The signaling molecule binds to the receptor on the outside of the cell and causes a conformational change in the catalytic function located on the receptor inside the cell. Examples of the enzymatic activity include:
Receptor tyrosine kinase, as in the fibroblast growth factor receptor. Most enzyme-linked receptors are of this type.
Serine/threonine-specific protein kinase, as in the bone morphogenetic protein receptors.
Guanylate cyclase, as in the atrial natriuretic factor receptor.

Intracellular receptors
Intracellular receptors exist freely in the cytoplasm or nucleus, or can be bound to organelles or membranes. For example, the presence of nuclear and mitochondrial receptors is well documented. The binding of a ligand to the intracellular receptor typically induces a response in the cell. Intracellular receptors often have a level of specificity, which allows the receptors to initiate certain responses when bound to a corresponding ligand. Intracellular receptors typically act on lipid soluble molecules. The receptors bind to a group of DNA binding proteins. Upon binding, the receptor-ligand complex translocates to the nucleus, where it can alter patterns of gene expression.

Steroid hormone receptor
Steroid hormone receptors are found in the nucleus, cytosol, and also on the plasma membrane of target cells. They are generally intracellular receptors (typically cytoplasmic or nuclear) and initiate signal transduction for steroid hormones which lead to changes in gene expression over a time period of hours to days. The best studied steroid hormone receptors are members of the nuclear receptor subfamily 3 (NR3), which includes receptors for estrogen (group NR3A) and 3-ketosteroids (group NR3C). In addition to nuclear receptors, several G protein-coupled receptors and ion channels act as cell surface receptors for certain steroid hormones.

Mechanisms of Receptor Down-Regulation
Receptor-mediated endocytosis is a common way of turning receptors "off". Endocytic down-regulation is regarded as a means for reducing receptor signaling. The process involves the binding of a ligand to the receptor, which then triggers the formation of coated pits; the coated pits transform into coated vesicles and are transported to the endosome. Receptor phosphorylation is another type of receptor down-regulation. Biochemical changes can reduce receptor affinity for a ligand. Reduced receptor sensitivity can result from receptors being occupied for a long time; this results in a receptor adaptation in which the receptor no longer responds to the signaling molecule. Many receptors have the ability to change in response to ligand concentration.

Signal transduction pathways
When binding to the signaling molecule, the receptor protein changes in some way and starts the process of transduction, which can occur in a single step or as a series of changes in a sequence of different molecules (called a signal transduction pathway). The molecules that compose these pathways are known as relay molecules. The multistep process of the transduction stage is often composed of the activation of proteins by addition or removal of phosphate groups, or even the release of other small molecules or ions that can act as messengers. Amplification of the signal is one of the benefits of this multistep sequence.
Other benefits include more opportunities for regulation than simpler systems have, and the fine-tuning of the response, in both unicellular and multicellular organisms.

In some cases, receptor activation caused by ligand binding to a receptor is directly coupled to the cell's response to the ligand. For example, the neurotransmitter GABA can activate a cell surface receptor that is part of an ion channel. GABA binding to a GABAA receptor on a neuron opens a chloride-selective ion channel that is part of the receptor. GABAA receptor activation allows negatively charged chloride ions to move into the neuron, which inhibits the ability of the neuron to produce action potentials. However, for many cell surface receptors, ligand-receptor interactions are not directly linked to the cell's response. The activated receptor must first interact with other proteins inside the cell before the ultimate physiological effect of the ligand on the cell's behavior is produced. Often, the behavior of a chain of several interacting cell proteins is altered following receptor activation. The entire set of cell changes induced by receptor activation is called a signal transduction mechanism or pathway.

A more complex signal transduction pathway is the MAPK/ERK pathway, which involves changes of protein–protein interactions inside the cell, induced by an external signal. Many growth factors bind to receptors at the cell surface and stimulate cells to progress through the cell cycle and divide. Several of these receptors are kinases that start to phosphorylate themselves and other proteins when binding to a ligand. This phosphorylation can generate a binding site for a different protein and thus induce protein–protein interaction. In this case, the ligand (called epidermal growth factor, or EGF) binds to the receptor (called EGFR). This activates the receptor to phosphorylate itself. The phosphorylated receptor binds to an adaptor protein (GRB2), which couples the signal to further downstream signaling processes. For example, one of the signal transduction pathways that are activated is called the mitogen-activated protein kinase (MAPK) pathway. The signal transduction component labeled as "MAPK" in the pathway was originally called "ERK," so the pathway is called the MAPK/ERK pathway. The MAPK protein is an enzyme, a protein kinase that can attach phosphate to target proteins such as the transcription factor MYC and, thus, alter gene transcription and, ultimately, cell cycle progression. Many cellular proteins are activated downstream of the growth factor receptors (such as EGFR) that initiate this signal transduction pathway.

Some signal transduction pathways respond differently, depending on the amount of signaling received by the cell. For instance, the hedgehog protein activates different genes, depending on the amount of hedgehog protein present. Complex multi-component signal transduction pathways provide opportunities for feedback, signal amplification, and interactions inside one cell between multiple signals and signaling pathways.

A specific cellular response is the result of the transduced signal in the final stage of cell signaling. This response can essentially be any cellular activity that is present in a body. It can spur the rearrangement of the cytoskeleton, or even catalysis by an enzyme. These three steps of cell signaling all ensure that the right cells are behaving as told, at the right time, and in synchronization with other cells and their own functions within the organism.
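As a rough, purely illustrative sketch of the amplification idea discussed above, a multistep cascade can be treated as a chain of multiplicative stages; the per-stage factors below are hypothetical numbers chosen only to show the multiplicative effect, not measured values (Python):

    # Hypothetical amplification factors for a simple three-stage cascade:
    # one activated receptor activates several effector enzymes, each enzyme
    # produces many second messengers, and each messenger modifies several targets.
    factors = {
        "effector enzymes activated per receptor": 10,
        "second messengers produced per enzyme": 100,
        "target proteins modified per messenger": 10,
    }

    affected = 1  # start from a single ligand-receptor binding event
    for stage, factor in factors.items():
        affected *= factor
        print(f"{stage}: cumulative molecules affected = {affected}")
    # A single binding event can thus influence on the order of 10,000 downstream
    # molecules, which is why a faint extracellular signal can trigger a large response.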
Ultimately, the end of a signaling pathway leads to the regulation of a cellular activity. This response can take place in the nucleus or in the cytoplasm of the cell. A majority of signaling pathways control protein synthesis by turning certain genes on and off in the nucleus.

In unicellular organisms such as bacteria, signaling can be used to 'activate' peers from a dormant state, enhance virulence, defend against bacteriophages, etc. In quorum sensing, which is also found in social insects, the multiplicity of individual signals has the potential to create a positive feedback loop, generating a coordinated response. In this context, the signaling molecules are called autoinducers. This signaling mechanism may have been involved in the evolution from unicellular to multicellular organisms. Bacteria also use contact-dependent signaling, notably to limit their growth. Signaling molecules used between multicellular organisms are often called pheromones. They can have such purposes as alerting against danger, indicating food supply, or assisting in reproduction.

Notch signaling pathway
Notch is a cell surface protein that functions as a receptor. Animals have a small set of genes that code for signaling proteins that interact specifically with Notch receptors and stimulate a response in cells that express Notch on their surface. Molecules that activate (or, in some cases, inhibit) receptors can be classified as hormones, neurotransmitters, cytokines, and growth factors, in general called receptor ligands. Ligand–receptor interactions such as the Notch receptor interaction are known to be the main interactions responsible for cell signaling mechanisms and communication. Notch acts as a receptor for ligands that are expressed on adjacent cells. While some receptors are cell-surface proteins, others are found inside cells. For example, estrogen is a hydrophobic molecule that can pass through the lipid bilayer of the membranes. As part of the endocrine system, intracellular estrogen receptors from a variety of cell types can be activated by estrogen produced in the ovaries.

In the case of Notch-mediated signaling, the signal transduction mechanism can be relatively simple. The activation of Notch can cause the Notch protein to be altered by a protease. Part of the Notch protein is released from the cell surface membrane and takes part in gene regulation. Cell signaling research involves studying the spatial and temporal dynamics of both receptors and the components of signaling pathways that are activated by receptors in various cell types. Emerging methods for single-cell mass-spectrometry analysis promise to enable studying signal transduction with single-cell resolution.

In Notch signaling, direct contact between cells allows for precise control of cell differentiation during embryonic development. In the worm Caenorhabditis elegans, two cells of the developing gonad each have an equal chance of terminally differentiating or becoming a uterine precursor cell that continues to divide. The choice of which cell continues to divide is controlled by competition of cell surface signals. One cell will happen to produce more of a cell surface protein that activates the Notch receptor on the adjacent cell. This activates a feedback loop or system that reduces Notch expression in the cell that will differentiate and increases Notch on the surface of the cell that continues as a stem cell.
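A minimal toy simulation of the lateral-inhibition feedback just described can make the dynamics concrete; the update rule and all constants here are hypothetical simplifications for illustration, not a model of real Notch biochemistry (Python):

    # Two adjacent cells; ligand[i] is the Notch-activating ligand displayed by cell i.
    # Assumed rule: the Notch activation a cell receives from its neighbour's ligand
    # suppresses that cell's own ligand production, so a small initial difference grows.
    ligand = [1.00, 1.05]  # slight initial asymmetry between the two cells
    for _ in range(200):
        received = [ligand[1], ligand[0]]  # each cell senses its neighbour's ligand
        ligand = [max(0.0, l + 0.1 * (1.0 - r) - 0.05 * l)
                  for l, r in zip(ligand, received)]
    print(ligand)  # approaches [0.0, 2.0]: one cell's signaling is shut off while the
                   # other's persists, mirroring the feedback loop that separates the
                   # differentiating cell from the cell that continues as a stem cell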
See also
Scaffold protein
Biosemiotics
Molecular cellular cognition
Crosstalk (biology)
Bacterial outer membrane vesicles
Membrane vesicle trafficking
Host–pathogen interaction
Retinoic acid
JAK-STAT signaling pathway
Imd pathway
Localisation signal
Oscillation
Protein dynamics
Systems biology
Lipid signaling
Redox signaling
Signaling cascade
Cell Signaling Technology – an antibody development and production company
Netpath – a curated resource of signal transduction pathways in humans
Synthetic Biology Open Language
Nanoscale networking – leveraging biological signaling to construct ad hoc in vivo communication networks
Soliton model in neuroscience – physical communication via sound waves in membranes
Temporal feedback

References

Further reading
"The Inside Story of Cell Communication". learn.genetics.utah.edu. Retrieved 2018-10-20.
"When Cell Communication Goes Wrong". learn.genetics.utah.edu. Retrieved 2018-10-24.

External links
NCI-Nature Pathway Interaction Database: authoritative information about signaling pathways in human cells.
Signaling Pathways Project: cell signaling hypothesis generation knowledgebase constructed using biocurated archived transcriptomic and ChIP-Seq datasets

Cell biology Cell communication Systems biology Human female endocrine system
Cell signaling
[ "Biology" ]
6,575
[ "Cell communication", "Cell biology", "Cellular processes", "Systems biology" ]
4,109,196
https://en.wikipedia.org/wiki/Logical%20matrix
A logical matrix, binary matrix, relation matrix, Boolean matrix, or (0, 1)-matrix is a matrix with entries from the Boolean domain B = {0, 1}. Such a matrix can be used to represent a binary relation between a pair of finite sets. It is an important tool in combinatorial mathematics and theoretical computer science.

Matrix representation of a relation
If R is a binary relation between the finite indexed sets X and Y (so R ⊆ X × Y), then R can be represented by the logical matrix M whose row and column indices index the elements of X and Y, respectively, such that the entries of M are defined by

\[ m_{i,j} = \begin{cases} 1 & \text{if } (x_i, y_j) \in R, \\ 0 & \text{if } (x_i, y_j) \notin R. \end{cases} \]

In order to designate the row and column numbers of the matrix, the sets X and Y are indexed with positive integers: i ranges from 1 to the cardinality (size) of X, and j ranges from 1 to the cardinality of Y. See the article on indexed sets for more detail.

Example
The binary relation R on the set {1, 2, 3, 4} is defined so that aRb holds if and only if a divides b evenly, with no remainder. For example, 2R4 holds because 2 divides 4 without leaving a remainder, but 3R4 does not hold because when 3 divides 4, there is a remainder of 1. The following set is the set of pairs for which the relation R holds.

{(1, 1), (1, 2), (1, 3), (1, 4), (2, 2), (2, 4), (3, 3), (4, 4)}.

The corresponding representation as a logical matrix is

\[ \begin{pmatrix} 1 & 1 & 1 & 1 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}, \]

which includes a diagonal of ones, since each number divides itself.

Other examples
A permutation matrix is a (0, 1)-matrix, all of whose columns and rows each have exactly one nonzero element. A Costas array is a special case of a permutation matrix.
An incidence matrix in combinatorics and finite geometry has ones to indicate incidence between points (or vertices) and lines of a geometry, blocks of a block design, or edges of a graph.
A design matrix in analysis of variance is a (0, 1)-matrix with constant row sums.
A logical matrix may represent an adjacency matrix in graph theory: non-symmetric matrices correspond to directed graphs, symmetric matrices to ordinary graphs, and a 1 on the diagonal corresponds to a loop at the corresponding vertex.
The biadjacency matrix of a simple, undirected bipartite graph is a (0, 1)-matrix, and any (0, 1)-matrix arises in this way.
The prime factors of a list of m square-free, n-smooth numbers can be described as an m × π(n) (0, 1)-matrix, where π is the prime-counting function, and aij is 1 if and only if the jth prime divides the ith number. This representation is useful in the quadratic sieve factoring algorithm.
A bitmap image containing pixels in only two colors can be represented as a (0, 1)-matrix in which the zeros represent pixels of one color and the ones represent pixels of the other color.
A binary matrix can be used to check the game rules in the game of Go.
The four-valued logic of two bits, transformed by 2 × 2 logical matrices, forms a transition system.
A recurrence plot and its variants are matrices that show which pairs of points are closer than a certain vicinity threshold in a phase space.

Some properties
The matrix representation of the equality relation on a finite set is the identity matrix I, that is, the matrix whose entries on the diagonal are all 1, while the others are all 0. More generally, if relation R satisfies I ⊆ R (so that every diagonal entry of its matrix is 1), then R is a reflexive relation.

If the Boolean domain is viewed as a semiring, where addition corresponds to logical OR and multiplication to logical AND, the matrix representation of the composition of two relations is equal to the matrix product of the matrix representations of these relations.
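A short Python sketch, illustrative only, that builds the divisibility matrix from the example above and forms the Boolean matrix product used to compose relations:

    # Logical matrix of the divisibility relation on {1, 2, 3, 4}.
    X = [1, 2, 3, 4]
    M = [[1 if b % a == 0 else 0 for b in X] for a in X]
    for row in M:
        print(row)  # [1, 1, 1, 1], [0, 1, 0, 1], [0, 0, 1, 0], [0, 0, 0, 1]

    # Boolean matrix product: entry (i, j) is the OR over k of (A[i][k] AND B[k][j]),
    # which is exactly the logical matrix of the composed relation.
    def boolean_product(A, B):
        return [[int(any(A[i][k] and B[k][j] for k in range(len(B))))
                 for j in range(len(B[0]))] for i in range(len(A))]

    print(boolean_product(M, M) == M)  # True: composing "divides" with itself gives
                                       # "divides" again, since divisibility is
                                       # reflexive and transitive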
This product can be computed in expected time O(n²).

Frequently, operations on binary matrices are defined in terms of modular arithmetic mod 2, that is, the elements are treated as elements of the Galois field GF(2). They arise in a variety of representations and have a number of more restricted special forms. They are applied e.g. in XOR-satisfiability. The number of distinct m-by-n binary matrices is equal to 2^{mn}, and is thus finite.

Lattice
Let n and m be given and let U denote the set of all logical m × n matrices. Then U has a partial order given by componentwise comparison: for matrices A = (a_{ij}) and B = (b_{ij}) in U, A ≤ B when a_{ij} ≤ b_{ij} for all i and j. In fact, U forms a Boolean algebra with the operations AND and OR between two matrices applied component-wise. The complement of a logical matrix is obtained by swapping all zeros and ones for their opposite.

Every logical matrix A = (a_{ij}) has a transpose A^T = (a_{ji}). Suppose A is a logical matrix with no columns or rows identically zero. Then the matrix product, using Boolean arithmetic, A A^T contains the m × m identity matrix, and the product A^T A contains the n × n identity.

As a mathematical structure, the Boolean algebra U forms a lattice ordered by inclusion; additionally it is a multiplicative lattice due to matrix multiplication. Every logical matrix in U corresponds to a binary relation. These listed operations on U, and ordering, correspond to a calculus of relations, where the matrix multiplication represents composition of relations.

Logical vectors
If m or n equals one, then the m × n logical matrix (m_{ij}) is a logical vector or bit string. If m = 1, the vector is a row vector, and if n = 1, it is a column vector. In either case the index equaling 1 is dropped from the notation for the vector.

Suppose P = (p_1, p_2, ..., p_m) and Q = (q_1, q_2, ..., q_n) are two logical vectors. The outer product of P and Q results in an m × n rectangular relation with entries m_{ij} = p_i ∧ q_j. A reordering of the rows and columns of such a matrix can assemble all the ones into a rectangular part of the matrix.

Let h be the vector of all ones. Then if v is an arbitrary logical vector, the relation R = v h^T has constant rows determined by v. In the calculus of relations such an R is called a vector. A particular instance is the universal relation h h^T.

For a given relation R, a maximal rectangular relation contained in R is called a concept in R. Relations may be studied by decomposing into concepts, and then noting the induced concept lattice.

Consider the table of group-like structures, where "unneeded" can be denoted 0, and "required" denoted by 1, forming a logical matrix R. To calculate the elements of R R^T, it is necessary to use the logical inner product of pairs of logical vectors in rows of this matrix. If this inner product is 0, then the rows are orthogonal. In fact, small category is orthogonal to quasigroup, and groupoid is orthogonal to magma. Consequently there are zeros in R R^T, and it fails to be a universal relation.

Row and column sums
Adding up all the ones in a logical matrix may be accomplished in two ways: first summing the rows or first summing the columns. When the row sums are added, the sum is the same as when the column sums are added. In incidence geometry, the matrix is interpreted as an incidence matrix with the rows corresponding to "points" and the columns as "blocks" (generalizing lines made of points). A row sum is called its point degree, and a column sum is the block degree. The sum of point degrees equals the sum of block degrees.
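The equality of the two totals is easy to verify directly; a small illustrative Python sketch, using an arbitrary example matrix:

    # Point degrees (row sums) and block degrees (column sums) of a logical matrix.
    M = [[1, 1, 0, 1],
         [0, 1, 1, 0],
         [1, 0, 0, 1]]

    point_degrees = [sum(row) for row in M]          # [3, 2, 2]
    block_degrees = [sum(col) for col in zip(*M)]    # [2, 2, 1, 2]
    assert sum(point_degrees) == sum(block_degrees)  # both simply count the ones in M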
An early problem in the area was "to find necessary and sufficient conditions for the existence of an incidence structure with given point degrees and block degrees; or in matrix language, for the existence of a (0, 1)-matrix of type v × b with given row and column sums". This problem is solved by the Gale–Ryser theorem.

See also
List of matrices
Binatorix (a binary De Bruijn torus)
Bit array
Disjunct matrix
Redheffer matrix
Truth table
Three-valued logic

Notes

References

External links

Boolean algebra Matrices
Logical matrix
[ "Mathematics" ]
1,654
[ "Boolean algebra", "Mathematical logic", "Mathematical objects", "Matrices (mathematics)", "Fields of abstract algebra" ]
4,109,488
https://en.wikipedia.org/wiki/Urban%20forest
An urban forest is a forest, or a collection of trees, that grow within a city, town or a suburb. In a wider sense, it may include any kind of woody plant vegetation growing in and around human settlements. As opposed to a forest park, whose ecosystems are also inherited from wilderness leftovers, urban forests often lack amenities like public bathrooms, paved paths, or sometimes clear borders which are distinct features of parks. Care and management of urban forests is called urban forestry. Urban forests can be privately and publicly owned. Some municipal forests may be located outside of the town or city to which they belong. Urban forests play an important role in ecology of human habitats in many ways. Aside from the beautification of the urban environment, they offer many benefits like impacting climate and the economy while providing shelter to wildlife and recreational area for city dwellers. Examples In many countries there is a growing understanding of the importance of the natural ecology in urban forests. There are numerous projects underway aimed at restoration and preservation of ecosystems, ranging from simple elimination of leaf-raking and elimination of invasive plants to full-blown reintroduction of original species and riparian ecosystems. Some sources claim that the largest man-made urban forest in the world is located in Johannesburg in South Africa. The city is located in the highveld, a grassland biome typically lacking large numbers of trees, yet Johannesburg is still a very densely wooded city with reportedly 10 million artificially introduced trees and is rated as the city with the eighth highest tree coverage in the world. Rio de Janeiro is also home to two of the vastest urban forests in the world, one of which is considered by some sources to be the largest one. Tijuca Forest is the most famous. It began as a restoration policy in 1844 to conserve the natural remnants of forest and replant in areas previously cleared for sugar and coffee. Despite the worldwide recognition of Tijuca Forest, another forest in the same city encompasses roughly three times the size of its more prominent neighbor: Pedra Branca State Park occupies 12,500 hectares (30,888 acres) of city land, against Tijuca's 3,953 hectares (9,768 acres). The larger metropolitan area encircles the forests which moderate the humid climate and provide sources of recreation for urban dwellers. Along with seven other smaller full protection conservation units in the city, they form an extensive natural area that contains the Transcarioca Trail, a 180-km footpath. Sanjay Gandhi National Park in Mumbai, Maharashtra, India is also an example of an urban forest. It covers roughly around 20% area of the city. The forest is filled with many animals freely roaming around. It also has an important cultural site of ancient history situated in it known as the Kanheri caves. Nebraska National Forest is the largest man-made forest in the United States located in the state of Nebraska. It lies in several counties within the state and is a popular destination for campers year-round. Several cities within the United States have also taken initiative investing in their urban forests to improve the well-being and economies of their communities. Some notable cities among them are Austin, Atlanta, New York, Seattle, and Washington, D.C. New York, for example, has taken initiative to combat climate change by planting millions of trees around the city. 
In Austin, private companies are funding tree-planting campaigns for environmental and energy-saving purposes.

Environmental impact
Urban forests play an important role in benefitting the environmental conditions of their respective cities. They moderate local climate, slowing wind and stormwater, and filter air and sunlight. They are critical in cooling the urban heat island effect, thus potentially reducing the number of unhealthful ozone days that plague major cities in peak summer months.

Air pollution reduction
As cities struggle to comply with air quality standards, trees can help to clean the air. The most serious pollutants in the urban atmosphere are ozone, nitrogen oxides (NOx), sulfur oxides (SOx) and particulate pollution. Ground-level ozone, or smog, is created by chemical reactions between NOx and volatile organic compounds (VOCs) in the presence of sunlight. High temperatures increase the rate of this reaction. Vehicle emissions (especially diesel) and emissions from industrial facilities are the major sources of NOx. Vehicle emissions, industrial emissions, gasoline vapors, chemical solvents, trees and other plants are the major sources of VOCs. Particulate pollution, or particulate matter (PM10 and PM2.5), is made up of microscopic solids or liquid droplets that can be inhaled and retained in lung tissue, causing serious health problems. Most particulate pollution begins as smoke or diesel soot and can pose a serious health risk to people with heart and lung diseases, and cause irritation in healthy citizens. Trees are an important, cost-effective solution to reducing pollution and improving air quality.

Trees reduce temperatures and smog
With an extensive and healthy urban forest, air quality can be drastically improved. Trees help to lower air temperatures and the urban heat island effect in urban areas. This reduction of temperature not only lowers energy use, it also improves air quality, as the formation of ozone is dependent on temperature. Trees reduce temperature not only by direct shading: a large number of trees creates a difference in temperature between the area where they are located and neighboring areas. This creates a difference in atmospheric pressure between the two areas, which generates wind. This phenomenon is called the urban breeze cycle if the forest is near the city and the park breeze cycle if the forest is in the city. That wind helps to lower temperatures in the city.

As temperatures climb, the formation of ozone increases. Healthy urban forests decrease temperatures, and reduce the formation of ozone. Large shade trees can reduce local ambient temperatures by 3 to 5 °C. Maximum mid-day temperature reductions due to trees range from 0.04 °C to 0.2 °C per 1% canopy cover increase. In Sacramento County, California, it was estimated that doubling the canopy cover to five million trees would reduce summer temperatures by 3 degrees. This reduction in temperature would reduce peak ozone levels by as much as 7% and smoggy days by 50%.

Lower temperatures reduce emissions in parking lots
Temperature reduction from shade trees in parking lots lowers the amount of evaporative emissions from parked cars. Unshaded parking lots can be viewed as miniature heat islands, where temperatures can be even higher than surrounding areas. Tree canopies will reduce air temperatures significantly.
Although the bulk of hydrocarbon emissions come from tailpipe exhaust, 16% of hydrocarbon emissions are evaporative emissions that occur when the fuel delivery systems of parked vehicles are heated. These evaporative emissions, and the exhaust emissions of the first few minutes of engine operation, are sensitive to the local microclimate. If cars are shaded in parking lots, evaporative emissions from fuel and volatilized plastics are greatly reduced. Cars parked in lots with 50% canopy cover emit 8% less through evaporative emissions than cars parked in lots with only 8% canopy cover. Because of the positive effects trees have on reducing temperatures and evaporative emissions in parking lots, cities such as Davis, California, have established parking lot ordinances that mandate 50% canopy cover over paved areas.
"Cold Start" emissions
The volatile components of asphalt pavement evaporate more slowly in shaded parking lots and streets. The shade not only reduces emissions but also reduces shrinking and cracking, so that maintenance intervals can be lengthened. Less maintenance means less hot asphalt (fumes) and less heavy equipment (exhaust). The same principle applies to asphalt-based roofing.
Active pollutant removal
Trees also reduce pollution by actively removing it from the atmosphere. Leaf stomata, the pores on the leaf surface, take in polluting gases, which are then absorbed by water inside the leaf. Some species of trees are more susceptible to the uptake of pollution, which can negatively affect plant growth. Ideally, the trees selected should take in higher quantities of polluting gases while being resistant to the negative effects those gases can cause. A study across the Chicago region determined that trees removed approximately 17 tonnes of carbon monoxide (CO), 93 tonnes of sulfur dioxide (SO2), 98 tonnes of nitrogen dioxide (NO2), and 210 tonnes of ozone (O3) in 1991.
Carbon sequestration
Urban forest managers are sometimes interested in the amount of carbon removed from the air and stored in their forest as wood, in relation to the amount of carbon dioxide released into the atmosphere by tree maintenance equipment powered by fossil fuels.
Interception of particulate matter
In addition to taking up harmful gases, trees act as filters, intercepting airborne particles and reducing the amount of harmful particulate matter. The particles are captured by the surface area of the tree and its foliage. These particles rest only temporarily on the surface of the tree: they can be washed off by rainwater, blown off by high winds, or fall to the ground with a dropped leaf. Although trees are only a temporary host to particulate matter, without them the temporarily housed particulate matter would remain airborne and harmful to humans. Increased tree cover increases the amount of particulate matter intercepted from the air, and large evergreen trees with dense foliage collect the most. The Chicago study determined that trees removed approximately 234 tonnes of particulate matter less than 10 micrometres (PM10) in 1991. Large healthy trees greater than 75 cm in trunk diameter remove approximately 70 times more air pollution annually (1.4 kg/yr) than small healthy trees less than 10 cm in diameter (0.02 kg/yr).
Rainwater runoff reduction
Urban forests and trees help purify water sources by slowing rain as it falls to the earth and helping it soak into the soil, thereby naturally filtering out pollutants before they can enter water supply sources. They reduce stormwater runoff and mitigate flood damage, protecting the surrounding rivers and lakes. Trees also help alleviate the load on "grey" infrastructure (such as sewers and drains) via evapotranspiration. Trees are ideally suited to this role: their canopies intercept water (and provide dense vegetation), while their roots draw up substantial amounts of water that is returned to the atmosphere as vapor, all within a relatively small footprint.
Urban wildlife
Trees in urban forests provide food and shelter for wildlife in cities. Birds and small mammals use trees as nesting sites, and reptiles use the shade they provide to keep cool in the hot summer months. Furthermore, trees provide the shade necessary for shrubbery. Urban forests not only protect land animals and plants, they also sustain fish and other aquatic animals that need shade and lower temperatures to survive. Wealthier neighborhoods often have more tree cover (both community-subsidized and on private property), which concentrates these benefits in those sections of a city; a study of neighborhoods in Los Angeles found higher levels of bird diversity in historically richer sections of town, and larger populations of synanthropic birds in historically poorer sections of town.
Economic impacts
The economic benefits of trees and various other plants have long been understood; recently, more of these benefits have been quantified. Quantifying the economic benefits of trees helps justify public and private expenditures to maintain them. One of the most obvious examples of economic utility is the deciduous tree planted to the south and west of a building (in the Northern Hemisphere), or to the north and east (in the Southern Hemisphere). The shade shelters and cools the building during the summer, but allows the sun to warm it in the winter after the leaves fall. The physical effects of trees, including shade (solar regulation), humidity control, wind control, erosion control, evaporative cooling, sound and visual screening, traffic control, pollution absorption and precipitation, all have economic benefits.
Energy and CO2 consumption
Urban forests contribute to the reduction of energy usage and CO2 emissions, primarily through the indirect effects of an efficient forestry implementation. The shade and shelter provided by trees reduce the need for cooling and heating throughout the year; the resulting energy conservation reduces CO2 emissions from power plants. Computer models indicate that annual energy consumption in U.S. urban areas could be reduced by 30 billion kWh with 100 million trees, a decrease equating to monetary savings of $2 billion. Additionally, the reduced energy demand would cut power plant CO2 emissions by 9 million tons per year.
Water filtration
The stormwater retention provided by urban forests can yield monetary savings even in arid regions where water is expensive or water conservation is enforced. One example can be seen in a study carried out over 40 years in Tucson, AZ, which analyzed the savings in stormwater management costs. Over this period, it was calculated that $600,000 in stormwater treatment costs were saved.
It was also observed that net water consumption fell when the water required for irrigation was compared against the power-plant water consumption avoided through the effect of urban forests on energy usage. In another instance, New York City leaders in the late 1990s chose to pursue natural landscape management of the Catskill/Delaware watershed instead of building an expensive water treatment system. New Yorkers today enjoy some of the healthiest drinking water in the world.
Tourism and local business expansion
The USDA Guide notes on page 17 that "Businesses flourish, people linger and shop longer, apartments and office space rent quicker, tenants stay longer, property values increase, new business and industry is attracted" by trees.
Increases in property values
Urban forests have been linked to increased property values for surrounding residents. An empirical study from Finland showed a 4.9% increase in property valuation for properties located one kilometer closer to a forest. Another source claims this increase can range as high as 20%. The reduction of air, light, and noise pollution provided by forests accounts for these notable price differentials.
Sociological impacts
Community health impact
Urban forests offer many benefits to their surrounding communities. Removing pollutants and greenhouse gases from the air is one key reason why cities are adopting the practice; by removing pollutants from the air, urban forests can lower the risks of asthma and lung cancer. Communities that rely on well water may also see a positive change in water purity due to filtration. The amenities each city provides with its urban forest vary; they can include trails and pathways for walking or running, picnic tables, and bathrooms. These healthy spaces give the community a place to gather and to live a more active lifestyle.
Mental health impact
Living near urban forests has been shown to have positive impacts on mental health. As an experimental mental health intervention in the city of Philadelphia, trash was removed from vacant lots, some of which were selectively "greened" by planting trees and grass and installing small fences. Residents near the "greened" lots who had incomes below the poverty line reported a 68% decrease in feelings of depression, while residents with incomes above the poverty line reported a decrease of 41%. The Biophilia hypothesis argues that people are instinctively drawn to nature, while Attention Restoration Theory goes on to demonstrate tangible improvements in medical, academic and other outcomes from access to nature. Proper planning and community involvement are important for these positive results to be realized.
Increased home values and incomes
In addition to providing economic benefits at the community level, trees also benefit individual homeowners. A tree in a home's landscape or around the house can increase the dollar value received for the home upon sale. According to one study, a tree planted in the front yard can increase a home's sale price by $7,130 and raise the sale prices of surrounding homes. Healthy urban forests also correlate with higher incomes: communities with thriving urban forests have higher incomes, more jobs, and higher worker productivity.
See also Green belt found around various urban clusters Million Tree Initiative in multiple urban areas in the world Tree Cities of the World Urban forestry Urban green space Urban prairie Urban reforestation Urban forest inequity 3-30-300 Rule by Cecil Konijnendijk References Notes Bibliography Nowak, D. (2000). Tree Species Selection, Design, and Management to Improve Air Quality Construction Technology. Annual meeting proceedings of the American Society of Landscape Architects (available online, pdf file). Nowak, D. The Effects of Urban Trees on Air Quality USDA Forest Service (available online, pdf file). Nowak, D. (1995). Trees Pollute? A "Tree Explains It All". Proceedings of the 7th National Urban Forest Conference (available online, pdf file). Nowak, D. (1993). Plant Chemical Emissions. Miniature Roseworld 10 (1) (available online, pdf file). Nowak, D. & Wheeler, J. Program Assistant, ICLEI. February 2006. McPherson, E. G. & Simpson, J. R. (2000). Reducing Air Pollution Through Urban Forestry. Proceedings of the 48th meeting of California Pest Council (available online, pdf file). McPherson, E. G., Simpson, J. R. & Scott, K. (2002). Actualizing Microclimate and Air Quality Benefits with Parking Lot Shade Ordinances. Wetter und Leben 4: 98 (available online, pdf file). External links Urban Forestry South Center for Urban Forest Research Urban Forest Ecosystems Institute Urban Forestry USDA Forest Service Northeastern Area Environmental design Forestry and the environment
Urban forest
[ "Engineering" ]
3,561
[ "Environmental design", "Design" ]
4,109,505
https://en.wikipedia.org/wiki/Solstice%20Cyclists
The Solstice Cyclists (also known as The Painted [Naked] Cyclists of the Solstice Parade, or The Painted Cyclists) is an artistic, non-political, clothing-optional bike ride celebrating the summer solstice. It is the unofficial start of the Summer Solstice Parade & Pageant, an event produced by the Fremont Arts Council in the Fremont district of Seattle. The event was started by streakers who crashed the parade. The first people to do so were a small group of friends and roommates from the adjacent (Wallingford) neighborhood, several of whom were bicycle couriers by trade. Participants now emphasize bodypainting and other artistry. The group is the largest and fastest growing ensemble associated with the parade. The parade, put on by Fremont Arts Council, is held on a Saturday close to the actual solstice. Art bikes are common and cycles include BMX bikes, cycle rickshaws, unicycles, clown bicycles, tall bikes, lowrider bicycles, tandem bicycles and tricycles. People come from all over the country to ride. Full and partial (especially topfree) nudity is popular, but not mandatory. While cyclists open the parade, they are not in the parade line-up (except in 2003 when they had a float). Parade rules say "any printed communications, written words, recognizable logos, signage, leaf-letting, or advertising in any form are prohibited on the parade route." Recent events include a pre-ride bodypainting party, a party ride through the city, and the parade itself at noon. Controversy 2001 and subsequent years were controversial for the naked cyclists, including references to them as "parade crashers". In 2001, police and organizers posted laws against indecent exposure to warn of possible prosecution. Organizers claimed cyclists were getting in the way of the event's artistic freedom. An editorial that day (May 17, 2001) in The Seattle Times said: "They have stolen the spotlight on a parade that is supposed to be about art, not about being unclothed. Some Fremonters appear to resent that and do not want the nudists doing this. However, many welcome the cyclists. Neither of them want the cyclists wrestled to the pavement by police, spoiling the atmosphere of their parade." History and media coverage 1992 1992 was the first year cyclists are said to have appeared in the fourth annual Solstice Parade with the Mighty Nice Naked Girls playing these Blue Sousaphones. Most likely these were streakers. 1993 In 1993 7-10 people in the Solstice Parade cycled naked, maybe three of whom were bodypainted. Reference to the second year of naked cyclists: "It could only happen in Fremont", said one of the coordinators, Barbara Luecke. "Only such a rich artistic community could shake off the staid reserve from nearby Ballard to let loose with such creative energy and fun. ... Buck-naked cyclists who streaked the parade for the second year may have crossed the boundary of good taste (would that be Leary Way Northwest?). But one, at least, was wearing a helmet (proof that people in Seattle can get wild, but not too wild)." 1995 Eight naked men were reported to have cycled through the parade: "All the nudes: Overheard at the Fremont Solstice Parade on Saturday was a woman spectator commenting: "Oh, no. Not eight naked men on bicycles. I hate naked men on bicycles." A separate article a week before the parade referred to previous years with naked cyclists. 1997 A cyclist was reported to have hit a child, resulting in the Fremont Arts Council asking police to be present in 1998. 
"Per tradition, there also were naked bicycle riders. They zoomed by so quickly it was hard to tell, um, the type of bike they were riding. 'I wish they had sort of stopped and waved,' said Blue Hesik Lan." 1998 One of the ride's organizers became involved in the ride for the first time, the only one bodypainted in a group of about six. Two 28-year-old naked cyclists were arrested because, according to police, they "cut into the marching order" of the parade. Four police were involved. The city did not file charges because, according to the prosecuting office: "in order to prove indecent exposure, it's necessary to show the person's intent was to be obscene and cause alarm." Also of note is the sightings of nude cyclists in the Capitol Hill neighborhood this year. "Crowds booed when last year's naked riders were arrested and handcuffed." "Bicyclists riding au naturel is nothing new to the quirky parade, which is known for participants in outlandish and sometimes risque costumes. But police say yesterday's arrests were made primarily for safety: The nude bicyclists typically dash quickly in and out of the parade audience." "So, why is the only focus on the nude bikers? They were only a part of the parade for a few minutes. I did not see them." 1999 In the eighth year, a second-time rider hosted a bodypainting party at her Wallingford residence in response to SPD's actions of 1998 and friction between the Fremont Arts Council and Mark Sidran/City of Seattle. Twenty-odd friends gathered to get painted and ride together to the parade, including a woman who wore a 3-buttcheek bodysuit costume rather than paint. Members of the Fremont Arts Council launched a spoof of the naked bicyclists as well. Wearing flesh-colored bodysuits with exaggerated body parts sewn on, they cycled down the parade route while two bicyclists pretending to be police officers gave chase. When the truly naked cyclists showed up, they blended right in with their Fremont Arts Council bodysuit imposters. "And, of course, there were the infamous and crowd-pleasing nude bikers, a regular attraction eagerly awaited by the parade watchers. ... 'This is not authorized by the organizers,' said Steve Lynch, one of the volunteers responsible for safety and order during the event. 'But it's just for fun, so no interventions.'" "Here in the self-anointed center of the universe, where the Waiting for the Interurban sculptures wear more clothing than the nude cyclists who grace the annual Solstice Parade, high-tech is moving in." "Meanwhile, Hadrann says the scent of rebellion is in the air in Fremont - or maybe it's just another rumor. 'Some people in the community are going to get nude if he (Sidran) starts arresting the cyclists,' he says. ... 'First, there was 50, now there's like 100 people. . . . Who knows what kind of chain reaction this is going to bring.'" This article also includes Seattle Police Department Lt. Mark Kuehn's suggestions for safety for nude cyclists such as: "Refrain from trying out saddles in the nude, for obvious sanitary reasons. Hadrann suggests shoppers take along a few pairs of Chinese disposable underwear (made of paper) for saddle-buying expeditions." "The council decided this week against posting 'no nudity' signs for the neighborhood's arts parade, where two men were arrested for naked bike riding last year. Police had asked that the signs be posted for this year's parade, set for Saturday. ...Council President Bradley Erhlich said the public nudity might be a form of artistic expression. ... 
'If it is art, then the Arts Council should support them,' Erhlich said. ... Crowds booed when last year's naked riders were arrested and handcuffed." 2000 In the nude cyclists' ninth year, bodypainting artist Steven Bradford joined the bodypainting team and assisted in transforming 4 women into Fire, Earth, Air, and Water at the painting party at Fire's home. 2001 In 2001, according to The Seattle Times, there were 50 cyclists, mostly in bodypaint. To the amusement of many, this year an artist had a painting in the parade showing a naked female bicyclist next to a baton-wielding police officer. The pose itself could have either shown the apprehension or the cop gleefully stopping for a picture next to the bicyclist. The panel was put on a small platform on wheels and parade goers were invited to have their pictures taken with their heads poking out of the holes of the naked bicyclists and the officer. In 2001, the city threatened to withdraw the event permit for the Fremont Arts Council because of the nudity. Signs were actually made warning naked cyclists that they may be subject to arrest. The city ended up backing off before the event day. Fremont Arts Council parade organizers urged riders to participate within the artistic spirit of the event. Many locals were very upset that the city would threaten to arrest one of the parade's most popular and creative ensembles. The blowback effect, as predicted by Seattle City Council Chair Nick Licata, ended up being more publicity and popularity for the cyclists which, in turn, led to more cyclists wanting to join the ensemble. In efforts to combat this effect, the Seattle City Council was invited by the Fremont Arts Council to participate in the parade. Nick Licata was the only one who agreed and ended up cycling through as the "un-naked cyclist". After jeers of "Take your clothes off" he was met by a parade monitor who told him to get off the parade route, stating "Yeh? We still don't have bike riding in the parade. If one person rides then others will and then the whole parade will have bikes riding all over the place." Licata later lamented in a Seattle Times article, "I was waving to the photographer - smack in the middle of a pack of painted, naked bicyclists." "There was no better illustration of the fair's quirkiness than in its parade - with its wild costumes, floats and giant puppets - and nude bicyclists, which led to a flap over the permit for this year's parade. ... Before the city issued this year's parade permit, police said they have gotten numerous complaints about the nude cyclists every year. They asked the Fremont Arts Council to post signs along the parade route warning cyclists, who are not a sanctioned part of the parade, about laws against indecent exposure. The council said no, even though members discouraged the nudity. ... In 1998, two bikers in the buff were arrested. None were arrested this year." 2002 "What solstice is complete without nude cyclists? To get your annual fix, see the Fremont Summer Solstice Parade and Fair on Saturday and Sunday." "As has been the tradition, a number of unauthorized naked bicycle riders start the parade. Last year there were 50 — most in body paint." 2003 2003 marked the twelfth year of naked cyclists taking part in the Solstice Parade. The parade took place on June 21, 2003. Numbers quadrupled from previous years to between 75 and 80 riders. An internet discussion forum was established for the first time. 
The bodypainting party took place at the host's house in the Ravenna neighborhood, with a photo shoot at Cowen Park. The procession then headed south through the University District on Roosevelt and then along 45th through Wallingford to Phinney Ridge. This was also the first year that the cyclists were officially part of the parade, with their Helios-themed float, which several cyclists (partially dressed) climbed aboard after they cycled through the parade. The float featured wispy clouds and gold-painted "chariot" exercise bikes to evoke a sense of pulling the sun through the summer. Ironically, toward the end of the parade, and despite all the "Happy Solstice" chants, the sky clouded over and it began to rain. Two digital video films were produced from footage of this year's event. One is called Naked & Painted: The Fremont Solstice Riders 2003 and is sold to friends and future potential riders, with proceeds going to a local charity. The other, Solstice: A Celebration of the Art of Bodypainting, was produced by James W. Taylor/Circle Rock Productions and premiered at the Naked Freedom Film Festival, held at the Seattle Art Museum on May 15, 2004. Unusually cool weather this year resulted in a number of weather-themed paint jobs. Also in 2003, much publicity focused on David Zaitzeff's determination to walk naked through the Solstice Parade. Zaitzeff sued Seattle police Chief Gil Kerlikowske in a federal lawsuit because he "desires to go nude at the Fremont Solstice Parade without fear of unjust arrest". U.S. District Judge Robert Lasnik said that because Zaitzeff had not been arrested for indecent exposure, the court couldn't make a prospective ruling on the matter. Much later in the year there was a suggestion that the group become part of a larger international naked bike ride, later known as the World Naked Bike Ride (WNBR). The idea was unpopular because the Solstice Parade, unlike WNBR, is a non-political arts event. Secondary objections included that WNBR would be a less spontaneous event and that some riders might be less inclined to participate in an artistic way.
2004
The parade took place on June 19, 2004. About 116 cyclists participated, setting a new record. The main group started its ride from the pre-parade bodypainting party at the old Segway building in Ballard. (The building was later demolished to make way for Ballard Civic Center Park.) The ride proceeded down NW Market Street to Leary Way to the parade. The cyclists did not have a float in the parade in 2004, but there were more elaborate art installations on bikes. 2004 also marked the beginning of the Synchronised Cycling Drill Team within the group. The year's theme was Noah's Ark animals. One of the cyclists provided rides to children along the parade route in her cycle rickshaw. A week prior to the event, on June 12, the first annual World Naked Bike Ride Seattle event took place, the first time a major naked cycling event had crossed the channel into downtown Seattle. This ride featured a pre-ride bodypainting party at Gas Works Park, where the end of the painted cyclists' ride traditionally took place.
2005
The parade took place on June 18, 2005. Approximately 138 cyclists left the bodypainting party on the south side of the Lake Washington Ship Canal, and once joined by those waiting at the parade, their numbers probably grew to around 160. Part of the ride included going down the Ballard Bridge on 15th Avenue and turning again onto NW Market Street.
About five cyclists broke off from the group after the end of the parade ride and rode around Green Lake and came back to Fremont. One of the big controversies in 2005 was the Fremont Arts Council excluding People Undergoing Real Experiences (PURE) (now known as Pure cirkus) from dressing "up as pirates with two people suspended on a pirate ship float from hooks in their skin" as they go through the parade. Much of the media noted that while the naked cyclists are tolerated and widely popular, this has become the new controversial area for the council. A week later, a third painted ride, called the Body Pride Ride, was started by one of the painted cyclists, and took place for the first time in the Seattle Gay Pride Parade on Capitol Hill. A WNBR mini-ride in September marked 2005 as a record-setting year not only for the number of painted cyclists participating, but also for doubling the number of painted naked rides in Seattle to a total of four. "If bike riders rode nude in a Los Angeles summer solstice celebration, the LAPD would shoot them dead, after a 'slow speed' chase televised on all 28 local channels." "Really, that's just the crazy naked bicyclists who precede the parade every year. They get all the press, all the hype, all the lasting impressions. People who work on the parade openly despise them. ... The nude bikers take away from all the legitimate art that volunteers spend countless hours creating. With one exhibitional blow, months of hard work by solstice parade artists is knocked from our collective conscious." 2006 The 18th Annual Summer Solstice Parade & Pageant, on June 17, 2006, marked the 15th year that naked cyclists have participated in the parade. On March 27, 2006, the Painted Cyclists went public with a public portal website: The Painted Cyclists. KUOW-FM in Seattle did an interview for their program called Weekday with the cyclists at the Ballard neighborhood where the bodypainting party was taking place, taking up at least three residential lots. The interview was reportedly about cycling safety in Seattle. This is confirmed by several cyclists and pictures taken at the bodypainting party. The segment aired on June 22, 2006. 2007 The 19th Annual Solstice Parade took place on June 16, 2007, marking the sixteenth consecutive year the painted cyclists have ridden in the parade. 267 cyclists took to the streets at 11:45 a.m. with little or no confrontation, legal or otherwise. The annual painting party took place at a nearby residence on NW 48th Street in Ballard. For the first time, a video presented by a painted inline skater was shot at the painting party and posted to the website of The Stranger newspaper. 2008 The 20th Annual Solstice Parade took place on June 21, 2008, marking the 17th consecutive year that the painted cyclists have accompanied. Slightly fewer cyclists than in previous years rode, and tension between both the Seattle police and the Fremont Arts Council was minimal. The painting party took place this time in Belltown, meaning the cyclists had to ride a full three miles through the Seattle neighborhood of Queen Anne to get to Fremont. Painting parties were also going on independently of the main party downtown, so riders had to coordinate meeting up at a common location before entering the parade. After making a surprise entrance by entering through the crowd at the middle of the parade route, on Fremont Avenue (which had never been done before), riders were initially rerouted because of the timing of the permits. 
They later reentered the parade route closer to the traditional starting point and proceeded through the parade, ending the ride at Gas Works Park. 2009 The 21st Annual Solstice Parade took place on June 20, 2009, marking the 18th consecutive year of the painted cyclists. The painting party took place at Hale's Ales in Ballard, and attracted an estimated 430 cyclists, plus painters. After riding through Ballard and watching their numbers swell as riders from independent paint parties joined the group, the riders traversed the parade route in Fremont, ending once again at Gas Works Park. Media coverage included an article and video by the Seattle P-I. Ensembles included the "Stimulus Package" group, appropriate for a year of controversial economic bailouts. This year was also the first year of the Gardens Everywhere Bike Parade. 2010 The 22nd Annual Solstice Parade took place on Saturday, June 19, 2010, marking the nineteenth consecutive year of the painted (and some not-so-painted) solstice cyclists. The painting party again took place at Hale's Ales in Ballard, and attracted hundreds of cyclists, plus painters. Then they jumped on their bikes and headed to Ballard for a warm-up ride in the relatively chilly mid-50s air, surprising unsuspecting drivers and whooping it up down Market Street before returning for the noon start of the parade, where the riders completed the parade route in Fremont. An optional repeat loop-back plan through part of the route was added in 2010, designed to extend the experience for both the riders who opted for it as well as the crowd lining the streets, with a side benefit of minimizing any time gap between the end of the cyclists and the start of the parade proper. That plan met with some confusion due to communication issues with parade officials, and therefore mixed results, but riders vowed to remedy that in 2011. The cyclists ended at the now-traditional clothing-optional "victory celebration" at and around Kite Hill in Gas Works Park. 2011 The parade took place on June 18, the day before Fathers Day. The skies were overcast and the temperature was in the mid-50s with intermittent misty light rain—for the second year in a row, the third Saturday in June was unseasonably cool. But that did not deter the 600+ riders nor curb their enthusiasm. For many, the day started at the old Ballard Library building, which had been rented as the central location for body painting. Earlier in the week, plastic had been laid on the floor and taped into one big surface, then tables set out to delineate separate painting areas. Aisles were painted on the floor in orange with "Keep Clear" to keep the fire marshal happy and facilitate movement around the area; bumping into someone covered in paint leaves a mark. In the northwest corner was the film crew for Beyond Naked, a documentary about four first time riders. The parade riders gathered in the parking lot next to the library awaiting the go-ahead for the trip through Ballard. Most participants shivered in the cold while logistics were confirmed. Then, off through Ballard, to the amazement of unsuspecting pedestrians, many whooping and hollering encouragement. From there the parade snaked through Fremont in a circuitous route, getting longer at each pass, until it finally ended at Gas Works Park. 2012 The solstice cyclists rode for the 21st consecutive year, starting out by hosting their painting party at the Old Ballard Library, on NW 57th St. and 24th Avenue (the same place as the previous year). 
The weather was overcast with temperatures in the 60s.
2013
After a morning paint party at a marine hangar in Seattle's Shilshole neighborhood, hundreds of cyclists and skaters rode through Ballard, pausing for a group photo at Ross Park, before entering the parade course in Fremont at 2:45 pm. For the first time in several years the weather was hot and sunny; as a result the crowd watching the parade was especially dense, and the number of participants was much larger than in previous years, with estimates ranging from several hundred to 1,000.
2015
A small group of longtime organizers once again hosted a mass paint party in the Shilshole neighborhood. The organizers took a video of the painted cyclists heading out through the gate and counted well over 945 cyclists at this paint party alone. En route to the parade starting point they were joined by hundreds of other painted cyclists, who swelled the ranks at each passing intersection. The artistry displayed on the naked riders' bodies has become more intricate each year, with some riders bringing their own personal painters to the party. Donations were collected at the entrance, and after all expenses were paid the paint party volunteers donated $4,500 to the Fremont Arts Council to help pay the parade expenses.
2016
In the weeks prior to the parade this year there was some friction between the Fremont Arts Council, the official parade organizers, and the cyclist organizers, a non-hierarchical consensus group that comes together once a year and has no formal organization. The FAC wanted the cyclists to become official members of the parade under its control. The cyclist group, essentially anarchist in the way it operates, refused; the two sides agreed to disagree and things went on as usual with close cooperation between the groups. The numbers dwindled this year as the day dawned cold and damp. A seasoned group of 10 volunteers once again opened the gates to the large group paint party at a private marina in the 4100 block of Shilshole Ave. N.W. At the entry portal donations were taken, and only cyclists to be painted, or their painters, were allowed to enter. Over several hours 600 clothed cyclists entered, and at noon 600 cold (but exuberant) painted naked cyclists exited. The group rode through the Ballard neighborhood and then on to the Fremont Solstice Parade route, where they were joined by approximately 100 more painted cyclists from private paint parties. After subtracting expenses, the organized paint party group was able to donate $2,000 to the Fremont Arts Council to help defray the parade costs the following spring.
2019
The warm and sunny weather brought over 700 people to the main cyclist paint party this year. As in past years, the painted, (mostly) naked cyclists filed out of the boat marina in Ballard and rode through the neighborhood's streets. As the riders entered the parade route, prior to the start of the actual parade, they were not allowed to circulate on the route as had been arranged beforehand with the official Fremont Solstice Parade organizers. Instead they were forced to ride straight through the route with no slow looping back, as had been the custom and agreed-upon pattern for many years. This left the artists who had spent hours painting, and those who had come to view the parade and the cyclists, with a less than satisfactory experience.
The alliance between the free-spirited biking artists and the Fremont Arts Council who holds the permit for the actual parade is rapidly fraying at the edges and the outcome remains to be seen. The cyclist paint party group donated $4,000 to the Fremont Arts Council to help toward the cost of the parade. 2020 and 2021 There was no parade either year due to COVID-19, but a small group of cyclists met up at Gasworks Park and rode through the city. The parade restarted in 2022. See also References External links Recurring events established in 1992 Culture of Seattle Festivals in Seattle Fremont, Seattle Naked cycling events Pacific Northwest art Clothing-optional events Unofficial observances 1992 establishments in Washington (state) Summer solstice
Solstice Cyclists
[ "Astronomy" ]
5,333
[ "Time in astronomy", "Summer solstice" ]
4,109,541
https://en.wikipedia.org/wiki/On-board%20data%20handling
The on-board data handling (OBDH) subsystem of a spacecraft is the subsystem which carries and stores data between the various electronics units and the ground segment, via the telemetry, tracking and command (TT&C) subsystem. In the earlier decades of the space industry, the OBDH function was usually considered a part of the TT&C, particularly before computers became common on board. In recent years, the OBDH function has expanded so much that it is generally considered a subsystem separate from the TT&C, which is these days concerned solely with the RF link between the ground and the spacecraft. Functions commonly performed by the OBDH are:
Reception, error correction and decoding of telecommands (TCs) from the TT&C
Forwarding of telecommands for execution by the target avionics
Storage of telecommands until a defined time ("time-tagged" TCs)
Storage of telecommands until a defined position ("position-tagged" TCs)
Measurement of discrete values such as voltages, temperatures, binary statuses, etc.
Collection of measurements made by other units and subsystems via one or more data buses, such as MIL-STD-1553
Real-time buffering of the measurements in a data pool
Provision of a processing capability to achieve the aims of the mission, often using the data collected
Collation and encoding of pre-defined telemetry frames
Storage of telemetry frames in a mass memory
Downlinking of telemetry to the ground, via the TT&C
Management and distribution of time signals
Telecommand reception
The OBDH receives the TCs as a synchronous PCM data stream from the TT&C.
Telecommand execution
The desired effect of the telecommand may be simply to change a value in the on-board software, to open or close a latching relay to reconfigure or power a unit, or to fire a thruster or main engine. Whichever effect is desired, the OBDH subsystem facilitates it either by sending an electric pulse from the OBC or by passing the command over a data bus to the unit which will eventually execute the TC. Some TCs are part of a large block of commands used to upload updated software or data tables, to fine-tune the operation of the spacecraft, or to deal with anomalies.
Time-tagged telecommands
It is often required to delay a command's execution until a certain time, usually because the spacecraft is not in view of the ground station, but sometimes for reasons of precision. The OBC stores the TC in a queue until the required time, and then executes it.
Position-tagged telecommands
Similar to time-tagged commands are commands that are stored for execution until the spacecraft is at a specified position. These are most useful for Earth observation satellites, which need to start an observation over a specified point of the Earth's surface. Such spacecraft, often in Sun-synchronous orbits, follow a precisely repeating track over the Earth. Observations taken from the same position may be compared using interferometry, if they are in close enough register. The precise position required is sensed using GPS. Once a position-tagged command has been executed, it may be flagged for deletion or left to execute again when the spacecraft is next over the same point.
Processing function
The modern OBDH always uses an on-board computer (OBC) that is reliable, usually with redundant processors. The processing power is made available to other applications which support the spacecraft bus, such as attitude control algorithms, thermal control, and failure detection, isolation and recovery.
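As an illustration of the time-tagged telecommand handling described above, here is a minimal sketch of an on-board command queue. The class and field names are hypothetical, and a real OBC implementation would add validation, persistence, and fault handling:

```python
import heapq

class TimeTaggedQueue:
    """Minimal sketch of a time-tagged telecommand queue (illustrative only)."""

    def __init__(self):
        self._queue = []   # min-heap ordered by execution time
        self._seq = 0      # tie-breaker so commands with equal times stay FIFO

    def store(self, execute_at, telecommand):
        # Store a TC until its tagged on-board time is reached.
        heapq.heappush(self._queue, (execute_at, self._seq, telecommand))
        self._seq += 1

    def due(self, obt):
        # Pop and return every TC whose tagged time <= the current on-board time.
        ready = []
        while self._queue and self._queue[0][0] <= obt:
            ready.append(heapq.heappop(self._queue)[2])
        return ready

# Usage: queue two commands, then poll at on-board time 105.
q = TimeTaggedQueue()
q.store(100, "FIRE_THRUSTER_A")
q.store(200, "OPEN_RELAY_K3")
print(q.due(105))   # -> ['FIRE_THRUSTER_A']
```

A position-tagged queue would work the same way, keyed on a sensed GPS position rather than on-board time, with the extra option of re-queueing a command after execution.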
If the mission itself requires only a small amount of computing power (such as a small scientific satellite) then the payload may also be controlled by the software running on the OBC, to save launch mass and the considerable expense of a dedicated payload computer. See also Spacecraft bus References External links https://ecss.nl/standard/ecss-e-st-50-04c-space-data-links-telecommand-protocols-synchronization-and-channel-coding/ Avionics
On-board data handling
[ "Technology" ]
870
[ "Avionics", "Aircraft instruments" ]
4,109,643
https://en.wikipedia.org/wiki/Tiffeneau%E2%80%93Demjanov%20rearrangement
The Tiffeneau–Demjanov rearrangement is the chemical reaction of a 1-aminomethyl-cycloalkanol with nitrous acid to form an enlarged cycloketone. The Tiffeneau–Demjanov ring expansion, Tiffeneau–Demjanov rearrangement, or TDR, provides an easy way to enlarge amino-substituted cycloalkanes and cycloalkanols by one carbon. Ring sizes from cyclopropane through cyclooctane are able to undergo Tiffeneau–Demjanov ring expansion with some degree of success. Yields decrease as the initial ring size increases, and the ideal use of TDR is for the synthesis of five-, six-, and seven-membered rings. A principal synthetic application of Tiffeneau–Demjanov ring expansion is to bicyclic or polycyclic systems. Several reviews on this reaction have been published.
Discovery
The reaction now known as the Tiffeneau–Demjanov rearrangement (TDR) was discovered in two steps. The first step occurred in 1901, when Russian chemist Nikolai Demyanov discovered that aminomethylcycloalkanes produce novel products upon treatment with nitrous acid. When this product was identified as the ring-expanded alcohol in 1903, the name Demjanov rearrangement was coined. The Demjanov rearrangement itself has since been used successfully in industry and synthetic organic chemistry. However, its scope is limited: it is best suited only for expanding four-, five-, and six-membered aminomethylcycloalkanes; alkenes and un-expanded cycloalcohols form as by-products; and yields diminish as the starting cycloalkane becomes larger. A discovery by French scientists a few years before World War II would result in the modern TDR reaction. In 1937, Marc Tiffeneau, Weill, and Tchoubar published in Comptes Rendus their finding that 1-aminomethylcyclohexanol converts readily to cycloheptanone upon treatment with nitrous acid. Perhaps because such a large ring was being expanded, the authors did not immediately relate their reaction to the Demjanov rearrangement. Instead, they envisioned that it was similar to one discovered by Wallach in 1906: upon oxidation with permanganate, cycloglycols dehydrate to yield an aldehyde via an epoxide intermediate. The authors postulated that deamination resulted in a similar epoxide intermediate that subsequently formed a ring-enlarged cycloketone. However, in the time that followed, scientists began to realize that these reactions were related, and by the early 1940s TDR was in the organic vernacular. Tiffeneau's discovery enlarged the synthetic scope of the Demjanov rearrangement, as seven- and eight-carbon rings could now be enlarged. Since the resulting cycloketone could easily be converted to a cycloaminoalcohol again, the new method quickly became popular among organic chemists.
Basic mechanism
The basic reaction mechanism is diazotization of the amino group by nitrous acid, followed by expulsion of nitrogen and formation of a primary carbocation. A rearrangement reaction with ring expansion forms a more stable oxonium ion, which is deprotonated.
Early development of mechanism
Although chemists at the time knew very well what the product of a symmetrical 1-aminomethylcycloalkanol would be when exposed to nitrous acid, there was significant debate over the reaction's mechanism that lasted into the 1980s. Scientists were puzzled by the array of products they would obtain when the reaction was performed on unsymmetrical 1-aminomethylcycloalkanols and bridged cyclo-systems.
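Before turning to the mechanistic debate, the overall transformation from the Basic mechanism section can be summarized in a balanced equation for the parent case; this is a standard stoichiometry written out for illustration, not an equation taken from the sources cited:

$$\mathrm{C_6H_{10}(OH)(CH_2NH_2)} \;+\; \mathrm{HNO_2} \;\longrightarrow\; \mathrm{C_7H_{12}O} \;+\; \mathrm{N_2} \;+\; 2\,\mathrm{H_2O}$$

Here $\mathrm{C_6H_{10}(OH)(CH_2NH_2)}$ is 1-aminomethylcyclohexanol and $\mathrm{C_7H_{12}O}$ is cycloheptanone: the diazonium intermediate loses $\mathrm{N_2}$, and ring expansion followed by deprotonation of the oxonium ion delivers the ring-enlarged ketone.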
Even today, experiments continue that are designed to shed light on the more subtle mechanistic features of this reaction and to increase yields of the desired expanded products. In 1960, Peter A. S. Smith and Donald R. Baer, both of the University of Michigan, published a treatise on the TDR. The mechanism they proposed provides an excellent perspective on scientists' understanding of the TDR at that time. The mechanism proposed by Baer and Smith was the summation of several sources of information. Since the early 1950s, it had been postulated by many that the TDR mechanism involved a carbonium ion. However, a major breakthrough in the development of the TDR mechanism came with improved understanding of the phenomenon of amine groups reacting with nitrous acid. Meticulous kinetic studies throughout the late 1950s led scientists to believe that nitrous acid reacts with an amine by first producing a nitrous acid derivative, potentially N2O3. While this derivative would prove incorrect as it relates to the TDR, scientists of the time still correctly concluded that the derivative would react with the amine to produce the diazonium ion. The inferred instability of this diazonium ion gave solid evidence for the existence of a carbocation in the TDR mechanism. Another piece of information with implications for the mechanism of the TDR was the simple fact that the reaction proceeds more easily under the conditions discovered by Tiffeneau. With a hydroxyl group on the ring carbon of the reagent, reaction rates and yields are much improved over those of the simple Demjanov rearrangement. Moreover, few unwanted by-products, such as olefins, are formed. These observations were the center around which Smith and Baer's mechanism was constructed. It is easy to see that hydrogen's presence would mean that hydride shifts occur in competition with carbon shifts during the course of the reaction. Moreover, such a shift is likely, as it would move positive charge from a 1° carbon to a 3° carbon. In a mildly basic solvent such as water, this new intermediate could easily produce an olefin by an E1-like reaction. Such olefins are typically seen in simple Demjanov rearrangements but are not seen in the TDR. The alcohol's presence explains why this E1 reaction does not occur. Moreover, having an alcohol present puts the developing positive charge of the ring-enlarged intermediate next to an oxygen. This is more favorable than hydrogen, since oxygen can lend electron density to the carbonium ion via resonance. This again favors ring expansion and helps explain the higher yields of the TDR over the Demjanov rearrangement. Smith and Baer's mechanism was also able to account for other observations of the time. Tiffeneau–Demjanov rearrangements of 1-aminomethylcycloalkanols with alkyl substitutions on the aminomethyl side chain had been accomplished by many scientists before 1960. Smith and Baer investigated how such substitution affects the TDR by synthesizing various 1-hydroxycyclohexylbenzylamines and exposing them to TDR conditions. Seeing as six-membered rings are routinely enlarged by the TDR, one might expect the reaction to occur; however, instead of the anticipated ring enlargements, only diols are seen as products. Five-membered analogues of the above substituted reagents do enlarge under TDR conditions, and alkyl substitutions, as opposed to aryl substitutions, result in diminished TDRs.
Smith and Baer assert that these observations support their mechanism. Since substitution stabilizes the carbonium ion formed after deamination, the resulting carbonium ion is more likely to react with a nucleophile present (water in this case) and not undergo rearrangement. Five-membered rings rearrange because ring strain encourages the maneuver: the strain makes the carbocation unstable enough to cause a carbon to shift.
Problems with the early mechanism
As definitive as Smith and Baer's early mechanism seems, there are several phenomena it did not account for. The problems with their mechanism mainly concerned TDR precursors with alkyl substituents on the ring. When such a substituent is placed on the ring so as to keep the molecule symmetric, one product is formed upon exposure to TDR conditions. However, if the alkyl group is placed on the ring so as to make the molecule unsymmetric, several products can form. The principal method for synthesizing the starting amino alcohols is through the addition of cyanide anion to a cyclic ketone; the resulting hydroxynitrile is then reduced, forming the desired amino alcohol. This method forms diastereomers, possibly affecting the regioselectivity of the reaction. For nearly all asymmetric precursors, one product isomer is formed preferentially to another. As TDR was routinely being used to synthesize various steroids and bicyclic compounds, the precursors were rarely symmetric; as a result, a lot of time was spent identifying and separating products. At the time, this phenomenon baffled chemists. Owing to spectroscopic and separation limitations, it was very difficult for scientists to probe this aspect of the TDR in a sophisticated way. However, most believed that preferential product formation was governed by the migratory aptitudes of the competing carbons and/or by steric control. Migratory aptitude referred to the possibility that the preferred product of the reaction resulted from one carbon having an inherent tendency to migrate in preference to another. This possibility was the belief, and the subject of research, of earlier scientists, including Marc Tiffeneau himself. However, in the early 1960s, more and more scientists were starting to think that steric factors were the driving force behind the selectivity of this reaction.
Sterics and stereochemistry in the mechanism
As chemists continued to probe this reaction with ever more advanced technology and methods, other factors began to be tabled as possible controls on product formation from unsymmetrical amino alcohols. In 1963, Jones and Price of the University of Toronto demonstrated how remote substituents in steroids play a role in product distribution. In 1968, Carlson and Behn of the University of Kansas discovered that experimental conditions also play a role: they established that in ring extension via the TDR, the initial temperature and the concentrations of reagents all influence the eventual product distribution. Indeed, other avenues of the TDR were being explored and charted. However, Carlson and Behn also reported a significant breakthrough in the realm of sterics and migratory aptitudes as they relate to the TDR. As might be expected on electronic grounds, the more highly substituted carbon should migrate preferentially to a less substituted carbon. However, this is not always seen, and accounts of migratory aptitudes often show fickle preferences.
Thus, the authors assert that such aptitudes are of little importance. Sterically, thanks chiefly to improved spectroscopic methods, they were able to confirm that having the amine group equatorial to the alkane ring corresponded to drastically different product yields. According to the authors, the preferential formation of D from A does not reflect a preferred conformation of A. Their modeling indicates that both A and B are initially just as likely to become C. They conclude that a steric interaction must develop in the transition state during migration which makes A preferentially form D when exposed to TDR conditions. The idea that sterics played a role during migration, and not just at the beginning of the reaction, was new. Carlson and Behn speculate that the cause might lie in transannular hydrogen interactions along the path of migration; their modeling suggested that this interaction may be more severe for A forming C. However, they were not certain enough to offer this as a definitive explanation, conceding that more subtle conformational and/or electronic effects could be at work as well. At this point, the mechanism proposed by Smith and Baer seemed to be on its way out: if steric interactions arising in the transition state during carbon migration were important, this did not support the carbocation envisioned by Smith and Baer. Research on bicyclics during the 1970s would shed even more light on the TDR mechanism. In 1973, McKinney and Patel of Marquette University published an article in which they used the TDR to expand norcamphor and dehydronorcamphor. Two of their observations are important. One centers on the expansion of exo- and endo-2-norbornylcarbinyl systems. One might expect in (I) that A would migrate in preference to B, seeing as such a migration would place the developing charge on a 2° carbon and pass the species through a more favorable chair-like intermediate. This is not seen: only 38% of the product exhibits A migration. To account for why A migration is not dominant in the expansion of I, the authors advance a least-movement argument. Simply put, migration of the non-bridgehead carbon involves the least total atomic movement, which plays into the energetics of the reaction. This least-movement consideration would prove important in the TDR mechanism, as it accounts for products whose intermediates pass through unfavorable conformations. However, McKinney and Patel also confirmed that traditional factors, such as the stability of the developing positive charge, still play a crucial role in the direction of expansion. They showed this by expanding 2-norbornenylcarbinyl systems: by adding a simple double bond to these systems, the authors saw a significant increase in the migration of the bridgehead carbon A (50% in this case). The authors attribute this jump in migration to the fact that migration of the bridgehead carbon allows the developing positive charge to be stabilized by resonance with the double bond. Therefore, carbocation and positive-charge effects cannot be ignored in discussing the factors that influence product distribution.
Later mechanistic studies
As evidence continued to mount in the years after Smith and Baer's 1960 publication, it became obvious that the TDR mechanism would need revisiting. This new mechanism would have to de-emphasize the carbocation, since other factors also influence ring expansion.
The orientation of the developing diazonium ion, the possibility of steric interactions during the reaction, and atomic movement would all have to be included. In 1982, Cooper and Jenner published such a mechanism, and it has stood to this day as the current understanding of the TDR. The most obvious departure from Smith and Baer's mechanism is that Cooper and Jenner represent the diazonium departure and the subsequent alkyl shift as a single concerted step. Such a feature allows sterics, orientation, and atomic movement to be factors. However, the distribution of positive charge is still important in this mechanism, as it explains much of the observed behavior of the TDR. Note also that no ranking is given to these factors in the mechanism; that is to say, even today it is very difficult to predict which carbon will migrate preferentially. Indeed, the TDR has become more useful as spectroscopic and separation techniques have advanced, since such advances allow the quick identification and isolation of desired products. Since the mid-1980s, most organic chemists have accepted that the TDR is governed by several factors that often seem fickle in their relative importance. As a result, much research is now directed toward developing techniques that increase the migration of a specific carbon. One example of such an effort has recently come out of the University of Melbourne. Noting that group 14 element substituents can stabilize positive charge β to them, Chow, McClure, and White attempted to use this effect to direct TDRs in 2004. They hypothesized that placing a trimethylsilyl group β to a carbon that can migrate would increase such migration. Their results show that this does occur, to a small extent. The authors believe that the reason the carbon migration increases only slightly is that positive charge is not a large factor in displacing the diazonium ion: since this ion is such a good leaving group, it requires very little "push" from the developing carbon–carbon bond. Their results again highlight the fact that multiple factors determine the direction of carbon migration.
See also
Demjanov rearrangement
Pinacol rearrangement
References
Rearrangement reactions Ring expansion reactions Name reactions
Tiffeneau–Demjanov rearrangement
[ "Chemistry" ]
3,398
[ "Name reactions", "Ring expansion reactions", "Rearrangement reactions", "Organic reactions" ]
4,109,750
https://en.wikipedia.org/wiki/Nacrite
Nacrite Al2Si2O5(OH)4 is a clay mineral that is polymorphous (or polytypic) with kaolinite. It crystallizes in the monoclinic system. X-ray diffraction analysis is required for positive identification. Nacrite was first described in 1807 for an occurrence in Saxony, Germany. The name comes from nacre, in reference to the dull luster of nacrite masses, which scatter light with slight iridescences resembling those of the mother-of-pearl secreted by oysters. References Clay minerals group Polymorphism (materials science) Monoclinic minerals Minerals in space group 9
Nacrite
[ "Materials_science", "Engineering" ]
139
[ "Materials science stubs", "Polymorphism (materials science)", "Materials science" ]
4,110,341
https://en.wikipedia.org/wiki/Cartan%20model
In mathematics, the Cartan model is a differential graded algebra that computes the equivariant cohomology of a space. References Stefan Cordes, Gregory Moore, Sanjaye Ramgoolam, Lectures on 2D Yang-Mills Theory, Equivariant Cohomology and Topological Field Theories, , 1994. Algebraic topology
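Appended for reference: the stub above names the Cartan model without displaying it. The following is a standard formulation from the equivariant cohomology literature, added editorially (it is not recovered from this article), for a compact Lie group G with Lie algebra 𝔤 acting on a smooth manifold M:

```latex
% Standard formulation of the Cartan model (editorial sketch, not from the article).
% G: compact Lie group, \mathfrak{g}: its Lie algebra, M: a smooth G-manifold.
\Omega_G^\bullet(M) \;=\; \bigl( S(\mathfrak{g}^*) \otimes \Omega^\bullet(M) \bigr)^{G},
\qquad
(d_G \alpha)(X) \;=\; d\bigl(\alpha(X)\bigr) \;-\; \iota_{X_M}\bigl(\alpha(X)\bigr),
\quad X \in \mathfrak{g}.
```

Here the generators of S(𝔤*) are placed in degree 2, X_M is the vector field on M generated by X, d_G squares to zero on the invariant subcomplex, and the cohomology of this complex is the equivariant cohomology H_G(M) when G is compact and connected.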
Cartan model
[ "Mathematics" ]
71
[ "Topology stubs", "Fields of abstract algebra", "Topology", "Algebraic topology" ]
4,110,552
https://en.wikipedia.org/wiki/Lippmann%E2%80%93Schwinger%20equation
The Lippmann–Schwinger equation (named after Bernard Lippmann and Julian Schwinger) is one of the most used equations to describe particle collisions – or, more precisely, scattering – in quantum mechanics. It may be used in scattering of molecules, atoms, neutrons, photons or any other particles and is important mainly in atomic, molecular, and optical physics, nuclear physics and particle physics, but also for seismic scattering problems in geophysics. It relates the scattered wave function with the interaction that produces the scattering (the scattering potential) and therefore allows calculation of the relevant experimental parameters (scattering amplitude and cross sections). The most fundamental equation to describe any quantum phenomenon, including scattering, is the Schrödinger equation. In physical problems, this differential equation must be solved with the input of an additional set of initial and/or boundary conditions for the specific physical system studied. The Lippmann–Schwinger equation is equivalent to the Schrödinger equation plus the typical boundary conditions for scattering problems. In order to embed the boundary conditions, the Lippmann–Schwinger equation must be written as an integral equation. For scattering problems, the Lippmann–Schwinger equation is often more convenient than the original Schrödinger equation. The Lippmann–Schwinger equation's general form is (in reality, two equations are shown below, one for the sign and other for the sign): The potential energy describes the interaction between the two colliding systems. The Hamiltonian describes the situation in which the two systems are infinitely far apart and do not interact. Its eigenfunctions are and its eigenvalues are the energies . Finally, is a mathematical technicality necessary for the calculation of the integrals needed to solve the equation. It is a consequence of causality, ensuring that scattered waves consist only of outgoing waves. This is made rigorous by the limiting absorption principle. Usage The Lippmann–Schwinger equation is useful in a very large number of situations involving two-body scattering. For three or more colliding bodies it does not work well because of mathematical limitations; Faddeev equations may be used instead. However, there are approximations that can reduce a many-body problem to a set of two-body problems in a variety of cases. For example, in a collision between electrons and molecules, there may be tens or hundreds of particles involved. But the phenomenon may be reduced to a two-body problem by describing all the molecule constituent particle potentials together with a pseudopotential. In these cases, the Lippmann–Schwinger equations may be used. Of course, the main motivations of these approaches are also the possibility of doing the calculations with much lower computational efforts. Derivation We will assume that the Hamiltonian may be written as where is the free Hamiltonian (or more generally, a Hamiltonian with known eigenvectors). For example, in nonrelativistic quantum mechanics may be Intuitively is the interaction energy of the system. Let there be an eigenstate of : Now if we add the interaction into the mix, the Schrödinger equation reads Now consider the Hellmann–Feynman theorem, which requires the energy eigenvalues of the Hamiltonian to change continuously with continuous changes in the Hamiltonian. Therefore, we wish that as . A naive solution to this equation would be where the notation denotes the inverse of . 
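The displayed formulas in this article appear to have been lost during text extraction. As a hedged reconstruction of the "naive solution" referred to in the last sentence above, in standard textbook notation (an editorial assumption, not text recovered from the article):

```latex
% Naive inversion of (E - H_0)|\psi> = V|\psi>, using (E - H_0)|\phi> = 0.
% Standard textbook form; supplied editorially because the article's formulas are missing.
|\psi\rangle \;=\; |\phi\rangle \;+\; \frac{1}{E - H_0}\, V\, |\psi\rangle
```

The operator (E - H_0)^{-1} is the inverse referred to above; it is this operator that the next paragraph notes is singular.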
However is singular since is an eigenvalue of . As is described below, this singularity is eliminated in two distinct ways by making the denominator slightly complex: By insertion of a complete set of free particle states, the Schrödinger equation is turned into an integral equation. The "in" and "out" states are assumed to form bases too, in the distant past and distant future respectively having the appearance of free particle states, but being eigenfunctions of the complete Hamiltonian. Thus endowing them with an index, the equation becomes Methods of solution From the mathematical point of view the Lippmann–Schwinger equation in coordinate representation is an integral equation of Fredholm type. It can be solved by discretization. Since it is equivalent to the differential time-independent Schrödinger equation with appropriate boundary conditions, it can also be solved by numerical methods for differential equations. In the case of the spherically symmetric potential it is usually solved by partial wave analysis. For high energies and/or weak potential it can also be solved perturbatively by means of Born series. The method convenient also in the case of many-body physics, like in description of atomic, nuclear or molecular collisions is the method of R-matrix of Wigner and Eisenbud. Another class of methods is based on separable expansion of the potential or Green's operator like the method of continued fractions of Horáček and Sasakawa. Very important class of methods is based on variational principles, for example the Schwinger-Lanczos method combining the variational principle of Schwinger with Lanczos algorithm. Interpretation as in and out states The S-matrix paradigm In the S-matrix formulation of particle physics, which was pioneered by John Archibald Wheeler among others, all physical processes are modeled according to the following paradigm. One begins with a non-interacting multiparticle state in the distant past. Non-interacting does not mean that all of the forces have been turned off, in which case for example protons would fall apart, but rather that there exists an interaction-free Hamiltonian H0, for which the bound states have the same energy level spectrum as the actual Hamiltonian . This initial state is referred to as the in state. Intuitively, it consists of elementary particles or bound states that are sufficiently well separated that their interactions with each other are ignored. The idea is that whatever physical process one is trying to study may be modeled as a scattering process of these well separated bound states. This process is described by the full Hamiltonian , but once it's over, all of the new elementary particles and new bound states separate again and one finds a new noninteracting state called the out state. The S-matrix is more symmetric under relativity than the Hamiltonian, because it does not require a choice of time slices to define. This paradigm allows one to calculate the probabilities of all of the processes that we have observed in 70 years of particle collider experiments with remarkable accuracy. But many interesting physical phenomena do not obviously fit into this paradigm. For example, if one wishes to consider the dynamics inside of a neutron star sometimes one wants to know more than what it will finally decay into. In other words, one may be interested in measurements that are not in the asymptotic future. Sometimes an asymptotic past or future is not even available. 
For example, it is very possible that there is no past before the Big Bang. In the 1960s, the S-matrix paradigm was elevated by many physicists to a fundamental law of nature. In S-matrix theory, it was stated that any quantity that one could measure should be found in the S-matrix for some process. This idea was inspired by the physical interpretation that S-matrix techniques could give to Feynman diagrams restricted to the mass-shell, and led to the construction of dual resonance models. But it was very controversial, because it denied the validity of quantum field theory based on local fields and Hamiltonians. The connection to Lippmann–Schwinger Intuitively, the slightly deformed eigenfunctions of the full Hamiltonian H are the in and out states. The are noninteracting states that resemble the in and out states in the infinite past and infinite future. Creating wavepackets This intuitive picture is not quite right, because is an eigenfunction of the Hamiltonian and so at different times only differs by a phase. Thus, in particular, the physical state does not evolve and so it cannot become noninteracting. This problem is easily circumvented by assembling and into wavepackets with some distribution of energies over a characteristic scale . The uncertainty principle now allows the interactions of the asymptotic states to occur over a timescale and in particular it is no longer inconceivable that the interactions may turn off outside of this interval. The following argument suggests that this is indeed the case. Plugging the Lippmann–Schwinger equations into the definitions and of the wavepackets we see that, at a given time, the difference between the and wavepackets is given by an integral over the energy . A contour integral This integral may be evaluated by defining the wave function over the complex E plane and closing the E contour using a semicircle on which the wavefunctions vanish. The integral over the closed contour may then be evaluated, using the Cauchy integral theorem, as a sum of the residues at the various poles. We will now argue that the residues of approach those of at time and so the corresponding wavepackets are equal at temporal infinity. In fact, for very positive times t the factor in a Schrödinger picture state forces one to close the contour on the lower half-plane. The pole in the from the Lippmann–Schwinger equation reflects the time-uncertainty of the interaction, while that in the wavepackets weight function reflects the duration of the interaction. Both of these varieties of poles occur at finite imaginary energies and so are suppressed at very large times. The pole in the energy difference in the denominator is on the upper half-plane in the case of , and so does not lie inside the integral contour and does not contribute to the integral. The remainder is equal to the wavepacket. Thus, at very late times , identifying as the asymptotic noninteracting out state. Similarly one may integrate the wavepacket corresponding to at very negative times. In this case the contour needs to be closed over the upper half-plane, which therefore misses the energy pole of , which is in the lower half-plane. One then finds that the and wavepackets are equal in the asymptotic past, identifying as the asymptotic noninteracting in state. The complex denominator of Lippmann–Schwinger This identification of the 's as asymptotic states is the justification for the in the denominator of the Lippmann–Schwinger equations. 
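The regularized equation that the preceding sentence alludes to, with the small imaginary term in the denominator, is conventionally written as follows (standard notation assumed editorially, since the article's own displayed equations were not recovered):

```latex
% Lippmann-Schwinger equation with the +/- i*epsilon prescription
% (standard form; editorial sketch, not recovered from this article).
|\psi^{(\pm)}\rangle \;=\; |\phi\rangle \;+\; \frac{1}{E - H_0 \pm i\varepsilon}\, V\, |\psi^{(\pm)}\rangle ,
\qquad \varepsilon \to 0^{+} .
```

The plus sign selects purely outgoing scattered waves and the minus sign purely incoming ones, matching the "out" and "in" boundary conditions discussed above.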
A formula for the S-matrix The S-matrix is defined to be the inner product of the ath and bth Heisenberg picture asymptotic states. One may obtain a formula relating the S-matrix to the potential V using the above contour integral strategy, but this time switching the roles of and . As a result, the contour now does pick up the energy pole. This can be related to the 's if one uses the S-matrix to swap the two 's. Identifying the coefficients of the 's on both sides of the equation one finds the desired formula relating S to the potential In the Born approximation, corresponding to first order perturbation theory, one replaces this last with the corresponding eigenfunction of the free Hamiltonian , yielding which expresses the S-matrix entirely in terms of V and free Hamiltonian eigenfunctions. These formulas may in turn be used to calculate the reaction rate of the process , which is equal to Homogenization With the use of Green's function, the Lippmann–Schwinger equation has counterparts in homogenization theory (e.g. mechanics, conductivity, permittivity). See also Bethe–Salpeter equation References Bibliography Original publications Scattering
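Appended for reference: the S-matrix relation and its Born approximation discussed in the section "A formula for the S-matrix" above are commonly written as follows. Signs and normalizations depend on convention; this is an editorial sketch, not text recovered from the article.

```latex
% S-matrix element in terms of the potential, and its Born approximation
% (editorial sketch; conventions for sign and normalization vary).
S_{ba} \;=\; \delta(b - a) \;-\; 2\pi i\,\delta(E_b - E_a)\, T_{ba},
\qquad
T_{ba} \;=\; \langle \phi_b | V | \psi_a^{(+)} \rangle
\;\approx\; \langle \phi_b | V | \phi_a \rangle \quad \text{(Born approximation)} .
```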
Lippmann–Schwinger equation
[ "Physics", "Chemistry", "Materials_science" ]
2,434
[ "Nuclear physics", "Scattering", "Condensed matter physics", "Particle physics" ]
4,110,803
https://en.wikipedia.org/wiki/Wigner%20distribution%20function
The Wigner distribution function (WDF) is used in signal processing as a transform in time-frequency analysis. The WDF was first proposed in physics to account for quantum corrections to classical statistical mechanics in 1932 by Eugene Wigner, and it is of importance in quantum mechanics in phase space (see, by way of comparison: Wigner quasi-probability distribution, also called the Wigner function or the Wigner–Ville distribution). Given the shared algebraic structure between position-momentum and time-frequency conjugate pairs, it also usefully serves in signal processing, as a transform in time-frequency analysis, the subject of this article. Compared to a short-time Fourier transform, such as the Gabor transform, the Wigner distribution function provides the highest possible temporal vs frequency resolution which is mathematically possible within the limitations of the uncertainty principle. The downside is the introduction of large cross terms between every pair of signal components and between positive and negative frequencies, which makes the original formulation of the function a poor fit for most analysis applications. Subsequent modifications have been proposed which preserve the sharpness of the Wigner distribution function but largely suppress cross terms. Mathematical definition There are several different definitions for the Wigner distribution function. The definition given here is specific to time-frequency analysis. Given the time series , its non-stationary auto-covariance function is given by where denotes the average over all possible realizations of the process and is the mean, which may or may not be a function of time. The Wigner function is then given by first expressing the autocorrelation function in terms of the average time and time lag , and then Fourier transforming the lag. So for a single (mean-zero) time series, the Wigner function is simply given by The motivation for the Wigner function is that it reduces to the spectral density function at all times for stationary processes, yet it is fully equivalent to the non-stationary autocorrelation function. Therefore, the Wigner function tells us (roughly) how the spectral density changes in time. Time-frequency analysis example Here are some examples illustrating how the WDF is used in time-frequency analysis. Constant input signal When the input signal is constant, its time-frequency distribution is a horizontal line along the time axis. For example, if x(t) = 1, then Sinusoidal input signal When the input signal is a sinusoidal function, its time-frequency distribution is a horizontal line parallel to the time axis, displaced from it by the sinusoidal signal's frequency. For example, if , then Chirp input signal When the input signal is a linear chirp function, the instantaneous frequency is a linear function. This means that the time frequency distribution should be a straight line. For example, if , then its instantaneous frequency is and its WDF Delta input signal When the input signal is a delta function, since it is only non-zero at t=0 and contains infinite frequency components, its time-frequency distribution should be a vertical line across the origin. This means that the time frequency distribution of the delta function should also be a delta function. By WDF The Wigner distribution function is best suited for time-frequency analysis when the input signal's phase is 2nd order or lower. For those signals, WDF can exactly generate the time frequency distribution of the input signal. 
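Since the displayed formulas above were lost in extraction, the sketch below illustrates the time-frequency Wigner distribution numerically. It is a minimal discrete pseudo-Wigner-Ville implementation, added editorially (not code from the article): for each time index it forms the lag product x(t+τ)x*(t−τ) and Fourier transforms over the lag. The function name, the 256-sample chirp, and all parameter values are illustrative assumptions.

```python
import numpy as np

def wigner_distribution(x):
    """Discrete pseudo Wigner-Ville distribution of a 1-D signal.

    Rows of the returned array index time, columns index frequency bins.
    With integer lags, a pure tone at normalized frequency f0 peaks near
    bin k = 2*f0*N, i.e. bin k corresponds to frequency k/(2N) cycles/sample.
    This is a bare-bones sketch: no analytic-signal step and no smoothing,
    so the cross terms discussed in the article are fully visible.
    """
    x = np.asarray(x, dtype=complex)
    N = len(x)
    W = np.zeros((N, N), dtype=complex)
    for n in range(N):
        # Largest half-length of the lag window that stays inside the signal.
        tau_max = min(n, N - 1 - n)
        taus = np.arange(-tau_max, tau_max + 1)
        kernel = np.zeros(N, dtype=complex)
        # Instantaneous autocorrelation x(t + tau) * conj(x(t - tau));
        # negative lags wrap around, matching the DFT's periodic indexing.
        kernel[taus % N] = x[n + taus] * np.conj(x[n - taus])
        W[n, :] = np.fft.fft(kernel)
    # The Wigner distribution of any signal is real; drop numerical residue.
    return W.real

# Usage: the WDF of a linear chirp concentrates along a straight line in the
# time-frequency plane, as described in the chirp example above.
n = np.arange(256)
chirp = np.cos(2 * np.pi * (0.05 * n + 0.00025 * n**2))  # f_inst(n) = 0.05 + 0.0005*n
W = wigner_distribution(chirp)
```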
Boxcar function , the rectangular function ⇒ Cross term property The Wigner distribution function is not a linear transform. A cross term ("time beats") occurs when there is more than one component in the input signal, analogous in time to frequency beats. In the ancestral physics Wigner quasi-probability distribution, this term has important and useful physics consequences, required for faithful expectation values. By contrast, the short-time Fourier transform does not have this feature. Negative features of the WDF are reflective of the Gabor limit of the classical signal and physically unrelated to any possible underlay of quantum structure. The following are some examples that exhibit the cross-term feature of the Wigner distribution function. In order to reduce the cross-term difficulty, several approaches have been proposed in the literature, some of them leading to new transforms as the modified Wigner distribution function, the Gabor–Wigner transform, the Choi-Williams distribution function and Cohen's class distribution. Properties of the Wigner distribution function The Wigner distribution function has several evident properties listed in the following table. Projection property Energy property Recovery property Mean condition frequency and mean condition time Moment properties Real properties Region properties Multiplication theorem Convolution theorem Correlation theorem Time-shifting covariance Modulation covariance Scale covariance Windowed Wigner Distribution Function When a signal is not time limited, its Wigner Distribution Function is hard to implement. Thus, we add a new function(mask) to its integration part, so that we only have to implement part of the original function instead of integrating all the way from negative infinity to positive infinity. Original function: Function with mask: where is real and time-limited Implementation According to definition: Suppose that for for and We take as example where is a real function And then we compare the difference between two conditions. Ideal: When mask function , which means no mask function. 3 Conditions Then we consider the condition with mask function: We can see that have value only between –B to B, thus conducting with can remove cross term of the function. But if x(t) is not a Delta function nor a narrow frequency function, instead, it is a function with wide frequency or ripple. The edge of the signal may still exist between –B and B, which still cause the cross term problem. for example: See also Time-frequency representation Short-time Fourier transform Spectrogram Gabor transform Autocorrelation Gabor–Wigner transform Modified Wigner distribution function Optical equivalence theorem Polynomial Wigner–Ville distribution Cohen's class distribution function Wigner quasi-probability distribution Transformation between distributions in time-frequency analysis Bilinear time–frequency distribution References Further reading J. Ville, 1948. "Théorie et Applications de la Notion de Signal Analytique", Câbles et Transmission, 2, 61–74 . T. A. C. M. Classen and W. F. G. Mecklenbrauker, 1980. "The Wigner distribution-a tool for time-frequency signal analysis; Part I," Philips J. Res., vol. 35, pp. 217–250. L. Cohen (1989): Proceedings of the IEEE 77 pp. 941–981, Time-frequency distributions---a review L. Cohen, Time-Frequency Analysis, Prentice-Hall, New York, 1995. S. Qian and D. Chen, Joint Time-Frequency Analysis: Methods and Applications, Chap. 5, Prentice Hall, N.J., 1996. B. 
Boashash, "Note on the Use of the Wigner Distribution for Time Frequency Signal Analysis", IEEE Transactions on Acoustics, Speech, and Signal Processing, Vol. 36, No. 9, pp. 1518–1521, Sept. 1988. . B. Boashash, editor,Time-Frequency Signal Analysis and Processing – A Comprehensive Reference, Elsevier Science, Oxford, 2003, . F. Hlawatsch, G. F. Boudreaux-Bartels: "Linear and quadratic time-frequency signal representation," IEEE Signal Processing Magazine, pp. 21–67, Apr. 1992. R. L. Allen and D. W. Mills, Signal Analysis: Time, Frequency, Scale, and Structure, Wiley- Interscience, NJ, 2004. Jian-Jiun Ding, Time frequency analysis and wavelet transform class notes, the Department of Electrical Engineering, National Taiwan University (NTU), Taipei, Taiwan, 2015. Kakofengitis, D., & Steuernagel, O. (2017). "Wigner's quantum phase space current in weakly anharmonic weakly excited two-state systems" European Physical Journal Plus 14.07.2017 External links Sonogram Visible Speech Under GPL Licensed Freeware for the visual extraction of the Wigner Distribution. Signal processing Transforms
Wigner distribution function
[ "Mathematics", "Technology", "Engineering" ]
1,677
[ "Functions and mappings", "Telecommunications engineering", "Computer engineering", "Signal processing", "Mathematical objects", "Mathematical relations", "Transforms" ]
4,110,937
https://en.wikipedia.org/wiki/Penrose%20interpretation
The Penrose interpretation is a speculation by Roger Penrose about the relationship between quantum mechanics and general relativity. Penrose proposes that a quantum state remains in superposition until the difference of space-time curvature attains a significant level. Overview Penrose's idea is inspired by quantum gravity because it uses both the physical constants and . It is an alternative to the Copenhagen interpretation which posits that superposition fails when an observation is made (but that it is non-objective in nature), and the many-worlds interpretation, which states that alternative outcomes of a superposition are equally "real," while their mutual decoherence precludes subsequent observable interactions. Penrose's idea is a type of objective collapse theory. For these theories, the wavefunction is a physical wave, which experiences wave function collapse as a physical process, with observers not having any special role. Penrose theorises that the wave function cannot be sustained in superposition beyond a certain energy difference between the quantum states. He gives an approximate value for this difference: a Planck mass worth of matter, which he calls the "'one-graviton' level". He then hypothesizes that this energy difference causes the wave function to collapse to a single state, with a probability based on its amplitude in the original wave function, a procedure derived from standard quantum mechanics. Penrose's "'one-graviton' level" criterion forms the basis of his prediction, providing an objective criterion for wave function collapse. Despite the difficulties of specifying this in a rigorous way, he proposes that the basis states into which the collapse takes place are mathematically described by the stationary solutions of the Schrödinger–Newton equation. Recent theoretical work indicates an increasingly deep inter-relation between quantum mechanics and gravitation. Physical consequences Accepting that wavefunctions are physically real, Penrose believes that matter can exist in more than one place at one time. In his opinion, a macroscopic system, like a human being, cannot exist in more than one place for a measurable time, as the corresponding energy difference is very large. A microscopic system, like an electron, can exist in more than one location significantly longer (thousands of years), until its space-time curvature separation reaches collapse threshold. In Einstein's theory, any object that has mass causes a warp in the structure of space and time around it. This warping produces the effect we experience as gravity. Penrose points out that tiny objects, such as dust specks, atoms and electrons, produce space-time warps as well. Ignoring these warps is where most physicists go awry. If a dust speck is in two locations at the same time, each one should create its own distortions in space-time, yielding two superposed gravitational fields. According to Penrose's theory, it takes energy to sustain these dual fields. The stability of a system depends on the amount of energy involved: the higher the energy required to sustain a system, the less stable it is. Over time, an unstable system tends to settle back to its simplest, lowest-energy state: in this case, one object in one location producing one gravitational field. If Penrose is right, gravity yanks objects back into a single location, without any need to invoke observers or parallel universes. 
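The collapse criterion sketched above is often summarized by a single order-of-magnitude formula. The following statement of it is an editorial addition giving the standard Diósi-Penrose estimate, not text from this article:

```latex
% Diosi-Penrose estimate of the superposition lifetime (editorial addition).
% E_G: gravitational self-energy of the difference between the two superposed
% mass distributions; \hbar: reduced Planck constant.
\tau \;\approx\; \frac{\hbar}{E_G}
```

A large mass difference between the superposed configurations gives a large E_G and hence a short lifetime, consistent with the statement above that macroscopic superpositions decay almost immediately while an electron's can persist far longer.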
Penrose speculates that the transition between macroscopic and quantum states begins at the scale of dust particles (the mass of which is close to a Planck mass). He has proposed an experiment to test this theory, called FELIX (free-orbit experiment with laser interferometry X-rays), in which an X-ray laser in space is directed toward a tiny mirror and fissioned by a beam splitter from tens of thousands of miles away, with which the photons are directed toward other mirrors and reflected back. One photon will strike the tiny mirror while moving to another mirror and move the tiny mirror back as it returns, and according to conventional quantum theories, the tiny mirror can exist in superposition for a significant period of time. This would prevent any photons from reaching the detector. If Penrose's hypothesis is correct, the mirror's superposition will collapse to one location in about a second, allowing half the photons to reach the detector. However, because this experiment would be difficult to arrange, a table-top version that uses optical cavities to trap the photons long enough for achieving the desired delay has been proposed instead. See also Diósi–Penrose model Interpretations of quantum mechanics Orchestrated objective reduction Gravitational decoherence Schrödinger–Newton equation Stochastic quantum mechanics Relevant books by Roger Penrose The Emperor's New Mind The Road to Reality Shadows of the Mind References External links Molecules – Quantum Interpretations QM – the Penrose Interpretation (Internet Archive) Roger Penrose discusses his experiment on the BBC (25 minutes in)   Quantum measurement Interpretations of quantum mechanics
Penrose interpretation
[ "Physics" ]
1,012
[ "Interpretations of quantum mechanics", "Quantum measurement", "Quantum mechanics" ]
4,111,503
https://en.wikipedia.org/wiki/Mining%20simulator
A mining simulator is a type of simulation used for entertainment as well as for training purposes by mining companies. These simulators replicate elements of real-world mining operations using surrounding screens displaying three-dimensional imagery, motion platforms, and scale models of typical and atypical mining environments and machinery. The results of the simulations can provide useful information in the form of greater competence in on-site safety, which can lead to greater efficiency and a decreased risk of accidents. Training Mining simulators are used to replicate real-world mining conditions, assessing the trainee operator's real-time responses to the tasks and obstacles that appear around them. This is often achieved through the use of surrounding three-dimensional imagery, motion platforms, and realistic replicas of actual mining equipment. Trainee operators are often taught in a program where they are scored against both their peers and an expert benchmark to produce a final evaluation of competence in the tasks they may need to complete in real life. Effectiveness Mining companies that have implemented mining simulators in their training have seen greater employee competence in on-site safety, leading to a more productive working environment overall and a higher chance of long-run profitability for the company by decreasing the risk of accidents, injuries, or deaths on the site through prior education. Being able to simulate real-world mining hazards in a safe and controlled environment has also been shown to help prepare employees in proper procedure and protocol in the event of an on-site accident without the need to physically experience one, something that often cannot be taught safely in the real world. Simulating mining environments further helps to familiarize employees with mining equipment and vehicles before they enter a real job site, leading to increased productivity and a chance to correct inefficiencies while still in training. Varieties Mining simulator setups range in size and features, which relate to the price and fidelity of the product. A simple simulator setup may only need to be installed on one personal computer or a virtual reality headset, but setups most often consist of three to six monitors and a motion platform. Higher-cost setups are often housed inside high-cube containers, which may include interior lighting, air conditioning, heating, and other amenities and add-ons that may not directly affect the effectiveness of the simulation training. Some mining simulators are also mobile and can move between locations, which can be particularly helpful for sharing the same simulator among multiple schools or colleges for apprentice programs. Entertainment Aside from practical training purposes, mining simulators have more recently also been created for entertainment and gaming purposes. The appeal of the genre comes from the ability to play these games without specialized equipment, on widely available personal computers and PlayStation and Xbox consoles. The genre also gained popularity from the broader range of resources that can be added and mined in-game, often substituting precious minerals such as gold or diamonds for more realistically found resources, though coal mining games do exist. 
Non-rock or mineral mining simulation games have also emerged, with cryptocurrency mining simulations becoming a popular subgenre, allowing players to simulate mining for coins such as Bitcoin, Ethereum, and Dogecoin. See also Flight simulation Driving simulation Train simulation Submarine simulation References 3D graphics software Mining equipment Real-time simulation Virtual reality
Mining simulator
[ "Technology", "Engineering" ]
671
[ "Mining equipment", "Real-time simulation" ]
4,112,321
https://en.wikipedia.org/wiki/Ethics%20of%20care
The ethics of care (alternatively care ethics or EoC) is a normative ethical theory that holds that moral action centers on interpersonal relationships and care or benevolence as a virtue. EoC is one of a cluster of normative ethical theories that were developed by some feminists and environmentalists since the 1980s. While consequentialist and deontological ethical theories emphasize generalizable standards and impartiality, ethics of care emphasize the importance of response to the individual. The distinction between the general and the individual is reflected in their different moral questions: "what is just?" versus "how to respond?" Carol Gilligan, who is considered the originator of the ethics of care, criticized the application of generalized standards as "morally problematic, since it breeds moral blindness or indifference". Assumptions of the framework include: persons are understood to have varying degrees of dependence and interdependence; other individuals affected by the consequences of one's choices deserve consideration in proportion to their vulnerability; and situational details determine how to safeguard and promote the interests of individuals. Historical background The originator of the ethics of care was Carol Gilligan, an American ethicist and psychologist. Gilligan created this model as a critique of her mentor, developmental psychologist Lawrence Kohlberg's model of moral development. Gilligan observed that measuring moral development by Kohlberg's stages of moral development found boys to be more morally mature than girls, and this result held for adults as well (although when education is controlled for there are no gender differences). Gilligan argued that Kohlberg's model was not objective, but rather a masculine perspective on morality, founded on principles of justice and rights. In her 1982 book In a Different Voice, she further posited that men and women have tendencies to view morality in different terms. Her theory claimed women tended to emphasize empathy and compassion over the notions of morality in terms of abstract duties or obligations that are privileged in Kohlberg's scale. Dana Ward stated, in an unpublished paper, that Kohlberg's scale is psychometrically sound. Subsequent research suggests that the differences in care-based or justice-based ethical approaches may be due to gender differences, or differences in life situations of genders. Gilligan's summarizing of gender differences provided feminists with a voice to question moral values and practices of the society as masculine. Relationship to traditional ethical positions Care ethics is different from other ethical models, such as consequentialist theories (e.g. utilitarianism) and deontological theories (e.g. Kantian ethics), in that it seeks to incorporate traditionally feminine virtues and values which, proponents of care ethics contend, are absent in traditional models of ethics. One of these values is the placement of caring and relationship over logic and reason. In care ethics, reason and logic are subservient to natural care, that is, care that is done out of inclination. This is in contrast to deontology, where actions taken out of inclination are unethical. Virginia Held has noted the similarities between care ethics and virtue ethics but distinguished it from the virtue ethics of British moralists such as Hume in that people are seen as fundamentally relational rather than independent individuals. 
Other philosophers have argued about the relation between care ethics and virtue ethics, taking various positions on the question of how closely the two are related. Jason Josephson Storm argued for close parallels between the ethics of care and traditional Buddhist virtue ethics, especially the prioritization of compassion by Śāntideva and others. Other scholars had also previously connected ethics of care with Buddhist ethics. Care ethics as feminist ethics While some feminists have criticized care-based ethics for reinforcing traditional gender stereotypes, others have embraced parts of the paradigm under the theoretical concept of care-focused feminism. Care-focused feminism, alternatively called gender feminism, is a branch of feminist thought informed primarily by the ethics of care as developed by Carol Gilligan and Nel Noddings. This theory is critical of how caring is socially engendered, being assigned to women and consequently devalued. "Care-focused feminists regard women's capacity for care as a human strength" which can and should be taught to and expected of men as well as women. Noddings proposes that ethical caring could be a more concrete evaluative model of moral dilemma, than an ethic of justice. Noddings' care-focused feminism requires practical application of relational ethics, predicated on an ethic of care. Ethics of care is a basis for care-focused feminist theorizing on maternal ethics. These theories recognize caring as an ethically relevant issue. Critical of how society engenders caring labor, theorists Sara Ruddick, Virginia Held, and Eva Feder Kittay suggest caring should be performed and care givers valued in both public and private spheres. This proposed paradigm shift in ethics encourages the view that an ethic of caring be the social responsibility of both men and women. Joan Tronto argues that the definition of "ethic of care" is ambiguous due in part to it not playing a central role in moral theory. She argues that considering moral philosophy is engaged with human goodness, then care would appear to assume a significant role in this type of philosophy. However, this is not the case and Tronto further stresses the association between care and "naturalness". The latter term refers to the socially and culturally constructed gender roles where care is mainly assumed to be the role of the woman. As such, care loses the power to take a central role in moral theory. Tronto states there are four ethical qualities of care: Attentiveness: Attentiveness is crucial to the ethics of care because care requires a recognition of others' needs in order to respond to them. The question which arises is the distinction between ignorance and inattentiveness. Tronto poses this question as such, "But when is ignorance simply ignorance, and when is it inattentiveness"? Responsibility: In order to care, we must take it upon ourselves, thus responsibility. The problem associated with this second ethical element of responsibility is the question of obligation. Obligation is often, if not already, tied to pre-established societal and cultural norms and roles. Tronto makes the effort to differentiate the terms "responsibility" and "obligation" with regards to the ethic of care. Responsibility is ambiguous, whereas obligation refers to situations where action or reaction is due, such as the case of a legal contract. 
This ambiguity allows for ebb and flow in and between class structures and gender roles, and to other socially constructed roles that would bind responsibility to those only befitting of those roles. Competence: To provide care also means competency. One cannot simply acknowledge the need to care, accept the responsibility, but not follow through with enough adequacy - as such action would result in the need of care not being met. Responsiveness: This refers to the "responsiveness of the care receiver to the care". Tronto states, "Responsiveness signals an important moral problem within care: by its nature, care is concerned with conditions of vulnerability and inequality". She further argues responsiveness does not equal reciprocity. Rather, it is another method to understand vulnerability and inequality by understanding what has been expressed by those in the vulnerable position, as opposed to re-imagining oneself in a similar situation. In 2013, Tronto added a fifth ethical quality: Plurality, communication, trust and respect; solidarity or caring with: Together, these are the qualities necessary for people to come together in order to take collective responsibility, to understand their citizenship as always imbricated in relations of care, and to take seriously the nature of caring needs in society. In politics It is often suggested that the ethics of care is only applicable within families and groups of friends, but many feminist theorists have argued against this suggestion, including Ruddick, Manning, Held, and Tronto. Attempts have been made to apply principles from the ethics of care more generally, by identifying values in one particular caring relationship and applying these values to other situations. Moral values are seen as embedded in acts of care. The ethics of care is contrasted with theories based on the "liberal individual" and a social contract, following Locke and Hobbes. Ethics-of-care theorists note that in many situations, such as childhood, there are very large power imbalances between individuals, and so these relationships are based on care rather than any form of contract. Noting the power imbalances that can exist in society, it is argued that care may be a better basis to understand society than freedom and social contracts. In mental health Psychiatrist Kaila Rudolph noted that care ethics aligns with a trauma-informed care framework in psychiatry. Criticism In the field of nursing, the ethics of care has been criticized by Peter Allmark, Helga Kuhse, and John Paley. Allmark criticized its focus on the mental state of the carer, on the grounds that subjectively caring does not prevent an individual's care from being harmful. Allmark also criticized the theory for conflicting with the idea of treating everyone with unbiased consideration, which he considered necessary in certain situations. Care ethics has been criticised for failing to protect the individual from paternalism, noting there is a risk of caregivers mistaking their needs for those of the people they care for. Individuals may need to cultivate the ability to distinguish their own needs from those that they care for, with Ruddick arguing for a need to respect the "embodied willfulness" of those who are cared for. See also Theorists References Further reading Care Altruism Environmentalism Ecofeminism Feminism Feminist ethics Liberalism Left-wing politics Progressivism Relational ethics Social justice Feminist philosophy Ethical theories
Ethics of care
[ "Biology" ]
1,998
[ "Behavior", "Altruism" ]
4,112,456
https://en.wikipedia.org/wiki/Cefradine
Cefradine (INN) or cephradine (BAN) is a first generation cephalosporin antibiotic. Indications Respiratory tract infections (such as tonsillitis, pharyngitis, and lobar pneumonia) caused by group A beta-hemolytic streptococci and S. pneumoniae (formerly D. pneumonia). Otitis media caused by group A beta-hemolytic streptococci, S. pneumoniae, H. influenzae, and staphylococci. Skin and skin structure infections caused by staphylococci (penicillin-susceptible and penicillin-resistant) and beta-hemolytic streptococci. Urinary tract infections, including prostatitis, caused by E. coli, P. mirabilis and Klebsiella species. Formulations Cefradine is distributed in the form of capsules containing 250 mg or 500 mg, as a syrup containing 250 mg/5 ml, or in vials for injection containing 500 mg or 1 g. It is not approved by the FDA for use in the United States. Synthesis Birch reduction of D-α-phenylglycine led to diene (2). This was N-protected using tert-butoxycarbonylazide and activated for amide formation via the mixed anhydride method using isobutylchloroformate to give 3. Mixed anhydride 3 reacted readily with 7-aminodesacetoxycephalosporanic acid to give, after deblocking, cephradine (5). Production names The antibiotic is produced under many brand names across the world. Bangladesh: Ancef, Ancef forte, Aphrin, Avlosef, Cefadin, Cephadin, Cephran, Cephran-DS, Cusef, Cusef DS, Dicef , Dicef forte, Dolocef, Efrad, Elocef, Extracef, Extracef-DS, Intracef, Kefdrin, Lebac, Lebac Forte, Medicef, Mega-Cef, Megacin, Polycef, Procef, Procef, Procef forte, Rocef, Rocef Forte DS, Sefin, Sefin DS, Sefnin, Sefrad, Sefrad DS, Sefril, Sefril-DS, Sefro, Sefro-HS, Sephar, Sephar-DS, Septa, Sinaceph, SK-Cef, Sk-Cef DS, Supracef and Supracef-F, Torped, Ultrasef, Vecef, Vecef-DS, Velogen, Sinaceph, Velox China: Cefradine, Cephradine, Kebili, Saifuding, Shen You, Taididing, Velosef, Xianyi, and Xindadelei Colombia: Cefagram, Cefrakov, Cefranil , Cefrex, and Kliacef Egypt: Cefadrin, Cefadrine, Cephradine, Cephraforte, Farcosef, Fortecef, Mepadrin, Ultracef, and Velosef France: Dexef Hong Kong: Cefradine and ChinaQualisef-250 Indonesia: Dynacef, Velodine, and Velodrom Lebanon: Eskacef, Julphacef, and Velosef Lithuania: Tafril Myanmar: Sinaceph Oman: Ceframed, Eskasef, Omadine, and Velocef Pakistan: Abidine, Ada-Cef, Ag-cef, Aksosef, Amspor, Anasef, Antimic, Atcosef, Bactocef, Biocef, Biodine, Velora, Velosef Peru: Abiocef, Cefradinal, Cefradur, Cefrid, Terbodina II, Velocef, Velomicin Philippines: Altozef, Racep, Senadex, Solphride, Yudinef, Zefadin, Zefradil, and Zolicef Poland: Tafril Portugal: Cefalmin, Cefradur South Africa: Cefril A South Korea: Cefradine and Tricef Taiwan: Cefadin, Cefamid, Cefin, Cekodin, Cephradine, Ceponin, Lacef, Licef-A, Lisacef, Lofadine, Recef, S-60, Sefree, Sephros, Topcef, Tydine, Unifradine, and U-Save UK: Cefradune (Kent) Vietnam: Eurosefro and Incef See also Cephapirin Cephacetrile Cefamandole Ampicillin (Has the same chemical formula) Notes References Cephalosporin antibiotics Enantiopure drugs
Cefradine
[ "Chemistry" ]
1,053
[ "Stereochemistry", "Enantiopure drugs" ]
4,112,457
https://en.wikipedia.org/wiki/Head%20shadow
A head shadow (or acoustic shadow) is a region of reduced amplitude of a sound because it is obstructed by the head. It is an example of diffraction. Sound may have to travel through and around the head in order to reach an ear. The obstruction caused by the head can account for attenuation (reduced amplitude) of overall intensity as well as cause a filtering effect. The filtering effects of head shadowing are an essential element of sound localisation—the brain weighs the relative amplitude, timbre, and phase of a sound heard by the two ears and uses the difference to interpret directional information. The shadowed ear, the ear further from the sound source, receives sound slightly later (up to approximately 0.7 ms later) than the unshadowed ear, and the timbre, or frequency spectrum, of the shadowed sound wave is different because of the obstruction of the head. The head shadow causes particular difficulty in sound localisation in people suffering from unilateral hearing loss. It is a factor to consider when correcting hearing loss with directional hearing aids. See also Interaural intensity difference Hearing References Sources Acoustics Otology
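As a rough check on the "approximately 0.7 ms" figure quoted above, the snippet below evaluates Woodworth's spherical-head approximation for the interaural time difference. This is an editorial example; the head radius and speed of sound used are typical assumed values, not numbers taken from this article.

```python
import math

def interaural_time_difference(theta_rad, head_radius_m=0.0875, speed_of_sound=343.0):
    """Woodworth's spherical-head estimate of the ITD for a distant source
    at azimuth theta (0 = straight ahead, pi/2 = directly to one side):
    ITD = (a / c) * (theta + sin(theta))."""
    return (head_radius_m / speed_of_sound) * (theta_rad + math.sin(theta_rad))

# A source directly to one side gives roughly 0.66 ms, consistent with the
# "up to approximately 0.7 ms" delay mentioned above.
print(f"{interaural_time_difference(math.pi / 2) * 1e3:.2f} ms")
```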
Head shadow
[ "Physics" ]
237
[ "Classical mechanics", "Acoustics" ]
4,113,494
https://en.wikipedia.org/wiki/Pax%20genes
In evolutionary developmental biology, Paired box (Pax) genes are a family of genes coding for tissue specific transcription factors containing an N-terminal paired domain and usually a partial, or in the case of four family members (PAX3, PAX4, PAX6 and PAX7), a complete homeodomain to the C-terminus. An octapeptide as well as a Pro-Ser-Thr-rich C terminus may also be present. Pax proteins are important in early animal development for the specification of specific tissues, as well as during epimorphic limb regeneration in animals capable of such. The paired domain was initially described in 1987 as the "paired box" in the Drosophila protein paired (prd; ). Groups Within the mammalian family, there are four well defined groups of Pax genes. Pax group 1 (Pax 1 and 9), Pax group 2 (Pax 2, 5 and 8), Pax group 3 (Pax 3 and 7) and Pax group 4 (Pax 4 and 6). Two more families, Pox-neuro and Pax-α/β, exist in basal bilaterian species. Orthologous genes exist throughout the Metazoa, including extensive study of the ectopic expression in Drosophila using murine Pax6. The two rounds of whole-genome duplications in vertebrate evolution is responsible for the creation of as many as 4 paralogs for each Pax protein. Members PAX1 has been identified in mice with the development of vertebrate and embryo segmentation, and some evidence this is also true in humans. It transcribes a 440 amino acid protein from 4 exons and 1,323 in humans. In the mouse Pax1 mutation has been linked to undulated mutant suffering from skeletal malformations. PAX2 has been identified with kidney and optic nerve development. It transcribes a 417 amino acid protein from 11 exons and 4,261 in humans. Mutation of PAX2 in humans has been associated with renal-coloboma syndrome as well as oligomeganephronia. PAX3 has been identified with ear, eye and facial development. It transcribes a 479 amino acid protein in humans. Mutations in it can cause Waardenburg syndrome. PAX3 is frequently expressed in melanomas and contributes to tumor cell survival. PAX4 has been identified with pancreatic islet beta cells. It transcribes a 350 amino acid protein from 9 exons and 2,010 in humans. Knockout mice lacking Pax4 expression fail to develop insulin-producing cells. Pax4 undergoes mutual reciprocal interaction with the transcription factor Arx to endow pancreatic endocrine cells with insulin and glucagon cells respectively PAX5 has been identified with neural and spermatogenesis development and b-cell differentiation. It transcribes a 391 amino acid protein from 10 exons and 3,644 in humans. PAX6 (eyeless) is the most researched and appears throughout the literature as a "master control" gene for the development of eyes and sensory organs, certain neural and epidermal tissues as well as other homologous structures, usually derived from ectodermal tissues. PAX7 has been possibly associated with myogenesis. It transcribes a protein of 520 amino acids from 8 exons and 2,260 in humans. PAX7 directs postnatal renewal and propagation of myogenic satellite cells but not for the specification. PAX8 has been associated with thyroid specific expression. It transcribes a protein of 451 amino acids from 11 exons and 2,526 in humans. Pax8 loss-of-function mutant mice lack follicular cells of the thyroid gland. PAX9 has found to be associated with a number of organ and other skeletal developments, particularly teeth. It transcribes a protein of 341 amino acids from 4 exons and 1,644 in humans. 
See also Homeobox Evolutionary developmental biology Body plan SOX genes References Further reading External links A Review of the Highly Conserved PAX6 Gene in Eye Development Regulation Paired domain in PROSITE Developmental genes and proteins Transcription factors
Pax genes
[ "Chemistry", "Biology" ]
890
[ "Transcription factors", "Gene expression", "Signal transduction", "Developmental genes and proteins", "Induced stem cells" ]
4,113,714
https://en.wikipedia.org/wiki/X.75
X.75 is an International Telecommunication Union (ITU) (formerly CCITT) standard specifying the interface for interconnecting two X.25 networks. Description X.75 is almost identical to X.25. The significant difference is that while X.25 specifies the interface between a subscriber (Data Terminal Equipment (DTE)) and the network (Data Circuit-terminating Equipment (DCE)), X.75 specifies the interface between two networks (Signalling Terminal Equipment (STE)), and refers to these two STE as STE-X and STE-Y. This gives rise to some subtle differences in the protocol compared with X.25. For example, X.25 only allows network-generated reset and clearing causes to be passed from the network (DCE) to the subscriber (DTE), and not the other way around, since the subscriber is not a network. However, at the interconnection of two X.25 networks, either network might reset or clear an X.25 call, so X.75 allows network-generated reset and clearing causes to be passed in either direction. Although outside the scope of both X.25 and X.75, which define external interfaces to an X.25 network, X.75 can also be found as the protocol operating between switching nodes inside some X.25 networks. See also Internetworking Public data network X.121 Further reading External links ITU-T Recommendation X.75 History of computer networks Network layer protocols Wide area networks ITU-T recommendations ITU-T X Series Recommendations X.25
X.75
[ "Technology" ]
328
[ "History of computer networks", "History of computing" ]
4,114,999
https://en.wikipedia.org/wiki/Ames%20window
The Ames trapezoid or Ames window is an image on, for example, a flat piece of cardboard that seems to be a rectangular window but is, in fact, a trapezoid. Both sides of the piece of cardboard have the same image. The cardboard is hung vertically from a wire so it can rotate around continuously, or is attached to a vertical mechanically rotating axis for continuous rotation. When the rotation of the window is observed, the window appears to rotate through less than 180 degrees, though the exact amount of travel that is perceived varies with the dimensions of the trapezoid. The rotation seems to stop momentarily and reverse its direction. It is therefore not perceived to be rotating continuously in one direction but instead is misperceived to be oscillating. This phenomenon was discovered by Adelbert Ames, Jr. in 1947. Legacy During the 1960s, the concept of "transactional ambiguity" was studied and promulgated by some psychologists based on the use of the Ames window. This hypothesis held that a viewer's mental expectation or "set" could affect the actual perception of ambiguous stimuli, extending the long-held belief that mental set could affect one's feelings and conclusions about stimuli to the actual visual perception of the stimuli itself. The Ames window was used in experiments to test this hypothesis by having subjects look through a pinhole to view the rotating window with a grey wooden rod placed through one pane at an oblique angle. Subjects were divided into two experimental groups, one told that the rod was rubber and the other that it was steel. The hypothesis held that there should be a statistically significant difference between these two groups, with the steel group more often seeing the rod cutting through the pane and the rubber group more often seeing it wrap around the pane. These experiments were popular in university experimental psychology courses, with results sometimes supporting the hypothesis and at other times not. Although literature describing "transactional ambiguity" and the hypothesis of the perceptual effect of mental set has largely disappeared from the scene, it remains an interesting and provocative use of the visually ambiguous demonstrations for which Ames was well known, and if true it provides an additional scientific foundation for the "eyewitness" phenomenon well known in law enforcement and research circles. See also Ames room Anamorphosis References Sources Behrens, R.R. (2009a). "Adelbert Ames II" entry in Camoupedia: A Compendium of Research on Art, Architecture and Camouflage. Dysart IA: Bobolink Books, pp. 25–26. Behrens, R.R. (2009b). "Ames Demonstrations in Perception" in E. Bruce Goldstein, ed., Encyclopedia of Perception. Sage Publications, pp. 41–44. External links Adelbert Ames, Fritz Heider and the Ames Chair Demonstration Demonstration on The Curiosity Show Explanation by YouTube Producer Derek Muller Optical illusions
Ames window
[ "Physics" ]
590
[ "Optical phenomena", "Physical phenomena", "Optical illusions" ]
4,115,260
https://en.wikipedia.org/wiki/Mobile%20marketing
Mobile marketing is a multi-channel online marketing technique focused on reaching a specific audience on their smartphones, feature phones, tablets, or any other related devices through websites, e-mail, SMS and MMS, social media, or mobile applications. Mobile marketing can provide customers with time- and location-sensitive, personalized information that promotes goods, services, appointment reminders and ideas. In a more theoretical manner, academic Andreas Kaplan defines mobile marketing as "any marketing activity conducted through a ubiquitous network to which consumers are constantly connected using a personal mobile device". SMS marketing Marketing through cellphone SMS (Short Message Service) became increasingly popular in the early 2000s in Europe and some parts of Asia when businesses started to collect mobile phone numbers and send off wanted (or unwanted) content. On average, SMS messages have a 98% open rate and are read within 3 minutes, making them highly effective at reaching recipients quickly. Over the past few years, SMS marketing has become a legitimate advertising channel in some parts of the world. This is because, unlike email over the public internet, the carriers who police their own networks have set guidelines and best practices for the mobile media industry (including mobile advertising). The IAB (Interactive Advertising Bureau) has established guidelines and is evangelizing the use of the mobile channel for marketers. While this has been fruitful in developed regions such as North America, Western Europe and some other countries, mobile SPAM messages (SMS sent to mobile subscribers without a legitimate and explicit opt-in by the subscriber) remain an issue in many other parts of the world, partly due to carriers selling their member databases to third parties. In India, however, the government's efforts to create the National Do Not Call Registry have helped cellphone users stop SMS advertisements by sending a simple SMS or calling 1909. Mobile marketing approaches through SMS have expanded rapidly in Europe and Asia as a new channel to reach the consumer. SMS initially received negative media coverage in many parts of Europe for being a new form of spam, as some advertisers purchased lists and sent unsolicited content to consumers' phones; however, as guidelines were put in place by the mobile operators, SMS has become the most popular branch of the mobile marketing industry, with several hundred million advertising SMS messages sent out every month in Europe alone. This is thanks in part to SMS messages being hardware agnostic: they can be delivered to practically any mobile phone, smartphone or feature phone and accessed without a Wi-Fi or mobile data connection. This is important to note, since there were over 5 billion unique mobile phone subscribers worldwide in 2017, about 66% of the world population. The mobile phone has since become a central device in people's lives, one many people cannot live without. Mobile technology creates business opportunities by connecting businesses and consumers at any time and place. Because of this, digital marketing has become more essential, and mobile marketing is one of the newer digital marketing channels being adopted; it lets consumers learn about the features of products without having to visit a physical store. SMS marketing has both inbound and outbound marketing strategies. 
Inbound marketing focuses on lead generation, and outbound marketing focuses on sending messages for sales, promotions, contests, donations, television program voting, appointments and event reminders. There are 5 key components to SMS marketing: sender ID, message size, content structure, spam compliance, and message delivery. Sender ID A sender ID is a name or number that identifies who the sender is. For commercial purposes, virtual numbers, short codes, SIM hosting, and custom names are most commonly used and can be leased through bulk SMS providers. Shared Virtual Numbers As the name implies, shared virtual numbers are shared by many different senders. They're usually free, but they can't receive SMS replies, and the number changes from time to time without notice or consent. Senders may have different shared virtual numbers on different days, which may make it confusing or untrustworthy for recipients depending on the context. For example, shared virtual numbers may be suitable for 2-factor authentication text messages, as recipients are often expecting these text messages, which are often triggered by actions that the recipients make. But for text messages that the recipient isn't expecting, like a sales promotion, a dedicated virtual number may be preferred. Dedicated Virtual Numbers To avoid sharing numbers with other senders, and for brand recognition and number consistency, leasing a dedicated virtual number, which is also known as a long code or long number (international number format, e.g. +44 7624 805000 or US number format, e.g. 757 772 8555), is a viable option. Unlike a shared number, it can receive SMS replies. Senders can choose from a list of available dedicated virtual numbers from a bulk SMS provider. Prices for dedicated virtual numbers can vary. Some numbers, often called Gold numbers, are easier to recognize, and therefore more expensive to lease. Senders may also get creative and choose a vanity number. These numbers spell out a word or phrase using the keypad, like +1-(123)-ANUMBER. Short codes Shortcodes offer very similar features to a dedicated virtual number but are short mobile numbers that are usually 5-6 digits. Their length and availability differ from country to country. These are usually more expensive and are commonly used by enterprises and governmental organizations. For mass messaging, shortcodes are preferred over a dedicated virtual number because of their higher throughput, and they are great for time-sensitive campaigns and emergencies. In Europe, the first cross-carrier SMS shortcode campaign was run by Txtbomb in 2001 for an Island Records release; in North America, the first was run by the Labatt Brewing Company in 2002. Over the past few years, mobile short codes have become increasingly popular as a new channel to communicate with the mobile consumer. Brands have begun to treat the mobile shortcode as a mobile domain name, allowing the consumer to text message the brand at an event, in-store, or from any traditional media. Short codes provide a direct line between a brand and its customer base. Once a company has a dedicated short code, it is able to message its audience directly without worrying whether the messages are being delivered, unlike long code D.I.D.s (Direct Inward Dial, another term for phone number). Whereas long-code texts face a higher level of scrutiny, short codes offer unrivalled throughput without triggering red flags from the carriers. 
SIM hosting Physical and virtual SIM hosting allows a mobile number sourced from a carrier to be used for receiving SMS as part of a marketing campaign. The SIM associated with the number is hosted by a bulk SMS provider. With physical SIM hosting, a SIM is physically hosted in a GSM modem and SMS received by the SIM are relayed to the customer. With virtual SIM hosting, the SIM is roamed onto the bulk SMS provider's partner mobile network, and SMS sent to the mobile number are routed from the mobile network's SS7 network to an SMSC or virtual mobile gateway, and then on to the customer. Custom Sender ID A custom sender ID, also known as an alphanumeric sender ID, enables users to set a business name as the sender ID for one-way organization-to-consumer messages. Custom sender IDs are supported only in certain countries, are up to 11 characters long, and support uppercase and lowercase ASCII letters and digits 0-9. Senders are not allowed to use digits only, as this would mimic a shortcode or virtual number that they do not have access to. Reputable bulk SMS providers will check customer sender IDs beforehand to make sure senders are not misusing or abusing them. Message Size The message size determines the number of SMS messages that are sent, which in turn determines the amount of money spent on marketing a product or service. Not all characters in a message are the same size. A single SMS message has a maximum size of 1120 bits. This is important because there are two types of character encodings, GSM and Unicode. Latin-based languages like English use GSM encoding, at 7 bits per character. This is where text messages typically get their 160-characters-per-SMS limit. Long messages that exceed this limit are concatenated. They are split into smaller messages, which are recombined by the receiving phone. Concatenated messages can only fit 153 characters instead of 160. For example, a 177-character message is sent as 2 messages. The first is sent with 153 characters and the second with 24 characters. The process of SMS concatenation can happen up to 4 times for most bulk SMS providers, which allows senders a maximum of 612-character messages per campaign. Non-Latin-based languages, like Chinese, and also emojis use a different encoding, UCS-2 (commonly referred to simply as Unicode). It is meant to encompass all characters, but with a caveat: each character is 16 bits in size, which takes more information to send, therefore limiting SMS messages to 70 characters. Messages that are larger than 70 characters are also concatenated. These messages can fit 67 characters each and can be concatenated up to 4 times for a maximum of 268 characters. Content Structure Special elements that can be placed inside a text message include: Unicode Characters: Send SMS in different languages, with special characters or emojis Keywords: Use keywords to trigger an automated response Links: Track campaigns easily by using shortened URLs to custom landing pages Interactive Elements: Pictures, animations, audio, or video Texting is simple; when it comes to SMS marketing, however, there are many different content structures that can be implemented. Popular message types include sale alerts, reminders, keywords, and multimedia messaging services (MMS). SMS Sales Alerts Sale alerts are the most basic form of SMS marketing. They are generally used for clearance, flash sales, and special promotions. 
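The message-size rules above translate directly into the segment arithmetic that determines campaign cost. The following C++ sketch is illustrative only; the helper name and interface are assumptions for this example, not any provider's API:

    #include <cstddef>

    // Number of SMS segments a message of 'length' characters occupies.
    // GSM 7-bit messages hold 160 characters, or 153 per part once
    // concatenated; UCS-2 messages hold 70, or 67 per part (limits above).
    std::size_t sms_segments(std::size_t length, bool ucs2) {
        const std::size_t single = ucs2 ? 70 : 160; // limit for one message
        const std::size_t part   = ucs2 ? 67 : 153; // limit per concatenated part
        if (length <= single) {
            return 1;                               // fits in a single message
        }
        return (length + part - 1) / part;          // ceiling division
    }

    // sms_segments(177, false) == 2  (153 + 24, the worked example above)
    // sms_segments(612, false) == 4  (the 4-part maximum mentioned above)
    // sms_segments(71,  true)  == 2  (UCS-2 message just over the 70 limit)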
Typical sale-alert messages include coupon codes and information like expiration dates, products, and website links for additional information. SMS Transaction Alerts Transaction alerts are used by financial institutions to notify their customers about financial transactions on their accounts. Some SMS messages highlight only the amount transacted, while others also include the remaining account balance. SMS Reminders Reminders are commonly used in appointment-based industries or for recurring events. Some senders choose to ask their recipients to respond to the reminder text with an SMS keyword to confirm their appointment. This can help improve the sender's workflow and reduce missed appointments, leading to improved productivity and revenue. SMS Keywords This allows people to text a custom keyword to a dedicated virtual number or short code. Through custom keywords, users can opt in to a service with minimal effort. Once a keyword is triggered, an autoresponder can be set to guide the user to the next step. Keywords can also activate different functions, which include entering a contest, forwarding to an email or mobile number, group chat, and sending an auto-response. Keywords also allow users to opt in to receive further marketing correspondence. Long-code numbers face higher levels of scrutiny from telecom companies: when sending SMS messages through a long code, senders are unable to include a link in the first message. This is done at the carrier level to help cut down on spam. Using keyword responses, a company can create a bridge between itself and the user. Carriers will recognize users responding to an SMS with a keyword as a conversation and will allow links to be delivered. Spam Compliance Similar to email, SMS has anti-spam laws which differ from country to country. As a general rule, it's important to obtain the recipient's permission before sending any text message, especially an SMS marketing type of message. Permission can be obtained in a myriad of ways, including allowing prospects or customers to tick a permission checkbox on a website, filling in a form, or getting a verbal agreement. In most countries, SMS senders need to identify themselves as their business name inside their initial text message. Identification can be placed in either the sender ID or within the message body copy. Spam prevention laws may also apply to SMS marketing messages, which must include a method to opt out of messages. One key criterion for provisioning is that the consumer opts in to the service. The mobile operators demand a double opt-in from the consumer and the ability for the consumer to opt out of the service at any time by sending the word STOP via SMS. These guidelines are established in the CTIA Playbook and the MMA Consumer Best Practices Guidelines, which are followed by all mobile marketers in the United States. In Canada, opt-in became mandatory once the Fighting Internet and Wireless Spam Act came into force in 2014. Message Delivery Simply put, SMS infrastructure is made up of special servers that talk to each other, using software called a Short Message Service Centre (SMSC) and a special protocol called Short Message Peer-to-Peer (SMPP). Through the SMPP connections, bulk SMS providers (also known as SMS gateways) can send text messages and process SMS replies and delivery receipts. 
When a user sends messages through a bulk SMS provider, they get delivered to the recipient's carrier via an ON-NET connection or the international SS7 network. SS7 Network Operators around the world are connected by a network known as Signaling System #7. It's used to exchange information related to phone calls, number translations, and prepaid billing systems, and is the backbone of SMS. SS7 is what carriers around the world use to talk to each other. ON-NET Routing ON-NET routing is the most popular form of messaging globally. It's the most reliable and preferable way for telecommunications carriers to receive messages, as the messages from the bulk SMS provider are sent to them directly. For senders that need consistency and reliability, seeking a provider that uses ON-NET routing should be the preferred option. Grey Routing Grey routing is a term given to messages that are sent to carriers (often offshore) that have low-cost interconnect agreements with other carriers. Instead of sending the messages directly to the intended carrier, some bulk SMS providers send them to an offshore carrier, which will relay the message to the intended carrier. This roundabout way is cheaper, but it comes at the cost of consistency and reliability: grey routes are slower and can disappear without notice. Many carriers don't like this type of routing and will often block it with filters set up in their SMSCs. Hybrid Routing Some bulk SMS providers have the option to combine cheaper grey routing on lower-value carriers with their more reliable ON-NET offerings. If the routes are managed well, then messages can be delivered reliably. Hybrid routing is more common for SMS marketing messages, where timeliness and reliable delivery are less of an issue. SMS Service Providers The easiest and most efficient way of sending an SMS marketing campaign is through a bulk SMS service provider. Enterprise-grade SMS providers will usually allow new customers the option to sign up for a free trial account before committing to their platform. Reputable companies also offer free spam compliance, real-time reporting, link tracking, an SMS API, multiple integration options, and a 100% delivery guarantee. Most providers can provide link shorteners and built-in analytics to help track the return on investment of each campaign. Depending on the service provider and country, each text message can cost up to a few cents. Senders intending to send large volumes of text messages per month or per year may get discounts from service providers. Since spam laws differ from country to country, SMS service providers are usually location-specific; message pricing, message delivery, and service offerings also differ substantially from country to country. MMS MMS mobile marketing can contain a timed slideshow of images, text, audio and video. This mobile content is delivered via MMS (Multimedia Message Service). Nearly all new phones produced with a color screen are capable of sending and receiving standard MMS messages. Brands are able to both send (mobile terminated) and receive (mobile originated) rich content through MMS A2P (application-to-person) mobile networks to mobile subscribers. In some networks, brands are also able to sponsor messages that are sent P2P (person-to-person). 
A typical MMS message based on the GSM encoding can have up to 1500 characters, whereas one based on Unicode can have up to 500 characters. Messages that are longer than the limit are truncated, not concatenated like an SMS. Good examples of mobile-originated MMS marketing campaigns are Motorola's ongoing campaigns at House of Blues venues, where the brand allows the consumer to send their mobile photos to the LED board in real time as well as blog their images online. Push notifications Push notifications were first introduced to smartphones by Apple with the Push Notification Service in 2009. For Android devices, Google developed Android Cloud to Device Messaging, or C2DM, in 2010. Google replaced this service with Google Cloud Messaging in 2013. Commonly referred to as GCM, Google Cloud Messaging served as C2DM's successor, making improvements to authentication and delivery, new API endpoints and messaging parameters, and the removal of limitations on API send-rates and message sizes. A push notification is a message that pops up on a mobile device: the delivery of information from a software application to a computing device without any request from the client or the user. Push notifications look like SMS notifications but reach only the users who installed the app. The specifications vary for iOS and Android users. SMS and push notifications can be part of a well-developed inbound mobile marketing strategy. According to mobile marketing company Leanplum, Android sees open rates nearly twice as high as those on iOS: 3.48 percent for push notifications on Android, versus 1.77 percent on iOS. App-based marketing With the strong growth in the use of smartphones, app usage has also greatly increased. The annual number of mobile app downloads has grown exponentially over the last few years, reaching hundreds of billions of downloads in 2018, with the number expected to climb further by 2022. Therefore, mobile marketers have increasingly taken advantage of smartphone apps as a marketing resource. Marketers aim to optimize the visibility of an app in a store, which will maximize the number of downloads. This practice is called App Store Optimization (ASO). There is much competition in this field as well, and, as with other services, it is no longer easy to dominate the mobile application market. Most companies have acknowledged the potential of mobile apps to increase the interaction between a company and its target customers. With the fast progress and growth of the smartphone market, high-quality mobile app development is essential to obtain a strong position in a mobile app store. The term app marketing has not yet received a unified scientific definition and is used in various ways in practice. On the one hand, the term refers to those activities that serve to generate app downloads and thus attract new users for a mobile app. In some cases, the term is also used to describe the promotional sending of push notifications and in-app messages. Several models for app marketing follow. 1. Content-embedded mode: At present, most apps are free to download from app stores, so app developers need a way to monetize that traffic. Implanted advertising that combines content marketing and game characters can integrate seamlessly into the user experience and improve advertising hits. With these free apps, developers also profit through in-app purchases or subscriptions. 2. 
Advertising mode: Advertisement implantation is a common marketing mode in most apps. Through banner ads, consumer announcements, or in-screen advertising, users jump to a specified page that displays the advertising content when they click. This model is intuitive and can attract users' attention quickly. 3. User participation mode: This is mainly applied to website ports and brand apps. The company publishes its own brand app to the app store for users to download, so that users can better understand the enterprise or product information. As a practical tool, such an app brings great convenience to users' lives. It also gives users a more intimate experience, so that they understand the product, the brand image of the enterprise is enhanced, and the user's loyalty is won. 4. Shopping-website embedded mode: This embeds a traditional e-commerce platform in a mobile app, making it convenient for users to browse product information anytime and anywhere, place orders, and track them. This model has driven the transformation of traditional e-commerce enterprises toward mobile Internet channels and is a necessary path for online and offline interactive development, as with Amazon, eBay and so on. These are the more popular app marketing patterns; while the details are not covered here, they offer a preliminary understanding of app marketing. In-game mobile marketing There are essentially three major trends in mobile gaming right now: interactive real-time 3D games, massive multi-player games and social networking games. This means a trend towards more complex and more sophisticated, richer game play. On the other side, there are the so-called casual games, i.e. games that are very simple and very easy to play. Most mobile games today are such casual games and this will probably stay so for quite a while to come. Brands are now delivering promotional messages within mobile games or sponsoring entire games to drive consumer engagement. This is known as mobile advergaming or ad-funded mobile gaming. In in-game mobile marketing, advertisers pay to have their name or products featured in the mobile games. For instance, racing games can feature real cars made by Ford or Chevy. Advertisers have been both creative and aggressive in their attempts to integrate ads organically in the mobile games. Although investment in mobile marketing strategies like advergaming is slightly higher than for a mobile app, a good strategy can help the brand derive substantial revenue. Games that use advergaming help users remember the brand involved better. This memorization increases the virality of the content, so users tend to recommend the games to their friends and acquaintances and share them via social networks. One form of in-game mobile advertising is the playable ad, which allows players to actually play. As a new and effective form of advertising, it allows consumers to try out the content before they install it. This type of marketing can be especially attractive to users such as casual players. These advertisements blur the line between game and advertising and provide players with a richer experience that allows them to spend their time interacting with advertising. 
This kind of advertisement is not only interesting but also brings benefits to marketers: this kind of in-game mobile marketing can create more effective conversion rates because it is interactive and converts faster than general advertising. Moreover, games can also offer a stronger lifetime value, qualifying consumers in advance and providing a more in-depth experience. This type of advertising can therefore be more effective in improving user stickiness than advertising channels such as stories and video. QR codes QR codes are two-dimensional barcodes that are scanned with a mobile phone camera. They can take a user to the particular advertising webpage the QR code is attached to. QR codes are often used in mobile gamification, where they appear as surprises during a mobile app game and direct users to a specific landing page. Such codes are also a bridge between physical media and the online world via mobile: businesses print QR codes on promotional posters, brochures, postcards, and other physical advertising materials. Bluetooth Bluetooth technology is a short-range wireless digital communication technology that allows devices to communicate without the now-superseded RS-232 cables. Proximity systems Mobile marketing via proximity systems, or proximity marketing, relies on GSM 03.41, which defines the Short Message Service - Cell Broadcast. SMS-CB allows messages (such as advertising or public information) to be broadcast to all mobile users in a specified geographical area. In the Philippines, GSM-based proximity broadcast systems are used by select government agencies for information dissemination on government-run community-based programs, to take advantage of its reach and popularity (the Philippines has the world's highest traffic of SMS). It is also used for a commercial service known as Proxima SMS. Bluewater, a super-regional shopping centre in the UK, has a GSM-based system supplied by NTL to help its GSM coverage for calls; it also allows each customer with a mobile phone to be tracked through the centre, recording which shops they go into and for how long. The system enables special-offer texts to be sent to the phone. For example, a retailer could send a mobile text message to those customers in their database who have opted in and who happen to be walking in a mall. That message could say "Save 50% in the next 5 minutes only when you purchase from our store." Snack company Mondelez International, maker of Cadbury and Oreo products, has committed to exploring proximity-based messaging, citing significant gains in point-of-purchase influence. Location-based services Location-based services (LBS) are offered by some cell phone networks as a way to send custom advertising and other information to cell-phone subscribers based on their current location. The cell-phone service provider gets the location from a GPS chip built into the phone, or using radiolocation and trilateration based on the signal strength of the closest cell-phone towers (for phones without GPS features). In the United Kingdom, which launched location-based services in 2003, networks do not use trilateration; LBS uses a single base station, with a "radius" of inaccuracy, to determine a phone's location. Some location-based services work without GPS tracking techniques, instead transmitting content between devices peer-to-peer. There are various methods for companies to utilize a device's location. 1. Store locators: utilizing location-based feedback, the nearest store location can be found rapidly by retail clients (see the sketch further below). 
2. Proximity-based marketing: companies can deliver advertisements only to individuals in the same geographical location. Location-based services send advertisements to prospective customers in the area who may actually act on the information. 3. Travel information: location-based services can provide real-time information to smartphones, such as traffic conditions and weather forecasts, so that customers can plan accordingly. 4. Roadside assistance: in the event of a sudden traffic accident, a roadside assistance company can offer an app that tracks the customer's real-time location without the customer needing to give directions. Ringless voicemail Advances in mobile technology have made it possible to leave a voicemail message on a mobile phone without ringing the line. The technology was pioneered by VoAPP, which used it in conjunction with live operators as a debt collection service. The FCC has ruled that the technology is compliant with all regulations. CPL expanded on the existing technology to allow for a completely automated process, including the replacement of live operators with pre-recorded messages. User-controlled media Mobile marketing differs from most other forms of marketing communication in that it is often initiated by the user (consumer) as a mobile-originated (MO) message and requires the express consent of the consumer to receive future communications. A call delivered from a server (business) to a user (consumer) is called a mobile-terminated (MT) message. This infrastructure points to a trend set by mobile marketing of consumer-controlled marketing communications. Due to the demand for more user-controlled media, mobile messaging infrastructure providers have responded by developing architectures that offer applications to operators with more freedom for the users, as opposed to network-controlled media. Along with these advances to user-controlled Mobile Messaging 2.0, blog events throughout the world have been implemented in order to popularize the latest advances in mobile technology. In June 2007, Airwide Solutions became the official sponsor of the Mobile Messaging 2.0 blog, which provides the opinions of many through the discussion of mobility with freedom. GPS plays an important role in location-based marketing. Privacy concerns Mobile advertising has become more and more popular. However, some mobile advertising is sent without the required permission from the consumer, causing privacy violations. It should be understood that, irrespective of how well advertising messages are designed and how many additional possibilities they provide, if consumers do not have confidence that their privacy will be protected, this will hinder their widespread deployment. But if the messages originate from a source where the user is enrolled in a relationship/loyalty program, privacy is not considered violated, and even interruptions can generate goodwill. The privacy issue became even more salient than before with the arrival of mobile data networks. A number of important new concerns emerged, mainly stemming from the fact that mobile devices are intimately personal and are always with the user; four major concerns can be identified: mobile spam, personal identification, location information and wireless security. Aggregate presence of mobile phone users could be tracked in a privacy-preserving fashion. 
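Returning to the store-locator use case listed under location-based services above: the following C++ sketch is illustrative only. The Store record, the function names, and the linear scan are assumptions for this example (a production locator would typically query a spatial index); it uses the haversine great-circle distance:

    #include <cmath>
    #include <limits>
    #include <string>
    #include <vector>

    struct Store {                        // hypothetical store record
        std::string name;
        double lat, lon;                  // position in degrees
    };

    // Great-circle distance in kilometres via the haversine formula.
    double haversine_km(double lat1, double lon1, double lat2, double lon2) {
        const double pi = 3.14159265358979323846;
        const double earth_radius_km = 6371.0;
        const double to_rad = pi / 180.0;
        const double dlat = (lat2 - lat1) * to_rad;
        const double dlon = (lon2 - lon1) * to_rad;
        const double a = std::sin(dlat / 2) * std::sin(dlat / 2) +
                         std::cos(lat1 * to_rad) * std::cos(lat2 * to_rad) *
                         std::sin(dlon / 2) * std::sin(dlon / 2);
        return 2.0 * earth_radius_km * std::asin(std::sqrt(a));
    }

    // Return the store nearest the device's reported position,
    // or nullptr if the list is empty.
    const Store* nearest_store(const std::vector<Store>& stores,
                               double lat, double lon) {
        const Store* best = nullptr;
        double best_km = std::numeric_limits<double>::max();
        for (const Store& s : stores) {
            const double km = haversine_km(lat, lon, s.lat, s.lon);
            if (km < best_km) {
                best_km = km;
                best = &s;
            }
        }
        return best;
    }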
Classification Kaplan categorizes mobile marketing along the degree of consumer knowledge and the trigger of communication into four groups: strangers, groupies, victims, and patrons. Consumer knowledge can be high or low, and according to its degree organizations can customize their messages to each individual user, similar to the idea of one-to-one marketing. Regarding the trigger of communication, Kaplan differentiates between push communication, initiated by the organization, and pull communication, initiated by the consumer. Within the first group (low knowledge/push), organizations broadcast a general message to a large number of mobile users. Given that the organization cannot know which customers have ultimately been reached by the message, this group is referred to as "strangers". Within the second group (low knowledge/pull), customers opt to receive information but do not identify themselves when doing so. The organization therefore does not know exactly which specific clients it is dealing with, which is why this cohort is called "groupies". In the third group (high knowledge/push), referred to as "victims", organizations know their customers and can send them messages and information without first asking permission. The last group (high knowledge/pull), the "patrons", covers situations where customers actively give permission to be contacted and provide personal information about themselves, which allows for one-to-one communication without running the risk of annoying them. References Mobile content
Mobile marketing
[ "Technology" ]
6,327
[ "Mobile content", "Mobile marketing" ]
4,115,757
https://en.wikipedia.org/wiki/Context%20management
Context management is a dynamic computer process that uses 'subjects' of data in one application to point to data resident in a separate application that also contains the same subject. Context management allows users to choose a subject once in one application and have all other applications containing information on that same subject 'tune' to the data they contain, thus eliminating the need to redundantly select the subject in the various applications. As an example from the healthcare industry, where context management is widely used: multiple applications operating "in context" through use of a context manager allow a user to select a patient (i.e., the subject) in one application; when the user enters another application, that patient's information is already pre-fetched and presented, obviating the need to re-select the patient in the second application. As the user 'drills' further into the application (e.g., test, result, diagnosis), all context-aware applications continue to drill down into the data, in context with the user's requests. Context management is gaining in prominence in healthcare due to the creation of the Clinical Context Object Workgroup (CCOW), a standards committee which has created a standardized protocol enabling applications to function in a 'context-aware' state. Context management is gaining prominence in the business-rules market as well. Knowing the context of any information exchange is critical. For example, a seller may need to know such things as: is this shipment urgent, is this a preferred customer, do they need English or Spanish, what model is the part for? Without context, mistakes and run-on costs rapidly ensue. In automating information integration, knowing and defining the context of use is the single most pervasive and important factor. This context mechanism is at the heart of allowing users to quantify precisely what their context factors are; this removes the guesswork from business transaction exchanges between partners and allows them to formalize their collaboration agreements exactly. References External links OASIS Content Assembly Mechanism (CAM) TC CCOW Resources Standards Computer file formats XML-based standards
Context management
[ "Technology" ]
424
[ "Computer standards", "XML-based standards" ]
5,481,447
https://en.wikipedia.org/wiki/C%2B%2B11
C++11 is a version of a joint technical standard, ISO/IEC 14882, by the International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC), for the C++ programming language. C++11 replaced the prior version of the C++ standard, named C++03, and was later replaced by C++14. The name follows the tradition of naming language versions by the publication year of the specification, though it was formerly named C++0x because it was expected to be published before 2010. Although one of the design goals was to prefer changes to the libraries over changes to the core language, C++11 does make several additions to the core language. Areas of the core language that were significantly improved include multithreading support, generic programming support, uniform initialization, and performance. Significant changes were also made to the C++ Standard Library, incorporating most of the C++ Technical Report 1 (TR1) libraries, except the library of mathematical special functions. C++11 was published as ISO/IEC 14882:2011 in September 2011 and is available for a fee. The working draft most similar to the published C++11 standard is N3337, dated 16 January 2012; it has only editorial corrections from the C++11 standard. C++11 is fully supported by Clang 3.3 and later. C++11 is fully supported by GNU Compiler Collection (GCC) 4.8.1 and later. Design goals The design committee attempted to stick to a number of goals in designing C++11: Maintain stability and compatibility with older code Prefer introducing new features via the standard library, rather than extending the core language Improve C++ to facilitate systems and library design, rather than introduce new features useful only to specific applications Increase type safety by providing safer alternatives to earlier unsafe techniques Increase performance and the ability to work directly with hardware Provide proper solutions for real-world problems Make C++ easy to teach and to learn without removing any utility needed by expert programmers Attention to beginners is considered important, because most computer programmers will always be such, and because many beginners never widen their knowledge, limiting themselves to work in aspects of the language in which they specialize. Extensions to the C++ core language One function of the C++ committee is the development of the language core. Areas of the core language that were significantly improved include multithreading support, generic programming support, uniform initialization, and performance. Core language runtime performance enhancements These language features primarily exist to provide some kind of runtime performance benefit, either of memory or of computing speed. Rvalue references and move constructors In C++03 (and before), temporaries (termed "rvalues", as they often lie on the right side of an assignment) were intended to never be modifiable, just as in C, and were considered to be indistinguishable from const T& types; nevertheless, in some cases, temporaries could have been modified, a behavior that was even considered to be a useful loophole. C++11 adds a new non-const reference type called an rvalue reference, identified by T&&. This refers to temporaries that are permitted to be modified after they are initialized, for the purpose of allowing "move semantics". A chronic performance problem with C++03 is the costly and unneeded deep copies that can happen implicitly when objects are passed by value. 
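How such a move can work in practice is sketched below with a hypothetical Buffer class (an illustration, not standard library code): where the copy constructor must allocate and copy, the move constructor merely steals the source's pointer and nulls it out.

    #include <algorithm>
    #include <cstddef>

    class Buffer {                     // hypothetical RAII array wrapper
        std::size_t size_;
        int* data_;
    public:
        explicit Buffer(std::size_t n) : size_(n), data_(new int[n]()) {}

        // Copy constructor: the expensive deep copy described above.
        Buffer(const Buffer& other)
            : size_(other.size_), data_(new int[other.size_]) {
            std::copy(other.data_, other.data_ + size_, data_);
        }

        // Move constructor: take the temporary's pointer and null it out,
        // so no allocation or element copy occurs, and the temporary's
        // destructor then deletes nothing.
        Buffer(Buffer&& other) noexcept : size_(other.size_), data_(other.data_) {
            other.size_ = 0;
            other.data_ = nullptr;
        }

        ~Buffer() { delete[] data_; } // deleting a null pointer is a no-op
    };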
To illustrate the issue, consider that an std::vector<T> is, internally, a wrapper around a C-style array with a defined size. If an std::vector<T> temporary is created or returned from a function, it can be stored only by creating a new std::vector<T> and copying all the rvalue's data into it. Then the temporary and all its memory are destroyed. (For simplicity, this discussion neglects the return value optimization.) In C++11, a move constructor of std::vector<T> that takes an rvalue reference to an std::vector<T> can copy the pointer to the internal C-style array out of the rvalue into the new std::vector<T>, then set the pointer inside the rvalue to null. Since the temporary will never again be used, no code will try to access the null pointer, and because the pointer is null, its memory is not deleted when it goes out of scope. Hence, the operation not only forgoes the expense of a deep copy, but is safe and invisible. Rvalue references can provide performance benefits to existing code without needing to make any changes outside the standard library. The type of the returned value of a function returning an std::vector<T> temporary does not need to be changed explicitly to std::vector<T> && to invoke the move constructor, as temporaries are considered rvalues automatically. (However, if std::vector<T> is a C++03 version without a move constructor, then the copy constructor will be invoked with a const std::vector<T>&, incurring a significant memory allocation.) For safety reasons, some restrictions are imposed. A named variable will never be considered to be an rvalue even if it is declared as such. To get an rvalue, the function template std::move() should be used. Rvalue references can also be modified only under certain circumstances, being intended to be used primarily with move constructors. Due to the nature of the wording of rvalue references, and to some modification to the wording for lvalue references (regular references), rvalue references allow developers to provide perfect function forwarding. When combined with variadic templates, this ability allows for function templates that can perfectly forward arguments to another function that takes those particular arguments. This is most useful for forwarding constructor parameters, to create factory functions that will automatically call the correct constructor for those particular arguments. This is seen in the emplace_back set of the C++ standard library methods. constexpr – Generalized constant expressions C++ has always had the concept of constant expressions. These are expressions such as 3+4 that will always yield the same results, at compile time and at runtime. Constant expressions are optimization opportunities for compilers, and compilers frequently execute them at compile time and hardcode the results in the program. Also, in several places, the C++ specification requires using constant expressions. Defining an array requires a constant expression, and enumerator values must be constant expressions. However, a constant expression has never been allowed to contain a function call or object constructor. So a piece of code as simple as this is invalid: int get_five() {return 5;} int some_value[get_five() + 7]; // Create an array of 12 integers. Ill-formed C++ This was not valid in C++03, because get_five() + 7 is not a constant expression. A C++03 compiler has no way of knowing if get_five() actually is constant at runtime. In theory, this function could affect a global variable, call other non-runtime constant functions, etc. 
C++11 introduced the keyword constexpr, which allows the user to guarantee that a function or object constructor is a compile-time constant. The above example can be rewritten as follows: constexpr int get_five() {return 5;} int some_value[get_five() + 7]; // Create an array of 12 integers. Valid C++11 This allows the compiler to understand, and verify, that get_five() is a compile-time constant. Using constexpr on a function imposes some limits on what that function can do. First, the function must have a non-void return type. Second, the function body cannot declare variables or define new types. Third, the body may contain only declarations, null statements and a single return statement. There must exist argument values such that, after argument substitution, the expression in the return statement produces a constant expression. Before C++11, the values of variables could be used in constant expressions only if the variables are declared const, have an initializer which is a constant expression, and are of integral or enumeration type. C++11 removes the restriction that the variables must be of integral or enumeration type if they are defined with the constexpr keyword: constexpr double earth_gravitational_acceleration = 9.8; constexpr double moon_gravitational_acceleration = earth_gravitational_acceleration / 6.0; Such data variables are implicitly const, and must have an initializer which must be a constant expression. To construct constant expression data values from user-defined types, constructors can also be declared with constexpr. A constexpr constructor's function body can contain only declarations and null statements, and cannot declare variables or define types, as with a constexpr function. There must exist argument values such that, after argument substitution, it initializes the class's members with constant expressions. The destructors for such types must be trivial. The copy constructor for a type with any constexpr constructors should usually also be defined as a constexpr constructor, to allow objects of the type to be returned by value from a constexpr function. Any member function of a class, such as copy constructors, operator overloads, etc., can be declared as constexpr, so long as they meet the requirements for constexpr functions. This allows the compiler to copy objects at compile time, perform operations on them, etc. If a constexpr function or constructor is called with arguments which aren't constant expressions, the call behaves as if the function were not constexpr, and the resulting value is not a constant expression. Likewise, if the expression in the return statement of a constexpr function does not evaluate to a constant expression for a given invocation, the result is not a constant expression. constexpr differs from consteval, introduced in C++20, in that the latter must always produce a compile time constant, while constexpr does not have this restriction. Modification to the definition of plain old data In C++03, a class or struct must follow a number of rules for it to be considered a plain old data (POD) type. Types that fit this definition produce object layouts that are compatible with C, and they could also be initialized statically. 
The C++03 standard has restrictions on what types are compatible with C or can be statically initialized despite there being no technical reason a compiler couldn't accept the program; if someone were to create a C++03 POD type and add a non-virtual member function, this type would no longer be a POD type, could not be statically initialized, and would be incompatible with C despite no change to the memory layout. C++11 relaxed several of the POD rules, by dividing the POD concept into two separate concepts: trivial and standard-layout. A type that is trivial can be statically initialized. It also means that it is valid to copy data around via memcpy, rather than having to use a copy constructor. The lifetime of a trivial type begins when its storage is defined, not when a constructor completes. A trivial class or struct is defined as one that: Has a trivial default constructor. This may use the default constructor syntax (SomeConstructor() = default;). Has trivial copy and move constructors, which may use the default syntax. Has trivial copy and move assignment operators, which may use the default syntax. Has a trivial destructor, which must not be virtual. Constructors are trivial only if there are no virtual member functions of the class and no virtual base classes. Copy/move operations also require all non-static data members to be trivial. A type that is standard-layout means that it orders and packs its members in a way that is compatible with C. A class or struct is standard-layout, by definition, provided: It has no virtual functions It has no virtual base classes All its non-static data members have the same access control (public, private, protected) All its non-static data members, including any in its base classes, are in the same one class in the hierarchy The above rules also apply to all the base classes and to all non-static data members in the class hierarchy It has no base classes of the same type as the first defined non-static data member A class/struct/union is considered POD if it is trivial, standard-layout, and all of its non-static data members and base classes are PODs. By separating these concepts, it becomes possible to give up one without losing the other. A class with complex move and copy constructors may not be trivial, but it could be standard-layout and thus interoperate with C. Similarly, a class with public and private non-static data members would not be standard-layout, but it could be trivial and thus memcpy-able. Core language build-time performance enhancements Extern template In C++03, the compiler must instantiate a template whenever a fully specified template is encountered in a translation unit. If the template is instantiated with the same types in many translation units, this can dramatically increase compile times. There is no way to prevent this in C++03, so C++11 introduced extern template declarations, analogous to extern data declarations. C++03 has this syntax to oblige the compiler to instantiate a template: template class std::vector<MyClass>; C++11 now provides this syntax: extern template class std::vector<MyClass>; which tells the compiler not to instantiate the template in this translation unit. Core language usability enhancements These features exist for the primary purpose of making the language easier to use. These can improve type safety, minimize code repetition, make erroneous code less likely, etc. Initializer lists C++03 inherited the initializer-list feature from C. 
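Before moving on to initializer lists: the trivial/standard-layout distinction described above can be checked at compile time with the type traits C++11 added in <type_traits>. A brief sketch with two hypothetical example types:

    #include <type_traits>

    struct TrivialExample {        // all special members compiler-generated
        int x;
    private:
        int y;                     // mixed access control: not standard-layout
    };

    struct StandardLayoutExample { // uniform access, no virtual functions
        int x;
        int y;
        StandardLayoutExample(int a, int b) : x(a), y(b) {} // not trivial
    };

    static_assert(std::is_trivial<TrivialExample>::value,
                  "compiler-generated special members make it trivial");
    static_assert(!std::is_standard_layout<TrivialExample>::value,
                  "mixed access control breaks standard layout");
    static_assert(std::is_standard_layout<StandardLayoutExample>::value,
                  "layout is C-compatible");
    static_assert(!std::is_trivial<StandardLayoutExample>::value,
                  "a user-provided constructor breaks triviality");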
In this C-style form, a struct or array is given a list of arguments in braces, in the order of the members' definitions in the struct. These initializer-lists are recursive, so an array of structs or a struct containing other structs can use them. struct Object { float first; int second; }; Object scalar = {0.43f, 10}; //One Object, with first=0.43f and second=10 Object anArray[] = {{13.4f, 3}, {43.28f, 29}, {5.934f, 17}}; //An array of three Objects This is very useful for static lists, or initializing a struct to some value. C++ also provides constructors to initialize an object, but they are often not as convenient as the initializer list. However, C++03 allows initializer-lists only on structs and classes that conform to the Plain Old Data (POD) definition; C++11 extends initializer-lists, so they can be used for all classes including standard containers like std::vector. C++11 binds the concept to a template, called std::initializer_list. This allows constructors and other functions to take initializer-lists as parameters. For example: class SequenceClass { public: SequenceClass(std::initializer_list<int> list); }; This allows SequenceClass to be constructed from a sequence of integers, such as: SequenceClass some_var = {1, 4, 5, 6}; This constructor is a special kind of constructor, called an initializer-list-constructor. Classes with such a constructor are treated specially during uniform initialization (see below). The template class std::initializer_list<> is a first-class C++11 standard library type. It can be constructed statically by the C++11 compiler via use of the {} syntax without a type name in contexts where such braces will deduce to an std::initializer_list, or by explicitly specifying the type like std::initializer_list<SomeType>{args} (and so on for other varieties of construction syntax). The list can be copied once constructed, which is cheap and will act as a copy-by-reference (the class is typically implemented as a pair of begin/end pointers). An std::initializer_list is constant: its members cannot be changed once it is created, nor can the data in those members be changed (which rules out moving from them, requiring copies into class members, etc.). Although its construction is specially treated by the compiler, an std::initializer_list is a real type, and so it can be used in other places besides class constructors. Regular functions can take typed std::initializer_lists as arguments. For example: void function_name(std::initializer_list<float> list); // Copying is cheap; see above function_name({1.0f, -3.45f, -0.4f}); Examples of this in the standard library include the std::min() and std::max() templates taking std::initializer_lists of numeric type. Standard containers can also be initialized in these ways: std::vector<std::string> v = { "xyzzy", "plugh", "abracadabra" }; std::vector<std::string> v({ "xyzzy", "plugh", "abracadabra" }); std::vector<std::string> v{ "xyzzy", "plugh", "abracadabra" }; // see "Uniform initialization" below Uniform initialization C++03 has a number of problems with initializing types. Several ways to do this exist, and some produce different results when interchanged. The traditional constructor syntax, for example, can look like a function declaration, and steps must be taken to ensure that the compiler's most vexing parse rule will not mistake it for such. Only aggregates and POD types can be initialized with aggregate initializers (using SomeType var = {/*stuff*/};). 
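The most vexing parse mentioned above deserves a brief sketch; TimeKeeper and Timer here are illustrative types in a commonly cited form of the example:

    struct Timer {};

    struct TimeKeeper {
        explicit TimeKeeper(Timer) {}
        int get_time() const { return 0; } // stub for illustration
    };

    int main() {
        // Intended: a TimeKeeper constructed from a temporary Timer.
        // Actually parsed as: the declaration of a function 'time_keeper'
        // returning TimeKeeper and taking an unnamed pointer to a function
        // that returns Timer.
        TimeKeeper time_keeper(Timer());

        // C++03 workaround: extra parentheses force an expression.
        TimeKeeper tk((Timer()));
        return tk.get_time();
    }

With the brace syntax described next, TimeKeeper time_keeper{Timer{}}; cannot be mistaken for a function declaration.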
C++11 provides a syntax that allows for fully uniform type initialization that works on any object. It expands on the initializer list syntax: struct BasicStruct { int x; double y; }; struct AltStruct { AltStruct(int x, double y) : x_{x} , y_{y} {} private: int x_; double y_; }; BasicStruct var1{5, 3.2}; AltStruct var2{2, 4.3}; The initialization of var1 behaves exactly as though it were aggregate-initialization. That is, each data member of an object, in turn, will be copy-initialized with the corresponding value from the initializer-list. Implicit type conversion will be used where needed. If no conversion exists, or only a narrowing conversion exists, the program is ill-formed. The initialization of var2 invokes the constructor. One can also do this: struct IdString { std::string name; int identifier; }; IdString get_string() { return {"foo", 42}; //Note the lack of explicit type. } Uniform initialization does not replace constructor syntax, which is still needed at times. If a class has an initializer list constructor (TypeName(initializer_list<SomeType>);), then it takes priority over other forms of construction, provided that the initializer list conforms to the sequence constructor's type. The C++11 version of std::vector has an initializer list constructor for its template type. Thus this code: std::vector<int> the_vec{4}; will call the initializer list constructor, not the constructor of std::vector that takes a single size parameter and creates the vector with that size. To access the latter constructor, the user will need to use the standard constructor syntax directly. Type inference In C++03 (and C), to use a variable, its type must be specified explicitly. However, with the advent of template types and template metaprogramming techniques, the type of something, particularly the well-defined return value of a function, may not be easily expressed. Thus, storing intermediates in variables is difficult, possibly needing knowledge of the internals of a given metaprogramming library. C++11 allows this to be mitigated in two ways. First, the definition of a variable with an explicit initialization can use the auto keyword. This creates a variable of the specific type of the initializer: auto some_strange_callable_type = std::bind(&some_function, _2, _1, some_object); auto other_variable = 5; The type of some_strange_callable_type is simply whatever the particular template function overload of std::bind returns for those particular arguments. This type is easily determined procedurally by the compiler as part of its semantic analysis duties, but is not easy for the user to determine upon inspection. The type of other_variable is also well-defined, but it is easier for the user to determine. It is an int, which is the same type as the integer literal. This use of the keyword auto in C++ re-purposes the semantics of this keyword, which was originally used in the typeless predecessor language B in a related role of denoting an untyped automatic variable definition. Further, the keyword decltype can be used to determine the type of an expression at compile time. For example: int some_int; decltype(some_int) other_integer_variable = 5; This is more useful in conjunction with auto, since the type of an auto variable is known only to the compiler. However, decltype can also be very useful for expressions in code that makes heavy use of operator overloading and specialized types. auto is also useful for reducing the verbosity of the code. 
For instance, instead of writing for (std::vector<int>::const_iterator itr = myvec.cbegin(); itr != myvec.cend(); ++itr) the programmer can use the shorter for (auto itr = myvec.cbegin(); itr != myvec.cend(); ++itr) which can be further compacted since "myvec" implements begin/end iterators: for (const auto& x : myvec) This difference grows as the programmer begins to nest containers, though in such cases typedefs are a good way to decrease the amount of code. The type denoted by decltype can be different from the type deduced by auto. #include <vector> int main() { const std::vector<int> v(1); auto a = v[0]; // a has type int decltype(v[0]) b = 1; // b has type const int&, the return type of // std::vector<int>::operator[](size_type) const auto c = 0; // c has type int auto d = c; // d has type int decltype(c) e; // e has type int, the type of the entity named by c decltype((c)) f = c; // f has type int&, because (c) is an lvalue decltype(0) g; // g has type int, because 0 is an rvalue } Range-based for loop C++11 extends the syntax of the for statement to allow for easy iteration over a range of elements: int my_array[5] = {1, 2, 3, 4, 5}; // double the value of each element in my_array: for (int& x : my_array) x *= 2; // similar but also using type inference for array elements for (auto& x : my_array) x *= 2; This form of for, called the “range-based for”, will iterate over each element in the list. It will work for C-style arrays, initializer lists, and any type that has begin() and end() functions defined for it that return iterators. All the standard library containers that have begin/end pairs will work with the range-based for statement. Lambda functions and expressions C++11 provides the ability to create anonymous functions, called lambda functions. These are defined as follows: [](int x, int y) -> int { return x + y; } The return type (-> int in this example) can be omitted as long as all return expressions return the same type. A lambda can optionally be a closure. Alternative function syntax Standard C function declaration syntax was perfectly adequate for the feature set of the C language. As C++ evolved from C, it kept the basic syntax and extended it where needed. However, as C++ grew more complex, it exposed several limits, especially regarding template function declarations. For example, in C++03 this is invalid: template<class Lhs, class Rhs> Ret adding_func(const Lhs &lhs, const Rhs &rhs) {return lhs + rhs;} //Ret must be the type of lhs+rhs The type Ret is whatever the addition of types Lhs and Rhs will produce. Even with the aforementioned C++11 functionality of decltype, this is not possible: template<class Lhs, class Rhs> decltype(lhs+rhs) adding_func(const Lhs &lhs, const Rhs &rhs) {return lhs + rhs;} //Not valid C++11 This is not valid C++ because lhs and rhs have not yet been defined; they will not be valid identifiers until after the parser has parsed the rest of the function prototype. To work around this, C++11 introduced a new function declaration syntax, with a trailing-return-type: template<class Lhs, class Rhs> auto adding_func(const Lhs &lhs, const Rhs &rhs) -> decltype(lhs+rhs) {return lhs + rhs;} This syntax can be used for more mundane function declarations and definitions: struct SomeStruct { auto func_name(int x, int y) -> int; }; auto SomeStruct::func_name(int x, int y) -> int { return x + y; } The use of the “auto” keyword in this case is just part of the syntax and does not perform automatic type deduction in C++11. 
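A trailing return type is also the only way to state a return type in the lambda expressions introduced above, and the remark that a lambda can optionally be a closure deserves a small sketch. The function below is illustrative only:

    #include <algorithm>
    #include <vector>

    // Count how many elements exceed 'threshold' using a lambda closure.
    int count_above(const std::vector<int>& values, int threshold) {
        int count = 0;
        // [threshold, &count]: 'threshold' is captured by copy, 'count' by
        // reference, so the lambda body can modify the local variable above.
        // The trailing return type '-> void' could be omitted here.
        std::for_each(values.begin(), values.end(),
                      [threshold, &count](int v) -> void {
                          if (v > threshold) {
                              ++count;
                          }
                      });
        return count;
    }

    // Capture defaults also exist: [=] captures used variables by copy,
    // [&] captures them by reference.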
Starting with C++14, however, the trailing return type can be removed entirely and the compiler will deduce the return type automatically. Object construction improvement In C++03, constructors of a class are not allowed to call other constructors in an initializer list of that class. Each constructor must construct all of its class members itself or call a common member function, as follows: class SomeType { public: SomeType(int new_number) { Construct(new_number); } SomeType() { Construct(42); } private: void Construct(int new_number) { number = new_number; } int number; }; Constructors for base classes cannot be directly exposed to derived classes; each derived class must implement constructors even if a base class constructor would be appropriate. Non-constant data members of classes cannot be initialized at the site of the declaration of those members. They can be initialized only in a constructor. C++11 provides solutions to all of these problems. C++11 allows constructors to call other peer constructors (termed delegation). This allows constructors to utilize another constructor's behavior with a minimum of added code. Delegation has been used in other languages, e.g., Java and Objective-C. This syntax is as follows: class SomeType { int number; public: SomeType(int new_number) : number(new_number) {} SomeType() : SomeType(42) {} }; In this case, the same effect could have been achieved by making new_number a default parameter. The new syntax, however, allows the default value (42) to be expressed in the implementation rather than the interface, a benefit to maintainers of library code since default values for function parameters are "baked in" to call sites, whereas constructor delegation allows the value to be changed without recompilation of the code using the library. This comes with a caveat: C++03 considers an object to be constructed when its constructor finishes executing, but C++11 considers an object constructed once any constructor finishes execution. Since multiple constructors will be allowed to execute, this will mean that each delegating constructor will be executing on a fully constructed object of its own type. Derived class constructors will execute after all delegation in their base classes is complete. For base-class constructors, C++11 allows a class to specify that base class constructors will be inherited. Thus, the C++11 compiler will generate code to perform the inheritance and the forwarding of the derived class to the base class. This is an all-or-nothing feature: either all of that base class's constructors are forwarded or none of them are. Also, an inherited constructor will be shadowed if it matches the signature of a constructor of the derived class, and restrictions exist for multiple inheritance: class constructors cannot be inherited from two classes that use constructors with the same signature. The syntax is as follows: class BaseClass { public: BaseClass(int value); }; class DerivedClass : public BaseClass { public: using BaseClass::BaseClass; }; For member initialization, C++11 allows this syntax: class SomeClass { public: SomeClass() {} explicit SomeClass(int new_value) : value(new_value) {} private: int value = 5; }; Any constructor of the class will initialize value with 5, if the constructor does not override the initialization with its own. So the above empty constructor will initialize value as the class definition states, but the constructor that takes an int will initialize it to the given parameter. 
It can also use constructor or uniform initialization, instead of the assignment initialization shown above.

Explicit overrides and final

In C++03, it is possible to accidentally create a new virtual function, when one intended to override a base class function. For example:

struct Base
{
    virtual void some_func(float);
};

struct Derived : Base
{
    virtual void some_func(int);
};

Suppose Derived::some_func is intended to replace the base class version. But instead, because it has a different signature, it creates a second virtual function. This is a common problem, particularly when a user goes to modify the base class.

C++11 provides syntax to solve this problem.

struct Base
{
    virtual void some_func(float);
};

struct Derived : Base
{
    virtual void some_func(int) override; // ill-formed - doesn't override a base class method
};

The override special identifier means that the compiler will check the base class(es) to see if there is a virtual function with this exact signature. And if there is not, the compiler will indicate an error.

C++11 also adds the ability to prevent inheriting from a class, or simply to prevent overriding methods in derived classes. This is done with the special identifier final. For example:

struct Base1 final { };
struct Derived1 : Base1 { }; // ill-formed because the class Base1 has been marked final

struct Base2
{
    virtual void f() final;
};

struct Derived2 : Base2
{
    void f(); // ill-formed because the virtual function Base2::f has been marked final
};

In this example, the virtual void f() final; statement declares a new virtual function, but it also prevents derived classes from overriding it. It also has the effect of preventing derived classes from using that particular function name and parameter combination.

Neither override nor final is a language keyword. They are technically identifiers for declarator attributes: they gain special meaning as attributes only when used in those specific trailing contexts (after all type specifiers, access specifiers, member declarations (for struct, class and enum types) and declarator specifiers, but before initialization or code implementation of each declarator in a comma-separated list of declarators). They do not alter the declared type signature and do not declare or override any new identifier in any scope. The set of recognized declarator attributes may be extended in future versions of C++. Some compiler-specific extensions already recognize added declarator attributes, to provide code generation options or optimization hints to the compiler, to generate added data into the compiled code for debuggers, linkers, and deployment, to provide system-specific security attributes, to enhance reflective programming (reflection) abilities at runtime, or to provide added binding information for interoperability with other programming languages and runtime systems. These extensions may take parameters between parentheses after the declarator attribute identifier; for ANSI conformance, they should use the double-underscore prefix convention. In any other location, override and final can be valid identifiers for new declarations (and later use if they are accessible).

Null pointer constant and type

For the purposes of this section and this section alone, every occurrence of "0" is meant as "a constant expression which evaluates to 0, which is of type int". In reality, the constant expression can be of any integral type.
Since the dawn of C in 1972, the constant 0 has had the double role of constant integer and null pointer constant. The ambiguity inherent in the double meaning of 0 was dealt with in C by using the preprocessor macro NULL, which commonly expands to either ((void*)0) or 0. C++ forbids implicit conversion from void * to other pointer types, thus removing the benefit of casting 0 to void *. As a consequence, only 0 is allowed as a null pointer constant. This interacts poorly with function overloading: void foo(char *); void foo(int); If NULL is defined as 0 (which is usually the case in C++), the statement foo(NULL); will call foo(int), which is almost certainly not what the programmer intended, and not what a superficial reading of the code suggests. C++11 corrects this by introducing a new keyword to serve as a distinguished null pointer constant: nullptr. It is of type nullptr_t, which is implicitly convertible and comparable to any pointer type or pointer-to-member type. It is not implicitly convertible or comparable to integral types, except for bool. While the original proposal specified that an rvalue of type nullptr_t should not be convertible to bool, the core language working group decided that such a conversion would be desirable, for consistency with regular pointer types. The proposed wording changes were unanimously voted into the Working Paper in June 2008. A similar proposal was also brought to the C standard working group and was accepted for inclusion in C23. For backwards compatibility reasons, 0 remains a valid null pointer constant. char *pc = nullptr; // OK int *pi = nullptr; // OK bool b = nullptr; // OK. b is false. int i = nullptr; // error foo(nullptr); // calls foo(nullptr_t), not foo(int); /* Note that foo(nullptr_t) will actually call foo(char *) in the example above using an implicit conversion, only if no other functions are overloading with compatible pointer types in scope. If multiple overloadings exist, the resolution will fail as it is ambiguous, unless there is an explicit declaration of foo(nullptr_t). In standard types headers for C++11, the nullptr_t type should be declared as: typedef decltype(nullptr) nullptr_t; but not as: typedef int nullptr_t; // prior versions of C++ which need NULL to be defined as 0 typedef void *nullptr_t; // ANSI C which defines NULL as ((void*)0) */ Strongly typed enumerations In C++03, enumerations are not type-safe. They are effectively integers, even when the enumeration types are distinct. This allows the comparison between two enum values of different enumeration types. The only safety that C++03 provides is that an integer or a value of one enum type does not convert implicitly to another enum type. Further, the underlying integral type is implementation-defined; code that depends on the size of the enumeration is thus non-portable. Lastly, enumeration values are scoped to the enclosing scope. Thus, it is not possible for two separate enumerations in the same scope to have matching member names. C++11 allows a special classification of enumeration that has none of these issues. This is expressed using the enum class (enum struct is also accepted as a synonym) declaration: enum class Enumeration { Val1, Val2, Val3 = 100, Val4 // = 101 }; This enumeration is type-safe. Enum class values are not implicitly converted to integers. Thus, they cannot be compared to integers either (the expression Enumeration::Val4 == 101 gives a compile error). The underlying type of enum classes is always known. 
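A short hedged sketch of that type safety (the enumeration names are illustrative):

enum class Color { Red, Green };
enum class Fruit { Apple, Orange };

int main()
{
    Color c = Color::Red;
    // int i = c;                      // ill-formed: no implicit conversion to int
    // bool b = (c == Fruit::Apple);   // ill-formed: distinct enumeration types
    bool same = (c == Color::Red);     // OK: same scoped enumeration
    int i = static_cast<int>(c);       // an explicit conversion is still allowed
    return (same && i == 0) ? 0 : 1;
}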
The default type is int; this can be overridden to a different integral type as can be seen in this example: enum class Enum2 : unsigned int {Val1, Val2}; With old-style enumerations the values are placed in the outer scope. With new-style enumerations they are placed within the scope of the enum class name. So in the above example, Val1 is undefined, but Enum2::Val1 is defined. There is also a transitional syntax to allow old-style enumerations to provide explicit scoping, and the definition of the underlying type: enum Enum3 : unsigned long {Val1 = 1, Val2}; In this case the enumerator names are defined in the enumeration's scope (Enum3::Val1), but for backwards compatibility they are also placed in the enclosing scope. Forward-declaring enums is also possible in C++11. Formerly, enum types could not be forward-declared because the size of the enumeration depends on the definition of its members. As long as the size of the enumeration is specified either implicitly or explicitly, it can be forward-declared: enum Enum1; // Invalid in C++03 and C++11; the underlying type cannot be determined. enum Enum2 : unsigned int; // Valid in C++11, the underlying type is specified explicitly. enum class Enum3; // Valid in C++11, the underlying type is int. enum class Enum4 : unsigned int; // Valid in C++11. enum Enum2 : unsigned short; // Invalid in C++11, because Enum2 was formerly declared with a different underlying type. Right angle bracket C++03's parser defines “>>” as the right shift operator or stream extraction operator in all cases. However, with nested template declarations, there is a tendency for the programmer to neglect to place a space between the two right angle brackets, thus causing a compiler syntax error. C++11 improves the specification of the parser so that multiple right angle brackets will be interpreted as closing the template argument list where it is reasonable. This can be overridden by using parentheses around parameter expressions using the “>”, “>=” or “>>” binary operators: template<bool Test> class SomeType; std::vector<SomeType<1>2>> x1; // Interpreted as a std::vector of SomeType<true>, // followed by "2 >> x1", which is not valid syntax for a declarator. 1 is true. std::vector<SomeType<(1>2)>> x1; // Interpreted as std::vector of SomeType<false>, // followed by the declarator "x1", which is valid C++11 syntax. (1>2) is false. Explicit conversion operators C++98 added the explicit keyword as a modifier on constructors to prevent single-argument constructors from being used as implicit type conversion operators. However, this does nothing for actual conversion operators. For example, a smart pointer class may have an operator bool() to allow it to act more like a primitive pointer: if it includes this conversion, it can be tested with if (smart_ptr_variable) (which would be true if the pointer was non-null and false otherwise). However, this allows other, unintended conversions as well. Because C++ bool is defined as an arithmetic type, it can be implicitly converted to integral or even floating-point types, which allows for mathematical operations that are not intended by the user. In C++11, the explicit keyword can now be applied to conversion operators. As with constructors, it prevents using those conversion functions in implicit conversions. However, language contexts that specifically need a Boolean value (the conditions of if-statements and loops, and operands to the logical operators) count as explicit conversions and can thus use a bool conversion operator. 
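A minimal sketch (the Handle class is hypothetical) of an explicit conversion operator:

#include <cassert>

class Handle
{
public:
    explicit Handle(void* p) : ptr(p) {}
    explicit operator bool() const { return ptr != nullptr; } // C++11 explicit conversion operator
private:
    void* ptr;
};

int main()
{
    Handle h(nullptr);
    if (h) { assert(false); } // OK: an if-condition counts as an explicit conversion
    assert(!h);               // OK: operand of a logical operator
    // int n = h;             // ill-formed: no implicit conversion through the explicit operator
}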
For example, this feature solves cleanly the safe bool issue. Template aliases In C++03, it is possible to define a typedef only as a synonym for another type, including a synonym for a template specialization with all actual template arguments specified. It is not possible to create a typedef template. For example: template <typename First, typename Second, int Third> class SomeType; template <typename Second> typedef SomeType<OtherType, Second, 5> TypedefName; // Invalid in C++03 This will not compile. C++11 adds this ability with this syntax: template <typename First, typename Second, int Third> class SomeType; template <typename Second> using TypedefName = SomeType<OtherType, Second, 5>; The using syntax can also be used as type aliasing in C++11: typedef void (*FunctionType)(double); // Old style using FunctionType = void (*)(double); // New introduced syntax Unrestricted unions In C++03, there are restrictions on what types of objects can be members of a union. For example, unions cannot contain any objects that define a non-trivial constructor or destructor. C++11 lifts some of these restrictions. If a union member has a non trivial special member function, the compiler will not generate the equivalent member function for the union and it must be manually defined. This is a simple example of a union permitted in C++11: #include <new> // Needed for placement 'new'. struct Point { Point() {} Point(int x, int y): x_(x), y_(y) {} int x_, y_; }; union U { int z; double w; Point p; // Invalid in C++03; valid in C++11. U() {} // Due to the Point member, a constructor definition is now needed. U(const Point& pt) : p(pt) {} // Construct Point object using initializer list. U& operator=(const Point& pt) { new(&p) Point(pt); return *this; } // Assign Point object using placement 'new'. }; The changes will not break any existing code since they only relax current rules. Core language functionality improvements These features allow the language to do things that were formerly impossible, exceedingly verbose, or needed non-portable libraries. Variadic templates In C++11, templates can take variable numbers of template parameters. This also allows the definition of type-safe variadic functions. New string literals C++03 offers two kinds of string literals. The first kind, contained within double quotes, produces a null-terminated array of type const char. The second kind, defined as L"", produces a null-terminated array of type const wchar_t, where wchar_t is a wide-character of undefined size and semantics. Neither literal type offers support for string literals with UTF-8, UTF-16, or any other kind of Unicode encodings. C++11 supports three Unicode encodings: UTF-8, UTF-16, and UTF-32. The definition of the type char has been modified to explicitly express that it is at least the size needed to store an eight-bit coding of UTF-8, and large enough to contain any member of the compiler's basic execution character set. It was formerly defined as only the latter in the C++ standard itself, then relying on the C standard to guarantee at least 8 bits. Furthermore, C++11 adds two new character types: char16_t and char32_t. These are designed to store UTF-16 and UTF-32 respectively. Creating string literals for each of the supported encodings can be done thus: u8"I'm a UTF-8 string." u"This is a UTF-16 string." U"This is a UTF-32 string." The type of the first string is the usual const char[]. The type of the second string is const char16_t[] (note lower case 'u' prefix). 
The type of the third string is const char32_t[] (upper case 'U' prefix). When building Unicode string literals, it is often useful to insert Unicode code points directly into the string. To do this, C++11 allows this syntax: u8"This is a Unicode Character: \u2018." u"This is a bigger Unicode Character: \u2018." U"This is a Unicode Character: \U00002018." The number after the \u is a hexadecimal number; it does not need the usual 0x prefix. The identifier \u represents a 16-bit Unicode code point; to enter a 32-bit code point, use \U and a 32-bit hexadecimal number. Only valid Unicode code points can be entered. For example, code points on the range U+D800–U+DFFF are forbidden, as they are reserved for surrogate pairs in UTF-16 encodings. It is also sometimes useful to avoid escaping strings manually, particularly for using literals of XML files, scripting languages, or regular expressions. C++11 provides a raw string literal: R"(The String Data \ Stuff " )" R"delimiter(The String Data \ Stuff " )delimiter" In the first case, everything between the "( and the )" is part of the string. The " and \ characters do not need to be escaped. In the second case, the "delimiter( starts the string, and it ends only when )delimiter" is reached. The string delimiter can be any string up to 16 characters in length, including the empty string. This string cannot contain spaces, control characters, (, ), or the \ character. Using this delimiter string, the user can have the sequence )" within raw string literals. For example, R"delimiter("(a-z)")delimiter" is equivalent to "\"(a-z)\"". Raw string literals can be combined with the wide literal or any of the Unicode literal prefixes: u8R"XXX(I'm a "raw UTF-8" string.)XXX" uR"*(This is a "raw UTF-16" string.)*" UR"(This is a "raw UTF-32" string.)" User-defined literals C++03 provides a number of literals. The characters 12.5 are a literal that is resolved by the compiler as a type double with the value of 12.5. However, the addition of the suffix f, as in 12.5f, creates a value of type float that contains the value 12.5. The suffix modifiers for literals are fixed by the C++ specification, and C++03 code cannot create new literal modifiers. By contrast, C++11 enables the user to define new kinds of literal modifiers that will construct objects based on the string of characters that the literal modifies. Transformation of literals is redefined into two distinct phases: raw and cooked. A raw literal is a sequence of characters of some specific type, while the cooked literal is of a separate type. The C++ literal 1234, as a raw literal, is this sequence of characters '1', '2', '3', '4'. As a cooked literal, it is the integer 1234. The C++ literal 0xA in raw form is '0', 'x', 'A', while in cooked form it is the integer 10. Literals can be extended in both raw and cooked forms, with the exception of string literals, which can be processed only in cooked form. This exception is due to the fact that strings have prefixes that affect the specific meaning and type of the characters in question. All user-defined literals are suffixes; defining prefix literals is not possible. All suffixes starting with any character except underscore (_) are reserved by the standard. Thus, all user-defined literals must have suffixes starting with an underscore (_). User-defined literals processing the raw form of the literal are defined via a literal operator, which is written as operator "". 
An example follows:

OutputType operator "" _mysuffix(const char * literal_string)
{
    // assumes that OutputType has a constructor that takes a const char *
    OutputType ret(literal_string);
    return ret;
}

OutputType some_variable = 1234_mysuffix;
// assumes that OutputType has a get_value() method that returns a double
assert(some_variable.get_value() == 1234.0);

The assignment statement OutputType some_variable = 1234_mysuffix; executes the code defined by the user-defined literal function. This function is passed "1234" as a C-style string, so it has a null terminator.

An alternative mechanism for processing integer and floating-point raw literals is via a variadic template:

template<char...> OutputType operator "" _tuffix();

OutputType some_variable = 1234_tuffix;
OutputType another_variable = 2.17_tuffix;

This instantiates the literal processing function as operator "" _tuffix<'1', '2', '3', '4'>(). In this form, there is no null character terminating the string. The main purpose for doing this is to use C++11's constexpr keyword to ensure that the compiler will transform the literal entirely at compile time, assuming OutputType is a constexpr-constructible and copyable type, and the literal processing function is a constexpr function.

For numeric literals, the type of the cooked literal is either unsigned long long for integral literals or long double for floating-point literals. (Note: There is no need for signed integral types because a sign-prefixed literal is parsed as an expression containing the sign as a unary prefix operator and the unsigned number.) There is no alternative template form:

OutputType operator "" _suffix(unsigned long long);
OutputType operator "" _suffix(long double);

OutputType some_variable = 1234_suffix;      // Uses the 'unsigned long long' overload.
OutputType another_variable = 3.1416_suffix; // Uses the 'long double' overload.

In keeping with the new string prefixes described earlier, these are used for string literals:

OutputType operator "" _ssuffix(const char * string_values, size_t num_chars);
OutputType operator "" _ssuffix(const wchar_t * string_values, size_t num_chars);
OutputType operator "" _ssuffix(const char16_t * string_values, size_t num_chars);
OutputType operator "" _ssuffix(const char32_t * string_values, size_t num_chars);

OutputType some_variable = "1234"_ssuffix;   // Uses the 'const char *' overload.
OutputType some_variable = u8"1234"_ssuffix; // Uses the 'const char *' overload.
OutputType some_variable = L"1234"_ssuffix;  // Uses the 'const wchar_t *' overload.
OutputType some_variable = u"1234"_ssuffix;  // Uses the 'const char16_t *' overload.
OutputType some_variable = U"1234"_ssuffix;  // Uses the 'const char32_t *' overload.

There is no alternative template form. Character literals are defined similarly.

Multithreading memory model

C++11 standardizes support for multithreaded programming. There are two parts involved: a memory model which allows multiple threads to co-exist in a program, and library support for interaction between threads. (See this article's section on threading facilities.) The memory model defines when multiple threads may access the same memory location, and specifies when updates by one thread become visible to other threads.

Thread-local storage

In a multi-threaded environment, it is common for every thread to have some unique variables. This already happens for the local variables of a function, but it does not happen for global and static variables.
A new thread-local storage duration (in addition to the existing static, dynamic and automatic) is indicated by the storage specifier thread_local. Any object which could have static storage duration (i.e., lifetime spanning the entire execution of the program) may be given thread-local duration instead. The intent is that like any other static-duration variable, a thread-local object can be initialized using a constructor and destroyed using a destructor. Explicitly defaulted special member functions In C++03, the compiler provides, for classes that do not provide them for themselves, a default constructor, a copy constructor, a copy assignment operator (operator=), and a destructor. The programmer can override these defaults by defining custom versions. C++ also defines several global operators (such as operator new) that work on all classes, which the programmer can override. However, there is very little control over creating these defaults. Making a class inherently non-copyable, for example, may be done by declaring a private copy constructor and copy assignment operator and not defining them. Attempting to use these functions is a violation of the One Definition Rule (ODR). While a diagnostic message is not required, violations may result in a linker error. In the case of the default constructor, the compiler will not generate a default constructor if a class is defined with any constructors. This is useful in many cases, but it is also useful to be able to have both specialized constructors and the compiler-generated default. C++11 allows the explicit defaulting and deleting of these special member functions. For example, this class explicitly declares that a default constructor can be used: class SomeType { SomeType() = default; //The default constructor is explicitly stated. SomeType(OtherType value); }; Explicitly deleted functions A function can be explicitly disabled. This is useful for preventing implicit type conversions. The = delete specifier can be used to prohibit calling a function with particular parameter types. For example: void noInt(double i); void noInt(int) = delete; An attempt to call noInt() with an int parameter will be rejected by the compiler, instead of performing a silent conversion to double. Calling noInt() with a float still works. It is possible to prohibit calling the function with any type other than double by using a template: double onlyDouble(double d) {return d;} template<typename T> double onlyDouble(T) = delete; calling onlyDouble(1.0) will work, while onlyDouble(1.0f) will generate a compiler error. Class member functions and constructors can also be deleted. For example, it is possible to prevent copying class objects by deleting the copy constructor and operator =: class NonCopyable { NonCopyable(); NonCopyable(const NonCopyable&) = delete; NonCopyable& operator=(const NonCopyable&) = delete; }; Type long long int In C++03, the largest integer type is long int. It is guaranteed to have at least as many usable bits as int. This resulted in long int having size of 64 bits on some popular implementations and 32 bits on others. C++11 adds a new integer type long long int to address this issue. It is guaranteed to be at least as large as a long int, and have no fewer than 64 bits. The type was originally introduced by C99 to the standard C, and most C++ compilers supported it as an extension already. Static assertions C++03 provides two methods to test assertions: the macro assert and the preprocessor directive #error. 
However, neither is appropriate for use in templates: the macro tests the assertion at execution time, while the preprocessor directive tests the assertion during preprocessing, which happens before instantiation of templates. Neither is appropriate for testing properties that depend on template parameters.

C++11 introduces a new way to test assertions at compile time, using the new keyword static_assert. The declaration assumes this form:

static_assert(constant-expression, error-message);

Here are some examples of how static_assert can be used:

static_assert((GREEKPI > 3.14) && (GREEKPI < 3.15), "GREEKPI is inaccurate!");

template<class T>
struct Check
{
    static_assert(sizeof(int) <= sizeof(T), "T is not big enough!");
};

template<class Integral>
Integral foo(Integral x, Integral y)
{
    // std::is_integral requires <type_traits>
    static_assert(std::is_integral<Integral>::value, "foo() parameter must be an integral type.");
    return x + y;
}

When the constant expression is false the compiler produces an error message. The first example is similar to the preprocessor directive #error, although the preprocessor supports only integral constant expressions. In contrast, in the second example the assertion is checked at every instantiation of the template class Check.

Static assertions are useful outside of templates also. For instance, a given implementation of an algorithm might depend on the size of a long long being larger than an int, something the standard does not guarantee. Such an assumption is valid on most systems and compilers, but not all.

Allow sizeof to work on members of classes without an explicit object

In C++03, the sizeof operator can be used on types and objects. But it cannot be used to do this:

struct SomeType { OtherType member; };

sizeof(SomeType::member); // Does not work with C++03. Okay with C++11

This should return the size of OtherType. C++03 disallows this, so it is a compile error. C++11 allows it. It is also allowed for the alignof operator introduced in C++11.

Control and query object alignment

C++11 allows variable alignment to be queried and controlled with alignof and alignas.

The alignof operator takes the type and returns the power-of-two byte boundary on which instances of the type must be allocated (as a std::size_t). When given a reference type, alignof returns the referenced type's alignment; for arrays it returns the element type's alignment.

The alignas specifier controls the memory alignment for a variable. The specifier takes a constant or a type; when supplied a type, alignas(T) is shorthand for alignas(alignof(T)). For example, to specify that a char array should be properly aligned to hold a float:

alignas(float) unsigned char c[sizeof(float)];

Allow garbage collected implementations

Prior C++ standards provided for programmer-driven garbage collection via set_new_handler, but gave no definition of object reachability for the purpose of automatic garbage collection. C++11 defines conditions under which pointer values are "safely derived" from other values. An implementation may specify that it operates under strict pointer safety, in which case pointers that are not derived according to these rules can become invalid.

Attributes

C++11 provides a standardized syntax for compiler/tool extensions to the language. Such extensions were traditionally specified using the #pragma directive or vendor-specific keywords (like __attribute__ for GNU and __declspec for Microsoft). With the new syntax, added information can be specified in the form of an attribute enclosed in double square brackets.
An attribute can be applied to various elements of source code:

int [[attr1]] i [[attr2, attr3]];

[[attr4(arg1, arg2)]] if (cond)
{
    [[vendor::attr5]] return i;
}

In the example above, attribute attr1 applies to the type of variable i, attr2 and attr3 apply to the variable itself, attr4 applies to the if statement, and vendor::attr5 applies to the return statement. In general (but with some exceptions), an attribute specified for a named entity is placed after the name, and before the entity otherwise, as shown above. Several attributes may be listed inside one pair of double square brackets, added arguments may be provided for an attribute, and attributes may be scoped by vendor-specific attribute namespaces.

It is recommended that attributes have no language semantic meaning and do not change the sense of a program when ignored. Attributes can be useful for providing information that, for example, helps the compiler to issue better diagnostics or optimize the generated code.

C++11 provides two standard attributes itself: noreturn to specify that a function does not return, and carries_dependency to help optimize multi-threaded code by indicating that function arguments or the return value carry a dependency.

C++ standard library changes

A number of new features were introduced in the C++11 standard library. Many of these could have been implemented under the old standard, but some rely (to a greater or lesser extent) on new C++11 core features.

A large part of the new libraries was defined in the C++ Standards Committee's Library Technical Report (called TR1), which was published in 2005. Various full and partial implementations of TR1 are currently available using the namespace std::tr1. For C++11 they were moved to namespace std. However, as TR1 features were brought into the C++11 standard library, they were upgraded where appropriate with C++11 language features that were not available in the initial TR1 version. Also, they may have been enhanced with features that were possible under C++03, but were not part of the original TR1 specification.

Upgrades to standard library components

C++11 offers a number of new language features that the currently existing standard library components can benefit from. For example, most standard library containers can benefit from rvalue-reference-based move constructor support, both for quickly moving heavy containers around and for moving the contents of those containers to new memory locations. The standard library components were upgraded with new C++11 language features where appropriate. These include, but are not necessarily limited to:

Rvalue references and the associated move support
Support for the UTF-16 and UTF-32 encoding-unit Unicode character types (char16_t and char32_t)
Variadic templates (coupled with rvalue references to allow for perfect forwarding)
Compile-time constant expressions
decltype
Explicit conversion operators
Functions declared defaulted or deleted

Further, much time has passed since the prior C++ standard. Much code using the standard library has been written. This has revealed parts of the standard libraries that could use some improving. Among the many areas of improvement considered were standard library allocators. A new scope-based model of allocators was included in C++11 to supplement the prior model.

Threading facilities

While the C++03 language provides a memory model that supports threading, the primary support for actually using threading comes with the C++11 standard library.
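Before those components are described in detail, here is a minimal hedged sketch of the core facilities covered below (std::thread, std::mutex, and std::lock_guard are the standard names; the greet function is illustrative):

#include <iostream>
#include <mutex>
#include <thread>

std::mutex mtx; // serializes access to std::cout

void greet(int id)
{
    std::lock_guard<std::mutex> lock(mtx); // RAII: released when 'lock' goes out of scope
    std::cout << "hello from thread " << id << '\n';
}

int main()
{
    std::thread t1(greet, 1);
    std::thread t2(greet, 2);
    t1.join(); // wait for each thread to finish before main exits
    t2.join();
}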
A thread class (std::thread) is provided, which takes a function object (and an optional series of arguments to pass to it) to run in the new thread. It is possible to cause a thread to halt until another executing thread completes, providing thread joining support via the std::thread::join() member function. Access is provided, where feasible, to the underlying native thread object(s) for platform-specific operations by the std::thread::native_handle() member function. For synchronization between threads, appropriate mutexes (std::mutex, std::recursive_mutex, etc.) and condition variables (std::condition_variable and std::condition_variable_any) are added to the library. These are accessible via Resource Acquisition Is Initialization (RAII) locks (std::lock_guard and std::unique_lock) and locking algorithms for easy use. For high-performance, low-level work, communicating between threads is sometimes needed without the overhead of mutexes. This is done using atomic operations on memory locations. These can optionally specify the minimum memory visibility constraints needed for an operation. Explicit memory barriers may also be used for this purpose. The C++11 thread library also includes futures and promises for passing asynchronous results between threads, and std::packaged_task for wrapping up a function call that can generate such an asynchronous result. The futures proposal was criticized because it lacks a way to combine futures and check for the completion of one promise inside a set of promises. Further high-level threading facilities such as thread pools have been remanded to a future C++ technical report. They are not part of C++11, but their eventual implementation is expected to be built entirely on top of the thread library features. The new std::async facility provides a convenient method of running tasks and tying them to a std::future. The user can choose whether the task is to be run asynchronously on a separate thread or synchronously on a thread that waits for the value. By default, the implementation can choose, which provides an easy way to take advantage of hardware concurrency without oversubscription, and provides some of the advantages of a thread pool for simple usages. Tuple types Tuples are collections composed of heterogeneous objects of pre-arranged dimensions. A tuple can be considered a generalization of a struct's member variables. The C++11 version of the TR1 tuple type benefited from C++11 features like variadic templates. To implement reasonably, the TR1 version required an implementation-defined maximum number of contained types, and substantial macro trickery. By contrast, the implementation of the C++11 version requires no explicit implementation-defined maximum number of types. Though compilers will have an internal maximum recursion depth for template instantiation (which is normal), the C++11 version of tuples will not expose this value to the user. Using variadic templates, the declaration of the tuple class looks as follows: template <class ...Types> class tuple; An example of definition and use of the tuple type: typedef std::tuple <int, double, long &, const char *> test_tuple; long lengthy = 12; test_tuple proof (18, 6.5, lengthy, "Ciao!"); lengthy = std::get<0>(proof); // Assign to 'lengthy' the value 18. std::get<3>(proof) = " Beautiful!"; // Modify the tuple’s fourth element. It's possible to create the tuple proof without defining its contents, but only if the tuple elements' types possess default constructors. 
Moreover, it's possible to assign a tuple to another tuple: if the two tuples' types are the same, each element type must possess a copy constructor; otherwise, each element type of the right-side tuple must be convertible to the corresponding element type of the left-side tuple, or the corresponding element type of the left-side tuple must have a suitable constructor.

typedef std::tuple<int, double, std::string> tuple_1;
typedef std::tuple<char, short, const char*> tuple_2;

tuple_1 t1;
tuple_2 t2('X', 2, "Hola!");
t1 = t2; // Ok, first two elements can be converted,
         // the third one can be constructed from a 'const char *'.

Just like std::make_pair for std::pair, there exists std::make_tuple to automatically create std::tuples using type deduction, and auto helps to declare such a tuple. std::tie creates tuples of lvalue references to help unpack tuples. std::ignore also helps here. See the example:

auto record = std::make_tuple("Hari Ram", "New Delhi", 3.5, 'A');
std::string name;
float gpa;
char grade;
std::tie(name, std::ignore, gpa, grade) = record; // std::ignore helps drop the place name
std::cout << name << ' ' << gpa << ' ' << grade << std::endl;

Relational operators are available (among tuples with the same number of elements), and two expressions are available to check a tuple's characteristics (only during compilation):

std::tuple_size<T>::value returns the number of elements in the tuple T,
std::tuple_element<I, T>::type returns the type of element number I of the tuple T.

Hash tables

Including hash tables (unordered associative containers) in the C++ standard library is one of the most recurring requests. It was not adopted in C++03 due to time constraints only. Although hash tables are less efficient than a balanced tree in the worst case (in the presence of many collisions), they perform better in many real applications.

Collisions are managed only via chaining because the committee did not consider it opportune to standardize open-addressing solutions, which introduce quite a lot of intrinsic problems (above all when erasure of elements is admitted). To avoid name clashes with non-standard libraries that developed their own hash table implementations, the prefix "unordered" was used instead of "hash".

The new library has four types of hash tables, differentiated by whether or not they accept elements with the same key (unique keys or equivalent keys), and whether they map each key to an associated value. They correspond to the four existing binary-search-tree-based associative containers, with an unordered_ prefix: unordered_set, unordered_multiset, unordered_map, and unordered_multimap.

The new classes fulfill all the requirements of a container class, and have all the methods needed to access elements: insert, erase, begin, end.

This new feature didn't need any C++ language core extensions (though implementations will take advantage of various C++11 language features), only a small extension of the header <functional> and the introduction of headers <unordered_set> and <unordered_map>. No other changes to any existing standard classes were needed, and it doesn't depend on any other extensions of the standard library.

std::array and std::forward_list

In addition to the hash tables, two more containers were added to the standard library. std::array is a fixed-size container that is more efficient than std::vector but safer and easier to use than a C-style array. std::forward_list is a singly linked list that provides more space-efficient storage than the doubly linked std::list when bidirectional iteration is not needed.
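A brief hedged sketch of both containers (the values are arbitrary):

#include <array>
#include <cassert>
#include <forward_list>
#include <numeric>

int main()
{
    std::array<int, 3> a = {{1, 2, 3}}; // fixed size, storage embedded in the object
    assert(a.size() == 3 && a[2] == 3);

    std::forward_list<int> fl = {2, 3}; // singly linked: forward iteration only
    fl.push_front(1);                   // no push_back; the list keeps no back pointer
    assert(std::accumulate(fl.begin(), fl.end(), 0) == 6);
}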
Regular expressions

The new library, defined in the new header <regex>, consists of several new classes: regular expressions are represented by instances of the template class std::regex; occurrences are represented by instances of the template class std::match_results; and std::regex_iterator is used to iterate over all matches of a regex.

The function std::regex_search is used for searching, while for 'search and replace' the function std::regex_replace is used, which returns a new string. Here is an example of the use of std::regex_iterator (wrapped in main, with the needed headers added):

#include <cstring>
#include <iostream>
#include <regex>
#include <string>

int main()
{
    const char *pattern = R"([^ ,.\t\n]+)"; // find words separated by space, comma, period, tab, newline
    std::regex rgx(pattern); // throws an exception on an invalid pattern
    const char *target = "Unseen University - Ankh-Morpork";

    // Use a regex_iterator to identify all words of 'target' separated by characters of 'pattern'.
    auto iter = std::cregex_iterator(target, target + std::strlen(target), rgx);
    // make an end-of-sequence iterator
    auto end = std::cregex_iterator();

    for (; iter != end; ++iter) {
        std::string match_str = iter->str();
        std::cout << match_str << '\n';
    }
}

The library <regex> requires neither alteration of any existing header (though it will use them where appropriate) nor an extension of the core language. In POSIX C, regular expressions are also available via regex.h in the C POSIX library.

General-purpose smart pointers

C++11 provides std::unique_ptr, and improvements to std::shared_ptr and std::weak_ptr from TR1. std::auto_ptr is deprecated.

Extensible random number facility

The C standard library provides the ability to generate pseudorandom numbers via the function rand. However, the algorithm is delegated entirely to the library vendor. C++ inherited this functionality with no changes, but C++11 provides a new method for generating pseudorandom numbers.

C++11's random number functionality is split into two parts: a generator engine that contains the random number generator's state and produces the pseudorandom numbers; and a distribution, which determines the range and mathematical distribution of the outcome. These two are combined to form a random number generator object.

Unlike the C standard rand, the C++11 mechanism comes with three base generator engine algorithms: linear_congruential_engine, subtract_with_carry_engine, and mersenne_twister_engine. C++11 also provides a number of standard distributions: uniform_int_distribution, uniform_real_distribution, bernoulli_distribution, binomial_distribution, geometric_distribution, negative_binomial_distribution, poisson_distribution, exponential_distribution, gamma_distribution, weibull_distribution, extreme_value_distribution, normal_distribution, lognormal_distribution, chi_squared_distribution, cauchy_distribution, fisher_f_distribution, student_t_distribution, discrete_distribution, piecewise_constant_distribution, and piecewise_linear_distribution.

The generator and distributions are combined as in this example:

#include <functional>
#include <random>

std::uniform_int_distribution<int> distribution(0, 99);
std::mt19937 engine; // Mersenne twister MT19937

auto generator = std::bind(distribution, engine);
int random = generator();           // Generate a uniform integral variate between 0 and 99.
int random2 = distribution(engine); // Generate another sample directly, using the distribution and the engine objects.

Wrapper reference

A wrapper reference is obtained from an instance of the class template reference_wrapper. Wrapper references are similar to normal references ('&') of the C++ language.
To obtain a wrapper reference from any object the function template ref is used (for a constant reference cref is used). Wrapper references are useful above all for function templates, where references to parameters rather than copies are needed: // This function will take a reference to the parameter 'r' and increment it. void func (int &r) { r++; } // Template function. template<class F, class P> void g (F f, P t) { f(t); } int main() { int i = 0; g (func, i); // 'g<void (int &r), int>' is instantiated // then 'i' will not be modified. std::cout << i << std::endl; // Output -> 0 g (func, std::ref(i)); // 'g<void(int &r),reference_wrapper<int>>' is instantiated // then 'i' will be modified. std::cout << i << std::endl; // Output -> 1 } This new utility was added to the existing <functional> header and didn't need further extensions of the C++ language. Polymorphic wrappers for function objects Polymorphic wrappers for function objects are similar to function pointers in semantics and syntax, but are less tightly bound and can indiscriminately refer to anything which can be called (function pointers, member function pointers, or functors) whose arguments are compatible with those of the wrapper. An example can clarify its characteristics: std::function<int (int, int)> func; // Wrapper creation using // template class 'function'. std::plus<int> add; // 'plus' is declared as 'template<class T> T plus( T, T ) ;' // then 'add' is type 'int add( int x, int y )'. func = add; // OK - Parameters and return types are the same. int a = func (1, 2); // NOTE: if the wrapper 'func' does not refer to any function, // the exception 'std::bad_function_call' is thrown. std::function<bool (short, short)> func2 ; if (!func2) { // True because 'func2' has not yet been assigned a function. bool adjacent(long x, long y); func2 = &adjacent; // OK - Parameters and return types are convertible. struct Test { bool operator()(short x, short y); }; Test car; func = std::ref(car); // 'std::ref' is a template function that returns the wrapper // of member function 'operator()' of struct 'car'. } func = func2; // OK - Parameters and return types are convertible. The template class function was defined inside the header <functional>, without needing any change to the C++ language. Type traits for metaprogramming Metaprogramming consists of creating a program that creates or modifies another program (or itself). This can happen during compilation or during execution. The C++ Standards Committee has decided to introduce a library for metaprogramming during compiling via templates. Here is an example of a meta-program using the C++03 standard: a recursion of template instances for calculating integer exponents: template<int B, int N> struct Pow { // recursive call and recombination. enum{ value = B*Pow<B, N-1>::value }; }; template< int B > struct Pow<B, 0> { // ''N == 0'' condition of termination. enum{ value = 1 }; }; int quartic_of_three = Pow<3, 4>::value; Many algorithms can operate on different types of data; C++'s templates support generic programming and make code more compact and useful. Nevertheless, it is common for algorithms to need information on the data types being used. This information can be extracted during instantiation of a template class using type traits. Type traits can identify the category of an object and all the characteristics of a class (or of a struct). They are defined in the new header <type_traits>. 
In the next example there is the template function ‘elaborate’ which, depending on the given data types, will instantiate one of the two proposed algorithms (Algorithm::do_it). // First way of operating. template< bool B > struct Algorithm { template<class T1, class T2> static int do_it (T1 &, T2 &) { /*...*/ } }; // Second way of operating. template<> struct Algorithm<true> { template<class T1, class T2> static int do_it (T1, T2) { /*...*/ } }; // Instantiating 'elaborate' will automatically instantiate the correct way to operate. template<class T1, class T2> int elaborate (T1 A, T2 B) { // Use the second way only if 'T1' is an integer and if 'T2' is // in floating point, otherwise use the first way. return Algorithm<std::is_integral<T1>::value && std::is_floating_point<T2>::value>::do_it( A, B ) ; } Via type traits, defined in header <type_traits>, it's also possible to create type transformation operations (static_cast and const_cast are insufficient inside a template). This type of programming produces elegant and concise code; however, the weak point of these techniques is the debugging: it's uncomfortable during compilation and very difficult during program execution. Uniform method for computing the return type of function objects Determining the return type of a template function object at compile-time is not intuitive, particularly if the return value depends on the parameters of the function. As an example: struct Clear { int operator()(int) const; // The parameter type is double operator()(double) const; // equal to the return type. }; template <class Obj> class Calculus { public: template<class Arg> Arg operator()(Arg& a) const { return member(a); } private: Obj member; }; Instantiating the class template Calculus<Clear>, the function object of calculus will have always the same return type as the function object of Clear. However, given class Confused below: struct Confused { double operator()(int) const; // The parameter type is not int operator()(double) const; // equal to the return type. }; Attempting to instantiate Calculus<Confused> will cause the return type of Calculus to not be the same as that of class Confused. The compiler may generate warnings about the conversion from int to double and vice versa. TR1 introduces, and C++11 adopts, the template class std::result_of that allows one to determine and use the return type of a function object for every declaration. The object CalculusVer2 uses the std::result_of object to derive the return type of the function object: template< class Obj > class CalculusVer2 { public: template<class Arg> typename std::result_of<Obj(Arg)>::type operator()(Arg& a) const { return member(a); } private: Obj member; }; In this way in instances of function object of CalculusVer2<Confused> there are no conversions, warnings, or errors. The only change from the TR1 version of std::result_of is that the TR1 version allowed an implementation to fail to be able to determine the result type of a function call. Due to changes to C++ for supporting decltype, the C++11 version of std::result_of no longer needs these special cases; implementations are required to compute a type in all cases. Improved C compatibility For compatibility with C, from C99, these were added: Preprocessor: variadic macros, concatenation of adjacent narrow/wide string literals, _Pragma() – equivalent of #pragma. long long – integer type that is at least 64 bits long. __func__ – macro evaluating to the name of the function it is in. 
Headers: cstdbool (stdbool.h), cstdint (stdint.h), cinttypes (inttypes.h).

Features originally planned but removed or not included

Heading for a separate TR:
Modules
Decimal types
Math special functions

Postponed:
Concepts
More complete or required garbage collection support
Reflection
Macro scopes

Features removed or deprecated

The term sequence point was removed, being replaced by specifying that either one operation is sequenced before another, or that two operations are unsequenced.

The former use of the keyword export was removed. The keyword itself remains, being reserved for potential future use.

Dynamic exception specifications are deprecated. Compile-time specification of non-exception-throwing functions is available with the noexcept keyword, which is useful for optimization.

std::auto_ptr is deprecated, having been superseded by std::unique_ptr.

Function object base classes (std::unary_function, std::binary_function), adapters to pointers to functions and adapters to pointers to members, and binder classes are all deprecated.

See also
C11

References

External links
The C++ Standards Committee
C++0X: The New Face of Standard C++
Herb Sutter's blog coverage of C++11
Anthony Williams' blog coverage of C++11
A talk on C++0x given by Bjarne Stroustrup at the University of Waterloo
The State of the Language: An Interview with Bjarne Stroustrup (15 August 2008)
Wiki page to help keep track of C++0x core language features and their availability in compilers
Online C++11 standard library reference
Online C++11 compiler
Bjarne Stroustrup's C++11 FAQ
More information on C++11 features: range-based for loop, why auto_ptr is deprecated, etc.
The High-Z Supernova Search Team was an international cosmology collaboration which used Type Ia supernovae to chart the expansion of the universe. The team was formed in 1994 by Brian P. Schmidt, then a post-doctoral research associate at Harvard University, and Nicholas B. Suntzeff, a staff astronomer at the Cerro Tololo Inter-American Observatory (CTIO) in Chile. The original team submitted a proposal on September 29, 1994 called A Pilot Project to Search for Distant Type Ia Supernova to the CTIO. The team on the first observing proposal comprised: Nicholas Suntzeff (PI); Brian Schmidt (Co-I); (other Co-Is) R. Chris Smith, Robert Schommer, Mark M. Phillips, Mario Hamuy, Roberto Aviles, Jose Maza, Adam Riess, Robert Kirshner, Jason Spyromilio, and Bruno Leibundgut. The project was awarded four nights of telescope time on the CTIO Víctor M. Blanco Telescope on the nights of February 25, 1995, and March 6, 24, and 29, 1995. The pilot project led to the discovery of supernova SN1995Y. In 1995, the HZT elected Brian P. Schmidt of the Mount Stromlo Observatory, part of the Australian National University, to manage the team. The team expanded to roughly 20 astronomers located in the United States, Europe, Australia, and Chile. They used the Víctor M. Blanco telescope to discover Type Ia supernovae out to redshifts of z = 0.9. The discoveries were verified with spectra taken mostly from the telescopes of the Keck Observatory and the European Southern Observatory. In January 1998, Notre Dame astrophysicist Peter Garnavich, then working at the Harvard-Smithsonian Center for Astrophysics, led a High-Z team publication that used the Hubble Space Telescope to study three high-redshift supernovae. These results indicated that the universe did not contain enough matter to halt its expansion and that the universe would likely expand forever. In a May 1998 study led by Adam Riess, the High-Z Team became the first to publish evidence that the expansion of the Universe is accelerating. The team later spawned Project ESSENCE, led by Christopher Stubbs of Harvard University, and the Higher-Z Team in 2002, led by Adam Riess of Johns Hopkins University and the Space Telescope Science Institute. In 2011, Riess and Schmidt, along with Saul Perlmutter of the Supernova Cosmology Project, were awarded the Nobel Prize in Physics for this work.

Awards
1998: Breakthrough of the Year, Science magazine
2006: Shaw Prize
2007: Gruber Prize in Cosmology
2011: Nobel Prize in Physics
2011: Albert Einstein Medal
2015: Breakthrough Prize in Fundamental Physics
2015: Wolf Prize in Physics

Members
Mount Stromlo Observatory and the Australian National University: Brian P. Schmidt
CTIO: Nicholas Suntzeff, Robert Schommer, R. Chris Smith, Mario Hamuy (1994–1997)
Las Campanas Observatory: Mark M. Phillips (1994–2000)
Pontificia Universidad Católica de Chile: Alejandro Clocchiatti (starting in 1996)
University of Chile: Jose Maza (1994–1997)
European Southern Observatory: Bruno Leibundgut, Jason Spyromilio
University of Hawaii: John Tonry (starting in 1996)
University of California, Berkeley: Alexei Filippenko (starting in 1996), Weidong Li (starting in 1999)
Space Telescope Science Institute: Adam Riess, Ron Gilliland (1996–2000)
University of Washington: Christopher Stubbs (starting in 1995), Craig Hogan (starting in 1995), David Reiss (1995–1999), Alan Diercks (1995–1999)
Harvard University: Christopher Stubbs (starting in 2003), Robert Kirshner, Thomas Matheson (starting 1999), Saurabh Jha (starting 1997), Peter Challis
University of Notre Dame: Peter Garnavich, Stephen Holland (starting 2000)

References

External links
High-Z Supernova Search Team main site
A bear pit was historically used to display bears, typically for entertainment and especially bear-baiting. The pit area was normally surrounded by a high fence, above which the spectators would look down on the bears. The most traditional form of maintaining bears in captivity is keeping them in pits, although many zoos have replaced these with more elaborate and spacious enclosures that attempt to replicate the bears' natural habitats, for the benefit of the animals and the visitors.

A noteworthy example is found in Bern, Switzerland. Known as the Bärengraben, it was built in 1857, and is still in use though much modified: after an outcry around 2000, when the bears were still kept in two circular pits and shut up at night, a park was constructed on the riverbank by the pits with generous access to the river Aare, which has greatly improved the three bears' accommodation.

Other meanings

Another meaning is an unusually aggressive political arena, in which direct, heated attacks are common.

A bear pit also refers to a type of trap used to deter or trap bears. It usually consists of a large earthen pit with sharpened pikes in the bottom to impale the bear. They are most often used to deter bears from approaching a cabin, rather than as a means of actually catching them.

The term bear pit is also used to describe a tournament or sparring format, sometimes also referred to as "king of the hill". The participants form a queue behind the first two to compete. Once each match is over, the winner remains to face the next opponent in line, while the loser goes to the end of the queue. For tournaments, it is usual that the process is continued for a set period of time during which the victor of each match is noted. At the end of the tournament, the winner is determined to be the contestant that won the most matches. Multiple bear pits may also be employed, with defeated contestants able to choose which queue to re-enter.

In Scotland, the phrase bear pit is used to describe bars or public houses that are known to have a violent reputation.

See also
Berenkuil (traffic)
Menagerie
Zoo

References

External links
Sheffield Botanical Gardens Bear Pit
Bear parc in Bern
Bear pit
[ "Engineering" ]
462
[]
5,481,852
https://en.wikipedia.org/wiki/Xi%20baryon
The Xi baryons or cascade particles are a family of subatomic hadron particles which have the symbol and may have an electric charge () of +2 , +1 , 0, or −1 , where is the elementary charge. Like all conventional baryons, particles contain three quarks. baryons, in particular, contain either one up or one down quark and two other, more massive quarks. The two more massive quarks are any two of strange, charm, or bottom (doubles allowed). For notation, the assumption is that the two heavy quarks in the are both strange; subscripts "c" and "b" are added for each even heavier charm or bottom quark that replaces one of the two presumed strange quarks. They are historically called the cascade particles because of their unstable state; they are typically observed to decay rapidly into lighter particles, through a chain of decays (cascading decays). The first discovery of a charged Xi baryon was in cosmic ray experiments by the Manchester group in 1952. The first discovery of the neutral Xi particle was at Lawrence Berkeley Laboratory in 1959. It was also observed as a daughter product from the decay of the omega baryon () observed at Brookhaven National Laboratory in 1964. The Xi spectrum is important to nonperturbative quantum chromodynamics (QCD), such as lattice QCD. History The particle is also known as the cascade B particle and contains quarks from all three families. It was discovered by DØ and CDF experiments at Fermilab. The discovery was announced on 12 June 2007. It was the first known particle made of quarks from all three quark generations – namely, a down quark, a strange quark, and a bottom quark. The DØ and CDF collaborations reported the consistent masses of the new state. The Particle Data Group world average mass is . For notation, the assumption is that the two heavy quarks are both strange, denoted by a simple  ; a subscript "c" is added for each constituent charm quark, and a "b" for each bottom quark. Hence , , , , etc. Unless specified, the non-up/down quark content of Xi baryons is strange (i.e. there is one up or down quark and two strange quarks). However a contains one up, one strange, and one bottom quark, while a contains one up and two bottom quarks. In 2012, the CMS experiment at the Large Hadron Collider detected a baryon (reported mass ). (Here,"*" indicates a baryon decuplet.) The LHCb experiment at CERN discovered two new Xi baryons in 2014: and . In 2017, the LHCb researchers reported yet another Xi baryon: the double charmed baryon, consisting of two heavy charm quarks and one up quark. The mass of is about 3.8 times that of a proton. List of Xi baryons Isospin and spin values in parentheses have not been firmly established by experiments, but are predicted by the quark model and are consistent with the measurements. Table notes See also Delta baryon Hyperon Lambda baryon List of baryons List of mesons List of particles Nucleon Omega baryon Sigma baryon Timeline of particle discoveries References External links Baryons Nuclear physics
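To make the naming convention described above concrete, a few standard quark-model assignments are worth spelling out (these are conventional quark contents added here for illustration, not taken from the article's own list): Ξ0 = uss and Ξ− = dss carry two strange quarks; Ξc+ = usc and Ξc0 = dsc have one strange quark replaced by a charm quark; Ξcc++ = ucc has both replaced by charm, the doubly charmed state reported by LHCb in 2017; Ξb0 = usb and Ξb− = dsb have one strange quark replaced by a bottom quark, the latter being the "cascade B" (down, strange, bottom) found at Fermilab.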
Xi baryon
[ "Physics" ]
714
[ "Nuclear physics" ]
5,481,991
https://en.wikipedia.org/wiki/PSR%20J0537%E2%88%926910
PSR J0537-6910 is a pulsar that is 4,000 years old (not including the light travel time to Earth). It lies about 170,000 light-years away in the Large Magellanic Cloud, in the southern constellation of Dorado. It rotates at 62 hertz. A team at LANL proposed that it is possible to predict starquakes in J0537-6910, meaning that it may be possible to devise a way to forecast glitches at least in some exceptional pulsars. The same team observed magnetic pole drift on this pulsar using observational data from the Rossi X-ray Timing Explorer. References External links Scientists Can Predict Pulsar Starquakes (SpaceDaily) Jun 07, 2006 SIMBAD entry for PSR J0537-6910 See also Supernova LHA 120-N 157B Stars in the Large Magellanic Cloud Dorado Pulsars
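As a point of reference for the spin rate quoted above, a rotation frequency of 62 hertz corresponds to a period of 1/62 s ≈ 0.016 s, i.e. roughly 16 milliseconds per revolution (a simple conversion added here; the period itself is not stated in the source).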
PSR J0537−6910
[ "Astronomy" ]
224
[ "Dorado", "Constellations" ]
5,482,426
https://en.wikipedia.org/wiki/Indole%20test
The indole test is a biochemical test performed on bacterial species to determine the ability of the organism to convert tryptophan into indole. This division is performed by a chain of a number of different intracellular enzymes, a system generally referred to as "tryptophanase." Biochemistry Indole is generated by reductive deamination from tryptophan via the intermediate molecule indolepyruvic acid. Tryptophanase catalyzes the deamination reaction, during which the amine (-NH2) group of the tryptophan molecule is removed. Final products of the reaction are indole, pyruvic acid, ammonium (NH4+) and energy. Pyridoxal phosphate is required as a coenzyme. Performing a test Like many biochemical tests on bacteria, results of an indole test are indicated by a change in color following a reaction with an added reagent. Pure bacterial culture must be grown in sterile tryptophan or peptone broth for 24–48 hours before performing the test. Following incubation, five drops of Kovac's reagent (isoamyl alcohol, para-Dimethylaminobenzaldehyde, concentrated hydrochloric acid) are added to the culture broth. A positive result is shown by the presence of a red or reddish-violet color in the surface alcohol layer of the broth. A negative result appears yellow. A variable result can also occur, showing an orange color as a result. This is due to the presence of skatole, also known as methyl indole or methylated indole, another possible product of tryptophan degradation. The positive red color forms as a result of a series of reactions. The para-Dimethylaminobenzaldehyde reacts with indole present in the medium to form a red rosindole dye. The isoamyl alcohol forms a complex with rosindole dye, which causes it to precipitate. The remaining alcohol and the precipitate then rise to the surface of the medium. A variation on this test using Ehrlich's reagent (using ethyl alcohol in place of isoamyl alcohol, developed by Paul Ehrlich) is used when performing the test on nonfermenters and anaerobes. Indole-Positive Bacteria Bacteria that test positive for cleaving indole from tryptophan include: Aeromonas hydrophila, Aeromonas punctata, Bacillus alvei, Edwardsiella sp., Escherichia coli, Flavobacterium sp., Haemophilus influenzae, Klebsiella oxytoca, Proteus sp. (not P. mirabilis and P. penneri), Plesiomonas shigelloides, Pasteurella multocida, Pasteurella pneumotropica, Vibrio sp., and Lactobacillus reuteri. Indole-Negative Bacteria Bacteria which give negative results for the indole test include: Actinobacillus spp., Aeromonas salmonicida, Alcaligenes sp., most Bacillus sp., Bordetella sp., Enterobacter sp., most Haemophilus sp., most Klebsiella sp., Neisseria sp., Mannheimia haemolytica, Pasteurella ureae, Proteus mirabilis, P. penneri, Pseudomonas sp., Salmonella sp., Serratia sp., Yersinia sp., and Rhizobium sp. The Indole test is one of the four tests of the IMViC series, which tests for evidence of an enteric bacterium. The other three tests include: the methyl red test [M], the Voges–Proskauer test [V] and the citrate test [C]. References MacFaddin, Jean F. "Biochemical Tests for Identification of Medical Bacteria." Williams & Wilkins, 1980, pp 173 – 183. Example of typical indole reactions Angen, O.; Mutters, R.; Caugant, D. A.; Olsen, J. E.; Bisgaard, M. (1999). "Taxonomic relationships of the [Pasteurella] haemolytica complex as evaluated by DNA-DNA hybridizations and 16S rRNA sequencing with proposal of Mannheimia haemolytica gen. nov., comb, nov., Mannheimia granulomatis comb. 
nov., Mannheimia glucosida sp. nov., Mannheimia ruminalis sp. nov. and Mannheimia varigena sp. nov.". International Journal of Systematic Bacteriology. 49 (1): 67–86. . . Bacteriology terminology Microbiology techniques
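For clarity, the tryptophanase reaction described in the Biochemistry section above can be written as a balanced overall equation (assembled here from the stated products; the source gives them only in prose, and pyridoxal phosphate acts catalytically rather than being consumed):
C11H12N2O2 (L-tryptophan) + H2O → C8H7N (indole) + C3H3O3− (pyruvate) + NH4+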
Indole test
[ "Chemistry", "Biology" ]
1,022
[ "Microbiology techniques", "Bacteriology terminology" ]
5,482,472
https://en.wikipedia.org/wiki/XMMXCS%202215-1738
XMMXCS 2215-1738 is a galaxy cluster that lies 10 billion light-years away and has a redshift of z=1.45. Discovered by the XMM Cluster Survey in 2006, it is one of the most distant galaxy clusters known. It is embedded in intergalactic gas with a temperature of 10 million degrees. The estimated mass of the cluster is 500 trillion solar masses, most of it in dark matter. The cluster is surprisingly large and evolved for a cluster that existed when the universe was only 3 billion years old. A team led by University of Sussex researchers, part of the XMM Cluster Survey (XCS), used the X-ray Multi-Mirror (XMM) Newton satellite to find it, the Keck Telescope to determine its distance, and the Hubble Space Telescope to image it further. It contains hundreds of reddish galaxies surrounded by X-ray-emitting gas. The cluster is called XMMXCS 2215-1734 in many references, with some news sources listing both names. The source of the naming contradiction between XMMXCS 2215-1734 and XMMXCS 2215-1738 is not known; however, XMMXCS 2215-1738 appears to be the more accurate. See also 2XMM J083026+524133 galaxy cluster XMM-Newton XMM Cluster Survey List of the most distant astronomical objects References Most Distant Galaxy Cluster Found 10 Billion light-years Away XMM Cluster Survey (XCS) The XMM Cluster Survey: A Massive Galaxy Cluster at z=1.45 S. A. Stanford (arXiv preprint) Sun, 4 Jun 2006 16:23:55 GMT External links Astronomers Find Most Distant Galaxy Cluster Yet (SpaceDaily) Jun 7, 2006 Maturity of Farthest Galaxy Cluster Surprises Astronomers Christine L. Kulyk (SPACE.com) 8 June 2006 06:20 am ET Galaxy clusters Aquarius (constellation)
XMMXCS 2215-1738
[ "Astronomy" ]
426
[ "Galaxy clusters", "Astronomical objects", "Constellations", "Aquarius (constellation)" ]
5,482,655
https://en.wikipedia.org/wiki/Refactorable%20number
A refactorable number or tau number is an integer n that is divisible by the count of its divisors, or to put it algebraically, n is such that τ(n) | n, where τ(n) denotes the number of divisors of n. The first few refactorable numbers are 1, 2, 8, 9, 12, 18, 24, 36, 40, 56, 60, 72, 80, 84, 88, 96, 104, 108, 128, 132, 136, 152, 156, 180, 184, 204, 225, 228, 232, 240, 248, 252, 276, 288, 296, ... For example, 18 has 6 divisors (1 and 18, 2 and 9, 3 and 6) and is divisible by 6. There are infinitely many refactorable numbers. Properties Cooper and Kennedy proved that refactorable numbers have natural density zero. Zelinsky proved that no three consecutive integers can all be refactorable. Colton proved that no refactorable number is perfect. The equation has solutions only if is a refactorable number, where is the greatest common divisor function. Let be the number of refactorable numbers which are at most . The problem of determining an asymptotic for is open. Spiro has proven that There are still unsolved problems regarding refactorable numbers. Colton asked if there are arbitrarily large such that both and are refactorable. Zelinsky wondered if there exists a refactorable number , does there necessarily exist such that is refactorable and . History Refactorable numbers were first defined by Curtis Cooper and Robert E. Kennedy, who showed that the tau numbers have natural density zero; they were later rediscovered by Simon Colton using a computer program he wrote ("HR") which invents and judges definitions from a variety of areas of mathematics such as number theory and graph theory. Colton called such numbers "refactorable". While computer programs had discovered proofs before, this discovery was one of the first times that a computer program had discovered a new or previously obscure idea. Colton proved many results about refactorable numbers, showing that there were infinitely many and proving a variety of congruence restrictions on their distribution. Colton was only later alerted that Kennedy and Cooper had previously investigated the topic. See also Divisor function References Integer sequences
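A minimal Python sketch of the definition above (the function names are my own, added for illustration); it counts divisors naively and reproduces the opening terms of the sequence listed in the article:

def tau(n):
    # Number of positive divisors of n (naive trial division).
    return sum(1 for d in range(1, n + 1) if n % d == 0)

def is_refactorable(n):
    # n is refactorable (a tau number) when tau(n) divides n.
    return n % tau(n) == 0

print([n for n in range(1, 100) if is_refactorable(n)])
# prints [1, 2, 8, 9, 12, 18, 24, 36, 40, 56, 60, 72, 80, 84, 88, 96]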
Refactorable number
[ "Mathematics" ]
487
[ "Sequences and series", "Integer sequences", "Mathematical structures", "Recreational mathematics", "Mathematical objects", "Combinatorics", "Numbers", "Number theory" ]
5,482,898
https://en.wikipedia.org/wiki/Antimony%20tribromide
Antimony tribromide (SbBr3) is a chemical compound containing antimony in its +3 oxidation state. Production Antimony tribromide may be made by the reaction of antimony with elemental bromine, or by the reaction of antimony trioxide with hydrobromic acid. Alternatively, it can be prepared by the action of bromine on a mixture of antimony sulfide and antimony trioxide at 250 °C. Chemical properties Antimony tribromide has two crystalline forms, both having orthorhombic symmetries. When a warm carbon disulfide solution of SbBr3 is rapidly cooled, it crystallizes into the needle-like α-SbBr3, which then slowly converts to the more stable β form. Antimony tribromide hydrolyzes in water to form hydrobromic acid and antimony trioxide: 2 SbBr3 + 3 H2O → Sb2O3 + 6 HBr Uses It can be added to polymers such as polyethylene as a fire retardant. It is also used in the production of other antimony compounds, in chemical analysis, as a mordant, and in dyeing. References Bromides Metal halides Antimony(III) compounds
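The first two preparation routes mentioned above, written as balanced equations (added here for clarity; the source gives them only in prose):
2 Sb + 3 Br2 → 2 SbBr3
Sb2O3 + 6 HBr → 2 SbBr3 + 3 H2O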
Antimony tribromide
[ "Chemistry" ]
254
[ "Bromides", "Inorganic compounds", "Metal halides", "Salts" ]
5,482,961
https://en.wikipedia.org/wiki/Antimony%20triiodide
Antimony triiodide is the chemical compound with the formula SbI3. This ruby-red solid is the only characterized "binary" iodide of antimony, i.e. the sole compound isolated with the formula SbxIy. It contains antimony in its +3 oxidation state. Like many iodides of the heavier main group elements, its structure depends on the phase. Gaseous SbI3 is a molecular, pyramidal species as anticipated by VSEPR theory. In the solid state, however, the Sb center is surrounded by an octahedron of six iodide ligands, three of which are closer and three more distant. For the related compound BiI3, all six Bi—I distances are equal. Production It may be formed by the reaction of antimony with elemental iodine, or the reaction of antimony trioxide with hydroiodic acid. Alternatively, it may be prepared by the interaction of antimony and iodine in boiling benzene or tetrachloroethane. Uses SbI3 has been used as a dopant in the preparation of thermoelectric materials. References External links Iodides Metal halides Antimony(III) compounds
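The preparation routes described above, written as balanced equations (added here for clarity; the source states them only in prose):
2 Sb + 3 I2 → 2 SbI3
Sb2O3 + 6 HI → 2 SbI3 + 3 H2O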
Antimony triiodide
[ "Chemistry" ]
252
[ "Inorganic compounds", "Metal halides", "Salts" ]
5,482,977
https://en.wikipedia.org/wiki/Gardasil
Gardasil is an HPV vaccine for use in the prevention of certain strains of human papillomavirus (HPV). It was developed by Merck & Co. High-risk human papilloma virus (hr-HPV) genital infection is the most common sexually transmitted infection among women. The HPV strains that Gardasil protects against are sexually transmitted, specifically HPV types 6, 11, 16 and 18. HPV types 16 and 18 cause an estimated 70% of cervical cancers, and are responsible for most HPV-induced anal, vulvar, vaginal, and penile cancer cases. HPV types 6 and 11 cause an estimated 90% of genital warts cases. HPV type 16 is responsible for almost 90% of HPV-positive oropharyngeal cancers, and the prevalence is higher in males than females. Though Gardasil does not treat existing infection, vaccination is still recommended for HPV-positive individuals, as it may protect against one or more different strains of the disease. The vaccine was approved for medical use in the United States in 2006, initially for use in females aged 9–26. In 2007, the Advisory Committee on Immunization Practices recommended Gardasil for routine vaccination of girls aged 11 and 12 years. As of August 2009, vaccination was recommended for both males and females before adolescence and the beginning of potential sexual activity. By 2011, the vaccine had been approved in 120 other countries. In 2014, the US Food and Drug Administration (FDA) approved a nine-valent version, Gardasil 9, to protect against infection with the strains covered by the first generation of Gardasil as well as five other HPV strains responsible for 20% of cervical cancers (types 31, 33, 45, 52, and 58). In 2018, the FDA approved expanded use of Gardasil 9 for individuals 27 to 45 years old. Types Gardasil is available as Gardasil which protects against 4 types of HPV (6, 11, 16, 18) and Gardasil 9 which protects against an additional 5 types (31, 33, 45, 52, 58). Medical uses In the United States, Gardasil is indicated for: girls and women 9 through 45 years of age for the prevention of the following diseases: Cervical, vulvar, vaginal, anal, oropharyngeal and other head and neck cancers caused by Human Papillomavirus (HPV) types 16, 18, 31, 33, 45, 52, and 58. Genital warts (condyloma acuminata) caused by HPV types 6 and 11. girls and women 9 through 45 years of age for the following precancerous or dysplastic lesions caused by HPV types 6, 11, 16, 18, 31, 33, 45, 52, and 58: Cervical intraepithelial neoplasia (CIN) grade 2/3 and cervical adenocarcinoma in situ (AIS). Cervical intraepithelial neoplasia (CIN) grade 1. Vulvar intraepithelial neoplasia (VIN) grade 2 and grade 3. Vaginal intraepithelial neoplasia (VaIN) grade 2 and grade 3. Anal intraepithelial neoplasia (AIN) grades 1, 2, and 3. boys and men 9 through 45 years of age for the prevention of the following diseases: Anal, oropharyngeal and other head and neck cancers caused by HPV types 16, 18, 31, 33, 45, 52, and 58. Genital warts (condyloma acuminata) caused by HPV types 6 and 11. boys and men 9 through 45 years of age for the following precancerous or dysplastic lesions caused by HPV types 6, 11, 16, 18, 31, 33, 45, 52, and 58: Anal intraepithelial neoplasia (AIN) grades 1, 2, and 3. In the European Union, Gardasil is indicated for active immunization of individuals from the age of nine years against the following HPV diseases: Premalignant lesions and cancers affecting the cervix, vulva, vagina and anus caused by vaccine HPV types Genital warts (Condyloma acuminata) caused by specific HPV types. 
Gardasil is a vaccine to prevent HPV, that, for maximum effect, is recommended for individuals prior to them becoming sexually active. Moreover, evidence supports the conclusion that women who were already infected with one or more of the four HPV types targeted by the vaccine (HPV types 6, 11, 16, or 18) were protected from clinical disease caused by the remaining HPV types in the vaccine. HPV types 16 and 18 cause an estimated 70% of cervical cancers, and are responsible for most HPV-induced anal cancers. Gardasil also protects against vulvar and vaginal cancers caused by HPV types 16 and 18, as well as most penile cancers caused by these two HPV types. In addition, protection against HPV types 6 and 11 may eliminate up to 90% of the cases of genital warts. Common plantar warts—e.g., caused by HPV types 1, 2, and 4—are not prevented by this vaccine. In 2010, Gardasil was approved by the FDA for prevention of anal cancer and associated precancerous lesions due to HPV types 6, 11, 16, and 18 in people aged 9 through 26 years. HPV infections, especially HPV 16, contribute to some head and neck cancer (HPV is found in an estimated 26–35% of head and neck squamous cell carcinoma). In principle, HPV vaccines may help reduce incidence of such cancers caused by HPV, but this has not been demonstrated. In June 2020, the FDA approved the use of Gardasil for the prevention of head and neck cancers. The FDA approved Gardasil 9 for women and men aged 27 to 45 based on the vaccine being 88% effective against persistent HPV infections that cause certain types genital warts and cancers in females. Vaccine efficacy in males in this age group was inferred. Efficacy A 2020 longitudinal study tracking over 1.6 million Swedish girls and women over an eleven-year period found half as many cervical cancer cases in all women who had been vaccinated, and amongst women who had been vaccinated before the age of 17 a 78% reduction in cervical cancer, "a substantially reduced risk of invasive cervical cancer at the population level." An alternative vaccine known as Cervarix protects against two oncogenic strains of HPV, 16 and 18. The National Cancer Institute says, "To date, protection against the targeted HPV types has been found to last for at least 10 years with Gardasil, at least 9 years with Cervarix, and at least 6 years with Gardasil 9. Long-term studies of vaccine efficacy that are still in progress will help scientists better understand the total duration of protection." Gardasil has been shown to be partially effective (approximately 38%) in preventing cervical cancer caused by ten other high-risk HPV types. Antibody levels at month 3 (one month post-dose number two) are substantially higher than at month 24 (18 months post-dose number three), suggesting that protection is achieved by month 3 and perhaps earlier. In 2014, the World Health Organization (WHO) recommended that countries offer the vaccine in a two dose schedule to girls aged under 15, with each dose at least six months apart. The United Kingdom, Switzerland, Mexico, and Quebec province of Canada are among the countries or territories that have implemented this . The CDC recommended the vaccines be delivered in two shots over six months. Males Gardasil is also effective in males, providing protection against genital warts, anal warts, anal cancer, and some potentially precancerous lesions caused by some HPV types. Gardasil vaccine has been shown to decrease the risk of young men contracting genital warts. 
In the United States, the FDA approved administration of the Gardasil vaccine to males between ages 9 and 26 in 2009. The FDA approved administration of the Gardasil 9 vaccine to males between ages 9 and 15 in 2014, and extended the age indication, by including males between ages 16 and 26, in 2015. In the UK, HPV vaccines are licensed for males aged 9 to 15 and for females aged 9 to 26. Men who have sex with men (MSM) are particularly at risk for conditions associated with HPV types 6, 11, 16, and 18; diseases and cancers that have a higher incidence among MSM include anal intraepithelial neoplasias, anal cancers, and genital warts. HPV type 16 is also responsible for almost 90% of HPV-positive oropharyngeal squamous-cell carcinoma (OPSCC), a form of cancer that affects the mouth, tonsils, and throat; the prevalence of HPV-positive oropharyngeal cancers is higher in males than females. A 2005 study found that 95% of HIV-infected gay men also had anal HPV infection, of whom 50% had precancerous HPV-caused lesions. Administration Gardasil is given in three injections over six months. The second injection is two months after the first, and the third injection is six months after the first shot was administered. Alternatively, in some countries it is given as two injections with at least six months between them, for individuals aged 9 years up to and including 13 years. Adverse effects , more than 170 million doses of Gardasil had been distributed worldwide. The vaccine was tested in thousands of females (ages 9 to 26). The US Food and Drug Administration (FDA) and the US Centers for Disease Control and Prevention (CDC) consider the vaccine to be safe. It does not contain mercury, thiomersal, live viruses or dead viruses, but virus-like particles, which cannot reproduce in the human body. The vaccine has mostly minor side effects, such as pain around the injection area. Fainting is more common among adolescents receiving the Gardasil vaccine than in other kinds of vaccinations. Patients should remain seated for 15 minutes after they receive the HPV vaccine. There have been reports that the shot is more painful than other common vaccines, and the manufacturer Merck partly attributes this to the virus-like particles within the vaccine. General side effects of the shot may include joint and muscle pain, fatigue, physical weakness and general malaise. The FDA and the CDC said that with millions of vaccinations "by chance alone some serious adverse effects and deaths" will occur in the time period following vaccination, but they have nothing to do with the vaccine. More than twenty women who received the Gardasil vaccine have died, but these deaths have not been causally connected to the shot, as correlation does not imply causation. Where information has been available, the cause of death was explained by other factors. Likewise, a small number of cases of Guillain–Barré syndrome (GBS) have been reported following vaccination with Gardasil, though there is no evidence linking GBS to the vaccine. It is unknown why a person develops GBS, or what initiates the disease. The FDA and the CDC monitor events to see if there are patterns, or more serious events than would be expected from chance alone. The majority (68%) of side effects data were reported by the manufacturer, but in about 90% of the manufacturer reported events, no follow-up information was given that would be useful to investigate the event further. 
In February 2009, the Spanish Ministry of Health suspended use of one batch of Gardasil after health authorities in the Valencia region reported that two girls had become ill after receiving the injection. Merck has stated that there was no evidence Gardasil was responsible for the two illnesses. Ingredients The following are the ingredients found in the different formulations of HPV vaccines: Major capsid protein L1 epitope of HPV types 6, 11, 16, and 18 (Gardasil) Major capsid protein L1 epitope of HPV types 6, 11, 16, 18, 31, 33, 45, 52, and 58 (Gardasil-9) Major capsid protein L1 epitope of HPV types 16 and 18 (Cervarix) amorphous aluminum hydroxyphosphate sulfate (adjuvant) sodium chloride yeast protein L-histidine polysorbate 80 sodium borate sodium dihydrogen phosphate dihydrate (Cervarix only) 3-O-Desacyl-4′-monophosphoryl lipid (MPL) A (Cervarix only) Aluminum hydroxide (Cervarix only) Trichoplusia ni insect cells (Cervarix only) Biotechnology The HPV major capsid protein, L1, can spontaneously self-assemble into virus-like particles (VLPs) that resemble authentic HPV virions. Gardasil contains recombinant VLPs assembled from the L1 proteins of HPV types 6, 11, 16 and 18. Since VLPs lack the viral DNA, they cannot induce cancer. They do, however, trigger an antibody response that protects vaccine recipients from becoming infected with the HPV types represented in the vaccine. The L1 proteins are produced by separate fermentations in recombinant Saccharomyces cerevisiae and self-assembled into VLPs. Public health The National Cancer Institute writes: Long-term impact and cost-effectiveness Whether the effects are temporary or lifelong, widespread vaccination could have a substantial public health impact. As of 2018, studies have proven that cervical cancer rates have dropped significantly since the introduction of Gardasil. Before Gardasil was introduced in 2006, 270,000 women died of cervical cancer worldwide in 2002. As of 2014, the mortality rate from cervical cancer has dropped 50% from 1975 which is due to the Gardasil vaccination along with increased focus on cervical screening. Acting FDA administrator Andrew von Eschenbach said the vaccine will have "a dramatic effect" on the health of women around the world. Gardasil is an important tool in reducing cervical cancer rates even in countries where screening programs are routine. The National Cancer Institute estimated that 9,700 women would develop cervical cancer in 2006, and 3,700 would die. Merck and CSL Limited are expected to market Gardasil as a cancer vaccine, rather than an STD vaccine. In the early years of Gardasil's introduction it was unclear how widespread the use of the three-shot series would be, in part because of its $525 list price ($175 each for three shots). But as of 2013, vaccine coverage has been rising. In 2013, about 55% of girls ages 13–17 years had at least one dose of the vaccination covered, up from 29% in 2007. Coverage for women ages 18–34 also has increased significantly since 2007. Studies using different pharmacoeconomic models predict that vaccinating young women with Gardasil in combination with screening programs may be more cost effective than screening alone. These results have been important in decisions by many countries to start vaccination programs. For example, the Canadian government approved $300 million to buy the HPV vaccine in 2008 after deciding from studies that the vaccine would be cost-effective especially by immunizing young women. 
Marc Steben, an investigator for the vaccine, wrote that the financial burden of HPV related cancers on the Canadian people was already $300 million per year in 2005, so the vaccine could reduce this burden and be cost-effective. Since penile and anal cancers are much less common than cervical cancer, HPV vaccination of young men is likely to be much less cost-effective than for young women yet is still recommended due to the existent risk (including oral cancer). The August 2009 issue of the Journal of the American Medical Association had an article reiterating the safety of Gardasil and another questioning the way it was presented to doctors and parents. According to the CDC, as of 2012, use of the HPV vaccine had cut rates of infection with HPV-6, -11, -16 and -18 in half in American teenagers (from 11.5% to 4.3%) and by one third in American women in their early twenties (from 18.5% to 12.1%). History Research findings that pioneered the development of the vaccine began in 1991 by investigators Jian Zhou and Ian Frazer in The University of Queensland, Australia. Researchers at UQ found a way to form non-infectious virus-like particles (VLP), which could also strongly activate the immune system. Subsequently, the vaccine was developed in parallel by researchers at Georgetown University Medical Center in America, the University of Rochester in America, the University of Queensland in Australia, and the US National Cancer Institute. MedImmune, GSK, and Merck & Co. advanced these technologies and conducted clinical trials. In December 2014, the FDA approved Gardasil 9, which protects against nine strains of HPV. Society and culture United States A few conservative groups, such as the Family Research Council (FRC), have expressed their fears that vaccination with Gardasil might give girls a false sense of security regarding sex and lead to promiscuity, but no evidence exists to suggest that girls who were vaccinated went on to engage in more sexual activity than unvaccinated girls. Merck, the manufacturer of the vaccine, has lobbied that state governments make vaccination with Gardasil mandatory for school attendance, which has upset some conservative and libertarian groups. The governor of Texas, Rick Perry, issued an executive order adding Gardasil to the state's required vaccination list, which was later overturned by the Texas legislature. Even though Perry also allowed parents to opt out of the program more easily, Perry's order was criticized, by fellow presidential candidates Rick Santorum and Michele Bachmann during the 2012 Republican Party presidential debate as being an overreach of state power in a decision properly left to parents. Japan In June 2013, the Japanese government issued a notice that "cervical cancer vaccinations should no longer be recommended for girls aged 12 to 16" while an investigation is conducted into certain adverse events including pain and numbness in 38 girls. The vaccines sold in Japan are Cervarix, made by GSK plc (formerly GlaxoSmithKline) of the United Kingdom, and Gardasil, made by Merck Sharp & Dohme. An estimated 3.28 million people have received the vaccination; 1,968 cases of possible side effects have been reported. In January 2014, the Vaccine Adverse Reactions Review Committee concluded that there was no evidence to suggest a causal association between the HPV vaccine and the reported adverse events, but did not reinstate proactive recommendations for its use. 
A study on girls in Sapporo showed that since the Japanese government's suspension of recommending the vaccine, completion rates for the full course of vaccination have dropped to 0.6%. On 26 November 2021, the Ministry of Health, Labour, and Welfare of Japan officially issued an announcement to resume active recommendations of the HPV vaccine after 8.5 years of suspension and municipalities are expected to restart such active recommendations from April 2022. References Further reading External links Cancer vaccines Life sciences industry Drugs developed by Merck & Co. Papillomavirus Protein subunit vaccines Cervical cancer
Gardasil
[ "Biology" ]
4,078
[ "Viruses", "Papillomavirus", "Life sciences industry" ]
5,482,989
https://en.wikipedia.org/wiki/Antimony%20triselenide
Antimony triselenide is the chemical compound with the formula Sb2Se3. The material exists as the sulfosalt mineral antimonselite (IMA symbol: Atm), which crystallizes in an orthorhombic space group. In this compound, antimony has a formal oxidation state +3 and selenium −2. The bonding in this compound has covalent character as evidenced by the black color and semiconducting properties of this and related materials. The low-frequency dielectric constant (ε0) has been measured to be 133 along the c axis of the crystal at room temperature, which is unusually large. Its band gap is 1.18 eV at room temperature. The compound may be formed by the reaction of antimony with selenium and has a melting point of 885 K. Applications Sb2Se3 is now being actively explored for applications in thin-film solar cells. A record light-to-electricity conversion efficiency of 9.2% has been reported. References Selenides Antimony(III) compounds
Antimony triselenide
[ "Chemistry" ]
211
[ "Inorganic compounds", "Inorganic compound stubs" ]
5,482,995
https://en.wikipedia.org/wiki/Pelargonium%20graveolens
Pelargonium graveolens is a Pelargonium species native to the Cape Provinces and the Northern Provinces of South Africa, Zimbabwe and Mozambique. Common names include rose geranium, sweet scented geranium, old-fashioned rose geranium, and rose-scent geranium. Etymology Pelargonium comes from the Greek πελαργός pelargos which means stork. Another name for pelargoniums is stork's-bills due to the shape of their fruit. The specific epithet graveolens refers to the strong-smelling leaves. Common names Pelargonium graveolens is also known by taxonomic synonyms Geranium terebinthinaceum Cav. and Pelargonium terebinthinaceum (Cav.) Desf. "Rose geranium" is sometimes used to refer to Pelargonium incrassatum (Andrews) Sims or its synonym Pelargonium roseum (Andrews) DC. – the herbal name. Commercial vendors often list the source of geranium or rose geranium essential oil as Pelargonium graveolens, regardless of its botanical name. Description Pelargonium graveolens is an erect, aromatic, multi-branched subshrub, that grows up to 1.5 m and has a spread of 1 m. The leaves are deeply incised, velvety and soft to the touch (due to glandular hairs). The above-ground parts of the plant are more or less hairy and glandular. The alternately arranged leaves are divided into petioles and leaf blades. The leaf blade is soft, heart-shaped and palmately divided, blunt with lobed to coarsely toothed leaf lobes. The natural form smells of mint. Some cultivars have a scent similar to rose petals, although the leaf shape and scent vary (others have little or no scent). Some leaves are deeply incised and others less so, being slightly lobed like P. capitatum. The flowers vary from pale pink to almost white which appear from late winter to summer, peaking in spring. Distribution It is native to Mozambique and Zimbabwe in southern, tropical Africa, and South Africa (Cape Province, Transvaal). Pelargoniums have been cultivated in South Africa and Namibia for at least 200 years. The plant is also found in the Canary Islands, Corsica, Costa Rica, Cuba, the Dominican Republic, Haiti, southwestern Mexico, and Puerto Rico, where it has been introduced. Cultivars and hybrids Many plants are cultivated under the species name "Pelargonium graveolens" but differ from wild specimens as they are of hybrid origin (probably a cross between P. graveolens, P. capitatum and/or P. radens). There are many cultivars and they have a wide variety of scents, including rose, citrus, mint and cinnamon as well as various fruits. Cultivars and hybrids include: P. 'Graveolens' (or Pelargonium graveolens hort.) - A rose-scented cultivar; possibly a hybrid between P. graveolens and P. radens or P. capitatum. This cultivar is often incorrectly labeled as Pelargonium graveolens (the species). The main difference between the species and this cultivar is the dissection of the leaf. The species' has about 5 lobes but the cultivar has about 10. P. 'Citrosum' - A lemony, citronella-scented cultivar, similar to P. 'Graveolens'. It is meant to repel mosquitos and rumour has it that it was made by genetically bonding genes from the citronella grass but this is highly unlikely. P. 'Cinnamon Rose' - A cinnamon-scented cultivar. P. 'Dr Westerlund' - A lemony rose-scented cultivar, similar to P. 'Graveolens'. P. 'Graveolens Bontrosai' - A genetically challenged form; the leaves are smaller and curl back on themselves and the flowers often do not open fully. Known as P. 'Colocho' in the US. P. 'Grey Lady Plymouth' - A lemony rose-scented cultivar similar to P. 
'Lady Plymouth'. The leaves are grey–green in colour. P. 'Lady Plymouth' - A minty lemony rose-scented cultivar. A very popular variety with a definite mint scent. Possibly a P. radens hybrid. P. 'Lara Starshine' - A lemony rose-scented cultivar, similar to P. 'Graveolens' but with more lemony scented leaves and reddish pink flowers. Bred by Australian plantsman Cliff Blackman. P. 'Lucaeflora' - A rose-scented variety, much more similar to the species than most other cultivars and varieties. P. × melissinum - The lemon balm pelargonium (lemon balm - Melissa officinalis). This is a hybrid between P. crispum and P. graveolens. P. 'Mint Rose' - A minty rose-scented cultivar similar to P. 'Lady Plymouth' but without the variegation of the leaves and lemony undertones. P. 'Secret Love' - An unusual eucalyptus-scented cultivar with pale pink flowers. P. 'Van Leeni' - A lemony rose-scented cultivar, similar to P. 'Graveolens' and P. 'Dr Westerlund'. Others known: Camphor Rose, Capri, Granelous and Little Gem. Uses Both the true species and the cultivated plant may be called rose geranium – pelargoniums are often called geraniums, as they fall within the plant family Geraniaceae, and were previously classified in the same genus. The common P. 'Graveolens' or P. 'Rosat' has great importance in the perfume industry. It is cultivated on a large scale and its foliage is distilled for its scent. Pelargonium distillates and absolutes, commonly known as "geranium oil", are sold for aromatherapy and massage therapy applications. They are also sometimes used to supplement or adulterate more expensive rose oils. As a flavoring, the flowers and leaves are used in cakes, jams, jellies, ice creams, sorbets, salads, sugars, and teas. In addition, it is used as a flavoring agent in some pipe tobaccos, being one of the characteristic "Lakeland scents." Rose geranium, known as Mâatercha or Ätarcha in Morocco, is used as a flavorful herb to complement spearmint tea. It is often added alongside spearmint or other minty herbs to enhance the overall flavor profile of the tea, adding a floral and aromatic note to the brew. In Cyprus, where it is known as , it is used to flavour and scent the sugar syrup in apricot preserves, known as . Chemical constituents A modern analysis listed the presence of over 50 organic compounds in the essential oil of P. graveolens from an Australian source. Analyses of Indian geranium oils indicated a similar phytochemical profile, and showed that the major constituents (in terms of % composition) were citronellol + nerol and geraniol. Gallery References graveolens Medicinal plants Perfume ingredients Essential oils Flora of Mozambique Flora of Zimbabwe Flora of the Cape Provinces Flora of the Northern Provinces Crops originating from South Africa Taxa named by Charles Louis L'Héritier de Brutelle
Pelargonium graveolens
[ "Chemistry" ]
1,586
[ "Essential oils", "Natural products" ]
5,483,077
https://en.wikipedia.org/wiki/Scandium%28III%29%20sulfide
Scandium(III) sulfide is a chemical compound of scandium and sulfur with the chemical formula Sc2S3. It is a yellow solid. Structure The crystal structure of Sc2S3 is closely related to that of sodium chloride, in that it is based on a cubic close packed array of anions. Whereas NaCl has all the octahedral interstices in the anion lattice occupied by cations, Sc2S3 has one third of them vacant. The vacancies are ordered, but in a very complicated pattern, leading to a large, orthorhombic unit cell belonging to the space group Fddd. Synthesis Metal sulfides are usually prepared by heating mixtures of the two elements, but in the case of scandium, this method yields scandium monosulfide, ScS. Sc2S3 can be prepared by heating scandium(III) oxide under flowing hydrogen sulfide in a graphite crucible to 1550 °C or above for 2–3 hours. The crude product is then purified by chemical vapor transport at 950 °C using iodine as the transport agent. Sc2O3 + 3H2S → Sc2S3 + 3H2O Scandium(III) sulfide can be prepared by reacting scandium(III) chloride with dry hydrogen sulfide at elevated temperature: 2 ScCl3 + 3 H2S → Sc2S3 + 6 HCl Reactivity Above 1100 °C, Sc2S3 loses sulfur, forming nonstoichiometric compounds such as Sc1.37S2. References Sesquisulfides Scandium compounds Inorganic compounds
Scandium(III) sulfide
[ "Chemistry" ]
334
[ "Inorganic compounds" ]
5,483,115
https://en.wikipedia.org/wiki/Selenium%20hexasulfide
Selenium hexasulfide is a chemical compound with the chemical formula Se2S6. Its molecular structure is an 8-membered ring, consisting of two selenium and six sulfur atoms (diselenacyclooctasulfane), analogous to the S8 ring, an allotrope of sulfur (cyclooctasulfur or cyclooctasulfane), and other 8-membered rings of selenium sulfides with formula SenS8−n. There are several isomers depending on the relative placement of the selenium atoms in the ring: 1,2-diselenacyclooctasulfane (with the two Se atoms adjacent), 1,3-diselenacyclooctasulfane, 1,4-diselenacyclooctasulfane, and 1,5-diselenacyclooctasulfane (with the Se atoms opposite). It is an oxidizing agent. The 1,2 isomer can be prepared by reaction of chlorosulfanes and dichlorodiselane with potassium iodide in carbon disulfide. The reaction also produces cyclooctaselenium and all other eight-membered cyclic selenium sulfides, except selenacyclooctasulfane SeS7, and several six- and seven-membered rings. References Sulfides Selenium compounds Oxidizing agents Interchalcogens
Selenium hexasulfide
[ "Chemistry" ]
300
[ "Redox", "Inorganic compounds", "Oxidizing agents", "Inorganic compound stubs" ]
5,483,173
https://en.wikipedia.org/wiki/Selenium%20trioxide
Selenium trioxide is the inorganic compound with the formula SeO3. It is white, hygroscopic solid. It is also an oxidizing agent and a Lewis acid. It is of academic interest as a precursor to Se(VI) compounds. Preparation Selenium trioxide is difficult to prepare because it is unstable with respect to the dioxide: 2 SeO3 → 2 SeO2 + O2 It has been generated in a number of ways despite the fact that the dioxide does not combust under normal conditions. One method entails dehydration of anhydrous selenic acid with phosphorus pentoxide at 150–160 °C. Another method is the reaction of liquid sulfur trioxide with potassium selenate. SO3 + K2SeO4 → K2SO4 + SeO3 Reactions In its chemistry SeO3 generally resembles sulfur trioxide, SO3, rather than tellurium trioxide, TeO3. The substance reacts explosively with oxidizable organic compounds. At 120 °C SeO3 reacts with selenium dioxide to form the Se(VI)-Se(IV) compound diselenium pentaoxide: SeO3 + SeO2 → Se2O5 It reacts with selenium tetrafluoride to form selenoyl fluoride, the selenium analogue of sulfuryl fluoride 2SeO3 + SeF4 → 2SeO2F2 + SeO2 As with SO3 adducts are formed with Lewis bases such as pyridine, dioxane and ether. With lithium oxide and sodium oxide it reacts to form salts of SeVIO54− and SeVIO66−: With Li2O, it gives Li4SeO5, containing the trigonal pyramidal anion SeVIO54− with equatorial bonds, 170.6–171.9 pm; and longer axial Se−O bonds of 179.5 pm. With Na2O it gives Na4SeO5, containing the square pyramidal SeVIO54−, with Se−O bond lengths ranging from range 172.9 → 181.5 pm, and Na12(SeO4)3(SeO6), containing octahedral SeVIO66−. SeVIO66− is the conjugate base of the unknown orthoselenic acid (Se(OH)6). Structure In the solid phase SeO3 consists of cyclic tetramers, with an 8 membered (Se−O)4 ring. Selenium atoms are 4-coordinate, bond lengths being Se−O bridging are 175 pm and 181 pm, non-bridging 156 and 154 pm. SeO3 in the gas phase consists of tetramers and monomeric SeO3 which is trigonal planar with an Se−O bond length of 168.78 pm. References Further reading Oxides Selenium(VI) compounds Oxidizing agents Interchalcogens
Selenium trioxide
[ "Chemistry" ]
605
[ "Oxides", "Redox", "Oxidizing agents", "Salts" ]
5,483,302
https://en.wikipedia.org/wiki/Selenium%20oxybromide
Selenium oxybromide (SeOBr2) is a selenium oxohalide chemical compound. Preparation Selenium oxybromide can be prepared through the reaction of selenium dioxide and selenium tetrabromide. Selenium and selenium dioxide are reacted with bromine to form selenium monobromide and selenium tetrabromide. Dissolving the selenium dioxide in the tetrabromide will produce the oxybromide. 2 Se + Br2 → Se2Br2 Se2Br2 + 3 Br2 → 2 SeBr4 SeBr4 + SeO2 → 2 SeOBr2 Structure Evidence from infrared and polarized Raman spectroscopy suggests that selenium oxybromide adopts a pyramidal molecular geometry with Cs symmetry, like other chalcogen(IV) oxohalides such as thionyl bromide () and selenium oxydichloride (). Properties Selenium oxybromide is a reddish-brown solid with a low melting point (41.6 °C) and chemical properties similar to selenium oxychloride. It boils at 220 °C and decomposes near the boiling point, making distillation an ineffective purification method. Its electrical conductivity in the liquid state just above the melting temperature is 6×10−5 S/m. SeOBr2 is hydrolyzed by water to form H2SeO3 and HBr. SeOBr2 is highly reactive, with most reactions taking place in the liquid state. Selenium will dissolve in it, forming Se2Br2. Iron, copper, gold, platinum, and zinc are all attacked by SeOBr2. References Selenium(IV) compounds Oxobromides
Selenium oxybromide
[ "Chemistry" ]
369
[ "Inorganic compounds", "Inorganic compound stubs" ]
5,483,392
https://en.wikipedia.org/wiki/Samarium%28II%29%20chloride
Samarium(II) chloride (SmCl2) is a chemical compound, used as a radical generating agent in the ketone-mediated intraannulation reaction. Preparation Reduction of samarium(III) chloride with samarium metal in a vacuum at a temperature of 800 °C to 900 °C, or with hydrogen gas at 350 °C yields samarium(II) chloride: 2 SmCl3 + Sm → 3 SmCl2 2 SmCl3 + H2 → 2 SmCl2 + 2 HCl Samarium(II) chloride can also be prepared by reducing samarium(III) chloride with lithium metal/naphthalene in THF: SmCl3 + Li → SmCl2 + LiCl A similar reaction has been observed with sodium. Structure Samarium(II) chloride adopts the PbCl2 (cotunnite) structure. References Chlorides Lanthanide halides Samarium(II) compounds
Samarium(II) chloride
[ "Chemistry" ]
190
[ "Chlorides", "Inorganic compounds", "Inorganic compound stubs", "Salts" ]
5,483,516
https://en.wikipedia.org/wiki/Computon
A computon is a combined unit of computing power, including processor cycles, memory, disk storage and bandwidth, proposed in 2005 by researchers at Hewlett-Packard, with the word being a cross between "computation" and "photon", the name for a packet of electromagnetic energy. HP hoped that the computon would become the computing industry's equivalent to public utility's watt-hour. See also Computron Grid computing Distributed computing Computronium References Who wants to buy a computon?, The Economist, 12 March 2005 Grid computing: Electricity is sold by the kilowatt-hour. Now a researcher has proposed that computing power should be sold by the computon External links Sun Power Units Computer performance
Computon
[ "Technology" ]
152
[ "Computer performance" ]
5,483,579
https://en.wikipedia.org/wiki/Tin%28II%29%20bromide
Tin(II) bromide is a chemical compound of tin and bromine with a chemical formula of SnBr2. Tin is in the +2 oxidation state. The stability of tin compounds in this oxidation state is attributed to the inert pair effect. Structure and bonding In the gas phase SnBr2 is non-linear with a bent configuration similar to SnCl2 in the gas phase. The Br-Sn-Br angle is 95° and the Sn-Br bond length is 255pm. There is evidence of dimerisation in the gaseous phase. The solid state structure is related to that of SnCl2 and PbCl2 and the tin atoms have five near bromine atom neighbours in an approximately trigonal bipyramidal configuration. Two polymorphs exist: a room-temperature orthorhombic polymorph, and a high-temperature hexagonal polymorph. Both contain (SnBr2)∞ chains but the packing arrangement differs. Preparation Tin(II) bromide can be prepared by the reaction of metallic tin and HBr distilling off the H2O/HBr and cooling: Sn + 2 HBr → SnBr2 + H2 However, the reaction will produce tin (IV) bromide in the presence of oxygen. Reactions SnBr2 is soluble in donor solvents such as acetone, pyridine and dimethylsulfoxide to give pyramidal adducts. A number of hydrates are known, 2SnBr2·H2O, 3SnBr2·H2O & 6SnBr2·5H2O which in the solid phase have tin coordinated by a distorted trigonal prism of 6 bromine atoms with Br or H2O capping 1 or 2 faces. When dissolved in HBr the pyramidal SnBr3− ion is formed. Like SnCl2 it is a reducing agent. With a variety of alkyl bromides oxidative addition can occur to yield the alkyltin tribromide e.g. SnBr2 + RBr → RSnBr3 Tin(II) bromide can act as a Lewis acid forming adducts with donor molecules e.g. trimethylamine where it forms NMe3·SnBr2 and 2NMe3·SnBr2 It can also act as both donor and acceptor in, for example, the complex F3B·SnBr2·NMe3 where it is a donor to boron trifluoride and an acceptor to trimethylamine. References Bromides Metal halides Tin(II) compounds Reducing agents
Tin(II) bromide
[ "Chemistry" ]
559
[ "Inorganic compounds", "Redox", "Reducing agents", "Salts", "Bromides", "Metal halides" ]
5,484,271
https://en.wikipedia.org/wiki/Cockshoot
In fowl hunting, a cockshoot, also called cockshut or cock-road, was a broad glade, an opening in a forest, through which woodcock might shoot. During the day, woodcocks remain out of sight, unless disturbed; but at night, they take flight in search of water. Flying generally low, they will follow along any openings in the woods. Hunters would place nets across the glade to catch any such birds. If such broad glades did not exist, hunters would cut roads through woods, thickets, groves, etc. They usually made these roads about 40 ft (12 m) wide, perfectly straight and clear; and to two opposite trees, they tied a net, which had a stone fastened to each corner. Then, having a place to lie hidden, at a proper distance, a stake was placed nearby, to which was fastened the lines of the net. When they perceived the game flying up the road, they unwound the lines from the stake; the stones would then pull down the netting, catching the birds. Various dictionaries erroneously applied the term cockshoot to the net itself, and claimed that the proper spelling was cockshut, believing that the word referred to something which shut in the birds. From this came the phrases cockshut time or light, referring to evening twilight, or nightfall, when woodcocks are likely to fly in the open. This alternate spelling is now more prevalent than the original, though usually occurring as in the previously-mentioned phrase, or as a surname, than as a reference to the original, obsolete hunting practice. Quotations See also Cockshutt (disambiguation) References Fowling Forests
Cockshoot
[ "Biology" ]
352
[ "Forests", "Ecosystems" ]
5,485,216
https://en.wikipedia.org/wiki/Edward%20Martell
Edward Ambrose Martell (February 23, 1918 – July 12, 1995) was an American radiochemist for the US National Center for Atmospheric Research (NCAR) in Boulder, Colorado. He fought fervently throughout his life against the medical establishment and the National Institutes of Health for what he perceived to be insufficient research into radiation-induced lung cancer, particularly in regard to cigarette smoking. Education Martell was born in Spencer, Massachusetts. He attended the U.S. Military Academy in West Point, New York. He was commissioned as a second lieutenant after graduating in 1942 and served in the Okinawa campaign of World War II, retiring with the rank of lieutenant colonel. He received a Ph.D. in radiochemistry from the University of Chicago in 1950. Willard Libby was Martell's mentor at the university through the late 1940s and early 1950s. Research After receiving his Ph.D., he became a group leader at the Fermi Institute for Nuclear Studies at the University of Chicago and also took up a position at the Air Force Cambridge Research Laboratory in Bedford, Massachusetts. He managed radiation-effects projects studying a series of nuclear weapons tests in Nevada and the 1954 hydrogen bomb tests at the Bikini Atoll in the South Pacific. In 1962, after witnessing the devastating effects of nuclear weapons, Martell decided to pursue a different direction in his life and took up a position as a radiochemist in the Atmospheric Chemistry Division at NCAR in Boulder, Colorado. In 1980 he published a paper in Newscript in which he argued that radium progeny, particularly polonium-210, are responsible for the cancer-causing effects of cigarettes. He followed this up in 1983 with a subsequent research paper in which he calculated that smokers who die of lung cancer have been exposed to 80-100 rads of radiation. In 1993 he published a paper in which he theorized that ionizing radiation in artesian groundwater was the energy source which fueled the evolution of DNA and the first living cells, after exchanging ideas with the University of Colorado's Nobel Prize-winning chemist Tom Cech. At the time of his death, he was working on a book called "Natural Radionuclides and Life". Positions and efforts During his time at NCAR he served as president of the International Commission on Atmospheric Chemistry and Radioactivity within the International Association of Meteorology and Atmospheric Sciences. He was also a fellow of the American Association for the Advancement of Science and a member of numerous other scientific societies. He served as an expert witness during hearings before the U.S. Congress and United Nations on radioactive fallout. He also spearheaded the cleanup of plutonium contamination in the soil surrounding the Rocky Flats nuclear weapons manufacturing facility located outside of Boulder, after measuring levels of radioactivity surrounding the site. He also supported the Southern Poverty Law Center which represented the victims of government-sponsored radiation testing on low-income black citizens. Personal life Martell married Marian Elizabeth Marks. He had four children. References External links NCAR Mourns the Death of Ed Martell, Its Only Radiochemist 1918 births 1995 deaths People from Spencer, Massachusetts 20th-century American chemists Nuclear chemists Radiation health effects researchers American whistleblowers
Edward Martell
[ "Chemistry" ]
648
[ "Nuclear chemists" ]
5,485,600
https://en.wikipedia.org/wiki/Insular%20dwarfism
Insular dwarfism, a form of phyletic dwarfism, is the process and condition of large animals evolving or having a reduced body size when their population's range is limited to a small environment, primarily islands. This natural process is distinct from the intentional creation of dwarf breeds, called dwarfing. This process has occurred many times throughout evolutionary history, with examples including various species of dwarf elephants that evolved during the Pleistocene epoch, as well as more ancient examples, such as the dinosaurs Europasaurus and Magyarosaurus. This process, and other "island genetics" artifacts, can occur not only on islands, but also in other situations where an ecosystem is isolated from external resources and breeding. This can include caves, desert oases, isolated valleys and isolated mountains ("sky islands"). Insular dwarfism is one aspect of the more general "island effect" or "Foster's rule", which posits that when mainland animals colonize islands, small species tend to evolve larger bodies (island gigantism), and large species tend to evolve smaller bodies. This is itself one aspect of island syndrome, which describes the differences in morphology, ecology, physiology and behaviour of insular species compared to their continental counterparts. Possible causes There are several proposed explanations for the mechanism which produces such dwarfism. One is a selective process where only smaller animals trapped on the island survive, as food periodically declines to a borderline level. The smaller animals need fewer resources and smaller territories, and so are more likely to get past the break-point where population decline allows food sources to replenish enough for the survivors to flourish. Smaller size is also advantageous from a reproductive standpoint, as it entails shorter gestation periods and generation times. In the tropics, small size should make thermoregulation easier. Among herbivores, large size confers advantages in coping with both competitors and predators, so a reduction or absence of either would facilitate dwarfing; competition appears to be the more important factor. Among carnivores, the main factor is thought to be the size and availability of prey resources, and competition is believed to be less important. In tiger snakes, insular dwarfism occurs on islands where available prey is restricted to smaller sizes than are normally taken by mainland snakes. Since prey size preference in snakes is generally proportional to body size, small snakes may be better adapted to take small prey. Differences of Dwarfism & gigantism The inverse process, wherein small animals breeding on isolated islands lacking the predators of large land masses may become much larger than normal, is called island gigantism. An excellent example is the dodo, the ancestors of which were normal-sized pigeons. There are also several species of giant rats, one still extant, that coexisted with both Homo floresiensis and the dwarf stegodonts on Flores. The process of insular dwarfing can occur relatively rapidly by evolutionary standards. This is in contrast to increases in maximum body size, which are much more gradual. When normalized to generation length, the maximum rate of body mass decrease during insular dwarfing was found to be over 30 times greater than the maximum rate of body mass increase for a ten-fold change in mammals. 
The disparity is thought to reflect the fact that pedomorphism offers a relatively easy route to evolve smaller adult body size; on the other hand, the evolution of larger maximum body size is likely to be interrupted by the emergence of a series of constraints that must be overcome by evolutionary innovations before the process can continue. Factors influencing the extent of dwarfing For both herbivores and carnivores, island size, the degree of island isolation and the size of the ancestral continental species appear not to be of major direct importance to the degree of dwarfing. However, when considering only the body masses of recent top herbivores and carnivores, and including data from both continental and island land masses, the body masses of the largest species in a land mass were found to scale to the size of the land mass, with slopes of about 0.5 log(body mass/kg) per log(land area/km2). There were separate regression lines for endothermic top predators, ectothermic top predators, endothermic top herbivores and (on the basis of limited data) ectothermic top herbivores, such that food intake was 7- to 24-fold higher for top herbivores than for top predators, and about the same for endotherms and ectotherms of the same trophic level (this leads to ectotherms being 5 to 16 times heavier than corresponding endotherms). It has been suggested that for dwarf elephants, competition was an important factor in body size, with islands with competing herbivores having significantly larger dwarf elephants than those where competing herbivores were absent. Examples Non-avian dinosaurs Recognition that insular dwarfism could apply to dinosaurs arose through the work of Ferenc Nopcsa, a Hungarian-born aristocrat, adventurer, scholar, and paleontologist. Nopcsa studied Transylvanian dinosaurs intensively, noticing that they were smaller than their cousins elsewhere in the world. For example, he unearthed six-meter-long sauropods, a group of dinosaurs which elsewhere commonly grew to 30 meters or more. Nopcsa deduced that the area where the remains were found was an island, Hațeg Island (now the Haţeg or Hatzeg basin in Romania) during the Mesozoic era. Nopcsa's proposal of dinosaur dwarfism on Hațeg Island is today widely accepted after further research confirmed that the remains found are not from juveniles. Sauropods Other In addition, the genus Balaur was initially described as a Velociraptor-sized dromaeosaurid (and in consequence a dubious example of insular dwarfism), but has been since reclassified as a secondarily flightless stem bird, closer to modern birds than Jeholornis (thus actually an example of insular gigantism). Birds Squamates Mammals Pilosans Proboscideans Primates Carnivorans Non-ruminant ungulates Bovids Cervids and relatives Plants See also Island gigantism Island syndrome Island tameness Pleistocene extinctions Notes References External links Strange world of island species October 31, 2004 The Observer Animal size Evolutionary biology Dwarfism
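A minimal numerical sketch of the scaling relation described above, where the maximum body mass of top species scales with land area with a slope of about 0.5 on log-log axes. The intercept differs per trophic group and is not given in the text, so the example only compares two land masses relative to each other; the areas chosen are illustrative assumptions.

```python
# If log10(max body mass) rises by ~0.5 per unit of log10(land area),
# then mass scales roughly with the square root of area.

def relative_max_mass(area_small_km2: float, area_large_km2: float, slope: float = 0.5) -> float:
    """Expected ratio (small land mass / large land mass) of maximum body masses."""
    return (area_small_km2 / area_large_km2) ** slope

# Example: an island 1/10,000 the area of a continent
ratio = relative_max_mass(1e2, 1e6)
print(f"Expected max body mass on the island: {ratio:.2%} of the continental value")
# -> 1.00%  (a hundred-fold reduction for a ten-thousand-fold smaller area)
```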
Insular dwarfism
[ "Biology" ]
1,323
[ "Evolutionary biology", "Animal size", "Organism size" ]
5,485,738
https://en.wikipedia.org/wiki/Aeon%20of%20Horus
The Aeon of Horus, which began in the early 20th century, is considered the current era in Thelemic philosophy. This Aeon is marked by a significant shift in spiritual and societal paradigms, emphasizing self-realization, individualism, and the pursuit of one's True Will. The child god Horus symbolizes this era, representing a break from past dogmas and the dawn of a new age of enlightenment and spiritual awakening. The Aeon card in the Thoth Tarot deck, designed by Crowley and painted by Lady Frieda Harris, represents the Aeon of Horus. The card, traditionally known as "Judgement" in other decks, symbolizes the transformative and revelatory nature of this new aeon. It depicts Horus and Hoor-paar-kraat, reflecting the themes of rebirth, transformation, and the dawning of a new era of consciousness and spiritual awakening in Thelemic philosophy. Description by Crowley The modern Aeon of Horus is portrayed as a time of self-realization as well as a growing interest in all things spiritual, and is considered to be dominated by the principle of the child. The Word of its Law is Thelema (will), which is complemented by Agape (love), and its formula is Abrahadabra. Individuality and finding the individual's True Will are the dominant aspects; its formula is that of growth, in consciousness and love, toward self-realization. Concerning the Aeon of Horus, Crowley wrote: And also, in his Little Essays Toward Truth: Sometimes Crowley compared the Word of Horus with other formulas, whose reigns appear to overlap with the Aeon of Osiris and/or Isis. From his The Confessions of Aleister Crowley: Key characteristics Self-realization and True Will The primary focus of the Aeon of Horus is the discovery and fulfillment of one's True Will. This concept is central to Thelema, where each individual is encouraged to find and follow their unique path in life. Aleister Crowley's reception of The Book of the Law marked the beginning of this aeon, with the central tenet being "Do what thou wilt shall be the whole of the Law." Israel Regardie viewed Crowley's revelation of the aeon as a monumental shift towards new spiritual and psychological paradigms, emphasizing individual spiritual enlightenment and personal responsibility. Kenneth Grant also highlights this transformative power, noting how the Aeon of Horus calls individuals to embrace their True Will and transcend old paradigms. Individualism and spiritual awakening The Aeon of Horus emphasizes personal freedom and the breaking away from authoritarian structures that characterized the previous aeons. This era is about embracing one's inner divinity and achieving spiritual enlightenment. Lon Milo DuQuette explains that the Aeon of Horus is about the growth of individual consciousness and the realization of one's spiritual potential. Kenneth Grant further elaborates on this idea, noting how the symbolism of Horus reflects a break from the constraints of previous aeons and heralds a new era of spiritual liberation. J. Daniel Gunther interprets the Aeon as a period where humanity is poised for significant spiritual evolution, driven by the awakening of individual consciousness. The child god symbolism Horus, the child god, represents innocence, new beginnings, and the potential for growth. This symbolism is reflected in the Thelemic emphasis on exploring new spiritual paths and understanding. Kenneth Grant discusses the symbolism of Horus as the crowned and conquering child, embodying the qualities of renewal and triumph over past limitations. 
Richard Kaczynski offers insights into how Crowley's experiences and writings shaped the Aeon of Horus and its principles, detailing the profound impact of Crowley's work on modern esoteric thought. Relationship with the Age of Aquarius Lon Milo DuQuette has commented on the connection between the Aeon of Horus and the Age of Aquarius. Christopher Partridge, in The Re-Enchantment of the West, examines the rise of New Age spirituality and its intersections with occult traditions, including Thelema. He notes that the New Age movement, often associated with the Age of Aquarius, draws upon concepts introduced by Crowley and his contemporaries. Partridge points out that the New Age's emphasis on individual spiritual experience and global transformation parallels the revolutionary spirit of the Aeon of Horus, as proclaimed by Crowley. Richard Kaczynski, in Perdurabo: The Life of Aleister Crowley, discusses how Crowley's proclamation of the Aeon of Horus aligns with broader cultural shifts that some associate with the Age of Aquarius. He explores the synchronicity between Crowley's work and the evolving spiritual landscape of the 20th century, highlighting how Crowley's ideas resonate with the themes of personal liberation and spiritual transformation that characterize the Age of Aquarius. Timekeeping In the Aeon of Horus, Thelemites often use a unique system of dating that incorporates elements of Tarot, astrology, and Thelemic principles. This system aligns significant events and periods with corresponding Tarot trumps and the positions of the Sun and Moon in the zodiac. Thelemic year cycles and representation The Thelemic calendar begins in 1904, the year in which Crowley received The Book of the Law and inaugurated the Aeon of Horus. Each year in the Thelemic calendar is represented by a Tarot trump. This association is based on a cycle that repeats every 22 years, corresponding to the 22 Major Arcana cards of the Tarot. The years are divided into "docosades" of 22 years each, denoted by Roman numerals. For example, the year 1947 (the year of Crowley's death) corresponds to "The Universe" (XXI), as 1947 − 1904 gives 43, and dividing 43 by 22 leaves a remainder of 21, the number of that card. Thus, the year 1947 would be written as Anno Ixxi, where I indicates the second docosade (The Magician) and xxi is the year within that docosade. Sun and moon sign Thelemic timekeeping also considers the astrological positions of the Sun and Moon. For instance, on December 1, 1947, at the time of Crowley's death, the Sun was in Sagittarius (♐) at 8°, and the Moon was in Cancer (♋) at 20°. Recording time in a magical record The Magician might date their magical record entries at specific times of day, in line with the practices outlined in Liber Resh vel Helios. This practice involves saluting the Sun at dawn, noon, sunset, and midnight, thereby making entries that are aligned with these solar positions. Example of Thelemic date The date of Aleister Crowley's death, December 1, 1947, in Thelemic terms could be expressed as: Dies Lunae, Anno Ixxi, ☉ in 8° ♐, ☽ in 20° ♋ This interpretation of Anno Ixxi, "The Completion of the Magician", aligns with the symbolism of the Tarot and the progress through the docosades. See also References Citations Works cited Further reading Calendar eras Horus Thelema Timekeeping
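A minimal sketch of the docosade arithmetic described in the Timekeeping section above. Only "The Magician" and "The Universe" are named in the text; the full list of trump names below follows the Thoth-deck convention and is an assumption for illustration, as are the function names.

```python
# Thelemic year notation: years count from 1904 and repeat in 22-year "docosades".

TRUMPS = [
    "The Fool", "The Magician", "The High Priestess", "The Empress", "The Emperor",
    "The Hierophant", "The Lovers", "The Chariot", "Adjustment", "The Hermit",
    "Fortune", "Lust", "The Hanged Man", "Death", "Art", "The Devil",
    "The Tower", "The Star", "The Moon", "The Sun", "The Aeon", "The Universe",
]

def to_roman(n: int, numerals=((10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I"))) -> str:
    out = ""
    for value, symbol in numerals:
        while n >= value:
            out += symbol
            n -= value
    return out or "0"

def thelemic_year(gregorian_year: int) -> str:
    elapsed = gregorian_year - 1904
    docosade, card = divmod(elapsed, 22)   # which 22-year cycle, and position within it
    return f"Anno {to_roman(docosade)}{to_roman(card).lower()} ({TRUMPS[card]})"

print(thelemic_year(1947))   # -> Anno Ixxi (The Universe), matching the example above
```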
Aeon of Horus
[ "Physics" ]
1,484
[ "Spacetime", "Timekeeping", "Physical quantities", "Time" ]
5,485,741
https://en.wikipedia.org/wiki/Home%20modifications
Home modifications are defined as environmental interventions aiming to support activity performance in the home. More specifically, home modifications are often changes made to the home environment to help people with functional disability or impairment to be more independent and safe in their own homes and reduce any risk of injury to themselves or their caregivers. Examples of home modifications include installing ramps and rails, altering kitchen and bathroom areas (relocating switches and lowering bench heights), and installing emergency alarms. There are a number of words in common use that are confused with home modifications. For example, home modification is more than home improvement, home renovation or remodelling; those three terms refer to changes made to housing for purposes other than disability. Other more closely related terms that are interchanged with home modifications in literature include combinations of home/housing/environment/residential coupled with modification/adaptation/intervention. Each of these terms has a nuanced meaning analysed in further detail by Bridge (2006). However, these terms have in common a context of housing, health and care for people with disability or impairment. Home modifications directly impact the accessibility to and within a home and are considered one aspect of the complex relationship between housing and health. Home modifications are also the subject of ongoing public health and economics research because of their potential to support aging in place and substitute for caregiving. In addition to aging in place and caregiving, there is some evidence that home modifications can reduce falls in the home. References Accessible building Architectural design Caregiving
Home modifications
[ "Engineering" ]
300
[ "Accessible building", "Design", "Architectural design", "Architecture" ]
5,487,863
https://en.wikipedia.org/wiki/Retarded%20position
Einstein's equations admit gravitational-wave-like solutions. In the case of a moving point-like mass and in the linearized limit of a weak-gravity approximation these solutions of the Einstein equations are known as the Liénard–Wiechert gravitational potentials. Wave-like variations of the gravitational field at any point of space at some instant of time t are generated by the mass taken in the preceding (or retarded) instant of time s < t on its world-line, at a vertex of the null cone connecting the mass and the field point. The position of the mass that generates the field is called the retarded position and the Liénard–Wiechert potentials are called the retarded potentials. Gravitational waves caused by acceleration of a mass appear to come from the position and direction of the mass at the time it was accelerated (the retarded time and position). The retarded time and the retarded position of the mass are a direct consequence of the finite value of the speed of gravity, the speed with which gravitational waves propagate in space. As in the case of the Liénard–Wiechert potentials for electromagnetic effects and waves, the static potentials from a moving gravitational mass (i.e., its simple gravitational field, also known as gravitostatic field) are "updated" so that, for a mass moving at constant velocity, they point to its actual (instantaneous) position, with no retardation effects. This happens also for static electric and magnetic effects and is required by Lorentz symmetry, since any mass or charge moving with constant velocity at a great distance could be replaced by a moving observer at the same distance, with the object now at "rest." In this latter case, the static gravitational field seen by the observer would be required to point to the same position, which is the non-retarded position of the object (mass). Only gravitational waves, caused by acceleration of a mass, and which cannot be removed by a change in a distant observer's inertial frame, must be subject to aberration, and thus originate from a retarded position and direction, due to their finite velocity of travel from their source. Such waves correspond to electromagnetic waves radiated from an accelerated charge. Note that for gravitational masses moving past each other in straight lines (or for that matter for electromagnetically charged objects), there is little or no retardation effect on the influence between them, which is mediated by "static" components of the fields. So long as no radiation is emitted, conservation of momentum requires that forces between objects (either electromagnetic or gravitational forces) point at objects' instantaneous and up-to-date positions, and not in the direction of their speed-of-light-delayed (retarded) positions. However, since no information can be transmitted from such an interaction, such influences (which appear to propagate faster than light) cannot be used to violate principles of relativity. See also Faster than light Liénard–Wiechert potential Further reading Does Gravity Travel at the Speed of Light? in The Physics FAQ General relativity
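A small numerical sketch of the retarded-time condition implied above: for a source on a known worldline, the field seen at an observation event comes from the earlier time at which the source sat on the observer's past light cone. The one-dimensional straight-line worldline, units, and iteration count below are illustrative assumptions.

```python
# Solve the implicit condition  t_r = t_obs - |x_obs - x_src(t_r)| / c
# by simple fixed-point iteration (converges here because the source moves slower than c).

C = 299_792_458.0  # propagation speed of light / gravity, m/s

def x_src(t: float) -> float:
    """Source position on a straight worldline (1-D), moving at 0.5 c."""
    return 0.5 * C * t

def retarded_time(t_obs: float, x_obs: float, iterations: int = 50) -> float:
    t_r = t_obs
    for _ in range(iterations):
        t_r = t_obs - abs(x_obs - x_src(t_r)) / C
    return t_r

t_obs, x_obs = 10.0, 0.0                      # observer at the origin, 10 s
t_r = retarded_time(t_obs, x_obs)
print(f"retarded time    : {t_r:.4f} s")                         # ~6.6667 s
print(f"retarded position: {x_src(t_r) / C:.4f} light-seconds")  # ~3.3333 ls from the observer
```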
Retarded position
[ "Physics" ]
644
[ "General relativity", "Theory of relativity" ]
5,488,222
https://en.wikipedia.org/wiki/Unitary%20divisor
In mathematics, a natural number a is a unitary divisor (or Hall divisor) of a number b if a is a divisor of b and if a and b/a are coprime, having no common factor other than 1. Equivalently, a divisor a of b is a unitary divisor if and only if every prime factor of a has the same multiplicity in a as it has in b. The concept of a unitary divisor originates from R. Vaidyanathaswamy (1931), who used the term block divisor. Example The integer 5 is a unitary divisor of 60, because 5 and 12 have only 1 as a common factor. On the contrary, 6 is a divisor but not a unitary divisor of 60, as 6 and 10 have a common factor other than 1, namely 2. Sum of unitary divisors The sum-of-unitary-divisors function is denoted by the lowercase Greek letter sigma thus: σ*(n). The sum of the k-th powers of the unitary divisors is denoted by σ*k(n); it is a multiplicative function. If the proper unitary divisors of a given number add up to that number, then that number is called a unitary perfect number. Properties Number 1 is a unitary divisor of every natural number. The number of unitary divisors of a number n is 2^k, where k is the number of distinct prime factors of n. This is because each integer N > 1 is the product of positive powers p^(r_p) of distinct prime numbers p. Thus every unitary divisor of N is the product, over a given subset S of the prime divisors {p} of N, of the prime powers p^(r_p) for p ∈ S. If there are k prime factors, then there are exactly 2^k subsets S, and the statement follows. The sum of the unitary divisors of n is odd if n is a power of 2 (including 1), and even otherwise. Both the count and the sum of the unitary divisors of n are multiplicative functions of n that are not completely multiplicative. The Dirichlet generating function is ∑ σ*k(n)/n^s = ζ(s)ζ(s − k)/ζ(2s − k). Every divisor of n is unitary if and only if n is square-free. The set of all unitary divisors of n forms a Boolean algebra with meet given by the greatest common divisor and join by the least common multiple. Equivalently, the set of unitary divisors of n forms a Boolean ring, where the addition and multiplication are given by a ⊕ b = ab/(a, b)² and a · b = (a, b), where (a, b) denotes the greatest common divisor of a and b. Odd unitary divisors The sum of the k-th powers of the odd unitary divisors, taken over the unitary divisors of n that are odd, is also a multiplicative function with its own Dirichlet generating function. Bi-unitary divisors A divisor d of n is a bi-unitary divisor if the greatest common unitary divisor of d and n/d is 1. This concept originates from D. Suryanarayana (1972). [The number of bi-unitary divisors of an integer, in The Theory of Arithmetic Functions, Lecture Notes in Mathematics 251: 273–282, New York, Springer–Verlag]. The number of bi-unitary divisors of n is a multiplicative function of n. A bi-unitary perfect number is one equal to the sum of its bi-unitary aliquot divisors. The only such numbers are 6, 60 and 90. OEIS sequences The On-Line Encyclopedia of Integer Sequences includes entries for σ*0(n), σ*1(n), σ*2(n) through σ*8(n), the number of unitary divisors, σ(o)*0(n), and σ(o)*1(n). References Section B3. Section 4.2 External links Mathoverflow | Boolean ring of unitary divisors Number theory
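A short self-contained sketch of the definition and the 2^k counting property described above, computed by brute force; the function names are my own and the approach is for illustration rather than efficiency.

```python
from math import gcd

def unitary_divisors(n: int) -> list[int]:
    """All d with d | n and gcd(d, n/d) == 1."""
    return [d for d in range(1, n + 1) if n % d == 0 and gcd(d, n // d) == 1]

def distinct_prime_factors(n: int) -> int:
    """Count the distinct prime factors of n by trial division."""
    count, p = 0, 2
    while p * p <= n:
        if n % p == 0:
            count += 1
            while n % p == 0:
                n //= p
        p += 1
    return count + (1 if n > 1 else 0)

divs = unitary_divisors(60)
print(divs)                                           # [1, 3, 4, 5, 12, 15, 20, 60]
print(len(divs) == 2 ** distinct_prime_factors(60))   # True: 60 has 3 prime factors, 2^3 = 8
print(sum(divs))                                      # sigma*(60) = 120
```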
Unitary divisor
[ "Mathematics" ]
822
[ "Discrete mathematics", "Number theory" ]
5,488,499
https://en.wikipedia.org/wiki/Continuum%20function
In mathematics, the continuum function is κ ↦ 2^κ, i.e. raising 2 to the power of κ using cardinal exponentiation. Given a cardinal number, it is the cardinality of the power set of a set of the given cardinality. See also Continuum hypothesis Cardinality of the continuum Beth number Easton's theorem Gimel function Cardinal numbers References
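A compact restatement of the definition above in standard set-theoretic notation (the notation itself is assumed; nothing beyond the article's own definition is added).

```latex
% The continuum function sends a cardinal kappa to the cardinality of the
% power set of any set of that size.
\[
  \kappa \;\longmapsto\; 2^{\kappa} \;=\; \bigl|\mathcal{P}(S)\bigr|
  \qquad\text{where } |S| = \kappa .
\]
```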
Continuum function
[ "Mathematics" ]
71
[ "Cardinal numbers", "Mathematical objects", "Numbers", "Infinity" ]
5,488,986
https://en.wikipedia.org/wiki/Alphabetic%20principle
According to the alphabetic principle, letters and combinations of letters are the symbols used to represent the speech sounds of a language based on systematic and predictable relationships between written letters, symbols, and spoken words. The alphabetic principle is the foundation of any alphabetic writing system (such as the English variety of the Latin alphabet, one of the more common types of writing systems in use today). In the education field, it is known as the alphabetic code. Alphabetic writing systems that use an (in principle) almost perfectly phonemic orthography have a single letter (or digraph or, occasionally, trigraph) for each individual phoneme and a one-to-one correspondence between sounds and the letters that represent them, although predictable allophonic alternation is normally not shown. Such systems are used, for example, in the modern languages Serbo-Croatian (arguably, an example of perfect phonemic orthography), Macedonian, Estonian, Finnish, Italian, Romanian, Spanish, Georgian, Hungarian, Turkish, and Esperanto. The best cases have a straightforward spelling system, enabling a writer to predict the spelling of a word given its pronunciation and similarly enabling a reader to predict the pronunciation of a word given its spelling. Ancient languages with such almost perfectly phonemic writing systems include Avestic, Latin, Vedic, and Sanskrit (Devanāgarī—an abugida; see Vyakarana). On the other hand, French and English have a strong difference between sounds and symbols. The alphabetic principle is closely tied to phonics, as it is the systematic relationship between spoken words and their visual representation (letters). The alphabetic principle does not underlie logographic writing systems like Chinese or syllabic writing systems such as Japanese kana. Korean was formerly written partially with Chinese characters, but is now written in the fully alphabetic Hangul system, in which the letters are not written linearly, but arranged in syllabic blocks which resemble Chinese characters. Latin alphabet Most orthographies that use the Latin writing system are imperfectly phonological and diverge from that ideal to a greater or lesser extent. This is because the ancient Romans designed the alphabet specifically for Latin. In the Middle Ages, it was adapted to the Romance languages, the direct descendants of Latin, as well as to the Celtic, Germanic, Baltic, and some Slavic languages, and finally to most of the languages of Europe. English orthography English orthography is based on the alphabetic principle, but the acquisition of sounds and spellings from a variety of languages and differential sound change within English have left Modern English spelling patterns confusing. Spelling patterns usually follow certain conventions but nearly every sound can be legitimately spelled with different letters or letter combinations. For example, the digraph ee almost always represents (feed), but in many varieties of English the same sound can also be represented by a single e (be), the letter y (fifty), by i (graffiti) or the digraphs ie (field), ei (deceit), ea (feat), ey (key), eo (people), oe (amoeba), ae (aeon), is (debris), it (esprit), ui (mosquito) or these letter patterns: ee-e (cheese), ea-e (leave), i-e (ravine), e-e (grebe), ea-ue (league), ei-e (deceive), ie-e (believe), i-ue (antique), eip (receipt). 
On the other hand, one symbol, such as the digraph th, can represent more than one phoneme: voiceless interdental /θ/ as in thin, voiced interdental /ð/ as in this, simple /t/ as in Thomas, or even the consonant cluster /tθ/ as in eighth. The spelling systems for some languages, such as Spanish or Italian, are relatively simple because they adhere closely to the ideal one-to-one correspondence between sounds and the letter patterns that represent them. In English the spelling system is more complex and varies considerably in the degree to which it follows uniform patterns. There are several reasons for this, including: first, the alphabet has 26 letters, but the English language has 40 sounds that must be reflected in word spellings; second, English spelling began to be standardized in the 15th century, and most spellings have not been revised to reflect the long-term changes in pronunciation that are typical for all languages; and third, English frequently adopts foreign words without changing the spelling of those words. Role in beginning reading Learning the connection between written letters and spoken sounds has been viewed as a critical heuristic to word identification for decades. Understanding that there is a direct relationship between letters and sounds enables an emergent reader to decode the pronunciation of an unknown written word and associate it with a known spoken word. Typically, emergent readers identify the majority of unfamiliar printed words by sounding them out. Similarly, understanding the relationship of letters and sounds is also seen as a critical heuristic for learning to spell. Two contrasting philosophies exist with regard to emergent readers learning to associate letters to speech sounds in English. Proponents of phonics argue that this relationship needs to be taught explicitly and to be learned to automaticity, in order to facilitate the rapid word recognition upon which comprehension depends. Others, including advocates of whole-language who hold that reading should be taught holistically, assert that children can naturally intuit the relationship between letters and sounds. This debate is often referred to as the reading wars. See also Dyslexia Orthographic depth Phonemic orthography Phonetic spelling Reading Synthetic phonics References Further reading Learning to read Orthography Symbols Phonics Writing Reading (process)
Alphabetic principle
[ "Mathematics" ]
1,206
[ "Symbols" ]
5,489,107
https://en.wikipedia.org/wiki/Coated%20paper
Coated paper (also known as enamel paper, gloss paper, and thin paper) is paper that has been coated by a mixture of materials or a polymer to impart certain qualities to the paper, including weight, surface gloss, smoothness, or reduced ink absorbency. Various materials, including kaolinite, calcium carbonate, bentonite, and talc, can be used to coat paper for high-quality printing used in the packaging industry and in magazines. The chalk or china clay is bound to the paper with synthetic binders, such as styrene-butadiene latexes, and natural organic binders such as starch. The coating formulation may also contain chemical additives such as dispersants, resins, or polyethylene to give water resistance and wet strength to the paper, or to protect against ultraviolet radiation. Coated papers have been traditionally used for printing magazines. Varieties Machine-finished coated paper Machine-finished coated paper (MFC) has a basis weight of 48–80 g/m2. These papers have good surface properties, high print gloss and adequate sheet stiffness. MFC papers are made of 60–85% groundwood or thermomechanical pulp (TMP) and 15–40% chemical pulp with a total pigment content of 20–30%. The paper can be soft nip calendered or supercalendered. These are often used in paperbacks. Coated fine paper Coated fine paper or woodfree coated paper (WFC) is primarily produced for offset printing: Standard coated fine papers This paper quality is normally used for advertising materials, books, annual reports and high-quality catalogs. Grammage ranges from 90–170 g/m2 and ISO brightness between 80–96%. The fibre furnish consists of more than 90% chemical pulp. Total pigment content is in the range 30–45%, where calcium carbonate and clay are the most common. Low coat weight papers These paper grades have lower coat weights than the standard WFC (3–14 g/m2/side) and the grammage and pigment content are also generally lower, 55–135 g/m2 and 20–35% respectively. Art papers Art papers are among the highest-quality printing papers and are used for illustrated books, calendars and brochures. The grammage varies from 100 to 230 g/m2. These paper grades are triple coated with 20–40 g/m2/side and have a matte or glossy finish. Higher qualities often contain cotton. Plastic coatings Plastic-coated paper includes several types of paper coatings: polyethylene or polyolefin extrusion coating, silicone, and wax coating, used to make paper cups and photographic paper. Biopolymer coatings are available as more sustainable alternatives to common petrochemical coatings like low-density polyethylene (LDPE) or mylar. Plastic-coated paper is most used in the food and drink packaging industry. The plastic is used to improve functions such as water resistance, tear strength, abrasion resistance, ability to be heat sealed, etc. Some papers are laminated by heat or adhesive to a plastic film to provide barrier properties in use. Other papers are coated with a melted plastic layer: curtain coating is one common method. Most plastic coatings in the packaging industry are polyethylene (LDPE) and to a much lesser degree PET. Liquid packaging board cartons typically contain 74% paper, 22% plastic and 4% aluminum. Frozen food cartons are usually made up of an 80% paper and 20% plastic combination.
The most notable applications for plastic-coated paper are single use (disposable food packaging): Liquid packaging board for milk and juice folding cartons Hot and cold paper cups Paper plates Frozen food containers Plastic-lined paper bags Take-out containers Waterproof paper (also multi-use) Heat sealable paper Barrier packaging Plastic coatings or layers usually make paper recycling more difficult. Some plastic laminations can be separated from the paper during the recycling process, allowing filtering out the film. If the coated paper is shredded prior to recycling, the degree of separation depends on the particular process. Some plastic coatings are water dispersible to aid recycling and repulping. Special recycling processes are available to help separate plastics. Some plastic coated papers are incinerated for heat or landfilled rather than recycled. Most plastic coated papers are not suited to composting, but do variously end up in compost bins, sometimes even legally so. In this case, the remains of the non-biodegradable plastics components form part of the global microplastics waste problem. Others Printed papers commonly have a top coat of a protective polymer to seal the print, provide scuff resistance, and sometimes gloss. Some coatings are processed by UV curing for stability. A release liner is a paper (or film) sheet used to prevent a sticky surface from adhering. It is coated on one or both sides with a release agent. Heat printed papers such as receipts are coated with a chemical mixture, which often contains estrogenic and carcinogenic poisons, such as bisphenol A (BPA). It is possible to check whether a piece of paper is thermographically coated, as it will turn black from friction or heat. (see Thermal paper) Paper labels are often coated with adhesive (pressure sensitive or gummed) on one side and coated with printing or graphics on the other. See also Printing References Further reading Soroka, W, "Fundamentals of Packaging Technology", IoPP, 2002, Yam, K. L., "Encyclopedia of Packaging Technology", John Wiley & Sons, 2009, External links Chemical processes Composite materials Environmental impact of products Packaging materials Paper Papermaking Plastics and the environment
Coated paper
[ "Physics", "Chemistry" ]
1,216
[ "Composite materials", "Chemical processes", "Materials", "nan", "Chemical process engineering", "Matter" ]
5,489,476
https://en.wikipedia.org/wiki/Socket%20M
Socket M (mPGA478MT) is a CPU interface introduced by Intel in 2006 for the Intel Core line of mobile processors. Technical specifications Socket M is used in all Intel Core products, as well as the Core-derived Dual-Core Xeon codenamed Sossaman. It was also used in the first generation of the mobile version of Intel's Core 2 Duo, specifically the T5x00 and T7x00 Merom lines (referred to as Napa Refresh), though that line switched to Socket P (Santa Rosa) in 2007. It typically uses the Intel 945PM/945GM chipsets, which support up to a 667 MHz FSB, and the Intel PM965/GM965, which allows 800 MHz FSB support, though the Socket M and PM965/GM965 combination is less common. The "Sossaman" Xeons use the E7520 chipset. Relation to other sockets Socket M is pin-compatible with desktop socket mPGA478A but it is not electrically compatible. Socket M is not pin-compatible with the older desktop Socket 478 (mPGA478B) or the newer mobile Socket P (mPGA478MN), differing from each by the location of one pin; it is also incompatible with most versions of the older mobile Socket 479. Pentium III-M processors designed for the first version of Socket 479 will physically fit into a Socket M, but are electrically incompatible with it. Although conflicting information has been published, no 45 nm Penryn processors have been released for Socket M. See also List of Intel microprocessors References Intel CPU sockets
Socket M
[ "Technology" ]
339
[ "Computing stubs", "Computer hardware stubs" ]
5,489,569
https://en.wikipedia.org/wiki/Threshold%20energy
In particle physics, the threshold energy for production of a particle is the minimum kinetic energy that must be imparted to one of a pair of particles in order for their collision to produce a given result. If the desired result is to produce a third particle then the threshold energy is greater than or equal to the rest energy of the desired particle. In most cases, since momentum is also conserved, the threshold energy is significantly greater than the rest energy of the desired particle. The threshold energy should not be confused with the threshold displacement energy, which is the minimum energy needed to permanently displace an atom in a crystal to produce a crystal defect in radiation material science. Example of pion creation Consider the collision of a mobile proton with a stationary proton so that a neutral pion is produced: p + p → p + p + π0. We can calculate the minimum energy that the moving proton must have in order to create a pion. Transforming into the ZMF (Zero Momentum Frame or Center of Mass Frame) and assuming the outgoing particles have no KE (kinetic energy) when viewed in the ZMF, the conservation of energy equation is 2γ′ m_p c² = 2 m_p c² + m_π c², which rearranges to γ′ = 1 + m_π/(2 m_p). By assuming that the outgoing particles have no KE in the ZMF, we have effectively considered an inelastic collision in which the product particles move with a combined momentum equal to that of the incoming proton in the Lab Frame. Here γ′ is the Lorentz factor of each proton as viewed in the ZMF, where the two protons approach each other symmetrically. Using relativistic velocity addition, the Lab-Frame speed of the incoming proton is obtained by compounding the ZMF speed with itself; we know that the ZMF moves relative to the Lab Frame with the speed of one proton as viewed in the ZMF, so we can re-write the Lab-Frame Lorentz factor as γ = 2γ′² − 1. So the energy of the proton must be E = γ m_p c² = (2γ′² − 1) m_p c² ≈ 1218 MeV. Therefore, the minimum kinetic energy for the proton must be T = E − m_p c² ≈ 280 MeV. Example of antiproton creation At higher energy, the same collision can produce an antiproton: p + p → p + p + p + p̄. If one of the two initial protons is stationary, we find that the impinging proton must be given at least 6 m_p c² of kinetic energy, that is, 5.63 GeV. On the other hand, if both protons are accelerated one towards the other (in a collider) with equal energies, then each needs to be given only m_p c² of kinetic energy, about 0.94 GeV. A more general example Consider the case where a particle 1 with lab energy E1 (momentum p1) and mass m1 impinges on a target particle 2 at rest in the lab, i.e. with lab energy m2c² and mass m2. The threshold energy to produce three particles of masses m3, m4, m5, i.e. the reaction 1 + 2 → 3 + 4 + 5, is then found by assuming that these three particles are at rest in the center of mass frame (symbols with a hat indicate quantities in the center of mass frame): at threshold, √s = (m3 + m4 + m5)c². Here √s is the total energy available in the center of mass frame. Using s = (E1 + m2c²)² − (p1c)², and E1² − (p1c)² = m1²c⁴, one derives that E1 = [(m3 + m4 + m5)² − m1² − m2²] c⁴ / (2 m2 c²). References http://galileo.phys.virginia.edu/classes/252/particle_creation.html Energy (physics) Particle physics
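A quick numerical check of the thresholds discussed above, using the frame-invariant quantity s = (ΣE)² − (Σpc)². The rounded mass values and the function name are illustrative assumptions.

```python
M_P, M_PI0 = 938.272, 134.977   # proton and neutral-pion masses, MeV/c^2 (rounded)

def threshold_kinetic_energy(m_beam, m_target, product_masses):
    """Minimum beam kinetic energy (MeV) for beam + fixed target -> products."""
    m_sum = sum(product_masses)
    e_beam = (m_sum**2 - m_beam**2 - m_target**2) / (2 * m_target)  # total beam energy
    return e_beam - m_beam                                          # kinetic part only

# p + p -> p + p + pi0
print(threshold_kinetic_energy(M_P, M_P, [M_P, M_P, M_PI0]))   # ~279.6 MeV (~280 MeV)

# p + p -> p + p + p + pbar  (antiproton production)
print(threshold_kinetic_energy(M_P, M_P, [M_P] * 4))           # ~5630 MeV = 6 m_p c^2
```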
Threshold energy
[ "Physics", "Mathematics" ]
575
[ "Physical quantities", "Quantity", "Energy (physics)", "Particle physics", "Particle physics stubs", "Wikipedia categories named after physical quantities" ]
5,489,639
https://en.wikipedia.org/wiki/Available%20energy%20%28particle%20collision%29
In particle physics, the available energy is the energy in a particle collision available to produce new particles from the energy of the colliding particles. In early accelerators both colliding particles usually survived after the collision, so the available energy was the total kinetic energy of the colliding particles in the center-of-momentum frame before the collision. In modern accelerators particles collide with their anti-particles and can annihilate, so the available energy includes both the kinetic energy and the rest energy of the colliding particles in the center-of-momentum frame before the collision. See also Threshold energy Matter creation References Particle physics
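A brief sketch contrasting the available energy √s in a fixed-target collision with that of a symmetric head-on collider at the same beam energy, which is the point made above about modern accelerators. The proton–antiproton case, the 100 GeV beam energy, and the rounded mass are illustrative assumptions.

```python
from math import sqrt

M_P = 0.938272          # proton / antiproton mass, GeV/c^2 (rounded)
E_BEAM = 100.0          # total energy per beam particle, GeV

# Fixed target: s = m1^2 c^4 + m2^2 c^4 + 2 * E_beam * m_target c^2  (units of GeV^2 here)
s_fixed = 2 * M_P**2 + 2 * E_BEAM * M_P
# Head-on collider with equal energies: sqrt(s) is simply the sum of the two energies
s_collider = (2 * E_BEAM) ** 2

print(f"fixed target: sqrt(s) = {sqrt(s_fixed):6.1f} GeV available")   # ~13.8 GeV
print(f"collider    : sqrt(s) = {sqrt(s_collider):6.1f} GeV available") # 200.0 GeV
# Nearly all of the collider's 2 * E_BEAM is available to produce new particles.
```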
Available energy (particle collision)
[ "Physics" ]
132
[ "Particle physics stubs", "Particle physics" ]
5,489,731
https://en.wikipedia.org/wiki/Reflex%20hammer
A reflex hammer is a medical instrument used by practitioners to test deep tendon reflexes, the best known possibly being the patellar reflex. Testing for reflexes is an important part of the neurological physical examination in order to detect abnormalities in the central or peripheral nervous system. Reflex hammers can also be used for chest percussion. Models of reflex hammer Prior to the development of specialized reflex hammers, hammers specific for percussion of the chest were used to elicit reflexes. However, this proved to be cumbersome, as the weight of the chest percussion hammer was insufficient to generate an adequate stimulus for a reflex. Starting in the late 19th century, several models of specific reflex hammers were created: The Taylor or tomahawk reflex hammer was designed by John Madison Taylor in 1888 and is the most well known reflex hammer in the USA. It consists of a triangular rubber component which is attached to a flat metallic handle. The traditional Taylor hammer is significantly lighter in weight when compared to the heavier European hammers. The Queen Square reflex hammer was designed for use at the National Hospital for Nervous Diseases (now the National Hospital for Neurology and Neurosurgery) in Queen Square, London. It was originally made with a bamboo or cane handle of varying length, of average 25 to 40 centimetres (10 to 16 inches), attached to a 5-centimetre (2-inch) metal disk with a plastic bumper. The Queen Square hammer is also now made with plastic molds, and often has a sharp tapered end to allow for testing of plantar reflexes though this is no longer recommended due to tightened infection control. It is the reflex hammer of choice of the UK neurologist. The Babinski reflex hammer was designed by Joseph Babiński in 1912 and is similar to the Queen Square hammer, except that it has a metallic handle that is often detachable. Babinski hammers can also be telescoping, allowing for compact storage. Babinski's hammer was popularized in clinical use in America by the neurologist Abraham Rabiner, who was given the instrument as a peace offering by Babinski after the two brawled at a black tie affair in Vienna. The Trömner reflex hammer was designed by Ernst Trömner. This model is shaped like a two-headed mallet. The larger mallet is used to elicit tendon stretch reflexes, and the smaller mallet is used to elicit percussion myotonia. Other reflex hammer types include the Buck, Berliner and Stookey reflex hammers. There are numerous models available from various commercial sources. Method of use The strength of a reflex is used to gauge central and peripheral nervous system disorders, with the former resulting in hyperreflexia, or exaggerated reflexes, and the latter resulting in hyporeflexia or diminished reflexes. However, the strength of the stimulus used to extract the reflex also affects the magnitude of the reflex. Attempts have been made to determine the force required to elicit a reflex, but vary depending on the hammer used, and are difficult to quantify. The Taylor hammer is usually held at the end by the physician, and the entire device is swung in an arc-like motion onto the tendon in question. The Queen Square and Babinski hammers are usually held perpendicular to the tendon in question, and are passively swung with gravity assistance onto the tendon. The Jendrassik maneuver, which entails interlocking of flexed fingers to distract a patient and prime the reflex response, can also be used to accentuate reflexes. 
In cases of hyperreflexia, the physician may place his finger on top of the tendon, and tap the finger with the hammer. Sometimes a reflex hammer may not be necessary to elicit hyperreflexia, with finger tapping over the tendon being sufficient as a stimulus. See also Physical examination Neurology References Hammers History of neurology Medical equipment 1888 introductions
Reflex hammer
[ "Biology" ]
812
[ "Medical equipment", "Medical technology" ]
5,490,772
https://en.wikipedia.org/wiki/Octadecyltrichlorosilane
Octadecyltrichlorosilane (ODTS or n-octadecyltrichlorosilane) is an organosilicon compound with the formula CH3(CH2)17SiCl3. A colorless liquid, it is used as a silanization agent to prepare hydrophobic stationary phases for reversed-phase chromatography. It is also evaluated for forming self-assembled monolayers on silicon dioxide substrates. Its structural chemical formula is CH3(CH2)17SiCl3. It is flammable and hydrolyzes readily with release of hydrogen chloride. Dodecyltrichlorosilane, an ODTS analog with a shorter alkyl chain, is used for the same purpose. ODTS-PVP films are used in organic-substrate LCD displays. References Organochlorosilanes Thin films
Octadecyltrichlorosilane
[ "Materials_science", "Mathematics", "Engineering" ]
169
[ "Nanotechnology", "Planes (geometry)", "Thin films", "Materials science" ]
5,490,892
https://en.wikipedia.org/wiki/18D/Perrine%E2%80%93Mrkos
18D/Perrine–Mrkos is a periodic comet in the Solar System, originally discovered by the American-Argentine astronomer Charles Dillon Perrine (Lick Observatory, California, United States) on December 9, 1896. For some time it was thought to be a fragment of Biela's Comet. It was considered lost after the 1909 appearance, but was rediscovered by the Czech astronomer Antonín Mrkos (Skalnate Pleso Observatory, Slovakia) on October 19, 1955, using ordinary binoculars; it was later confirmed as 18D by Leland E. Cunningham (Leuschner Observatory, University of California, Berkeley). The comet was last observed during its 1968 perihelion passage. The comet has not been observed during the following perihelion passages: 1975 Aug. 2 1982 May 16 1989 Feb. 28 1995 Dec. 6 (apmag 19?) 2002 Sept. 10 (apmag 20?) 2009 Apr. 17 (apmag 24?) 2017 Feb. 26 (apmag 24?) The next predicted perihelion passage would be on 2025-Jan-01, but the comet is currently considered lost as it has not been seen since Jan 1969. References External links Orbital simulation from JPL (Java) / Horizons Ephemeris 18D at Kronk's Cometography 18D at Kazuo Kinoshita's Comets 18D at Seiichi Yoshida's Comet Catalog NK 835 18D/Perrine-Mrkos – Syuichi Nakano (2002) Periodic comets Lost comets 018P 0018 18961209 19551019 Recovered astronomical objects
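A rough sketch of the mean orbital period implied by the predicted perihelion passages listed above (1975 Aug. 2 through 2025 Jan. 1). The dates are taken directly from the text; the simple average is for illustration only and ignores orbital perturbations.

```python
from datetime import date

passages = [
    date(1975, 8, 2), date(1982, 5, 16), date(1989, 2, 28), date(1995, 12, 6),
    date(2002, 9, 10), date(2009, 4, 17), date(2017, 2, 26), date(2025, 1, 1),
]
gaps_years = [(b - a).days / 365.25 for a, b in zip(passages, passages[1:])]
print(f"mean gap between passages: {sum(gaps_years) / len(gaps_years):.2f} years")  # ~7.1 years
```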
18D/Perrine–Mrkos
[ "Astronomy" ]
343
[ "Recovered astronomical objects", "Astronomical objects", "Astronomy stubs", "Comet stubs" ]
5,490,978
https://en.wikipedia.org/wiki/Art%20toys
Art toys, also called designer toys, are toys and collectibles created by artists and designers that are either self-produced or made by small, independent toy companies, typically in very limited editions. Artists use a variety of materials, such as ABS plastic, vinyl, wood, metal, latex, plush, and resin. Creators often have backgrounds in graphic design, illustration, or fine art, but many accomplished toy artists are self-taught. The first art toys appeared in the 1990s in Hong Kong and Japan. By the early 2000s, the majority of art toys were based upon characters created by popular Lowbrow artists, linking the two movements. In his book Vinyl Will Kill!, illustrator Jeremyville, in Sydney, claims that the cultural phenomenon of designer toys began when Hong Kong–based artist Michael Lau took his customized G.I. Joe figures to a local toy show. He had reworked them "into urban hip-hop characters, wearing cool streetwear labels and accessories." Initially known as "urban vinyl", the accepted term soon became "designer toys". Overview A typical example of designer toys are the Qee series, produced in Hong Kong by Toy2R. The standard size of Qee figures is 2" high, but 8" and 16" figures are also produced. Qees vary in their design, usually with the same basic body type, but with head sculpts that may be of a bear, a cat, a dog, a monkey, or a rabbit. Variations of the Qee are the Toyer with a head that resembles a cartoon skull; the Knuckle Bear, which was created by Japanese character designer Touma, and resembles a graffiti-style caricature of an anthropomorphized bear; and the Qee Egg, a bird's egg with arms and legs. Blank Qees are produced in 2" and 8" sizes; these figures may be of any Qee sculpt, but are packaged unpainted, as do-it-yourself pieces. Each piece is designed by an artist and carries its own aesthetic theme. Each 2" figure is packaged with an optional keychain attachment. Another example of designer toys is the Dunny series, produced by the American company Kidrobot. Dunny figures may be considered the Western counterpart of the Chinese Qee and the Japanese Bearbrick. Dunny are a series of figures that resemble anthropomorphized rabbits in a cartoon style (a design originally illustrated by graffiti, stencil, and comic artists) which are produced as 3", 8", and 20" figures. There is a variation of the Dunny figure called a Munny, which resembles a monkey, and is only sold as an unpainted do-it-yourself piece. Some creators of designer toys are Hong Kong–based Michael Lau, credited with the establishment of the Urban Vinyl movement; Devilrobots, a five-person design team from Japan, known for their television character named TO-FU Oyako; Mexican artists Carlos & Ernesto East "The Beast Brothers" which are known for their Dia de muertos and Aztec influences; American concert poster artist Frank Kozik's Mongers series and Labbit character; and British illustrator James Jarvis's cast of characters, produced as vinyl figures of varying sizes. Most vinyl toys are produced in factories in China, though some designers have shifted production to Japan where higher quality materials such as clear vinyl are used. Designer toys are rarely produced in the United States due to environmental restrictions on vinyl production; some exceptions are resin and plush toys. Urban vinyl Urban vinyl is a type of designer toy, featuring action figures in particular which are usually made of vinyl. 
Although the term is sometimes used interchangeably with the term designer toy, it is more accurately used as a modifier: not all designer toys can be considered urban vinyl, while urban vinyl figures are necessarily designer toys, by virtue of how they are produced. Like designer toys in general, urban vinyl figures feature original designs, small production numbers, and are marketed to collectors, predominantly adults. What sets "urban" designer toys apart from general "designer" or "art" toys is the subject matter: anything dealing with graffiti, hip-hop, rap or other subjects typically tied to an urban environment. The urban vinyl trend was initiated by artists Eric So and Michael Lau, who first created urban vinyl figures in Hong Kong in the late 1990s. Early designer toy creators include New York artist Ron English. Other creators of urban vinyl figures are Japanese artist and designer Takashi Murakami, whose work has been exhibited in the Museum of Contemporary Art, Tokyo, and the Museum of Fine Arts, Boston; Australian designer Nathan Jurevicius, whose Scarygirl is based on characters from his comic of the same name and produced in conjunction with Hong Kong company Flyingcat; and former graffiti artist KAWS. Urban vinyl figures are designed primarily by illustrators, graffiti artists, musicians and DJs from urban areas in Asia (especially Japan and Hong Kong), North America (especially the United States), and Europe. An offshoot of hip hop and youth-oriented popular culture, urban vinyl often depicts real-life figures from Asian and American culture, particularly artists who perform in hip-hop or related styles. Two examples are Lau's depiction of the LMF rappers from Hong Kong, and figures based on the members of the virtual electronic band Gorillaz, produced by Jamie Hewlett and made by Kidrobot. Urban vinyl is commonly designated as either Eastern Vinyl, including anything designed and produced in Asia or Australia, or Western Vinyl, encompassing pieces which are designed and produced in North America, South America, or Europe. Urban vinyl figures have become collectible items. Rare pieces may sell for hundreds or even thousands of dollars. Resin toys Some artists create their toys using synthetic resin material and resin casting. After casting, the resin toy receives adjustments to its details, as some parts may be only superficially cast. The toy can be finished with aerosol automotive paint and sometimes receives a layer of varnish over the dried paint. The process of making resin toys is more labor-intensive and time-consuming than industrialized vinyl toy production, in which toys are in most cases made identical and in large quantities. But resin casting allows artists to produce toys in small numbers. Most vinyl factories will only produce toys in large series. Resin toys have become a way for less established artists to produce a toy without the large financial investment required to produce a vinyl toy. Unlike most vinyl toys, resin toys are usually sculpted, cast, and painted by a single artist, as shown in the book "We Are Indie Toys", published by HarperCollins. Designer plush Designer plush, a subcategory of designer toys, consists of soft, stuffed dolls created in limited quantities by artists and designers. Common designs include anthropomorphized animals or fantastic human likenesses, although designer plush dolls often feature entirely unique character designs.
Designer plush dolls are usually given names and personas, with their distinctive personalities described on their tags or in booklets included in their packaging. One producer of designer plush is Friends With You, a commercial art and design collective based in Miami, Florida. Their work is characterized by a cute yet bizarre aesthetic, and exhibits a hand-made quality, even in pieces that are machine-made. In addition to their plush dolls, Friends With You also create modular wood toys, and motion graphics for companies such as Sony, MTV, Nike, and Columbia Records. Another type of designer plush is Uglydolls, created by independent toy designers David Horvath and Sun-Min Kim under the label Pretty Ugly. Their first products were 12" plush dolls based on drawings by David Horvath, and handmade by Sun-Min Kim. Pretty Ugly has also produced 7" versions of their character designs, called Little Uglys, 24" versions called Giant Uglys, 4" miniatures intended for use as keychains, and in a departure from the plush medium, 7" Vinyl Uglys. Ugly Dolls have been featured in motion pictures such as Zathura and are sold in major specialty stores. Designer consumer electronics A recent addition to the world of vinyl toys, designer consumer electronics is a subcategory started by mimoco, whose mimobot Art Toy Flash Drives proved that there was a market for Urban Vinyl products with a purpose. Like other platform art toys, there have been mimobot designs by a wide variety of artists, including Mori Chack, Sket One and Jon Burgerman. The appeal of designer consumer electronics in flash memory form tends to be the addition of a digital canvas, allowing affiliated artists to create much more in-depth characters, complete with animations, music, etc. Designer consumer electronics are not limited to mimobots; in 2007, Medicom brought out a Be@rbrick USB flash drive. See also Disney Vinylmation Gashapon Model figure Treeson Kidrobot Kaws References Design Toy collecting
Art toys
[ "Engineering" ]
1,831
[ "Design" ]
5,491,211
https://en.wikipedia.org/wiki/Ball%20%28association%20football%29
A football or soccer ball is the ball used in the sport of association football. The ball's spherical shape, as well as its size, weight, mass, and material composition, is specified by Law 2 of the Laws of the Game maintained by the International Football Association Board. Additional, more stringent standards are specified by FIFA and other major governing bodies for the balls used in the competitions they sanction. Early footballs began as animal bladders or stomachs that would easily fall apart if kicked too much. Improvements became possible in the 19th century with the introduction of rubber and discoveries of vulcanization by Charles Goodyear. The modern 32-panel ball design was developed in 1962 by Eigil Nielsen, and technological research continues to develop footballs with improved performance. The 32-panel ball design was soon joined by 24-panel balls as well as 42-panel balls, both of which improved on performance prior to 2007. A black-and-white patterned spherical truncated icosahedron design, brought to prominence by the Adidas Telstar, has become a symbol of association football. Many different designs of balls exist, varying both in appearance and physical characteristics. History First years of football codes In the year 1863, the first specifications for footballs were set by the Football Association. Previous to this, footballs were made out of inflated animal bladders, with later leather coverings to help footballs maintain their shapes and sizes. In 1872, the specifications were revised and have been kept essentially unchanged by the International Football Association Board. Differences in footballs made since this rule came into effect have been with the material used to create them. Footballs have dramatically changed over time. During medieval times balls were normally made from an outer shell of leather filled with cork shavings. Another method of creating a ball was to use an animal bladder, which made the ball inflatable. However, these two styles of footballs were easy to puncture and were inadequate for kicking. It was not until the 19th century that footballs developed a more modern appearance. Vulcanization In 1838, Charles Goodyear introduced vulcanized rubber, which dramatically improved football. Vulcanization is the treatment of rubber to give it certain qualities such as strength, elasticity, and resistance to solvents. Vulcanization of rubber also helps the football resist moderate heat and cold. Vulcanization helped create inflatable bladders that pressurize the outer panel arrangement of the football. Charles Goodyear's innovation improved the ball's bounce and made it easier to kick. Most balls of this time had tanned leather with eighteen sections stitched together. These were arranged in six panels of three strips each. Reasons for improvement During the 1900s, footballs were made out of leather with a lace of the same material used to stitch the panels. Although leather was well suited to bouncing and kicking the ball, heading the football (hitting it with the player's head) was usually painful. This problem was most probably due to water absorption of the leather from rain, which caused a considerable increase in weight, causing head or neck injury. By around 2017, this had also been associated with dementia in former players. Another problem of early footballs was that they deteriorated quickly, as the leather used in manufacturing varied in thickness and in quality.
The ball without the leather lace was developed and patented by Romano Polo, Antonio Tossolini and Juan Valbonesi in 1931 in Bell Ville, Córdoba Province, Argentina. This innovative ball (named Superball) was adopted by the Argentine Football Association as the official ball for its competitions from 1932. Latest developments The deformation of the football when it is kicked or when the ball hits a surface is tested. Two styles of footballs have been tested by the Sports Technology Research Group of Wolfson School of Mechanical and Manufacturing Engineering in Loughborough University; these two models are called the Basic FE model and the Developed FE model of the football. The basic model considered the ball as a spherical shell with isotropic material properties. The developed model also used isotropic material properties but included an additional stiffer stitching seam region. Manufacturers are experimenting with microchips and even cameras embedded inside the ball. Microchip technology was considered for goal-line technology. The ball used in the 2018 FIFA World Cup in Russia had an embedded chip which did not provide any measurements, but provided 'user experience' via smartphone after connecting with the ball via NFC. Future developments Companies such as Umbro, Mitre, Adidas, Nike, Select and Puma are releasing footballs made out of new materials which are intended to provide more accurate flight and more power to be transferred to the football. Specification Construction Modern footballs are more complex than past footballs. Most footballs consist of twelve regular pentagonal and twenty regular hexagonal panels positioned in a truncated icosahedron spherical geometry. Some premium-grade 32-panel balls use non-regular polygons to give a closer approximation to sphericality. The inside of the football is made up of a latex or butyl rubber bladder which enables the football to be pressurised. The ball's outside is made of leather, synthetic leather, polyurethane or PVC panels. The surface can be textured, woven or embossed for greater control and touch. The panel pairs are either machine-stitched, hand-stitched or thermo-bonded (glued and bonded by heat) along the edge. To prevent water absorption, balls may be specially coated, or the stitches bonded with glue. The size of a football is roughly 22 centimetres (8.7 in) in diameter for a regulation size 5 ball. Rules state that a size 5 ball must be 68–70 centimetres (27–28 in) in circumference. Averaging that to 69 cm (27 in) and then dividing by π gives a diameter of about 22 cm (8.7 in). Size and weight Regulation size and weight for a football is a circumference of 68–70 centimetres (27–28 in) and a weight of 410–450 grams (14–16 oz). The ball is inflated to a pressure of 0.6 to 1.1 atmospheres (600–1,100 g/cm²) at sea level. This is known as "Size 5". Smaller balls, Sizes 1, 3, and 4, are also produced for younger players or as training tools. Following consultations with football associations, clubs and ball manufacturers, FIFA has developed non-compulsory recommendations for appropriate sizes, circumferences and weights of balls for different age groups of youth football. Types of ball There are different types of football balls depending on the match and playing surface, including training footballs, match footballs, professional match footballs, beach footballs, street footballs, indoor footballs, turf balls, futsal footballs and mini/skills footballs. Professional/premium match footballs are developed with top professional clubs to maximize players' natural abilities and skills.
They are FIFA-approved for use at the highest professional and international levels and designed for performance, exact specifications, great accuracy, speed and control. Air retention, water-resistance, and performance are far superior when compared to a training ball. They are intended for all natural and artificial turf surfaces and all climates. These are the most expensive footballs. Matchday footballs are high performance range of balls for all playing surfaces. The outer casing is either leather or an approved synthetic and it will typically be water-resistant as well. They are guaranteed to conform to official size, weight, and texture regulations, designed to suit all levels of play and all age groups. These balls cost more than turf or training balls, which is offset by their superior level of quality. Recreational/practice/training footballs are tough and highly durable balls for extended use. Made of robust materials for use on all playing surfaces and used by players at any level. Practice balls are the least expensive balls when compared with match type footballs. Turf balls are specifically designed to work on artificial surfaces that mimic grass. They are durable and reasonably affordable, but tend to skip more when used on a natural pitch. Promotional balls are usually made to promote a name brand, organization or event. Indoor footballs come in the same sizes as outdoor soccer balls but are designed to have less bounce and rebound in them, making it possible to control the ball on a smaller court or indoor arena. The cover of an indoor ball is also the strongest of any category, so it can withstand the hard rebound impact on the court flooring and wall surfaces. Futsal footballs differ from typical footballs in that the bladder is filled with foam. That makes the ball heavier and with less bounce for use on the hard futsal playing surface. A futsal football is smaller in size to a football used on the football pitch. Suppliers Many companies throughout the world produce footballs. The earliest balls were made by local suppliers where the game was played. It is estimated that 70% of all footballs are made in Sialkot, Pakistan with other major producers being China and India. As a response to the problems with the balls in the 1962 FIFA World Cup, Adidas created the Adidas Santiago – this led to Adidas winning the contract to supply the match balls for all official FIFA and UEFA matches, which they have held since the 1970s, and also for the Olympic Games. They also supply the ball for the UEFA Champions League which is called the Adidas Finale. FIFA World Cup In early FIFA World Cups, match balls were mostly provided by the hosts from local suppliers. Records indicate a variety of models being used within individual tournaments and even, on some occasions, individual games. Over time, FIFA took more control over the choice of ball used. Since 1970 Adidas have supplied official match balls (all of which were made in Sialkot, Pakistan) for every tournament. League balls The most up-to-date balls used in various club football competitions as of 2024–25 season are: Unicode The association football symbol () was introduced by computing standard Unicode. The symbol was representable in HTML as or . The addition of this symbol follows a 2008 proposal by Karl Pentzlin. 
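As an illustration of the code point and HTML entities mentioned in the Unicode paragraph above — assuming the symbol in question is U+26BD SOCCER BALL, which is the usual code point for this character — a short sketch:

```python
# Assumed code point for the association football symbol: U+26BD.
ball = "\u26bd"

print(ball)                       # the symbol itself
print(f"U+{ord(ball):04X}")       # U+26BD
print(f"&#{ord(ball)};")          # decimal HTML entity: &#9917;
print(f"&#x{ord(ball):X};")       # hexadecimal HTML entity: &#x26BD;
```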
See also Ball (gridiron football) Football (ball) Notes References External links New York Times interactive feature on the evolution of the world cup ball Association football equipment Laws of association football Inflatable manufactured goods Spherical objects
Ball (association football)
[ "Physics" ]
2,040
[ "Spherical objects", "Physical objects", "Matter" ]
5,491,283
https://en.wikipedia.org/wiki/Sealant
Sealant is a substance used to block the passage of fluids through openings in materials, a type of mechanical seal. In building construction, sealant is sometimes synonymous with caulk (especially if acrylic latex or polyurethane based) and also serves to block dust, sound and heat transmission. Sealants may be weak or strong, flexible or rigid, permanent or temporary. Sealants are not adhesives, but some have adhesive qualities and are called adhesive-sealants or structural sealants. History Sealants were first used in prehistory in the broadest sense as mud, grass and reeds to seal dwellings from the weather, such as the daub in wattle and daub and thatching. Natural sealants and adhesive-sealants included plant resins such as pine pitch and birch pitch, bitumen, wax, tar, natural gum, clay (mud) mortar, lime mortar, lead, blood and egg. In the 17th century, glazing putty made with linseed oil and chalk was first used to seal window glass; later, other drying oils were also used to make oil-based putties. In the 1920s, polymers such as acrylic polymers, butyl polymers and silicone polymers were first developed and used in sealants. By the 1960s, synthetic-polymer-based sealants were widely available. Function Sealants, despite not having great strength, convey a number of properties. They seal top structures to the substrate, and are particularly effective in waterproofing processes by keeping moisture out of (or in) the components in which they are used. They can provide thermal and acoustical insulation, and may serve as fire barriers. They may have electrical properties as well. Sealants can also be used for simple smoothing or filling. They are often called upon to perform several of these functions at once. A caulking sealant has three basic functions: it fills a gap between two or more substrates; it forms a barrier through the physical properties of the sealant itself and by adhesion to the substrate; and it maintains sealing properties for the expected lifetime, service conditions, and environments. The sealant performs these functions by way of correct formulation to achieve specific application and performance properties. Other than adhesives, however, there are few functional alternatives to the sealing process. Soldering or welding can perhaps be used as alternatives in certain instances, depending on the substrates and the relative movement that the substrates will see in service. However, the simplicity and reliability offered by organic elastomers usually make them the clear choice for performing these functions. Types A sealant may be a viscous material that has little or no flow and stays where it is applied, or it may be thin and runny so that it penetrates the substrate by means of capillary action. Anaerobic acrylic sealants (generally referred to as impregnants) are the most desirable, as they cure in the absence of air, unlike surface sealants, which require air as part of their cure mechanism. Once applied, a sealant changes state to become solid and is used to prevent the penetration of air, gas, noise, dust, fire, smoke, or liquid from one location through a barrier into another. Typically, sealants are used to close small openings that are difficult to shut with other materials, such as concrete, drywall, etc. Desirable properties of sealants include insolubility, corrosion resistance, and adhesion. 
Uses of sealants vary widely and sealants are used in many industries, for example, construction, automotive and aerospace industries. Sealants can be categorized in accordance with varying criteria, e. g. in accordance with the reactivity of the product in the ready-to-use condition or on the basis of its mechanical behavior after installation. Often the intended use or the chemical basis is used to classify sealants, too. A typical classification system for most commonly used sealants is shown below. Types of sealants fall between the higher-strength, adhesive-derived sealers and coatings at one end, and extremely low-strength putties, waxes, and caulks at the other. Putties and caulks serve only one function – i.e., to take up space and fill voids. Sealants may be based on silicone. Other common types of sealants: Common areas of use Aerospace sealants Firewall Sealants – a two-component, firewall sealant intended for use as a coating, sealant or filleting material in the construction, repair and maintenance of aircraft and is especially useful where fire resistance, exposure to phosphate ester fluids, and/or exposure to extreme temperatures, −65 °F (−54 °C) to 400 °F (204 °C) are major considerations. Fuel Tank Sealants – High-temperature fuel resistant sealant intended for use on integral fuel tanks with excellent resistance to other fluids such as water, alcohols, synthetic oils and petroleum-based hydraulic fluids Access Door Sealants – Access door sealant intended for use on integral fuel tanks and pressurized cabins with low adhesion characteristics and excellent resistance to other fluids such as water, alcohols, synthetic oils and petroleum based hydraulic fluids. Windshield Sealant – demonstrated to be a useful sealant in a variety of applications where quick setting is desired, for example, windshield sealants, repair caulks, adhesives, etc. Comparison with adhesives The main difference between adhesives and sealants is that sealants typically have lower strength and higher elongation than adhesives do. When sealants are used between substrates having different thermal coefficients of expansion or differing elongation under stress, they need to have adequate flexibility and elongation. Sealants generally contain inert filler material and are usually formulated with an elastomer to give the required flexibility and elongation. They usually have a paste consistency to allow filling of gaps between substrates. Low shrinkage after application is often required. Sealants also typically require a sufficient compression set, especially when the sealant is a foam gasket. Many adhesive technologies can be formulated into sealants. References Seals (mechanical) Materials
Sealant
[ "Physics" ]
1,266
[ "Seals (mechanical)", "Materials", "Matter" ]
5,491,440
https://en.wikipedia.org/wiki/Multi-scale%20approaches
The scale space representation of a signal obtained by Gaussian smoothing satisfies a number of special properties, scale-space axioms, which make it into a special form of multi-scale representation. There are, however, also other types of "multi-scale approaches" in the areas of computer vision, image processing and signal processing, in particular the notion of wavelets. The purpose of this article is to describe a few of these approaches: Scale-space theory for one-dimensional signals For one-dimensional signals, there exists quite a well-developed theory for continuous and discrete kernels that guarantee that new local extrema or zero-crossings cannot be created by a convolution operation. For continuous signals, it holds that all scale-space kernels can be decomposed into the following sets of primitive smoothing kernels: the Gaussian kernel : where , truncated exponential kernels (filters with one real pole in the s-plane): if and 0 otherwise where if and 0 otherwise where , translations, rescalings. For discrete signals, we can, up to trivial translations and rescalings, decompose any discrete scale-space kernel into the following primitive operations: the discrete Gaussian kernel where where are the modified Bessel functions of integer order, generalized binomial kernels corresponding to linear smoothing of the form where where , first-order recursive filters corresponding to linear smoothing of the form where where , the one-sided Poisson kernel for where for where . From this classification, it is apparent that if we require a continuous semi-group structure, there are only three classes of scale-space kernels with a continuous scale parameter: the Gaussian kernel, which forms the scale-space of continuous signals; the discrete Gaussian kernel, which forms the scale-space of discrete signals; and the time-causal Poisson kernel, which forms a temporal scale-space over discrete time. If, on the other hand, we sacrifice the continuous semi-group structure, there are more options: For discrete signals, the use of generalized binomial kernels provides a formal basis for defining the smoothing operation in a pyramid. For temporal data, the one-sided truncated exponential kernels and the first-order recursive filters provide a way to define time-causal scale-spaces that allow for efficient numerical implementation and respect causality over time without access to the future. The first-order recursive filters also provide a framework for defining recursive approximations to the Gaussian kernel that, in a weaker sense, preserve some of the scale-space properties. See also Scale space Scale space implementation Scale-space segmentation References Image processing Computer vision
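For concreteness, the kernels named in the article above are usually written in the following forms; the normalizations shown here are the ones commonly used in scale-space theory and should be read as assumptions rather than as this article's exact definitions.

```latex
% Continuous Gaussian kernel with scale parameter t > 0:
g(x;t) = \frac{1}{\sqrt{2\pi t}}\, e^{-x^{2}/(2t)}

% Truncated exponential kernels (one real pole in the s-plane), \lambda > 0:
h_{+}(x) = \begin{cases} e^{-\lambda x}, & x \ge 0 \\ 0, & x < 0 \end{cases}
\qquad
h_{-}(x) = \begin{cases} e^{\lambda x}, & x \le 0 \\ 0, & x > 0 \end{cases}

% Discrete Gaussian kernel, with I_n the modified Bessel functions of integer order:
T(n;t) = e^{-t} I_{n}(t)

% One-sided (time-causal) Poisson kernel:
p(n;t) = e^{-t}\, \frac{t^{\,n}}{n!}, \qquad n \ge 0
```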
Multi-scale approaches
[ "Engineering" ]
543
[ "Artificial intelligence engineering", "Packaging machinery", "Computer vision" ]
44,958
https://en.wikipedia.org/wiki/Water%20tower
A water tower is an elevated structure supporting a water tank constructed at a height sufficient to pressurize a distribution system for potable water, and to provide emergency storage for fire protection. Water towers often operate in conjunction with underground or surface service reservoirs, which store treated water close to where it will be used. Other types of water towers may only store raw (non-potable) water for fire protection or industrial purposes, and may not necessarily be connected to a public water supply. Water towers are able to supply water even during power outages, because they rely on hydrostatic pressure produced by elevation of water (due to gravity) to push the water into domestic and industrial water distribution systems; however, they cannot supply the water for a long time without power, because a pump is typically required to refill the tower. A water tower also serves as a reservoir to help with water needs during peak usage times. The water level in the tower typically falls during the peak usage hours of the day, and then a pump fills it back up during the night. This process also keeps the water from freezing in cold weather, since the tower is constantly being drained and refilled. History Although the use of elevated water storage tanks has existed since ancient times in various forms, the modern use of water towers for pressurized public water systems developed during the mid-19th century, as steam-pumping became more common, and better pipes that could handle higher pressures were developed. In the United Kingdom, standpipes consisted of tall, exposed, N-shaped pipes, used for pressure relief and to provide a fixed elevation for steam-driven pumping engines which tended to produce a pulsing flow, while the pressurized water distribution system required constant pressure. Standpipes also provided a convenient fixed location to measure flow rates. Designers typically enclosed the riser pipes in decorative masonry or wooden structures. By the late 19th century, standpipes grew to include storage tanks to meet the ever-increasing demands of growing cities. Many early water towers are now considered historically significant and have been included in various heritage listings around the world. Some are converted to apartments or exclusive penthouses. In certain areas, such as New York City in the United States, smaller water towers are constructed for individual buildings. In California and some other states, domestic water towers enclosed by siding (tankhouses) were once built (1850s–1930s) to supply individual homes; windmills pumped water from hand-dug wells up into the tank in New York. Water towers were used to supply water stops for steam locomotives on railroad lines. Early steam locomotives required water stops every . Design and construction A variety of materials can be used to construct a typical water tower; steel and reinforced or prestressed concrete are most often used (with wood, fiberglass, or brick also in use), incorporating an interior coating to protect the water from any effects from the lining material. The reservoir in the tower may be spherical, cylindrical, or an ellipsoid, with a minimum height of approximately and a minimum of in diameter. A standard water tower typically has a height of approximately . Pressurization occurs through the hydrostatic pressure of the elevation of water; for every of elevation, it produces of pressure. 
of elevation produces roughly , which is enough pressure to operate and provide for most domestic water pressure and distribution system requirements. The height of the tower provides the pressure for the water supply system, and it may be supplemented with a pump. The volume of the reservoir and diameter of the piping provide and sustain flow rate. However, relying on a pump to provide pressure is expensive; to keep up with varying demand, the pump would have to be sized to meet peak demands. During periods of low demand, jockey pumps are used to meet these lower water flow requirements. The water tower reduces the need for electrical consumption of cycling pumps and thus the need for an expensive pump control system, as this system would have to be sized sufficiently to give the same pressure at high flow rates. Very high volumes and flow rates are needed when fighting fires. With a water tower present, pumps can be sized for average demand, not peak demand; the water tower can provide water pressure during the day and pumps will refill the water tower when demands are lower. Using wireless sensor networks to monitor water levels inside the tower allows municipalities to automatically monitor and control pumps without installing and maintaining expensive data cables. Architecture The adjacent image shows three architectural approaches to incorporating these tanks in the design of a building, one on East 57th Street in New York City. From left to right, a fully enclosed and ornately decorated brick structure, a simple unadorned roofless brick structure hiding most of the tank but revealing the top of the tank, and a simple utilitarian structure that makes no effort to hide the tanks or otherwise incorporate them into the design of the building. The technology dates to at least the 19th century, and for a long time New York City required that all buildings higher than six stories be equipped with a rooftop water tower. Two companies in New York build water towers, both of which are family businesses in operation since the 19th century. The original water tower builders were barrel makers who expanded their craft to meet a modern need as buildings in the city grew taller in height. Even today, no sealant is used to hold the water in. The wooden walls of the water tower are held together with steel cables or straps, but water leaks through the gaps when first filled. As the water saturates the wood, it swells, the gaps close and become impermeable. The rooftop water towers store of water until it is needed in the building below. The upper portion of water is skimmed off the top for everyday use while the water in the bottom of the tower is held in reserve to fight fire. When the water drops below a certain level, a pressure switch, level switch or float valve will activate a pump or open a public water line to refill the water tower. Architects and builders have taken varied approaches to incorporating water towers into the design of their buildings. On many large commercial buildings, water towers are completely hidden behind an extension of the facade of the building. For cosmetic reasons, apartment buildings often enclose their tanks in rooftop structures, either simple unadorned rooftop boxes, or ornately decorated structures intended to enhance the visual appeal of the building. Many buildings, however, leave their water towers in plain view atop utilitarian framework structures. Water towers are common in India, where the electricity supply is erratic in most places. 
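The elevation-to-pressure relationship described earlier in this section is just the hydrostatic head, P = ρgh. The sketch below illustrates it; the example heights are assumptions chosen for illustration, not figures from the article.

```python
# Hydrostatic pressure produced by a column of water: P = rho * g * h.
RHO_WATER = 1000.0   # kg/m^3
G = 9.81             # m/s^2

def head_pressure_kpa(height_m: float) -> float:
    """Gauge pressure in kPa produced by a water column of the given height."""
    return RHO_WATER * G * height_m / 1000.0

# Example heights (assumed for illustration only).
for h in (10, 20, 40):
    p_kpa = head_pressure_kpa(h)
    p_psi = p_kpa * 0.145038
    print(f"{h:>3} m of water -> {p_kpa:6.1f} kPa ({p_psi:5.1f} psi)")
```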
If the pumps fail (such as during a power outage), then water pressure will be lost, causing potential public health concerns. Many U.S. states require a "boil-water advisory" to be issued if water pressure drops below . This advisory presumes that the lower pressure might allow pathogens to enter the system. Some have been converted to serve modern purposes, as for example the Wieża Ciśnień (Wrocław water tower) in Wrocław, Poland, which is today a restaurant complex. Others have been converted to residential use. Historically, railroads that used steam locomotives required a means of replenishing the locomotives' tenders. Water towers were common along the railroad. The tenders were usually replenished by water cranes, which were fed by a water tower. Some water towers are also used as observation towers, and some house restaurants, such as the Goldbergturm in Sindelfingen, Germany, or the second of the three Kuwait Towers, in the State of Kuwait. It is also common to use water towers as sites for low-power transmitters in the UHF range, for instance for closed rural broadcasting service, amateur radio, or cellular telephone service. In hilly regions, local topography can be substituted for structures to elevate the tanks. These tanks are often nothing more than concrete cisterns terraced into the sides of local hills or mountains, but function identically to the traditional water tower. The tops of these tanks can be landscaped or used as park space, if desired. Spheres and spheroids The Chicago Bridge and Iron Company has built many of the water spheres and spheroids found in the United States. The website World's Tallest Water Sphere describes the distinction between a water sphere and water spheroid thus: The Union Watersphere is a water tower topped with a sphere-shaped water tank in Union, New Jersey, and characterized as the World's Tallest Water Sphere. A Star-Ledger article suggested that a water tower in Erwin, North Carolina, completed in early 2012, tall and holding , had become the World's Tallest Water Sphere. However, photographs of the Erwin water tower revealed the new tower to be a water spheroid. The water tower in Braman, Oklahoma, built by the Kaw Nation and completed in 2010, is tall and can hold . Slightly taller than the Union Watersphere, it is also a spheroid. Another tower in Oklahoma, built in 1986 and billed as the "largest water tower in the country", is tall, can hold , and is located in Edmond. The Earthoid, a perfectly spherical tank located in Germantown, Maryland, is tall and holds of water. The name is taken from it being painted to resemble a globe of the world. The golf ball-shaped tank of the water tower at Gonzales, California is supported by three tubular legs and reaches about high. The Watertoren (or Water Towers) in Eindhoven, Netherlands, completed in 1970, consist of three spherical tanks on three spires, each in diameter and capable of holding of water. Decoration Water towers can be surrounded by ornate coverings, including fancy brickwork or a large ivy-covered trellis, or they can be simply painted. Some city water towers have the name of the city painted in large letters on the roof, as a navigational aid to aviators and motorists. Sometimes the decoration can be humorous. An example of this is a pair of water towers built side by side, labeled HOT and COLD. Cities in the United States possessing side-by-side water towers labeled HOT and COLD include Granger, Iowa; Canton, Kansas; Pratt, Kansas; and St. Clair, Missouri. 
Eveleth, Minnesota at one time had two such towers, but no longer does. Many small towns in the United States use their water towers to advertise local tourism, their local high school sports teams, or other locally notable facts. A "mushroom" water tower was built in Örebro, Sweden and holds almost two million gallons of water. Tallest Alternatives Alternatives to water towers are simple pumps mounted on top of the water pipes to increase the water pressure. This new approach is more straightforward, but also more subject to potential public health risks; if the pumps fail, then loss of water pressure may result in entry of contaminants into the water system. Most large water utilities do not use this approach, given the potential risks. Examples Australia Bankstown Reservoir, Sydney Austria Wasserturm Amstetten (Water tower with transmission antenna) Belgium Mechelen-Zuid Watertoren Brazil Nave Espacial de Varginha in Varginha Canada Guaranteed Pure Milk bottle in Montreal, Quebec Croatia Vukovar water tower in Vukovar. Denmark Svaneke water tower Finland Mustankallio water tower in Lahti Germany Lüneburg Water Tower Heidelberg TV Tower (TV tower with water reservoir) Mannheim Water Tower (built 1886–1889) Kuwait Kuwait Towers, which include two water reservoirs, and Kuwait Water Towers (Mushroom towers in Kuwait City. India Tala tank in Kolkata Italy Ginosa Water Tower, tall Netherlands Amsterdamsestraatweg Water Tower in Utrecht Eindhoven Water Towers in Eindhoven Poldertoren in Emmeloord Water Tower Simpelveld in Simpelveld Water Tower Hellevoetsluis in Hellevoetsluis Poland Wrocław Water Tower Old Water Tower, Bydgoszcz Romania Fabric Water Tower Iosefin Water Tower Oltenița Water Tower Turnu Măgurele Water Tower Slovakia Water Tower in Komárno Water Tower in Trnava Slovenia Brežice Water Tower in Brežice Sweden Vanadislundens water reservoir (Stockholm) United Kingdom Cardiff Central Station Water Tower Dock Tower in Grimsby House in the Clouds in Thorpeness, Suffolk Jumbo in Colchester, Essex Norton Water Tower in Norton, Cheshire Tilehurst Water Tower in Reading Tower Park in Poole, Dorset Cranhill, Garthamlock and Drumchapel in Glasgow, and Tannochside just outside the city United States Brooks Catsup Bottle Water Tower near Collinsville, Illinois Chicago Water Tower in Chicago, Illinois Florence Y'all Water Tower in Florence, Kentucky Lawson Tower in Scituate, Massachusetts Leaning Water Tower in Groom, Texas North Point Water Tower in Milwaukee, Wisconsin Peachoid next to I-85 on the edge of Gaffney, South Carolina Show Place Arena water tower in Upper Marlboro, Maryland Union Watersphere in Union Township, New Jersey Volunteer Park Water Tower in Capitol Hill, Seattle, Washington Warner Bros. Water Tower in Burbank, California (In the animated TV series Animaniacs, it was used to incarcerate the characters Yakko, Wakko, and Dot, as well as to serve as their home.) Weehawken Water Tower in Weehawken, New Jersey Ypsilanti Water Tower in Ypsilanti, Michigan (Winner of the Most Phallic Building contest in 2003) Standpipe A standpipe is a water tower which is cylindrical (or nearly cylindrical) throughout its whole height, rather than an elevated tank on supports with a narrower pipe leading to and from the ground. 
There were originally over 400 standpipe water towers in the United States, but very few remain today, including: Addison Standpipe, in Addison, Michigan Belton Standpipe in Belton, South Carolina (also in Allendale and Walterboro) Belton Standpipe in Belton, Texas Bellevue Standpipe (actually a water tank, not a tower), in Boston, Massachusetts Chicago Water Tower, in Chicago, Illinois Cochituate standpipe, in Boston, Massachusetts Craig, Nebraska standpipe Eden Park Stand Pipe, in Cincinnati Evansville Standpipe (a steel tower), in Evansville, Wisconsin Fall River Waterworks, in Fall River, Massachusetts Forbes Hill Standpipe, in Quincy, Massachusetts Louisville Water Tower, in Louisville, Kentucky North Point Water Tower, in Milwaukee, Wisconsin Reading Standpipe (demolished in 1999 and replaced by a modern steel tower), in Reading, Massachusetts Roxbury High Fort contains the Cochituate Standpipe St. Louis, Missouri has three standpipe water towers which are on the National Register of Historic Places. Bissell Tower (also known as the Red Tower) Compton Hill Tower Grand Avenue Water Tower Thomas Hill Standpipe, in Bangor, Maine Ypsilanti Water Tower, in Ypsilanti, Michigan Bremen Water Tower, in Bremen, Indiana Gallery See also Architectural structure List of nonbuilding structure types American and Canadian Water Landmark Caldwell Tanks Gas holder, a similar utility storage structure Hyperboloid structure Pittsburgh-Des Moines Steel Co. Pumped-storage hydroelectricity Water tank References External links International Watertower Archive Website about 1000 watertowers from Poland Towers
Water tower
[ "Engineering" ]
3,094
[ "Structural engineering", "Towers" ]
44,961
https://en.wikipedia.org/wiki/U.S.%20National%20Ice%20Center
The U.S. National Ice Center (USNIC) is a tri-agency operational center and echelon V command of the Naval Oceanographic Office (NAVOCEANO), whose mission is to provide worldwide navigational ice analyses for the armed forces of the United States, allied nations, and U.S. government agencies. It is represented by the United States Navy (Department of Defense); the National Oceanic and Atmospheric Administration (Department of Commerce); and the United States Coast Guard (Department of Homeland Security). Originally known as the Navy/NOAA Joint Ice Center, established on December 15, 1976, under a memorandum of agreement between the U.S. Navy and NOAA, the National Ice Center was formed in 1995 when the U.S. Coast Guard became a partner. The U.S. National Ice Center produces global sea ice charts and various cryospheric GIS products. It also names and tracks Antarctic icebergs that measure at least 19 kilometers along their longest axis. See also International Ice Charting Working Group International Ice Patrol References External links United States Coast Guard National Oceanic and Atmospheric Administration United States Navy Government agencies established in 1995 Ice in transportation 1995 establishments in the United States
U.S. National Ice Center
[ "Physics" ]
289
[ "Physical systems", "Transport", "Ice in transportation" ]
44,987
https://en.wikipedia.org/wiki/Borel%E2%80%93Cantelli%20lemma
In probability theory, the Borel–Cantelli lemma is a theorem about sequences of events. In general, it is a result in measure theory. It is named after Émile Borel and Francesco Paolo Cantelli, who gave statement to the lemma in the first decades of the 20th century. A related result, sometimes called the second Borel–Cantelli lemma, is a partial converse of the first Borel–Cantelli lemma. The lemma states that, under certain conditions, an event will have probability of either zero or one. Accordingly, it is the best-known of a class of similar theorems, known as zero-one laws. Other examples include Kolmogorov's zero–one law and the Hewitt–Savage zero–one law. Statement of lemma for probability spaces Let E1, E2, ... be a sequence of events in some probability space. The Borel–Cantelli lemma states: Here, "lim sup" denotes limit supremum of the sequence of events, and each event is a set of outcomes. That is, lim sup En is the set of outcomes that occur infinitely many times within the infinite sequence of events (En). Explicitly, The set lim sup En is sometimes denoted {En i.o.}, where "i.o." stands for "infinitely often". The theorem therefore asserts that if the sum of the probabilities of the events En is finite, then the set of all outcomes that are "repeated" infinitely many times must occur with probability zero. Note that no assumption of independence is required. Example Suppose (Xn) is a sequence of random variables with Pr(Xn = 0) = 1/n2 for each n. The probability that Xn = 0 occurs for infinitely many n is equivalent to the probability of the intersection of infinitely many [Xn = 0] events. The intersection of infinitely many such events is a set of outcomes common to all of them. However, the sum ΣPr(Xn = 0) converges to and so the Borel–Cantelli Lemma states that the set of outcomes that are common to infinitely many such events occurs with probability zero. Hence, the probability of Xn = 0 occurring for infinitely many n is 0. Almost surely (i.e., with probability 1), Xn is nonzero for all but finitely many n. Proof Let (En) be a sequence of events in some probability space. The sequence of events is non-increasing: By continuity from above, By subadditivity, By original assumption, As the series converges, as required. General measure spaces For general measure spaces, the Borel–Cantelli lemma takes the following form: Converse result A related result, sometimes called the second Borel–Cantelli lemma, is a partial converse of the first Borel–Cantelli lemma. The lemma states: If the events En are independent and the sum of the probabilities of the En diverges to infinity, then the probability that infinitely many of them occur is 1. That is: The assumption of independence can be weakened to pairwise independence, but in that case the proof is more difficult. The infinite monkey theorem follows from this second lemma. Example The lemma can be applied to give a covering theorem in Rn. Specifically , if Ej is a collection of Lebesgue measurable subsets of a compact set in Rn such that then there is a sequence Fj of translates such that apart from a set of measure zero. Proof Suppose that and the events are independent. It is sufficient to show the event that the En's did not occur for infinitely many values of n has probability 0. This is just to say that it is sufficient to show that Noting that: it is enough to show: . Since the are independent: The convergence test for infinite products guarantees that the product above is 0, if diverges. This completes the proof. 
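For reference, the two lemmas discussed above, together with the convergent series from the example, can be written out as follows; these are the standard textbook forms and are given here as such rather than as this article's exact wording.

```latex
% First Borel–Cantelli lemma:
\sum_{n=1}^{\infty} \Pr(E_n) < \infty
\;\Longrightarrow\;
\Pr\!\Big(\limsup_{n\to\infty} E_n\Big)
  = \Pr\!\Big(\bigcap_{N=1}^{\infty}\,\bigcup_{n\ge N} E_n\Big) = 0

% Second Borel–Cantelli lemma (for independent events E_n):
\sum_{n=1}^{\infty} \Pr(E_n) = \infty
\;\Longrightarrow\;
\Pr\!\Big(\limsup_{n\to\infty} E_n\Big) = 1

% Series appearing in the example with \Pr(X_n = 0) = 1/n^2:
\sum_{n=1}^{\infty} \frac{1}{n^{2}} = \frac{\pi^{2}}{6} < \infty
```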
Counterpart Another related result is the so-called counterpart of the Borel–Cantelli lemma. It is a counterpart of the Lemma in the sense that it gives a necessary and sufficient condition for the limsup to be 1 by replacing the independence assumption by the completely different assumption that is monotone increasing for sufficiently large indices. This Lemma says: Let be such that , and let denote the complement of . Then the probability of infinitely many occur (that is, at least one occurs) is one if and only if there exists a strictly increasing sequence of positive integers such that This simple result can be useful in problems such as for instance those involving hitting probabilities for stochastic process with the choice of the sequence usually being the essence. Kochen–Stone Let be a sequence of events with and Then there is a positive probability that occur infinitely often. Proof Let . Then, note that and Hence, we know that We have that Now, notice that by the Cauchy-Schwarz Inequality, therefore, We then have Given , since , we can find large enough so that for any given . Therefore, But the left side is precisely the probability that the occur infinitely often since We're done now, since we've shown that See also Lévy's zero–one law Kuratowski convergence Infinite monkey theorem References . . . Durrett, Rick. "Probability: Theory and Examples." Duxbury advanced series, Third Edition, Thomson Brooks/Cole, 2005. External links Planet Math Proof Refer for a simple proof of the Borel Cantelli Lemma Theorems in measure theory Probability theorems Covering lemmas Lemmas
Borel–Cantelli lemma
[ "Mathematics" ]
1,163
[ "Theorems in mathematical analysis", "Theorems in measure theory", "Covering lemmas", "Theorems in probability theory", "Mathematical problems", "Mathematical theorems", "Lemmas" ]
45,022
https://en.wikipedia.org/wiki/Natural%20transformation
In category theory, a branch of mathematics, a natural transformation provides a way of transforming one functor into another while respecting the internal structure (i.e., the composition of morphisms) of the categories involved. Hence, a natural transformation can be considered to be a "morphism of functors". Informally, the notion of a natural transformation states that a particular map between functors can be done consistently over an entire category. Indeed, this intuition can be formalized to define so-called functor categories. Natural transformations are, after categories and functors, one of the most fundamental notions of category theory and consequently appear in the majority of its applications. Definition If and are functors between the categories and (both from to ), then a natural transformation from to is a family of morphisms that satisfies two requirements. The natural transformation must associate, to every object in , a morphism between objects of . The morphism is called the component of at . Components must be such that for every morphism in we have: The last equation can conveniently be expressed by the commutative diagram If both and are contravariant, the vertical arrows in the right diagram are reversed. If is a natural transformation from to , we also write or . This is also expressed by saying the family of morphisms is natural in . If, for every object in , the morphism is an isomorphism in , then is said to be a (or sometimes natural equivalence or isomorphism of functors). Two functors and are called naturally isomorphic or simply isomorphic if there exists a natural isomorphism from to . An infranatural transformation from to is simply a family of morphisms , for all in . Thus a natural transformation is an infranatural transformation for which for every morphism . The naturalizer of , nat, is the largest subcategory of containing all the objects of on which restricts to a natural transformation. Examples Opposite group Statements such as "Every group is naturally isomorphic to its opposite group" abound in modern mathematics. We will now give the precise meaning of this statement as well as its proof. Consider the category of all groups with group homomorphisms as morphisms. If is a group, we define its opposite group as follows: is the same set as , and the operation is defined by . All multiplications in are thus "turned around". Forming the opposite group becomes a (covariant) functor from to if we define for any group homomorphism . Note that is indeed a group homomorphism from to : The content of the above statement is: "The identity functor is naturally isomorphic to the opposite functor " To prove this, we need to provide isomorphisms for every group , such that the above diagram commutes. Set . The formulas and show that is a group homomorphism with inverse . To prove the naturality, we start with a group homomorphism and show , i.e. for all in . This is true since and every group homomorphism has the property . Modules Let be an -module homomorphism of right modules. For every left module there is a natural map , form a natural transformation . For every right module there is a natural map defined by , form a natural transformation . Abelianization Given a group , we can define its abelianization . Let denote the projection map onto the cosets of . This homomorphism is "natural in ", i.e., it defines a natural transformation, which we now check. Let be a group. 
For any homomorphism , we have that is contained in the kernel of , because any homomorphism into an abelian group kills the commutator subgroup. Then factors through as for the unique homomorphism . This makes a functor and a natural transformation, but not a natural isomorphism, from the identity functor to . Hurewicz homomorphism Functors and natural transformations abound in algebraic topology, with the Hurewicz homomorphisms serving as examples. For any pointed topological space and positive integer there exists a group homomorphism from the -th homotopy group of to the -th homology group of . Both and are functors from the category Top* of pointed topological spaces to the category Grp of groups, and is a natural transformation from to . Determinant Given commutative rings and with a ring homomorphism , the respective groups of invertible matrices and inherit a homomorphism which we denote by , obtained by applying to each matrix entry. Similarly, restricts to a group homomorphism , where denotes the group of units of . In fact, and are functors from the category of commutative rings to . The determinant on the group , denoted by , is a group homomorphism which is natural in : because the determinant is defined by the same formula for every ring, holds. This makes the determinant a natural transformation from to . Double dual of a vector space For example, if is a field, then for every vector space over we have a "natural" injective linear map from the vector space into its double dual. These maps are "natural" in the following sense: the double dual operation is a functor, and the maps are the components of a natural transformation from the identity functor to the double dual functor. Finite calculus For every abelian group , the set of functions from the integers to the underlying set of forms an abelian group under pointwise addition. (Here is the standard forgetful functor .) Given an morphism , the map given by left composing with the elements of the former is itself a homomorphism of abelian groups; in this way we obtain a functor . The finite difference operator taking each function to is a map from to itself, and the collection of such maps gives a natural transformation . Tensor-hom adjunction Consider the category of abelian groups and group homomorphisms. For all abelian groups , and we have a group isomorphism . These isomorphisms are "natural" in the sense that they define a natural transformation between the two involved functors . (Here "op" is the opposite category of , not to be confused with the trivial opposite group functor on !) This is formally the tensor-hom adjunction, and is an archetypal example of a pair of adjoint functors. Natural transformations arise frequently in conjunction with adjoint functors, and indeed, adjoint functors are defined by a certain natural isomorphism. Additionally, every pair of adjoint functors comes equipped with two natural transformations (generally not isomorphisms) called the unit and counit. Unnatural isomorphism The notion of a natural transformation is categorical, and states (informally) that a particular map between functors can be done consistently over an entire category. Informally, a particular map (esp. 
an isomorphism) between individual objects (not entire categories) is referred to as a "natural isomorphism", meaning implicitly that it is actually defined on the entire category, and defines a natural transformation of functors; formalizing this intuition was a motivating factor in the development of category theory. Conversely, a particular map between particular objects may be called an unnatural isomorphism (or "an isomorphism that is not natural") if the map cannot be extended to a natural transformation on the entire category. Given an object a functor (taking for simplicity the first functor to be the identity) and an isomorphism proof of unnaturality is most easily shown by giving an automorphism that does not commute with this isomorphism (so ). More strongly, if one wishes to prove that and are not naturally isomorphic, without reference to a particular isomorphism, this requires showing that for any isomorphism , there is some with which it does not commute; in some cases a single automorphism works for all candidate isomorphisms while in other cases one must show how to construct a different for each isomorphism. The maps of the category play a crucial role – any infranatural transform is natural if the only maps are the identity map, for instance. This is similar (but more categorical) to concepts in group theory or module theory, where a given decomposition of an object into a direct sum is "not natural", or rather "not unique", as automorphisms exist that do not preserve the direct sum decomposition – see for example. Some authors distinguish notationally, using for a natural isomorphism and for an unnatural isomorphism, reserving for equality (usually equality of maps). Example: fundamental group of torus As an example of the distinction between the functorial statement and individual objects, consider homotopy groups of a product space, specifically the fundamental group of the torus. The homotopy groups of a product space are naturally the product of the homotopy groups of the components, with the isomorphism given by projection onto the two factors, fundamentally because maps into a product space are exactly products of maps into the components – this is a functorial statement. However, the torus (which is abstractly a product of two circles) has fundamental group isomorphic to , but the splitting is not natural. Note the use of , , and : This abstract isomorphism with a product is not natural, as some isomorphisms of do not preserve the product: the self-homeomorphism of (thought of as the quotient space ) given by (geometrically a Dehn twist about one of the generating curves) acts as this matrix on (it's in the general linear group of invertible integer matrices), which does not preserve the decomposition as a product because it is not diagonal. However, if one is given the torus as a product – equivalently, given a decomposition of the space – then the splitting of the group follows from the general statement earlier. In categorical terms, the relevant category (preserving the structure of a product space) is "maps of product spaces, namely a pair of maps between the respective components". Naturality is a categorical notion, and requires being very precise about exactly what data is given – the torus as a space that happens to be a product (in the category of spaces and continuous maps) is different from the torus presented as a product (in the category of products of two spaces and continuous maps between the respective components). 
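The commuting condition invoked repeatedly above, and tested in the unnaturality arguments, is the naturality square for a transformation η: F ⇒ G between functors F, G: C → D; the display below is the standard form and is given here as such.

```latex
% For every morphism f : X -> Y in C, the components of \eta satisfy
\eta_Y \circ F(f) \;=\; G(f) \circ \eta_X
%
% i.e. the square
%     F(X) --F(f)--> F(Y)
%      |               |
%   \eta_X          \eta_Y
%      v               v
%     G(X) --G(f)--> G(Y)
% commutes.
```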
Example: dual of a finite-dimensional vector space Every finite-dimensional vector space is isomorphic to its dual space, but there may be many different isomorphisms between the two spaces. There is in general no natural isomorphism between a finite-dimensional vector space and its dual space. However, related categories (with additional structure and restrictions on the maps) do have a natural isomorphism, as described below. The dual space of a finite-dimensional vector space is again a finite-dimensional vector space of the same dimension, and these are thus isomorphic, since dimension is the only invariant of finite-dimensional vector spaces over a given field. However, in the absence of additional constraints (such as a requirement that maps preserve the chosen basis), the map from a space to its dual is not unique, and thus such an isomorphism requires a choice, and is "not natural". On the category of finite-dimensional vector spaces and linear maps, one can define an infranatural isomorphism from vector spaces to their dual by choosing an isomorphism for each space (say, by choosing a basis for every vector space and taking the corresponding isomorphism), but this will not define a natural transformation. Intuitively this is because it required a choice, rigorously because any such choice of isomorphisms will not commute with, say, the zero map; see for detailed discussion. Starting from finite-dimensional vector spaces (as objects) and the identity and dual functors, one can define a natural isomorphism, but this requires first adding additional structure, then restricting the maps from "all linear maps" to "linear maps that respect this structure". Explicitly, for each vector space, require that it comes with the data of an isomorphism to its dual, . In other words, take as objects vector spaces with a nondegenerate bilinear form . This defines an infranatural isomorphism (isomorphism for each object). One then restricts the maps to only those maps that commute with the isomorphisms: or in other words, preserve the bilinear form: . (These maps define the naturalizer of the isomorphisms.) The resulting category, with objects finite-dimensional vector spaces with a nondegenerate bilinear form, and maps linear transforms that respect the bilinear form, by construction has a natural isomorphism from the identity to the dual (each space has an isomorphism to its dual, and the maps in the category are required to commute). Viewed in this light, this construction (add transforms for each object, restrict maps to commute with these) is completely general, and does not depend on any particular properties of vector spaces. In this category (finite-dimensional vector spaces with a nondegenerate bilinear form, maps linear transforms that respect the bilinear form), the dual of a map between vector spaces can be identified as a transpose. Often for reasons of geometric interest this is specialized to a subcategory, by requiring that the nondegenerate bilinear forms have additional properties, such as being symmetric (orthogonal matrices), symmetric and positive definite (inner product space), symmetric sesquilinear (Hermitian spaces), skew-symmetric and totally isotropic (symplectic vector space), etc. – in all these categories a vector space is naturally identified with its dual, by the nondegenerate bilinear form. Operations with natural transformations Vertical composition If and are natural transformations between functors , then we can compose them to get a natural transformation . 
This is done componentwise: . This vertical composition of natural transformations is associative and has an identity, and allows one to consider the collection of all functors itself as a category (see below under Functor categories). The identity natural transformation on functor has components . For , . Horizontal composition If is a natural transformation between functors and is a natural transformation between functors , then the composition of functors allows a composition of natural transformations with components . By using whiskering (see below), we can write , hence . This horizontal composition of natural transformations is also associative with identity. This identity is the identity natural transformation on the identity functor, i.e., the natural transformation that associate to each object its identity morphism: for object in category , . For with , . As identity functors and are functors, the identity for horizontal composition is also the identity for vertical composition, but not vice versa. Whiskering Whiskering is an external binary operation between a functor and a natural transformation. If is a natural transformation between functors , and is another functor, then we can form the natural transformation by defining . If on the other hand is a functor, the natural transformation is defined by . It's also an horizontal composition where one of the natural transformations is the identity natural transformation: and . Note that (resp. ) is generally not the left (resp. right) identity of horizontal composition ( and in general), except if (resp. ) is the identity functor of the category (resp. ). Interchange law The two operations are related by an identity which exchanges vertical composition with horizontal composition: if we have four natural transformations as shown on the image to the right, then the following identity holds: . Vertical and horizontal compositions are also linked through identity natural transformations: for and , . As whiskering is horizontal composition with an identity, the interchange law gives immediately the compact formulas of horizontal composition of and without having to analyze components and the commutative diagram: . Functor categories If is any category and is a small category, we can form the functor category having as objects all functors from to and as morphisms the natural transformations between those functors. This forms a category since for any functor there is an identity natural transformation (which assigns to every object the identity morphism on ) and the composition of two natural transformations (the "vertical composition" above) is again a natural transformation. The isomorphisms in are precisely the natural isomorphisms. That is, a natural transformation is a natural isomorphism if and only if there exists a natural transformation such that and . The functor category is especially useful if arises from a directed graph. For instance, if is the category of the directed graph , then has as objects the morphisms of , and a morphism between and in is a pair of morphisms and in such that the "square commutes", i.e. . More generally, one can build the 2-category whose 0-cells (objects) are the small categories, 1-cells (arrows) between two objects and are the functors from to , 2-cells between two 1-cells (functors) and are the natural transformations from to . The horizontal and vertical compositions are the compositions between natural transformations described previously. 
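In components, the operations discussed above take the following standard forms, writing ∘ for vertical composition and ∗ for horizontal composition; this notation is the usual convention and is assumed here rather than taken from the article.

```latex
% Vertical composition of \alpha : F => G and \beta : G => H (functors C -> D):
(\beta \circ \alpha)_X = \beta_X \circ \alpha_X

% Horizontal composition of \alpha : F => G (functors C -> D)
% and \alpha' : F' => G' (functors D -> E):
(\alpha' * \alpha)_X = \alpha'_{G(X)} \circ F'(\alpha_X) = G'(\alpha_X) \circ \alpha'_{F(X)}

% Interchange law:
(\beta' \circ \alpha') * (\beta \circ \alpha) = (\beta' * \beta) \circ (\alpha' * \alpha)
```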
A functor category is then simply a hom-category in this category (smallness issues aside). More examples Every limit and colimit provides an example for a simple natural transformation, as a cone amounts to a natural transformation with the diagonal functor as domain. Indeed, if limits and colimits are defined directly in terms of their universal property, they are universal morphisms in a functor category. Yoneda lemma If is an object of a locally small category , then the assignment defines a covariant functor . This functor is called representable (more generally, a representable functor is any functor naturally isomorphic to this functor for an appropriate choice of ). The natural transformations from a representable functor to an arbitrary functor are completely known and easy to describe; this is the content of the Yoneda lemma. Historical notes Saunders Mac Lane, one of the founders of category theory, is said to have remarked, "I didn't invent categories to study functors; I invented them to study natural transformations." Just as the study of groups is not complete without a study of homomorphisms, so the study of categories is not complete without the study of functors. The reason for Mac Lane's comment is that the study of functors is itself not complete without the study of natural transformations. The context of Mac Lane's remark was the axiomatic theory of homology. Different ways of constructing homology could be shown to coincide: for example in the case of a simplicial complex the groups defined directly would be isomorphic to those of the singular theory. What cannot easily be expressed without the language of natural transformations is how homology groups are compatible with morphisms between objects, and how two equivalent homology theories not only have the same homology groups, but also the same morphisms between those groups. See also Extranatural transformation Universal property Higher category theory Notes References . External links nLab, a wiki project on mathematics, physics and philosophy with emphasis on the n-categorical point of view J. Adamek, H. Herrlich, G. Strecker, Abstract and Concrete Categories-The Joy of Cats Stanford Encyclopedia of Philosophy: "Category Theory"—by Jean-Pierre Marquis. Extensive bibliography. Baez, John, 1996,"The Tale of n-categories." An informal introduction to higher categories. Functors
Natural transformation
[ "Mathematics" ]
4,062
[ "Functions and mappings", "Mathematical structures", "Mathematical objects", "Mathematical relations", "Functors", "Category theory" ]
45,053
https://en.wikipedia.org/wiki/Georges%20Perec
Georges Perec (7 March 1936 – 3 March 1982) was a French novelist, filmmaker, documentalist, and essayist. He was a member of the Oulipo group. His father died as a soldier early in the Second World War and his mother was killed in the Holocaust. Many of his works deal with absence, loss, and identity, often through word play. Early life Born in a working-class district of Paris, Perec was the only son of Icek Judko and Cyrla (Schulewicz) Peretz, Polish Jews who had emigrated to France in the 1920s. He was a distant relative of the Yiddish writer Isaac Leib Peretz. Perec's father, who enlisted in the French Army during World War II, died in 1940 from untreated gunfire or shrapnel wounds, and his mother was killed in the Holocaust, probably in Auschwitz sometime after 1943. Perec was taken into the care of his paternal aunt and uncle in 1942, and in 1945, he was formally adopted by them. Career Perec started writing reviews and essays for La Nouvelle Revue française and Les Lettres nouvelles, prominent literary publications, while studying history and sociology at the Sorbonne. In 1958/59 Perec served in the French army as a paratrooper; he married Paulette Petras after being discharged. They spent one year (1960/1961) in Sfax, Tunisia, where Paulette worked as a teacher; these experiences are reflected in Things: A Story of the Sixties, which is about a young Parisian couple who also spend a year in Sfax. In 1961 Perec began working as an archivist in the research library of the Neurophysiological Research Laboratory, a unit funded by the CNRS and attached to the Hôpital Saint-Antoine in Paris, a low-paid position which he retained until 1978. A few reviewers have noted that the daily handling of records and varied data may have influenced his literary style. In any case, Perec's work on the reassessment of the academic journals under subscription was influenced by a talk about the handling of scientific information given by Eugene Garfield in Paris, and he was introduced to Marshall McLuhan by Jean Duvignaud. Perec's other major influence was the Oulipo, which he joined in 1967, meeting Raymond Queneau, among others. Perec dedicated his masterpiece, La Vie mode d'emploi (Life: A User's Manual), to Queneau, who died before it was published. Perec began working on a series of radio plays with his translator Eugen Helmle and the musician in the late 60s; less than a decade later, he was making films. His first cinematic work, based on his novel Un homme qui dort, was co-directed by Bernard Queysanne and won the feature-film Prix Jean Vigo in 1974. Perec also created crossword puzzles for Le Point from 1976 on. La Vie mode d'emploi (1978) brought Perec some financial and critical success—it won the Prix Médicis—and allowed him to turn to writing full-time. He was a writer-in-residence at the University of Queensland in Australia in 1981, during which time he worked on 53 Jours (53 Days), which remained unfinished. Shortly after his return from Australia, his health deteriorated. A heavy smoker, he was diagnosed with lung cancer. He died the following year in Ivry-sur-Seine at age 45, four days shy of his 46th birthday; his ashes are held at the columbarium of the Père Lachaise Cemetery. 
Perec's most famous novel La Vie mode d'emploi (Life A User's Manual) was published in 1978. Its title page describes it as "novels", in the plural, the reasons for which become apparent on reading. La Vie mode d'emploi is a tapestry of interwoven stories and ideas as well as literary and historical allusions, based on the lives of the inhabitants of a fictitious Parisian apartment block. It was written according to a complex plan of writing constraints and is primarily constructed from several elements, each adding a layer of complexity. The 99 chapters of his 600-page novel move like a knight's tour of a chessboard around the room plan of the building, describing the rooms and stairwell and telling the stories of the inhabitants. At the end, it is revealed that the whole book actually takes place in a single moment, with a final twist that is an example of "cosmic irony". It was translated into English by David Bellos in 1987. Perec is noted for his constrained writing. His 300-page novel La disparition (1969) is a lipogram, written with natural sentence structure and correct grammar, but using only words that do not contain the letter "e". It has been translated into English by Gilbert Adair under the title A Void (1994). His novella Les revenentes (1972) is a complementary univocalic piece in which the letter "e" is the only vowel used. This constraint affects even the title, which would conventionally be spelt Revenantes. An English translation by Ian Monk was published in 1996 as The Exeter Text: Jewels, Secrets, Sex in the collection Three. It has been remarked by Jacques Roubaud that these two novels draw words from two disjoint sets of the French language, and that a third novel would be possible, made from the words not used so far (those containing both "e" and a vowel other than "e"). W ou le souvenir d'enfance, (W, or the Memory of Childhood, 1975) is a semi-autobiographical work that is hard to classify. Two alternating narratives make up the volume: The first is a fictional outline of a remote island country called "W", which at first appears to be a utopian society modelled on the Olympic ideal but is gradually exposed as a horrifying, totalitarian prison much like a concentration camp. The second is a description of Perec's childhood during and after World War II. Both narratives converge towards the end, highlighting the common theme of the Holocaust. "Cantatrix sopranica L. Scientific Papers" is a spoof scientific paper detailing experiments on the "yelling reaction" provoked in sopranos by pelting them with rotten tomatoes. All references in the paper are multi-lingual puns and jokes; e.g., "(Karybb & Szyla, 1973)". David Bellos, who has translated several of Perec's works, wrote an extensive biography of Perec entitled Georges Perec: A Life in Words, which won the Académie Goncourt's bourse for biography in 1994. The Association Georges Perec has extensive archives on the author in Paris. In 1992 Perec's initially rejected novel Gaspard pas mort (Gaspard not dead), believed to be lost, was found by David Bellos amongst papers in the house of Perec's friend . The novel was reworked several times and retitled and published in 2012; its English translation by Bellos followed in 2014 as Portrait of a Man after the 1475 painting of that name by Antonello da Messina. 
The initial title borrows the name Gaspard from the Paul Verlaine poem "Gaspar Hauser Chante" (inspired by Kaspar Hauser, from the 1881 collection Sagesse) and characters named "Gaspard" appear in both W, or the Memory of Childhood and Life: A User's Manual, while in MICRO-TRADUCTIONS, 15 variations discrètes sur un poème connu he creatively re-writes the Verlaine poem fifteen times. Honours Asteroid no. 2817, discovered in 1982, was named after Perec. In 1994, a street in the 20th arrondissement of Paris was named after him, . The French postal service issued a stamp in 2002 in his honour; it was designed by Marc Taraskoff and engraved by Pierre Albuisson. For his work, Perec won the Prix Renaudot in 1965, the Prix Jean Vigo in 1974, and the Prix Médicis in 1978. He was featured as a Google Doodle on his 80th birthday. Works Books The most complete bibliography of Perec's works is Bernard Magné's Tentative d'inventaire pas trop approximatif des écrits de Georges Perec (Toulouse, Presses Universitaires du Mirail, 1993). Films Un homme qui dort, 1974 (with Bernard Queysanne, English title: The Man Who Sleeps) Les Lieux d'une fugue, 1975 Série noire (Alain Corneau, 1979) Ellis Island (TV film with Robert Bober) References Further reading Biographies Georges Perec: A Life in Words by David Bellos (1993) Criticism The Poetics of Experiment: A Study of the Work of Georges Perec by Warren Motte (1984) Perec ou les textes croisés by J. Pedersen (1985). In French. Pour un Perec lettré, chiffré by J.-M. Raynaud (1987). In French. Georges Perec by Claude Burgelin (1988). In French. Georges Perec: Traces of His Passage by Paul Schwartz (1988) Perecollages 1981–1988 by Bernard Magné (1989). In French. La Mémoire et l'oblique by Philippe Lejeune (1991). In French. Georges Perec: Ecrire Pour Ne Pas Dire by Stella Béhar (1995). In French. Poétique de Georges Perec: «...une trace, une marque ou quelques signes» by Jacques-Denis Bertharion (1998) In French. Georges Perec Et I'Histoire, ed. by Carsten Sestoft & Steen Bille Jorgensen (2000). In French. La Grande Catena. Studi su "La Vie mode d'emploi" by Rinaldo Rinaldi (2004). In Italian. External links L'Association Georges Perec, in French Je me souviens de Georges Perec – comprehensive site in French by Jean-Benoît Guinot, with extensive bibliography of secondary material and links Université McGill: le roman selon les romanciers (French) Inventory and analysis of Georges Perec non-novelistic writings about the novel Reading Georges Perec, by Warren Motte Georges Perèc o la Literatura como Arte Combinatoria. Instrucciones de uso | in Spanish | by Adolfo Vasquez Rocca Pensar y clasificar; Georges Perèc, escritor y trapecista | in Spanish | by Adolfo Vasquez Rocca PhD 1936 births 1982 deaths Burials at Père Lachaise Cemetery Writers from Paris Oulipo members French people of Polish-Jewish descent 20th-century French Jews Jewish French writers French Holocaust survivors Prix Médicis winners Prix Renaudot winners Postmodern writers French adoptees 20th-century French novelists French male novelists Deaths from lung cancer in France 20th-century French male writers Go (game) writers Crossword creators Palindromists
Georges Perec
[ "Physics" ]
2,368
[ "Palindromists", "Symmetry", "Palindromes" ]
45,059
https://en.wikipedia.org/wiki/Part%20of%20speech
In grammar, a part of speech or part-of-speech (abbreviated as POS or PoS, also known as word class or grammatical category) is a category of words (or, more generally, of lexical items) that have similar grammatical properties. Words that are assigned to the same part of speech generally display similar syntactic behavior (they play similar roles within the grammatical structure of sentences), sometimes similar morphological behavior in that they undergo inflection for similar properties and even similar semantic behavior. Commonly listed English parts of speech are noun, verb, adjective, adverb, pronoun, preposition, conjunction, interjection, numeral, article, and determiner. Other terms than part of speech—particularly in modern linguistic classifications, which often make more precise distinctions than the traditional scheme does—include word class, lexical class, and lexical category. Some authors restrict the term lexical category to refer only to a particular type of syntactic category; for them the term excludes those parts of speech that are considered to be function words, such as pronouns. The term form class is also used, although this has various conflicting definitions. Word classes may be classified as open or closed: open classes (typically including nouns, verbs and adjectives) acquire new members constantly, while closed classes (such as pronouns and conjunctions) acquire new members infrequently, if at all. Almost all languages have the word classes noun and verb, but beyond these two there are significant variations among different languages. For example: Japanese has as many as three classes of adjectives, where English has one. Chinese, Korean, Japanese and Vietnamese have a class of nominal classifiers. Many languages do not distinguish between adjectives and adverbs, or between adjectives and verbs (see stative verb). Because of such variation in the number of categories and their identifying properties, analysis of parts of speech must be done for each individual language. Nevertheless, the labels for each category are assigned on the basis of universal criteria. History The classification of words into lexical categories is found from the earliest moments in the history of linguistics. India In the Nirukta, written in the 6th or 5th century BCE, the Sanskrit grammarian Yāska defined four main categories of words: नाम nāma – noun (including adjective) आख्यात ākhyāta – verb उपसर्ग upasarga – pre-verb or prefix निपात nipāta – particle, invariant word (perhaps preposition) These four were grouped into two larger classes: inflectable (nouns and verbs) and uninflectable (pre-verbs and particles). The ancient work on the grammar of the Tamil language, Tolkāppiyam, argued to have been written around 2nd century CE, classifies Tamil words as peyar (பெயர்; noun), vinai (வினை; verb), idai (part of speech which modifies the relationships between verbs and nouns), and uri (word that further qualifies a noun or verb). Western tradition A century or two after the work of Yāska, the Greek scholar Plato wrote in his Cratylus dialogue, "sentences are, I conceive, a combination of verbs [rhêma] and nouns [ónoma]". Aristotle added another class, "conjunction" [sýndesmos], which included not only the words known today as conjunctions, but also other parts (the interpretations differ; in one interpretation it is pronouns, prepositions, and the article). 
By the end of the 2nd century BCE, grammarians had expanded this classification scheme into eight categories, seen in the Art of Grammar, attributed to Dionysius Thrax: 'Name' (ónoma) translated as 'noun': a part of speech inflected for case, signifying a concrete or abstract entity. It includes various species like nouns, adjectives, proper nouns, appellatives, collectives, ordinals, numerals and more. Verb (rhêma): a part of speech without case inflection, but inflected for tense, person and number, signifying an activity or process performed or undergone Participle (metokhḗ): a part of speech sharing features of the verb and the noun Article (árthron): a declinable part of speech, taken to include the definite article, but also the basic relative pronoun Pronoun (antōnymíā): a part of speech substitutable for a noun and marked for a person Preposition (próthesis): a part of speech placed before other words in composition and in syntax Adverb (epírrhēma): a part of speech without inflection, in modification of or in addition to a verb, adjective, clause, sentence, or other adverb Conjunction (sýndesmos): a part of speech binding together the discourse and filling gaps in its interpretation It can be seen that these parts of speech are defined by morphological, syntactic and semantic criteria. The Latin grammarian Priscian (fl. 500 CE) modified the above eightfold system, excluding "article" (since the Latin language, unlike Greek, does not have articles) but adding "interjection". The Latin names for the parts of speech, from which the corresponding modern English terms derive, were nomen, verbum, participium, pronomen, praepositio, adverbium, conjunctio and interjectio. The category nomen included substantives (nomen substantivum, corresponding to what are today called nouns in English), adjectives (nomen adjectivum) and numerals (nomen numerale). This is reflected in the older English terminology noun substantive, noun adjective and noun numeral. Later the adjective became a separate class, as often did the numerals, and the English word noun came to be applied to substantives only. Classification Works of English grammar generally follow the pattern of the European tradition as described above, except that participles are now usually regarded as forms of verbs rather than as a separate part of speech, and numerals are often conflated with other parts of speech: nouns (cardinal numerals, e.g., "one", and collective numerals, e.g., "dozen"), adjectives (ordinal numerals, e.g., "first", and multiplier numerals, e.g., "single") and adverbs (multiplicative numerals, e.g., "once", and distributive numerals, e.g., "singly"). Eight or nine parts of speech are commonly listed: noun verb adjective adverb pronoun preposition conjunction interjection determiner Some traditional classifications consider articles to be adjectives, yielding eight parts of speech rather than nine. And some modern classifications define further classes in addition to these. For discussion see the sections below. Additionally, there are other parts of speech including particles (yes, no) and postpositions (ago, notwithstanding) although many fewer words are in these categories. The classification below, or slight expansions of it, is still followed in most dictionaries: Noun (names) a word or lexical item denoting any abstract (abstract noun: e.g. home) or concrete entity (concrete noun: e.g. 
house); a person (police officer, Michael), place (coastline, London), thing (necktie, television), idea (happiness), or quality (bravery). Nouns can also be classified as count nouns or non-count nouns; some can belong to either category. The most common part of speech; they are called naming words. Pronoun (replaces or places again) a substitute for a noun or noun phrase (them, he). Pronouns make sentences shorter and clearer since they replace nouns. Adjective (describes, limits) a modifier of a noun or pronoun (big, brave). Adjectives make the meaning of another word (noun) more precise. Verb (states action or being) a word denoting an action (walk), occurrence (happen), or state of being (be). Without a verb, a group of words cannot be a clause or sentence. Adverb (describes, limits) a modifier of an adjective, verb, or another adverb (very, quite). Adverbs make language more precise. Preposition (relates) a word that relates words to each other in a phrase or sentence and aids in syntactic context (in, of). Prepositions show the relationship between a noun or a pronoun with another word in the sentence. Conjunction (connects) a syntactic connector; links words, phrases, or clauses (and, but). Conjunctions connect words or group of words. Interjection (expresses feelings and emotions) an emotional greeting or exclamation (Huzzah, Alas). Interjections express strong feelings and emotions. Article (describes, limits)a grammatical marker of definiteness (the) or indefiniteness (a, an). The article is not always listed separately as its own part of speech. It is considered by some grammarians to be a type of adjective or sometimes the term 'determiner' (a broader class) is used. English words are not generally marked as belonging to one part of speech or another; this contrasts with many other European languages, which use inflection more extensively, meaning that a given word form can often be identified as belonging to a particular part of speech and having certain additional grammatical properties. In English, most words are uninflected, while the inflected endings that exist are mostly ambiguous: -ed may mark a verbal past tense, a participle or a fully adjectival form; -s may mark a plural noun, a possessive noun, or a present-tense verb form; -ing may mark a participle, gerund, or pure adjective or noun. Although -ly is a frequent adverb marker, some adverbs (e.g. tomorrow, fast, very) do not have that ending, while many adjectives do have it (e.g. friendly, ugly, lovely), as do occasional words in other parts of speech (e.g. jelly, fly, rely). Many English words can belong to more than one part of speech. Words like neigh, break, outlaw, laser, microwave, and telephone might all be either verbs or nouns. In certain circumstances, even words with primarily grammatical functions can be used as verbs or nouns, as in, "We must look to the hows and not just the whys." The process whereby a word comes to be used as a different part of speech is called conversion or zero derivation. Functional classification Linguists recognize that the above list of eight or nine word classes is drastically simplified. For example, "adverb" is to some extent a catch-all class that includes words with many different functions. Some have even argued that the most basic of category distinctions, that of nouns and verbs, is unfounded, or not applicable to certain languages. 
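The ambiguity described above, where the same English word form can serve as a noun or a verb and endings such as -s, -ed and -ing are shared across classes, is what automatic part-of-speech tagging must resolve from sentence context. The following minimal sketch, which assumes the NLTK library and its default English tagger (neither is mentioned in this article, and the example sentences are invented), shows the word "microwave" receiving different tags depending on its position in the sentence:

```python
# A minimal sketch, assuming NLTK is installed; resource names may differ
# slightly between NLTK versions.
import nltk

# One-time downloads of the tokenizer and tagger models.
nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

sentences = [
    "They microwave the leftovers every evening.",  # 'microwave' used as a verb
    "The microwave broke down yesterday.",          # 'microwave' used as a noun
]

for sentence in sentences:
    tokens = nltk.word_tokenize(sentence)
    # pos_tag returns Penn Treebank tags, e.g. NN for a singular noun,
    # VBP for a non-third-person present-tense verb.
    tagged = nltk.pos_tag(tokens)
    print(tagged)
```

The exact tags produced depend on the tagger model, but the point of the sketch is only that sentence context, not the word form alone, determines which class is assigned.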
Modern linguists have proposed many different schemes whereby the words of English or other languages are placed into more specific categories and subcategories based on a more precise understanding of their grammatical functions. Common lexical category set defined by function may include the following (not all of them will necessarily be applicable in a given language): Categories that will usually be open classes: adjectives adverbs nouns verbs (except auxiliary verbs) interjections Categories that will usually be closed classes: auxiliary verbs coverbs conjunctions determiners (articles, quantifiers, demonstratives, and possessives) measure words or classifiers adpositions (prepositions, postpositions, and circumpositions) preverbs pronouns cardinal numerals particles Within a given category, subgroups of words may be identified based on more precise grammatical properties. For example, verbs may be specified according to the number and type of objects or other complements which they take. This is called subcategorization. Many modern descriptions of grammar include not only lexical categories or word classes, but also phrasal categories, used to classify phrases, in the sense of groups of words that form units having specific grammatical functions. Phrasal categories may include noun phrases (NP), verb phrases (VP) and so on. Lexical and phrasal categories together are called syntactic categories. Open and closed classes Word classes may be either open or closed. An open class is one that commonly accepts the addition of new words, while a closed class is one to which new items are very rarely added. Open classes normally contain large numbers of words, while closed classes are much smaller. Typical open classes found in English and many other languages are nouns, verbs (excluding auxiliary verbs, if these are regarded as a separate class), adjectives, adverbs and interjections. Ideophones are often an open class, though less familiar to English speakers, and are often open to nonce words. Typical closed classes are prepositions (or postpositions), determiners, conjunctions, and pronouns. The open–closed distinction is related to the distinction between lexical and functional categories, and to that between content words and function words, and some authors consider these identical, but the connection is not strict. Open classes are generally lexical categories in the stricter sense, containing words with greater semantic content, while closed classes are normally functional categories, consisting of words that perform essentially grammatical functions. This is not universal: in many languages verbs and adjectives are closed classes, usually consisting of few members, and in Japanese the formation of new pronouns from existing nouns is relatively common, though to what extent these form a distinct word class is debated. Words are added to open classes through such processes as compounding, derivation, coining, and borrowing. When a new word is added through some such process, it can subsequently be used grammatically in sentences in the same ways as other words in its class. A closed class may obtain new items through these same processes, but such changes are much rarer and take much more time. A closed class is normally seen as part of the core language and is not expected to change. In English, for example, new nouns, verbs, etc. 
are being added to the language constantly (including by the common process of verbing and other types of conversion, where an existing word comes to be used in a different part of speech). However, it is very unusual for a new pronoun, for example, to become accepted in the language, even in cases where there may be felt to be a need for one, as in the case of gender-neutral pronouns. The open or closed status of word classes varies between languages, even assuming that corresponding word classes exist. Most conspicuously, in many languages verbs and adjectives form closed classes of content words. An extreme example is found in Jingulu, which has only three verbs, while even the modern Indo-European Persian has no more than a few hundred simple verbs, a great deal of which are archaic. (Some twenty Persian verbs are used as light verbs to form compounds; this lack of lexical verbs is shared with other Iranian languages.) Japanese is similar, having few lexical verbs. Basque verbs are also a closed class, with the vast majority of verbal senses instead expressed periphrastically. In Japanese, verbs and adjectives are closed classes, though these are quite large, with about 700 adjectives, and verbs have opened slightly in recent years. Japanese adjectives are closely related to verbs (they can predicate a sentence, for instance). New verbal meanings are nearly always expressed periphrastically by appending to a noun, as in , and new adjectival meanings are nearly always expressed by adjectival nouns, using the suffix when an adjectival noun modifies a noun phrase, as in . The closedness of verbs has weakened in recent years, and in a few cases new verbs are created by appending to a noun or using it to replace the end of a word. This is mostly in casual speech for borrowed words, with the most well-established example being , from . This recent innovation aside, the huge contribution of Sino-Japanese vocabulary was almost entirely borrowed as nouns (often verbal nouns or adjectival nouns). Other languages where adjectives are closed class include Swahili, Bemba, and Luganda. By contrast, Japanese pronouns are an open class and nouns become used as pronouns with some frequency; a recent example is , now used by some as a first-person pronoun. The status of Japanese pronouns as a distinct class is disputed, however, with some considering it only a use of nouns, not a distinct class. The case is similar in languages of Southeast Asia, including Thai and Lao, in which, like Japanese, pronouns and terms of address vary significantly based on relative social standing and respect. Some word classes are universally closed, however, including demonstratives and interrogative words. See also Part-of-speech tagging Sliding window based part-of-speech tagging Traditional grammar Notes References External links The parts of speech Martin Haspelmath. 2001. "Word Classes and Parts of Speech." In: Baltes, Paul B. & Smelser, Neil J. (eds.) International Encyclopedia of the Social and Behavioral Sciences. Amsterdam: Pergamon, 16538–16545. (PDF) Grammar
Part of speech
[ "Technology" ]
3,676
[ "Parts of speech", "Components" ]
45,063
https://en.wikipedia.org/wiki/Abelian%20category
In mathematics, an abelian category is a category in which morphisms and objects can be added and in which kernels and cokernels exist and have desirable properties. The motivating prototypical example of an abelian category is the category of abelian groups, Ab. Abelian categories are very stable categories; for example they are regular and they satisfy the snake lemma. The class of abelian categories is closed under several categorical constructions; for example, the category of chain complexes of an abelian category, and the category of functors from a small category to an abelian category, are abelian as well. These stability properties make them inevitable in homological algebra and beyond; the theory has major applications in algebraic geometry, cohomology and pure category theory. Mac Lane credits Alexander Grothendieck with defining the abelian category, but there are references indicating that Eilenberg's student Buchsbaum proposed the concept in his PhD thesis, and that Grothendieck popularized it under the name "abelian category". Definitions A category is abelian if it is preadditive and it has a zero object, it has all binary biproducts, it has all kernels and cokernels, and all monomorphisms and epimorphisms are normal. This definition is equivalent to the following "piecemeal" definition: A category is preadditive if it is enriched over the monoidal category of abelian groups. This means that all hom-sets are abelian groups and the composition of morphisms is bilinear. A preadditive category is additive if every finite set of objects has a biproduct. This means that we can form finite direct sums and direct products. Some definitions also require an additive category to have a zero object (the empty biproduct). An additive category is preabelian if every morphism has both a kernel and a cokernel. Finally, a preabelian category is abelian if every monomorphism and every epimorphism is normal. This means that every monomorphism is a kernel of some morphism, and every epimorphism is a cokernel of some morphism. Note that the enriched structure on hom-sets is a consequence of the first three axioms of the first definition. This highlights the foundational relevance of the category of abelian groups in the theory and its canonical nature. The concept of exact sequence arises naturally in this setting, and it turns out that exact functors, i.e. the functors preserving exact sequences in various senses, are the relevant functors between abelian categories. This exactness concept has been axiomatized in the theory of exact categories, forming a very special case of regular categories. Examples As mentioned above, the category of all abelian groups is an abelian category. The category of all finitely generated abelian groups is also an abelian category, as is the category of all finite abelian groups. If R is a ring, then the category of all left (or right) modules over R is an abelian category. In fact, it can be shown that any small abelian category is equivalent to a full subcategory of such a category of modules (Mitchell's embedding theorem). If R is a left-noetherian ring, then the category of finitely generated left modules over R is abelian. In particular, the category of finitely generated modules over a noetherian commutative ring is abelian; in this way, abelian categories show up in commutative algebra. As special cases of the two previous examples: the category of vector spaces over a fixed field k is abelian, as is the category of finite-dimensional vector spaces over k. 
If X is a topological space, then the category of all (real or complex) vector bundles on X is not usually an abelian category, as there can be monomorphisms that are not kernels. If X is a topological space, then the category of all sheaves of abelian groups on X is an abelian category. More generally, the category of sheaves of abelian groups on a Grothendieck site is an abelian category. In this way, abelian categories show up in algebraic topology and algebraic geometry. If C is a small category and A is an abelian category, then the category of all functors from C to A forms an abelian category. If C is small and preadditive, then the category of all additive functors from C to A also forms an abelian category. The latter is a generalization of the R-module example, since a ring can be understood as a preadditive category with a single object. Grothendieck's axioms In his Tōhoku article, Grothendieck listed four additional axioms (and their duals) that an abelian category A might satisfy. These axioms are still in common use to this day. They are the following: AB3) For every indexed family (Ai) of objects of A, the coproduct ⊕Ai exists in A (i.e. A is cocomplete). AB4) A satisfies AB3), and the coproduct of a family of monomorphisms is a monomorphism. AB5) A satisfies AB3), and filtered colimits of exact sequences are exact. and their duals AB3*) For every indexed family (Ai) of objects of A, the product ∏Ai exists in A (i.e. A is complete). AB4*) A satisfies AB3*), and the product of a family of epimorphisms is an epimorphism. AB5*) A satisfies AB3*), and filtered limits of exact sequences are exact. Axioms AB1) and AB2) were also given. They are what make an additive category abelian. Specifically: AB1) Every morphism has a kernel and a cokernel. AB2) For every morphism f, the canonical morphism from coim f to im f is an isomorphism. Grothendieck also gave axioms AB6) and AB6*). AB6) A satisfies AB3), and given a family of filtered categories I_j, j ∈ J, and maps A_j : I_j → A, we have ∏_{j∈J} lim_{I_j} A_j = lim_{∏_{j∈J} I_j} ∏_{j∈J} A_j, where lim denotes the filtered colimit; informally, filtered colimits commute with arbitrary products. AB6*) A satisfies AB3*), and the dual statement holds: given a family of cofiltered categories I_j, j ∈ J, and maps A_j : I_j → A, the coproduct of the cofiltered limits coincides with the cofiltered limit of the coproducts, where lim denotes the cofiltered limit. Elementary properties Given any pair A, B of objects in an abelian category, there is a special zero morphism from A to B. This can be defined as the zero element of the hom-set Hom(A,B), since this is an abelian group. Alternatively, it can be defined as the unique composition A → 0 → B, where 0 is the zero object of the abelian category. In an abelian category, every morphism f can be written as the composition of an epimorphism followed by a monomorphism. This epimorphism is called the coimage of f, while the monomorphism is called the image of f. Subobjects and quotient objects are well-behaved in abelian categories. For example, the poset of subobjects of any given object A is a bounded lattice. Every abelian category A is a module over the monoidal category of finitely generated abelian groups; that is, we can form a tensor product of a finitely generated abelian group G and any object A of A. The abelian category is also a comodule; Hom(G,A) can be interpreted as an object of A. If A is complete, then we can remove the requirement that G be finitely generated; most generally, we can form finitary enriched limits in A. Given an object A in an abelian category, flatness refers to the idea that the tensor product functor − ⊗ A is an exact functor. See flat module or, for more generality, flat morphism. 
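The epi-mono factorization described above can be made concrete in Ab, the prototypical abelian category. The following LaTeX fragment is an illustrative worked example; the particular homomorphism is chosen here for illustration and does not come from the article:

```latex
% A worked example in Ab, the category of abelian groups.
% For f : A -> B, the kernel, cokernel, image and coimage are the usual ones,
% and f factors as an epimorphism followed by a monomorphism.
\[
  f \colon \mathbb{Z} \longrightarrow \mathbb{Z}/4\mathbb{Z}, \qquad f(n) = 2n \bmod 4 .
\]
\[
  \ker f = 2\mathbb{Z}, \qquad
  \operatorname{im} f = \{0, 2\} \cong \mathbb{Z}/2\mathbb{Z}, \qquad
  \operatorname{coker} f = (\mathbb{Z}/4\mathbb{Z})/\operatorname{im} f \cong \mathbb{Z}/2\mathbb{Z}.
\]
\[
  \operatorname{coim} f = \mathbb{Z}/\ker f = \mathbb{Z}/2\mathbb{Z}
  \;\xrightarrow{\;\cong\;}\; \operatorname{im} f ,
  \qquad
  f = \bigl(\operatorname{im} f \hookrightarrow \mathbb{Z}/4\mathbb{Z}\bigr)
      \circ
      \bigl(\mathbb{Z} \twoheadrightarrow \operatorname{coim} f\bigr).
\]
```

The isomorphism between coim f and im f in this computation is exactly what axiom AB2) above asserts in general.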
Related concepts Abelian categories are the most general setting for homological algebra. All of the constructions used in that field are relevant, such as exact sequences, and especially short exact sequences, and derived functors. Important theorems that apply in all abelian categories include the five lemma (and the short five lemma as a special case), as well as the snake lemma (and the nine lemma as a special case). Semi-simple Abelian categories An abelian category is called semi-simple if there is a collection of objects {X_i}_{i∈I} called simple objects (meaning the only sub-objects of any X_i are the zero object and X_i itself) such that every object X can be decomposed as a direct sum X ≅ ⊕_{i∈I} X_i, where ⊕ denotes the coproduct of the abelian category. This technical condition is rather strong and excludes many natural examples of abelian categories found in nature. For example, most module categories over a ring R are not semi-simple; in fact, the category of modules over R is semi-simple if and only if R is a semisimple ring. Examples Some abelian categories found in nature are semi-simple, such as: The category of vector spaces over a fixed field k. By Maschke's theorem, the category of representations of a finite group G over a field whose characteristic does not divide the order of G is a semi-simple abelian category. The category of coherent sheaves on a Noetherian scheme X is semi-simple if and only if X is a finite disjoint union of irreducible points. This is equivalent to a finite coproduct of categories of vector spaces over different fields. Showing this is true in the forward direction is equivalent to showing all Ext groups vanish, meaning the cohomological dimension is 0. This only happens when the skyscraper sheaves at a point have Zariski tangent space equal to zero, which can be checked using local algebra for such a scheme. Non-examples There do exist some natural counter-examples of abelian categories which are not semi-simple, such as certain categories of representations. For example, the category of representations of the additive Lie group (ℝ, +) contains the two-dimensional representation a ↦ (1 a; 0 1), which has only one nontrivial subrepresentation, of dimension 1. In fact, this is true for any unipotent group. Subcategories of abelian categories There are numerous types of (full, additive) subcategories of abelian categories that occur in nature, as well as some conflicting terminology. Let A be an abelian category, C a full, additive subcategory, and I the inclusion functor. C is an exact subcategory if it is itself an exact category and the inclusion I is an exact functor. This occurs if and only if C is closed under pullbacks of epimorphisms and pushouts of monomorphisms. The exact sequences in C are thus the exact sequences in A for which all objects lie in C. C is an abelian subcategory if it is itself an abelian category and the inclusion I is an exact functor. This occurs if and only if C is closed under taking kernels and cokernels. Note that there are examples of full subcategories of an abelian category that are themselves abelian but where the inclusion functor is not exact, so they are not abelian subcategories (see below). C is a thick subcategory if it is closed under taking direct summands and satisfies the 2-out-of-3 property on short exact sequences; that is, if 0 → M′ → M → M″ → 0 is a short exact sequence in A such that two of M′, M, M″ lie in C, then so does the third. In other words, C is closed under kernels of epimorphisms, cokernels of monomorphisms, and extensions. Note that P. Gabriel used the term thick subcategory to describe what we here call a Serre subcategory. 
C is a topologizing subcategory if it is closed under subquotients. C is a Serre subcategory if, for all short exact sequences 0 → M′ → M → M″ → 0 in A, we have M in C if and only if both M′ and M″ are in C. In other words, C is closed under extensions and subquotients. These subcategories are precisely the kernels of exact functors from A to another abelian category. C is a localizing subcategory if it is a Serre subcategory such that the quotient functor admits a right adjoint. There are two competing notions of a wide subcategory. One version is that C contains every object of A (up to isomorphism); for a full subcategory this is obviously not interesting. (This is also called a lluf subcategory.) The other version is that C is closed under extensions. Here is an explicit example of a full, additive subcategory of an abelian category that is itself abelian but the inclusion functor is not exact. Let k be a field, T_n the algebra of upper-triangular n × n matrices over k, and A_n the category of finite-dimensional T_n-modules. Then each A_n is an abelian category and we have an inclusion functor I : A_2 → A_3 identifying the simple projective, simple injective and indecomposable projective-injective modules. The essential image of I is a full, additive subcategory, but I is not exact. History Abelian categories were introduced by Buchsbaum (under the name of "exact category") and Grothendieck in order to unify various cohomology theories. At the time, there was a cohomology theory for sheaves, and a cohomology theory for groups. The two were defined differently, but they had similar properties. In fact, much of category theory was developed as a language to study these similarities. Grothendieck unified the two theories: they both arise as derived functors on abelian categories; the abelian category of sheaves of abelian groups on a topological space, and the abelian category of G-modules for a given group G. See also Triangulated category References Additive categories Homological algebra Niels Henrik Abel
Abelian category
[ "Mathematics" ]
2,959
[ "Mathematical structures", "Additive categories", "Fields of abstract algebra", "Category theory", "Homological algebra" ]
45,086
https://en.wikipedia.org/wiki/Biodiversity
Biodiversity is the variability of life on Earth. It can be measured on various levels. There is for example genetic variability, species diversity, ecosystem diversity and phylogenetic diversity. Diversity is not distributed evenly on Earth. It is greater in the tropics as a result of the warm climate and high primary productivity in the region near the equator. Tropical forest ecosystems cover less than one-fifth of Earth's terrestrial area and contain about 50% of the world's species. There are latitudinal gradients in species diversity for both marine and terrestrial taxa. Since life began on Earth, six major mass extinctions and several minor events have led to large and sudden drops in biodiversity. The Phanerozoic aeon (the last 540 million years) marked a rapid growth in biodiversity via the Cambrian explosion. In this period, the majority of multicellular phyla first appeared. The next 400 million years included repeated, massive biodiversity losses. Those events have been classified as mass extinction events. In the Carboniferous, rainforest collapse may have led to a great loss of plant and animal life. The Permian–Triassic extinction event, 251 million years ago, was the worst; vertebrate recovery took 30 million years. Human activities have led to an ongoing biodiversity loss and an accompanying loss of genetic diversity. This process is often referred to as Holocene extinction, or sixth mass extinction. For example, it was estimated in 2007 that up to 30% of all species will be extinct by 2050. Destroying habitats for farming is a key reason why biodiversity is decreasing today. Climate change also plays a role. This can be seen for example in the effects of climate change on biomes. This anthropogenic extinction may have started toward the end of the Pleistocene, as some studies suggest that the megafaunal extinction event that took place around the end of the last ice age partly resulted from overhunting. Definitions Biologists most often define biodiversity as the "totality of genes, species and ecosystems of a region". An advantage of this definition is that it presents a unified view of the traditional types of biological variety previously identified: taxonomic diversity (usually measured at the species diversity level) ecological diversity (often viewed from the perspective of ecosystem diversity) morphological diversity (which stems from genetic diversity and molecular diversity) functional diversity (which is a measure of the number of functionally disparate species within a population (e.g. different feeding mechanism, different motility, predator vs prey, etc.)) Biodiversity is most commonly used to replace the more clearly-defined and long-established terms, species diversity and species richness. However, there is no concrete definition for biodiversity, as its definition continues to be defined. Other definitions include (in chronological order): An explicit definition consistent with this interpretation was first given in a paper by Bruce A. Wilcox commissioned by the International Union for the Conservation of Nature and Natural Resources (IUCN) for the 1982 World National Parks Conference. Wilcox's definition was "Biological diversity is the variety of life forms...at all levels of biological systems (i.e., molecular, organismic, population, species and ecosystem)...". A publication by Wilcox in 1984: Biodiversity can be defined genetically as the diversity of alleles, genes and organisms. 
Geneticists study processes such as mutation and gene transfer that drive evolution. The 1992 United Nations Earth Summit defined biological diversity as "the variability among living organisms from all sources, including, inter alia, terrestrial, marine and other aquatic ecosystems and the ecological complexes of which they are part: this includes diversity within species, between species and of ecosystems". This definition is used in the United Nations Convention on Biological Diversity. Gaston and Spicer's definition in their book "Biodiversity: an introduction" in 2004 is "variation of life at all levels of biological organization". The Food and Agriculture Organization of the United Nations (FAO) defined biodiversity in 2019 as "the variability that exists among living organisms (both within and between species) and the ecosystems of which they are part." Number of species According to estimates by Mora et al. (2011), there are approximately 8.7 million terrestrial species and 2.2 million oceanic species. The authors note that these estimates are strongest for eukaryotic organisms and likely represent the lower bound of prokaryote diversity. Other estimates include: 220,000 vascular plants, estimated using the species–area relation method; 0.7–1 million marine species; 10–30 million insects (of which some 0.9 million are known today); 5–10 million bacteria; 1.5–3 million fungi, based on data from the tropics, long-term non-tropical sites and molecular studies that have revealed cryptic speciation (some 0.075 million species of fungi had been documented by 2001); and 1 million mites. The number of microbial species is not reliably known, but the Global Ocean Sampling Expedition dramatically increased the estimates of genetic diversity by identifying an enormous number of new genes from near-surface plankton samples at various marine locations, initially over the 2004–2006 period. The findings may eventually cause a significant change in the way science defines species and other taxonomic categories. Since the rate of extinction has increased, many extant species may become extinct before they are described. Not surprisingly, among animals the most studied groups are birds and mammals, whereas fishes and arthropods are the least studied animal groups. Current biodiversity loss During the last century, decreases in biodiversity have been increasingly observed. It was estimated in 2007 that up to 30% of all species will be extinct by 2050. Of these, about one eighth of known plant species are threatened with extinction. Estimates of the rate of species loss reach as high as 140,000 species per year (based on species–area theory). This figure indicates unsustainable ecological practices, because only a small number of new species emerge each year. The rate of species loss is greater now than at any time in human history, with extinctions occurring at rates hundreds of times higher than background extinction rates, and the rate is expected to grow further in the coming years. As of 2012, some studies suggest that 25% of all mammal species could be extinct in 20 years. In absolute terms, the planet has lost 58% of its biodiversity since 1970 according to a 2016 study by the World Wildlife Fund. The Living Planet Report 2014 claims that "the number of mammals, birds, reptiles, amphibians, and fish across the globe is, on average, about half the size it was 40 years ago". The reported declines were 39% for terrestrial wildlife, 39% for marine wildlife and 76% for freshwater wildlife. 
Biodiversity took the biggest hit in Latin America, plummeting 83 percent. High-income countries showed a 10% increase in biodiversity, which was canceled out by a loss in low-income countries. This is despite the fact that high-income countries use five times the ecological resources of low-income countries, which was explained as a result of a process whereby wealthy nations are outsourcing resource depletion to poorer nations, which are suffering the greatest ecosystem losses. A 2017 study published in PLOS One found that the biomass of insect life in Germany had declined by three-quarters in the last 25 years. Dave Goulson of Sussex University stated that their study suggested that humans "appear to be making vast tracts of land inhospitable to most forms of life, and are currently on course for ecological Armageddon. If we lose the insects then everything is going to collapse." In 2020 the World Wildlife Fund published a report saying that "biodiversity is being destroyed at a rate unprecedented in human history". The report claims that the populations of the examined species declined by an average of 68% between 1970 and 2016. Of 70,000 monitored species, around 48% are experiencing population declines from human activity (in 2023), whereas only 3% have increasing populations. Rates of decline in biodiversity in the current sixth mass extinction match or exceed rates of loss in the five previous mass extinction events in the fossil record. Biodiversity loss is in fact "one of the most critical manifestations of the Anthropocene" (since around the 1950s); the continued decline of biodiversity constitutes "an unprecedented threat" to the continued existence of human civilization. The reduction is caused primarily by human impacts, particularly habitat destruction. Since the Stone Age, species loss has accelerated above the average basal rate, driven by human activity. Estimates of species losses are at a rate 100–10,000 times as fast as is typical in the fossil record. Loss of biodiversity results in the loss of natural capital that supplies ecosystem goods and services. Species today are being wiped out at a rate 100 to 1,000 times higher than baseline, and the rate of extinctions is increasing. This process destroys the resilience and adaptability of life on Earth. In 2006, many species were formally classified as rare or endangered or threatened; moreover, scientists have estimated that millions more species are at risk which have not been formally recognized. About 40 percent of the 40,177 species assessed using the IUCN Red List criteria are now listed as threatened with extinction—a total of 16,119. As of late 2022, 9,251 species were listed as critically endangered by the IUCN. Numerous scientists and the IPBES Global Assessment Report on Biodiversity and Ecosystem Services assert that human population growth and overconsumption are the primary factors in this decline. However, other scientists have criticized this finding and say that loss of habitat caused by "the growth of commodities for export" is the main driver. Still other studies have pointed out that habitat destruction for the expansion of agriculture and the overexploitation of wildlife are the more significant drivers of contemporary biodiversity loss, not climate change. Distribution Biodiversity is not evenly distributed; rather, it varies greatly across the globe as well as within regions and seasons. 
Among other factors, the diversity of all living things (biota) depends on temperature, precipitation, altitude, soils, geography and the interactions between other species. The study of the spatial distribution of organisms, species and ecosystems, is the science of biogeography. Diversity consistently measures higher in the tropics and in other localized regions such as the Cape Floristic Region and lower in polar regions generally. Rain forests that have had wet climates for a long time, such as Yasuní National Park in Ecuador, have particularly high biodiversity. There is local biodiversity, which directly impacts daily life, affecting the availability of fresh water, food choices, and fuel sources for humans. Regional biodiversity includes habitats and ecosystems that synergizes and either overlaps or differs on a regional scale. National biodiversity within a country determines the ability for a country to thrive according to its habitats and ecosystems on a national scale. Also, within a country, endangered species are initially supported on a national level then internationally. Ecotourism may be utilized to support the economy and encourages tourists to continue to visit and support species and ecosystems they visit, while they enjoy the available amenities provided. International biodiversity impacts global livelihood, food systems, and health. Problematic pollution, over consumption, and climate change can devastate international biodiversity. Nature-based solutions are a critical tool for a global resolution. Many species are in danger of becoming extinct and need world leaders to be proactive with the Kunming-Montreal Global Biodiversity Framework. Terrestrial biodiversity is thought to be up to 25 times greater than ocean biodiversity. Forests harbour most of Earth's terrestrial biodiversity. The conservation of the world's biodiversity is thus utterly dependent on the way in which we interact with and use the world's forests. A new method used in 2011, put the total number of species on Earth at 8.7 million, of which 2.1 million were estimated to live in the ocean. However, this estimate seems to under-represent the diversity of microorganisms. Forests provide habitats for 80 percent of amphibian species, 75 percent of bird species and 68 percent of mammal species. About 60 percent of all vascular plants are found in tropical forests. Mangroves provide breeding grounds and nurseries for numerous species of fish and shellfish and help trap sediments that might otherwise adversely affect seagrass beds and coral reefs, which are habitats for many more marine species. Forests span around 4 billion acres (nearly a third of the Earth's land mass) and are home to approximately 80% of the world's biodiversity. About 1 billion hectares are covered by primary forests. Over 700 million hectares of the world's woods are officially protected. The biodiversity of forests varies considerably according to factors such as forest type, geography, climate and soils – in addition to human use. Most forest habitats in temperate regions support relatively few animal and plant species and species that tend to have large geographical distributions, while the montane forests of Africa, South America and Southeast Asia and lowland forests of Australia, coastal Brazil, the Caribbean islands, Central America and insular Southeast Asia have many species with small geographical distributions. 
Areas with dense human populations and intense agricultural land use, such as Europe, parts of Bangladesh, China, India and North America, are less intact in terms of their biodiversity. Northern Africa, southern Australia, coastal Brazil, Madagascar and South Africa are also identified as areas with striking losses in biodiversity intactness. European forests in EU and non-EU nations comprise more than 30% of Europe's land mass (around 227 million hectares), representing an almost 10% growth since 1990. Latitudinal gradients Generally, there is an increase in biodiversity from the poles to the tropics. Thus localities at lower latitudes have more species than localities at higher latitudes. This is often referred to as the latitudinal gradient in species diversity. Several ecological factors may contribute to the gradient, but the ultimate factor behind many of them is the greater mean temperature at the equator compared to that at the poles. Even though terrestrial biodiversity declines from the equator to the poles, some studies claim that this characteristic is unverified in aquatic ecosystems, especially in marine ecosystems. The latitudinal distribution of parasites does not appear to follow this rule. Also, in terrestrial ecosystems soil bacterial diversity has been shown to be highest in temperate climatic zones, which has been attributed to carbon inputs and habitat connectivity. In 2016, an alternative hypothesis ("the fractal biodiversity") was proposed to explain the biodiversity latitudinal gradient. In this study, the species pool size and the fractal nature of ecosystems were combined to clarify some general patterns of this gradient. This hypothesis considers temperature, moisture, and net primary production (NPP) as the main variables of an ecosystem niche and as the axes of the ecological hypervolume. In this way, it is possible to build fractal hypervolumes, whose fractal dimension rises to three moving towards the equator. Biodiversity Hotspots A biodiversity hotspot is a region with a high level of endemic species that have experienced great habitat loss. The term hotspot was introduced in 1988 by Norman Myers. While hotspots are spread all over the world, the majority are forest areas and most are located in the tropics. Brazil's Atlantic Forest is considered one such hotspot, containing roughly 20,000 plant species, 1,350 vertebrates and millions of insects, about half of which occur nowhere else. The island of Madagascar and India are also particularly notable. Colombia is characterized by high biodiversity, with the highest rate of species per unit area worldwide, and it has the largest number of endemics (species that are not found naturally anywhere else) of any country. About 10% of the species of the Earth can be found in Colombia, including over 1,900 species of bird, more than in Europe and North America combined. Colombia has 10% of the world's mammal species, 14% of its amphibian species and 18% of its bird species. Madagascar's dry deciduous forests and lowland rainforests possess a high ratio of endemism. Since the island separated from mainland Africa 66 million years ago, many species and ecosystems have evolved independently. Indonesia's 17,000 islands contain 10% of the world's flowering plants, 12% of mammals and 17% of reptiles, amphibians and birds—along with nearly 240 million people. 
Many regions of high biodiversity and/or endemism arise from specialized habitats which require unusual adaptations, for example, alpine environments in high mountains, or Northern European peat bogs. Accurately measuring differences in biodiversity can be difficult. Selection bias amongst researchers may contribute to biased empirical research for modern estimates of biodiversity. In 1768, Rev. Gilbert White succinctly observed of his Selborne, Hampshire "all nature is so full, that that district produces the most variety which is the most examined." Evolution over geologic timeframes Biodiversity is the result of 3.5 billion years of evolution. The origin of life has not been established by science, however, some evidence suggests that life may already have been well-established only a few hundred million years after the formation of the Earth. Until approximately 2.5 billion years ago, all life consisted of microorganisms – archaea, bacteria, and single-celled protozoans and protists. Biodiversity grew fast during the Phanerozoic (the last 540 million years), especially during the so-called Cambrian explosion—a period during which nearly every phylum of multicellular organisms first appeared. However, recent studies suggest that this diversification had started earlier, at least in the Ediacaran, and that it continued in the Ordovician. Over the next 400 million years or so, invertebrate diversity showed little overall trend and vertebrate diversity shows an overall exponential trend. This dramatic rise in diversity was marked by periodic, massive losses of diversity classified as mass extinction events. A significant loss occurred in anamniotic limbed vertebrates when rainforests collapsed in the Carboniferous, but amniotes seem to have been little affected by this event; their diversification slowed down later, around the Asselian/Sakmarian boundary, in the early Cisuralian (Early Permian), about 293 Ma ago. The worst was the Permian-Triassic extinction event, 251 million years ago. Vertebrates took 30 million years to recover from this event. The most recent major mass extinction event, the Cretaceous–Paleogene extinction event, occurred 66 million years ago. This period has attracted more attention than others because it resulted in the extinction of the dinosaurs, which were represented by many lineages at the end of the Maastrichtian, just before that extinction event. However, many other taxa were affected by this crisis, which affected even marine taxa, such as ammonites, which also became extinct around that time. The biodiversity of the past is called Paleobiodiversity. The fossil record suggests that the last few million years featured the greatest biodiversity in history. However, not all scientists support this view, since there is uncertainty as to how strongly the fossil record is biased by the greater availability and preservation of recent geologic sections. Some scientists believe that corrected for sampling artifacts, modern biodiversity may not be much different from biodiversity 300 million years ago, whereas others consider the fossil record reasonably reflective of the diversification of life. Estimates of the present global macroscopic species diversity vary from 2 million to 100 million, with a best estimate of somewhere near 9 million, the vast majority arthropods. Diversity appears to increase continually in the absence of natural selection. 
Diversification The existence of a global carrying capacity, limiting the amount of life that can live at once, is debated, as is the question of whether such a limit would also cap the number of species. While records of life in the sea show a logistic pattern of growth, life on land (insects, plants and tetrapods) shows an exponential rise in diversity. As one author states, "Tetrapods have not yet invaded 64 percent of potentially habitable modes and it could be that without human influence the ecological and taxonomic diversity of tetrapods would continue to increase exponentially until most or all of the available eco-space is filled." It also appears that the diversity continues to increase over time, especially after mass extinctions. On the other hand, changes through the Phanerozoic correlate much better with the hyperbolic model (widely used in population biology, demography and macrosociology, as well as fossil biodiversity) than with exponential and logistic models. The latter models imply that changes in diversity are guided by a first-order positive feedback (more ancestors, more descendants) and/or a negative feedback arising from resource limitation. The hyperbolic model implies a second-order positive feedback. Differences in the strength of the second-order feedback due to different intensities of interspecific competition might explain the faster rediversification of ammonoids in comparison to bivalves after the end-Permian extinction. The hyperbolic pattern of the world population growth arises from a second-order positive feedback between the population size and the rate of technological growth. The hyperbolic character of biodiversity growth can be similarly accounted for by a feedback between diversity and community structure complexity. The similarity between the curves of biodiversity and human population probably comes from the fact that both are derived from the interference of the hyperbolic trend with cyclical and stochastic dynamics. Most biologists agree, however, that the period since human emergence is part of a new mass extinction, named the Holocene extinction event, caused primarily by the impact humans are having on the environment. It has been argued that the present rate of extinction is sufficient to eliminate most species on the planet Earth within 100 years. New species are regularly discovered (on average between 5,000 and 10,000 new species each year, most of them insects) and many, though discovered, are not yet classified (estimates are that nearly 90% of all arthropods are not yet classified). Most of the terrestrial diversity is found in tropical forests and in general, the land has more species than the ocean; some 8.7 million species may exist on Earth, of which some 2.1 million live in the ocean. Species diversity in geologic time frames It is estimated that 5 to 50 billion species have existed on the planet. Assuming that there may be a maximum of about 50 million species currently alive, it stands to reason that greater than 99% of the planet's species went extinct prior to the evolution of humans. Estimates on the number of Earth's current species range from 10 million to 14 million, of which about 1.2 million have been documented and over 86% have not yet been described. However, a May 2016 scientific report estimates that 1 trillion species are currently on Earth, with only one-thousandth of one percent described. The total amount of related DNA base pairs on Earth is estimated at 5.0 × 10^37 and weighs 50 billion tonnes. 
In comparison, the total mass of the biosphere has been estimated to be as much as four trillion tons of carbon. In July 2016, scientists reported identifying a set of 355 genes from the last universal common ancestor (LUCA) of all organisms living on Earth. The age of Earth is about 4.54 billion years. The earliest undisputed evidence of life dates at least from 3.7 billion years ago, during the Eoarchean era after a geological crust started to solidify following the earlier molten Hadean eon. There are microbial mat fossils found in 3.48 billion-year-old sandstone discovered in Western Australia. Other early physical evidence of a biogenic substance is graphite in 3.7 billion-year-old meta-sedimentary rocks discovered in Western Greenland. More recently, in 2015, "remains of biotic life" were found in 4.1 billion-year-old rocks in Western Australia. According to one of the researchers, "If life arose relatively quickly on Earth...then it could be common in the universe." Role and benefits of biodiversity Ecosystem services There have been many claims about biodiversity's effect on the ecosystem services, especially provisioning and regulating services. Some of those claims have been validated, some are incorrect and some lack enough evidence to draw definitive conclusions. Ecosystem services have been grouped into three types: Provisioning services which involve the production of renewable resources (e.g.: food, wood, fresh water) Regulating services which are those that lessen environmental change (e.g.: climate regulation, pest/disease control) Cultural services which represent human value and enjoyment (e.g.: landscape aesthetics, cultural heritage, outdoor recreation and spiritual significance) Experiments with controlled environments have shown that humans cannot easily build ecosystems to support human needs; for example insect pollination cannot be mimicked, though there have been attempts to create artificial pollinators using unmanned aerial vehicles. The economic activity of pollination alone represented between $2.1 billion and $14.6 billion in 2003. Other sources have reported somewhat conflicting results, and in 1997 Robert Costanza and his colleagues reported the estimated global value of ecosystem services (not captured in traditional markets) at an average of $33 trillion annually. Provisioning services With regard to provisioning services, greater species diversity has the following benefits: Greater species diversity of plants increases fodder yield (synthesis of 271 experimental studies). Greater genetic diversity of plants (i.e. diversity within a single species) increases overall crop yield (synthesis of 575 experimental studies), although another review of 100 experimental studies reported mixed evidence. Greater species diversity of trees increases overall wood production (synthesis of 53 experimental studies). However, there is not enough data to draw a conclusion about the effect of tree trait diversity on wood production. 
Regulating services With regard to regulating services, greater species diversity has the following benefits: Greater species diversity of fish increases the stability of fisheries yield (synthesis of 8 observational studies). Greater species diversity of plants increases carbon sequestration, although this finding relates only to the actual uptake of carbon dioxide and not to long-term storage (synthesis of 479 experimental studies). Greater species diversity of plants increases soil nutrient remineralization (synthesis of 103 experimental studies), increases soil organic matter (synthesis of 85 experimental studies) and decreases disease prevalence on plants (synthesis of 107 experimental studies). Greater species diversity of natural pest enemies decreases herbivorous pest populations (data from two separate reviews; synthesis of 266 experimental and observational studies; synthesis of 18 observational studies), although another review of 38 experimental studies found mixed support for this claim, suggesting that in cases where mutual intraguild predation occurs, a single predatory species is often more effective. Agriculture Agricultural diversity can be divided into two categories: intraspecific diversity, which includes the genetic variation within a single species, like the potato (Solanum tuberosum) that is composed of many different forms and types (e.g. in the U.S. they might compare russet potatoes with new potatoes or purple potatoes, all different, but all part of the same species, S. tuberosum). The other category of agricultural diversity is called interspecific diversity and refers to the number and types of different species. Agricultural diversity can also be divided by whether it is 'planned' diversity or 'associated' diversity. This is a functional classification that we impose and not an intrinsic feature of life or diversity. Planned diversity includes the crops which a farmer has encouraged, planted or raised (e.g. crops, covers, symbionts, and livestock, among others), which can be contrasted with the associated diversity that arrives among the crops, uninvited (e.g. herbivores, weed species and pathogens, among others). Associated biodiversity can be damaging or beneficial. Beneficial associated biodiversity includes, for instance, wild pollinators such as wild bees and syrphid flies that pollinate crops, as well as natural enemies and antagonists of pests and pathogens. Beneficial associated biodiversity occurs abundantly in crop fields and provides multiple ecosystem services such as pest control, nutrient cycling and pollination that support crop production. Although about 80 percent of humans' food supply comes from just 20 kinds of plants, humans use at least 40,000 species. Earth's surviving biodiversity provides resources for increasing the range of food and other products suitable for human use, although the present extinction rate shrinks that potential. Human health Biodiversity's relevance to human health is becoming an international political issue, as scientific evidence builds on the global health implications of biodiversity loss. This issue is closely linked with the issue of climate change, as many of the anticipated health risks of climate change are associated with changes in biodiversity (e.g. changes in populations and distribution of disease vectors, scarcity of fresh water, impacts on agricultural biodiversity and food resources etc.). 
This is because the species most likely to disappear are those that buffer against infectious disease transmission, while surviving species tend to be the ones that increase disease transmission, such as that of West Nile virus, Lyme disease and hantavirus, according to a study co-authored by Felicia Keesing, an ecologist at Bard College, and Drew Harvell, associate director for Environment of the Atkinson Center for a Sustainable Future (ACSF) at Cornell University. Some of the health issues influenced by biodiversity include dietary health and nutrition security, infectious disease, medical science and medicinal resources, and social and psychological health. Biodiversity is also known to have an important role in reducing disaster risk and in post-disaster relief and recovery efforts. Biodiversity provides critical support for drug discovery and the availability of medicinal resources. A significant proportion of drugs are derived, directly or indirectly, from biological sources: at least 50% of the pharmaceutical compounds on the US market are derived from plants, animals and microorganisms, while about 80% of the world population depends on medicines from nature (used in either modern or traditional medical practice) for primary healthcare. Only a tiny fraction of wild species has been investigated for medical potential. Marine ecosystems are particularly important, although inappropriate bioprospecting can increase biodiversity loss, as well as violate the laws of the communities and states from which the resources are taken. Business and industry Many industrial materials derive directly from biological sources. These include building materials, fibers, dyes, rubber, and oil. Biodiversity is also important to the security of resources such as water, timber, paper, fiber, and food. As a result, biodiversity loss is a significant risk factor in business development and a threat to long-term economic sustainability. Cultural and aesthetic value Philosophically it could be argued that biodiversity has intrinsic aesthetic and spiritual value to mankind in and of itself. This idea can be used as a counterweight to the notion that tropical forests and other ecological realms are only worthy of conservation because of the services they provide. Biodiversity also affords many non-material benefits including spiritual and aesthetic values, knowledge systems and education. Measuring biodiversity Analytical limits Less than 1% of all species that have been described have been studied beyond noting their existence. The vast majority of Earth's species are microbial. Contemporary biodiversity physics is "firmly fixated on the visible [macroscopic] world". For example, microbial life is metabolically and environmentally more diverse than multicellular life (see e.g., extremophile). "On the tree of life, based on analyses of small-subunit ribosomal RNA, visible life consists of barely noticeable twigs. The inverse relationship of size and population recurs higher on the evolutionary ladder—to a first approximation, all multicellular species on Earth are insects". Insect extinction rates are high—supporting the Holocene extinction hypothesis. Biodiversity changes (other than losses) Natural seasonal variations Biodiversity naturally varies due to seasonal shifts. Spring's arrival enhances biodiversity as numerous species breed and feed, while winter's onset temporarily reduces it as some insects perish and migrating animals leave. 
Additionally, the seasonal fluctuation in plant and invertebrate populations influences biodiversity. Introduced and invasive species Barriers such as large rivers, seas, oceans, mountains and deserts encourage diversity by enabling independent evolution on either side of the barrier, via the process of allopatric speciation. The term invasive species is applied to species that breach the natural barriers that would normally keep them constrained. Without barriers, such species occupy new territory, often supplanting native species by occupying their niches, or by using resources that would normally sustain native species. Species are increasingly being moved by humans (on purpose and accidentally). Some studies say that diverse ecosystems are more resilient and resist invasive plants and animals. Many studies cite effects of invasive species on natives, but not extinctions. Invasive species seem to increase local diversity (alpha diversity), which decreases the turnover of diversity (beta diversity). Overall gamma diversity may be lowered because species are going extinct due to other causes, but even some of the most insidious invaders (e.g.: Dutch elm disease, emerald ash borer, chestnut blight in North America) have not caused their host species to become extinct. Extirpation, population decline and homogenization of regional biodiversity are much more common. Human activities have frequently been the cause of invasive species circumventing their barriers, by introducing them for food and other purposes. Human activities therefore allow species to migrate to new areas (and thus become invasive) on time scales much shorter than those historically required for a species to extend its range. At present, several countries have already imported so many exotic species, particularly agricultural and ornamental plants, that their indigenous fauna/flora may be outnumbered. For example, the introduction of kudzu from Southeast Asia to Canada and the United States has threatened biodiversity in certain areas. Another example is pines, which have invaded forests, shrublands and grasslands in the southern hemisphere. Hybridization and genetic pollution Endemic species can be threatened with extinction through the process of genetic pollution, i.e. uncontrolled hybridization, introgression and genetic swamping. Genetic pollution leads to homogenization or replacement of local genomes as a result of a numerical and/or fitness advantage of an introduced species. Hybridization and introgression are side-effects of introduction and invasion. These phenomena can be especially detrimental to rare species that come into contact with more abundant ones. The abundant species can interbreed with the rare species, swamping its gene pool. This problem is not always apparent from morphological (outward appearance) observations alone. Some degree of gene flow is a normal adaptation, and not all gene and genotype constellations can be preserved. However, hybridization with or without introgression may, nevertheless, threaten a rare species' existence. Conservation Conservation biology matured in the mid-20th century as ecologists, naturalists and other scientists began to research and address issues pertaining to global biodiversity declines. The conservation ethic advocates management of natural resources for the purpose of sustaining biodiversity in species, ecosystems, the evolutionary process and human culture and society. 
Conservation biology is reforming around strategic plans to protect biodiversity. Preserving global biodiversity is a priority in strategic conservation plans that are designed to engage public policy and concerns affecting local, regional and global scales of communities, ecosystems and cultures. Action plans identify ways of sustaining human well-being, employing natural capital, macroeconomic policies including economic incentives, and ecosystem services. In the EU Directive 1999/22/EC, zoos are described as having a role in the preservation of the biodiversity of wild animals by conducting research or participating in breeding programs. Protection and restoration techniques Removal of exotic species will allow the species that they have negatively impacted to recover their ecological niches. Exotic species that have become pests can be identified taxonomically (e.g., with the Digital Automated Identification SYstem (DAISY), using the barcode of life). Removal is practical only given large groups of individuals due to the economic cost. As sustainable populations of the remaining native species in an area become assured, "missing" species that are candidates for reintroduction can be identified using databases such as the Encyclopedia of Life and the Global Biodiversity Information Facility. Biodiversity banking places a monetary value on biodiversity. One example is the Australian Native Vegetation Management Framework. Gene banks are collections of specimens and genetic material. Some banks intend to reintroduce banked species to the ecosystem (e.g., via tree nurseries). Reduction and better targeting of pesticides allows more species to survive in agricultural and urbanized areas. Location-specific approaches may be less useful for protecting migratory species. One approach is to create wildlife corridors that correspond to the animals' movements. National and other boundaries can complicate corridor creation. Protected areas Protected areas, including forest reserves and biosphere reserves, serve many functions, including affording protection to wild animals and their habitat. Protected areas have been set up all over the world with the specific aim of protecting and conserving plants and animals. Some scientists have called on the global community to designate 30 percent of the planet as protected areas by 2030, and 50 percent by 2050, in order to mitigate biodiversity loss from anthropogenic causes. The target of protecting 30% of the area of the planet by the year 2030 (30 by 30) was adopted by almost 200 countries in the 2022 United Nations Biodiversity Conference. At the moment of adoption (December 2022), 17% of land territory and 10% of ocean territory were protected. In a study published on 4 September 2020 in Science Advances, researchers mapped out regions that can help meet critical conservation and climate goals. Protected areas safeguard nature and cultural resources and contribute to livelihoods, particularly at local level. There are over 238,563 designated protected areas worldwide, equivalent to 14.9 percent of the Earth's land surface, varying in their extension, level of protection, and type of management (IUCN, 2018). The benefits of protected areas extend beyond their immediate environment and time. In addition to conserving nature, protected areas are crucial for securing the long-term delivery of ecosystem services. 
They provide numerous benefits including the conservation of genetic resources for food and agriculture, the provision of medicine and health benefits, the provision of water, recreation and tourism, and acting as a buffer against disaster. Increasingly, there is acknowledgement of the wider socioeconomic values of these natural ecosystems and of the ecosystem services they can provide. National parks and wildlife sanctuaries A national park is a large natural or near natural area set aside to protect large-scale ecological processes, which also provide a foundation for environmentally and culturally compatible, spiritual, scientific, educational, recreational and visitor opportunities. These areas are selected by governments or private organizations to protect natural biodiversity along with its underlying ecological structure and supporting environmental processes, and to promote education and recreation. The International Union for Conservation of Nature (IUCN), and its World Commission on Protected Areas (WCPA), has defined "National Park" as its Category II type of protected areas. Wildlife sanctuaries aim only at the conservation of species. Forest protected areas Forest protected areas are a subset of all protected areas in which a significant portion of the area is forest. This may be the whole or only a part of the protected area. Globally, 18 percent of the world's forest area, or more than 700 million hectares, falls within legally established protected areas such as national parks, conservation areas and game reserves. There is an estimated 726 million ha of forest in protected areas worldwide. Of the six major world regions, South America has the highest share of forests in protected areas, 31 percent. The forests play a vital role in harboring more than 45,000 floral and 81,000 faunal species, of which 5,150 floral and 1,837 faunal species are endemic. In addition, there are 60,065 different tree species in the world. Plant and animal species confined to a specific geographical area are called endemic species. In forest reserves, rights to activities like hunting and grazing are sometimes given to communities living on the fringes of the forest, who sustain their livelihood partially or wholly from forest resources or products. Approximately 50 million hectares (or 24%) of European forest land is protected for biodiversity and landscape protection. Forests allocated for soil, water, and other ecosystem services encompass around 72 million hectares (32% of European forest area). Role of society Transformative change In 2019, a summary for policymakers of the largest, most comprehensive study to date of biodiversity and ecosystem services, the Global Assessment Report on Biodiversity and Ecosystem Services, was published by the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES). It stated that "the state of nature has deteriorated at an unprecedented and accelerating rate". To fix the problem, humanity will need a transformative change, including sustainable agriculture, reductions in consumption and waste, fishing quotas and collaborative water management. The concept of nature-positive is playing a role in mainstreaming the goals of the Global Biodiversity Framework (GBF) for biodiversity. The aim of mainstreaming is to embed biodiversity considerations into public and private practice to conserve and sustainably use biodiversity on global and local levels. 
The concept of nature-positive refers to the societal goal to halt and reverse biodiversity loss, measured from a baseline of 2020 levels, and to achieve full so-called "nature recovery" by 2050. Citizen science Citizen science, also known as public participation in scientific research, has been widely used in environmental sciences and is particularly popular in a biodiversity-related context. It has been used to enable scientists to involve the general public in biodiversity research, thereby enabling the scientists to collect data that they would otherwise not have been able to obtain. Volunteer observers have made significant contributions to on-the-ground knowledge about biodiversity, and recent improvements in technology have helped increase the flow and quality of occurrences from citizen sources. A 2016 study published in Biological Conservation documents the massive contributions that citizen scientists already make to data mediated by the Global Biodiversity Information Facility (GBIF). Despite some limitations of the dataset-level analysis, it is clear that nearly half of all occurrence records shared through the GBIF network come from datasets with significant volunteer contributions. Recording and sharing observations are enabled by several global-scale platforms, including iNaturalist and eBird. Legal status International United Nations Convention on Biological Diversity (1992) and Cartagena Protocol on Biosafety; UN BBNJ (High Seas Treaty) 2023 Intergovernmental conference on an international legally binding instrument under the UNCLOS on the conservation and sustainable use of marine biological diversity of areas beyond national jurisdiction (GA resolution 72/249) Convention on International Trade in Endangered Species (CITES); Ramsar Convention (Wetlands); Bonn Convention on Migratory Species; UNESCO Convention concerning the Protection of the World's Cultural and Natural Heritage (indirectly by protecting biodiversity habitats) UNESCO Global Geoparks Regional Conventions such as the Apia Convention Bilateral agreements such as the Japan-Australia Migratory Bird Agreement. Global agreements such as the Convention on Biological Diversity give "sovereign national rights over biological resources" (not property). The agreements commit countries to "conserve biodiversity", "develop resources for sustainability" and "share the benefits" resulting from their use. Biodiverse countries that allow bioprospecting or collection of natural products expect a share of the benefits rather than allowing the individual or institution that discovers/exploits the resource to capture them privately. Bioprospecting can become a type of biopiracy when such principles are not respected. Sovereignty principles can rely upon what is better known as Access and Benefit Sharing Agreements (ABAs). The Convention on Biodiversity implies informed consent between the source country and the collector, to establish which resource will be used and for what, and to settle on a fair agreement on benefit sharing. On 19 December 2022, during the 2022 United Nations Biodiversity Conference, every country on Earth, with the exception of the United States and the Holy See, signed on to the agreement, which includes protecting 30% of land and oceans by 2030 (30 by 30) and 22 other targets intended to reduce biodiversity loss. The agreement also includes restoring 30% of the Earth's degraded ecosystems and increasing funding for biodiversity issues. 
European Union In May 2020, the European Union published its Biodiversity Strategy for 2030. The biodiversity strategy is an essential part of the climate change mitigation strategy of the European Union. Of the 25% of the European budget that will go to fighting climate change, a large part will go to restoring biodiversity and nature-based solutions. The EU Biodiversity Strategy for 2030 includes the following targets: Protect 30% of the sea territory and 30% of the land territory, especially old-growth forests. Plant 3 billion trees by 2030. Restore at least 25,000 kilometers of rivers, so they will become free-flowing. Reduce the use of pesticides by 50% by 2030. Increase organic farming. The linked EU program From Farm to Fork sets a target of making 25% of EU agriculture organic by 2030. Increase biodiversity in agriculture. Provide €20 billion per year for the issue and make it part of business practice. Approximately half of the global GDP depends on nature. In Europe many parts of the economy that generate trillions of euros per year depend on nature. The benefits of Natura 2000 alone in Europe are €200–300 billion per year. National level laws Biodiversity is taken into account in some political and judicial decisions: The relationship between law and ecosystems is very ancient and has consequences for biodiversity. It is related to private and public property rights. It can define protection for threatened ecosystems, but also some rights and duties (for example, fishing and hunting rights). Law regarding species is more recent. It defines species that must be protected because they may be threatened by extinction. The U.S. Endangered Species Act is an example of an attempt to address the "law and species" issue. Laws regarding gene pools are only about a century old. Domestication and plant breeding methods are not new, but advances in genetic engineering have led to tighter laws covering distribution of genetically modified organisms, gene patents and process patents. Governments struggle to decide whether to focus on, for example, genes, genomes, or organisms and species. Uniform approval for use of biodiversity as a legal standard has not been achieved, however. Bosselman argues that biodiversity should not be used as a legal standard, claiming that the remaining areas of scientific uncertainty cause unacceptable administrative waste and increase litigation without promoting preservation goals. India passed the Biological Diversity Act in 2002 for the conservation of biological diversity in India. The Act also provides mechanisms for equitable sharing of benefits from the use of traditional biological resources and knowledge. History of the term 1916 – The term biological diversity was used first by J. Arthur Harris in "The Variable Desert", Scientific American: "The bare statement that the region contains a flora rich in genera and species and of diverse geographic origin or affinity is entirely inadequate as a description of its real biological diversity." 1967 – Raymond F. Dasmann used the term biological diversity in reference to the richness of living nature that conservationists should protect in his book A Different Kind of Country. 1974 – The term natural diversity was introduced by John Terborgh. 1980 – Thomas Lovejoy introduced the term biological diversity to the scientific community in a book. It rapidly became commonly used. 1985 – According to Edward O. Wilson, the contracted form biodiversity was coined by W. G. 
Rosen: "The National Forum on BioDiversity ... was conceived by Walter G. Rosen ... Dr. Rosen represented the NRC/NAS throughout the planning stages of the project. Furthermore, he introduced the term biodiversity". 1985 – The term "biodiversity" appears in the article, "A New Plan to Conserve the Earth's Biota" by Laura Tangley. 1988 – The term biodiversity first appeared in publication. 1988 to Present – The United Nations Environment Programme (UNEP) Ad Hoc Working Group of Experts on Biological Diversity began working in November 1988, leading to the publication of the draft Convention on Biological Diversity in May 1992. Since this time, there have been 16 Conferences of the Parties (COPs) to discuss potential global political responses to biodiversity loss. Most recently, COP 16 was held in Cali, Colombia, in 2024. See also Ecological indicator Genetic diversity Global biodiversity Index of biodiversity articles International Day for Biological Diversity Megadiverse countries Soil biodiversity Species diversity 30 by 30 Artificialization References External links Assessment Report on Diverse Values and Valuation of Nature by the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES), 2022. NatureServe: This site serves as a portal for accessing several types of publicly available biodiversity data Biodiversity Synthesis Report (PDF) by the Millennium Ecosystem Assessment (MA, 2005) World Map of Biodiversity an interactive map from the United Nations Environment Programme World Conservation Monitoring Centre Biodiversity Heritage Library – Open access digital library of historical taxonomic literature Biodiversity PMC – Open access digital library of biodiversity and ecological literature Mapping of biodiversity Encyclopedia of Life – Documenting all species of life on Earth. Biodiversity Biogeography Population genetics Species
Biodiversity
[ "Biology" ]
9,980
[ "Biogeography", "Biodiversity" ]
45,132
https://en.wikipedia.org/wiki/Truth%20condition
In semantics and pragmatics, a truth condition is the condition under which a sentence is true. For example, "It is snowing in Nebraska" is true precisely when it is snowing in Nebraska. Truth conditions of a sentence do not necessarily reflect current reality. They are merely the conditions under which the statement would be true. More formally, a truth condition is what makes for the truth of a sentence in an inductive definition of truth (for details, see the semantic theory of truth). Understood this way, truth conditions are theoretical entities. To illustrate with an example: suppose that, in a particular truth theory (one that aims to characterize truth while relying as little as possible on semantic terms), the word "Nixon" refers to Richard M. Nixon, and "is alive" is associated with the set of currently living things. Then one way of representing the truth condition of "Nixon is alive" is as the ordered pair <Nixon, {x: x is alive}>. And we say that "Nixon is alive" is true if and only if the referent of "Nixon" belongs to the set associated with "is alive", that is, if and only if Nixon is alive. In semantics, the truth condition of a sentence is almost universally considered distinct from its meaning. The meaning of a sentence is conveyed if the truth conditions for the sentence are understood. Additionally, there are many sentences that are understood although their truth condition is uncertain. One popular argument for this view is that some sentences are necessarily true—that is, they are true whatever happens to obtain. All such sentences have the same truth conditions, but arguably do not thereby have the same meaning. Likewise, the sets {x: x is alive} and {x: x is alive and x is not a rock} are identical—they have precisely the same members—but presumably the sentences "Nixon is alive" and "Nixon is alive and is not a rock" have different meanings. See also Slingshot argument Truth-conditional semantics Semantic theory of truth Notes and references Iten, C. (2005). Linguistic meaning, truth conditions and relevance: The case of concessives. Basingstoke, Hampshire; New York: Palgrave Macmillan. External links An interview with John McWhorter on Donald Trump's linguistics, in particular his lack of truth conditions. Semantics Logical truth
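As a purely illustrative aside (not part of the article above), the set-theoretic reading of truth conditions, a name's referent, a predicate's extension, and a membership test, can be sketched in a few lines of code. The mini-lexicon, entity labels and helper function below are invented for this example under those assumptions; they are not drawn from the article or from any standard semantics library.

# Illustrative sketch only: truth conditions modeled as set membership.
# The lexicon entries, entity names and the is_true helper are hypothetical.

extensions = {
    # Each predicate is associated with a set of entities (its extension).
    "is alive": {"an_oak_tree", "a_house_cat"},
    "is a rock": {"a_granite_boulder"},
}

referents = {
    # Each proper name picks out a single entity.
    "Nixon": "Richard_M_Nixon",
}

def is_true(name, predicate):
    """A sentence of the form '<name> <predicate>' counts as true in this toy
    model iff the referent of the name belongs to the set associated with the
    predicate."""
    return referents[name] in extensions[predicate]

# "Nixon is alive" comes out false here, because the entity Richard_M_Nixon
# is not a member of the set associated with "is alive".
print(is_true("Nixon", "is alive"))  # False

The sketch only mirrors the membership condition described above; it says nothing about how meanings, as opposed to truth conditions, would be represented.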
Truth condition
[ "Mathematics" ]
492
[ "Mathematical logic", "Logical truth" ]
45,145
https://en.wikipedia.org/wiki/Hubble%20sequence
The Hubble sequence is a morphological classification scheme for galaxies published by Edwin Hubble in 1926. It is often colloquially known as the Hubble tuning-fork diagram because the shape in which it is traditionally represented resembles a tuning fork. It was invented by John Henry Reynolds and Sir James Jeans. The tuning fork scheme divided regular galaxies into three broad classes – ellipticals, lenticulars and spirals – based on their visual appearance (originally on photographic plates). A fourth class contains galaxies with an irregular appearance. The Hubble sequence is the most commonly used system for classifying galaxies, both in professional astronomical research and in amateur astronomy. Classes of galaxies Ellipticals On the left (in the sense that the sequence is usually drawn) lie the ellipticals. Elliptical galaxies have relatively smooth, featureless light distributions and appear as ellipses in photographic images. They are denoted by the letter E, followed by an integer n representing their degree of ellipticity in the sky. By convention, n is ten times the ellipticity of the galaxy, rounded to the nearest integer, where the ellipticity e is defined as e = 1 − b/a for an ellipse with semi-major axis length a and semi-minor axis length b. The ellipticity increases from left to right on the Hubble diagram, with near-circular (E0) galaxies situated on the very left of the diagram. It is important to note that the ellipticity of a galaxy on the sky is only indirectly related to the true 3-dimensional shape (for example, a flattened, discus-shaped galaxy can appear almost round if viewed face-on or highly elliptical if viewed edge-on). Observationally, the most flattened "elliptical" galaxies have ellipticities of about 0.7 (denoted E7). However, from studying the light profiles and the ellipticity profiles, rather than just looking at the images, it was realised in the 1960s that the E5–E7 galaxies are probably misclassified lenticular galaxies with large-scale disks seen at various inclinations to our line-of-sight. Observations of the kinematics of early-type galaxies further confirmed this. Examples of elliptical galaxies: M49, M59, M60, M87, NGC 4125. Lenticulars At the centre of the Hubble tuning fork, where the two spiral-galaxy branches and the elliptical branch join, lies an intermediate class of galaxies known as lenticulars and given the symbol S0. These galaxies consist of a bright central bulge, similar in appearance to an elliptical galaxy, surrounded by an extended, disk-like structure. Unlike spiral galaxies, the disks of lenticular galaxies have no visible spiral structure and are not actively forming stars in any significant quantity. When simply looking at a galaxy's image, lenticular galaxies with relatively face-on disks are difficult to distinguish from ellipticals of type E0–E3, making the classification of many such galaxies uncertain. When viewed edge-on, the disk becomes more apparent and prominent dust-lanes are sometimes visible in absorption at optical wavelengths. At the time of the initial publication of Hubble's galaxy classification scheme, the existence of lenticular galaxies was purely hypothetical. Hubble believed that they were necessary as an intermediate stage between the highly flattened "ellipticals" and spirals. Later observations (by Hubble himself, among others) showed Hubble's belief to be correct and the S0 class was included in the definitive exposition of the Hubble sequence by Allan Sandage. 
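As an illustrative aside (not part of the original article), the E-type numbering convention described in the Ellipticals section above, n = 10(1 − b/a) rounded to the nearest integer, can be sketched in a few lines of code. The function name and the example axis values below are invented for this sketch and carry no observational significance.

# Minimal sketch of the E-type convention: n = 10 * (1 - b/a), rounded to the
# nearest integer, where a and b are the apparent semi-major and semi-minor
# axis lengths. Hypothetical function name and example values.

def hubble_e_class(semi_major, semi_minor):
    """Return the Hubble E-type label for an apparent axis ratio.
    Assumes semi_major >= semi_minor > 0; purely illustrative."""
    if semi_minor > semi_major or semi_minor <= 0:
        raise ValueError("expected semi_major >= semi_minor > 0")
    ellipticity = 1.0 - semi_minor / semi_major
    return "E{}".format(round(10 * ellipticity))

# A nearly circular image (b close to a) classifies as E0, while a strongly
# flattened image with b = 0.3 a classifies as E7, the flattest class observed.
print(hubble_e_class(1.0, 0.97))  # E0
print(hubble_e_class(1.0, 0.30))  # E7

Note that, as the article stresses, such an apparent ellipticity only constrains, rather than determines, a galaxy's true three-dimensional shape.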
Missing from the Hubble sequence are the early-type galaxies with intermediate-scale disks, in between the E0 and S0 types; Martha Liller denoted them ES galaxies in 1966. Lenticular and spiral galaxies, taken together, are often referred to as disk galaxies. The bulge-to-disk flux ratio in lenticular galaxies can take on a range of values, just as it does for each of the spiral galaxy morphological types (Sa, Sb, etc.). Examples of lenticular galaxies: M85, M86, NGC 1316, NGC 2787, NGC 5866, Centaurus A. Spirals On the right of the Hubble sequence diagram are two parallel branches encompassing the spiral galaxies. A spiral galaxy consists of a flattened disk, with stars forming a (usually two-armed) spiral structure, and a central concentration of stars known as the bulge. Roughly half of all spirals are also observed to have a bar-like structure, with the bar extending from the central bulge and the arms beginning at the ends of the bar. In the tuning-fork diagram, the regular spirals occupy the upper branch and are denoted by the letter S, while the lower branch contains the barred spirals, given the symbol SB. Both types of spirals are further subdivided according to the detailed appearance of their spiral structures. Membership of one of these subdivisions is indicated by adding a lower-case letter to the morphological type, as follows: Sa (SBa) – tightly wound, smooth arms; large, bright central bulge Sb (SBb) – less tightly wound spiral arms than Sa (SBa); somewhat fainter bulge Sc (SBc) – loosely wound spiral arms, clearly resolved into individual stellar clusters and nebulae; smaller, fainter bulge Hubble originally described three classes of spiral galaxy. This was extended by Gérard de Vaucouleurs to include a fourth class: Sd (SBd) – very loosely wound, fragmentary arms; most of the luminosity is in the arms and not the bulge Although strictly part of the de Vaucouleurs system of classification, the Sd class is often included in the Hubble sequence. The basic spiral types can be extended to enable finer distinctions of appearance. For example, spiral galaxies whose appearance is intermediate between two of the above classes are often identified by appending two lower-case letters to the main galaxy type (for example, Sbc for a galaxy that is intermediate between an Sb and an Sc). Our own Milky Way is generally classed as Sc or SBc, making it a barred spiral with well-defined arms. Examples of regular spiral galaxies: (visually) M31 (Andromeda Galaxy), M74, M81, M104 (Sombrero Galaxy), M51a (Whirlpool Galaxy), NGC 300, NGC 772. Examples of barred spiral galaxies: M91, M95, NGC 1097, NGC 1300, NGC 1672, NGC 2536, NGC 2903. Irregulars Galaxies that do not fit into the Hubble sequence, because they have no regular structure (either disk-like or ellipsoidal), are termed irregular galaxies. Hubble defined two classes of irregular galaxy: Irr I galaxies have asymmetric profiles and lack a central bulge or obvious spiral structure; instead they contain many individual clusters of young stars Irr II galaxies have smoother, asymmetric appearances and are not clearly resolved into individual stars or stellar clusters In his extension to the Hubble sequence, de Vaucouleurs called the Irr I galaxies 'Magellanic irregulars', after the Magellanic Clouds – two satellites of the Milky Way which Hubble classified as Irr I. 
The discovery of a faint spiral structure in the Large Magellanic Cloud led de Vaucouleurs to further divide the irregular galaxies into those that, like the LMC, show some evidence for spiral structure (these are given the symbol Sm) and those that have no obvious structure, such as the Small Magellanic Cloud (denoted Im). In the extended Hubble sequence, the Magellanic irregulars are usually placed at the end of the spiral branch of the Hubble tuning fork. Examples of irregular galaxies: M82, NGC 1427A, Large Magellanic Cloud, Small Magellanic Cloud. Physical significance Elliptical and lenticular galaxies are commonly referred to together as "early-type" galaxies, while spirals and irregular galaxies are referred to as "late types". This nomenclature is the source of the common, but erroneous, belief that the Hubble sequence was intended to reflect a supposed evolutionary sequence, from elliptical galaxies through lenticulars to either barred or regular spirals. In fact, Hubble was clear from the beginning that no such interpretation was implied: The nomenclature, it is emphasized, refers to position in the sequence, and temporal connotations are made at one's peril. The entire classification is purely empirical and without prejudice to theories of evolution... The evolutionary picture appears to be lent weight by the fact that the disks of spiral galaxies are observed to be home to many young stars and regions of active star formation, while elliptical galaxies are composed of predominantly old stellar populations. In fact, current evidence suggests the opposite: the early Universe appears to be dominated by spiral and irregular galaxies. In the currently favored picture of galaxy formation, present-day ellipticals formed as a result of mergers between these earlier building blocks; while some lenticular galaxies may have formed this way, others may have accreted their disks around pre-existing spheroids. Some lenticular galaxies may also be evolved spiral galaxies, whose gas has been stripped away leaving no fuel for continued star formation, although the galaxy LEDA 2108986 opens the debate on this. Shortcomings A common criticism of the Hubble scheme is that the criteria for assigning galaxies to classes are subjective, leading to different observers assigning galaxies to different classes (although experienced observers usually agree to within less than a single Hubble type). Although not really a shortcoming, since the 1961 Hubble Atlas of Galaxies, the primary criterion used to assign the morphological type (a, b, c, etc.) has been the nature of the spiral arms, rather than the bulge-to-disk flux ratio, and thus a range of flux ratios exist for each morphological type, as with the lenticular galaxies. Another criticism of the Hubble classification scheme is that, being based on the appearance of a galaxy in a two-dimensional image, the classes are only indirectly related to the true physical properties of galaxies. In particular, problems arise because of orientation effects. The same galaxy would look very different if viewed edge-on, as opposed to a face-on or 'broadside' viewpoint. As such, the early-type sequence is poorly represented: the ES galaxies are missing from the Hubble sequence, and the E5–E7 galaxies are actually S0 galaxies. Furthermore, the barred ES and barred S0 galaxies are also absent. 
Visual classifications are also less reliable for faint or distant galaxies, and the appearance of galaxies can change depending on the wavelength of light in which they are observed. Nonetheless, the Hubble sequence is still commonly used in the field of extragalactic astronomy and Hubble types are known to correlate with many physically relevant properties of galaxies, such as luminosities, colours, masses (of stars and gas) and star formation rates. In June 2019, citizen scientists in the Galaxy Zoo project argued that the usual Hubble classification, particularly concerning spiral galaxies, may not be supported by evidence. Consequently, the scheme may need revision. See also Galaxy color–magnitude diagram Galaxy morphological classification References External links Sequence Astronomical classification systems Extragalactic astronomy
Hubble sequence
[ "Astronomy" ]
2,294
[ "Extragalactic astronomy", "Astronomical sub-disciplines", "Astronomical classification systems" ]