42,960
https://en.wikipedia.org/wiki/Fractal%20transform
The fractal transform is a technique invented by Michael Barnsley et al. to perform lossy image compression. This first practical fractal compression system for digital images resembles a vector quantization system using the image itself as the codebook.

Fractal transform compression
Start with a digital image A1. Downsample it by a factor of 2 to produce image A2. Now, for each block B1 of 4x4 pixels in A1, find the block B2 in A2 most similar to B1, and then find the grayscale or RGB offset and gain that map B2 onto B1. For each destination block, output the position of the source block and the color offset and gain.

Fractal transform decompression
Starting with an empty destination image A1, repeat the following algorithm several times: downsample A1 by a factor of 2 to produce image A2, then copy blocks from A2 to A1 as directed by the compressed data, multiplying by the respective gains and adding the respective color offsets. This algorithm is guaranteed to converge to an image, and it should appear similar to the original image. In fact, a slight modification of the decompressor to run at block sizes larger than 4x4 pixels produces a method of stretching images without the blockiness or blurriness of traditional linear resampling algorithms.

Patents
The basic patents covering fractal image compression, U.S. Patents 4,941,193, 5,065,447, 5,384,867, 5,416,856, and 5,430,812, appear to have expired.

See also
Image compression

External links
E2 writeup

Fractals
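A minimal Python sketch of the compress/decompress loops described in the entry above, for grayscale images only; the brute-force source-block search, the least-squares gain/offset fit, and the iteration count are illustrative choices rather than details fixed by the entry:

import numpy as np

def downsample(img):
    # Halve each dimension by averaging 2x2 pixel groups.
    return (img[0::2, 0::2] + img[1::2, 0::2] +
            img[0::2, 1::2] + img[1::2, 1::2]) / 4.0

def compress(img, bs=4):
    small = downsample(img)
    codes = []
    for y in range(0, img.shape[0], bs):
        for x in range(0, img.shape[1], bs):
            dest = img[y:y+bs, x:x+bs].ravel()
            best = None
            # Brute-force search of the half-size image for the best source block.
            for sy in range(small.shape[0] - bs + 1):
                for sx in range(small.shape[1] - bs + 1):
                    src = small[sy:sy+bs, sx:sx+bs].ravel()
                    # Least-squares gain g and offset o so that g*src + o ~ dest.
                    sc, dc = src - src.mean(), dest - dest.mean()
                    g = (sc @ dc) / (sc @ sc + 1e-9)
                    o = dest.mean() - g * src.mean()
                    err = np.sum((g * src + o - dest) ** 2)
                    if best is None or err < best[0]:
                        best = (err, sy, sx, g, o)
            codes.append((y, x) + best[1:])
    return codes

def decompress(codes, shape, bs=4, iters=8):
    img = np.zeros(shape)  # any starting image converges to the same result
    for _ in range(iters):
        small = downsample(img)
        for y, x, sy, sx, g, o in codes:
            img[y:y+bs, x:x+bs] = g * small[sy:sy+bs, sx:sx+bs] + o
    return img

The factor-of-2 downsampling makes each decompression pass a contraction, which is why convergence does not depend on the starting image.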
Fractal transform
[ "Mathematics" ]
348
[ "Mathematical analysis", "Functions and mappings", "Mathematical analysis stubs", "Mathematical objects", "Fractals", "Mathematical relations" ]
1,489,962
https://en.wikipedia.org/wiki/Air-independent%20propulsion
Air-independent propulsion (AIP), or air-independent power, is any marine propulsion technology that allows a non-nuclear submarine to operate without access to atmospheric oxygen (by surfacing or using a snorkel). AIP can augment or replace the diesel-electric propulsion system of non-nuclear vessels.

Modern non-nuclear submarines are potentially stealthier than nuclear submarines: although some modern submarine reactors are designed to rely on natural circulation, most naval nuclear reactors use pumps to circulate the reactor coolant constantly, generating some amount of detectable noise. Non-nuclear submarines running on battery power or AIP, on the other hand, can be virtually silent. While nuclear-powered designs still dominate in submergence time, speed, range and deep-ocean performance, small, high-tech non-nuclear attack submarines can be highly effective in coastal operations and pose a significant threat to less-stealthy and less-maneuverable nuclear submarines.

AIP is usually implemented as an auxiliary source, with the traditional diesel engine handling surface propulsion. Most such systems generate electricity, which in turn drives an electric motor for propulsion or recharges the boat's batteries. The submarine's electrical system is also used to provide "hotel services" (ventilation, lighting, heating, etc.), although these consume a small amount of power compared to that required for propulsion. AIP can be retrofitted into existing submarine hulls by inserting an additional hull section. AIP does not typically provide the endurance or power to replace atmosphere-dependent propulsion, but it allows longer underwater endurance than a conventionally propelled submarine. A typical conventional power plant provides 3 megawatts maximum, and an AIP source around 10% of that, while a nuclear submarine's propulsion plant is usually rated at well over 20 megawatts. The United States Navy uses the hull classification symbol "SSP" to designate boats powered by AIP, while retaining "SSK" for classic diesel-electric attack submarines.

History
In the development of the submarine, the problem of finding satisfactory forms of underwater propulsion has been persistent. The earliest submarines were man-powered, with hand-cranked propellers, which quickly used up the air inside; these vessels had to move for much of the time on the surface with hatches open, or use some form of breathing tube, both inherently dangerous practices that resulted in a number of early accidents. Later, mechanically driven vessels used compressed air or steam, or electricity, which had to be recharged from shore or from an on-board aerobic engine.

The earliest attempt at a fuel that would burn anaerobically was in 1867, when Spanish engineer Narciso Monturiol successfully developed a chemically powered anaerobic, or air-independent, steam engine. The engine was powered by a mixture of potassium chlorate and zinc, which reacted to generate heat and, conveniently, oxygen. In 1908 the Imperial Russian Navy launched the submarine Pochtovy, which used a gasoline engine fed with compressed air and exhausted under water. These two approaches, the use of a fuel that provides energy to an open-cycle system, and the provision of oxygen to an aerobic engine in a closed cycle, characterize AIP today.

Types
Air-independent propulsion (non-nuclear) can take various forms. All currently active AIP submarines require oxygen for AIP, which is commonly stored as a liquid (LOX).
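Endurance on AIP scales with the oxygen carried. A crude, purely illustrative Python estimate follows; the LOX tonnage, specific oxygen consumption, and power draw are assumed numbers, not figures from this entry:

# Rough AIP endurance estimate (illustrative assumptions throughout).
lox_tonnes = 30.0        # assumed liquid-oxygen capacity
o2_kg_per_kwh = 0.8      # assumed O2 consumed per kWh of AIP output
aip_power_kw = 300.0     # ~10% of a 3 MW conventional plant, as quoted above

hours = (lox_tonnes * 1000.0) / (o2_kg_per_kwh * aip_power_kw)
print(f"endurance ~ {hours:.0f} h ~ {hours / 24:.1f} days at {aip_power_kw:.0f} kW")
# With these numbers: ~125 h, about 5 days at full AIP power; real boats manage
# two to three weeks by running far below peak AIP power for most of a patrol.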
AIP submarine range is primarily limited by the amount of LOX the boat can carry.

Open-cycle systems
During World War II the German firm Walter experimented with submarines that used high-test (concentrated) hydrogen peroxide as their source of oxygen under water. These used steam turbines, employing steam heated by burning diesel fuel in the steam/oxygen atmosphere created by the decomposition of hydrogen peroxide by a potassium permanganate catalyst. Several experimental boats were produced, though the work did not mature into any viable combat vessels. One drawback was the instability and scarcity of the fuel involved. Another was that while the system produced high underwater speeds, it was extravagant with fuel; the first boat, V-80, required 28 tons of fuel to travel , and the final designs were little better.

After the war one Type XVII boat, U-1407, which had been scuttled at the end of World War II, was salvaged and recommissioned into the Royal Navy as HMS Meteorite. The British built two improved models in the late 1950s, HMS Explorer and HMS Excalibur. Meteorite was not popular with its crews, who regarded it as dangerous and volatile; she was officially described as 75% safe. The reputations of Excalibur and Explorer were little better; the boats were nicknamed Excruciater and Exploder.

The Soviet Union also experimented with the technology, and one experimental boat was built that used hydrogen peroxide in a Walter engine. The United States also received a Type XVII boat, U-1406, and went on to begin two AIP submarine projects. Project SCB 66 developed an experimental midget submarine, X-1, which was launched in September 1955. It was originally powered by a hydrogen peroxide/diesel engine and battery system until an explosion of her hydrogen peroxide supply on 20 May 1957; X-1 was later converted to diesel-electric propulsion.

The second U.S. Navy project was a full-sized AIP submarine under SCB 67 in 1950, later SCB 67A. This submarine, designated SSX, would have had one of three propulsion plants then under development: a Walter open-cycle hydrogen peroxide plant (termed Alton), a liquid oxygen steam plant (Ellis), and an AIP gas turbine (Wolverine). By late 1951 the Navy realized that while the competing nuclear designs were heavier due to shielding, they were more compact than the three AIP plants: the SSX would be longer than the SSN by nearly 40 feet. The SSN would also likely be quieter and less complicated than the AIP technology of the time. By 1952 the nuclear reactors were so far along in development that it appeared the SSX would not be needed as a stopgap, and the project was cancelled on 26 October 1953. The USSR and the UK, the only other countries known to be experimenting with the technology at that time, also abandoned it when the US developed a nuclear reactor small enough for submarine propulsion. Other nations, including Germany and Sweden, would later recommence AIP development. The technology was retained for propelling torpedoes by the British and the Soviet Union, although it was hastily abandoned by the former following the tragedy; both this and a later submarine loss were due to accidents involving hydrogen peroxide propelled torpedoes.

Closed-cycle diesel engines
This technology uses a submarine diesel engine that can be operated conventionally on the surface but can also be provided with oxidant, usually stored as liquid oxygen, when submerged. Since the metal of an engine would burn in pure oxygen, the oxygen is usually diluted with recycled exhaust gas; argon replaces the exhaust gas when the engine is started.
In the late 1930s the Soviet Union experimented with closed-cycle engines, and a number of small M-class vessels were built using the REDO system, but none were completed before the German invasion in 1941. During World War II the Germans experimented with such a system as an alternative to the Walter peroxide system, designing variants of their Type XVII U-boat and their Type XXVIIB midget submarine, the Type XVIIK and Type XXVIIK respectively, though neither was completed before the war's end.

After the war the USSR developed the small 650-ton Quebec-class submarine, of which thirty were built between 1953 and 1956. These had three diesel engines: two were conventional and one was closed-cycle using liquid oxygen. In the Soviet system, called a "single propulsion system", oxygen was added after the exhaust gases had been filtered through a lime-based chemical absorbent. The submarine could also run its diesels using a snorkel. The Quebec had three drive shafts: a 32D diesel on the centre shaft and two M-50P diesels on the outer shafts. In addition, a "creep" motor was coupled to the centre shaft, and the boat could be run at slow speed using the centreline diesel only. Because liquid oxygen cannot be stored indefinitely, these boats could not operate far from a base. The design was dangerous: at least seven of the submarines suffered explosions, and one of them sank following an explosion and fire. They were sometimes nicknamed cigarette lighters. The last submarine using this technology was scrapped in the early 1970s. The German Navy's former Type 205 submarine (launched 1967) was fitted with an experimental unit.

Closed-cycle steam turbines
The French MESMA system is offered by the French shipyard DCNS. MESMA is available for the Agosta 90B and Scorpène classes. It is essentially a modified version of their nuclear propulsion system, with the heat generated by ethanol and oxygen: a conventional steam turbine power plant is powered by steam generated from the combustion of ethanol and stored oxygen at a pressure of 60 atmospheres. This pressure-firing allows exhaust carbon dioxide to be expelled overboard at any depth without an exhaust compressor. Each MESMA system costs around $50–60 million. As installed on the Scorpènes, it requires adding a 305-tonne hull section to the submarine, and results in a submarine able to operate underwater for more than 21 days, depending on variables such as speed. On the Agosta 90B, the AIP system allows the submarine to operate 16 days under water. An article in Undersea Warfare Magazine notes that "although MESMA can provide higher output power than the other alternatives, its inherent efficiency is the lowest of the four AIP candidates, and its rate of oxygen consumption is correspondingly higher."

Stirling cycle engines
The Swedish shipbuilder Kockums constructed three Gotland-class submarines for the Swedish Navy that are fitted with an auxiliary Stirling engine that burns diesel fuel with liquid oxygen to drive 75 kW electrical generators for either propulsion or charging batteries. The underwater endurance of the 1,500-tonne vessels is around 14 days at 5 knots, with an approximate range of 1,700 nautical miles. Kockums also refurbished and upgraded the older Swedish Västergötland-class submarines with a Stirling AIP plug-in section. Two (Södermanland and Östergötland) are in service in Sweden as the Södermanland class, and two others are in service in Singapore as the Archer class (Archer and Swordsman). Kockums has also delivered Stirling engines to Japan, and ten Japanese submarines have been equipped with them.
The first submarine in the Sōryū class, Sōryū, was launched on 5 December 2007 and delivered to the navy in March 2009. The eleventh boat of the class is the first to be equipped with lithium-ion batteries and no Stirling engine; this submarine may have a range of 6,500 nautical miles and can remain submerged for 40 days. The new Swedish Blekinge class has the Stirling AIP system as its main energy source; its submerged endurance will be more than 18 days at 5 knots using AIP.

Fuel cells
Siemens has developed a 30–50 kilowatt fuel cell unit, a device that converts the chemical energy of a fuel and oxidiser into electricity. Fuel cells differ from batteries in that they require a continuous source of fuel (such as hydrogen) and oxygen, carried in the vessel in pressurized tanks, to sustain the chemical reaction. Nine of these units are incorporated into Howaldtswerke-Deutsche Werft's 1,830 t submarine U-31, lead ship of the Type 212A class of the German Navy. The other boats of this class and HDW's AIP-equipped export submarines (the Type 209 mod and Type 214) use two modules, also from Siemens. The Type 212 can remain submerged for 21 days; one such submarine conducted a 1,600 nautical mile journey solely on AIP in 2016. After the export success of Howaldtswerke-Deutsche Werft, several builders developed fuel-cell auxiliary units for submarines, but no other shipyard has a contract for a submarine so equipped.

The AIP implemented on the S-80 class of the Spanish Navy is based on a bioethanol processor (provided by Hynergreen, from Abengoa) consisting of a reaction chamber and several intermediate Coprox reactors that transform the bioethanol into high-purity hydrogen. The output feeds a series of fuel cells from Collins Aerospace (which also supplied fuel cells for the Space Shuttle). The reformer is fed with bioethanol as fuel and with oxygen stored as a liquid in a high-pressure cryogenic tank; the hydrogen produced, together with more oxygen, is fed to the fuel cells.

China has been researching fuel cell engines for AIP submarines; the Dalian Institute of Chemical Physics reportedly developed 100 kW and 1 MW fuel cell engines. The Naval Materials Research Laboratory of India's Defence Research and Development Organisation, in collaboration with Larsen & Toubro and Thermax, has developed a 270 kilowatt phosphoric acid fuel cell (PAFC) to power the Kalvari-class submarines, which are based on the Scorpène design. All six Kalvari-class submarines will be retrofitted with AIP during their first upgrade. The cell produces electricity by reacting hydrogen, generated from sodium borohydride, with stored oxygen, with phosphoric acid acting as the electrolyte. The Portuguese Navy's Tridente-class submarines are also equipped with fuel cells.

Nuclear power
Air-independent propulsion is a term normally used in the context of improving the performance of conventionally propelled submarines. However, as an auxiliary power supply, nuclear power falls within the technical definition of AIP. For example, a proposal to use a small 200-kilowatt reactor for auxiliary power, styled by Atomic Energy of Canada Limited (AECL) as a "nuclear battery", could improve the under-ice capability of Canadian submarines. Nuclear reactors have been used since the 1950s to power submarines; the first such submarine was USS Nautilus, commissioned in 1954. Today, China, France, India, Russia, the United Kingdom and the United States are the only countries to have built and operated nuclear-powered submarines successfully.
Non-nuclear AIP submarines
Some 10 nations are building AIP submarines, and almost 20 nations operate AIP-based submarines.

References
Notes
Sources

Further reading

Submarine design Marine propulsion Spanish inventions
Air-independent propulsion
[ "Engineering" ]
2,894
[ "Marine propulsion", "Marine engineering" ]
1,490,017
https://en.wikipedia.org/wiki/Electroactive%20polymer
An electroactive polymer (EAP) is a polymer that exhibits a change in size or shape when stimulated by an electric field. The most common applications of this type of material are in actuators and sensors. A typical characteristic of an EAP is that it undergoes a large amount of deformation while sustaining large forces. The majority of historic actuators are made of ceramic piezoelectric materials; while these materials can withstand large forces, they commonly deform only a fraction of a percent. In the late 1990s it was demonstrated that some EAPs can exhibit up to 380% strain, far more than any ceramic actuator. One of the most common applications for EAPs is in the field of robotics in the development of artificial muscles; thus, an electroactive polymer is often referred to as an artificial muscle.

History
The field of EAPs emerged in 1880, when Wilhelm Röntgen designed an experiment to test the effect of an electrostatic field on the mechanical properties of a strip of natural rubber. The rubber strip was fixed at one end and attached to a mass at the other. Electric charges were then sprayed onto the rubber, and it was observed that the length changed. In 1925 the first piezoelectric polymer was discovered (electret). Electret was formed by combining carnauba wax, rosin and beeswax, and then cooling the solution while it was subject to an applied DC electrical bias. The mixture would then solidify into a polymeric material that exhibited a piezoelectric effect.

Polymers that respond to environmental conditions other than an applied electric current have also been a large part of this area of study. In 1949 Katchalsky et al. demonstrated that when collagen filaments are dipped in acid or alkali solutions, they respond with a change in volume: the filaments expand in an acidic solution and contract in an alkaline one. Although other stimuli (such as pH) have been investigated, due to its ease and practicality most research has been devoted to developing polymers that respond to electrical stimuli in order to mimic biological systems.

The next major breakthrough in EAPs took place in the late 1960s. In 1969 Kawai demonstrated that polyvinylidene fluoride (PVDF) exhibits a large piezoelectric effect, which sparked research interest in developing other polymers showing a similar effect. In 1977 the first electrically conducting polymers were discovered by Hideki Shirakawa et al. Shirakawa, along with Alan MacDiarmid and Alan Heeger, demonstrated that polyacetylene was electrically conductive, and that by doping it with iodine vapor they could enhance its conductivity by 8 orders of magnitude, bringing its conductance close to that of a metal. By the late 1980s a number of other polymers had been shown to exhibit a piezoelectric effect or demonstrated to be conductive.

In the early 1990s, ionic polymer-metal composites (IPMCs) were developed and shown to exhibit electroactive properties far superior to those of previous EAPs. The major advantage of IPMCs was that they showed activation (deformation) at voltages as low as 1 or 2 volts, orders of magnitude lower than any previous EAP. Not only was the activation energy for these materials much lower, they could also undergo much larger deformations: IPMCs were shown to exhibit up to 380% strain, orders of magnitude larger than previously developed EAPs.
In 1999, Yoseph Bar-Cohen proposed the Armwrestling Match of EAP Robotic Arm Against Human Challenge, in which research groups around the world competed to design a robotic arm consisting of EAP muscles that could defeat a human in an arm wrestling match. The first challenge was held at the Electroactive Polymer Actuators and Devices Conference in 2005. Another major milestone for the field was the first commercially developed device including EAPs as an artificial muscle, produced in 2002 by Eamex in Japan: a fish able to swim on its own, moving its tail using an EAP muscle. Progress in practical development has, however, not been satisfactory. DARPA-funded research in the 1990s at SRI International, led by Ron Pelrine, developed electroactive polymers using silicone and acrylic polymers; the technology was spun off into the company Artificial Muscle in 2003, with industrial production beginning in 2008. In 2010, Artificial Muscle became a subsidiary of Bayer MaterialScience.

Types
EAPs can have several configurations, but are generally divided into two principal classes: dielectric and ionic.

Dielectric
Dielectric EAPs are materials in which actuation is caused by electrostatic forces between two electrodes that squeeze the polymer. Dielectric elastomers are capable of very high strains; each is fundamentally a capacitor that changes its capacitance when a voltage is applied, as the electric field compresses the polymer in thickness and expands it in area. This type of EAP typically requires a large actuation voltage to produce high electric fields (hundreds to thousands of volts), but very low electrical power consumption, and no power is needed to keep the actuator at a given position. Examples are electrostrictive polymers and dielectric elastomers.

Ferroelectric polymers
Ferroelectric polymers are a group of crystalline polar polymers that are also ferroelectric, meaning that they maintain a permanent electric polarization that can be reversed, or switched, in an external electric field. Ferroelectric polymers, such as polyvinylidene fluoride (PVDF), are used in acoustic transducers and electromechanical actuators because of their inherent piezoelectric response, and as heat sensors because of their inherent pyroelectric response.

Electrostrictive graft polymers
Electrostrictive graft polymers consist of flexible backbone chains with branching side chains. The side chains on neighboring backbone polymers cross-link and form crystal units. The backbone and side-chain crystal units can then form polarized monomers, which contain atoms with partial charges and generate dipole moments, as shown in Figure 2. When an electric field is applied, a force acts on each partial charge, causing rotation of the whole polymer unit. This rotation causes electrostrictive strain and deformation of the polymer.

Liquid crystalline polymers
Main-chain liquid crystalline polymers have mesogenic groups linked to each other by a flexible spacer. The mesogens within a backbone form the mesophase structure, causing the polymer itself to adopt a conformation compatible with the structure of the mesophase. The direct coupling of the liquid crystalline order with the polymer conformation has given main-chain liquid crystalline elastomers a large amount of interest.
The synthesis of highly oriented elastomers leads to large-strain thermal actuation along the polymer chain direction with temperature variation, resulting in unique mechanical properties and potential applications as mechanical actuators.

Ionic
Ionic EAPs are polymers in which actuation is caused by the displacement of ions inside the polymer. Only a few volts are needed for actuation, but the ionic flow implies that higher electrical power is needed, and energy is needed to keep the actuator at a given position. Examples of ionic EAPs are conductive polymers, ionic polymer-metal composites (IPMCs), and responsive gels. Yet another example is a Bucky gel actuator, a polymer-supported layer of polyelectrolyte material consisting of an ionic liquid sandwiched between two electrode layers that consist of a gel of ionic liquid containing single-wall carbon nanotubes. The name comes from the similarity of the gel to the paper that can be made by filtering carbon nanotubes, the so-called buckypaper.

Electrorheological fluid
Electrorheological fluids change viscosity when an electric field is applied. The fluid is a suspension of polymers in a liquid with a low dielectric constant; with the application of a large electric field the viscosity of the suspension increases. Potential applications of these fluids include shock absorbers, engine mounts and acoustic dampers.

Ionic polymer-metal composite
Ionic polymer-metal composites consist of a thin ionomeric membrane with noble-metal electrodes plated on its surface, plus cations to balance the charge of the anions fixed to the polymer backbone. They are very active actuators that show very high deformation at low applied voltage and have low impedance. Ionic polymer-metal composites work through electrostatic attraction between the cationic counter-ions and the cathode of the applied electric field; a schematic representation is shown in Figure 3. These polymers show the greatest promise for bio-mimetic uses, as collagen fibers are essentially composed of natural charged ionic polymers. Nafion and Flemion are commonly used ionic polymer-metal composites.

Stimuli-responsive gels
Stimuli-responsive gels (hydrogels, when the swelling agent is an aqueous solution) are a special kind of swellable polymer network with volume phase transition behaviour. These materials reversibly change their volume, optical, mechanical and other properties in response to very small changes in certain physical (e.g. electric field, light, temperature) or chemical (concentration) stimuli. The volume change of these materials occurs by swelling/shrinking and is diffusion-based. Gels provide the biggest change in volume among solid-state materials. Combined with excellent compatibility with micro-fabrication technologies, stimuli-responsive hydrogels in particular are of strongly increasing interest for microsystems with sensors and actuators. Current fields of research and application are chemical sensor systems, microfluidics and multimodal imaging systems.

Comparison of dielectric and ionic EAPs
Dielectric polymers are able to hold their induced displacement while activated under a DC voltage, which allows them to be considered for robotic applications. These materials also have high mechanical energy density and can be operated in air without a major decrease in performance. However, dielectric polymers require very high activation fields (>10 V/μm), close to the breakdown level.
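A back-of-envelope Python sketch of the electrostatic (Maxwell) pressure that drives dielectric-elastomer actuation; the permittivity, modulus, and field strength are assumed, illustrative values, not data from this entry:

# Maxwell pressure on a dielectric elastomer film: p = eps0 * eps_r * E^2
eps0 = 8.854e-12   # vacuum permittivity, F/m
eps_r = 3.0        # assumed relative permittivity of the elastomer
E = 100e6          # assumed field, 100 V/um (above the >10 V/um level noted above)
Y = 1e6            # assumed elastic modulus, 1 MPa

p = eps0 * eps_r * E ** 2   # effective compressive stress, Pa
thickness_strain = p / Y    # small-strain estimate
print(f"pressure ~ {p / 1000:.0f} kPa, thickness strain ~ {thickness_strain:.0%}")
# With these numbers: ~266 kPa and ~27% strain, in line with the large
# deformations quoted for dielectric elastomers.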
The activation of ionic polymers, on the other hand, requires only 1–2 volts. They do, however, need to remain wet, though some polymers have been developed as self-contained encapsulated actuators, which allows their use in dry environments. Ionic polymers also have low electromechanical coupling, but they are ideal for bio-mimetic devices.

Characterization
While there are many ways electroactive polymers can be characterized, only three are addressed here: the stress–strain curve, dynamic mechanical thermal analysis, and dielectric thermal analysis.

Stress–strain curve
Stress–strain curves provide information about the polymer's mechanical properties, such as its brittleness, elasticity and yield strength. This is done by applying a force to the polymer at a uniform rate and measuring the resulting deformation; an example of this deformation is shown in Figure 4. The technique is useful for determining the type of material (brittle, tough, etc.), but it is destructive, as the stress is increased until the polymer fractures.

Dynamic mechanical thermal analysis (DMTA)
Dynamic mechanical thermal analysis is a non-destructive technique useful for understanding the mechanism of deformation at a molecular level. In DMTA a sinusoidal stress is applied to the polymer, and based on the polymer's deformation the elastic modulus and damping characteristics are obtained (assuming the polymer behaves as a damped harmonic oscillator). Elastic materials take the mechanical energy of the stress and convert it into potential energy that can later be recovered. An ideal spring will use all the potential energy to regain its original shape (no damping), while a liquid will use all the potential energy to flow, never returning to its original position or shape (high damping). A viscoelastic polymer exhibits a combination of both types of behavior.

Dielectric thermal analysis (DETA)
DETA is similar to DMTA, but instead of an alternating mechanical force an alternating electric field is applied. The applied field can lead to polarization of the sample, and if the polymer contains groups with permanent dipoles (as in Figure 2), they will align with the field. The permittivity can be measured from the change in amplitude and resolved into dielectric storage and loss components. The electric displacement field can also be measured by following the current. Once the field is removed, the dipoles relax back into a random orientation.

Applications
EAP materials can be easily manufactured in various shapes because of the ease of processing many polymeric materials, making them very versatile. One potential application for EAPs is integration into microelectromechanical systems (MEMS) to produce smart actuators.

Artificial muscles
As the most promising practical research direction, EAPs have been used in artificial muscles. Their ability to emulate the operation of biological muscles, with high fracture toughness, large actuation strain and inherent vibration damping, draws the attention of scientists in this field. EAPs have even been used successfully to make a type of hand.

Tactile displays
In recent years, electroactive polymers for refreshable Braille displays have emerged to aid the visually impaired in fast reading and computer-assisted communication. This concept is based on an EAP actuator configured in an array form: rows of electrodes on one side of an EAP film and columns on the other activate individual elements in the array.
Each element is mounted with a Braille dot and is lowered by applying a voltage across the thickness of the selected element, causing local thickness reduction. Under computer control, dots are activated to create tactile patterns of highs and lows representing the information to be read. Visual and tactile impressions of a virtual surface can be displayed by a high-resolution tactile display, a so-called "artificial skin" (Figure 6). These monolithic devices consist of an array of thousands of multimodal modulators (actuator pixels) based on stimuli-responsive hydrogels. Each modulator can individually change its transmission, height and softness. Besides their possible use as graphic displays for the visually impaired, such displays are of interest as freely programmable keys for touchpads and consoles.

Microfluidics
EAP materials have huge potential for microfluidics, e.g. in drug delivery systems, microfluidic devices and lab-on-a-chip systems. The first microfluidic platform technology reported in the literature is based on stimuli-responsive gels. To avoid the electrolysis of water, hydrogel-based microfluidic devices are mainly based on temperature-responsive polymers with lower-critical-solution-temperature (LCST) characteristics, controlled by an electrothermic interface. Two types of micropumps are known: a diffusion micropump and a displacement micropump. Microvalves based on stimuli-responsive hydrogels show some advantageous properties, such as particle tolerance, no leakage and outstanding pressure resistance. Besides these standard microfluidic components, the hydrogel platform also provides chemical sensors and a novel class of microfluidic components, the chemical transistors (also referred to as chemostat valves). These devices regulate a liquid flow when a threshold concentration of a certain chemical is reached. Chemical transistors form the basis of microchemomechanical fluidic integrated circuits: "chemical ICs" process exclusively chemical information, are energy-self-powered, operate automatically and are suitable for large-scale integration.

Another microfluidic platform is based on ionomeric materials. Pumps made from such materials could offer low-voltage (battery) operation, an extremely low noise signature, high system efficiency, and highly accurate control of flow rate.

Another technology that can benefit from the unique properties of EAP actuators is optical membranes. Because of their low modulus, the mechanical impedance of the actuators is well matched to common optical membrane materials. Also, a single EAP actuator is capable of generating displacements ranging from micrometers to centimeters. For this reason, these materials can be used for static shape correction and jitter suppression; such actuators could also correct optical aberrations caused by atmospheric interference.

Since these materials exhibit excellent electroactive character, EAP materials show potential in biomimetic-robot research, stress sensors and the acoustics field, which will make EAPs a more attractive study topic in the near future. They have been used for various actuators, such as face muscles and arm muscles in humanoid robots.

Future directions
The field of EAPs is far from mature, which leaves several issues still to be worked on. The performance and long-term stability of EAPs should be improved by designing a water-impermeable surface.
This will prevent the evaporation of water contained in the EAP, and also reduce the potential loss of positive counter-ions when the EAP operates submerged in an aqueous environment. Improved surface conductivity should be explored using methods to produce a defect-free conductive surface; this could possibly be done using metal vapor deposition or other doping methods. It may also be possible to use conductive polymers to form a thick conductive layer. Heat-resistant EAPs would be desirable to allow operation at higher voltages without damaging the internal structure of the EAP through heat generated in the composite. Development of EAPs in different configurations (e.g., fibers and fiber bundles) would also be beneficial, in order to increase the range of possible modes of motion.

See also
Pneumatic artificial muscles
Artificial muscles

References

Further reading
Electroactive Polymer (EAP) Actuators as Artificial Muscles – Reality, Potential and Challenges
Electroactive Polymers as Artificial Muscles: Reality and Challenges
Electroactive polymers for sensing

Electrical engineering Polymer material properties Smart materials Transducers
Electroactive polymer
[ "Chemistry", "Materials_science", "Engineering" ]
3,734
[ "Materials science", "Polymer material properties", "Polymer chemistry", "Electrical engineering", "Smart materials" ]
1,490,148
https://en.wikipedia.org/wiki/Perturbation%20%28astronomy%29
In astronomy, perturbation is the complex motion of a massive body subjected to forces other than the gravitational attraction of a single other massive body. The other forces can include a third (fourth, fifth, etc.) body, resistance, as from an atmosphere, and the off-center attraction of an oblate or otherwise misshapen body.

Introduction
The study of perturbations began with the first attempts to predict planetary motions in the sky; in ancient times the causes were unknown. Isaac Newton, at the time he formulated his laws of motion and of gravitation, applied them to the first analysis of perturbations, recognizing the complex difficulties of their calculation. Many of the great mathematicians since then have given attention to the various problems involved; throughout the 18th and 19th centuries there was demand for accurate tables of the positions of the Moon and planets for marine navigation.

The complex motions of gravitational perturbations can be broken down. The hypothetical motion that the body follows under the gravitational effect of one other body only is a conic section, and can be described in geometrical terms. This is called a two-body problem, or an unperturbed Keplerian orbit. The differences between that and the actual motion of the body are perturbations due to the additional gravitational effects of the remaining body or bodies. If there is only one other significant body then the perturbed motion is a three-body problem; if there are multiple other bodies it is an n-body problem. A general analytical solution (a mathematical expression to predict the positions and motions at any future time) exists for the two-body problem; when more than two bodies are considered, analytic solutions exist only for special cases. Even the two-body problem becomes insoluble if one of the bodies is irregular in shape.

Most systems that involve multiple gravitational attractions present one primary body which is dominant in its effects (for example, a star, in the case of the star and its planet, or a planet, in the case of the planet and its satellite). The gravitational effects of the other bodies can then be treated as perturbations of the hypothetical unperturbed motion of the planet or satellite around its primary body.

Mathematical analysis

General perturbations
In methods of general perturbations, general differential equations, either of motion or of change in the orbital elements, are solved analytically, usually by series expansions. The result is usually expressed in terms of algebraic and trigonometric functions of the orbital elements of the body in question and the perturbing bodies. This can be applied generally to many different sets of conditions, and is not specific to any particular set of gravitating objects. Historically, general perturbations were investigated first. The classical methods are known as variation of the elements, variation of parameters or variation of the constants of integration. In these methods, the body is considered to be always moving in a conic section; however, the conic section is constantly changing due to the perturbations. If all perturbations were to cease at any particular instant, the body would continue in this (now unchanging) conic section indefinitely; this conic is known as the osculating orbit, and its orbital elements at any particular time are what are sought by the methods of general perturbations.
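To make the notion of an osculating orbit concrete, here is a small Python sketch that recovers two osculating elements (semi-major axis and eccentricity) from an instantaneous state vector; the state vector and gravitational parameter below are made-up illustrative values:

import numpy as np

def osculating_a_e(r_vec, v_vec, mu):
    # Semi-major axis from the vis-viva equation, eccentricity from the
    # eccentricity (Laplace-Runge-Lenz) vector.
    r = np.linalg.norm(r_vec)
    v2 = np.dot(v_vec, v_vec)
    a = 1.0 / (2.0 / r - v2 / mu)
    e_vec = ((v2 - mu / r) * r_vec - np.dot(r_vec, v_vec) * v_vec) / mu
    return a, np.linalg.norm(e_vec)

# Illustrative state: roughly a 1 AU orbit about the Sun (SI units).
mu_sun = 1.327e20                        # m^3/s^2
r_vec = np.array([1.496e11, 0.0, 0.0])   # m
v_vec = np.array([0.0, 29.5e3, 1.0e3])   # m/s, slightly non-circular
a, e = osculating_a_e(r_vec, v_vec, mu_sun)
print(f"a ~ {a / 1.496e11:.4f} AU, e ~ {e:.4f}")

If all perturbations ceased at that instant, the body would follow the conic defined by these elements indefinitely.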
The general perturbation approach takes advantage of the fact that in many problems of celestial mechanics the two-body orbit changes rather slowly due to the perturbations; the two-body orbit is a good first approximation. General perturbations are applicable only if the perturbing forces are about one order of magnitude smaller, or less, than the gravitational force of the primary body. In the Solar System this is usually the case: Jupiter, the second largest body, has a mass of about 0.001 that of the Sun. General perturbation methods are preferred for some types of problems, as the source of certain observed motions is readily found. This is not necessarily so for special perturbations; the motions would be predicted with similar accuracy, but no information on the configurations of the perturbing bodies (for instance, an orbital resonance) which caused them would be available.

Special perturbations
In methods of special perturbations, numerical datasets, representing values for the positions, velocities and accelerative forces on the bodies of interest, are made the basis of numerical integration of the differential equations of motion. In effect, the positions and velocities are perturbed directly, and no attempt is made to calculate the curves of the orbits or the orbital elements. Special perturbations can be applied to any problem in celestial mechanics, as the method is not limited to cases where the perturbing forces are small. Once applied only to comets and minor planets, special perturbation methods are now the basis of the most accurate machine-generated planetary ephemerides of the great astronomical almanacs. Special perturbations are also used for modeling an orbit with computers.

Cowell's formulation
Cowell's formulation (so named for Philip H. Cowell, who, with A.C.D. Crommelin, used a similar method to predict the return of Halley's comet) is perhaps the simplest of the special perturbation methods. In a system of n mutually interacting bodies, this method solves for the Newtonian force on body i by summing the individual interactions from the other bodies:

$$\ddot{\mathbf{r}}_i = \sum_{j=1,\, j \neq i}^{n} \frac{G m_j (\mathbf{r}_j - \mathbf{r}_i)}{r_{ij}^3}$$

where $\ddot{\mathbf{r}}_i$ is the acceleration vector of body i, G is the gravitational constant, $m_j$ is the mass of body j, $\mathbf{r}_i$ and $\mathbf{r}_j$ are the position vectors of objects i and j respectively, and $r_{ij}$ is the distance from object i to object j, all vectors being referred to the barycenter of the system. This equation is resolved into components in x, y, and z, and these are integrated numerically to form the new velocity and position vectors. This process is repeated as many times as necessary. The advantage of Cowell's method is ease of application and programming. A disadvantage is that when perturbations become large in magnitude (as when an object makes a close approach to another) the errors of the method also become large; however, for many problems in celestial mechanics this is never the case. Another disadvantage is that in systems with a dominant central body, such as the Sun, it is necessary to carry many significant digits in the arithmetic because of the large difference between the forces of the central body and of the perturbing bodies, although with the high-precision numbers built into modern computers this is not as much of a limitation as it once was.
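A minimal Python sketch of Cowell's formulation as described above: direct summation of the pairwise accelerations and step-by-step numerical integration of positions and velocities. The leapfrog stepper, the rough Sun-Earth-Moon numbers, and the step size are illustrative choices; production ephemerides use far more careful integrators:

import numpy as np

G = 6.674e-11  # gravitational constant, SI

def accelerations(pos, masses):
    # Sum the Newtonian attraction on each body from every other body.
    acc = np.zeros_like(pos)
    for i in range(len(masses)):
        for j in range(len(masses)):
            if i != j:
                d = pos[j] - pos[i]
                acc[i] += G * masses[j] * d / np.linalg.norm(d) ** 3
    return acc

def cowell_step(pos, vel, masses, dt):
    # One kick-drift-kick (leapfrog) step: positions and velocities are
    # integrated directly; no orbital elements are involved.
    a0 = accelerations(pos, masses)
    vel_half = vel + 0.5 * dt * a0
    pos_new = pos + dt * vel_half
    vel_new = vel_half + 0.5 * dt * accelerations(pos_new, masses)
    return pos_new, vel_new

# Illustrative Sun-Earth-Moon-like setup (rough numbers, SI units).
masses = np.array([1.989e30, 5.972e24, 7.35e22])
pos = np.array([[0.0, 0, 0], [1.496e11, 0, 0], [1.496e11 + 3.84e8, 0, 0]])
vel = np.array([[0.0, 0, 0], [0, 29.78e3, 0], [0, 29.78e3 + 1.02e3, 0]])
for _ in range(24):  # advance one day in hourly steps
    pos, vel = cowell_step(pos, vel, masses, dt=3600.0)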
Encke's method
Encke's method begins with the osculating orbit as a reference and integrates numerically to solve for the variation from the reference as a function of time. Its advantages are that perturbations are generally small in magnitude, so the integration can proceed in larger steps (with resulting lesser errors), and the method is much less affected by extreme perturbations. Its disadvantage is complexity: it cannot be used indefinitely without occasionally updating the osculating orbit and continuing from there, a process known as rectification. Encke's method is similar to the general perturbation method of variation of the elements, except that the rectification is performed at discrete intervals rather than continuously.

Letting $\boldsymbol{\rho}$ be the radius vector of the osculating orbit, $\mathbf{r}$ the radius vector of the perturbed orbit, and $\delta \mathbf{r} = \mathbf{r} - \boldsymbol{\rho}$ the variation from the osculating orbit, the equations of motion of $\mathbf{r}$ and $\boldsymbol{\rho}$ are

$$\ddot{\mathbf{r}} = \mathbf{a}_{\mathrm{per}} - \mu \frac{\mathbf{r}}{r^3} \quad (1)$$

$$\ddot{\boldsymbol{\rho}} = -\mu \frac{\boldsymbol{\rho}}{\rho^3} \quad (2)$$

$$\ddot{\delta \mathbf{r}} = \ddot{\mathbf{r}} - \ddot{\boldsymbol{\rho}} \quad (3)$$

where $\mu = G(M + m)$ is the gravitational parameter with M and m the masses of the central body and the perturbed body, $\mathbf{a}_{\mathrm{per}}$ is the perturbing acceleration, and r and $\rho$ are the magnitudes of $\mathbf{r}$ and $\boldsymbol{\rho}$. Substituting from equations (1) and (2) into equation (3),

$$\ddot{\delta \mathbf{r}} = \mathbf{a}_{\mathrm{per}} + \mu \left( \frac{\boldsymbol{\rho}}{\rho^3} - \frac{\mathbf{r}}{r^3} \right) \quad (4)$$

which, in theory, could be integrated twice to find $\delta \mathbf{r}$. Since the osculating orbit is easily calculated by two-body methods, $\boldsymbol{\rho}$ and $\ddot{\boldsymbol{\rho}}$ are accounted for and $\delta \mathbf{r}$ can be solved. In practice, the quantity in the brackets, $\frac{\boldsymbol{\rho}}{\rho^3} - \frac{\mathbf{r}}{r^3}$, is the difference of two nearly equal vectors, and further manipulation is necessary to avoid the need for extra significant digits. Encke's method was more widely used before the advent of modern computers, when much orbit computation was performed on mechanical calculating machines.

Periodic nature
In the Solar System, many of the disturbances of one planet by another are periodic, consisting of small impulses each time a planet passes another in its orbit. This causes the bodies to follow motions that are periodic or quasi-periodic, such as the Moon in its strongly perturbed orbit, which is the subject of lunar theory. This periodic nature led to the discovery of Neptune in 1846 as a result of its perturbations of the orbit of Uranus.

On-going mutual perturbations of the planets cause long-term quasi-periodic variations in their orbital elements, most apparent when two planets' orbital periods are nearly in sync. For instance, five orbits of Jupiter (59.31 years) is nearly equal to two of Saturn (58.91 years). This causes large perturbations of both, with a period of 918 years, the time required for the small difference in their positions at conjunction to make one complete circle, first discovered by Laplace. Venus currently has the orbit with the least eccentricity, i.e. the closest to circular, of all the planetary orbits. In 25,000 years' time, Earth will have a more circular (less eccentric) orbit than Venus. It has been shown that long-term periodic disturbances within the Solar System can become chaotic over very long time scales; under some circumstances one or more planets can cross the orbit of another, leading to collisions.

The orbits of many of the minor bodies of the Solar System, such as comets, are often heavily perturbed, particularly by the gravitational fields of the gas giants. While many of these perturbations are periodic, others are not, and these in particular may represent aspects of chaotic motion. For example, in April 1996, Jupiter's gravitational influence caused the period of Comet Hale–Bopp's orbit to decrease from 4,206 to 2,380 years, a change that will not revert on any periodic basis.
See also
Formation and evolution of the Solar System
Frozen orbit
Molniya orbit
Nereid, one of the outer moons of Neptune, which has a high orbital eccentricity of ~0.75 and is frequently perturbed
Osculating orbit
Orbit modeling
Orbital resonance
Perturbation theory
Proper orbital elements
Stability of the Solar System

References
Footnotes
Citations
Bibliography

Further reading
P.E. El'Yasberg: Introduction to the Theory of Flight of Artificial Earth Satellites

External links
Solex (by Aldo Vitagliano): predictions for the position/orbit/close approaches of Mars
Gravitation: Sir George Biddell Airy's 1884 book on gravitational motion and perturbations, using little or no math (at Google Books)

Dynamical systems Dynamics of the Solar System Celestial mechanics
Perturbation (astronomy)
[ "Physics", "Astronomy", "Mathematics" ]
2,232
[ "Dynamics of the Solar System", "Classical mechanics", "Astrophysics", "Mechanics", "Celestial mechanics", "Solar System", "Dynamical systems" ]
1,490,408
https://en.wikipedia.org/wiki/Scrupulosity
Scrupulosity is pathological guilt and anxiety about moral issues. Although it can affect nonreligious people, it is usually related to religious beliefs. It is personally distressing, dysfunctional, and often accompanied by significant impairment in social functioning. It is typically conceptualized as a moral or religious form of obsessive–compulsive disorder (OCD). The term is derived from the Latin scrupus, a sharp stone, implying a stabbing pain on the conscience. Scrupulosity was formerly called scruples in religious contexts, but the word scruple now commonly refers to a troubling of the conscience rather than to the disorder.

As a personality trait, scrupulosity is a recognized diagnostic criterion for obsessive–compulsive personality disorder. It is sometimes called "scrupulousness", but that word properly applies to the positive trait of having scruples.

Presentation
In scrupulosity, a person's obsessions focus on moral or religious fears, such as the fear of being an evil person or the fear of divine retribution for sin. Although it can affect nonreligious people, it is usually related to religious beliefs. Not all obsessive–compulsive behaviors related to religion are instances of scrupulosity: strictly speaking, for example, scrupulosity is not present in people who repeat religious requirements merely to be sure that they were done properly. Scrupulosity can be distinguished from normal religious belief by the four criteria established by Greenberg and Witztum (1991): the experiences are more intense than normative religious experiences; they are distressing to the affected individual; they are associated with poor self-care or social functioning; and they usually involve special messages from religious figures. In addition, while religiosity may affect how OCD is manifested, there is no proven causal link between the severity of OCD and religiosity, and only small associations between religiosity and scrupulosity.

Some individuals afflicted with scrupulosity view their unwanted thoughts as morally equivalent to acting on them, or as evidence of a hidden desire to do so. This connection, known as moral thought-action fusion (moral TAF), creates significant distress for those experiencing it. An example of moral TAF is a mother who has an intrusive thought of hurting her child: she may feel she is a danger to the child, taking her thoughts as evidence of an ostensible wish to harm. Some research indicates an increased likelihood of moral TAF in religions and cultures that hold thoughts and actions to be morally equivalent.

Treatment
Treatment is similar to that for other forms of obsessive–compulsive disorder. Exposure and response prevention (ERP), a form of behavior therapy, is widely used for OCD in general and may be promising for scrupulosity in particular. ERP is based on the idea that deliberate repeated exposure to obsessional stimuli lessens anxiety, and that avoiding rituals lowers the urge to behave compulsively. For example, with ERP a person obsessed by blasphemous thoughts while reading the Bible would practice reading the Bible. However, ERP is considerably harder to implement for scrupulosity than for other disorders, because scrupulosity often involves spiritual issues that are not tied to specific situations and objects. For example, ERP is not appropriate for a man obsessed by feelings that God has rejected him and is punishing him. Cognitive therapy may be appropriate when ERP is not feasible.
Other therapy strategies include noting contradictions between the compulsive behaviors and moral or religious teachings, and informing individuals that for centuries religious figures have suggested strategies similar to ERP. Religious counseling may be an additional way to readjust beliefs associated with the disorder, though it may also stimulate greater anxiety. Little evidence is available on the use of medications to treat scrupulosity. Although serotonergic medications are often used to treat OCD, studies of pharmacologic treatment of scrupulosity in particular have produced so few results that even tentative recommendations cannot be made. Treatment of scrupulosity in children has not been investigated to the extent it has been in adults, and one of the factors that makes treatment difficult is the fine line the therapist must walk between engaging and offending the client.

Epidemiology
The prevalence of scrupulosity is speculative. Available data do not permit reliable estimates, and available analyses mostly disregard associations with age or gender, and have not reliably addressed associations with geography or ethnicity. Available data suggest that the prevalence of obsessive–compulsive disorder does not differ by culture, except where prevalence rates differ for all psychiatric disorders. Associations between OCD and the depth of religious belief have been difficult to demonstrate, and data are scarce. There are large regional differences in the percentage of OCD patients who have religious obsessions or compulsions, ranging from 0–7% in countries like the U.K. and Singapore to 40–60% in traditional Muslim and Orthodox Jewish populations.

Characteristics of scrupulosity also tend to vary by religion, in relation to traditional practices and beliefs. In Western Christian samples, increased levels of religiosity are associated with an increase in obsessions about controlling thoughts; this phenomenon is thought to be caused by the Biblical teaching that merely thinking of a sin is as bad as committing it. In Jewish communities, scrupulous compulsions tend to include washing, excessive prayer, and consultation with religious leaders, closely linked to Jewish customs of removing impurities through hand washing. Similarly, a study of a conservative Muslim population in Saudi Arabia revealed that obsessions about prayer, washing, and contamination dominate, seemingly stemming from the religious practice of wuduʾ, which requires methodical cleansing of the body before prayer. Additionally, Muslims in Pakistan describe a concept called "Nepak", a "mix of unpleasant feelings of contamination with strong religious connotations of dirtiness and unholiness"; when suffering Nepak, an individual must cleanse himself thoroughly before participating in religious rituals again.

History
Scrupulosity is a modern-day psychological problem that echoes a traditional use of the term scruples in a religious context, e.g. by Catholics, to mean obsessive concern with one's own sins and compulsive performance of religious devotion. This use of the term dates to the 12th century. Several historical religious figures suffered from doubts about sin and expressed their pain. Ignatius of Loyola, founder of the Jesuits, wrote: "After I have trodden upon a cross formed by two straws ... there comes to me from without a thought that I have sinned ... this is probably a scruple and temptation suggested by the enemy."
Alphonsus Liguori, the Redemptorists' founder, wrote of it as a "groundless fear of sinning that arises from 'erroneous ideas'". Although the condition was lifelong for Loyola and Liguori, Thérèse of Lisieux stated that she recovered from it after 18 months, writing: "One would have to pass through this martyrdom to understand it well, and for me to express what I experienced for a year and a half would be impossible." Martin Luther also suffered from obsessive doubts; in his mind, his omitting the word enim ("for") during the Eucharist was as horrible as laziness, divorce, or murdering one's parent. Although historical religious figures such as Loyola, Luther and John Bunyan are commonly cited as examples of scrupulosity in modern self-help books, some of these retrospective diagnoses may be deeply ahistorical: these figures' obsession with salvation may have been excessive by modern standards, but that does not mean that it was pathological.

Scrupulosity's first known public description as a disorder was in 1691, by John Moore, who called it "religious melancholy" and said it made people "fear, that what they do, is so defective and unfit to be presented unto God, that he will not accept it". Loyola, Liguori, the French confessor R.P. Duguet, and other religious authorities and figures attempted to develop solutions and coping mechanisms; the monthly newsletter Scrupulous Anonymous, published by the followers of Liguori, has been used as an adjunct to therapy. In the 19th century, Christian spiritual advisors in the U.S. and Britain became worried that scrupulosity was not only a sin in itself, but also led to sin by attacking the virtues of faith, hope, and charity. Studies in the mid-20th century reported that scrupulosity was a major problem among American Catholics, with up to 25 per cent of high school students affected; commentators at the time asserted that this was an increase over previous levels. Starting in the 20th century, individuals with scrupulosity in the U.S. and Britain increasingly began looking to psychiatrists, rather than religious advisors, for help with the condition.

Resources
International OCD Foundation (OCDF): a non-profit organization dedicated to supporting individuals with obsessive-compulsive disorder; since 1986 it has raised funds for research and compiled and disseminated the latest treatment information, including on scrupulosity.
Managing Scrupulosity: a service from Fr. Thomas M. Santa, C.Ss.R., a Roman Catholic priest who has ministered to people with scrupulosity for more than 20 years.

Further reading
The Obsessive–Compulsive Disorder: Pastoral Care for the Road to Change
Can Christianity Cure Obsessive–Compulsive Disorder?: A Psychiatrist Explores the Role of Faith in Treatment
Beattie, Trent (2011). Scruples and Sainthood. Loreto Publications.
Santa, Thomas M., C.Ss.R. (2017). Understanding Scrupulosity.
Van Ornum, William (1997). A Thousand Frightening Fantasies: Understanding and Healing Scrupulosity and Obsessive Compulsive Disorder. Crossroad Publishing.
Ciarrocchi, Joseph W. (1995). The Doubting Disease: Help for Scrupulosity and Religious Compulsions. Paulist Press.

References

Anxiety disorders Culture-bound syndromes Religious practices
Scrupulosity
[ "Biology" ]
2,125
[ "Behavior", "Religious practices", "Human behavior" ]
1,490,597
https://en.wikipedia.org/wiki/Magic%20User%20Interface
The Magic User Interface (MUI for short) is an object-oriented system by Stefan Stuntz for generating and maintaining graphical user interfaces. With the aid of a preferences program, the user of an application can customize the system according to personal taste. The Magic User Interface was written for AmigaOS and gained popularity amongst both programmers and users. It has been ported to PowerPC processors and adopted as the default GUI toolkit of the MorphOS operating system. The MUI application programmer interface has been cloned by the Zune toolkit used in the AROS Research Operating System.

History
Creating GUI applications on the Amiga was difficult for a very long time, mainly because the programmer got only a minuscule amount of support from the operating system. Beginning with Kickstart 2.0, the gadtools.library was a step in the right direction; however, even with this library, generating complex and flexible interfaces remained difficult and still required a great deal of patience. The largest problem with existing tools for the creation of user interfaces was their inflexible output. Most programs were still using built-in fonts and window sizes, making the use of new high-resolution graphics hardware adapters nearly unbearable. Even the preference programs on the Workbench were still using only the default fixed-width font.

In 1992 Stefan Stuntz started developing a new object-oriented GUI toolkit for the Amiga. The main goals for the new toolkit were:
Font sensitivity: the font can be set for every application.
Changeable window sizes: windows have a sizing gadget which allows users to change the window size until it suits their needs.
Flexibility: elements can be changed by the user according to personal taste.
Keyboard control: widgets can be controlled by the keyboard as well as by the mouse.
System integration: every program has an ARexx port and can be iconified or uniconified by pushing a gadget or by using the Commodities Exchange program.
Adjusting to its environment: every application can be made to open on any screen and adapts itself to its environment.

MUI was released as shareware. Starting from MUI 3.9 an unrestricted version is integrated with MorphOS, but a shareware key is still required to activate all user configuration options on AmigaOS.

Application theory
UI development is done at source-code level without the aid of GUI builders. In a MUI application the programmer defines only the logical structure of the GUI; the layout is determined at run time depending on user configuration. Unlike in other GUI toolkits, the developer does not specify exact coordinates for UI objects, but only their placement relative to each other using object groups. In traditional Intuition-based UI coding, the programmer had to calculate the placement of gadgets relative to font and border sizes. By default all UI elements are resizable and change their size to match the window size. MUI can also automatically switch to a smaller font or hide UI elements if there is not enough space on screen to display the window with its full contents. This makes it very easy to build UIs that adapt well to both tiny and large displays. There are over 50 built-in MUI classes today, plus various third-party MUI classes.
Example
// Complete MUI application
#include <libraries/mui.h>
#include <proto/muimaster.h>

// Sample application:
ApplicationObject,
    SubWindow, WindowObject,
        WindowContents, VGroup,
            Child, TextObject,
                MUIA_Text_Contents, "Hello World!",
            End,   // closes the TextObject
        End,       // closes the VGroup
    End,           // closes the WindowObject
End;               // closes the ApplicationObject

This example code creates a small MUI application with the text "Hello World!" displayed on it. It is also possible to embed other BOOPSI based GUI toolkit objects inside a MUI application.

Applications
Some notable applications that use MUI as a widget toolkit include:
Aladdin4D - 3D rendering/animation application
Ambient - desktop environment
AmIRC - IRC client
Digital Universe - desktop planetarium
IBrowse - web browser
Origyn Web Browser - web browser
PageStream - desktop publishing
SimpleMail - email client
Voyager - web browser
YAM - email client

Other GUI toolkits
Currently there are two main widget toolkits in the Amiga world, which compete with each other. The most widely used is MUI (adopted into AROS, MorphOS and in most Amiga programs); the other one is ReAction, which was adopted in AmigaOS 3.5. A GTK MUI wrapper is in development, which will allow the porting of various GTK-based software. There is also a modern XML-based interface, Feelin.

Palette extension to Workbench defaults
MUI extended Workbench's four-colour palette with four additional colours, allowing smoother gradients with less noticeable dithering. The MagicWB companion to MUI made use of this extended palette to provide more attractive icons to replace the dated Workbench defaults. MUI 4 added support for alpha blending and support for user defined widget shapes.

See also
ReAction GUI (ClassAct)
Zune

References

External links
MUI homepage
Unofficial MUI nightly build directory
Tutorial

Widget toolkits
Amiga APIs
Amiga software
AmigaOS
AmigaOS 4 software
MorphOS
Magic User Interface
[ "Technology" ]
1,077
[ "AmigaOS", "Computing platforms" ]
1,490,598
https://en.wikipedia.org/wiki/Index%20locking
In databases an index is a data structure, part of the database, used by a database system to efficiently navigate access to user data. Index data are system data distinct from user data, and consist primarily of pointers. Changes in a database (by insert, delete, or modify operations) may require indexes to be updated to maintain accurate user data accesses. Index locking is a technique used to maintain index integrity. A portion of an index is locked during a database transaction when this portion is being accessed by the transaction as a result of an attempt to access related user data. Additionally, special database system transactions (not user-invoked transactions) may be invoked to maintain and modify an index, as part of a system's self-maintenance activities. When a portion of an index is locked by a transaction, other transactions may be blocked from accessing this index portion (blocked from modifying, and even from reading it, depending on lock type and needed operation). The index locking protocol guarantees that the phantom read phenomenon will not occur. The index locking protocol states:
Every relation must have at least one index.
A transaction can access tuples only after finding them through one or more indices on the relation.
A transaction Ti that performs a lookup must lock all the index leaf nodes that it accesses, in S-mode, even if a leaf node does not contain any tuple satisfying the index lookup (e.g. for a range query, no tuple in a leaf is in the range).
A transaction Ti that inserts, updates or deletes a tuple ti in a relation must update all indices on the relation, and it must obtain exclusive locks on all index leaf nodes affected by the insert/update/delete.
The rules of the two-phase locking protocol must be observed.
A toy sketch of these rules follows below. Specialized concurrency control techniques exist for accessing indexes. These techniques depend on the index type, and take advantage of its structure. They are typically much more effective than applying to indexes the common concurrency control methods used for user data. Notable and widely researched are specialized techniques for B-trees (B-tree concurrency control), which are regularly used as database indexes. Index locks are used to coordinate threads accessing indexes concurrently, and are typically shorter-lived than the common transaction locks on user data. In professional literature, they are often called latches.

See also
Database index
Concurrency control
Lock (database)
B-Tree concurrency control

References

Databases
Transaction processing
Concurrency control
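A toy illustration of the protocol above – not a real DBMS, just enough to show the shape of the rule: a range scan S-locks every index leaf overlapping its range (even empty ones), and an insert X-locks the leaf it lands in, so a phantom row cannot appear mid-scan. All names here are invented for the sketch, and Python's RLock stands in for a real shared/exclusive latch (the standard library has no shared mode).

import threading

class Leaf:
    """One leaf node of a toy index, covering keys in [low, high)."""
    def __init__(self, low, high):
        self.low, self.high = low, high
        self.keys = []
        self.latch = threading.RLock()   # stand-in for an S/X lock

leaves = [Leaf(0, 100), Leaf(100, 200)]  # a two-leaf 'index'

def leaves_for_range(lo, hi):
    # Per the protocol, every leaf whose range overlaps [lo, hi) must be
    # locked, even if it currently contains no tuple in the range.
    return [leaf for leaf in leaves if leaf.low < hi and leaf.high > lo]

def range_lookup(lo, hi):
    touched = leaves_for_range(lo, hi)
    for leaf in touched:
        leaf.latch.acquire()             # S-mode in a real system
    try:
        return sorted(k for leaf in touched for k in leaf.keys if lo <= k < hi)
    finally:
        for leaf in touched:
            leaf.latch.release()         # strict 2PL would hold until commit

def insert(key):
    (leaf,) = leaves_for_range(key, key + 1)
    with leaf.latch:                     # X-mode: conflicts with any scan
        leaf.keys.append(key)

insert(42)
print(range_lookup(0, 100))   # a concurrent insert(7) would block mid-scan

Because the scan holds the leaf latch for the whole read, a writer trying to insert key 7 must wait; that is exactly what rules 3 and 4 above enforce. Under full two-phase locking the locks would be held until end of transaction rather than released at the end of the call, as the comments note.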
Index locking
[ "Technology" ]
493
[ "Computing stubs" ]
1,490,758
https://en.wikipedia.org/wiki/Grim%20trigger
In game theory, grim trigger (also called the grim strategy or just grim) is a trigger strategy for a repeated game. Initially, a player using grim trigger will cooperate, but as soon as the opponent defects (thus satisfying the trigger condition), the player using grim trigger will defect for the remainder of the iterated game. Since a single defection by the opponent triggers defection forever, grim trigger is the most strictly unforgiving of strategies in an iterated game. In Robert Axelrod's book The Evolution of Cooperation, grim trigger is called "Friedman", for a 1971 paper by James W. Friedman, which uses the concept.

The infinitely repeated prisoners' dilemma
The infinitely repeated prisoners' dilemma is a well-known example for the grim trigger strategy. The stage game for the two prisoners is the standard prisoner's dilemma. In the prisoners' dilemma, each player has two choices in each stage:
Cooperate
Defect for an immediate gain
If a player defects, he will be punished for the remainder of the game. In fact, both players are better off staying silent (cooperating) than betraying the other, so playing (C, C) is the cooperative profile, while playing (D, D), also the unique Nash equilibrium in this game, is the punishment profile. In the grim trigger strategy, a player cooperates in the first round and in the subsequent rounds as long as his opponent does not defect from the agreement. Once the player finds that the opponent has betrayed in the previous game, he will then defect forever. In order to evaluate the subgame perfect equilibrium (SPE) for the grim trigger strategy, strategy S* for players i and j is as follows:
Play C in every period unless someone has ever played D in the past
Play D forever if someone has played D in the past
Then, the strategy is an SPE only if the discount factor is at least one half. In other words, neither Player 1 nor Player 2 is incentivized to defect from the cooperation profile if the discount factor is greater than one half. To prove that the strategy is an SPE, cooperation should be the best response to the other player's cooperation, and defection should be the best response to the other player's defection.
Step 1: Suppose that D has never been played so far. Player i's payoff from C is the discounted stream of the cooperation payoff in every future period, while the payoff from D is the one-period temptation gain followed by the punishment payoff in every period thereafter. Comparing the two streams, C is better than D exactly when the discount factor is at least one half (a worked computation with concrete payoffs is sketched below).
Step 2: Suppose that someone has played D previously; then Player j will play D no matter what. Player i's payoff from C is the sucker's payoff in each period, while the payoff from D is the mutual-defection payoff. Since defection is the stage-game best response in the prisoner's dilemma, playing D is optimal.
The preceding argument emphasizes that there is no incentive to deviate (no profitable deviation) from the cooperation profile when the discount factor is at least one half, and this is true for every subgame. Therefore, the strategy for the infinitely repeated prisoners' dilemma game is a subgame perfect Nash equilibrium. In iterated prisoner's dilemma strategy competitions, grim trigger performs poorly even without noise, and adding signal errors makes it even worse. Its ability to threaten permanent defection gives it a theoretically effective way to sustain trust, but because of its unforgiving nature and the inability to communicate this threat in advance, it performs poorly.

Grim trigger in international relations
Under the grim trigger in international relations perspective, a nation cooperates only if its partner has never exploited it in the past. Because a nation will refuse to cooperate in all future periods once its partner defects once, the indefinite removal of cooperation becomes the threat that makes such a strategy a limiting case.
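To make the one-shot deviation check in Step 1 concrete, here is a worked version under one standard payoff assignment: mutual cooperation pays 1 to each player, unilateral defection pays 2 to the defector and −1 to the victim, and mutual defection pays 0. These particular numbers are an assumption for illustration (the payoff table does not survive above); any payoffs with the usual prisoner's dilemma ordering yield the same form of threshold.

\[
V_i(C) = 1 + \delta + \delta^2 + \cdots = \frac{1}{1-\delta},
\qquad
V_i(D) = 2 + 0\cdot\delta + 0\cdot\delta^2 + \cdots = 2.
\]

Cooperation is weakly preferred when \( \frac{1}{1-\delta} \ge 2 \), i.e. \( \delta \ge \tfrac{1}{2} \), matching the threshold of one half stated above. For Step 2, once someone has defected the opponent plays D in every period, so player i compares a per-period payoff of −1 (from C) against 0 (from D); defecting is optimal in every period, so the punishment phase is itself an equilibrium.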
Grim trigger in user-network interactions
Game theory has recently been used in developing future communications systems, and the user in the user-network interaction game employing the grim trigger strategy is one such example. If the grim trigger is to be used in the user-network interaction game, the user stays in the network (cooperates) if the network maintains a certain quality, but punishes the network by stopping the interaction and leaving the network as soon as the user finds out the opponent defects. Antoniou et al. explain that "given such a strategy, the network has a stronger incentive to keep the promise given for a certain quality, since it faces the threat of losing its customer forever."

Comparison with other strategies
Tit for tat and grim trigger are similar in nature in that both are trigger strategies, where a player refuses to defect first if he has the ability to punish the opponent for defecting. The difference, however, is that grim trigger seeks maximal punishment for a single defection while tit for tat is more forgiving, offering one punishment for each defection.

See also

References

Non-cooperative games
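The contrast drawn above between grim trigger's permanent punishment and tit for tat's one-shot punishment is easy to see in simulation. The sketch below is illustrative only: the payoff numbers and the 5% execution-noise rate are assumptions, not taken from the article, and the helper names are invented.

import random

# Assumed stage-game payoffs with the usual PD ordering: T=2, R=1, P=0, S=-1.
PAYOFF = {('C', 'C'): (1, 1), ('C', 'D'): (-1, 2),
          ('D', 'C'): (2, -1), ('D', 'D'): (0, 0)}

def grim(my_hist, opp_hist):
    # Defect forever once a single defection has been observed.
    return 'D' if 'D' in opp_hist else 'C'

def tit_for_tat(my_hist, opp_hist):
    # Cooperate first, then mirror the opponent's last observed move.
    return opp_hist[-1] if opp_hist else 'C'

def average_payoff(s1, s2, rounds=500, noise=0.05):
    h1, h2, total = [], [], 0
    for _ in range(rounds):
        m1, m2 = s1(h1, h2), s2(h2, h1)
        # Execution noise: a move occasionally flips, which is what makes
        # grim's unforgiving punishment so costly in noisy settings.
        if random.random() < noise:
            m1 = 'D' if m1 == 'C' else 'C'
        if random.random() < noise:
            m2 = 'D' if m2 == 'C' else 'C'
        total += PAYOFF[(m1, m2)][0]
        h1.append(m1)
        h2.append(m2)
    return total / rounds

random.seed(0)
print("grim vs grim:", average_payoff(grim, grim))            # locks into mutual defection
print("tft vs tft:  ", average_payoff(tit_for_tat, tit_for_tat))

Two grim players lock into mutual defection after the first accidental defection, while two tit-for-tat players keep recovering, which mirrors the point above that signal errors hurt grim trigger disproportionately.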
Grim trigger
[ "Mathematics" ]
947
[ "Game theory", "Strategy (game theory)" ]
1,490,790
https://en.wikipedia.org/wiki/Player%20versus%20environment
Player versus environment (PvE, also known as player versus monster (PvM) and commonly misinterpreted as player versus entity) is a term used for both single-player and online games, particularly MMORPGs, CORPGs, MUDs, other online role-playing video games and survival games, to refer to fighting computer-controlled enemies – in contrast to PvP (player versus player), which is fighting other players in the game. In survival games a large part may be fighting the elements, controlling hunger and thirst, learning to adapt to the environment, and exploration. Usually a PvE mode can be played alone, with human companions or with AI companions. The PvE mode may contain a storyline that is narrated as the player progresses through missions. It may also contain missions that may be done in any order.

Examples
Guild Wars narrates its story by displaying in-game cut scenes and dialogue with non-playable characters (NPCs). To enhance replayability, missions can often be completed many times. Characters playing in this mode are often protected against being killed by other players and/or having their possessions stolen. An example of a game where this is not the case is Eve Online, where players can be, and often are, ambushed by other human players (player versus player) while attempting to complete a quest. Some games, such as World of Warcraft, offer the player the choice of participating in open-world PvP combat or doing quests without PvP interruption, by offering players the ability to join servers where PvP is enabled by default, or allowing them to activate temporary "flags" which allow them to attack and be attacked by other "flagged" players for a limited time.

See also
Player versus player
Cooperative gameplay
Deathmatch
Single player video game

References

MUD terminology
Role-playing game terminology
Video game terminology
Player versus environment
[ "Technology" ]
372
[ "Computing terminology", "Video game terminology" ]
1,490,894
https://en.wikipedia.org/wiki/Upper%20atmosphere
Upper atmosphere is a collective term that refers to various layers of the atmosphere of the Earth above the troposphere and corresponding regions of the atmospheres of other planets, and includes:
The mesosphere, which on Earth lies between the altitudes of about 50 and 85 km, sometimes considered part of the "middle atmosphere" rather than the upper atmosphere
The thermosphere, which on Earth lies between the altitudes of about 80 and 700 km
The exosphere, which on Earth lies between the altitudes of about 700 km and 10,000 km
The ionosphere, an ionized portion of the upper atmosphere which includes the upper mesosphere, thermosphere, and lower exosphere, and on Earth lies between the altitudes of roughly 50 and 1,000 km

See also
Geospace
Magnetosphere

References

Atmosphere of Earth
Atmosphere
Upper atmosphere
[ "Astronomy" ]
144
[ "Astronomy stubs", "Planetary science stubs" ]
1,491,136
https://en.wikipedia.org/wiki/Coffer
A coffer (or coffering) in architecture is a series of sunken panels in the shape of a square, rectangle, or octagon in a ceiling, soffit or vault. A series of these sunken panels was often used as decoration for a ceiling or a vault. They are also called caissons ("boxes") or lacunaria ("spaces, openings"), so that a coffered ceiling can be called a lacunar ceiling: the strength of the structure is in the framework of the coffers.

History
The stone coffers of the ancient Greeks and Romans are the earliest surviving examples, but a seventh-century BC Etruscan chamber tomb in the necropolis of San Giuliano, which is cut in soft tufa-like stone, reproduces a ceiling with beams and cross-beams lying on them, with flat panels filling the lacunae. For centuries, it was thought that wooden coffers were first made by crossing the wooden beams of a ceiling in the Loire Valley châteaux of the early Renaissance. In 2012, however, archaeologists working under the Packard Humanities Institute at the House of the Telephus in Herculaneum discovered that wooden coffered ceilings were constructed in Roman times. Experimentation with the possible shapes in coffering, which solve problems of mathematical tiling, or tessellation, was a feature of Islamic as well as Renaissance architecture. The more complicated problems of diminishing the scale of the individual coffers were presented by the requirements of curved surfaces of vaults and domes. A prominent example of Roman coffering, employed to lighten the weight of the dome, can be found in the ceiling of the rotunda dome in the Pantheon, Rome. Coffered ceilings were used in cathedrals starting with St Mark's Basilica and Santa Maria Maggiore. They spread following the reforms of the Council of Trent, as the improved acoustics and the opportunity to include statues, apostolic heraldry and other religious elements in compositions with versatile shapes were thought to enhance the doctrinal purpose of a cathedral.

Asian architecture
In ancient Chinese wooden architecture, coffering is known as zaojing.

Gallery

See also
Dome
Dropped ceiling
Cove ceiling
Beam ceiling
Muqarnas

Footnotes

External links
U.S. National Capitol Ceilings

Architectural elements
Coffer
[ "Technology", "Engineering" ]
474
[ "Structural engineering", "Building engineering", "Architectural elements", "Ceilings", "Components", "Architecture" ]
1,491,198
https://en.wikipedia.org/wiki/Session%20border%20controller
A session border controller (SBC) is a network element deployed to protect SIP based voice over Internet Protocol (VoIP) networks. Early deployments of SBCs were focused on the borders between two service provider networks in a peering environment. This role has now expanded to include significant deployments between a service provider's access network and a backbone network to provide service to residential and/or enterprise customers. The term "session" refers to a communication between two or more parties – in the context of telephony, this would be a call. Each call consists of one or more call signaling message exchanges that control the call, and one or more call media streams which carry the call's audio, video, or other data, along with information on call statistics and quality. Together, these streams make up a session. It is the job of a session border controller to exert influence over the data flows of sessions. The term "border" refers to a point of demarcation between one part of a network and another. As a simple example, at the edge of a corporate network, a firewall demarcates the local network (inside the corporation) from the rest of the Internet (outside the corporation). A more complex example is that of a large corporation where different departments have security needs for each location and perhaps for each kind of data. In this case, filtering routers or other network elements are used to control the flow of data streams. It is the job of a session border controller to assist policy administrators in managing the flow of session data across these borders. The term "controller" refers to the influence that session border controllers have on the data streams that comprise sessions, as they traverse borders between one part of a network and another. Additionally, session border controllers often provide measurement, access control, and data conversion facilities for the calls they control.

Functions
SBCs commonly maintain full session state and offer the following functions:
Security – protect the network and other devices from:
Malicious attacks such as a denial-of-service attack (DoS) or distributed DoS
Toll fraud via rogue media streams
Malformed packet protection
Encryption of signaling (via TLS and IPSec) and media (SRTP)
Connectivity – allow different parts of the network to communicate through the use of a variety of techniques such as:
NAT traversal
SIP normalization via SIP message and header manipulation
IPv4 to IPv6 interworking
VPN connectivity
Protocol translations between SIP, SIP-I, H.323
Quality of service – the QoS policy of a network and prioritization of flows is usually implemented by the SBC. It can include such functions as:
Traffic policing
Resource allocation
Rate limiting
Call admission control
ToS/DSCP bit setting
Regulatory – many times the SBC is expected to provide support for regulatory requirements such as: emergency calls prioritization and lawful interception
Media services – many of the new generation of SBCs also provide built-in digital signal processors (DSPs) to enable them to offer border-based media control and services such as:
DTMF relay and interworking
Media transcoding
Tones and announcements
Data and fax interworking
Support for voice and video calls
Statistics and billing information – since all sessions that pass through the edge of the network pass through the SBC, it is a natural point to gather statistics and usage-based information on these sessions.
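One of the quality-of-service functions listed above, call admission control, can be sketched in a few lines. The token-bucket structure, thresholds and names below are illustrative assumptions only, not any particular SBC's behaviour:

import time

class CallAdmissionControl:
    """Toy token-bucket limiter: admit at most `rate` new calls per second
    (with bursts up to `burst`), and cap concurrent sessions."""
    def __init__(self, rate=10.0, burst=20, max_sessions=500):
        self.rate, self.burst = rate, burst
        self.tokens = float(burst)
        self.last = time.monotonic()
        self.active = 0
        self.max_sessions = max_sessions

    def admit(self) -> bool:
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, up to the burst cap.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens < 1.0 or self.active >= self.max_sessions:
            return False            # reject the new session
        self.tokens -= 1.0
        self.active += 1
        return True

    def release(self):
        self.active = max(0, self.active - 1)

cac = CallAdmissionControl(rate=2.0, burst=5)
decisions = [cac.admit() for _ in range(8)]   # a burst of 8 new calls
print(decisions)  # first 5 admitted; the rest rejected until tokens refill

A real SBC would couple such a decision to signaling (e.g. rejecting the call attempt) and to resource allocation, but the admit/release shape is the essence of rate limiting at a session border.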
With the advent of WebRTC, some SBCs have also assumed the role of a SIP-to-WebRTC gateway, translating the SIP signalling between the two domains. While no one signalling protocol is mandated by the WebRTC specifications, SIP over WebSockets (RFC 7118) is often used, partially due to the applicability of SIP to most of the envisaged communication scenarios as well as the availability of open source software such as JsSIP. In such a case the SBC acts as a gateway between the WebRTC applications and SIP end points.

Applications
SBCs are inserted into the signaling and/or media paths between calling and called parties in a VoIP call, predominantly those using the Session Initiation Protocol (SIP), H.323, and MGCP call-signaling protocols. In many cases the SBC hides the network topology and protects the service provider or enterprise packet networks. The SBC terminates an inbound call and initiates the second call leg to the destination party. In technical terms, when used with the SIP protocol, this defines a back-to-back user agent (B2BUA). The effect of this behavior is that not only the signaling traffic, but also the media traffic (voice, video) is controlled by the SBC. In cases where the SBC does not have the capability to provide media services, SBCs are also able to redirect media traffic to a different element elsewhere in the network, for recording, generation of music-on-hold, or other media-related purposes. Conversely, without an SBC, the media traffic travels directly between the endpoints, without the in-network call signaling elements having control over their path. In other cases, the SBC simply modifies the stream of call control (signaling) data involved in each call, perhaps limiting the kinds of calls that can be conducted, changing the codec choices, and so on. Ultimately, SBCs allow the network operators to manage the calls that are made on their networks, fix or change protocols and protocol syntax to achieve interoperability, and also overcome some of the problems that firewalls and network address translators (NATs) present for VoIP calls. To show the operation of an SBC, one can compare a simple call establishment sequence with a call establishment sequence involving an SBC. In the simplest session establishment sequence, with only one proxy between the user agents, the proxy's task is to identify the callee's location and forward the request to it. The proxy also adds a Via header with its own address to indicate the path that the response should traverse. The proxy does not change any dialog identification information present in the message, such as the tag in the From header, the Call-Id or the CSeq. Proxies also do not alter any information in the SIP message bodies. Note that during the session initiation phase the user agents exchange SIP messages with SDP bodies that include the addresses at which the agents expect the media traffic. After successfully finishing the session initiation phase, the user agents can exchange the media traffic directly between each other without the involvement of the proxy. SBCs are designed for many applications and are used by operators and enterprises to achieve a variety of goals. Even the same SBC implementation might act differently depending on its configuration and the use case. Hence, it is not easily possible to describe an exact SBC behavior that would apply to all SBC implementations. In general it is possible to identify certain features that are common to SBCs. For example, most SBCs are implemented as a back-to-back user agent.
A B2BUA is a proxy-like server that splits a SIP transaction into two call legs: on the side facing the user agent client (UAC), it acts as a server; on the side facing the user agent server (UAS) it acts as a client. While a proxy usually keeps only state information related to active transactions, B2BUAs keep state information about active dialogs, e.g., calls. That is, once a proxy receives a SIP request it will save some state information. Once the transaction is over, e.g., after receiving a response, the state information will be deleted soon after. A B2BUA will maintain state information for active calls and only delete this information once the call is terminated. When an SBC is included in the call path, the SBC acts as a B2BUA that behaves as a user agent server towards the caller and as a user agent client towards the callee. In this sense, the SBC actually terminates the call that was generated by the caller and starts a new call towards the callee. The INVITE message sent by the SBC no longer contains a clear reference to the caller. The INVITE sent by the SBC to the proxy includes Via and Contact headers that point to the SBC itself and not to the caller. SBCs often also manipulate the dialog identification information listed in the Call-Id and From tag. Further, in case the SBC is configured to also control the media traffic, the SBC also changes the media addressing information included in the c and m lines of the SDP body. Thereby, not only will all SIP messages traverse the SBC but also all audio and video packets. As the INVITE sent by the SBC establishes a new dialog, the SBC also manipulates the message sequence number (CSeq) as well as the Max-Forwards value. Note that the list of header manipulations listed here is only a subset of the possible changes that an SBC might introduce to a SIP message. Furthermore, some SBCs might not do all of the listed manipulations. If the SBC is not expected to control the media traffic then there might be no need to change anything in the SDP body. Some SBCs do not change the dialog identification information and others might even not change the addressing information. SBCs are often used by corporations along with firewalls and intrusion prevention systems (IPS) to enable VoIP calls to and from a protected enterprise network. VoIP service providers use SBCs to allow the use of VoIP protocols from private networks with Internet connections using NAT, and also to implement strong security measures that are necessary to maintain a high quality of service. SBCs also replace the function of application-level gateways. In larger enterprises, SBCs can also be used in conjunction with SIP trunks to provide call control and make routing/policy decisions on how calls are routed through the LAN/WAN. There are often tremendous cost savings associated with routing traffic through the internal IP networks of an enterprise, rather than routing calls through a traditional circuit-switched phone network. Additionally, some SBCs can allow VoIP calls to be set up between two phones using different VoIP signaling protocols (e.g., SIP, H.323, Megaco/MGCP) as well as performing transcoding of the media stream when different codecs are in use. Most SBCs also provide firewall features for VoIP traffic (denial of service protection, call filtering, bandwidth management). Protocol normalization and header manipulation are also commonly provided by SBCs, enabling communication between different vendors and networks.
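A rough sketch of the manipulations just described – re-originating an INVITE with fresh Via/Contact/Call-ID/CSeq/Max-Forwards values and rewriting the SDP c and m lines so media is anchored at the SBC. The header and SDP field names are standard SIP/SDP; the relay address, port allocator and dictionary representation are assumptions invented for this illustration, not any vendor's implementation:

import uuid

SBC_ADDR = "sbc.example.net"        # hypothetical signaling address of the SBC
SBC_MEDIA_IP = "198.51.100.10"      # hypothetical media-relay address
_next_port = 20000

def _relay_port() -> int:
    # Trivial even-port allocator; real SBCs manage RTP/RTCP port pairs.
    global _next_port
    _next_port += 2
    return _next_port

def anchor_media(sdp: str) -> str:
    """Rewrite the SDP c= and m= lines so media flows through the SBC."""
    out = []
    for line in sdp.splitlines():
        if line.startswith("c="):            # connection address
            out.append(f"c=IN IP4 {SBC_MEDIA_IP}")
        elif line.startswith("m="):          # media description: port is field 2
            fields = line.split()
            fields[1] = str(_relay_port())
            out.append(" ".join(fields))
        else:
            out.append(line)
    return "\r\n".join(out)

def reoriginate_invite(invite: dict) -> dict:
    """Build the INVITE for the second call leg, as a B2BUA would."""
    out = dict(invite)
    # Fresh Via chain and Contact pointing at the SBC: the callee's side
    # never learns the caller's address (topology hiding).
    out["Via"] = [f"SIP/2.0/UDP {SBC_ADDR};branch=z9hG4bK{uuid.uuid4().hex[:10]}"]
    out["Contact"] = f"<sip:{SBC_ADDR}>"
    # A new dialog needs new dialog identifiers and sequence numbering.
    out["Call-ID"] = uuid.uuid4().hex
    out["CSeq"] = "1 INVITE"
    out["Max-Forwards"] = "70"
    out["body"] = anchor_media(invite["body"])
    return out

leg_a = {
    "Via": ["SIP/2.0/UDP 10.0.0.7;branch=z9hG4bKabc"],
    "Contact": "<sip:alice@10.0.0.7>",
    "Call-ID": "734-abc-10.0.0.7",
    "CSeq": "1 INVITE",
    "Max-Forwards": "69",
    "body": "v=0\r\nc=IN IP4 10.0.0.7\r\nm=audio 49170 RTP/AVP 0",
}
leg_b = reoriginate_invite(leg_a)    # nothing in leg_b reveals 10.0.0.7

As the surrounding text notes, a real SBC may perform only a subset of these rewrites, e.g. leaving the SDP untouched when it does not control media.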
From an IP Multimedia Subsystem (IMS) or 3GPP (3rd Generation Partnership Project) architecture perspective, the SBC is the integration of the P-CSCF and IMS-ALG at the signaling plane and the IMS Access Gateway at the media plane on the access side. On the interconnect side, the SBC maps to the IBCF and IWF at the signaling plane and the TrGW (Transition Gateway) at the media plane. From an IMS/TISPAN architecture perspective, the SBC is the integration of the P-CSCF and C-BGF functions on the access side, and the IBCF, IWF, THIG, and I-BGF functions on the peering side. Some SBCs can be "decomposed", meaning the signaling functions can be located on a separate hardware platform from the media relay functions – in other words the P-CSCF can be separated from the C-BGF, or the IBCF/IWF can be separated from the I-BGF functions physically. Standards-based protocols, such as the H.248 Ia profile, can be used by the signaling platform to control the media platform, while a few SBCs use proprietary protocols.

Controversy
During its infancy, the concept of the SBC was controversial to proponents of end-to-end systems and peer-to-peer networking because:
SBCs can extend the length of the media path (the way of media packets through the network) significantly. A long media path is undesirable, as it increases the delay of voice packets and the probability of packet loss. Both effects deteriorate the voice/video quality. However, many times there are obstacles to communication such as firewalls between the call parties, and in these cases SBCs offer an efficient method to guide media streams towards an acceptable path between caller and callee; without the SBC the call media would be blocked. Some SBCs can detect if the ends of the call are in the same subnetwork and release control of the media, enabling it to flow directly between the clients; this is anti-tromboning or media release. Also, some SBCs can create a media path where none would otherwise be allowed to exist (by virtue of various firewalls and other security apparatus between the two endpoints). Lastly, for specific VoIP network models where the service provider owns the network, SBCs can actually decrease the media path by shortcut routing approaches. For example, a service provider that provides trunking services to several enterprises would usually allocate each enterprise a VPN. It is often desirable to have the option to interconnect the VPNs through SBCs. A VPN-aware SBC may perform this function at the edge of the VPN network, rather than sending all the traffic to the core.
SBCs can restrict the flow of information between call endpoints, potentially reducing end-to-end transparency. VoIP phones may not be able to use new protocol features unless they are understood by the SBC. However, SBCs are usually able to cope with the majority of new, and unanticipated, protocol features. Sometimes end-to-end encryption can't be used if the SBC does not have the key, although some portions of the information stream in an encrypted call are not encrypted, and those portions can be used and influenced by the SBC. However, the new generations of SBCs, armed with sufficient computing capacity, are able to offload this encryption function from other elements in the network by terminating SIP-TLS, IPsec, and/or SRTP. Furthermore, SBCs can actually make calls and other SIP scenarios work when they couldn't have before, by performing specific protocol "normalization" or "fix-up".
In most cases, far-end or hosted NAT traversal can be done without SBCs if the VoIP phones support protocols like STUN, TURN, ICE, or Universal Plug and Play (UPnP). Most of the controversy surrounding SBCs pertains to whether call control should remain solely with the two endpoints in a call (in service to their owners), or should rather be shared with other network elements owned by the organizations managing the various networks involved in connecting the two call endpoints. For example, should call control remain with Alice and Bob (two callers), or should call control be shared with the operators of all the IP networks involved in connecting Alice and Bob's VoIP phones together? The debate on this point was vigorous, almost religious, in nature. Those who wanted unfettered control in the endpoints only were also greatly frustrated by the various realities of modern networks, such as firewalls and filtering/throttling. On the other side, network operators are typically concerned about overall network performance, interoperability and quality, and want to ensure the network is secure.

Lawful intercept and CALEA
Lawful intercept is governed in America by the Communications Assistance for Law Enforcement Act (CALEA). An SBC may provide session media (usually RTP) and signaling (often SIP) wiretap services, which can be used by providers to enforce requests for the lawful interception of network sessions. Standards for the interception of such services are provided by ATIS, TIA, CableLabs and ETSI, among others.

History and market
According to Jonathan Rosenberg, the author of RFC 3261 (SIP) and numerous other related RFCs, Dynamicsoft developed the first working SBC in conjunction with Aravox, but the product never truly gained market share. Newport Networks was the first to have an IPO on the London Stock Exchange's AIM in May 2004 (NNG), while Cisco has been publicly traded since 1990. Acme Packet followed in October 2006 by floating on the NASDAQ. With the field narrowed by acquisition, NexTone merged with Reefpoint, becoming Nextpoint, which was subsequently acquired in 2008 by Genband. At this same time, there emerged the "integrated" SBC, where the border control function was integrated into another edge device. In 2009, Ingate Systems' Firewall became the first SBC to earn certification from ICSA Labs, a milestone in certifying the VoIP security capabilities of an SBC. The continuing growth of VoIP networks pushes SBCs further to the edge, mandating adaptation in capacity and complexity. As the VoIP network grows and traffic volume increases, more and more sessions are passing through SBCs. Vendors are addressing these new scale requirements in a variety of ways. Some have developed separate load-balancing systems to sit in front of SBC clusters. Others have developed new architectures using the latest generation chipsets, offering higher-performance SBCs and scalability using service cards.

See also
3GPP Long Term Evolution (LTE)
Firewall (computing)
H.323 Gatekeeper
IP Multimedia Subsystem (IMS)
Session Initiation Protocol (SIP)
SIP trunking
Universal Mobile Telecommunications System (UMTS)

References

Voice over IP
Computer network security
Session border controller
[ "Engineering" ]
3,599
[ "Cybersecurity engineering", "Computer networks engineering", "Computer network security" ]
1,491,215
https://en.wikipedia.org/wiki/Cajun%20Dart
Cajun Dart is the designation of an American sounding rocket. The Cajun Dart was used 87 times between 1964 and 1970. The Cajun rocket motor was developed from Deacon. Staged on top of a Nike rocket, it was part of the Nike-Cajun sounding rocket; it was also used as part of the Terasca three-stage rocket.

Specs
Takeoff thrust: 36 kN
Maximum flight height: 74 km
Takeoff weight: 100 kg
Diameter: 0.17 m
Length: 4.10 m

References
https://web.archive.org/web/20100102071657/http://www.astronautix.com/lvs/cajun.htm

Sounding rockets of the United States
Cajun Dart
[ "Astronomy" ]
149
[ "Rocketry stubs", "Astronomy stubs" ]
1,491,324
https://en.wikipedia.org/wiki/The%20Unreality%20of%20Time
"The Unreality of Time" is the best-known philosophical work of University of Cambridge idealist J. M. E. McTaggart (1866–1925). In the argument, first published as a journal article in Mind in 1908, McTaggart argues that time is unreal because our descriptions of time are either contradictory, circular, or insufficient. A slightly different version of the argument appeared in 1927 as one of the chapters in the second volume of McTaggart's most well known book, The Nature of Existence. The argument for the unreality of time is popularly treated as a stand-alone argument that does not depend on any significant metaphysical principles (e.g. as argued by C. D. Broad 1933 and L. O. Mink 1960). R. D. Ingthorsson disputes this, and argues that the argument can only be understood as an attempt to draw out certain consequences of the metaphysical system that McTaggart presents in the first volume of The Nature of Existence (Ingthorsson 1998 & 2016). It is helpful to consider the argument as consisting of three parts. In the first part, McTaggart offers a phenomenological analysis of the appearance of time, in terms of the now famous A- and B-series (see below for detail). In the second part, he argues that a conception of time as only forming a B-series but not an A-series is an inadequate conception of time because the B-series does not contain any notion of change. The A-series, on the other hand, appears to contain change and is thus more likely to be an adequate conception of time. In the third and final part, he argues that the conception of time forming an A-series is contradictory and thus nothing can be like an A-series. Since the A- and the B- series exhaust possible conceptions of how reality can be temporal, and neither is adequate, the conclusion McTaggart reaches is that reality is not temporal at all. The phenomenological analysis: the A- and B-series To frame his argument, McTaggart initially offers a phenomenological analysis of how time appears to us in experience. Time appears, he says, in the form of events standing in temporal positions, of which there are two kinds. On the one hand events are earlier than and later than each other, and on the other hand they are future, present, and past, and continually changing their position in terms of futurity, presentness, and pastness. The two kinds of temporal positions each represent events in time as standing in a certain order which McTaggart chooses to call the A-series and the B-series. The A-series represents the series of positions determined as future, present, and past, and which continuously pass from the distant future towards the present, and through the present into the remote past. The B-series represents the series of positions determined as earlier than or later than each other. The determinations of the B-series hold between the events in time, and never change. If an event ever is earlier or later than some other event, then their respective position in time never changes. The determinations of the A-series must hold to something outside of time, something that does not itself change its position in time, but in relation to which the events in time pass from being future, present, and past. Surprisingly, McTaggart does not suggest the present, or NOW, as this something whose position in time is fixed and unchanging. He just says that it will be difficult to identify any such entity (seeing as it is outside time). 
Broad explains that McTaggart believed that the difficulty of identifying this entity was serious enough in its own right to be persuaded that time is unreal, but thinks that the contradiction of the A-series is still more convincing; for that reason he leaves this particular difficulty aside. The atemporality of the B-series McTaggart argues that the conception of time as only forming a B-series is inadequate because the B-series does not change, and change is of the essence of time. If any conception of reality represents it as changeless, then this is a conception of an atemporal reality. The B-series does not change because earlier-later relationships never change (e.g. the year 2010 is always later than 2000). The events that form a B-series must therefore also form an A-series in order to count as being in time, i.e. they must pass from future to present, and from present to past, in order to change. The A- and B-series are not mutually exclusive. If events form an A-series they automatically also form a B-series (anything in the present is earlier than anything in the future, and later than everything past). The question is not therefore whether time forms an A- or a B-series; the question is whether time forms both an A- and a B-series, or only a B-series. The proponents of the B-view of time typically respond by arguing that even if events do not change their positions in the B-series, it does not follow that there can be no change in the B-series. This conclusion only follows if it is assumed that events are the only entities that can change. There can be change in the B-series in the form of objects bearing different properties at different times (Braithwaite 1928; Gotshalk 1930; Marhenke 1935; Smart 1949; Mellor 1981 & 98; Oaklander 1984; LePoidevin 1991; Dyke 2002). The suggestion that the B-view of time can escape the problem by appealing to particulars that endure through time and have different properties at different times is controversial in its own right, but it is generally assumed that this is a controversy that has nothing to do with McTaggart. Instead it is treated as a separate issue, the question of whether things can endure in B-time. However, as Ingthorsson has argued, McTaggart does discuss variation in the properties of persistent entities in the 1st Volume of The Nature of Existence, and there comes to the conclusion that variation in the properties of things between times is not change but mere variation between the temporal parts of things (Ingthorsson 2001). The contradiction of the A-series Attacking the A-series, McTaggart argues that any event in the A-series is past, present, and future, which is contradictory in that each of those properties excludes the other two. McTaggart admits that the contradictory nature of the A-series may not be obvious, because it would appear that events never are simultaneously future, present, and past, but only successively so. However, there is a contradiction, he insists, because any attempt to explain why they are future, present, and past, at different times is (i) circular because we would need to describe the successive order of those "different times" again by invoking the determinations of being future, present or past, and (ii) this in turn will inevitably lead to a vicious infinite regress. 
The vicious infinite regress arises, because to explain why the second appeal to future, present, and past, doesn't lead again to the same difficulty all over, we need to explain that they in turn apply successively and thus we must again explain that succession by appeal to future, present, and past, and there is no end to such an explanation. It is the validity of the argument in favour of a vicious infinite regress that has received the most attention in 20th Century philosophy of time. In the later version of the argument, in The Nature of Existence, McTaggart no longer advances the circularity objection. This is, arguably, because by then he has come to treat tense as a simple and indefinable notion, and thus cannot contend that the terms need to be explained at all in order to be applied. He now instead argues that even if it is admitted that they are simple and indefinable, and thus can be applied without further analysis, they still lead to contradiction. Philosophers who favour the B-view of time tend to find McTaggart's argument against the A-series to demonstrate conclusively that tense involves a contradiction. On the other hand, philosophers who favour the A-view of time struggle to see why the argument should be considered to have any force. Two of the most commonly invoked objections are, first, that McTaggart is mistaken about the phenomenology of time; that he is claiming to see a contradiction in the appearance of time, where none is apparent. Second, that McTaggart is mistaken about the semantics of tensed discourse. The idea here is that claims like "M is present, has been future, and will be past" can only imply a contradiction if it is interpreted as saying that M is all at once future in the past, present in the present, and also past in the future. This reading, it is argued, is absurd because "has been" and "will be" indicate that we are not talking about how M currently is, but instead of how M once was, but is no longer, and how it will be, but is not yet. Hence it is wrong to think of the expression as an attribution to M of futurity, presentness, and pastness, all at once (Marhenke 1935; Broad 1938; Mink 1960; Prior 1967; Christensen 1974; Lloyd 1977; Lowe 1987). Ingthorsson has argued that the reason for this incommensurability between the proponents of the A- and B-views is found in the prevailing view that McTaggart's argument is a stand-alone argument. If it is read in that way, the proponents of each view will understand the argument against the background of their respective views of time, and come to incompatible conclusions (1998 & 2016). Indeed, on closer scrutiny it will be found that McTaggart explicitly claims that in "The Unreality of Time" he is inquiring whether reality can have the characteristics it appears to have in experience (notably being temporal and material) given his earlier conclusions about what reality must really be like in Absolute Reality. In the introduction to the 2nd Volume of The Nature of Existence, he says: Starting from our conclusions as to the general nature of the existent, as reached in the earlier Books, we shall have to ask, firstly which of these characteristics can really be possessed by what is existent, and which of them, in spite of the primâ facie appearance to the contrary, cannot be possessed by anything existent (1927: sect. 
295). And he continues: It will be possible to show that, having regard to the general nature of the existent as previously determined, certain characteristics, that we consider here for the first time, cannot be true of the existent (1927: sect. 298). As Ingthorsson notes, the most central result of McTaggart's earlier inquiry into the general nature of the existent in Absolute Reality, an inquiry McTaggart claims is based entirely on a priori arguments (i.e. such as do not rely on any empirical observations), is that existence and reality coincide and have no degrees: either something exists and thus is real, or it does not. It immediately follows that for the future and past to be real, they must exist. This is why he interprets the statement "M is present, has been future, and will be past" as a statement about M existing in the present bearing the property of being present, and existing in the past bearing the property of being future, and existing in the future bearing the property of being past. This interpretation of the expression, if correct, does say that M is future, present, and past, which is contradictory. However, since it starts from the premise that the future and past can only be real by existing, then it remains to show that this is what the A-view of time assumes.

The C-series
Having come to the conclusion that reality can neither form an A- nor a B-series, despite appearances to the contrary, McTaggart finds it necessary to explain what the world is really like, given that it is different from what it appears to be. Here is where the C-series comes into play. McTaggart does not say much about the C-series in the original journal article, but in The Nature of Existence he devotes six whole chapters to discussing it (1927: Chs. 44–9). The C-series is rarely given much attention. When it is mentioned, it is described as "an expression synonymous with 'B-series' when the latter is shorn of its temporal connotations" (Shorter 1986: 226). There is a grain of truth in this, but there is more to the C-series than this. Stripping the temporal features from the B-series only gives what the C- and B-series have minimally in common, notably the constituents of the series and the formal characteristics of being linear, asymmetric, and transitive. However, the C-series has features that the B-series does not have. The constituents of the C-series are mental states (a consequence of McTaggart's argument in Ch. 34 of The Nature of Existence that reality cannot really be material), which are related to each other on the basis of their conceptual content in terms of being included in and inclusive of (1927: sect. 566 & Ch. 60). These atemporal relations are meant to provide what the earlier/later than relation cannot, notably explain why an illusion of change and temporal succession can arise in an atemporal reality.

Influence
McTaggart's argument has had an enormous influence on the philosophy of time. His phenomenological analysis of the appearance of time has been accepted as good and true even by those who firmly deny the end conclusion that time is unreal. For instance, J. S. Findlay (1940) and A. Prior (1967) took McTaggart's phenomenological analysis as their point of departure in the development of modern tense logic. McTaggart's characterisation of the appearance of time in terms of the A- and B-series served to sharpen the contrast between the two emerging and rival views of time that we now know as the A- and B-views of time.
The assumption is that the A-view, in accepting the reality of tense, represents time as being like an A-series, and that the B-view, in rejecting the reality of tense, represents time as being like a B-series. The two objections that McTaggart develops against the conception of time as forming an A- and a B-series are still the two main objections with which the A- and B-views of time struggle. Notably, is the A-view contradictory, and is the B-view able to incorporate an account of change? The controversy about McTaggart's argument for the unreality of time continues unabated (see, for instance, Smith 2011; Cameron 2015; Mozersky 2015; Ingthorsson 2016).

Editions
J. M. E. McTaggart (1908). "The Unreality of Time". Mind 17: 457–73.
J. M. E. McTaggart (1927). The Nature of Existence (Volume 2). Cambridge: Cambridge University Press.

See also
Julian Barbour, a scholar who has also argued about the unreality of time
McTaggartian change
Philosophy of space and time

Notes

References
Baldwin, Thomas. 1999. "Back to the Present", Philosophy 74(288): 177–97.
Braithwaite, R. B. 1928. "Symposium: Time and Change", Proceedings of the Aristotelian Society, Supplementary Volumes, 8, Mind Matter and Purpose: 143–188.
Broad, C. D. 1933. An Examination of McTaggart's Philosophy, Vol. I. Cambridge: Cambridge University Press.
Broad, C. D. 1938. An Examination of McTaggart's Philosophy, Vol. II. Cambridge: Cambridge University Press.
Cameron, Ross. 2015. The Moving Spotlight: An Essay on Time and Ontology. Oxford: Oxford University Press.
Christensen, Ferrel. 1974. "McTaggart's Paradox and the Nature of Time", Philosophical Quarterly 24: 289–99.
Dummett, Michael. 1960. "A Defense of McTaggart's Proof of the Unreality of Time", Philosophical Review 69: 497–504.
Dyke, Heather. 2002. "McTaggart and the Truth about Time", Royal Institute of Philosophy Supplement 50, Supplement: 137–52.
Findlay, J. N. 1941. "Time: A Treatment of some Puzzles", Australasian Journal of Philosophy 19(3): 216–35.
Gotshalk, D. W. 1930. "McTaggart on Time", Mind 39(153): 26–42.
Ingthorsson, R. D. 1998. "McTaggart and the Unreality of Time", Axiomathes 9(3): 287–306.
Ingthorsson, R. D. 2001. "Temporal Parity and the Problem of Change", SATS–Nordic Journal of Philosophy 2(2): 60–79.
Ingthorsson, R. D. 2016. McTaggart's Paradox. New York: Routledge.
LePoidevin, Robin. 1991. Change, Cause and Contradiction: A Defence of the Tenseless Theory of Time. London: Macmillan Press Ltd.
Lloyd, Genevieve. 1977. "Tense and Predication", Mind 86: 433–8.
Lowe, E. J. 1987. "The Indexical Fallacy in McTaggart's Proof of the Unreality of Time", Mind 96: 62–70.
Marhenke, P. 1935. "McTaggart's Analysis of Time". In The Problem of Time, edited by Stephen C. Pepper et al. University of California Publications in Philosophy, Vol 18(6). Berkeley, CA: University of California Publications; repr. 1969 New York: Johnson Reprint Corp: 151–74.
Mellor, D. H. 1981. Real Time. Cambridge: Cambridge University Press.
Mellor, D. H. 1998. Real Time II. London: Routledge.
Mozersky, Joshua M. 2015. Time, Language, and Ontology. Oxford: Oxford University Press.
Oakeley, Hilda D. 1946–7. "The Philosophy of Time and the Timeless in McTaggart's Nature of Existence", Proceedings of the Aristotelian Society 47: 105–28.
Oaklander, L. Nathan. 1984. Temporal Relations and Temporal Becoming: A Defense of a Russellian Theory of Time. Lanham: University Press of America.
Prior, Arthur N. 1967. Past, Present and Future. Oxford: Clarendon Press.
Shorter, Michael. 1986. "Subjective and Objective Time", Proceedings of the Aristotelian Society, Supplementary Volumes 60: 223–34.
Smart, J. J. C. 1949. "The River of Time", Mind 58(232): 483–94.
Smith, Nicholas J. J. 2011. "Inconsistency in the A-Theory", Philosophical Studies 156: 231–47.

Further reading
Peter Bieri, 1972. Zeit und Zeiterfahrung (Frankfurt am Main: Suhrkamp)
C. D. Broad, An examination of McTaggart's philosophy. Vol. 1. Cambridge University Press, 1933
C. D. Broad, An examination of McTaggart's philosophy. Vol. 2. Cambridge University Press, 1938
Gerald Rochelle, 1991. The Life and Philosophy of J.McT.E. McTaggart 1866–1925 (Lewiston NY: Edwin Mellen Press)
Gerald Rochelle, 1998. Behind Time: The incoherence of time and McTaggart's atemporal replacement (Aldershot: Ashgate)
Gerald Rochelle, 1998. "Killing time without injuring eternity – McTaggart's C series," Idealistic Studies 28(3): 159–69.
Robin Le Poidevin ed., 2002. Questions of Time and Tense (Oxford: Oxford University Press)
R. D. Ingthorsson, 2016. McTaggart's Paradox (New York: Routledge).

External links
McTaggart, "The Unreality of Time"
Time (Stanford Encyclopedia of Philosophy)
Author's Introduction to "The Unreality of Time"

Philosophy papers
Philosophy of time
1908 essays
Works originally published in Mind (journal)
The Unreality of Time
[ "Physics" ]
4,290
[ "Spacetime", "Philosophy of time", "Physical quantities", "Time" ]
1,491,567
https://en.wikipedia.org/wiki/Rm%20%28Unix%29
rm (short for remove) is a basic command on Unix and Unix-like operating systems used to remove objects such as computer files, directories and symbolic links from file systems, and also special files such as device nodes, pipes and sockets, similar to the del command in MS-DOS, OS/2, and Microsoft Windows. The command is also available in the EFI shell.

Overview
The rm command removes references to objects from the filesystem using the unlink system call, where those objects might have had multiple references (for example, a file with two different names), and the objects themselves are discarded only when all references have been removed and no programs still have open handles to the objects. This allows for scenarios where a program can open a file, immediately remove it from the filesystem, and then use it for temporary space, knowing that the file's space will be reclaimed after the program exits, even if it exits by crashing. The command generally does not destroy file data, since its purpose is really merely to unlink references, and the filesystem space freed may still contain leftover data from the removed file. This can be a security concern in some cases, and hardened versions sometimes provide for wiping out the data as the last link is being cut, and programs such as shred and srm are available which specifically provide data wiping capability. rm is generally only seen on UNIX-derived operating systems, which typically do not provide for recovery of deleted files through a mechanism like the recycle bin, hence the tendency for users to enclose rm in some kind of wrapper to limit accidental file deletion. There are undelete utilities that will attempt to reconstruct the index and can bring the file back if the parts were not reused.

History
On some old versions of Unix, the rm command would delete directories if they were empty. This behaviour can still be obtained in some versions of rm with the -d flag, e.g., the BSDs (such as FreeBSD, NetBSD, OpenBSD and macOS) derived from 4.4BSD-Lite2. The version of rm bundled in GNU coreutils was written by Paul Rubin, David MacKenzie, Richard Stallman, and Jim Meyering. This version also provides the -d option, to help with compatibility. The same functionality is provided by the standard rmdir command. The -i option in Version 7 replaced dsw, or "delete from switches", which debuted in Version 1. Doug McIlroy wrote that dsw "was a desperation tool designed to clean up files with unutterable names". The command is available as a separate package for Microsoft Windows as part of the UnxUtils collection of native Win32 ports of common GNU Unix-like utilities. KolibriOS includes an implementation of the command. The command has also been ported to the IBM i operating system.

Syntax
rm deletes the files specified after any options. Users can use a full path or a relative file path to specify the files to delete. rm does not delete a directory by default. For example:
rm foo
deletes the file "foo" in the directory the user is currently in. rm, like other commands, uses options to specify how it will behave:
-r, "recursive," which removes directories, removing the contents recursively beforehand (so as not to leave files without a directory to reside in).
-i, "interactive," which asks for every deletion to be confirmed.
-f, "force," which ignores non-existent files and overrides any confirmation prompts (effectively canceling -i), although it will not remove files from a directory if the directory is write-protected.
-v, "verbose," which prints what rm is doing onto the terminal -d, "directory," which deletes an empty directory, and only works if the specified directory is empty. --one-file-system, only removes files on the same file system as the argument, and will ignore mounted file systems. rm can be overlain by a shell alias (C shell alias, Bourne shell or Bash) function of "rm -i" so as to avoid accidental deletion of files. If a user still wishes to delete a large number of files without confirmation, they can manually cancel out the -i argument by adding the -f option (as the option specified later on the expanded command line "rm -i -f" takes precedence). Unfortunately this approach generates dangerous habits towards the use of wildcarding, leading to its own version of accidental removals. rm -rf (variously, rm -rf /, rm -rf *, and others) is frequently used in jokes and anecdotes about Unix disasters, such as the loss of many files during the production of film Toy Story 2 at Pixar. The rm -rf / variant of the command, if run by a superuser, would cause every file accessible from the present file system to be deleted from the machine. rm is often used in conjunction with xargs to supply a list of files to delete: xargs rm < filelist Or, to remove all PNG images in all directories below the current one: find . -name '*.png' -exec rm {} + Permissions Usually, on most filesystems, deleting a file requires write permission on the parent directory (and execute permission, in order to enter the directory in the first place). (Note that, confusingly for beginners, permissions on the file itself are irrelevant. However, GNU rm asks for confirmation if a write-protected file is to be deleted, unless the -f option is used.) To delete a directory (with rm -r), one must delete all of its contents recursively. This requires that one must have read and write and execute permission to that directory (if it's not empty) and all non-empty subdirectories recursively (if there are any). The read permissions are needed to list the contents of the directory in order to delete them. This sometimes leads to an odd situation where a non-empty directory cannot be deleted because one doesn't have write permission to it and so cannot delete its contents; but if the same directory were empty, one would be able to delete it. If a file resides in a directory with the sticky bit set, then deleting the file requires one to be the owner of the file. Protection of the filesystem root Sun Microsystems introduced "rm -rf /" protection in Solaris 10, first released in 2005. Upon executing the command, the system now reports that the removal of / is not allowed. Shortly after, the same functionality was introduced into FreeBSD version of rm utility. GNU rm refuses to execute rm -rf / if the --preserve-root option is given, which has been the default since version 6.4 of GNU Core Utilities was released in 2006. In newer systems, this failsafe is always active, even without the option. To run the command, user must bypass the failsafe by adding the option --no-preserve-root, even if they are the superuser. User-proofing Systems administrators, designers, and even users often attempt to defend themselves against accidentally deleting files by creating an alias or function along the lines of: alias rm="rm -i" rm () { /bin/rm -i "$@" ; } This results in rm asking the user to confirm on a file-by-file basis whether it should be deleted, by pressing the Y or N key. 
Unfortunately, this tends to train users to be careless about the wildcards they hand into their rm commands, as well as encouraging a tendency to alternately pound y and the return key to affirm removals, until just past the one file they needed to keep. Users have even been seen going as far as "yes | rm files", which automatically inserts "y" for each file.
A compromise that allows users to confirm just once, encourages proper wildcarding, and makes verification of the list easier can be achieved with something like:
if [ -n "$PS1" ] ; then
  rm () {
    ls -FCsd "$@"
    echo 'remove[ny]? ' | tr -d '\012' ; read
    if [ "_$REPLY" = "_y" ]; then
      /bin/rm -rf "$@"
    else
      echo '(cancelled)'
    fi
  }
fi
It is important to note that this function should not be made into a shell script, which would run a risk of it being found ahead of the system rm in the search path, nor should it be allowed in non-interactive shells where it could break batch jobs. Enclosing the definition in the if [ -n "$PS1" ] ; then .... ; fi construct protects against the latter.
There exist third-party alternatives which prevent accidental deletion of important files, such as "safe-rm" or "trash".

Maximum command line argument limitation
The GNU Core Utilities implementation used in multiple Linux distributions is subject to the kernel's limit on command line arguments. Arguments are nominally limited to 32 times the kernel's allocated page size; systems with a 4 KB page size thus have an argument size limit of 128 KB. Before kernel 2.6.23 (released on 9 October 2007), the limit was defined at kernel compile time and could be modified by changing the variable MAX_ARG_PAGES in the include/linux/binfmts.h file. Newer kernels limit the maximum argument length to 25% of the maximum stack limit (ulimit -s). Exceeding the limit prompts the error message /bin/rm: Argument list too long.

See also
srm (Unix): secure file removal on Unix
unlink(): the underlying system call this user-space program uses for its main functionality
del (command)
deltree
dsw (command) - an obsolete Unix command for deleting difficult files

References
Further reading
External links
File deletion Standard Unix programs Unix SUS2008 utilities Plan 9 commands Inferno (operating system) commands IBM i Qshell commands
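As a sketch of the reference-counting behaviour described in the Overview above (POSIX calls only; the file name scratch.tmp and the minimal error handling are illustrative, not taken from any rm implementation):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Create a scratch file and keep a descriptor open on it. */
    int fd = open("scratch.tmp", O_RDWR | O_CREAT | O_EXCL, 0600);
    if (fd < 0) { perror("open"); return 1; }

    /* Remove the directory entry immediately; the inode survives
       because this process still holds an open handle. */
    unlink("scratch.tmp");

    /* The now-unnamed file remains fully usable as temporary space. */
    char buf[15] = {0};
    write(fd, "temporary data", 14);
    lseek(fd, 0, SEEK_SET);
    read(fd, buf, 14);
    printf("read back: %s\n", buf);

    /* Only when the last handle closes is the space reclaimed. */
    close(fd);
    return 0;
}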
Rm (Unix)
[ "Technology" ]
2,141
[ "IBM i Qshell commands", "Standard Unix programs", "Computing commands", "Plan 9 commands", "Inferno (operating system) commands" ]
1,491,614
https://en.wikipedia.org/wiki/Dennis%20Brutus
Dennis Vincent Brutus (28 November 1924 – 26 December 2009) was a South African activist, educator, journalist and poet best known for his campaign to have South Africa banned from the Olympic Games due to its racial policy of apartheid.

Life and work
Born in Salisbury, Southern Rhodesia, in 1924 to South African parents, Brutus was of indigenous Khoi, Dutch, French, English, German and Malay ancestry. His parents moved back home to Port Elizabeth when he was aged four, and young Brutus was classified under South Africa's apartheid racial code as "coloured". Brutus was a graduate of the University of Fort Hare (BA, 1946) and of the University of the Witwatersrand, where he studied law. He taught English and Afrikaans at several high schools in South Africa after 1948, but was eventually dismissed for his vocal criticism of apartheid. He served on the faculty of the University of Denver, Northwestern University and the University of Pittsburgh, and was a Professor Emeritus of the last institution. In 2008, Brutus was awarded the Lifetime Honorary Award by the South African Department of Arts and Culture for his lifelong dedication to African and world poetry and the literary arts.

Activist
Brutus was an activist against the apartheid government of South Africa in the 1950s and 1960s. He learned politics in the Trotskyist movement of the Eastern Cape. Although not an accomplished athlete in his own right, he was motivated by the unfairness of selections for athletic teams. He joined the Anti-Coloured Affairs Department organisation (Anti-CAD), a Trotskyist group that organised against the Coloured Affairs Department, which was an attempt by the government to institutionalise divisions between blacks and coloureds. In 1958, he formed the South African Sports Association, and as its Secretary was strongly opposed to a proposed cricket tour by Frank Worrell’s West Indies to South Africa in 1959, leading a successful campaign to have it cancelled. In 1962, Brutus was a co-founder of the South African Non-Racial Olympic Committee (SANROC), an organisation that would be heavily influential in the banning of apartheid-era South Africa from the Olympics in 1964. In 1961, Brutus had been banned for his political activities. As South Africa attempted, in 1968, to get back into the Olympics by arguing that it would field multi-racial teams, SANROC successfully pointed out that those teams were chosen on a segregated basis, leading to South Africa's continued ban from 1968 until 1992.

Arrest and jail
In 1963, Brutus was arrested for trying to meet with an International Olympic Committee (IOC) official; he was accused of breaking the terms of his "banning," which were that he could not meet with more than two people outside his family, and he was sentenced to 18 months in jail. However, he "jumped bail" by trying to leave South Africa to attend the IOC meeting in Baden-Baden, West Germany, on behalf of SANROC, and while he was in Mozambique, travelling on a Rhodesian passport, the Portuguese colonial secret police arrested him and returned him to South Africa. There, while trying to escape, he was shot in the back at point-blank range. After only partly recovering from the wound, Brutus was sent to Robben Island for 16 months, five of them in solitary confinement. He was in the cell next to Nelson Mandela's. Brutus was in prison when news broke of the country's suspension from the 1964 Tokyo Olympics, for which he had campaigned. Brutus was forbidden to teach, write and publish in South Africa.
His first collection of poetry, Sirens, Knuckles and Boots (1963), was published in Nigeria while he was in prison. The book received the Mbari Poetry Prize, awarded to a black poet of distinction, but Brutus turned it down on the grounds of its racial exclusivity. He was the author of 14 books.

Release from jail
After he was released in 1965, Brutus left South Africa on an exit permit, which meant he could never return home while the apartheid regime stayed in power. He went into exile in Britain, where he first met George Houser, the executive director of the American Committee on Africa (ACOA).
South Africa made a concerted effort to get reinstated for the Olympic Games in Mexico City in 1968. Its Prime Minister, John Vorster, outlined a new policy of fielding a multi-racial team. At first the IOC accepted this new policy and was going to allow South Africa to compete, but SANROC pointed out that there would be no mixed sporting events within South Africa and that all South African athletes chosen for the Games would therefore be chosen under a segregated framework.
In 1967, Brutus came to the United States under the auspices of the ACOA on a speaking tour, during which he acquainted Americans more closely with the situation in South Africa, informed American sports organisations about the segregated conditions that South African athletes had to endure, and raised money for the ACOA's Africa Defense and Aid Fund to support the defence of those charged under the apartheid laws. The Supreme Council for Sport in Africa, which represented the independent African nations at the IOC, threatened a boycott if South Africa was included in the 1968 Games. In co-operation with SANROC, the ACOA organised a boycott of American athletes in February 1968. Jackie Robinson, the first African-American athlete to break the colour barrier in major league baseball, published a statement calling for the continued suspension of South Africa from the Olympic Games. As a result of the international pressure, the IOC relented and kept South Africa out of the Olympic Games from 1968 until 1992.

Life in the United States
In 1971, Brutus settled in the United States, where he served as professor of African Literature at Northwestern University. When his British passport was cancelled in the wake of Zimbabwe's independence in 1980, he was threatened with deportation, and he fought a protracted and highly publicized legal battle until 1983, when he was granted asylum in the United States. He continued to participate in protests against the apartheid government while teaching in the United States. He was eventually "unbanned" by the South African government in 1990, and in 1991 he became one of the sponsors of the Committee for Academic Freedom in Africa. Brutus taught at Amherst College, Cornell University, and Swarthmore College before heading, in 1986, to the University of Pittsburgh, where he served as a professor of African Literature until his retirement.

Return to South Africa, poetry and activism
He returned to South Africa and was based at the University of KwaZulu-Natal, where he often contributed to the annual Poetry Africa Festival hosted by the university and supported activism against neo-liberal policies in contemporary South Africa through working with NGOs. In December 2007, Brutus was to be inducted into the South African Sports Hall of Fame.
At the induction ceremony, he publicly turned down his nomination. According to fellow writer Olu Oguibe, interim Director of the Institute for African American Studies at the University of Connecticut, "Brutus was arguably Africa's greatest and most influential modern poet after Leopold Sedar Senghor and Christopher Okigbo, certainly the most widely-read, and no doubt among the world's finest poets of all time. More than that, he was a fearless campaigner for justice, a relentless organizer, an incorrigible romantic, and a great humanist and teacher."
Brutus died on 26 December 2009, aged 85, at his home in Cape Town, South Africa, from prostate cancer. He is survived by two sisters, eight children including his son Anthony, nine grandchildren, and four great-grandchildren.
The Dennis Brutus Tapes: Essays at Autobiography, edited by Bernth Lindfors, was published in 2011, including transcripts of tapes recorded when he was a visiting professor at the University of Texas at Austin in 1974–75, in which he reflects on his life and career.

Bibliography
Sirens, Knuckles and Boots (Mbari Productions, 1963).
Letters to Martha and Other Poems from a South African Prison (Heinemann, 1968).
Poems from Algiers (African and Afro-American Studies and Research Institute, 1970).
A Simple Lust (Heinemann, 1973).
China Poems (African and Afro-American Studies and Research Centre, 1975).
Stubborn Hope (Three Continents Press/Heinemann, 1978).
Salutes and Censures (Fourth Dimension, 1982).
Airs & Tributes (Whirlwind Press, 1989).
Still the Sirens (Pennywhistle Press, 1993).
Remembering Soweto, ed. Lamont B. Steptoe (Whirlwind Press, 2004).
Leafdrift, ed. Lamont B. Steptoe (Whirlwind Press, 2005).
Sustar, Lee, and Karim, Aisha (eds), Poetry and Protest: A Dennis Brutus Reader (Haymarket Books, 2006).
It is The Constant Image Of Your Face: A Dennis Brutus Reader (2008).
Brown, Geoff, and Hogsbjerg, Christian, Apartheid is not a Game: Remembering the Stop the Seventy Tour campaign (London: Redwords, 2020).

See also
List of people subject to banning orders under apartheid

References
External links
Dennis Brutus Papers, 1960–1984, Northwestern University Archives, Evanston, Illinois
Dennis Brutus Papers, Worcester State University Archives, Worcester, Massachusetts
Dennis Brutus Papers on sport, anti-apartheid activities and literature, 1958–1971, Borthwick Institute, University of York
"Dennis Brutus reads from his work" for the WGBH series, Ten O'clock News
"Dennis Brutus poem 'Gull' Copenhagen conference"
Dennis Brutus Defense Committee
Western Massachusetts Dennis Brutus Defense Committee

Obituaries
Dennis Brutus 1924–2009: this "cyber-tombeau" at Silliman's Blog by poet Ron Silliman includes comments, tributes, and links
Dennis Brutus (1924–2009): South African Poet and Activist Dies in Cape Town – video by Democracy Now!, 28 December 2009.
1924 births 2009 deaths 20th-century South African poets South African anti-apartheid activists Coloureds Environmental ethics Inmates of Robben Island Northwestern University faculty People from Harare Rhodesian emigrants to South Africa South African expatriates in Southern Rhodesia South African expatriates in the United States South African people of Dutch descent South African people of English descent South African people of French descent South African people of German descent South African people of Malay descent South African refugees South African Trotskyists Tax resisters University of Denver faculty University of Fort Hare alumni University of Pittsburgh faculty University of the Witwatersrand alumni Writers from Pittsburgh Zimbabwean people of German descent African poets
Dennis Brutus
[ "Environmental_science" ]
2,174
[ "Environmental ethics" ]
1,491,909
https://en.wikipedia.org/wiki/Canadian%20Medical%20Hall%20of%20Fame
The Canadian Medical Hall of Fame is a Canadian charitable organization, founded in 1994, that honours Canadians who have contributed to the understanding of disease and to improving people's health. It has an exhibit hall in London, Ontario, an annual induction ceremony, career exploration programs for youth, and a virtual hall of fame.

Laureates
References
External links
Official site
1994 establishments in Ontario Health charities in Canada Medical Organizations based in London, Ontario Museums established in 1994 Companies based in London, Ontario Museums in London, Ontario Medical museums in Canada Science and technology halls of fame
Canadian Medical Hall of Fame
[ "Technology" ]
110
[ "Science and technology awards", "Science and technology halls of fame" ]
1,491,913
https://en.wikipedia.org/wiki/Primosome
In molecular biology, a primosome is a protein complex responsible for creating RNA primers on single-stranded DNA during DNA replication. The primosome consists of seven proteins: DnaG primase, DnaB helicase, DnaC helicase assistant, DnaT, PriA, PriB, and PriC. At each replication fork, the primosome is used once on the leading strand of DNA and repeatedly on the lagging strand, initiating each Okazaki fragment. Initially, the complex formed by PriA, PriB, and PriC binds to DNA. Then the DnaB-DnaC helicase complex attaches, along with DnaT. This structure is referred to as the pre-primosome. Finally, DnaG binds to the pre-primosome, forming a complete primosome.
The primosome attaches 1-10 RNA nucleotides to the single-stranded DNA, creating a DNA-RNA hybrid. This sequence of RNA is used as a primer to initiate DNA polymerase III. The RNA bases are ultimately replaced with DNA bases by RNase H nuclease (in eukaryotes) or DNA polymerase I nuclease (in prokaryotes). DNA ligase then acts to join the two ends together.
Assembly of the Escherichia coli primosome requires six proteins, PriA, PriB, PriC, DnaB, DnaC, and DnaT, acting at a primosome assembly site (pas) on SSB-coated single-stranded (ss) DNA. Assembly is initiated by interactions of PriA and PriB with ssDNA and the pas. PriC, DnaB, DnaC, and DnaT then act on the PriA-PriB-DNA complex to yield the primosome.
Primosomes are nucleoprotein assemblies that activate DNA replication forks. Their primary role is to recruit the replicative helicase onto single-stranded DNA. The "replication restart" primosome, defined in Escherichia coli, is involved in the reactivation of arrested replication forks. Binding of the PriA protein to forked DNA triggers its assembly. PriA is conserved in bacteria, but its primosomal partners are not. In Bacillus subtilis, genetic analysis has revealed three primosomal proteins, DnaB, DnaD, and DnaI, that have no obvious homologues in E. coli. They are involved in primosome function both at arrested replication forks and at the chromosomal origin. Biochemical analysis of the B. subtilis DnaB and DnaD proteins has begun to unravel their role in primosome assembly. They are both multimeric and bind individually to DNA. Furthermore, DnaD stimulates DnaB binding activities. DnaD alone and the DnaD/DnaB pair interact specifically with PriA of B. subtilis on several DNA substrates. This suggests that the nucleoprotein assembly is sequential, in the order PriA, DnaD, DnaB. The preferred DNA substrate mimics an arrested DNA replication fork with an unreplicated lagging strand, structurally identical to a product of recombinational repair of a stalled replication fork.

References
Genetics
Primosome
[ "Chemistry", "Biology" ]
665
[ "Biochemistry stubs", "Biotechnology stubs", "Biochemistry", "Genetics" ]
1,492,498
https://en.wikipedia.org/wiki/Muscular%20hydrostat
A muscular hydrostat is a biological structure found in animals. It is used to manipulate items (including food) or to move its host about and consists mainly of muscles with no skeletal support. It performs its hydraulic movement without fluid in a separate compartment, as in a hydrostatic skeleton.
A muscular hydrostat, like a hydrostatic skeleton, relies on the fact that water is effectively incompressible at physiological pressures. In contrast to a hydrostatic skeleton, where muscle surrounds a fluid-filled cavity, a muscular hydrostat is composed mainly of muscle tissue. Since muscle tissue itself is mainly made of water and is also effectively incompressible, similar principles apply.

Muscular anatomy
Muscles provide the force to move a muscular hydrostat. Since muscles are only able to produce force by contracting and becoming shorter, different groups of muscles have to work against each other, with one group relaxing and lengthening as the other group provides the force by contracting. Such complementary muscle groups are termed antagonistic pairs.
The muscle fibers in a muscular hydrostat are oriented in three directions: parallel to the long axis, perpendicular to the long axis, and wrapped obliquely around the long axis.
The muscles parallel to the long axis are arranged in longitudinal bundles. The more peripherally these are located, the more elaborate the bending movements that are possible. A more peripheral distribution is found in tetrapod tongues, octopus arms, nautilus tentacles, and elephant trunks. Tongues that are adapted for protrusion typically have centrally located longitudinal fibers; these are found in snake tongues, many lizard tongues, and the mammalian anteaters.
The muscles perpendicular to the long axis may be arranged in a transverse, circular, or radial pattern. A transverse arrangement involves sheets of muscle fibers running perpendicular to the long axis, usually alternating between horizontal and vertical orientations. This arrangement is found in the arms and tentacles of squid and octopuses, and in most mammalian tongues. A radial arrangement involves fibers radiating out in all directions from the center of the organ; this is found in the tentacles of the chambered nautilus and in the elephant proboscis (trunk). A circular arrangement has rings of contractile fibers around the long axis; this is found in many mammalian and lizard tongues, along with squid tentacles.
Helical or oblique fibers around the long axis are generally present in two layers with opposite chirality and wrap around the central core of musculature.

Mechanism of operation
In a muscular hydrostat, the musculature itself both creates movement and provides skeletal support for that movement. It can provide this support because it is composed primarily of an incompressible liquid and is thus constant in volume. The most important biomechanical feature of a muscular hydrostat is this constant volume: muscle is composed primarily of an aqueous liquid that is essentially incompressible at physiological pressures. In a muscular hydrostat, or any other structure of constant volume, a decrease in one dimension will cause a compensatory increase in at least one other dimension. The mechanisms of elongation, bending and torsion in muscular hydrostats all depend on this constancy of volume to effect shape changes in the absence of stiff skeletal attachments. Because volume is constant, when the diameter decreases the length must increase, and vice versa.
For a cylinder, the volume is V = πr²l. Differentiating the radius with respect to the length at constant volume gives dr/dl = -r/(2l). From this, if the diameter decreases by 25% (r → 0.75r), constancy of volume requires the length to increase by a factor of 1/0.75² ≈ 1.78, i.e., by approximately 80%, which may produce a large amount of force depending on what the animal is trying to do (a worked version of this calculation appears at the end of this entry).

Elongation and shortening
Elongation in hydrostats is caused by the contraction of transverse or helical musculature arrangements. Given the constant volume of muscular hydrostats, these contractions cause an elongation of the longitudinal muscles. The change in length is proportional to the square of the decrease in diameter. Therefore, contraction of muscles perpendicular to the long axis will decrease the diameter and, at constant volume, elongate the organ lengthwise. Shortening, on the other hand, can be caused by contraction of the muscles parallel to the long axis, which increases the organ's diameter as its length decreases. The muscles used in elongation and shortening maintain support through the constant-volume principle and their antagonistic relationships with each other. These mechanisms are often seen in prey capture by shovelnose frogs and chameleons, as well as in the human tongue and many other examples. In some frogs, the tongue elongates to up to 180% of its resting length. Extra-oral tongues show higher length/width ratios than intra-oral tongues, allowing for a greater increase in length (more than 100% of resting length, compared to only about 50% for intra-oral tongues). Greater elongation trades off against the force produced by the organ: as the length/width ratio increases, elongation increases while force decreases. Squids have been shown to use muscular hydrostat elongation in prey capture and feeding as well.

Bending
The bending of a muscular hydrostat can occur in two ways, both of which require the use of antagonistic muscles. The unilateral contraction of a longitudinal muscle alone will produce little or no bending and will instead increase the diameter of the muscular hydrostat, because the constant-volume condition must be met. To bend the hydrostat, the unilateral contraction of longitudinal muscle must be accompanied by contractile activity of transverse, radial, or circular muscles that maintains a constant diameter. Bending can also occur the other way around: contraction of transverse, radial, or circular muscles decreases the diameter, while longitudinal muscle activity maintains a constant length on one side of the structure. The bending of a muscular hydrostat is particularly important in animal tongues. This motion provides the mechanism by which a snake flicks the air with its tongue to sense its surroundings, and it is also responsible for the complexities of human speech.

Stiffening
The stiffening of a muscular hydrostat is accomplished by the muscle or connective tissue of the hydrostat resisting dimensional changes.

Torsion
Torsion is the twisting of a muscular hydrostat along its long axis and is produced by helical or oblique arrangements of musculature of opposite handedness. For a counter-clockwise torsion it is necessary for a right-hand helix to contract; contraction of a left-hand helix causes clockwise torsion. The simultaneous contraction of both right- and left-hand helixes results in an increased resistance to torsional forces.
The oblique or helical muscle arrays in muscular hydrostats are located in the periphery of the structure, wrapping the inner core of musculature, and this peripheral location provides a larger moment arm through which the torque is applied than a more central location would. The effect of helically arranged muscle fibers, which may also contribute to changes in length of a muscular hydrostat, depends on the fiber angle: the angle that the helical muscle fibers make with the long axis of the structure. The length of a helical fiber is at a minimum when the fiber angle equals 54°44′ and at a maximum as the fiber angle approaches 0° or 90°. Summed up, this means that helically arranged muscle fibers with a fiber angle greater than 54°44′ create force for both torsion and elongation, while helically arranged muscle fibers with a fiber angle less than 54°44′ create force for both torsion and shortening. The fiber angle of the oblique or helical muscle layers must increase during shortening and decrease during lengthening. In addition to creating a torsional force, the oblique muscle layers will therefore create a force for elongation that may aid the transverse musculature in resisting longitudinal compression.

Examples
Whole bodies of many worms
Feet of mollusks (including arms and tentacles in cephalopods)
Tongues of mammals and reptiles
Trunks of elephants
The snout of the West Indian manatee

Technological applications
A group of engineers and biologists have collaborated to develop robotic arms that are able to manipulate and handle various objects of different size, mass, surface texture and mechanical properties. These robotic arms have many advantages over previous robotic arms that were not based on muscular hydrostats.

References
Animal anatomy Biomechanics
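A worked version of the constant-volume calculation from the "Mechanism of operation" section above, in standard notation (generic cylinder geometry, not a measurement of any particular organ):

V = \pi r^{2} l = \text{constant}
\quad\Rightarrow\quad 2 r l \,\mathrm{d}r + r^{2} \,\mathrm{d}l = 0
\quad\Rightarrow\quad \frac{\mathrm{d}r}{\mathrm{d}l} = -\frac{r}{2l}

\text{For a 25\% decrease in diameter } (r' = 0.75\,r):\qquad
l' = l \left(\frac{r}{r'}\right)^{2} = \frac{l}{(0.75)^{2}} \approx 1.78\,l

This is an increase of roughly 78%, consistent with the "approximately 80%" figure quoted in the text.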
Muscular hydrostat
[ "Physics" ]
1,754
[ "Biomechanics", "Mechanics" ]
1,492,704
https://en.wikipedia.org/wiki/Centrolecithal
Centrolecithal (Greek kentron = center of a circle, lekithos = yolk) describes the placement of the yolk in the centre of the cytoplasm of ova. Many arthropod eggs are centrolecithal. During cytokinesis, centrolecithal zygotes undergo meroblastic cleavage, in which the cleavage plane extends only to the accumulated yolk, so that cleavage is superficial. This is due to the large, dense yolk found within centrolecithal eggs, which also delays embryonic development.

See also
Cell cycle
Isolecithal
Telolecithal

References
Centrolecithal
Centrolecithal
[ "Biology" ]
131
[ "Cell biology" ]
1,493,025
https://en.wikipedia.org/wiki/Glassphalt
Glassphalt or glasphalt (a portmanteau of glass and asphalt) is a variety of asphalt that uses crushed glass. It has been used as an alternative to conventional bituminous asphalt pavement since the early 1970s. Glassphalt must be properly mixed and placed if it is to meet roadway pavement standards, requiring some modifications to generally accepted asphalt procedures. Glassphalt generally contains about 10–20% glass by weight.

External links
Recycled Glass in Asphalt
Preparation and Placement of Glassphalt

Building materials Glass applications Pavements
Glassphalt
[ "Physics", "Engineering" ]
113
[ "Building engineering", "Construction", "Materials", "Building materials", "Matter", "Architecture" ]
1,493,053
https://en.wikipedia.org/wiki/Cockayne%20syndrome
Cockayne syndrome (CS), also called Neill-Dingwall syndrome, is a rare and fatal autosomal recessive neurodegenerative disorder characterized by growth failure, impaired development of the nervous system, abnormal sensitivity to sunlight (photosensitivity), eye disorders and premature aging. Failure to thrive and neurological disorders are criteria for diagnosis, while photosensitivity, hearing loss, eye abnormalities, and cavities are other very common features. Problems with any or all of the internal organs are possible. It is associated with a group of disorders called leukodystrophies, which are conditions characterized by degradation of neurological white matter. There are two primary types of Cockayne syndrome: Cockayne syndrome type A (CSA), arising from mutations in the ERCC8 gene, and Cockayne syndrome type B (CSB), resulting from mutations in the ERCC6 gene. The underlying disorder is a defect in a DNA repair mechanism. Unlike other defects of DNA repair, patients with CS are not predisposed to cancer or infection. Cockayne syndrome is a rare but destructive disease, usually resulting in death within the first or second decade of life. The mutations of specific genes in Cockayne syndrome are known, but the widespread effects of the disease and its relationship with DNA repair are yet to be well understood.
It is named after English physician Edward Alfred Cockayne (1880–1956), who first described it in 1936 and re-described it in 1946. Neill-Dingwall syndrome was named after Mary M. Dingwall and Catherine A. Neill. These two scientists described the case of two brothers with Cockayne syndrome and asserted it was the same disease described by Cockayne. In their article, the two added to the known signs of the disease through their discovery of calcifications in the brain. They also compared Cockayne syndrome to what is now known as Hutchinson–Gilford progeria syndrome (HGPS), then called progeria, due to the advanced aging that characterizes both disorders.

Types
CS Type I, the "classic" form, is characterized by normal fetal growth with the onset of abnormalities in the first two years of life. Vision and hearing gradually decline. The central and peripheral nervous systems progressively degenerate until death in the first or second decade of life as a result of serious neurological degradation. Cortical atrophy is less severe in CS Type I.
CS Type II is present from birth (congenital) and is much more severe than CS Type I. It involves very little neurological development after birth. Death usually occurs by age seven. This specific type has also been designated cerebro-oculo-facio-skeletal (COFS) syndrome or Pena-Shokeir syndrome Type II. COFS syndrome is so named due to the effects it has on the brain, eyes, face, and skeletal system, as the disease frequently causes brain atrophy, cataracts, loss of fat in the face, and osteoporosis. COFS syndrome can be further subdivided into several conditions (COFS types 1, 2, 3 (associated with xeroderma pigmentosum) and 4). Typically, patients with this early-onset form of the disorder show more severe brain damage, including reduced myelination of white matter, and more widespread calcifications, including in the cortex and basal ganglia.
CS Type III, characterized by late onset, is typically milder than Types I and II. Patients with Type III often live into adulthood.
Xeroderma pigmentosum-Cockayne syndrome (XP-CS) occurs when an individual also has xeroderma pigmentosum, another DNA repair disease. Some symptoms of each disease are expressed.
For instance, the freckling and pigment abnormalities characteristic of XP are present, as are the neurological disorder, spasticity, and underdevelopment of the sexual organs characteristic of CS. However, hypomyelination and the facial features of typical CS patients are not present.

Causes
Every minute, the body pumps 10 to 20 liters of oxygen through the blood, carrying it to billions of cells. In its normal molecular form, oxygen is harmless, but cellular metabolism involving oxygen, particularly under hyperoxia (excess oxygen), generates several highly reactive forms of oxygen called free radicals. These free radicals can cause oxidative damage to cellular components, including the DNA. In an average human cell, several thousand DNA lesions occur every day, many of them resulting from oxidative damage. Each lesion, a damaged section of DNA, must be snipped out and the DNA repaired to preserve its normal function. Unrepaired DNA can lose its ability to code for proteins, and mutations can also result; these mutations can activate oncogenes or silence tumor suppressor genes. In normal cells, the body repairs the damaged sections. In Cockayne syndrome, by contrast, subtle defects in transcription mean that the genetic machinery for synthesizing proteins needed by the body does not operate at normal capacity; over time, on this theory, the defect results in developmental failure and death.
According to research, oxidative damage to active genes is not preferentially repaired in the cells of children with this disease, and in the most severe cases repair is slowed throughout the whole genome. Normally, repair of oxidative damage is faster in the active genes (which make up less than five percent of the genome) than in inactive regions of the DNA. The resulting accumulation of oxidative damage could impair the normal functions of the DNA and may even trigger a program of cell death (apoptosis).

Genetics
Cockayne syndrome is classified genetically as follows: mutations in the ERCC8 (also known as CSA) gene or the ERCC6 (also known as CSB) gene are the cause of Cockayne syndrome type A and type B, respectively. Mutations in the ERCC6 gene make up ~70% of cases. The proteins made by these genes are involved in repairing damaged DNA via the transcription-coupled repair mechanism, particularly the DNA in active genes. DNA damage is caused by ultraviolet rays from sunlight, by radiation, or by free radicals in the body. A normal cell can repair DNA damage before it accumulates. If either the ERCC6 or the ERCC8 gene is altered (as in Cockayne syndrome), DNA damage encountered during transcription is not repaired, causing RNA polymerase to stall at that location and interfering with gene expression. As the unrepaired DNA damage accumulates, progressively more active gene expression is impeded, leading to malfunctioning cells or cell death, which likely contributes to the signs of Cockayne syndrome such as premature aging and neuronal hypomyelination.
Mechanism
In contrast to cells with normal repair capability, CSA- and CSB-deficient cells are unable to preferentially repair cyclobutane pyrimidine dimers induced by the action of ultraviolet (UV) light on the template strand of actively transcribed genes. This deficiency reflects the loss of the ability to perform the DNA repair process known as transcription-coupled nucleotide excision repair (TC-NER).
Within the damaged cell, the CSA protein normally localizes to sites of DNA damage, particularly inter-strand cross-links, double-strand breaks and some monoadducts. The CSB protein is also normally recruited to DNA damage sites, and its recruitment is most rapid and robust in the following order: interstrand crosslinks > double-strand breaks > monoadducts > oxidative damage. CSB protein forms a complex with another DNA repair protein, SNM1A (DCLRE1A), a 5' – 3' exonuclease, that localizes to inter-strand cross-links in a transcription-dependent manner. The accumulation of CSB protein at sites of DNA double-strand breaks occurs in a transcription-dependent manner and facilitates homologous recombinational repair of the breaks. During the G0/G1 phase of the cell cycle, DNA damage can trigger a CSB-dependent recombinational repair process that uses an RNA (rather than DNA) template.
The premature aging features of CS are likely due, at least in part, to these deficiencies in DNA repair (see DNA damage theory of aging).

Diagnosis
People with this syndrome have smaller than normal head sizes (microcephaly), are of short stature (dwarfism), have sunken-appearing eyes, and have an "aged" look. They often have long limbs with joint contractures (inability to relax the muscle at a joint), a hunched back (kyphosis), and may be very thin (cachectic) due to a loss of subcutaneous fat. Their small chin, large ears, and pointy, thin nose often give an aged appearance. The skin of those with Cockayne syndrome is also frequently affected: hyperpigmentation, varicose or spider veins (telangiectasia), and serious sensitivity to sunlight are common, even in individuals without XP-CS. Patients with Cockayne syndrome often burn or blister severely with very little sun exposure. The eyes of patients can be affected in various ways, and eye abnormalities are common in CS. Cataracts and cloudiness of the cornea (corneal opacity) are common. Loss of and damage to the nerves of the optic nerve, causing optic atrophy, can occur. Nystagmus, or involuntary eye movement, and pupils that fail to dilate demonstrate a loss of control of voluntary and involuntary muscle movement. A "salt and pepper" retinal pigmentation is also a typical sign.
Diagnosis is determined by a specific test for DNA repair, which measures the recovery of RNA synthesis after exposure to UV radiation. Despite being associated with genes involved in nucleotide excision repair (NER), and unlike xeroderma pigmentosum, CS is not associated with an increased risk of cancer.

Laboratory Studies
In Cockayne syndrome patients, UV-irradiated cells show decreased DNA and RNA synthesis. Laboratory studies are mainly useful for eliminating other disorders. For example, skeletal radiography, endocrinologic tests, and chromosomal breakage studies can help exclude disorders included in the differential diagnosis.

Imaging Studies
Brain CT scanning in Cockayne syndrome patients may reveal calcifications and cortical atrophy.

Other Tests
Prenatal evaluation is possible.
Amniotic fluid cell culturing is used to demonstrate that fetal cells are deficient in RNA synthesis after UV irradiation.

Neurology
Imaging studies reveal a widespread absence of the myelin sheaths of the neurons in the white matter of the brain, and general atrophy of the cortex. Calcifications have also been found in the putamen, an area of the forebrain that regulates movements and aids some forms of learning, as well as in the cortex. Additionally, the atrophy of the central area of the cerebellum found in patients with Cockayne syndrome could also account for the lack of muscle control, particularly of involuntary movement, and the poor posture typically seen.

Treatment
There is no permanent cure for this syndrome, although patients can be treated symptomatically. Treatment usually involves physical therapy and minor surgeries to the affected organs, such as cataract removal. Wearing high-factor sunscreen and protective clothing is also recommended, because Cockayne syndrome patients are very sensitive to UV radiation. Optimal nutrition can also help. Genetic counseling for the parents is recommended, as the disorder has a 25% chance of being passed to any future children, and prenatal testing is also a possibility. Another important aspect is the prevention of recurrence of CS in other siblings; identification of the gene defects involved makes it possible to offer genetic counseling and antenatal diagnostic testing to parents who already have one affected child.
Currently, there are two ongoing projects focused on the development of gene therapy for Cockayne syndrome. The first, led by the Viljem Julijan Association for Children with Rare Diseases, aims to develop gene therapy specifically for Cockayne syndrome type B. The second, led by the Riaan Research Initiative, is dedicated to the development of gene therapy for Cockayne syndrome type A.

Prognosis
The prognosis for those with Cockayne syndrome is poor: death typically occurs by the age of 12, though it varies by disease type. There are three types of Cockayne syndrome according to the severity and onset of the symptoms. However, the differences between the types are not always clear-cut, and some researchers believe the signs and symptoms reflect a spectrum instead of distinct types:
Cockayne syndrome type A (CSA) is marked by normal development until a child is 1 or 2 years old, at which point growth slows and developmental delays are noticed. Symptoms are not apparent until the child is about a year old. Life expectancy for type A is approximately 10 to 20 years. These symptoms are seen in CS type 1 children.
Cockayne syndrome type B (CSB), also known as cerebro-oculo-facio-skeletal (COFS) syndrome (or Pena-Shokeir syndrome type B), is the most severe subtype. Symptoms are present at birth, and normal brain development stops after birth. The average lifespan for children with type B is up to 7 years of age. These symptoms are seen in CS type 2 children.
Cockayne syndrome type C (CSC) appears later in childhood, with milder symptoms than the other types and a slower progression of the disorder. People with this type of Cockayne syndrome live into adulthood, with an average lifespan of 40 to 50 years. These symptoms are seen in CS type 3.

Epidemiology
Cockayne syndrome is rare worldwide. No racial predilection is reported, and no sexual predilection is described; the male-to-female ratio is equal. Cockayne syndrome I (CS-A) manifests in childhood.
Cockayne syndrome II (CS-B) manifests at birth or in infancy, and it has a worse prognosis.

Recent research
Research published in January 2018 described CS features observed around the world, noting both similarities and differences between regions. CS has an incidence of 1 in 250,000 live births and a prevalence of approximately 1 per 2.5 million, figures that are remarkably consistent across various regions globally.

See also
Accelerated aging disease
Biogerontology
Degenerative disease
Genetic disorder
CAMFAK syndrome — thought to be a form (or subset) of Cockayne syndrome

References
External links
This article incorporates some public domain text from The U.S. National Library of Medicine

Autosomal recessive disorders Rare diseases Neurological disorders Syndromes affecting the nervous system Genodermatoses DNA replication and repair-deficiency disorders Progeroid syndromes Diseases named after discoverers
Cockayne syndrome
[ "Biology" ]
3,151
[ "Senescence", "DNA replication and repair-deficiency disorders", "Progeroid syndromes" ]
1,493,168
https://en.wikipedia.org/wiki/SlimFast
SlimFast is an American company headquartered in Palm Beach Gardens, Florida, that markets an eponymous brand of shakes, bars, snacks, packaged meals, and other dietary supplement foods sold in the U.S., Canada, France, Germany, Iceland, Ireland, Latin America, and the U.K. SlimFast promotes diets and weight loss plans featuring its food products. There is mixed evidence on the effectiveness of the diet, although it appears to function no better than behavioral counseling.

History
SlimFast was started in 1977 as a product line of the Thompson Medical Company, founded in the 1940s by S. Daniel Abraham. The product was rolled out nationwide in a marketing campaign that began on July 11, 1977, for "a fat-free, carbohydrate-free, animal-based fortified cherry-flavored protein supplement formula" that promised to make purchasers "feel better, cleaner, stronger and healthier." Thompson Medical also sold the controversial weight loss dietary supplement Dexatrim.
In 1987, Abraham took the brand private, and it was acquired by Unilever in 2000. In 2014, Unilever sold SlimFast to Kainos Capital. After the sale, KSF Acquisition invested with Kainos Capital in order to take responsibility for the SlimFast brand in the UK, Ireland and Germany. In 2018, Glanbia Plc acquired SlimFast from Kainos Capital.
On December 3, 2009, SlimFast recalled all of its canned products due to possible bacterial contamination. The company stated that it had halted production until the cause was discovered. No further problems or issues have been noted. In 2011, SlimFast stopped producing cans and has since used plastic bottles.

Products
Original (1987–2004)
SlimFast was originally just a diet shake product line. It consisted of chocolate, strawberry, and vanilla shakes meant to replace breakfast and lunch. The company suggested customers eat a low-calorie dinner. Usually, dieters would pick a low-calorie frozen dinner brand such as Lean Cuisine or Weight Watchers, as the SlimFast diet was a convenience product line that offered no dinner products of its own. Later, in the mid-1990s, SlimFast began offering meal bars that could be used as meal replacements.

Effectiveness
In a 2009 study published by Cambridge University Press involving 300 overweight and obese males and females aged 21–60 years, the SlimFast programme achieved weight losses of between 5 kg (11 lbs) and 9 kg (20 lbs) after six months, compared to a control diet. The results were comparable to those of both the Weight Watchers 'Pure Points' programme and Rosemary Conley's 'Eat yourself Slim' Diet and Fitness Plan.

References
External links
Official website

Brand name diet products Former Unilever brands Products introduced in 1977 Dietary supplements Low-carbohydrate diets 2000 mergers and acquisitions 2014 mergers and acquisitions 2018 mergers and acquisitions American subsidiaries of foreign companies
SlimFast
[ "Chemistry" ]
588
[ "Carbohydrates", "Low-carbohydrate diets" ]
1,493,236
https://en.wikipedia.org/wiki/Random%20permutation
A random permutation is a random ordering of a set of objects, that is, a permutation-valued random variable. The use of random permutations is common in games of chance and in randomized algorithms in coding theory, cryptography, and simulation. A good example of a random permutation is the fair shuffling of a standard deck of cards: this is ideally a random permutation of the 52 cards.

Computation of random permutations
Entry-by-entry methods
One algorithm for generating a random permutation of a set of size n uniformly at random, i.e., such that each of the n! permutations is equally likely to appear, is to generate a sequence by uniformly randomly selecting an integer between 1 and n (inclusive), sequentially and without replacement, n times, and then to interpret the resulting sequence (x1, ..., xn) as a permutation written in two-line notation.
An inefficient brute-force method for sampling without replacement could select from the numbers between 1 and n at every step, retrying the selection whenever the random number picked is a repeat of a number already selected, until selecting a number that has not yet been selected. The expected number of tries per step scales with the inverse of the fraction of numbers not yet selected, and the overall number of tries with the sum of those inverses, making this an inefficient approach. Such retries can be avoided using an algorithm where, on each ith step when x1, ..., xi−1 have already been chosen, one chooses a uniformly random number j from between 1 and n − i + 1 (inclusive) and sets xi equal to the jth largest of the numbers that have not yet been selected. This selects uniformly at random among the remaining numbers at every step, without retries.

Fisher-Yates shuffles
A simple algorithm to generate a permutation of n items uniformly at random without retries, known as the Fisher–Yates shuffle, is to start with any permutation (for example, the identity permutation), and then go through the positions 0 through n − 2 (we use a convention where the first element has index 0, and the last element has index n − 1), and for each position i swap the element currently there with a randomly chosen element from positions i through n − 1 (the end), inclusive. Any permutation of n elements will be produced by this algorithm with probability exactly 1/n!, thus yielding a uniform distribution over the permutations.

unsigned uniform(unsigned m); /* Returns a random integer 0 <= uniform(m) <= m-1 with uniform distribution */

void initialize_and_permute(unsigned permutation[], unsigned n)
{
    unsigned i;
    for (i = 0; i <= n-2; i++) {
        unsigned j = i + uniform(n-i);       /* A random integer such that i ≤ j < n */
        swap(permutation[i], permutation[j]); /* Swap the randomly picked element with permutation[i] */
    }
}

If the uniform() function is implemented simply as random() % m, then there will be a bias in the distribution of permutations if the number of return values of random() is not a multiple of m. However, this effect is small if the number of return values of random() is orders of magnitude greater than m.
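The modulo bias can be removed entirely by rejection sampling. The following sketch (an illustration, not part of the code above; it assumes the C library's rand(), whose values lie in 0..RAND_MAX, and 1 <= m <= RAND_MAX) is one way to implement the uniform() function declared above without bias:

#include <stdlib.h>

/* Bias-free uniform(m): accept only draws below a multiple of m, so
   that every residue 0..m-1 corresponds to the same number of
   accepted raw values. */
unsigned uniform(unsigned m)
{
    unsigned r;
    unsigned limit = RAND_MAX - ((unsigned)RAND_MAX % m); /* a multiple of m */
    do {
        r = (unsigned)rand();
    } while (r >= limit);   /* acceptance probability is at least 1/2,
                               so on average at most two iterations */
    return r % m;
}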
Randomness testing
As with all computational implementations of random processes, the quality of the distribution generated by an implementation of a randomized algorithm such as the Fisher–Yates shuffle, i.e., how close the actually generated distribution is to the desired distribution, depends on the quality of the underlying sources of randomness in the implementation, such as pseudorandom number generators or hardware random number generators. There are many randomness tests for random permutations, such as the "overlapping permutations" test of the Diehard tests. A typical form of such a test is to take some permutation statistic for which the distribution is theoretically known, and then to test whether the distribution of that statistic on a set of randomly generated permutations from an implementation closely approximates the distribution of the statistic under the true distribution.

Statistics on random permutations
Fixed points
The probability distribution for the number of fixed points of a uniformly distributed random permutation of n elements approaches a Poisson distribution with expected value 1 as n grows. The first n moments of this distribution are exactly those of the Poisson distribution. In particular, the probability that a random permutation has no fixed points (i.e., that the permutation is a derangement) approaches 1/e as n increases (a toy Monte Carlo check of this limit is sketched at the end of this entry).

See also
Ewens's sampling formula — a connection with population genetics
Faro shuffle
Golomb–Dickman constant
Random permutation statistics
Shuffling algorithms — random sort method, iterative exchange method
Pseudorandom permutation

References
External links
Random permutation at MathWorld
Random permutation generation: a detailed and practical explanation of the Knuth shuffle algorithm and its variants for generating k-permutations (permutations of k elements chosen from a list) and k-subsets (generating a subset of the elements in the list without replacement), with pseudocode

Permutations Randomized algorithms
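As a toy instance of the statistic-based testing described above, the following C sketch (illustrative only; it uses the plain rand() % m draw, whose small bias is negligible for this purpose) estimates the probability that a shuffled array of size N has no fixed points and compares it with the theoretical limit 1/e ≈ 0.3679:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 10            /* permutation size */
#define TRIALS 1000000L /* number of shuffles to sample */

int main(void)
{
    long derangements = 0;
    srand((unsigned)time(NULL));

    for (long t = 0; t < TRIALS; t++) {
        int p[N];
        for (int i = 0; i < N; i++) p[i] = i;   /* identity permutation */
        for (int i = 0; i < N - 1; i++) {       /* Fisher-Yates shuffle */
            int j = i + rand() % (N - i);
            int tmp = p[i]; p[i] = p[j]; p[j] = tmp;
        }
        int fixed = 0;                          /* count fixed points */
        for (int i = 0; i < N; i++) fixed += (p[i] == i);
        if (fixed == 0) derangements++;
    }
    printf("P(no fixed point) ~ %.4f (1/e ~ 0.3679)\n",
           (double)derangements / TRIALS);
    return 0;
}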
Random permutation
[ "Mathematics" ]
1,124
[ "Functions and mappings", "Permutations", "Mathematical objects", "Combinatorics", "Mathematical relations" ]
1,493,289
https://en.wikipedia.org/wiki/Mob%20%28video%20games%29
A mob, short for mobile or mobile object, is a computer-controlled non-player character (NPC) in a video game such as an MMORPG or MUD. Depending on context, any such character in a game may be considered a "mob", or usage of the term may be limited to hostile NPCs and/or NPCs vulnerable to attack. In most modern graphical games, "mob" may be used to refer specifically to generic monstrous NPCs that the player is expected to hunt and kill, excluding NPCs that engage in dialog or sell items, and NPCs that cannot be attacked. "Named mobs" are distinguished by having a proper name rather than being referred to by a general type ("a goblin", "a citizen", etc.). Most mobs are capable of no complex behavior beyond the generic programming of attacking or moving around.

Purpose of mobs
Defeating mobs may be required to gather experience points, money, or items, or to complete quests. Combat between player characters (PCs) and mobs is called player versus environment (PvE). PCs may also attack mobs because the mobs aggressively attack PCs. Monster versus monster (MvM) battles also take place in some games. A game world might contain hundreds of different kinds of mobs, and players who spend enough time playing are likely to become well aware of the characteristics of each kind and the hazard it poses; this familiarity might dull the game to some extent.

Etymology
The term "mobile object" was used by Richard Bartle for objects that were self-mobile in MUD1. Later, source code in DikuMUD used the term "mobile" to refer to a generic NPC, shortened further to "mob" in identifiers. DikuMUD was a heavy influence on EverQuest, and the term as it exists in MMORPGs is derived from the MUD usage. The term is properly an abbreviation rather than an acronym.

References
Massively multiplayer online role-playing games MUD terminology Video game terminology
Mob (video games)
[ "Technology" ]
421
[ "Computing terminology", "Video game terminology" ]
1,493,317
https://en.wikipedia.org/wiki/Fire-control%20system
A fire-control system (FCS) is a number of components working together, usually a gun data computer, a director and radar, designed to assist a ranged weapon system to target, track, and hit a target. It performs the same task as a human gunner firing a weapon, but attempts to do so faster and more accurately.

Naval fire control
Origins
The original fire-control systems were developed for ships. The early history of naval fire control was dominated by the engagement of targets within visual range (also referred to as direct fire). In fact, most naval engagements before 1800 were conducted at very short ranges. Even during the American Civil War, the famous engagement between USS Monitor and CSS Virginia was often conducted at very close range.
Rapid technical improvements in the late 19th century greatly increased the range at which gunfire was possible. Rifled guns of much larger size firing explosive shells of lighter relative weight (compared to all-metal balls) so greatly increased the range of the guns that the main problem became aiming them while the ship was moving on the waves. This problem was solved with the introduction of the gyroscope, which corrected for this motion and provided sub-degree accuracies. Guns were now free to grow to any size, and calibres grew rapidly during the 1890s. These guns were capable of such great range that the primary limitation was seeing the target, leading to the use of high masts on ships.
Another technical improvement was the introduction of the steam turbine, which greatly increased the performance of the ships. Earlier capital ships powered by reciprocating engines were capable of perhaps 16 knots, but the first large turbine ships were capable of over 20 knots. Combined with the long range of the guns, this meant that the target ship could move a considerable distance, several ship lengths, between the time the shells were fired and the time they landed. One could no longer eyeball the aim with any hope of accuracy. Moreover, in naval engagements it is also necessary to control the firing of several guns at once.
Naval gun fire control potentially involves three levels of complexity. Local control originated with primitive gun installations aimed by the individual gun crews. Director control aims all guns on the ship at a single target. Coordinated gunfire from a formation of ships at a single target was a focus of battleship fleet operations. Corrections are made for surface wind velocity, firing ship roll and pitch, powder magazine temperature, drift of rifled projectiles, individual gun bore diameter adjusted for shot-to-shot enlargement, and rate of change of range, with additional modifications to the firing solution based upon the observation of preceding shots. The resulting directions, known as a firing solution, would then be fed back out to the turrets for laying. If the rounds missed, an observer could work out how far they missed by and in which direction, and this information could be fed back into the computer, along with any changes in the rest of the information, so that another shot could be attempted.
At first, the guns were aimed using the technique of artillery spotting. It involved firing a gun at the target, observing the projectile's point of impact (fall of shot), and correcting the aim based on where the shell was observed to land, which became more and more difficult as the range of the gun increased. Between the American Civil War and 1905, numerous small improvements, such as telescopic sights and optical rangefinders, were made in fire control.
There were also procedural improvements, like the use of plotting boards to manually predict the position of a ship during an engagement.

World War I
Increasingly sophisticated mechanical calculators were then employed for proper gun laying, typically with various spotters and distance measures being sent to a central plotting station deep within the ship. There, the fire direction teams fed in the location, speed and direction of the ship and its target, as well as various adjustments for the Coriolis effect, weather effects on the air, and other factors. Around 1905, mechanical fire control aids began to become available, such as the Dreyer Table, the Dumaresq (which was also part of the Dreyer Table), and the Argo Clock, but these devices took a number of years to become widely deployed. These devices were early forms of rangekeepers.
Arthur Pollen and Frederic Charles Dreyer independently developed the first such systems. Pollen began working on the problem after noting the poor accuracy of naval artillery at a gunnery practice near Malta in 1900. Lord Kelvin, widely regarded as Britain's leading scientist, first proposed using an analogue computer to solve the equations which arise from the relative motion of the ships engaged in the battle and the time delay in the flight of the shell, in order to calculate the required trajectory and therefore the direction and elevation of the guns.
Pollen aimed to produce a combined mechanical computer and automatic plot of ranges and rates for use in centralised fire control. To obtain accurate data on the target's position and relative motion, Pollen developed a plotting unit (or plotter) to capture this data. To this he added a gyroscope to allow for the yaw of the firing ship. Like the plotter, the primitive gyroscope of the time required substantial development to provide continuous and reliable guidance. Although the trials in 1905 and 1906 were unsuccessful, they showed promise. Pollen was encouraged in his efforts by the rapidly rising figure of Admiral Jackie Fisher, Admiral Arthur Knyvet Wilson and the Director of Naval Ordnance and Torpedoes (DNO), John Jellicoe. Pollen continued his work, with occasional tests carried out on Royal Navy warships.
Meanwhile, a group led by Dreyer designed a similar system. Although both systems were ordered for new and existing ships of the Royal Navy, the Dreyer system eventually found most favour with the Navy in its definitive Mark IV* form. The addition of director control facilitated a full, practicable fire control system for World War I ships, and most RN capital ships were so fitted by mid-1916. The director was high up over the ship, where operators had a superior view over any gunlayer in the turrets. It was also able to co-ordinate the fire of the turrets so that their combined fire worked together. This improved aiming, and larger optical rangefinders improved the estimate of the enemy's position at the time of firing. The system was eventually replaced by the improved "Admiralty Fire Control Table" for ships built after 1927.

World War II
During their long service life, rangekeepers were updated often as technology advanced, and by World War II they were a critical part of an integrated fire-control system. The incorporation of radar into the fire-control system early in World War II provided ships with the ability to conduct effective gunfire operations at long range in poor weather and at night. For U.S. Navy gun fire control systems, see ship gun fire-control systems.
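The core computation a rangekeeper performs, aiming at where the target will be when the shell arrives rather than where it is now, can be illustrated with a toy calculation (a flat-plane, constant-velocity sketch using invented numbers; real rangekeepers also handled the many corrections listed above):

#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Toy firing solution on a flat plane: iterate on the shell's time of
   flight until the aim point matches the target's predicted position.
   tx, ty = target position (m); vx, vy = target velocity (m/s);
   shell_speed = average shell speed over the trajectory (m/s). */
static void firing_solution(double tx, double ty, double vx, double vy,
                            double shell_speed)
{
    double t = hypot(tx, ty) / shell_speed;   /* first guess: aim at present position */
    for (int i = 0; i < 20; i++) {            /* fixed-point iteration on time of flight */
        double ax = tx + vx * t;              /* where the target will be after t seconds */
        double ay = ty + vy * t;
        t = hypot(ax, ay) / shell_speed;      /* time of flight to that point */
    }
    double ax = tx + vx * t, ay = ty + vy * t;
    printf("aim bearing %.2f deg (from the y axis), range %.0f m, "
           "time of flight %.1f s\n",
           atan2(ax, ay) * 180.0 / M_PI, hypot(ax, ay), t);
}

int main(void)
{
    /* Invented numbers: target 18 km away along the y axis, crossing
       at 15 m/s (about 30 knots), with a 500 m/s average shell speed. */
    firing_solution(0.0, 18000.0, 15.0, 0.0, 500.0);
    return 0;
}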
The use of director-controlled firing, together with the fire control computer, removed the control of the gun laying from the individual turrets to a central position; although individual gun mounts and multi-gun turrets would retain a local control option for use when battle damage limited director information transfer (these would be simpler versions called "turret tables" in the Royal Navy). Guns could then be fired in planned salvos, with each gun giving a slightly different trajectory. Dispersion of shot caused by differences in individual guns, individual projectiles, powder ignition sequences, and transient distortion of ship structure was undesirably large at typical naval engagement ranges. Directors high on the superstructure had a better view of the enemy than a turret mounted sight, and the crew operating them were distant from the sound and shock of the guns. Gun directors were topmost, and the ends of their optical rangefinders protruded from their sides, giving them a distinctive appearance. Unmeasured and uncontrollable ballistic factors, like high-altitude temperature, humidity, barometric pressure, wind direction and velocity, required final adjustment through observation of the fall of shot. Visual range measurement (of both target and shell splashes) was difficult prior to the availability of radar. The British favoured coincidence rangefinders while the Germans favoured the stereoscopic type. The former were less able to range on an indistinct target but easier on the operator over a long period of use, the latter the reverse. Submarines were also equipped with fire control computers for the same reasons, but their problem was even more pronounced; in a typical "shot", the torpedo would take one to two minutes to reach its target. Calculating the proper "lead" given the relative motion of the two vessels was very difficult, and torpedo data computers were added to dramatically improve the speed of these calculations. In a typical World War II British ship the fire control system connected the individual gun turrets to the director tower (where the sighting instruments were located) and the analogue computer in the heart of the ship. In the director tower, operators trained their telescopes on the target; one telescope measured elevation and the other bearing. Rangefinder telescopes on a separate mounting measured the distance to the target. These measurements were converted by the Fire Control Table into the bearings and elevations for the guns to fire upon. In the turrets, the gunlayers adjusted the elevation of their guns to match an indicator for the elevation transmitted from the Fire Control Table—a turret layer did the same for bearing. When the guns were on target they were centrally fired. Even with so much mechanization of the process, it still required a large human element; the Transmitting Station (the room that housed the Dreyer table) for HMS Hood's main guns housed 27 crew. Directors were largely unprotected from enemy fire. It was difficult to put much weight of armour so high up on the ship, and even if the armour did stop a shot, the impact alone would likely knock the instruments out of alignment. Sufficient armour to protect from smaller shells and fragments from hits to other parts of the ship was the limit.
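Returning to the torpedo "lead" problem mentioned above: under the same constant-speed, straight-running assumptions the deflection angle has a classic closed form from the law of sines in the intercept triangle. The sketch below is illustrative only and is not the mechanism of any particular torpedo data computer:

```python
import math

def torpedo_lead_angle(target_speed, torpedo_speed, angle_on_bow_deg):
    """Deflection ("lead") angle for a straight-running torpedo.

    angle_on_bow_deg is the angle between the target's course and the
    line of sight from the target back to the firing submarine. By the
    law of sines in the intercept triangle:
        sin(lead) = (target_speed / torpedo_speed) * sin(angle_on_bow)
    """
    ratio = (target_speed / torpedo_speed) * math.sin(math.radians(angle_on_bow_deg))
    if abs(ratio) > 1.0:
        return None  # target too fast: no straight-run intercept exists
    return math.degrees(math.asin(ratio))

# A 20-knot target crossing at 90 degrees, attacked with a 45-knot torpedo,
# must be led by about 26.4 degrees.
print(torpedo_lead_angle(20.0, 45.0, 90.0))
```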
The performance of the analog computer was impressive. One battleship, during a 1945 test, was able to maintain an accurate firing solution on a target during a series of high-speed turns. It is a major advantage for a warship to be able to maneuver while engaging a target. Night naval engagements at long range became feasible when radar data could be input to the rangekeeper. The effectiveness of this combination was demonstrated in November 1942 at the Third Battle of Savo Island when USS Washington engaged the Japanese battleship Kirishima at night. Kirishima was set aflame, suffered a number of explosions, and was scuttled by her crew. She had been hit by at least nine rounds out of 75 fired (12% hit rate). The wreck of Kirishima was discovered in 1992 and showed that the entire bow section of the ship was missing. The Japanese during World War II did not develop radar or automated fire control to the level of the US Navy and were at a significant disadvantage. Post-1945 By the 1950s gun turrets were increasingly unmanned, with gun laying controlled remotely from the ship's control centre using inputs from radar and other sources. The last combat action for the analog rangekeepers, at least for the US Navy, was in the 1991 Persian Gulf War when the rangekeepers on the Iowa-class battleships directed their last rounds in combat. Aircraft based fire control World War II bomb sights An early use of fire-control systems was in bomber aircraft, with the use of computing bombsights that accepted altitude and airspeed information to predict and display the impact point of a bomb released at that time. The best known United States device was the Norden bombsight. World War II aerial gunnery sights Simple systems, known as lead computing sights, also made their appearance inside aircraft late in the war as gyro gunsights. These devices used a gyroscope to measure turn rates, and moved the gunsight's aim-point to take this into account, with the aim point presented through a reflector sight. The only manual "input" to the sight was the target distance, which was typically handled by dialing in the size of the target's wing span at some known range. Small radar units were added in the post-war period to automate even this input, but it was some time before they were fast enough to make the pilots completely happy with them. The first implementation of a centralized fire control system in a production aircraft was on the B-29. Post-World War II systems By the start of the Vietnam War, a new computerized bombing predictor, called the Low Altitude Bombing System (LABS), began to be integrated into the systems of aircraft equipped to carry nuclear armaments. This new bomb computer was revolutionary in that the release command for the bomb was given by the computer, not the pilot; the pilot designated the target using the radar or other targeting system, then "consented" to release the weapon, and the computer then did so at a calculated "release point" some seconds later. This is very different from previous systems, which, though they had also become computerized, still calculated an "impact point" showing where the bomb would fall if the bomb were released at that moment. The key advantage is that the weapon can be released accurately even when the plane is maneuvering. Most bombsights until this time required that the plane maintain a constant attitude (usually level), though dive-bombing sights were also common. The LABS system was originally designed to facilitate a tactic called toss bombing, to allow the aircraft to remain out of range of a weapon's blast radius.
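The difference between an "impact point" display and a computed "release point" is easiest to see in the drag-free level-flight case. The sketch below is a simplification that ignores drag and assumes level flight, and the function names are invented for illustration; the computer continuously evaluates where a bomb released now would land and issues release consent when that matches the target's position:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def level_release_range(speed_mps, altitude_m):
    """Forward distance a bomb travels after a drag-free level release."""
    fall_time = math.sqrt(2.0 * altitude_m / G)
    return speed_mps * fall_time

def release_consent(ground_distance_m, speed_mps, altitude_m, tolerance_m=50.0):
    """True when the computed impact point coincides with the target,
    i.e. the moment a release-point computer would let the bomb go."""
    predicted = level_release_range(speed_mps, altitude_m)
    return abs(predicted - ground_distance_m) < tolerance_m

# At 150 m/s and 3000 m altitude the bomb carries ~3.7 km forward,
# so the computer consents to release ~3.7 km short of the target.
print(level_release_range(150.0, 3000.0))
print(release_consent(3710.0, 150.0, 3000.0))
```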
The principle of calculating the release point, however, was eventually integrated into the fire control computers of later bombers and strike aircraft, allowing level, dive and toss bombing. In addition, as the fire control computer became integrated with ordnance systems, the computer could take the flight characteristics of the weapon to be launched into account. Land based fire control Anti-aircraft based fire control By the start of World War II, aircraft altitude performance had increased so much that anti-aircraft guns had similar predictive problems, and were increasingly equipped with fire-control computers. The main difference between these systems and the ones on ships was size and speed. The early versions of the High Angle Control System, or HACS, of Britain's Royal Navy were examples of a system that predicted based upon the assumption that target speed, direction, and altitude would remain constant during the prediction cycle, which consisted of the time to fuze the shell and the time of flight of the shell to the target. The USN Mk 37 system made similar assumptions except that it could predict assuming a constant rate of altitude change. The Kerrison Predictor is an example of a system that was built to solve laying in "real time", simply by pointing the director at the target and then aiming the gun at a pointer it directed. It was also deliberately designed to be small and light, in order to allow it to be easily moved along with the guns it served. The radar-based M-9/SCR-584 Anti-Aircraft System was used to direct air defense artillery from 1943 onward. The MIT Radiation Lab's SCR-584 was the first radar system with automatic following; Bell Laboratory's M-9 was an electronic analog fire-control computer that replaced complicated and difficult-to-manufacture mechanical computers (such as the Sperry M-7 or British Kerrison predictor). In combination with the VT proximity fuze, this system accomplished the astonishing feat of shooting down V-1 cruise missiles with less than 100 shells per plane (thousands were typical in earlier AA systems). This system was instrumental in the defense of London and Antwerp against the V-1. Although listed in the land-based fire control section, anti-aircraft fire-control systems can also be found on naval vessels and aircraft. Coast artillery fire control In the United States Army Coast Artillery Corps, Coast Artillery fire control systems began to be developed at the end of the 19th century and progressed through World War II. Early systems made use of multiple observation or base end stations (see Figure 1) to find and track targets attacking American harbors. Data from these stations were then passed to plotting rooms, where analog mechanical devices, such as the plotting board, were used to estimate targets' positions and derive firing data for batteries of coastal guns assigned to interdict them. U.S. Coast Artillery forts bristled with a variety of armament, ranging from 12-inch coast defense mortars, through 3-inch and 6-inch mid-range artillery, to the larger guns, which included 10-inch and 12-inch barbette and disappearing carriage guns, 14-inch railroad artillery, and 16-inch cannon installed just prior to and up through World War II. Fire control in the Coast Artillery became more and more sophisticated in terms of correcting firing data for such factors as weather conditions, the condition of powder used, or the Earth's rotation. Provisions were also made for adjusting firing data for the observed fall of shells.
As shown in Figure 2, all of these data were fed back to the plotting rooms on a finely tuned schedule controlled by a system of time interval bells that rang throughout each harbor defense system. It was only later in World War II that electro-mechanical gun data computers, connected to coast defense radars, began to replace optical observation and manual plotting methods in controlling coast artillery. Even then, the manual methods were retained as a back-up through the end of the war. Direct and indirect fire control systems Land based fire control systems can be used to aid in both Direct fire and Indirect fire weapon engagement. These systems can be found on weapons ranging from small handguns to large artillery weapons. Modern fire control systems Modern fire-control computers, like all high-performance computers, are digital. The added performance allows essentially any input to be added, from air density and wind, to wear on the barrels and distortion due to heating. These sorts of effects are noticeable for any sort of gun, and fire-control computers have started appearing on smaller and smaller platforms. Tanks were an early application of automated gun laying, using a laser rangefinder and a barrel-distortion meter. Fire-control computers are useful not just for aiming large cannons, but also for aiming machine guns, small cannons, guided missiles, rifles, grenades, and rockets—any kind of weapon that can have its launch or firing parameters varied. They are typically installed on ships, submarines, aircraft, tanks and even on some small arms—for example, the grenade launcher developed for use on the Fabrique Nationale F2000 bullpup assault rifle. Fire-control computers have gone through all the stages of technology that computers have, with early designs based upon mechanical analogue technology, then vacuum tubes, which were later replaced with transistors. Fire-control systems are often interfaced with sensors (such as sonar, radar, infra-red search and track, laser range-finders, anemometers, wind vanes, thermometers, barometers, etc.) in order to cut down or eliminate the amount of information that must be manually entered in order to calculate an effective solution. Sonar, radar, IRST and range-finders can give the system the direction to and/or distance of the target. Alternatively, an optical sight can be provided that an operator can simply point at the target, which is easier than having someone input the range using other methods and gives the target less warning that it is being tracked. Typically, weapons fired over long ranges need environmental information—the farther a munition travels, the more the wind, temperature, air density, etc. will affect its trajectory, so having accurate information is essential for a good solution. Sometimes, for very long-range rockets, environmental data has to be obtained at high altitudes or in between the launching point and the target. Often, satellites or balloons are used to gather this information. Once the firing solution is calculated, many modern fire-control systems are also able to aim and fire the weapon(s). Once again, this is in the interest of speed and accuracy, and in the case of a vehicle like an aircraft or tank, in order to allow the pilot/gunner/etc. to perform other actions simultaneously, such as tracking the target or flying the aircraft. Even if the system is unable to aim the weapon itself, for example the fixed cannon on an aircraft, it is able to give the operator cues on how to aim.
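The influence of the environmental inputs listed above can be illustrated with a minimal point-mass trajectory integrator. The drag constant and all numbers below are arbitrary illustrative values, not any fielded ballistic model:

```python
import math

def impact_range(muzzle_speed, elevation_deg, air_density=1.225,
                 headwind=0.0, drag_coeff=1e-4, dt=0.01):
    """Ground range of a point-mass shell under a simple quadratic-drag model.

    drag_coeff lumps together shape, calibre and mass; headwind (m/s) blows
    against the direction of fire. Both values here are purely illustrative.
    """
    g = 9.81
    vx = muzzle_speed * math.cos(math.radians(elevation_deg))
    vy = muzzle_speed * math.sin(math.radians(elevation_deg))
    x = y = 0.0
    while y >= 0.0:
        # Drag opposes the shell's velocity relative to the air mass,
        # and scales with air density.
        rel_vx = vx + headwind
        speed = math.hypot(rel_vx, vy)
        k = drag_coeff * air_density
        vx -= k * speed * rel_vx * dt
        vy -= (g + k * speed * vy) * dt
        x += vx * dt
        y += vy * dt
    return x

# The same gun and elevation lands short in denser air or into a headwind,
# which is why these inputs are fed to the computer rather than guessed.
print(impact_range(800.0, 20.0))
print(impact_range(800.0, 20.0, air_density=1.35, headwind=10.0))
```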
Typically, the cannon points straight ahead and the pilot must maneuver the aircraft so that it is oriented correctly before firing. In most aircraft the aiming cue takes the form of a "pipper" which is projected on the heads-up display (HUD). The pipper shows the pilot where the target must be relative to the aircraft in order to hit it. Once the pilot maneuvers the aircraft so that the target and pipper are superimposed, he or she fires the weapon, or on some aircraft the weapon will fire automatically at this point, in order to overcome the delay of the pilot. In the case of a missile launch, the fire-control computer may give the pilot feedback about whether the target is in range of the missile and how likely the missile is to hit if launched at any particular moment. The pilot will then wait until the probability reading is satisfactorily high before launching the weapon. See also Target acquisition Counter-battery radar Director (military) Fire-control radar Gun stabilizer List of U.S. Army fire control and sighting material by supply catalog designation Predicted impact point Ship gun fire-control systems Tartar Guided Missile Fire Control System References Further reading External links Between Human and Machine: Feedback, Control, and Computing Before Cybernetics – Google Books BASIC programs for battleship and antiaircraft gun fire control National Fire Control Symposium Military computers Artillery operation Armoured fighting vehicle vision and sighting equipment Applications of control engineering Artillery components Coastal artillery Fire-control computers of World War II
Fire-control system
[ "Technology", "Engineering" ]
4,429
[ "Control engineering", "Artillery components", "Components", "Applications of control engineering" ]
1,493,369
https://en.wikipedia.org/wiki/Absolute%20phase
Absolute phase is the phase of a waveform relative to some standard (strictly speaking, phase is always relative). To the extent that this standard is accepted by all parties, one can speak of an absolute phase in a particular field of application. Sound reproduction In the reproduction of sound by headphones or speakers, absolute phase refers to the phase of the reproduced signal relative to the original signal, retaining the original polarity. A positive pressure on the microphone is reproduced as a positive pressure by the loudspeaker or headphones driver. For instance, the plosive "p" sound from a vocalist sends an initial positive air pressure wave toward the microphone, which responds with an initial inward movement of the microphone diaphragm, away from the vocalist. To maintain absolute phase, a loudspeaker reproducing the sound would send an initial positive pressure outward from the loudspeaker, toward the listener. In audio, a change in polarity refers to an equal phase shift of 180° at all frequencies, usually produced on one channel by reversing the connections of two wires. Some audiophiles claim that reversing the polarities of all the channels simultaneously makes a subtle perceptible difference in the reproduced sound, even though the relative phases of all the channels are preserved. The ear is sensitive to the periodicity of a waveform at low frequencies; tests have shown that absolute phase can sometimes be heard by test subjects listening under monaural conditions (a single loudspeaker, or headphones sending the same signal to both ears). Audio engineer Douglas Self concludes "there is a prima facie case for the audibility of absolute phase", especially for high impulse sounds such as percussion. The concept of absolute phase is rendered irrelevant for any instrument with strings (such as a guitar or piano), or for two or more instruments played together. Complex sounds such as these are known to have an undetectable phase relationship. In practice, the absolute phase of an audio system can be assumed to be inaudible. Power electronics When dealing with power electronics, the phase of the voltage and current at various points in the system relative to one another are important. If the points of interest are widely separated in space, it can be difficult to measure the relative phase. To solve this problem, the phase of the signals relative to absolute time (UTC) is measured using instruments relying on GPS. Comparison of two absolute phases in this sense allows the relative phase of distant signals to be computed. Signal processing In signal processing a pulse or finite wavetrain can be considered as a signal of a single frequency modulated by an envelope, or as a superposition of an infinite number of infinitesimal waves of different frequencies. In the first case, one may speak of the phase of the wave with respect to the envelope as the absolute phase. In the second picture, it is a question of the relative phase of the component frequencies. There are examples of physical effects that arise due to the phase of signals with the same power spectrum. References Wave mechanics
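The polarity relationship described above is easy to verify numerically: negating a signal leaves its power spectrum unchanged but shifts the phase of every frequency component by exactly 180°. A minimal sketch, illustrative only:

```python
import numpy as np

fs = 48000                                   # sample rate, Hz; 1 s of audio
t = np.arange(fs) / fs
signal = 0.6 * np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 880 * t)

inverted = -signal                           # "swapping the two wires"

# The power spectrum is identical before and after the polarity flip...
assert np.allclose(np.abs(np.fft.rfft(signal)), np.abs(np.fft.rfft(inverted)))

# ...but each component's phase moves by exactly pi radians (180 degrees).
for k in (440, 880):                         # with 1 s of audio, bin k is k Hz
    dphi = np.angle(np.fft.rfft(inverted)[k]) - np.angle(np.fft.rfft(signal)[k])
    print(abs(((dphi + np.pi) % (2 * np.pi)) - np.pi))   # ~pi for each component
```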
Absolute phase
[ "Physics" ]
615
[ "Wave mechanics", "Waves", "Physical phenomena", "Classical mechanics" ]
1,493,395
https://en.wikipedia.org/wiki/Pappus%27s%20hexagon%20theorem
In mathematics, Pappus's hexagon theorem (attributed to Pappus of Alexandria) states that given one set of collinear points A, B, C, and another set of collinear points a, b, c, the intersection points X, Y, Z of the line pairs Ab and aB, Ac and aC, Bc and bC are collinear, lying on the Pappus line. These three points are the points of intersection of the "opposite" sides of the hexagon AbCaBc. It holds in a projective plane over any field, but fails for projective planes over any noncommutative division ring. Projective planes in which the "theorem" is valid are called pappian planes. If one considers a pappian plane containing a hexagon as just described but with sides Ab and aB parallel and also sides Bc and bC parallel (so that the Pappus line is the line at infinity), one gets the affine version of Pappus's theorem shown in the second diagram. If the Pappus line and the two base lines have a point in common, one gets the so-called little version of Pappus's theorem. The dual of this incidence theorem states that given one set of concurrent lines A, B, C, and another set of concurrent lines a, b, c, the lines x, y, z defined by the pairs of points resulting from the pairs of intersections A∩b and a∩B, A∩c and a∩C, B∩c and b∩C are concurrent. (Concurrent means that the lines pass through one point.) Pappus's theorem is a special case of Pascal's theorem for a conic—the limiting case when the conic degenerates into 2 straight lines. Pascal's theorem is in turn a special case of the Cayley–Bacharach theorem. The Pappus configuration is the configuration of 9 lines and 9 points that occurs in Pappus's theorem, with each line meeting 3 of the points and each point meeting 3 lines. In general, the Pappus line does not pass through the point of intersection of the two base lines. This configuration is self-dual. Since, in particular, the lines of the configuration have the properties of the lines x, y, z of the dual theorem, and collinearity of X, Y, Z is equivalent to concurrence of the corresponding lines, the dual theorem is just the same as the theorem itself. The Levi graph of the Pappus configuration is the Pappus graph, a bipartite distance-regular graph with 18 vertices and 27 edges. Proof: affine form If the affine form of the statement can be proven, then the projective form of Pappus's theorem is proven, as the extension of a pappian plane to a projective plane is unique. Because of the parallelity in an affine plane one has to distinguish two cases: the base lines intersect, or they are parallel. The key to a simple proof is the possibility of introducing a "suitable" coordinate system: Case 1: The base lines intersect at a point S. In this case coordinates are introduced so that S and the given points take a convenient position (see diagram). From the parallelity of the sides Ab and aB one obtains one relation between the coordinates, the parallelity of the sides Bc and bC yields a second, and combining the two shows that the remaining pair of sides, Ac and aC, have equal slope and are therefore parallel. Case 2: The base lines are parallel (little theorem). In this case the coordinates are chosen analogously, and the parallelity of the two given pairs of sides again forces the parallelity of the third pair. Proof with homogeneous coordinates Choose homogeneous coordinates so that four points of the configuration take a standard position, and parametrize the remaining points on the lines through them by three scalars p, q, r. Each collinearity condition of the configuration then reduces to a monomial equation in p, q, r, and the condition for the ninth set of three lines to pass through the same point is the same product of the three scalars taken in a different order. So this last set of three lines is concurrent if all the other eight sets are, because multiplication is commutative. Equivalently, X, Y, Z are collinear. The proof above also shows that for Pappus's theorem to hold for a projective space over a division ring it is both sufficient and necessary that the division ring is a (commutative) field.
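The homogeneous-coordinate formulation also makes the theorem easy to check numerically: in homogeneous coordinates the join of two points and the meet of two lines are both given by the cross product, and collinearity is the vanishing of a determinant. A minimal sketch (the particular coordinates are arbitrary illustrative choices):

```python
import numpy as np

# Numerical check of Pappus's theorem in homogeneous coordinates:
# the line through two points, and the point where two lines meet,
# are both given by the cross product.

def join(p, q):   # line through two points
    return np.cross(p, q)

def meet(l, m):   # intersection point of two lines
    return np.cross(l, m)

# A, B, C on one line (y = 0) and a, b, c on another, written as (x, y, 1).
A, B, C = np.array([0., 0, 1]), np.array([1., 0, 1]), np.array([3., 0, 1])
a, b, c = np.array([0., 1, 1]), np.array([2., 2, 1]), np.array([4., 3, 1])

X = meet(join(A, b), join(a, B))
Y = meet(join(A, c), join(a, C))
Z = meet(join(B, c), join(b, C))

# X, Y, Z are collinear exactly when this determinant vanishes.
print(np.linalg.det(np.array([X, Y, Z])))  # ~0, up to floating-point error
```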
German mathematician Gerhard Hessenberg proved that Pappus's theorem implies Desargues's theorem. In general, Pappus's theorem holds for some projective plane if and only if it is a projective plane over a commutative field. The projective planes in which Pappus's theorem does not hold are Desarguesian projective planes over noncommutative division rings, and non-Desarguesian planes. The proof is invalid if the chosen reference points happen to be collinear. In that case an alternative proof can be provided, for example, using a different projective reference. Dual theorem Because of the principle of duality for projective planes the dual theorem of Pappus is true: If six lines are chosen alternately from two pencils with distinct centers, the lines of the dual configuration are concurrent, that means: they have a point in common. The left diagram shows the projective version, the right one an affine version, where the points are points at infinity. If the common point of the three lines lies on the line joining the centers of the two pencils, one gets the "dual little theorem" of Pappus's theorem. If in the affine version of the dual "little theorem" this point is a point at infinity too, one gets Thomsen's theorem, a statement on 6 points on the sides of a triangle (see diagram). The Thomsen figure plays an essential role in coordinatising an axiomatically defined projective plane. The proof of the closure of Thomsen's figure is covered by the proof for the "little theorem", given above. But there exists a simple direct proof, too: Because the statement of Thomsen's theorem (the closure of the figure) uses only the terms connect, intersect and parallel, the statement is affinely invariant, and one can introduce convenient coordinates (see right diagram). Starting the sequence of chords at a suitable point, one easily verifies the coordinates of the points given in the diagram, which shows that the last point coincides with the first point. Other statements of the theorem In addition to the above characterizations of Pappus's theorem and its dual, the following are equivalent statements: If the six vertices of a hexagon lie alternately on two lines, then the three points of intersection of pairs of opposite sides are collinear. Arranged in a matrix of nine points (as in the figure and description above) and thought of as evaluating a permanent, if the first two rows and the six "diagonal" triads are collinear, then the third row is collinear. That is, if each of those eight triples lies on a line, then Pappus's theorem states that the third row must also lie on a line. Also, note that the same matrix formulation applies to the dual form of the theorem when the rows and diagonals are triples of concurrent lines. Given three distinct points on each of two distinct lines, pair each point on one of the lines with one from the other line, then the joins of points not paired will meet in (opposite) pairs at points along a line. If two triangles are perspective in at least two different ways, then they are perspective in three ways. If Aa, Bb, Cc are concurrent and Ab, Bc, Ca are concurrent, then Ac, Ba, Cb are concurrent. Origins In its earliest known form, Pappus's Theorem is Propositions 138, 139, 141, and 143 of Book VII of Pappus's Collection. These are Lemmas XII, XIII, XV, and XVII in the part of Book VII consisting of lemmas to the first of the three books of Euclid's Porisms. The lemmas are proved in terms of what today is known as the cross ratio of four collinear points. Three earlier lemmas are used. The first of these, Lemma III, has the diagram below (which uses Pappus's lettering, with G for Γ, D for Δ, J for Θ, and L for Λ).
Here three concurrent straight lines, AB, AG, and AD, are crossed by two lines, JB and JE, which concur at J. Also KL is drawn parallel to AZ. Then KJ : JL :: (KJ : AG & AG : JL) :: (JD : GD & BG : JB). These proportions might be written today as equations: KJ/JL = (KJ/AG)(AG/JL) = (JD/GD)(BG/JB). The last compound ratio (namely JD : GD & BG : JB) is what is known today as the cross ratio of the collinear points J, G, D, and B in that order; it is denoted today by (J, G; D, B). So we have shown that this is independent of the choice of the particular straight line JD that crosses the three straight lines that concur at A. In particular (J, G; D, B) = (J, Z; H, E). It does not matter on which side of A the straight line JE falls. In particular, the situation may be as in the next diagram, which is the diagram for Lemma X. Just as before, we have (J, G; D, B) = (J, Z; H, E). Pappus does not explicitly prove this; but Lemma X is a converse, namely that if these two cross ratios are the same, and the straight lines BE and DH cross at A, then the points G, A, and Z must be collinear. What we showed originally can be written as (J, ∞; K, L) = (J, G; D, B), with ∞ taking the place of the (nonexistent) intersection of JK and AG. Pappus shows this, in effect, in Lemma XI, whose diagram, however, has different lettering: What Pappus shows is DE.ZH : EZ.HD :: GB : BE, which we may write as (D, Z; E, H) = (∞, B; E, G). The diagram for Lemma XII is: The diagram for Lemma XIII is the same, but BA and DG, extended, meet at N. In any case, considering straight lines through G as cut by the three straight lines through A (and accepting that equations of cross ratios remain valid after permutation of the entries), we have by Lemma III or XI (G, J; E, H) = (G, D; ∞, Z). Considering straight lines through D as cut by the three straight lines through B, we have (L, D; E, K) = (G, D; ∞, Z). Thus (E, H; J, G) = (E, K; D, L), so by Lemma X, the points H, M, and K are collinear. That is, the points of intersection of the pairs of opposite sides of the hexagon ADEGBZ are collinear. Lemmas XV and XVII are that, if the point M is determined as the intersection of HK and BG, then the points A, M, and D are collinear. That is, the points of intersection of the pairs of opposite sides of the hexagon BEKHZG are collinear. Notes References External links Pappus's hexagon theorem at cut-the-knot Dual to Pappus's hexagon theorem at cut-the-knot Pappus’s Theorem: Nine proofs and three variations Theorems in projective geometry Euclidean plane geometry Articles containing proofs
Pappus's hexagon theorem
[ "Mathematics" ]
2,335
[ "Theorems in projective geometry", "Euclidean plane geometry", "Theorems in geometry", "Articles containing proofs", "Planes (geometry)" ]
1,493,534
https://en.wikipedia.org/wiki/International%20Nucleotide%20Sequence%20Database%20Collaboration
The International Nucleotide Sequence Database Collaboration (INSDC) consists of a joint effort to collect and disseminate databases containing DNA and RNA sequences. It involves the following computerized databases: NIG's DNA Data Bank of Japan (Japan), NCBI's GenBank (USA) and the EMBL-EBI's European Nucleotide Archive (Europe). New and updated data on nucleotide sequences contributed by research teams to each of the three databases are synchronized on a daily basis through continuous interaction between the staff at each of the collaborating organizations. All of the data in INSDC is available for free and unrestricted access, for any purpose, with no restrictions on analysis, redistribution, or re-publication of the data. This policy has been a foundational principle of the INSDC since its inception. Since the 1990s, most of the world's major scientific journals have required that sequence data be deposited in an INSDC database as a pre-condition for publication. The DDBJ/EMBL-EBI/GenBank synchronization is maintained according to a number of guidelines which are produced and published by an International Advisory Board. The guidelines consist of a common definition of the feature tables for the databases, which regulate the content and syntax of the database entries, in the form of a common DTD (Document Type Definition). The syntax is called INSDSeq and its core consists of the letter sequence of the gene expression (amino acid sequence) and the letter sequence for nucleotide bases in the gene or decoded segment. A DBFetch operation shows a typical INSD entry at the EMBL-EBI database; the same entry can also be retrieved at NCBI. See also Bioinformatics Biological database List of biological databases National Center for Biotechnology Information Sequence database References External links Official site EMBL-EBI INSDC site EMBL-EBI Nucleotide Database DNA Data Bank of Japan GenBank Nucleotide Search Bioinformatics organizations Biology organisations based in the United Kingdom Databases in the United Kingdom Population genetics in the United Kingdom South Cambridgeshire District
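Because the three member databases exchange the same records, a deposited sequence can be retrieved from any of them. As an illustration, the sketch below pulls a GenBank flat-file record through NCBI's public E-utilities interface; the accession used is just an example, and E-utilities is a general NCBI service rather than something mandated by the INSDC guidelines:

```python
from urllib.request import urlopen
from urllib.parse import urlencode

# Fetch one nucleotide record in GenBank flat-file format via NCBI E-utilities.
# AF086833 (an Ebola virus genome) is used purely as an example accession.
params = urlencode({
    "db": "nucleotide",
    "id": "AF086833",
    "rettype": "gb",    # GenBank flat-file format
    "retmode": "text",
})
url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?" + params

with urlopen(url) as response:
    record = response.read().decode()

# The LOCUS and DEFINITION lines identify the entry.
print("\n".join(record.splitlines()[:5]))
```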
International Nucleotide Sequence Database Collaboration
[ "Biology" ]
429
[ "Bioinformatics", "Bioinformatics organizations" ]
1,493,763
https://en.wikipedia.org/wiki/Nike-Iroquois
Nike Iroquois is the designation of a two-stage American sounding rocket. It was launched 213 times between 1964 and 1978. The rocket has a maximum flight altitude of 290 km (950,000 ft), a takeoff thrust of 48,800 lbf (217 kN), a takeoff weight of 700 kg, and a length of 8.00 m. References Nike-Iroquois at Encyclopedia Astronautica Nike (rocket family)
Nike-Iroquois
[ "Astronomy" ]
86
[ "Rocketry stubs", "Astronomy stubs" ]
1,493,799
https://en.wikipedia.org/wiki/Biocomplexity%20Institute%20of%20Virginia%20Tech
The Biocomplexity Institute of Virginia Tech (formerly the Virginia Bioinformatics Institute) was a research institute specializing in bioinformatics, computational biology, and systems biology. The institute had more than 250 personnel, including over 50 tenured and research faculty. Research at the institute involved collaboration in diverse disciplines such as mathematics, computer science, biology, plant pathology, biochemistry, systems biology, statistics, economics, synthetic biology and medicine. The institute developed -omic and bioinformatic tools and databases that can be applied to the study of human, animal and plant diseases as well as the discovery of new vaccine, drug and diagnostic targets. The institute's programs were supported by a variety of government and private agencies including the National Institutes of Health, National Science Foundation, U.S. Department of Defense, U.S. Department of Agriculture, and U.S. Department of Energy. Since inception, the Biocomplexity Institute has received over $179 million in extramural support. It has a research portfolio totaling $68 million in grants and contracts. The institute's executive director was Chris Barrett. In 2019, the institute was absorbed into the Fralin Institute of Life Sciences at Virginia Tech after many faculty members, including Dr. Barrett, were hired away to form the Biocomplexity Institute and Initiative of the University of Virginia. History The institute opened in July 2000 in space in the Virginia Tech Corporate Research Center; it was hosted briefly in Building XI, then Building X, until it moved to Building XV in 2002, which was designed to host the institute. In January 2005, it moved into a new building on the main Virginia Tech's campus, called "Bioinformatics Facility Phase I and II", but retained its existing space in the CRC. In 2011, the institute moved its National Capital Region office into the Virginia Tech building in Arlington, Virginia. In 2015, the Virginia Bioinformatics Institute was quietly renamed and rebranded as the "Biocomplexity Institute". In November 2016, the home of the institute on Virginia Tech's main campus was dedicated as Steger Hall, after former Virginia Tech president Charles Steger. Major research divisions The Advanced Computing and Informatics Laboratories is dedicated to "Policy Informatics", including the Network Dynamics and Simulation Science Laboratory. It pursues research and development in interaction-based modeling, simulation, and associated analysis, experimental design, and decision support tools for understanding large biological, information, social, and technological systems. It includes the Comprehensive National Incident Management System project for developing a system to provide the United States military with detailed operational information about the populations being affected by a possible crisis. It also includes the project, “Modeling Disease Dynamics on Large, Detailed, Co-Evolving Networks,” which supports work to develop high-performance computer models for the study of very large networks. The Cyberinfrastructure Division develops methods, infrastructure, and resources primarily for infectious disease research. The “Pathosystems Resource Integration Center - Bioinformatics Resource Center for Bacterial Diseases” aims to integrate information on pathogens, provide resources and tools to analyze genomic, proteomic and other data arising from infectious disease research. 
It is part of the Middle-Atlantic Regional Center of Excellence for Biodefense and Emerging Infectious Diseases Research, which focuses on research to enable rapid defense against bioterror and emerging infectious diseases. Specific diseases and disease-causing agents under investigation include anthrax, West Nile virus, smallpox, and cryptosporidiosis. The division collaborates with Georgetown University and Social and Scientific Systems on the Administrative Center of the National Institute of Allergy and Infectious Diseases-funded Proteomics Research Resource Center (PRC) for Biodefense Proteomics Research project. The team helps design, develop, and maintain a publicly accessible Web site containing data and technology protocols generated by each PRC, as well as a catalog that lists reagents and products available for public distribution. The Biological Systems Division develops computational methods for studying biochemical networks using experimental data. It developed COPASI (Complex Pathway Simulator), an open-source software package that allows users with limited experience in mathematics to construct models and simulations of biochemical networks. It also developed GenoCAD, a web-based Computer Assisted Design environment for synthetic biology. The Medical Informatics & Systems Division focuses on human genetics and disease, especially cancer and neurological disorders. It collaborates with Carilion Clinics, Virginia Tech Carilion School of Medicine and Research Institute, and other universities and government agencies. Major research laboratories The Network Dynamics and Simulation Science Laboratory at ACDIL pursues programs for interaction-based modeling, simulation, and associated analysis, experimental design, and decision support tools for understanding large and complex systems. Extremely detailed, high-resolution, multi-scale computer simulations allow formal and experimental investigation of these systems. The Social and Decision Analytics Laboratory focuses on the use and development of analytical technology in the areas of public health policy, national and international security policy, and public and social policy. The Nutritional Immunology and Molecular Medicine Laboratory was founded in 2002 to investigate fundamental mechanisms of gut enteric immunity, and to identify biomarkers and therapeutic targets for inflammatory and immune-mediated diseases. The center has discovered the mechanism of action underlying the anti-inflammatory actions of conjugated linoleic acid in inflammatory bowel disease, and the insulin-sensitizing and anti-inflammatory effects of abscisic acid. Its Center for Modeling Immunity to Enteric Pathogens Program is applying high performance computing techniques to model and simulate human immunology systems and help immunologists conduct quick in silico experiments to narrow down experimental design, validate their hypotheses and save significant time and laboratory cost. This laboratory is also collaborating with the Center for Global Health at the University of Virginia, the Department of Gastroenterology at the University of North Carolina at Chapel Hill and other medical schools, and is leading several human clinical trials on safer therapies for inflammatory and immune-mediated diseases. It has recently established a partnership with the Division of Gastroenterology at the Carilion Clinic to launch a joint translational research program in inflammatory bowel diseases.
Core facilities and services The institute occupies extensive space on the Virginia Tech campus, including a large amount of laboratory space, designed for flexibility and to house computing and laboratory facilities. The institute also occupies space in Alexandria, Virginia, as part of Virginia Tech National Capital Region. The institute's infrastructure includes core facilities that integrate high-throughput data generation and data analysis capabilities. The Core Computational Facility has three data centers with over 250 servers totalling over 10.5 terabytes of random access memory, distributed over more than 2650 processor cores. It has a storage area network with over 1 petabyte of disk and 3 petabytes of tape, expandable to 50 petabytes. The Genomics Research Laboratory is located in the institute's main building. It possesses state-of-the-art Roche GS-FLX, Illumina and Ion Torrent genome sequencers. It includes the Affymetrix National Custom Array Center for custom microarray design, sample processing and analytical services. The Data Analysis Core offers turnkey services to analyze -omics and other data, taking raw data in and producing manuscript-ready figures and text. It also provides next-generation sequence assembly and annotation; microarray design, analysis and interpretation; mass spec data analysis; data QC; hypothesis generation; experimental design; and statistical data analysis. Education and outreach K–12 programs include "Kids' Tech University" (an educational research program for sparking interest in science, technology, engineering, and mathematics disciplines), the Climate Change Student Summit for teachers and students, and high school summer internships. Undergraduate programs include Research Experiences for Undergraduates in microbiology and in systems biology, and a Summer Research Institute for foreign and local students. The institute is the home of the Genomics, Bioinformatics, Computational Biology Graduate Program at Virginia Tech, and accommodates students in various Virginia Tech departments. References External links 2000 establishments in Virginia 2019 disestablishments in Virginia Virginia Tech Bioinformatics organizations Genetics or genomics research institutions Research institutes in Virginia Research institutes established in 2000 Research institutes disestablished in 2019
Biocomplexity Institute of Virginia Tech
[ "Biology" ]
1,683
[ "Bioinformatics", "Bioinformatics organizations" ]
1,493,836
https://en.wikipedia.org/wiki/The%20Brain%20Tumour%20Charity
The Brain Tumour Charity is a British charity dedicated to funding research, raising awareness of brain tumours, reducing diagnosis times and providing support and information for people with brain tumours, their families and friends. History The Brain Tumour Charity was created in 2013 through the merger of Brain Tumour UK, the Samantha Dickson Brain Tumour Trust, and the Joseph Foote Trust. The Samantha Dickson Brain Tumour Trust was founded in 1996 by Neil and Angela Dickson, whose daughter Samantha died of a brain tumour when she was 16 years old. Andy Foote founded The Joseph Foote Fund in 2007 after his son, Joseph, died of a brain tumour. The Foote family began raising funds for research into the causes and treatment of brain tumours. In 1997, the UK Brain Tumour Society was founded, later becoming Brain Tumour UK. Activities Research and research funding The Brain Tumour Charity funds a portfolio of research across the UK with the aim of doubling survival rates and reducing long-term harm by identifying better diagnostic techniques and new treatments. Funding is awarded through competitive peer-review processes and assessments made by their independent Grant Review and Monitoring Committee (GRAM). Support and information services The charity provides free information and support services which allow people personally affected by brain tumours to access support. The services are focused on improving quality of life. Raising awareness, policies and campaigns HeadSmart The Brain Tumour Charity's primary awareness campaign is HeadSmart, which aims to educate the public and healthcare professionals about the signs and symptoms of brain tumours in children and young people, to reduce diagnosis times, to save lives and to reduce long-term disability. The campaign's goal is to reduce diagnosis times to four weeks or less in line with NHS targets. Raising awareness of brain tumours The charity campaigns on a range of issues that affect people living with a brain tumour. They engage with politicians, policy makers and other influential stakeholders within the health sector, including responding to government consultations. The charity also works with like-minded organisations and networks across the UK to better understand local healthcare issues. In 2015 the charity commissioned a research project, 'Living with a brain tumour', in partnership with an independent research agency. The research investigated the lived experience of adults with a brain tumour. Two publications present results from the research: 'Losing Myself: The Reality of Life with a Brain Tumour' – this report demonstrated the extensive effect that brain tumours have on the daily lives of those affected. 'Finding Myself in Your Hands: The Reality of Brain Tumour Treatment and Care' – this report outlined the findings related to respondents' experiences of their NHS treatment and care. Manifestos Ahead of the 2015 United Kingdom general election, The Brain Tumour Charity released a manifesto on brain tumours. It outlined measures that could help survival outcomes and quality of life for those affected by brain tumours. The charity also released manifestos ahead of the devolved nation elections in 2016. Partnerships The Brain Tumour Charity collaborates with a number of other organisations, including Cancer Research UK, Marie Curie Cancer Care, the Medical Research Council, Children with Cancer UK, Action Medical Research, and Great Ormond Street Hospital.
Institutions that they have funded include Imperial College London, Institute of Cancer Research, Newcastle University, the University of Nottingham, Queen Mary University of London, University of Birmingham, University College London, University of Glasgow and University of Leeds. See also Cancer in the United Kingdom References External links Official website HeadSmart campaign website Biomedical research foundations Cancer organisations based in the United Kingdom Farnborough, Hampshire Health charities in the United Kingdom Health in Hampshire Neurology organizations Neuroscience in the United Kingdom Organisations based in Hampshire Organizations for children with health issues
The Brain Tumour Charity
[ "Engineering", "Biology" ]
755
[ "Biotechnology organizations", "Biomedical research foundations" ]
1,493,984
https://en.wikipedia.org/wiki/Linchpin
A linchpin, also spelled linch pin, lynchpin, or lynch pin, is a fastener used to prevent a wheel or other part from sliding off the axle upon which it is riding. The word is first attested in the late fourteenth century and derives from Middle English elements meaning "axletree pin". Securing implements onto the three-point hitch of a tractor is an example of its application. Linchpins may also be used in place of an R-clip for securing hitch pins. Metaphorical use The word "linchpin" is also used figuratively to mean "something [or someone] that holds the various elements of a complicated structure together". References Fasteners Horse-drawn vehicle parts
Linchpin
[ "Engineering" ]
148
[ "Construction", "Fasteners" ]
1,494,133
https://en.wikipedia.org/wiki/Intracellular%20pH
Intracellular pH (pHi) is the measure of the acidity or basicity (i.e., pH) of intracellular fluid. The pHi plays a critical role in membrane transport and other intracellular processes. In an environment with an improper pHi, biological cells may have compromised function. Therefore, pHi is closely regulated in order to ensure proper cellular function, controlled cell growth, and normal cellular processes. The mechanisms that regulate pHi are usually considered to be plasma membrane transporters, of which two main types exist — those that are dependent and those that are independent of the concentration of bicarbonate (HCO3−). Physiologically normal intracellular pH is most commonly between 7.0 and 7.4, though there is variability between tissues (e.g., mammalian skeletal muscle tends to have a pHi of 6.8–7.1). There is also pH variation across different organelles, which can span from around 4.5 to 8.0. pHi can be measured in a number of different ways. Homeostasis Intracellular pH is typically lower than extracellular pH due to lower concentrations of HCO3−. A rise of extracellular (e.g., serum) partial pressure of carbon dioxide (pCO2) above 45 mmHg leads to formation of carbonic acid, which causes a decrease of pHi as it dissociates: H2O + CO2 ⇌ H2CO3 ⇌ H+ + HCO3− Since biological cells contain fluid that can act as a buffer, pHi can be maintained fairly well within a certain range. Cells adjust their pHi accordingly upon an increase in acidity or basicity, usually with the help of CO2 or HCO3− sensors present in the membrane of the cell. These sensors can permit H+ to pass through the cell membrane accordingly, allowing for pHi to be interrelated with extracellular pH in this respect. Major intracellular buffer systems include those involving proteins or phosphates. Since proteins have acidic and basic regions, they can serve as both proton donors and acceptors in order to maintain a relatively stable intracellular pH. In the case of a phosphate buffer, substantial quantities of weak acid and conjugate weak base (H2PO4− and HPO42−) can accept or donate protons accordingly in order to conserve intracellular pH: OH− + H2PO4− ⇌ H2O + HPO42− H+ + HPO42− ⇌ H2PO4− In organelles The pH within a particular organelle is tailored for its specific function. For example, lysosomes have a relatively low pH of 4.5. Additionally, fluorescence microscopy techniques have indicated that phagosomes also have a relatively low internal pH. Since these are both degradative organelles that engulf and break down other substances, they require high internal acidity in order to successfully perform their intended function. In contrast to the relatively low pH inside lysosomes and phagosomes, the mitochondrial matrix has an internal pH of around 8.0, which is approximately 0.9 pH units higher than that of the intermembrane space. Since oxidative phosphorylation must occur inside the mitochondria, this pH discrepancy is necessary to create a gradient across the membrane. This membrane potential is ultimately what allows for the mitochondria to generate large quantities of ATP. Measurement There are several common ways in which intracellular pH (pHi) can be measured including with a microelectrode, dye that is sensitive to pH, or with nuclear magnetic resonance techniques. For measuring pH inside of organelles, a technique utilizing pH-sensitive green fluorescent proteins (GFPs) may be used. Overall, all three methods have their own advantages and disadvantages.
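The carbonic acid equilibrium in the Homeostasis section above is commonly summarized by the Henderson–Hasselbalch equation, pH = pKa + log10([HCO3−]/(s·pCO2)), with the conventional physiological constants pKa ≈ 6.1 and CO2 solubility s ≈ 0.03 mmol/L per mmHg. A minimal sketch:

```python
import math

def bicarb_ph(hco3_mmol_per_l, pco2_mmhg, pka=6.1, co2_solubility=0.03):
    """pH of the CO2/bicarbonate buffer via Henderson-Hasselbalch.

    co2_solubility converts pCO2 (mmHg) into dissolved CO2 (mmol/L);
    0.03 and pKa 6.1 are the conventional physiological constants.
    """
    return pka + math.log10(hco3_mmol_per_l / (co2_solubility * pco2_mmhg))

# 24 mmol/L bicarbonate at a pCO2 of 40 mmHg gives the familiar pH of ~7.40;
# letting pCO2 rise above 45 mmHg acidifies, as described above.
print(round(bicarb_ph(24, 40), 2))   # 7.40
print(round(bicarb_ph(24, 60), 2))   # 7.22
```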
Using dyes is perhaps the easiest and fairly precise, while NMR presents the challenge of being relatively less precise. Furthermore, using a microelectrode may be challenging in situations where the cells are too small, or where the intactness of the cell membrane should remain undisturbed. GFPs are unique in that they provide a noninvasive way of determining pH inside different organelles, yet this method is not the most quantitatively precise way of determining pH. Microelectrode The microelectrode method for measuring pHi consists of placing a very small electrode into the cell's cytosol by making a very small hole in the plasma membrane of the cell. Since the microelectrode has fluid with a high H+ concentration inside, relative to the outside of the electrode, there is a potential created due to the pH discrepancy between the inside and outside of the electrode. From this voltage difference, and a predetermined pH for the fluid inside the electrode, one can determine the intracellular pH (pHi) of the cell of interest. Fluorescence spectroscopy Another way to measure intracellular pH (pHi) is with dyes that are sensitive to pH, and fluoresce differently at various pH values. This technique, which makes use of fluorescence spectroscopy, consists of adding this special dye to the cytosol of a cell. By exciting the dye in the cell with energy from light, and measuring the wavelength of the light emitted as the dye returns to its native energy state, one can characterize the dye's fluorescence and relate that to the intracellular pH of the given cell. Nuclear magnetic resonance In addition to using pH-sensitive electrodes and dyes to measure pHi, nuclear magnetic resonance (NMR) spectroscopy can also be used to quantify pHi. NMR, typically speaking, reveals information about the inside of a cell by placing the cell in an environment with a potent magnetic field. Based on the ratio between the concentrations of protonated, compared to deprotonated, forms of phosphate compounds in a given cell, the internal pH of the cell can be determined. Additionally, NMR may also be used to reveal the presence of intracellular sodium, which can also provide information about the pHi. Using NMR spectroscopy, it has been determined that lymphocytes maintain a constant internal pH of 7.17 ± 0.06, though, like all cells, the intracellular pH changes in the same direction as extracellular pH. pH-sensitive GFPs To determine the pH inside organelles, pH-sensitive GFPs are often used as part of a noninvasive and effective technique. By using cDNA as a template along with the appropriate primers, the GFP gene can be expressed in the cytosol, and the proteins produced can target specific regions within the cell, such as the mitochondria, golgi apparatus, cytoplasm, and endoplasmic reticulum. If certain GFP mutants that are highly sensitive to pH in intracellular environments are used in these experiments, the relative amount of resulting fluorescence can reveal the approximate surrounding pH. References Cell biology
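The microelectrode calculation described above is essentially the Nernst relation: an ideal pH electrode responds with about 2.303·RT/F volts per pH unit, roughly 61.5 mV at 37 °C. The sketch below is idealized: the sign convention and the assumption of a perfect Nernstian response are simplifications, and real electrodes are calibrated against buffers of known pH.

```python
R, F = 8.314, 96485.0   # gas constant J/(mol K), Faraday constant C/mol

def nernst_slope_mv(temp_c=37.0):
    """Ideal pH-electrode response in mV per pH unit: 2.303*R*T/F."""
    return 2.303 * R * (temp_c + 273.15) / F * 1000.0

def ph_from_voltage(voltage_mv, electrode_ph=7.0, temp_c=37.0):
    """Sample pH from the potential across an ideal pH microelectrode.

    electrode_ph is the known pH of the electrode's filling solution; the
    sign convention (positive voltage = more acidic sample) is an
    illustrative choice that depends on the actual electrode arrangement.
    """
    return electrode_ph - voltage_mv / nernst_slope_mv(temp_c)

print(round(nernst_slope_mv(), 1))       # ~61.5 mV per pH unit at 37 C
print(round(ph_from_voltage(-12.0), 2))  # ~7.19: sample slightly more basic
```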
Intracellular pH
[ "Biology" ]
1,418
[ "Cell biology" ]
1,494,164
https://en.wikipedia.org/wiki/Dynamic%20inconsistency
In economics, dynamic inconsistency or time inconsistency is a situation in which a decision-maker's preferences change over time in such a way that a preference can become inconsistent at another point in time. This can be thought of as there being many different "selves" within decision makers, with each "self" representing the decision-maker at a different point in time; the inconsistency occurs when not all preferences are aligned. The term "dynamic inconsistency" is more closely affiliated with game theory, whereas "time inconsistency" is more closely affiliated with behavioral economics. In game theory In the context of game theory, dynamic inconsistency is a situation in a dynamic game where a player's best plan for some future period will not be optimal when that future period arrives. A dynamically inconsistent game is subgame imperfect. In this context, the inconsistency is primarily about commitment and credible threats. This manifests itself through a violation of Bellman's Principle of Optimality by the leader or dominant player. For example, a firm might want to commit itself to dramatically dropping the price of a product it sells if a rival firm enters its market. If this threat were credible, it would discourage the rival from entering. However, the firm might not be able to commit its future self to taking such an action because if the rival does in fact end up entering, the firm's future self might determine that, given the fact that the rival is now actually in the market and there is no point in trying to discourage entry, it is now not in its interest to dramatically drop the price. As such, the threat would not be credible. The present self of the firm has preferences that would have the future self be committed to the threat, but the future self has preferences that have it not carry out the threat. Hence, the dynamic inconsistency. In behavioral economics In the context of behavioral economics, time inconsistency is related to how each different self of a decision-maker may have different preferences over current and future choices. Consider, for example, the following question: (a) Which do you prefer, to be given 500 dollars today or 505 dollars tomorrow? (b) Which do you prefer, to be given 500 dollars 365 days from now or 505 dollars 366 days from now? When this question is asked, to be time-consistent, one must make the same choice for (b) as for (a). According to George Loewenstein and Drazen Prelec, however, people are not always consistent. People tend to choose "500 dollars today" and "505 dollars 366 days later", which is different from the time-consistent answer. One common way in which selves may differ in their preferences is they may be modeled as all holding the view that "now" has especially high value compared to any future time. This is sometimes called the "immediacy effect" or "temporal discounting". As a result, the present self will care too much about itself and not enough about its future selves. The self control literature relies heavily on this type of time inconsistency, and it relates to a variety of topics including procrastination, addiction, efforts at weight loss, and saving for retirement. Time inconsistency basically means that there is disagreement between a decision-maker's different selves about what actions should be taken. Formally, consider an economic model with different mathematical weightings placed on the utilities of each self.
Consider the possibility that for any given self, the weightings that self places on all the utilities could differ from the weightings that another given self places on all the utilities. The important consideration now is the relative weighting between two particular utilities. Will this relative weighting be the same for one given self as it is for a different given self? If it is, then we have a case of time consistency. If the relative weightings of all pairs of utilities are all the same for all given selves, then the decision-maker has time-consistent preferences. If there exists a case of one relative weighting of utilities where one self has a different relative weighting of those utilities than another self has, then we have a case of time inconsistency and the decision-maker will be said to have time-inconsistent preferences. It is common in economic models that involve decision-making over time to assume that decision-makers are exponential discounters. Exponential discounting posits that the decision maker assigns future utility of any good according to the formula U(t) = u δ^t, where t = 0 is the present, u is the utility assigned to the good if it were consumed immediately, and δ (with 0 < δ < 1) is the "discount factor", which is the same for all goods and constant over time. Mathematically, δ^t is the unique continuous function that satisfies the equation U(t1)/U(t2) = U(t1 − t2)/U(0), that is, the ratio of utility values for a good at two different moments of time only depends on the interval between these times, but not on their choice. (If you're willing to pay 10% over list price to buy a new phone today instead of paying list price and having it delivered in a week, you'd also be willing to pay extra 10% to get it one week sooner if you were ordering it six months in advance.) If δ is the same for all goods, then it is also the case that u_A δ^t > u_B δ^t if and only if u_A > u_B, that is, if good A is assigned higher utility than good B at time t, that relationship also holds at all other times. (If you'd rather eat broccoli than cake tomorrow for lunch, you'll also pick broccoli over cake if you're hungry right now.) Exponential discounting yields time-consistent preferences. Exponential discounting and, more generally, time-consistent preferences are often assumed in rational choice theory, since they imply that all of a decision-maker's selves will agree with the choices made by each self. Any decision that the individual makes for himself in advance will remain valid (i.e., an optimal choice) as time advances, unless utilities themselves change. However, empirical research makes a strong case that time inconsistency is, in fact, standard in human preferences. This would imply disagreement by people's different selves on decisions made and a rejection of the time consistency aspect of rational choice theory. For example, consider having the choice between getting the day off work tomorrow or getting a day and a half off work one month from now. Suppose you would choose one day off tomorrow. Now suppose that you were asked to make that same choice ten years ago. That is, you were asked then whether you would prefer getting one day off in ten years or getting one and a half days off in ten years and one month. Suppose that then you would have taken the day and a half off. This would be a case of time inconsistency because your relative preferences for tomorrow versus one month from now would be different at two different points in time—namely now versus ten years ago.
The decision made ten years ago indicates a preference for delayed gratification, but the decision made just before the fact indicates a preference for immediate pleasure. More generally, humans have a systematic tendency to switch from "virtues" (products or activities which are seen as valuable in the long term) towards "vices" (products or activities which are pleasant in the short term) as the moment of consumption approaches, even if this involves changing decisions made in advance. One way that time-inconsistent preferences have been formally introduced into economic models is by first giving the decision-maker standard exponentially discounted preferences, and then adding another term that heavily discounts any time that is not now. Preferences of this sort have been called "present-biased preferences". The hyperbolic discounting model is another commonly used model that allows one to obtain more realistic results with regard to human decision-making. A different form of dynamic inconsistency arises as a consequence of "projection bias" (not to be confused with a defense mechanism of the same name). Humans have a tendency to mispredict their future marginal utilities by assuming that they will remain at present levels. This leads to inconsistency as marginal utilities (for example, tastes) change over time in a way that the individual did not expect. For example, when individuals are asked to choose between a piece of fruit and an unhealthy snack (such as a candy bar) for a future meal, the choice is strongly affected by their "current" level of hunger. Individuals may become addicted to smoking or drugs because they underestimate future marginal utilities of these habits (such as craving for cigarettes) once they become addicted.

In media studies
Theories of media choice have not explicitly dealt with choice inconsistency as it was defined by behavioral economics. However, an article by Gui et al. (2021) draws on behavioral economics literature to address blind spots in the theorization of inconsistent media selection in media studies. It also highlights that inconsistent choice is even more frequent and relevant in the digital environment, as higher stimulation and multitasking make it easier to opt for immediate gratification even in the presence of different long-term preferences.

Stylized examples
In a game theory context, an announced government policy of never negotiating with terrorists over the release of hostages constitutes a time inconsistency example, since in each particular hostage situation the authorities face the dilemma of breaking the rule and trying to save the hostages. Assuming the government acted consistently in never breaking the rule, it would be irrational for a terrorist group to take hostages. (Of course, in the real world terrorists might not act rationally.) Students, the night before an exam, often wish that the exam could be put off for one more day. If asked on that night, such students might agree to commit to paying, say, $10 on the day of the exam for it to be held the next day. Months before the exam is held, however, students generally do not care much about having the exam put off for one day. And, in fact, if the students were made the same offer at the beginning of the term, that is, they could have the exam put off for one day by committing during registration to pay $10 on the day of the exam, they probably would reject that offer. The choice is the same, although made at different points in time.
Because the outcome would change depending on the point in time, the students would exhibit time inconsistency. Monetary policy makers suffer from dynamic inconsistency with inflation expectations, as politicians are best off promising lower inflation in the future. But once tomorrow comes, lowering inflation may have negative effects, such as increasing unemployment, so they do not make much effort to lower it. This is why independent central banks are believed to be advantageous for a country. Indeed, "a central bank with a high degree of discretion in conducting monetary policy would find itself under constant political pressure to boost the economy and reduce unemployment, but since the economy cannot exceed its potential GDP or its natural rate of unemployment over time, this policy would instead only lead to higher inflation in the long run". The first paper on this subject was published by Finn E. Kydland and Edward C. Prescott in the Journal of Political Economy in 1977, which eventually led to their winning the Nobel Memorial Prize in Economic Sciences in 2004. (See also Monetary policy credibility.) One famous example in literature of a mechanism for dealing with dynamic inconsistency is that of Odysseus and the Sirens. Curious to hear the Sirens' songs but mindful of the danger, Odysseus orders his men to stop their ears with beeswax and ties himself to the mast of the ship. Most importantly, he orders his men not to heed his cries while they pass the Sirens; recognizing that in the future he may behave irrationally, Odysseus limits his future agency and binds himself to a commitment mechanism (i.e., the mast) to survive this perilous example of dynamic inconsistency. This example has been used by economists to explain the benefits of commitment mechanisms in mitigating dynamic inconsistency. A curious case of dynamic inconsistency in psychology is described in a study of movie-rental choices. In the experiment, subjects of the study were offered free rentals of movies which were classified into two categories - "lowbrow" (e.g., The Breakfast Club) and "highbrow" (e.g., Schindler's List) - and researchers analyzed patterns of choices made. In the absence of dynamic inconsistency, the choice would be expected to be the same regardless of the delay between the decision date and the consumption date. In practice, however, the outcome was different. When subjects had to choose a movie to watch immediately, the choice was consistently lowbrow for the majority of the subjects. But when they were asked to pick a movie to be watched at a later date, highbrow movies were chosen far more often. Among movies picked four or more days in advance, over 70% were highbrow. People display a consistent bias to believe that they will have more time in the future than they have today. Specifically, there is a persistent belief among people that they are "unusually busy in the immediate future, but will become less busy shortly". However, the amount of time you have this week is generally representative of the time you have in future weeks. When people are estimating their time and deciding whether they will make a commitment, they anticipate more "time slack" in future weeks than in the present week. Experiments on this topic showed that people tend to discount investments of time more than money. The researchers nicknamed this the "Yes...Damn!" effect because people tend to commit themselves to time-consuming activities like traveling to a conference under the false impression that they will be less busy in the future.
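The preference reversal running through these examples can be reproduced in a few lines. The sketch below contrasts a standard exponential discounter with a quasi-hyperbolic ("beta-delta") discounter, one common formalization of the present-biased preferences described above; the parameter values are hypothetical:

```python
# Contrast between exponential and quasi-hyperbolic ("beta-delta")
# discounting, using the $500-today-vs-$505-tomorrow question from above.
# beta and delta values are hypothetical.

def exponential(u, t, delta=0.999):
    """Time-consistent exponential discounting: U(t) = u * delta**t."""
    return u * delta**t

def beta_delta(u, t, beta=0.7, delta=0.999):
    """Present-biased discounting: every non-immediate payoff is scaled by beta."""
    return u if t == 0 else beta * u * delta**t

for discount in (exponential, beta_delta):
    now = discount(500, 0) > discount(505, 1)        # $500 today vs $505 tomorrow
    later = discount(500, 365) > discount(505, 366)  # same pair, one year ahead
    print(f"{discount.__name__}: prefers $500 sooner now={now}, in a year={later}")

# exponential: False, False -- the same (patient) answer at both horizons.
# beta_delta:  True, False  -- takes $500 today but $505 at a year's distance,
# the preference reversal described in the text.
```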
See also
Time preference

References

Bibliography
Yeung, David W. K.; Petrosyan, Leon A. Subgame Consistent Economic Optimization: An Advanced Cooperative Dynamic Game Analysis (Static & Dynamic Game Theory: Foundations & Applications), Birkhäuser Boston; 2012.

Further reading
Stoerger, Jan (2006): The Time Consistency Problem - Monetary Policy Models

Game theory
Intertemporal economics
Dynamic inconsistency
[ "Mathematics" ]
2,927
[ "Game theory" ]
1,494,488
https://en.wikipedia.org/wiki/Relay%20valve
A relay valve is an air-operated valve typically used in air brake systems to remotely control the brakes at the rear of a heavy truck or semi-trailer in a tractor-trailer combination. Relay valves are necessary in heavy trucks in order to speed up rear-brake application and release, since air takes longer to travel to the rear of the vehicle than to the front of the vehicle, where the front service brakes, foot-valve, parking-control valve, and trailer-supply valve (if applicable) are located. Without relay valves, it would take too long for sufficient air to travel from the brake pedal valve to the rear of the truck or trailer in order to apply the rear service brakes concurrently with the front service brakes, resulting in a condition known as brake lag. To correct this condition on a long-wheel-base vehicle, a relay valve is installed near the rear service brake chambers. In tractors as well as straight-trucks, a remote air-supply is provided in the form of a large diameter pipe connected between the primary reservoir and the relay valve for remote service brake application. In a truck's air brake system, relay valves receive a signal when the driver presses the treadle, which opens the valve and allows air to enter the brake chamber via the air inlet. The diaphragm is pushed, then the rod, then the slack adjuster, which twists to turn the brake camshaft. Next, it moves the disc, wedge or s-cam, which pushes the brake shoes and lining, creating friction. This friction slows and eventually stops the brake drum's turning, which stops the wheel.

Trailers
In trailers, this remote air-supply is in the form of a tank, which is charged whenever the emergency brakes are released via the red trailer-supply valve on the dashboard. In a dual-circuit air brake system, this tank actually receives its air from both the primary and secondary reservoirs of the tractor; the air from both of these reservoirs is merged via a two-way check valve. The two-way check valve is a pneumatic device that has two inputs and one output; each input is connected to one of these reservoirs. Only the air that is at the higher pressure is allowed to pass through to the check valve's output, which then passes through the tractor-protection valve, and then travels onward towards the trailer's air-tank and spring brake valve via the red trailer-supply line (a.k.a. the emergency line); this releases the trailer's emergency brakes (a.k.a. spring brakes). The tractor-protection valve is a device that prevents air from being lost from the tractor's braking system in the event of the air-lines becoming separated or broken. The tractor's air-lines connect to the trailer's air-lines via metal connectors known as gladhands. The merged air from both reservoirs of the tractor prevents air-loss from only one tractor braking circuit from causing the trailer's spring brakes to automatically apply. This gives the driver more control, and prevents the vehicle from grinding to a halt in an unsafe location, such as in the middle of an intersection.

Service brake relay valve
With a service brake relay valve installed, the hose that connects to the primary delivery-port output of the foot-valve becomes a control-line (i.e., the air from the foot-valve "dead ends" at the relay valve's control-port).
Only low-volume air-signals are required to travel back and forth between the foot-valve's delivery port and the relay valve's control port; therefore, the air-volume supplied by the delivery port is now only a tiny fraction of what otherwise would have been required had the relay valve not been installed. This reduces the delay between the application of the front and rear brakes to only a fraction of a second. When the driver depresses the brake pedal, a small amount of air momentarily opens the relay valve's supply port, which then directs air from the remote air-supply directly to the rear service brake chambers, and quickly applies the rear service brakes. The pressure delivered to the service brake chambers in this manner will equal the control-pressure delivered by the foot-valve to the relay valve. When the driver partially or fully releases the brake pedal, the control-pressure delivered by the foot-valve decreases; this causes the relay valve's supply port to close, and its exhaust port to momentarily open, thus preventing a pneumatic short-circuit from occurring while the air exhausts from all rear service brake chambers. In order to control the trailer service brakes, the merged outputs (i.e., merged via two two-way check valves connected in series to give three inputs) of the foot-valve and trailer-hand-valve (if applicable) are directed through the tractor-protection valve, and onward towards the trailer relay valve via the blue service line. In tractors that are not equipped with a trailer hand valve, only the merged outputs of the foot-valve (i.e., via a single two-way check valve) are directed towards the trailer relay valve; however, the fact that the foot-valve's delivery-port outputs are still merged enables the trailer's service brakes to still be controlled even if there is a failure within one braking circuit of the tractor.

Spring brake relay valve
A spring brake relay valve works on the same principle as the service brake relay valve, although it has the opposite effect. This type of relay valve responds to a major drop in pressure at its control-port by opening its exhaust port, which causes the air from each spring brake chamber under its control to exhaust remotely, thus applying the spring brakes much more quickly than would otherwise be possible if the air were required to discharge via the yellow parking-control valve on the dashboard. In a dual-circuit air brake system, air from both the primary and secondary reservoirs is fed into the supply-port of the parking-control valve, as well as the supply-port of this relay valve; it is merged via yet another two-way check valve. The delivery-port output of the parking-control valve connects to the control-port of this relay valve; this enables the spring brakes to be controlled via this valve. The merged air from the parking-control valve prevents air-loss from only one braking circuit from causing the spring brakes to automatically apply. This gives the driver more control, and prevents the vehicle from grinding to a halt in an unsafe location. However, with this increased control comes increased responsibility on the part of the driver: if air is lost from the primary circuit alone, the spring brakes must be manually applied by the driver via the parking-control valve; otherwise, the front service brakes may not be enough to stop the vehicle safely in an emergency, especially if the vehicle is heavily loaded and/or traveling at high speed. In fact, the driver's failure to manually apply the spring brakes in this situation could lead to catastrophic failure of the front brakes due to overheating, since it could cause the front service brakes to exceed their design-limit for energy absorption.
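The graduated, pressure-following behaviour of a service brake relay valve described above can be summarized in a short sketch (Python is used here purely for illustration; the port names, the deadband parameter, and the instantaneous pressure equalization are simplifying assumptions, not a manufacturer's specification):

```python
# Minimal sketch of relay-valve port logic. The valve compares control-line
# pressure against the pressure currently in the brake chambers: it opens
# its supply port when control pressure is higher, opens its exhaust port
# when control pressure is lower, and "laps" (holds) when they match.

def relay_valve_step(control_psi: float, chamber_psi: float,
                     deadband_psi: float = 1.0) -> str:
    """Return which port the relay valve opens for one control sample."""
    if control_psi > chamber_psi + deadband_psi:
        return "supply"   # feed air from the remote reservoir to the chambers
    if control_psi < chamber_psi - deadband_psi:
        return "exhaust"  # vent chamber air to atmosphere
    return "lapped"       # hold the current chamber pressure

# Example: the driver applies, holds, then releases the pedal.
chamber = 0.0
for control in [30.0, 30.0, 10.0, 0.0]:
    port = relay_valve_step(control, chamber)
    if port in ("supply", "exhaust"):
        chamber = control  # chamber quickly tracks the control pressure
    print(f"control={control:5.1f} psi -> {port:7s} chamber={chamber:5.1f} psi")
```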
The QR1C air valve speeds up this process and provides anti-compounding, meaning that the trailer and service brakes operate within about one second of each other. The relay valve's function is analogous to that of the transistor used in electronic circuits.

Testing Relay Valves
Relay valves are tested for durability before use through a seat test with air. At pressures of 80 psig or more, a relay valve of 2 inches or smaller should be tested under pressure for at least 15 seconds, and a 3-inch valve for at least 30 seconds.

References
Valves
Relay valve
[ "Physics", "Chemistry" ]
1,578
[ "Physical systems", "Valves", "Hydraulics", "Piping" ]
1,494,666
https://en.wikipedia.org/wiki/Anti-M%C3%BCllerian%20hormone
Anti-Müllerian hormone (AMH), also known as Müllerian-inhibiting hormone (MIH), is a glycoprotein hormone structurally related to inhibin and activin from the transforming growth factor beta superfamily, whose key roles are in growth differentiation and folliculogenesis. In humans, it is encoded by the AMH gene, on chromosome 19p13.3, while its receptor is encoded by the AMHR2 gene on chromosome 12. AMH is activated by SOX9 in the Sertoli cells of the male fetus. Its expression inhibits the development of the female reproductive tract, or Müllerian ducts (paramesonephric ducts), in the male embryo, thereby arresting the development of fallopian tubes, uterus, and upper vagina. AMH expression is critical to sex differentiation at a specific time during fetal development, and appears to be tightly regulated by nuclear receptor SF-1, transcription GATA factors, sex-reversal gene DAX1, and follicle-stimulating hormone (FSH). Mutations in both the AMH gene and the type II AMH receptor have been shown to cause the persistence of Müllerian derivatives in males that are otherwise normally masculinized. AMH is also a product of granulosa cells of the preantral and small antral follicles in women. As such, AMH is only present in the ovary until menopause. Production of AMH regulates folliculogenesis by inhibiting recruitment of follicles from the resting pool in order to select for the dominant follicle, after which the production of AMH diminishes. As a product of the granulosa cells, which envelop each egg and provide it with energy, AMH can also serve as a molecular biomarker for the relative size of the ovarian reserve. In cattle, AMH can be used for selection of females in multi-ovulatory embryo transfer programs by predicting the number of antral follicles developed to ovulation. AMH can also be used as a marker for ovarian dysfunction, such as in women with polycystic ovary syndrome (PCOS).

Structure
AMH is a dimeric glycoprotein with a molar mass of 140 kDa. The molecule consists of two identical subunits linked by disulfide bridges, and is characterized by the N-terminal dimer (pro-region) and C-terminal dimer. AMH binds to its type 2 receptor, AMHR2, which phosphorylates a type I receptor under the TGF beta signaling pathway.

Function
Embryogenesis
In male mammals, AMH prevents the development of the Müllerian ducts into the uterus and other Müllerian structures. The effect is ipsilateral, that is, each testis suppresses Müllerian development only on its own side. If no hormone is produced from the gonads, the Müllerian ducts will develop thanks to the presence of Wnt4, while the Wolffian ducts, which would otherwise give rise to male reproductive structures, will regress due to the presence of COUP-TFII. Amounts of AMH that are measurable in the blood vary by age and sex. AMH works by interacting with specific receptors on the surfaces of the cells of target tissues (anti-Müllerian hormone receptors). The best-known and most specific effect, mediated through the AMH type II receptors, includes programmed cell death (apoptosis) of the target tissue (the fetal Müllerian ducts).

Ovarian
AMH is produced by granulosa cells from pre-antral and antral follicles, restricting expression to growing follicles, until they have reached the size and differentiation state at which they are selected for dominance by the action of pituitary FSH. Ovarian AMH expression has been observed as early as 36 weeks' gestation in the human fetus.
AMH expression is greatest in the recruitment stage of folliculogenesis, in the preantral and small antral follicles. This expression diminishes as follicles develop and enter the selection stage, upon which FSH expression increases. Some authorities suggest it is a measure of certain aspects of ovarian function, useful in assessing conditions such as polycystic ovary syndrome and premature ovarian failure.

Other
AMH production by the Sertoli cells of the testes remains high throughout childhood in males but declines to low levels during puberty and adult life. AMH has been shown to regulate production of sex hormones, and changing AMH levels (rising in females, falling in males) may be involved in the onset of puberty in both sexes. Functional AMH receptors have also been found to be expressed in neurons in the brains of embryonic mice, and are thought to play a role in sexually dimorphic brain development and the consequent development of gender-specific behaviours. In a clade of Sebastes rockfishes in the Northwest Pacific Ocean, a duplicated copy of the AMH gene (called AMHY) is the master sex-determining gene. In vitro experiments demonstrate that the overexpression of AMHY causes female-to-male sex reversal in at least one species, S. schlegelii.

Pathology
In males, inadequate embryonal AMH activity can lead to persistent Müllerian duct syndrome (PMDS), in which a rudimentary uterus is present and testes are usually undescended. The AMH gene (AMH) or the gene for its receptor (AMH-RII) is usually abnormal. AMH measurements have also become widely used in the evaluation of testicular presence and function in infants with intersex conditions, ambiguous genitalia, and cryptorchidism. A study published in Nature Medicine found a link between hormonal imbalance in the womb and polycystic ovary syndrome (PCOS), specifically prenatal exposure to anti-Müllerian hormone. For the study, the researchers injected pregnant mice with AMH so that they had a higher than normal concentration of the hormone. The treated mice gave birth to daughters who later developed PCOS-like tendencies. These included problems with fertility, delayed puberty, and erratic ovulation. To reverse it, the researchers dosed the polycystic mice with an IVF drug called cetrorelix, which made the symptoms go away. These experiments should be confirmed in humans, but they could be a first step in understanding the relationship between polycystic ovaries and the anti-Müllerian hormone.

Blood levels
In healthy females, AMH is either just detectable or undetectable in cord blood at birth and demonstrates a marked rise by three months of age; while still detectable, it falls until four years of age before rising linearly until eight years of age, remaining fairly constant from mid-childhood to early adulthood – it does not change significantly during puberty. The rise during childhood and adolescence is likely reflective of different stages of follicle development. From 25 years of age, AMH declines, reaching undetectable levels at menopause. The standard measurement of AMH follows the Generation II assay. This should give the same values as the previously used IBC assay, but AMH values from the previously used DSL assay should be multiplied by 1.39 to conform to current standards, because it used different antibodies. Weak evidence suggests that AMH should be measured only in the early follicular phase because of variation over the menstrual cycle. Also, AMH levels decrease under current use of oral contraceptives and current tobacco smoking.
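The assay and unit arithmetic above can be made concrete with a short sketch. The 1.39 factor is quoted in the preceding paragraph; the conversion of roughly 7.14 pmol/L per ng/mL is the standard molar conversion for AMH and matches the paired values quoted elsewhere in this article; the thresholds are the NICE cutoffs discussed in the in vitro fertilization section below. Function names are invented for illustration, and none of this is clinical advice:

```python
# Assay and unit conversions for AMH values, as described in this article.

DSL_TO_GEN2 = 1.39        # older DSL assay -> Generation II standard
PMOL_L_PER_NG_ML = 7.14   # approximate molar conversion for AMH

def dsl_to_gen2(ng_ml):
    """Rescale a DSL-assay result (ng/mL) to the Generation II standard."""
    return ng_ml * DSL_TO_GEN2

def classify_nice(pmol_l):
    """Predicted response to gonadotrophin stimulation per the NICE cutoffs."""
    if pmol_l <= 5.4:
        return "low response predicted"
    if pmol_l >= 25.0:
        return "high response predicted"
    return "intermediate"

value = dsl_to_gen2(0.5)         # hypothetical DSL result of 0.5 ng/mL
pmol = value * PMOL_L_PER_NG_ML  # 0.695 ng/mL -> about 4.96 pmol/L
print(classify_nice(pmol))       # -> low response predicted
```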
Reference ranges
Reference ranges for anti-Müllerian hormone, as estimated from reference groups in the United States, have been published separately for females and males, stratified by age. AMH measurements may be less accurate if the person being measured is vitamin D deficient. Note that males are born with higher AMH levels than females in order to initiate sexual differentiation, and in women, AMH levels decrease over time as fertility decreases as well.

Clinical usage
General fertility assessment
Comparison of an individual's AMH level with respect to average levels is useful in fertility assessment, as it provides a guide to ovarian reserve. Because AMH levels are considered relatively stable against short-term external influences, they help identify whether a woman needs to consider either egg freezing or trying for a pregnancy sooner rather than later if her long-term future fertility is poor. A higher level of anti-Müllerian hormone when tested in women in the general population has been found to have a positive correlation with natural fertility in women aged 30–44 aiming to conceive spontaneously, even after adjusting for age. However, this correlation was not found in a comparable study of younger women (aged 20 to 30 years).

In vitro fertilization
AMH is a predictor for ovarian response in in vitro fertilization (IVF). Measurement of AMH supports clinical decisions, but alone it is not a strong predictor of IVF success, and women with lower levels of AMH are still able to get pregnant. Additionally, AMH levels are used to estimate a woman's remaining egg supply. According to NICE guidelines on in vitro fertilization, an anti-Müllerian hormone level of less than or equal to 5.4 pmol/L (0.8 ng/mL) predicts a low response to gonadotrophin stimulation in IVF, while a level greater than or equal to 25.0 pmol/L (3.6 ng/mL) predicts a high response. Other cut-off values found in the literature vary between 0.7 and 20 pmol/L (0.1 and 2.97 ng/mL) for low response to ovarian hyperstimulation. Accordingly, higher AMH levels are associated with a greater chance of live birth after IVF, even after adjusting for age. AMH can thereby be used to rationalise the programme of ovulation induction and decisions about the number of embryos to transfer in assisted reproduction techniques to maximise pregnancy success rates whilst minimising the risk of ovarian hyperstimulation syndrome (OHSS). AMH can predict an excessive response in ovarian hyperstimulation with a sensitivity and specificity of 82% and 76%, respectively. Measuring AMH alone may be misleading, as high levels occur in conditions like polycystic ovarian syndrome; therefore, AMH levels should be considered in conjunction with a transvaginal scan of the ovaries to assess antral follicle count and ovarian volume.

Natural remedies
Studies into treatments to improve low ovarian reserve and low AMH levels have met with some success. Current best available evidence suggests that DHEA improves ovarian function, increases pregnancy chances and, by reducing aneuploidy, lowers miscarriage rates. The studies of DHEA for low AMH used a dose of 75 mg taken over a period of 16 weeks. Improvement of oocyte/embryo quality with DHEA supplementation potentially suggests a new concept of ovarian aging, where ovarian environments, but not oocytes themselves, age. DHEA has positive outcomes for women with AMH levels over 0.8 ng/mL (5.7 pmol/L); DHEA has no apparent effect on oocytes or ovarian environments below this range. Studies have demonstrated a decline in CoQ10 levels with age.
Studies on CoQ10 supplementation in an aged animal model delayed depletion of ovarian reserve, restored oocyte mitochondrial gene expression, and improved mitochondrial activity. CoQ10 is therefore used to stimulate mitochondrial ATP formation in the electron transport chain, where it is naturally deficient in patients with ovarian aging. Authors note that the 12–16 weeks of CoQ10 supplementation used in mice to achieve these results would be equivalent to about a decade in humans. Vitamin D is believed to play a role in AMH regulation. The AMH gene promoter contains a vitamin D response element that may cause vitamin D status to influence serum AMH levels. Women with vitamin D levels of 267.8 ± 66.4 nmol/L showed a four-times-higher success rate with the IVF procedure than those with low levels of 104.3 ± 21 nmol/L. Vitamin D deficiency should be considered when serum AMH levels are obtained for diagnosis.

Women with cancer
In women with cancer, radiation therapy and chemotherapy can damage the ovarian reserve. In such cases, a pre-treatment AMH is useful in predicting the long-term post-chemotherapy loss of ovarian function, which may indicate fertility preservation strategies such as oocyte cryopreservation. A reduced post-treatment AMH is associated with decreased fertility. Granulosa cell tumors of the ovary secrete AMH, and AMH testing has a sensitivity ranging between 76 and 93% in diagnosing such tumors. AMH is also useful in diagnosing recurrence of granulosa cell tumors.

Neutering status in animals
In veterinary medicine, AMH measurements are used to determine neutering status in male and female dogs and cats. AMH levels can also be used to diagnose cases of ovarian remnant syndrome.

Biomarker of polycystic ovary syndrome
Polycystic ovary syndrome (PCOS) is an endocrine disorder most commonly found in women of reproductive age that is characterized by oligo- or anovulation, hyperandrogenism, and polycystic ovaries (PCO). AMH levels are nearly two to three times higher in women with PCOS than in women without the condition. This is often attributed to the increased follicle count characteristic of PCOS, indicating an increase in granulosa cells, since they surround each individual egg. However, increased AMH levels have also been attributed not just to the increased number of follicles, but also to an increased amount of AMH produced per follicle. The high levels of androgens characteristic of PCOS also stimulate and provide feedback for increased production of AMH. In this way, AMH has been increasingly considered a tool or biomarker that can be used to diagnose or indicate PCOS.

Biomarker of Turner syndrome
Turner syndrome is the most common sex chromosome-related inherited disease in females around the world, with an incidence of 1 in 2000 live female births. One of its significant pathological features is premature ovarian failure, leading to amenorrhea or even infertility. Routine monitoring of follicle-stimulating hormone and inhibin B by specialists has been recommended to assess the condition of the ovary. Recently, anti-Müllerian hormone has been advocated by several researchers as a more accurate biomarker of follicular development. The biological function of anti-Müllerian hormone in the ovary is to counteract the recruitment of primordial follicles triggered by FSH, reserving the follicle pool for further recruitment and ovulation.
When menopause takes place, the serum concentration of anti-Müllerian hormone becomes nearly undetectable in normal women. Thus, variations in AMH levels during childhood may theoretically predict the duration of any given girl's reproductive life span, assuming that the speed of the continuous follicle loss is comparable between individuals.

Potential future usage
AMH has been synthesized. Its ability to inhibit growth of tissue derived from the Müllerian ducts has raised hopes of usefulness in the treatment of a variety of medical conditions including endometriosis, adenomyosis, and uterine cancer. Research is underway in several laboratories. If there were more standardized AMH assays, it could potentially be used as a biomarker of polycystic ovary syndrome. In mice, an increase in AMH has been shown to reduce the number of growing follicles and thus the overall size of the ovaries. This increase in AMH production reduces primary, secondary and antral follicles without reducing the number of primordial follicles, suggesting a blockade of primordial follicle activation. This may provide a viable method of contraception which protects the ovarian reserve of oocytes during chemotherapy without extracting them from the body, allowing the potential for natural reproduction later in life.

Names
The adjective Müllerian is written either Müllerian or müllerian, depending on the governing style guide; the derived term with the prefix anti- is then anti-Müllerian, anti-müllerian, or antimüllerian. The Müllerian ducts are named after Johannes Peter Müller. Many names have been used for the antimüllerian hormone, differing mainly in orthographic details; "Müllerian-inhibiting hormone" alone has four acceptable stylings (capital M or lowercase m, hyphen or space).

See also
Alfred Jost - first postulated the existence of a non-testosterone substance that suppressed the Müllerian ducts
Nathalie Josso - discovered and named AMH
Anti-Müllerian hormone receptor
Freemartin - involvement of anti-Müllerian hormone in cattle twins of mixed sex
Persistent Müllerian duct syndrome (PMDS)
Sexual differentiation

References
Animal developmental biology
Fish hormones
Hormones of the embryo
Biomarkers
Anti-Müllerian hormone
[ "Biology" ]
3,589
[ "Biomarkers" ]
1,494,672
https://en.wikipedia.org/wiki/TK90X
The TK90X was a Brazilian ZX Spectrum clone made in 1985 by Microdigital Eletrônica, a company from São Paulo that had previously manufactured ZX80 (TK80, TK82) and ZX81 clones (TK82C, TK83 and TK85). Reported TK90X sales in October 1986 were 2500 machines per month.

Technical details
The case was a little taller than the original Spectrum and the keyboard layout was the same as the original, except for some additional Sinclair BASIC commands that did not exist in the Spectrums (UDG, for user-defined characters, in place of the £ sign - covering specific Portuguese and Spanish characters and accented vowels - and the Trace function). There were two versions of the machine, with 16 and 48 KB of RAM. They contained the same Z80A processor running at 3.58 MHz, ROM chip and RAM chips (dynamic RAMs 4116 and 4416). Microdigital reverse engineered a CMOS integrated circuit (IC) with similar functionality to the original bipolar IC ULA from Sinclair/Ferranti. Most software written for the Spectrum ran on the TK90X, with some minor incompatibilities. The TV modulator was tuned to VHF channel 3, with the TV standard being hardware selectable to PAL-M (60 Hz) as used in Brazil, PAL-N (50 Hz) as used in Uruguay, Argentina and Paraguay, and NTSC (60 Hz) as used in the USA and many other countries. An improvement over the original ZX Spectrum was the sound output via modulated RF direct to the TV set instead of the internal beeper.

Peripherals
Three peripherals were released by Microdigital: a joystick, a light pen interface and a parallel printer interface. A Beta Disc Interface was available from third-party companies, called 'C.A.S. disk drive interface' (a near-clone of the original Beta Disc interface), 'C.B.I. disk drive interface' (with an included printer interface) and 'IDS91' (with an included printer interface made by Synchron) or 'IDS2001ne' (also with an included printer interface made by Synchron, but exclusively compatible with the TK90X and TK95).

TK95
The TK90X was replaced by the TK95, which had a different keyboard and case (identical to the Commodore Plus/4), while the circuit board and schematics remained unchanged (the motherboard was marked as TK90X). It also used the same ULA as the TK90X, containing only digital logic ports, with the analogue part outside the ULA chip. This machine had a few ROM differences that made it more compatible with the original ZX Spectrum (e.g., the game Mikie runs only on the TK95, not on the TK90X). Some users created a switch that enabled choosing between the original TK90X, TK95 or ZX Spectrum ROM internally, in order to be able to run all of the Spectrum's software.

Export model
During the 1980s Brazilians were not allowed to import computers, and therefore the TK90X became the first affordable color computer on the market. It was successful in other Latin American countries, such as Uruguay and Argentina, as an export model using a different circuit board and schematics, and the same Ferranti ULA as the ZX Spectrum. Because of its affordability in Latin America, many commercial software programs were developed locally for small business use and millions of users had their first computer experience with the TK90X. There is an active user base of enthusiasts of this computer, with dedicated websites discussing software preservation, peripherals, and homebrew development and modifications.
References

External links
TK90X/95 Mailing list - Mailing list about TK90X/95/Spectrum (English speakers are welcome)
ZEsarUX - ZX Second-Emulator And Released for UniX (GPL)

Microdigital Eletrônica
Computer-related introductions in 1985
ZX Spectrum clones
Goods manufactured in Brazil
TK90X
[ "Technology" ]
872
[ "Computing stubs", "Computer hardware stubs" ]
1,494,699
https://en.wikipedia.org/wiki/Warwick%20Estevam%20Kerr
Warwick Estevam Kerr (9 September 1922 – 15 September 2018) was a Brazilian agricultural engineer, geneticist, entomologist, professor and scientific leader, notable for his discoveries in the genetics and sex determination of bees. The Africanized bee in the western hemisphere is directly descended from 26 Tanzanian queen bees (Apis mellifera scutellata) accidentally released by one of his assistant bee-keepers. When reassembling a hive, the assistant forgot to install the queen excluder. This occurred in 1957 in Rio Claro, São Paulo, in the southeast of Brazil, from hives operated by Kerr, who had interbred honey bees from Europe and southern Africa.

Biography
Kerr was born in 1922 in Santana do Parnaíba, São Paulo, Brazil, the son of Américo Caldas Kerr and Bárbara Chaves Kerr. The Kerr family, originally from Scotland, immigrated by way of the United States. The family moved to Pirapora do Bom Jesus, São Paulo, in 1925. He attended secondary school and the preparatory course at the Mackenzie in São Paulo and subsequently was admitted to the Escola Superior de Agricultura Luiz de Queiroz of the University of São Paulo, at Piracicaba, São Paulo, where he graduated as an agricultural engineer. From March 1975 to April 1979, Kerr lived in Manaus, Amazonas, serving as director of the National Institute of Amazonia Research (INPA), a research institute of the National Council of Scientific and Technological Development (CNPq). He officially retired from the University of São Paulo in January 1981, but not from scientific life. Exactly eleven days later he accepted a position as Full Professor at the Universidade Estadual do Maranhão in São Luís, state of Maranhão, where he became responsible for creating the Department of Biology and, for a short period (1987–1988), also served as Dean of the university. He moved to the Universidade Federal de Uberlândia, in Uberlândia, state of Minas Gerais, in February 1988, as a professor of genetics.

Scientific contributions
His scientific life began in Piracicaba, where he received his doctorate (D.Sc.) and later was an assistant professor. In 1951, he did postdoctoral studies as a visiting professor at the University of California at Davis and, in 1952, at Columbia University, where he studied with the famous geneticist Theodosius Dobzhansky. In 1958, he was invited by Professor Dias da Silveira to assist in organizing the Department of Biology at the Faculdade de Ciências do Rio Claro, of the recently created São Paulo State University (UNESP), in the city of Rio Claro, where he stayed until 1964, directing a research group on the genetics of bees, his main field of specialization. From 1962 to 1964, he served as Scientific Director of the recently created São Paulo State Research Foundation (FAPESP), helping to organize it. In December 1964, he accepted the position of Full Professor of Genetics at the Faculty of Medicine of Ribeirão Preto of the University of São Paulo, during the creation of a new Department of Genetics. In this capacity, Kerr was able to establish a research center of excellence, particularly in the areas of entomological genetics and human genetics, which trained many masters and doctoral students. The department included a new research and teaching area, that of mathematical biology and biostatistics, and was a pioneer in the use of computers in biology and medicine, particularly for genetics applied to animal husbandry.
In all these positions he never stopped his research on Meliponini, especially Melipona, a genus of Neotropical bees that are frequently subject to the predatory action of wild honey gatherers (meleiros in Portuguese). Kerr became well known for his research on the hybridization of the African bee and the Italian bee (Apis mellifera ligustica). Kerr authored 620 publications on various subjects. Apart from being a member of the Brazilian Academy of Sciences, he was also a Foreign Associate of the National Academy of Sciences of the US, and of the Third World Academy of Sciences. He was admitted by President Itamar Franco to the National Order of Scientific Merit in the Grã-Cruz class in 1994.

Selected papers

Sources
Bad Bee Keeping Blog
Fundação Getulio Vargas: Warwick E. Kerr
New York Times Article: Killer Bees
Brazilian Journal of History

References
1922 births
2018 deaths
Brazilian geneticists
Brazilian biologists
Brazilian agronomists
Brazilian entomologists
Brazilian beekeepers
Brazilian people of Scottish descent
Theoretical biologists
Members of the Brazilian Academy of Sciences
Foreign associates of the National Academy of Sciences
Recipients of the Great Cross of the National Order of Scientific Merit (Brazil)
People from São Paulo (state)
People from Santana de Parnaíba
University of São Paulo alumni
Brazilian people of American descent
Warwick Estevam Kerr
[ "Biology" ]
996
[ "Bioinformatics", "Theoretical biologists" ]
1,494,813
https://en.wikipedia.org/wiki/Ramsauer%E2%80%93Townsend%20effect
The Ramsauer–Townsend effect, also sometimes called the Ramsauer effect or the Townsend effect, is a physical phenomenon involving the scattering of low-energy electrons by atoms of a noble gas. This effect is a result of quantum mechanics. The effect is named for Carl Ramsauer and John Sealy Townsend, who each independently studied the collisions between atoms and low-energy electrons in 1921.

Definitions
When an electron moves through a gas, its interactions with the gas atoms cause scattering to occur. These interactions are classified as inelastic if they cause excitation or ionization of the atom to occur, and elastic if they do not. The probability of scattering in such a system is defined as the number of electrons scattered, per unit electron current, per unit path length, per unit pressure at 0 °C, per unit solid angle. The number of collisions equals the total number of electrons scattered elastically and inelastically in all angles, and the probability of collision is the total number of collisions, per unit electron current, per unit path length, per unit pressure at 0 °C. Because noble gas atoms have a relatively high first ionization energy and the electrons do not carry enough energy to cause excited electronic states, ionization and excitation of the atom are unlikely, and the probability of elastic scattering over all angles is approximately equal to the probability of collision.

Description
If one tries to predict the probability of collision with a classical model that treats the electron and atom as hard spheres, one finds that the probability of collision should be independent of the incident electron energy. However, Ramsauer and Townsend independently observed that for slow-moving electrons in argon, krypton, or xenon, the probability of collision between the electrons and gas atoms reaches a minimum value for electrons with a certain amount of kinetic energy (about 1 electron volt for xenon gas). No good explanation for the phenomenon existed until the introduction of quantum mechanics, which explains that the effect results from the wave-like properties of the electron. A simple model of the collision that makes use of wave theory can predict the existence of the Ramsauer–Townsend minimum. Niels Bohr presented a simple model for the phenomenon that considers the atom as a finite square potential well. Predicting from theory the kinetic energy that will produce a Ramsauer–Townsend minimum is quite complicated since the problem involves understanding the wave nature of particles. However, the problem has been extensively investigated both experimentally and theoretically and is well understood.
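In the one-dimensional version of that model, the transmission coefficient is T(E) = [1 + V0² sin²(k₂L) / (4E(E + V0))]⁻¹ with k₂ = √(2m(E + V0))/ħ, so the electron is perfectly transmitted (a scattering minimum) whenever k₂L is a whole multiple of π. The sketch below evaluates this numerically; the well depth and width are illustrative values chosen so that the transmission maximum falls near 1 eV, and are not parameters fitted to xenon:

```python
import numpy as np

# Numerical illustration of the finite-square-well model: the electron is
# perfectly transmitted (scattering minimum) when half-wavelengths inside
# the well fit its width exactly, i.e. when k2 * L = n * pi.
HBAR2_OVER_2ME = 0.0381  # eV * nm^2, value of hbar^2 / (2 * m_electron)

def transmission(E, V0=3.0, L=0.3):
    """1D transmission coefficient over a well of depth V0 (eV), width L (nm)."""
    k2 = np.sqrt((E + V0) / HBAR2_OVER_2ME)  # electron wavenumber inside the well
    return 1.0 / (1.0 + V0**2 * np.sin(k2 * L)**2 / (4.0 * E * (E + V0)))

energies = np.linspace(0.1, 5.0, 2000)
E_min_scatter = energies[np.argmax(transmission(energies))]
print(f"Scattering minimum near E = {E_min_scatter:.2f} eV")  # about 1.2 eV
```

References
Scattering
Physical phenomena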
Ramsauer–Townsend effect
[ "Physics", "Chemistry", "Materials_science" ]
506
[ "Physical phenomena", "Scattering", "Condensed matter physics", "Particle physics", "Nuclear physics" ]
1,494,850
https://en.wikipedia.org/wiki/Sidewinder%20Raven
Sidewinder Raven is the designation of a two-stage sounding rocket. It has a ceiling of 112 km, a takeoff thrust of 26 kN, a takeoff weight of 110 kg, a diameter of 130 mm and a length of 5.20 m.

References
Sounding rockets of the United States
Sidewinder Raven
[ "Astronomy" ]
58
[ "Rocketry stubs", "Astronomy stubs" ]
151,577
https://en.wikipedia.org/wiki/Causality%20%28physics%29
Causality is the relationship between causes and effects. While causality is also a topic studied from the perspectives of philosophy and physics, it is operationalized so that causes of an event must be in the past light cone of the event and ultimately reducible to fundamental interactions. Similarly, a cause cannot have an effect outside its future light cone.

Macroscopic vs microscopic causality
Causality can be defined macroscopically, at the level of human observers, or microscopically, for fundamental events at the atomic level. The strong causality principle forbids information transfer faster than the speed of light; the weak causality principle operates at the microscopic level and need not lead to information transfer. Physical models can obey the weak principle without obeying the strong version.

Macroscopic causality
In classical physics, an effect cannot occur before its cause, which is why solutions such as the advanced time solutions of the Liénard–Wiechert potential are discarded as physically meaningless. In Einstein's theories of special and general relativity, causality means that an effect cannot occur from a cause that is not in the back (past) light cone of that event. Similarly, a cause cannot have an effect outside its front (future) light cone. These restrictions are consistent with the constraint that mass and energy that act as causal influences cannot travel faster than the speed of light and/or backwards in time. In quantum field theory, observables of events with a spacelike relationship, "elsewhere", have to commute, so the order of observations or measurements of such observables does not impact the results. Another requirement of causality is that cause and effect be mediated across space and time (requirement of contiguity). This requirement has been very influential in the past, in the first place as a result of direct observation of causal processes (like pushing a cart), in the second place as a problematic aspect of Newton's theory of gravitation (attraction of the earth by the sun by means of action at a distance) replacing mechanistic proposals like Descartes' vortex theory, and in the third place as an incentive to develop dynamic field theories (e.g., Maxwell's electrodynamics and Einstein's general theory of relativity) restoring contiguity in the transmission of influences in a more successful way than in Descartes' theory.

Simultaneity
In modern physics, the notion of causality had to be clarified. The word simultaneous is observer-dependent in special relativity. The principle is relativity of simultaneity. Consequently, the relativistic principle of causality says that the cause must precede its effect according to all inertial observers. This is equivalent to the statement that the cause and its effect are separated by a timelike interval, and the effect belongs to the future of its cause. If a timelike interval separates the two events, this means that a signal could be sent between them at less than the speed of light. On the other hand, if signals could move faster than the speed of light, this would violate causality because it would allow a signal to be sent across spacelike intervals, which means that at least to some inertial observers the signal would travel backward in time. For this reason, special relativity does not allow communication faster than the speed of light.
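As a worked example of these rules, the following sketch classifies the separation between two events from the sign of the spacetime interval (units chosen so that c = 1; the helper function is an illustration written for this article, not a standard library routine):

```python
# Classify the causal relationship between two events in special relativity.
# With c = 1, event B can be an effect of event A only if their separation
# is timelike or lightlike and B lies in A's future light cone.

def interval_type(dt: float, dx: float, dy: float = 0.0, dz: float = 0.0) -> str:
    """Classify the spacetime interval between two events (c = 1)."""
    s2 = dt**2 - (dx**2 + dy**2 + dz**2)  # signature (+, -, -, -)
    if s2 > 0:
        return "timelike"   # a slower-than-light signal can connect them
    if s2 == 0:
        return "lightlike"  # only a light signal can connect them
    return "spacelike"      # no causal influence is possible

# B occurs 3 s after A, 1 light-second away: timelike, so A may cause B.
print(interval_type(dt=3.0, dx=1.0))  # -> timelike
# B occurs 1 s after A, 3 light-seconds away: spacelike, so A cannot cause B,
# and inertial observers can even disagree about the temporal order.
print(interval_type(dt=1.0, dx=3.0))  # -> spacelike
```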
In the theory of general relativity, the concept of causality is generalized in the most straightforward way: the effect must belong to the future light cone of its cause, even if the spacetime is curved. New subtleties must be taken into account when we investigate causality in quantum mechanics and relativistic quantum field theory in particular. In those two theories, causality is closely related to the principle of locality. Bell's theorem shows that "local causality" conditions impose constraints (Bell inequalities) which are violated by the non-classical correlations predicted by quantum mechanics and observed in experiments involving quantum entanglement. Despite these subtleties, causality remains an important and valid concept in physical theories. For example, the notion that events can be ordered into causes and effects is necessary to prevent (or at least outline) causality paradoxes such as the grandfather paradox, which asks what happens if a time-traveler kills his own grandfather before he ever meets the time-traveler's grandmother. See also Chronology protection conjecture.

Determinism (or, what causality is not)
The word causality in this context means that all effects must have specific physical causes due to fundamental interactions. Causality in this context is not associated with definitional principles such as Newton's second law. As such, in the context of causality, a force does not cause a mass to accelerate nor vice versa. Rather, Newton's second law can be derived from the conservation of momentum, which itself is a consequence of the spatial homogeneity of physical laws. The empiricists' aversion to metaphysical explanations (like Descartes' vortex theory) meant that scholastic arguments about what caused phenomena were either rejected for being untestable or were just ignored. The complaint that physics does not explain the cause of phenomena has accordingly been dismissed as a problem that is philosophical or metaphysical rather than empirical (e.g., Newton's "Hypotheses non fingo"). According to Ernst Mach the notion of force in Newton's second law was pleonastic, tautological and superfluous and, as indicated above, is not considered a consequence of any principle of causality. Indeed, it is possible to consider the Newtonian equations of motion of the gravitational interaction of two bodies as two coupled equations describing the positions x1(t) and x2(t) of the two bodies, without interpreting the right hand sides of these equations as forces; the equations just describe a process of interaction, without any necessity to interpret one body as the cause of the motion of the other, and allow one to predict the states of the system at later (as well as earlier) times. The ordinary situations in which humans singled out some factors in a physical interaction as being prior and therefore supplying the "because" of the interaction were often ones in which humans decided to bring about some state of affairs and directed their energies to producing that state of affairs, a process that took time to establish and left a new state of affairs that persisted beyond the time of activity of the actor. It would be difficult and pointless, however, to explain the motions of binary stars with respect to each other in that way; their motions are, indeed, time-reversible and agnostic to the arrow of time, but once a direction of time is established, the entire evolution of the system can be completely determined.
The possibility of such a time-independent view is at the basis of the deductive-nomological (D-N) view of scientific explanation, which considers an event to be explained if it can be subsumed under a scientific law. In the D-N view, a physical state is considered to be explained if, applying the (deterministic) law, it can be derived from given initial conditions. (Such initial conditions could include the momenta of binary stars and their distance from each other at any given moment.) Such 'explanation by determinism' is sometimes referred to as causal determinism. A disadvantage of the D-N view is that causality and determinism are more or less identified. Thus, in classical physics, it was assumed that all events are caused by earlier ones according to the known laws of nature, culminating in Pierre-Simon Laplace's claim that if the current state of the world were known with precision, it could be computed for any time in the future or the past (see Laplace's demon). However, this is usually referred to as Laplace determinism (rather than 'Laplace causality') because it hinges on determinism in mathematical models as dealt with in the mathematical Cauchy problem. Confusion between causality and determinism is particularly acute in quantum mechanics, this theory being acausal in the sense that it is unable in many cases to identify the causes of actually observed effects or to predict the effects of identical causes, but arguably deterministic in some interpretations (e.g. if the wave function is presumed not to actually collapse as in the many-worlds interpretation, or if its collapse is due to hidden variables, or simply redefining determinism as meaning that probabilities rather than specific effects are determined).

Distributed causality
Theories in physics like the butterfly effect from chaos theory open up the possibility of a type of distributed parameter system in causality. The butterfly effect proposes: "Small variations of the initial condition of a nonlinear dynamical system may produce large variations in the long term behavior of the system." This opens up the possibility of understanding a distributed causality. A related way to interpret the butterfly effect is to see it as highlighting the difference between the application of the notion of causality in physics and a more general use of causality as represented by Mackie's INUS conditions. In classical (Newtonian) physics, in general, only those conditions that are both necessary and sufficient are (explicitly) taken into account. For instance, when a massive sphere is caused to roll down a slope starting from a point of unstable equilibrium, its velocity is assumed to be caused by the force of gravity accelerating it; the small push that was needed to set it into motion is not explicitly dealt with as a cause. In order to be a physical cause there must be a certain proportionality with the ensuing effect. A distinction is drawn between triggering and causation of the ball's motion. By the same token the butterfly can be seen as triggering a tornado, its cause being assumed to be seated in the atmospheric energies already present beforehand, rather than in the movements of a butterfly.

Causal sets
In causal set theory, causality takes an even more prominent place. The basis for this approach to quantum gravity is in a theorem by David Malament.
This theorem states that the causal structure of a spacetime suffices to reconstruct its conformal class, so knowing the conformal factor and the causal structure is enough to know the spacetime. Based on this, Rafael Sorkin proposed the idea of causal set theory, which is a fundamentally discrete approach to quantum gravity. The causal structure of the spacetime is represented as a poset, while the conformal factor can be reconstructed by identifying each poset element with a unit volume.

See also
Causality (general)

References

Further reading
Bohm, David (2005). Causality and Chance in Modern Physics. London: Taylor and Francis.
Espinoza, Miguel (2006). Théorie du déterminisme causal. Paris: L'Harmattan.

External links
Causal Processes, Stanford Encyclopedia of Philosophy
Caltech Tutorial on Relativity – A nice discussion of how observers moving relatively to each other see different slices of time.
Faster-than-c signals, special relativity, and causality – This article explains that faster than light signals do not necessarily lead to a violation of causality.

Causality
Concepts in physics
Time
Philosophy of physics
Time travel
Causality (physics)
[ "Physics", "Mathematics" ]
2,271
[ "Philosophy of physics", "Applied and interdisciplinary physics", "Physical quantities", "Time", "Time travel", "Quantity", "nan", "Spacetime", "Wikipedia categories named after physical quantities" ]
151,586
https://en.wikipedia.org/wiki/Eddystone%20Lighthouse
The Eddystone Lighthouse is a lighthouse on the Eddystone Rocks, south of Rame Head in Cornwall, England. The rocks are submerged below the surface of the sea and are composed of Precambrian gneiss. The current structure is the fourth to be built on the site. The first lighthouse (Winstanley's) was swept away in a powerful storm, killing its architect and five other men in the process. The second (Rudyard's) stood for fifty years before it burned down. The third (Smeaton's) is renowned because of its influence on lighthouse design and its importance in the development of concrete for building; its upper portions were re-erected in Plymouth as a monument. The first lighthouse, completed in 1699, was the world's first open ocean lighthouse, although the Cordouan Lighthouse off the western French coast preceded it as the first offshore lighthouse.

The need for a light
The Eddystone Rocks are an extensive reef SSW of Plymouth Sound, one of the most important naval harbours of England, and midway between Lizard Point, Cornwall and Start Point. They are submerged at high spring tides and were so feared by mariners entering the English Channel that they often hugged the coast of France to avoid the danger, which thus resulted in shipwrecks not only locally, but also on the rocks of the north coast of France and the Channel Islands. Given the difficulty of gaining a foothold on the rocks, particularly in the predominant swell, it was a long time before anyone attempted to place any warning on them.

Winstanley's lighthouse
The first lighthouse on Eddystone Rocks was an octagonal wooden structure built by Henry Winstanley, the first recorded open-ocean lighthouse. Construction started in 1696 and the light was lit on 14 November 1698. During construction, a French privateer took Winstanley prisoner and destroyed the work done so far on the foundations, causing Louis XIV to order Winstanley's release with the words "France is at war with England, not with humanity". The lighthouse survived its first winter but was in need of repair, and was subsequently changed to a dodecagonal (12-sided) stone-clad exterior on a timber-framed construction with an octagonal top section, as can be seen in later drawings and paintings. The octagonal top section (or 'lantern') had eight windows, each made up of 36 individual glass panes. It was lit by '60 candles at a time, besides a great hanging lamp'. Winstanley's tower lasted until the great storm of 1703, which erased almost all trace of it. Winstanley was on the lighthouse, completing additions to the structure. No trace was found of him, or of the other five men in the lighthouse. The cost of construction and five years' maintenance totalled £7,814 7s.6d, during which time dues totalling £4,721 19s.3d had been collected at one penny per ton from passing vessels.

Rudyard's lighthouse
Following the destruction of the first lighthouse, Captain John Lovett acquired the lease of the rock, and by an act of Parliament (4 & 5 Ann. c. 7) was allowed to charge passing ships a toll of one penny per ton. He commissioned John Rudyard (or Rudyerd) to design the new lighthouse. Rudyard's lighthouse, in contrast to its predecessor, was a smooth conical tower, shaped 'so as to offer the least possible resistance to wind and wave'. It was built on a base of solid wood, formed from layers of timber beams, laid horizontally on seven flat steps which had been cut into the upper face of the sloping rock.
On top of this base rose several courses of stone, interspersed with further layers of wood; the stone was designed to serve as ballast for the tower. On top of this substructure were raised four storeys of timber. The entire structure was sheathed in vertical wooden planks and anchored to the reef using 36 wrought iron bolts, forged to fit deep dovetailed holes which had been cut in the reef. The vertical planks were installed by two master-shipwrights from Woolwich Dockyard and were caulked like those of a ship. The tower was topped with an octagonal lantern. A light was first shone from the tower in 1708, and the work was completed in 1709. The light was provided by 24 candles. Rudyard's lighthouse proved more durable than its predecessor, surviving and serving its purpose on the reef for nearly 50 years. In 1715 Captain Lovett died and his lease was purchased by Robert Weston, Esq., in company with two others (one of whom was Rudyard). On the night of 2 December 1755, the top of the lantern caught fire, probably through a spark from one of the candles used to illuminate the light, or else through a fracture in the chimney which passed through the lantern from the stove in the kitchen below. The three keepers threw water upwards from a bucket but were driven onto the rock and were rescued by boat as the tower burnt down. Keeper Henry Hall, who was 94 at the time, died several days later from ingesting molten lead from the lantern roof. A report on this case was submitted to the Royal Society by physician Edward Spry, and the piece of lead is now in the collections of the National Museums of Scotland. Smeaton's lighthouse The third lighthouse to be built on the Eddystone marked a major step forward in the design of such structures. Design and building Following the destruction of Rudyard's tower, Robert Weston sought advice on rebuilding the lighthouse from the Earl of Macclesfield, then President of the Royal Society. He recommended the mathematical instrument maker and aspiring civil engineer John Smeaton, who was introduced to Weston in February 1756. In May, following a series of visits to the rock, Smeaton proposed that the new lighthouse should be built of stone and modelled on the shape of an oak tree. He appointed Josias Jessop to serve as his general assistant, and established a shore base for the construction works at Millbay. Work began on the reef in August 1756, with the gradual cutting away of recesses in the rock which were designed to dovetail in due course with the foundations of the tower. During the winter, the workers stayed ashore and were employed in dressing the stone for the lighthouse; work then resumed on the rock the following June, with the laying of the first courses of stone. The foundations and outside structure were built of local Cornish granite, while lighter Portland limestone masonry was used on the inside. As part of the construction process, Smeaton pioneered 'hydraulic lime', a lime-based mortar that would set under water, and developed a technique of securing the blocks using dovetail joints and marble dowels. Work continued over the course of the following two years, and the light was first lit on 16 October 1759. Smeaton's tower tapered from a broad base to a narrower top and was lit by a chandelier of 24 large tallow candles. Later modifications In 1807 the 100-year lease on the lighthouse expired, whereupon ownership and management devolved to Trinity House. 
In 1810 they replaced the chandelier and candles with 24 Argand lamps and parabolic reflectors. In 1841 major renovations were made, under the direction of engineer Henry Norris of Messrs. Walker & Burges, including complete repointing, replacement of the water tanks and the filling of a large cavity in the rock close to the foundations. In 1845 the lighthouse was equipped with a new second-order fixed catadioptric optic, manufactured by Henry Lepaute of Paris, with a single multi-wick oil lamp, replacing the old lamps and reflectors. This was the first time that a fully catadioptric large optic (using prisms rather than mirrors above and below the lens) had been constructed, and the first such installation in any lighthouse. A new lantern was constructed and fitted to the top of the tower in 1848, as the original had proved unsatisfactory for housing the new optic. From 1858 the tower's exterior was painted with broad red and white horizontal bands, so as to render it 'more distinctly visible during the day time'. In 1872 a 5 cwt fog bell was provided for the lighthouse; it was sounded 'five times in quick succession every half minute' in foggy weather. That same year an improved lamp was installed, which more than doubled the intensity of the light. In 1877 it was resolved to build a replacement lighthouse, following reports that erosion of the rocks under Smeaton's tower was causing it to shake from side to side whenever large waves hit. During construction of the new lighthouse, the Town Council of Plymouth petitioned for Smeaton's tower to be dismantled and rebuilt on Plymouth Hoe, in lieu of a Trinity House daymark which stood there. Trinity House consented to the removal and delivery of the lantern and the upper four rooms of the tower, the cost of labour to be borne by Plymouth Council. While the new tower was being built the old lighthouse remained operational, up until 3 February 1882 (after which a temporary fixed light was shown from the top of the new tower). When the latter was complete, Smeaton's lighthouse was decommissioned and the crane which had been used to build the new lighthouse was transferred to the task of dismantling the old. William Tregarthen Douglass supervised the operation. Present day The upper part of Smeaton's lighthouse was subsequently rebuilt, as planned, on top of a replica granite frustum on Plymouth Hoe: preserved 'as a monument to Smeaton's genius, and in commemoration of one of the most successful, useful and instructive works ever accomplished in civil engineering'. The rebuilding was funded by public subscription. It remains in place today and, as 'Smeaton's Tower', is open to the public as a tourist attraction. The original frustum or base of the tower also survives, standing where it was built on the Eddystone rocks, a short distance from the current lighthouse. Having dismantled the upper part of the structure, Douglass infilled the old entrance way and stairwell within the frustum and fixed an iron mast to the top of the stub tower. He expressed the hope that 'the rock below will for ages endure to support this portion of Smeaton's lighthouse, which, in its thus diminished form, is still rendering important service to the mariner, in giving a distinctive character to the Eddystone by day'. Douglass's lighthouse The current, fourth lighthouse was designed by James Douglass (using Robert Stevenson's developments of Smeaton's techniques). This lighthouse is still in use. 
Design and building By July 1878 the new site, on the South Rock, was being prepared during the 3½ hours between ebb and flood tide; the foundation stone was laid on 19 August the following year by the Duke of Edinburgh, Master of Trinity House. The supply ship Hercules was based at Oreston, now a suburb of Plymouth; stone was prepared at the Oreston yard and supplied from the works of Messrs Shearer, Smith and Co of Wadebridge. The tower contains a total of 62,133 cubic feet of granite, weighing 4,668 tons. The last stone was laid on 1 June 1881 and the light was first lit on 18 May 1882. The lighthouse was topped by a larger-than-usual lantern storey, which was painted red. It contained a six-sided biform (i.e. two-tier) rotating optic of the first order, weighing over seven tons. Each of the six sides of the optic was divided into two Fresnel lens panels, which provided the light's characteristic of two flashes every thirty seconds. The optic was manufactured by Chance Brothers of Smethwick and designed by their chief engineer John Hopkinson FRS. At the time the Eddystone's extra-tall lenses were the largest in existence; their superior height was achieved through the use of extra-dense flint glass in the upper and lower portions of each panel. Illumination was provided by a pair of Douglass-designed six-wick concentric oil burners (one for each tier of the optic). This was said to represent 'the first practical application of superposed lenses of the first order with oil as the illuminating material'. On clear nights, only the lamp in the lower tier of lenses was lit (producing a light of 37,000 candlepower); in poor visibility, however (judged by whether the Plymouth Breakwater light was visible), both lamps were used at full power, to provide a 159,600 candlepower light. Eighteen cisterns in the lower part of the tower were used to store up to 2,660 gallons (nine months' supply) of colza oil to fuel the lamps. In addition to the main light a fixed white light was shone from a room on the eighth storey of the tower (using a pair of Argand lamps and reflectors) in the direction of the hazardous Hand Deeps. The lighthouse was also provided with a pair of large bells, each weighing two tons, by Gillett, Bland & Co., which were suspended from either side of the lantern gallery to serve as a fog signal; they sounded (to match the light characteristic of the lighthouse) twice every thirty seconds in foggy weather, and were struck by the same clockwork mechanism that drove the rotation of the lenses. The mechanism required winding every hour (or every forty minutes, when the bells were in use), 'the weight to be lifted being equal to one ton'; shortly after opening, the lighthouse was equipped with a 0.5 h.p. caloric engine, designed 'for relieving the keepers of the excessive strain of driving the machine when both illuminating apparatus and fog bell are in use'. Later modifications In 1894 an explosive fog signal device was installed on the gallery of the lighthouse; the fog bells were briefly retained as a standby provision, but then removed. In 1904 the lamps were replaced with incandescent oil vapour burners. Following the invention of the mercury bath system (allowing a lighthouse optic to revolve in a trough of mercury rather than on rollers), the Eddystone lens pedestal was duly upgraded and the drive mechanism replaced. 
Later, beginning in 1959, the light was electrified: the new light source was a 1,250W incandescent lamp, powered by diesel generators (three of which were installed in a lower store room). In place of the old lenses a new, smaller (fourth-order) AGA 'bi-valve' optic was installed, which flashed at the faster rate of twice every ten seconds. The old optic was removed and donated to Southampton Maritime Museum (it was exhibited on the Royal Pier in the 1970s, but later removed to a council yard where it was destroyed by vandals). As part of the programme of modernisation, the lighthouse was given a 'SuperTyfon' fog signal, with compressors powered from the diesel generators. The lighthouse was automated in 1982, the first Trinity House 'Rock' (or offshore) lighthouse to be converted. Two years earlier the tower had been modified by the construction of a helipad above the lantern, to allow maintenance crews access; the helipad has a weight limit of 3,600 kg (3½ tons). As part of the automation of the lighthouse a new electric fog signal was installed and a metal halide discharge lamp replaced the incandescent light bulb formerly in use. The light and other systems were monitored remotely, initially by Trinity House staff at the nearby Penlee Point fog signal station. Since 1999 the lighthouse has run on solar power. Present day The tower's white light flashes twice every 10 seconds and is supplemented by a foghorn of three blasts every 62 seconds. A subsidiary red sector light shines from a window in the tower to highlight the Hand Deeps hazard to the west-northwest. The lighthouse is now monitored and controlled from the Trinity House Operations Control Centre at Harwich in Essex. References in media The lighthouse inspired a sea shanty, frequently recorded, that begins "My father was the keeper of the Eddystone light / He courted a mermaid one fine night / From this union there came three / A porpoise and a porgy and the other was me!". There are several verses. The lighthouse has been used as a metaphor for stability. In the Goon Show episode Ten Snowballs that shook the World (1958), Neddie Seagoon is sent to Eddystone Lighthouse to warn the inhabitants that Sterling has dropped from F-sharp to E-flat. The lighthouse is celebrated in the opening and closing movements of Ron Goodwin's Drake 400 Suite; their main theme was directly inspired by the lighthouse's distinctive light characteristic. A novel based on the building of Smeaton's lighthouse, containing many details of the construction, was published in 2005. The lighthouse is referenced twice in Herman Melville's epic novel Moby-Dick; at the beginning of Chapter 14, "Nantucket": "How it stands there, away off shore, more lonely than the Eddystone lighthouse.", and in Chapter 133, "The Chase – First Day": "So, in a gale, the but half baffled Channel billows only recoil from the base of the Eddystone, triumphantly to overleap its summit with their scud." The lighthouse is referred to in "Daddy was a Ballplayer" by the Canadian band Stringband, which follows a similar line to the sea shanty. "The Most Famous of All Lighthouses," the third chapter of The Story of Lighthouses (Norton 1965) by Mary Ellen Chase, is devoted to the Eddystone Lighthouse. Eddystone Lighthouse was used for many of the exterior shots in The Phantom Light, a 1935 film directed by Michael Powell. The English pop group Edison Lighthouse took its name from it. Later, 'Lighthouse' was discarded, and they renamed themselves 'Edison'. 
An 1850 replica of Smeaton's lighthouse, Hoad Monument, stands above the town of Ulverston, Cumbria as a memorial to naval administrator Sir John Barrow. See also List of lighthouses in England Eddystone, the Google Bluetooth Low Energy beacon Hook Lighthouse, second oldest lighthouse in the world and oldest in the British Isles Notes References Further reading John Smeaton (1793). A Narrative of the Building and Description of the Eddystone Lighthouse with Stone. London Palmer, Mike; Eddystone, The Finger of Light. Palmridge Publishing, 1998 – Revised edition, 2005 by Seafarer Books & Globe Pequot Press / Sheridan House Eddystone (2016). The Finger of Light, revised Kindle ebook edition External links Trinity House Charles Harrison-Wallace webpage Captain L Edye – The Eddystone Lighthouse, 1887 A local's view of Smeaton's Tower, on the Hoe, 2005 1698 establishments in England 1703 disasters 1755 disasters Historic Civil Engineering Landmarks Industrial archaeological sites in Devon Lighthouses completed in 1882 Lighthouses in Devon Lighthouses of the English Channel Plymouth, Devon
Eddystone Lighthouse
[ "Engineering" ]
3,968
[ "Civil engineering", "Historic Civil Engineering Landmarks" ]
151,587
https://en.wikipedia.org/wiki/Forth%20and%20Clyde%20Canal
The Forth and Clyde Canal is a canal opened in 1790, crossing central Scotland; it provided a route for the seagoing vessels of the day between the Firth of Forth and the Firth of Clyde at the narrowest part of the Scottish Lowlands. This allowed navigation from Edinburgh on the east coast to the port of Glasgow on the west coast. The canal runs from the River Carron at Grangemouth to the River Clyde at Bowling, and had an important basin at Port Dundas in Glasgow. Successful in its day, it suffered as seagoing vessels were built larger and could no longer pass through. The railway age further impaired the success of the canal, and by the 1930s decline had given way to dormancy. The final decision to close the canal, taken in the early 1960s, was made because the cost of maintaining the bridges that crossed it exceeded the revenue it brought in. Subsidies to the rail network were also a cause of its decline, and the closure ended the movement of the east-coast Forth fishing fleets across the country to fish the Irish Sea. The lack of political and financial foresight also removed a historic recreational waterway and a potential future revenue generator for the town of Grangemouth. Unlike the majority of major canals, the route through Grangemouth was drained and backfilled before 1967 to create a new carriageway for port traffic. The M8 motorway in the eastern approaches to Glasgow took over some of the alignment of the canal, but more recent ideas have regenerated the utility of the canal for leisure use. Geography The eastern end of the canal is connected to the River Forth by a stretch of the River Carron near Grangemouth. The canal roughly follows the course of the Roman Antonine Wall and was the biggest infrastructure project in Scotland since Roman times. The highest section of the canal passes close to Kilsyth, where it is fed by an aqueduct which gathers water from the purpose-built Birkenburn Reservoir in the Kilsyth Hills, stored in another purpose-built reservoir called Townhead near Banton, from where it feeds the canal via a feeder from the Shawend Burn near Craigmarloch. The canal continues past Twechar, through Kirkintilloch and Bishopbriggs to the Maryhill area north of Glasgow city centre. A branch to Port Dundas was built to secure the agreement and financial support of Glasgow merchants who feared losing business if the canal bypassed them completely. This branch flows past Murano Street Student Village, halls of residence for the University of Glasgow. The western end of the canal connects to the River Clyde at Bowling. In 1840, a canal, the Forth and Cart Canal, was built to link the Forth and Clyde Canal, at Whitecrook, to the River Clyde, opposite the mouth of the River Cart. Origins The canal was authorised by an act of Parliament (8 Geo. 3. c. 63). Priestley, writing in 1831, noted that at first there were difficulties with securing the capital for the work, but soon, thanks in the main to investment by Sir Lawrence Dundas, 1st Baronet, "the execution of this canal proceeded with such rapidity, under the direction of [the engineer] Mr. 
Smeaton, that in two years and three quarters from the date of the first act, one half of the work was finished; when, in consequence of some misunderstanding between him and the proprietors, he declined any further connection with the work, which was shortly afterwards let to contractors, who however failed, and the canal was again placed under the direction of its original projector, who brought it to within [a short distance] of its proposed junction with the Clyde, when the work was stopped in 1775 for want of funds, and it continued at a stand for several years." Numerous supplementary acts of Parliament preceded this period, and more followed, but the key to unlocking the problem was some creativity, in which "the Barons of the Court of Exchequer in Scotland, are, out of the money arising from the sale of forfeited estates, directed to lend the Forth and Clyde Navigation Company the sum of £50,000, by which they were enabled to resume their labours, under the direction of Mr. Robert Whitworth, an engineer possessing a well earned reputation". The work was completed on 28 July 1790. The Forth and Clyde Navigation Committee was set up in Glasgow in (or before) 1787 and had several notable members: John Riddel (Lord Provost of Glasgow); John Campbell of Clathick; Patrick Colquhoun (Convenor and Superintendent); Robert Whitworth (engineer); Archibald Spiers; John Cumine (as collector of fees at the east end) and James Loudon (as collector of fees at the west end). Contemporary description Priestley wrote in 1831: Besides the fine rivers above-mentioned [the Forth and Clyde], the canal is joined by the Edinburgh and Glasgow Union Canal, near Falkirk; with the Monkland and Kirkintilloch Railway at its summit, near the last-mentioned village; and with the Monkland Canal and the Garnkirk and Glasgow Railway, at Port Dundas, near the city of Glasgow. This magnificent canal commences in the River Forth, in Grangemouth Harbour, and near to where the Carron empties itself into that river. Its course is parallel with the Carron, and in nearly a westwardly direction, passing to the north of the town of Falkirk, and thence to Red Bridge, where it quits the county of Stirling, and enters a detached portion of the shire of Dumbarton. Hence it passes to the south of Kilsyth, and runs along the south bank of the River Kelvin, and over the Luggie Water, by a fine stone aqueduct, at Kirkintilloch; it then approaches within [a short distance] of the north-west quarter of the city of Glasgow, to which there is a branch communicating with the Monkland Canal at Port Dundas, near that city. The remaining part of the line is in a westwardly direction, crossing the Kelvin River by a noble aqueduct, and thence to the Clyde, into which, after running parallel with it for some distance, it locks down at Bowling's Bay, near Dalmuir Burnfoot. From Grangemouth to the east end of the summit pool is ten miles and three quarters, with a rise from low water in the Forth by twenty locks; beyond the summit level, in the remainder of its course, there is a fall by nineteen locks to low water in the Clyde at Bowling's Bay. The branch to the Monkland Canal at Glasgow is two miles and three quarters; and there is another cut into the Carron River, at Carron Shore, in order to communicate with the Carron Iron Works. 
Though this canal was originally constructed for vessels of lesser draught, by recent improvements sea-borne craft of greater draught may now pass through it, from the Irish Sea to the German Ocean. The locks are 74 feet long and 20 feet wide; and upon its course are thirty-three draw-bridges, ten large aqueducts and thirty-three smaller ones, that over the Kelvin standing high above the surface of the stream. It is supplied with water from reservoirs, one at Kilmananmuir and another at Kilsyth. Passenger traffic Between 1789 and 1803 the canal was used for trials of William Symington's steamboats, culminating in the Charlotte Dundas, the "first practical steamboat", built at the shipyard in Grangemouth by Alexander Hart. Passenger boats ran on the canal from 1783, and in 1809 fast boats were introduced, running from Edinburgh to Falkirk in 3 hours 30 minutes, providing such comforts as food, drink and newspapers. By 1812 they carried 44,000 passengers, taking receipts of more than £3,450. From 1828 there was a steamboat service, operated by Thomas Grahame's boat Cupid. Construction The canal was designed by John Smeaton. Construction started in 1768 and, after delays due to funding problems, was completed in 1790. To mark the opening, a hogshead of water taken from the Forth was emptied into the Clyde at Bowling to symbolise the union of the eastern and western seas. The geologist James Hutton became very involved in the canal between 1767 and 1774; he contributed his geological knowledge, made extended site inspections, and acted both as a shareholder and as a member of the management committee. The Union Canal was then constructed to link the eastern end of the canal to Edinburgh. Changes of ownership In 1842 an act of Parliament (5 & 6 Vict. c. xli) was obtained authorising the Forth and Clyde Canal Company to take over the Forth and Cart Canal. A later act (30 & 31 Vict. c. cvi) authorised the Caledonian Railway to take over the Forth and Clyde Canal. In the meantime the canal company had itself built a railway branch line to Grangemouth Dock, which it owned. The canal was nationalised in 1948, along with the railway companies, and control passed to the British Transport Commission. In 1962, the British Transport Commission was wound up, and control passed to the British Waterways Board; subsequently Scottish Canals took control. Run down and revival In 1963 the canal was closed rather than pay for the construction of a motorway crossing, and so it became disused and semi-derelict. Canal locks in the Falkirk area on the Union Canal near the connection to the Forth and Clyde Canal had been filled in and built over in the 1930s. As part of the millennium celebrations in 2000, National Lottery funds were used to regenerate both canals. A boat-lifting device, the Falkirk Wheel, was built to connect the two canals and once more allow boats to travel from the Clyde or Glasgow to Edinburgh, with a new canal connection to the River Carron and hence the River Forth. The Falkirk Wheel opened on 27 May 2002 and is now a tourist attraction. When the canal was reopened, the Port Dundas branch was reinstated from Stockingfield Junction, where it leaves the main line, to Speirs Wharf, where further progress was blocked by culverts created as part of the M8 Motorway construction and the abortive Maryhill Motorway. 
A connection from there to Pinkston Basin, which once formed the terminus of the Monkland Canal, was later achieved by the construction of a short stretch of new canal and two locks, lowering the level of the canal to enable it to pass beneath existing structures. The project cost £5.6 million, and the first lock and intermediate basin were opened on 29 September 2006. The lock was named Speaker Martin's Lock, after Michael Martin MP, the Speaker of the House of Commons, who performed the opening ceremony. Opening of the second lock was delayed by a dispute over land ownership. Forth and Clyde Canal Society The Forth and Clyde Canal Society is a waterway society on the Forth and Clyde Canal in the central lowlands of Scotland. It was formed in 1980 to "campaign for the Forth and Clyde's preservation, restoration and development". According to the Forth and Clyde Canal Society's website, its current aim is "To promote the canal and to ensure its success". The Society's campaigning included a petition of over 30,000 signatures for the reopening of the canal, which was then put in place under the Millennium Link project, which commenced work in 1999. The society currently has three boats which are used as trip boats, charter vessels and for members' cruises along the canal. Locks There are 39 locks on the Forth & Clyde Canal, as follows: 1 – New River Carron Sea Lock (The Helix Canal Extension – beyond The Kelpies) 2 – Basin Moorings (Sea Lock) 3 – Carron Cut Lock 4 – Abbotshaugh Lock 5 – Bainsford Lock 6 – Grahamston Iron Works Lock 7 – Merchiston Lock 8 – Merers Lock 9 – Camelon Railway Lock 10 – Camelon Lock 11 – Rosebank Lock 12 – Camelon Lock No. 12 13 – Camelon Lock No. 13 14 – Camelon Lock No. 14 15 – Falkirk Wheel 16 – Falkirk Bottom Lock No. 16 17 – Underwood Lock No. 17 18 – Allandale Lock No. 18 19 – Castlecary Lock No. 19 20 – Wyndford Lock No. 20 (summit level) 21 – Maryhill Top Lock No. 21 (summit level) 22 – Maryhill Lock 23 – Maryhill Lock 24 – Maryhill Lock 25 – Maryhill Bottom Lock No. 25 26 – Kelvindale (Temple Lock No. 26) 27 – Temple Lock No. 27 28 – Cloberhill Top Lock No. 28 29 – Cloberhill Middle Lock No. 29 30 – Cloberhill Bottom Lock No. 30 31 – Cloberhill Lock No. 31 32 – Cloberhill Lock No. 32 33 – Boghouse Top Lock No. 33 34 – Boghouse Middle Lock No. 34 35 – Boghouse Lower Lock No. 35 36 – No. 36 Drop Lock – Dalmuir Drop Lock (constructed recently to take navigation below a bridge) 37 – Old Kilpatrick 38 – Dalnottar Lock No. 37 39 – Bowling Lock No. 38 The ruling dimensions (length, beam, draught and headroom) limit the size of vessels that can use the canal, but at the western end larger vessels may use the Bowling basin. Data sourced from www.scottishcanals.co.uk See also Auchinstarry and its new basin, a £1.2M regeneration project Forth to Firth Canal Pathway Falkirk Helix John Muir Way World Canals Conference Donald's Quay Canal safety gates Stockingfield Junction Footnotes Further reading Lindsay, Jean. The Canals of Scotland. Newton Abbot: David & Charles, 1968. Brown, Hamish. Exploring the Edinburgh to Glasgow Canals. London: Stationery Office, 1997. Macneill, John. Canal Navigation: On the Resistance of Water to the Passage of Boats Upon Canals and Other Bodies of Water, Being the Results of Experiments. London: Roake and Varty, 1833 (see Appendix A). Mouton, H.G. "The Forth and Clyde Ship Canal," Journal of Political Economy, vol. 18, no. 9 (Nov. 1910), pp. 736–741 (in JSTOR). 
External links Glasgow's Canals Unlocked, tourism publication by Scottish Canals Environmental Advisory Service case study on Auchinstarry Basin The Forth & Clyde and Union Canals The Scotland Guide: Glasgow, The Forth and Clyde Canal – surveying the canal Falkirk Wheel The Falkirk Wheel – The Forth and Clyde Canal History of the Forth and Clyde Canal – Clyde Waterfront Heritage National Library of Scotland: SCOTTISH SCREEN ARCHIVE (archive films about the Forth and Clyde Canal) Video footage of the Stockingfield Junction WWII 'Stop or Safety gate'. Video footage of Stockingfield Junction. Video footage of Ferrydyke Quay and Bascule Bridge Video footage of Auchintarry Marina Video footage of the Dalmuir Drop Lock Images & map of mile markers seen along the Forth & Clyde Canal Canals opened in 1790 18th century in Scotland Canals in Scotland Historic Civil Engineering Landmarks Scheduled monuments in Scotland Transport in Falkirk (council area) Transport in East Dunbartonshire Transport in Glasgow Transport in West Dunbartonshire Kirkintilloch Scottish Canals 1790 establishments in Scotland Bishopbriggs Clydebank Bearsden Falkirk Grangemouth Maryhill Scottish Lowlands
Forth and Clyde Canal
[ "Engineering" ]
3,157
[ "Civil engineering", "Historic Civil Engineering Landmarks" ]
151,588
https://en.wikipedia.org/wiki/Video%20Toaster
The NewTek Video Toaster is a combination of hardware and software for the editing and production of NTSC standard-definition video. The plug-in expansion card initially worked with the Amiga 2000 computer and provides a number of BNC connectors on the exposed rear edge for connecting common analog video sources such as VHS VCRs. The related software tools support video switching, luma keying, character generation, animation, and image manipulation. Together, the hardware and software provided, for a few thousand U.S. dollars, a video editing suite that rivaled the output of contemporary (i.e. early 1990s) professional systems costing ten times as much. It allowed small studios to produce high-quality material and resulted in a cottage industry for video production, not unlike the success of the Macintosh in the desktop publishing (DTP) market only a few years earlier. The Video Toaster won the Emmy Award for Technical Achievement in 1993. Other parts of the original software package were spun off as stand-alone products, notably LightWave 3D, and achieved success on their own. As the Amiga platform lost market share and Commodore International went bankrupt in 1994 as a result of declining sales, the Video Toaster was moved to the Microsoft Windows platform, where it is still available. The company also produced what is essentially a portable pre-packaged version of the Video Toaster, along with all the computer hardware needed, as the TriCaster. These became all-digital units in 2014, ending production of the analog line. First generation systems The Video Toaster was designed by NewTek founder Tim Jenison in Topeka, Kansas. Engineer Brad Carvey built the first wire-wrap prototype, and Steve Kell wrote the software for the prototype. Many other people worked on the Toaster as it developed. The Toaster was announced at the World of Commodore expo in 1987 and released as a commercial product in December 1990 for the Commodore Amiga 2000 computer system, taking advantage of the video-friendly aspects of that system's hardware to deliver the product at an unusually low cost of $2,399. The Amiga was well adapted to this application in that its system clock ran at precisely double the NTSC color carrier frequency, allowing for simple synchronization of the video signal. The hardware component is a full-sized card that is installed into the Amiga 2000's unique single video expansion slot rather than the standard bus slots, and therefore cannot be used with the A500 or A1000 models. The card has several BNC connectors at the rear, which accept four video input sources and provide two outputs (preview and program). This initial-generation system is essentially a real-time four-channel video switcher. One feature of the Video Toaster is the inclusion of LightWave 3D, a 3D modeling, rendering, and animation program. This program became so popular in its own right that in 1994 it was made available as a standalone product separate from the Toaster systems. Aside from simple fades, dissolves, and cuts, the Video Toaster has a large variety of character generation, overlays and complex animated switching effects. These effects are in large part performed with the help of the native Amiga graphics chipset, which is synchronized to the NTSC video signals. As a result, while the Toaster renders a switching animation, the computer desktop display is not visible. While these effects are unique and inventive, they cannot be modified. 
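To make the clock relationship mentioned above concrete, the arithmetic below uses standard NTSC constants (the color subcarrier is defined as 315/88 MHz, and the NTSC Amiga's CPU clock is exactly twice that) rather than values taken from this article, so treat it as an illustrative sketch of why the Amiga's timing made locking to a video signal straightforward:

# NTSC timing relationships relevant to the Amiga/Toaster design.
# All constants here are standard NTSC figures, not values from the article.
F_SUBCARRIER_HZ = 315e6 / 88          # NTSC color subcarrier: ~3.579545 MHz
F_CPU_HZ = 2 * F_SUBCARRIER_HZ        # NTSC Amiga CPU clock: ~7.159090 MHz
F_LINE_HZ = F_SUBCARRIER_HZ / 227.5   # NTSC horizontal line rate: ~15.734 kHz

# Because the CPU clock is an exact multiple of the subcarrier, a whole
# number of CPU cycles fits in each scanline, so the chipset and an
# external video signal can stay phase-locked without drift.
cycles_per_line = F_CPU_HZ / F_LINE_HZ
print(f"subcarrier = {F_SUBCARRIER_HZ / 1e6:.6f} MHz")
print(f"CPU clock  = {F_CPU_HZ / 1e6:.6f} MHz")
print(f"CPU cycles per scanline = {cycles_per_line:.1f}")  # exactly 455.0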
Soon Toaster effects were seen everywhere, effectively advertising which brand of switcher those production companies were using. The Toaster hardware requires very stable input signals, and therefore is often used along with a separate time-base corrector (TBC) to stabilize the video sources. Third-party low-cost TBCs specifically designed to work with the Toaster quickly came to market, most of which were designed as standard ISA bus cards, taking advantage of the typically unused Bridgeboard slots. The cards do not use the Bridgeboard to communicate; they simply use the slot as a convenient power supply and physical mounting location. As with all video switchers that use a frame buffer to create DVEs (digital video effects), the video path through the Toaster hardware introduced delays in the signals when the signal was in "digital" mode. Depending on the video setup of the user, this delay could be quite noticeable when viewed along with the corresponding audio, so some users installed audio delay circuits to match the Toaster's video delay, as is common practice in video-switching studios. A user still needs at least three video tape recorders (VTRs) and a controller to perform A/B roll linear video editing (LE), since the Toaster serves merely as a switcher with no edit-controlling capabilities of its own; in such a configuration it can be triggered through general-purpose input/output (GPIO) to switch on cue. The frame delays introduced by the Toaster and other low-cost video switchers make precise editing a frustrating endeavor. Internal cards and software from other manufacturers are available to control VTRs; the most common systems go through the serial port to provide single-frame control of a VTR as a capture device for LightWave animations. A non-linear editing system (NLE) product was added later, with the introduction of the Video Toaster Flyer. Although initially offered as just an add-on to an Amiga, the Video Toaster was soon available as a complete turnkey system that included the Toaster, Amiga, and sync generator. These Toaster systems became very popular, primarily because at a cost of around US$5,000 they could do much of what a $100,000 fully professional video switcher (such as a Grass Valley switcher) could do at that time. The Toaster was also the first such video device designed around a general-purpose personal computer that was capable of delivering broadcast-quality NTSC signals. As such, during the early 1990s the Toaster was widely used by consumer Amiga owners, desktop video enthusiasts, and local television studios, and was even used regularly during The Tonight Show to produce special effects for comedy skits. It was often easy to detect a studio that used the Toaster by its unique and recognizable switching effects. The NBC television network also used the Video Toaster with LightWave for its promotional campaigns, beginning with the 1990-1991 broadcast season ("NBC: The Place To Be!"). All of the external submarine shots in the TV series seaQuest DSV were created using LightWave 3D, as were the outer-space scenes in the TV series Babylon 5 (although Amiga hardware was only used for the first three seasons). Because of the heavy use of dark blues and greens (colors for which the NTSC television standard is weak), the external submarine shots in seaQuest DSV could not have made it to air without the use of the ASDG Abekas driver, written specifically to solve this problem by Aaron Avery at ASDG (later Elastic Reality, Inc.). 
This was due to "ASDG's exclusive color encoding technology which increases the apparent color bandwidth of video". An updated version called Video Toaster 4000 was later released, using the Amiga 4000's video slot. The 4000 was co-developed by actor Wil Wheaton, then famous for Star Trek: The Next Generation, who worked on product testing and quality control. He later used his public profile to serve as a technology evangelist for the product. Besides Wheaton, Penn Jillette (of Penn and Teller fame) and skateboarder Tony Hawk also served as evangelists for the 4000. Hawk was given a Video Toaster 4000 by NewTek, which had learned that he was an Amiga user, in exchange for appearing in a promotional video for the product. Hawk later used the Toaster to edit a promotional video for the TurboDuo game Lords of Thunder in 1993. The Amiga Video Toaster 4000 source code was released in 2004 by NewTek and DiscreetFX. Video Toaster Flyer For the second generation, NewTek introduced the Video Toaster Flyer, a much more capable non-linear editing system. In addition to processing live video signals, the Flyer makes use of hard drives to store video clips as well as audio, and allows complex scripted playback. The Flyer is capable of simultaneous dual-channel playback, which allows the Toaster's video switcher to perform transitions and other effects on video clips without the need for rendering. The hardware component is again a card designed for the Amiga's Zorro II expansion slot, and was primarily designed by Charles Steinkuehler. The Flyer portion of the Video Toaster/Flyer combination is a complete computer of its own, having its own microprocessor and embedded software, which was written by Marty Flickinger. Its hardware includes three embedded SCSI controllers. Two of these SCSI buses are used to store video data, and the third to store audio. The hard drives are thus connected to the Flyer directly rather than to the Amiga's buses, and use a proprietary filesystem layout, but they were available as regular devices using the included DOS driver. The Flyer uses a proprietary wavelet compression algorithm known as VTASC, which was well regarded at the time for offering better visual quality than comparable motion-JPEG-based nonlinear editing systems (the general idea behind wavelet codecs is sketched below). One of the card's primary uses is for playing back LightWave 3D animations created with the Toaster. Video Toaster Screamer In 1993, NewTek announced the Video Toaster Screamer, a parallel rendering extension to the Toaster built by DeskStation Technology, with four motherboards, each carrying a MIPS R4400 CPU and its own RAM. The Screamer accelerated the rendering of animations developed using the Toaster's bundled LightWave 3D software, and was supposedly 40 times as powerful as a Toaster 4000. Only a handful of test units were produced before NewTek abandoned the project and refocused on the Flyer. This cleared the way for DeskStation Technology to release their own cut-down version, the Raptor. Later generations Later generations of the product run on Windows NT PCs. In 2004, the source code for the Amiga version was publicly released and hosted on DiscreetFX's site Open Video Toaster. With the addition of packages such as DiscreetFX's Millennium and thousands of wipes and backgrounds added over the years, Video Toaster systems can still be found in use today in fully professional settings. NewTek renamed the Video Toaster "VideoToaster[2]", and later "VT[3]" for the PC version, which is now at version 5.3. 
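Returning briefly to the Flyer's VTASC codec mentioned above: VTASC itself is proprietary and its internals are not documented here, so the following Python/NumPy sketch illustrates only the generic idea behind wavelet video codecs of that era, under stated assumptions. It uses a one-level 2D Haar transform (a deliberately simple wavelet; VTASC's actual filters are unknown), and the quantization step q is an arbitrary value chosen for illustration. Coarsely quantizing the detail subbands, where little of the image energy lives, is what trades visual quality for bitrate:

import numpy as np

def haar2d(frame):
    # One level of a 2D Haar wavelet transform: returns a quarter-resolution
    # approximation (ll) and three detail subbands (lh, hl, hh).
    f = frame.astype(float)
    a, b = f[0::2, 0::2], f[0::2, 1::2]
    c, d = f[1::2, 0::2], f[1::2, 1::2]
    ll = (a + b + c + d) / 4.0
    lh = (a - b + c - d) / 4.0
    hl = (a + b - c - d) / 4.0
    hh = (a - b - c + d) / 4.0
    return ll, lh, hl, hh

def inverse_haar2d(ll, lh, hl, hh):
    # Exact inverse of haar2d; the round trip is lossless
    # if the subbands were not quantized in between.
    h, w = ll.shape
    out = np.empty((2 * h, 2 * w))
    out[0::2, 0::2] = ll + lh + hl + hh
    out[0::2, 1::2] = ll - lh + hl - hh
    out[1::2, 0::2] = ll + lh - hl - hh
    out[1::2, 1::2] = ll - lh - hl + hh
    return out

# Stand-in for one 8-bit video frame; q is a made-up quantization step.
frame = np.random.randint(0, 256, (480, 640))
ll, lh, hl, hh = haar2d(frame)
q = 8.0
lh, hl, hh = (np.round(band / q) * q for band in (lh, hl, hh))
approx = inverse_haar2d(ll, lh, hl, hh)
print("max reconstruction error:", np.abs(approx - frame).max())

Because only the detail bands are quantized, the error stays small and degrades gracefully as q grows, which is the quality behavior wavelet codecs were praised for relative to block-based motion-JPEG.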
Since VT[4] version 4.6, SDI switching has been supported through an add-on called SX-SDI. NewTek released a spin-off product, known as the TriCaster, a portable live-production, live-projection, live-streaming, and NLE system. The TriCaster packaged the VT system as a turnkey solution in a custom-designed portable PC case with video, audio and remote computer inputs and outputs on the front and back of the case. As of April 2008, four versions were in production: the basic TriCaster 2.0, TriCaster PRO 2.0, TriCaster STUDIO 2.0 and the TriCaster BROADCAST, the last of which added SDI and AES-EBU connectivity plus a preview output capability. The TriCaster PRO FX, a model positioned in the line between the original TriCaster PRO and the TriCaster STUDIO, was introduced in early 2008 and later discontinued; its feature set was added to the TriCaster PRO 2.0. The TriCaster PRO 2.0, TriCaster STUDIO 2.0 and TriCaster BROADCAST use successively larger cases than the base-model TriCaster 2.0. The units in the product line above the base-model TriCaster 2.0 enable use of the LiveSet 3D Live Virtual Set technology developed by NewTek, which is also found in NewTek's venerable VT[5] Integrated Production Suite, the modern-day successor to the original Video Toaster. In late 2009, NewTek released its high-definition version of the TriCaster, called the TriCaster XD300, a three-input HD system. It is able to accept a variety of formats (NTSC, 720p, or 1080i; and, on multi-standard systems, PAL) that can be mixed to downstream keys. The XD300 also features five M/E-style virtual inputs, each permitting up to three video sources to be combined into one source, accessible like any other input on the switcher. At NAB Show 2010, NewTek announced its TCXD850, a rack-mountable eight-input switcher with 22 channels. It was released on July 15, 2010. Decline By 2009, the Video Toaster had started to receive less attention from NewTek in the run-up to the transition to HD systems. In December 2010, the discontinuation of VT[5] was announced, marking the end of the Video Toaster as a stand-alone product. TriCaster systems based on the VT platform were still made until August 2012, when the TriCaster STUDIO was replaced by the TriCaster 40. This officially marked the end of the Video Toaster. Subprograms ToasterCG is the character-generation subprogram inside Video Toaster. ToasterEdit is the video-editing subprogram inside Video Toaster. ToasterPaint is the digital-painting subprogram inside Video Toaster. See also LightWave 3D Quantel Paintbox References External links An episode of Computer Chronicles featuring the Video Toaster on the Amiga 2000 Amiga Video Toaster/Flyer complete source-code 1990 software Products and services discontinued in 2012 Amiga software Amiga Video editing software for Windows Multimedia Multimedia software New media
Video Toaster
[ "Technology" ]
2,821
[ "Multimedia", "New media", "Multimedia software" ]
151,604
https://en.wikipedia.org/wiki/Deception
Deception is the act of convincing one or more recipients of untrue information. The person creating the deception knows it to be false, while the receiver of the message tends to believe it (although this is not always the case). It is often done for personal gain or advantage. Deception can involve dissimulation, propaganda and sleight of hand, as well as distraction, camouflage or concealment. There is also self-deception. It can also be called, with varying subjective implications, beguilement, deceit, bluff, mystification, ruse, or subterfuge. Deception is a major relational transgression that often leads to feelings of betrayal and distrust. Deception violates relational rules and is considered to be a negative violation of expectations. Most people expect friends, relational partners, and even strangers to be truthful most of the time. If people expected most conversations to be untruthful, talking and communicating with others would require distraction and misdirection to acquire reliable information. A significant amount of deception occurs between some romantic and relational partners. Deceit and dishonesty can also form grounds for civil litigation in tort or contract law (where it is known as misrepresentation, or fraudulent misrepresentation if deliberate), or give rise to criminal prosecution for fraud. It also forms a vital part of psychological warfare in denial and deception. Types Communication Deception includes several types of communications or omissions that serve to distort or omit the whole truth. Examples of deception range from false statements to misleading claims in which relevant information is omitted, leading the receiver to infer false conclusions. For example, a claim that "sunflower oil is beneficial to brain health due to the presence of omega-3 fatty acids" may be misleading, as it leads the receiver to believe sunflower oil will benefit brain health more than other foods. In fact, sunflower oil is relatively low in omega-3 fatty acids and is not particularly good for brain health, so while this claim is technically true, it leads the receiver to infer false information. Deception itself is the intentional management of verbal or nonverbal messages so that the message receiver will believe something the message sender knows to be false. Intent is critical with regard to deception; it differentiates deception from an honest mistake. Interpersonal deception theory explores the interrelation between communicative context and sender and receiver cognitions and behaviors in deceptive exchanges. Some forms of deception include: Lies: making up information or giving information that is the opposite of or very different from the truth. Equivocations: making an indirect, ambiguous, or contradictory statement. Concealments: omitting information that is important or relevant to the given context, or engaging in behavior that helps hide relevant information. Exaggerations: overstating or stretching the truth to a degree. Understatements: minimizing or downplaying aspects of the truth. Untruths: misrepresenting the truth. Buller and Burgoon (1996) have proposed three taxonomies to distinguish motivations for deception based on their interpersonal deception theory: Instrumental: to avoid punishment or to protect resources Relational: to maintain relationships or bonds Identity: to preserve "face" or the self-image Appearance Simulation consists of exhibiting false information. 
There are three simulation techniques: mimicry (copying another model or example, such as non-poisonous snakes which have the colours and markings of poisonous snakes), fabrication (making up a new model), and distraction (offering an alternative model). Mimicry In the biological world, mimicry involves unconscious deception by similarity to another organism, or to a natural object. Animals, for example, may deceive predators or prey by visual, auditory or other means. Fabrication To make something that appears to be something it is not, usually for the purpose of encouraging an adversary to reveal, endanger, or divert that adversary's own resources (i.e., as a decoy). For example, in World War II, it was common for the Allies to use hollow tanks made out of wood to fool German reconnaissance planes into thinking a large armored unit was on the move in one area while the real tanks were well hidden and on the move in a location far from the fabricated "dummy" tanks. Mock airplanes and fake airfields have also been created. Distraction To divert someone's attention from the truth by offering bait or something else more tempting, drawing attention away from the object being concealed. For example, a security company publicly announces that it will ship a large gold shipment down one route, while in reality taking a different route. A military unit trying to maneuver out of a dangerous position may make a feint attack or fake retreat, to make the enemy think it is doing one thing while in fact it has another goal. Camouflage The camouflage of a physical object often works by breaking up the visual boundary of that object. This usually involves colouring the camouflaged object with the same colours as the background against which the object will be hidden. In the realm of deceptive half-truths, camouflage is realized by 'hiding' some of the truths. Military camouflage as a form of visual deception is a part of military deception. Some Allied navies during World War II used dazzle camouflage painting schemes to confuse observers regarding a naval vessel's speed and heading, by breaking up the ship's otherwise obvious silhouette. In nature, the defensive mechanism of most octopuses, ejecting black ink in a large cloud to aid escape from predators, is a form of camouflage. Disguise A disguise is an appearance created to give the impression of being somebody or something else; for a well-known person this is also called going incognito. Passing involves more than mere dress and can include hiding one's real manner of speech. The fictional detective Sherlock Holmes often disguised himself as somebody else to avoid being recognized. In a more abstract sense, 'disguise' may refer to the act of disguising the nature of a particular proposal in order to hide an unpopular motivation or effect associated with that proposal. This is a form of political spin or propaganda, covering the matters of rationalisation and transfer within the techniques of propaganda generation. Examples include depicting an act of war (an attack) as a "peace" mission, or "spinning" a kidnapping as protective custody. A seventeenth-century story collection, Zhang Yingyu's The Book of Swindles (ca. 1617), offers multiple examples of bait-and-switch and other fraud techniques that exploit greed in Ming-dynasty China. In romantic relationships Deception is particularly common within romantic relationships, with more than 90% of individuals admitting to lying or not being completely honest with their partner at one time or another. 
There are three primary motivations for deception in relationships. Deception impacts the perception of a relationship in a variety of ways, for both the deceiver and the deceived. The deceiver typically perceives less understanding and intimacy from the relationship, in that they see their partner as less empathetic and more distant. The act of deception can also result in feelings of distress for the deceiver, which become worse the longer the deceiver has known the deceived, as well as in longer-term relationships. Once discovered, deception creates feelings of detachment and uneasiness surrounding the relationship for both partners; this can eventually lead to both partners withdrawing from the relationship, or to the relationship's deterioration. In general, discovery of deception can result in a decrease in relationship satisfaction and commitment level. However, in instances where a person is successfully deceived, relationship satisfaction can actually be positively impacted for the person deceived, since lies are typically used to make the other partner feel more positive about the relationship. In general, deception tends to occur less often in relationships with higher satisfaction and commitment levels and in relationships where partners have known each other longer, such as long-term relationships and marriage. In comparison, deception is more likely to occur in casual relationships and in dating, where commitment level and length of acquaintanceship are often much lower. Deception and infidelity Unique to exclusive romantic relationships is the use of deception in the form of infidelity. When it comes to the occurrence of infidelity, there are many individual difference factors that can impact this behavior. Infidelity is impacted by attachment style, relationship satisfaction, executive function, sociosexual orientation, personality traits, and gender. Attachment style impacts the probability of infidelity: research indicates that people with an insecure attachment style (anxious or avoidant) are more likely to cheat than individuals with a secure attachment style, especially avoidant men and anxious women. Insecure attachment styles are characterized by a lack of comfort within a romantic relationship, resulting in a desire to be overly independent (avoidant attachment style) or a desire to be overly dependent on one's partner in an unhealthy way (anxious attachment style). Those with an insecure attachment style do not believe that their romantic partner can or will support and comfort them in an effective way, a belief stemming either from a negative view of themselves (anxious attachment style) or a negative view of romantic others (avoidant attachment style). Women are more likely to commit infidelity when they are emotionally unsatisfied with their relationship, whereas men are more likely to commit infidelity if they are sexually unsatisfied with their current relationship. Women are more likely to commit emotional infidelity than men, while men are more likely to commit sexual infidelity than women; however, these are not mutually exclusive categories, as both men and women can and do engage in emotional or sexual infidelity. Executive control is a part of executive functions that allows individuals to monitor and control their behavior by thinking about and managing their actions. 
The level of executive control that an individual possesses is shaped by development and experience and can be improved through training and practice. Individuals who show a higher level of executive control can more easily influence and control their thoughts and behaviors in relation to potential threats to an ongoing relationship, which can result in paying less attention to such threats (other potential romantic partners). Sociosexual orientation concerns how freely individuals partake in casual sex outside of a committed relationship and their beliefs about how necessary it is to be in love in order to engage in sex with someone. Individuals with a less restrictive sociosexual orientation (those more likely to partake in casual sex) are more likely to engage in infidelity. Individuals with personality traits including high neuroticism, low agreeableness, and low conscientiousness are more likely to commit infidelity. Men are generally speculated to cheat more than women, but it is unclear whether this is a result of socialization processes that make cheating more acceptable for men than for women, or of an actual difference in behavior. Research conducted by Conley and colleagues (2011) suggests that these gender differences stem from the negative stigma associated with women who engage in casual sex and from inferences about the sexual capability of the potential partner. In their study, men and women were equally likely to accept a sexual proposal from an individual who was thought to have a high level of sexual prowess. Additionally, women were just as likely as men to accept a casual sexual proposal when they did not anticipate being subjected to the negative stigma that labels sexually permissive women as promiscuous. Online dating deceptions Research on the use of deception in online dating has shown that people are generally truthful about themselves, with the exception of physical attributes, which they misstate to appear more attractive. According to Scientific American, "nine out of ten online daters will fib about their height, weight, or age", such that men were more likely to lie about height while women were more likely to lie about weight. In a study conducted by Toma and Hancock, "less attractive people were found to be more likely to have chosen a profile picture in which they were significantly more attractive than they were in everyday life". Both genders used this strategy in online dating profiles, but women more so than men. Additionally, less attractive people were more likely to have "lied about objective measures of physical attractiveness such as height and weight". In general, men are more likely to lie on dating profiles, the one exception being that women are more likely to lie about weight. In business People who negotiate are more tempted to use deceit. Negotiation requires both parties to trust and respect one another, yet in negotiations each party is unaware of what is happening on the other side of the matter being negotiated. Deception in negotiation comes in many forms, and each has its own consequences (Gaspar et al., 2019): Price reservation: not stating the real budget or price that one has in mind. Misrepresentation of interests: misstating one's level of interest, for example when the buyer seems desperate. Fabrication of facts: the most clearly immoral form, in which a person lies about materials or gives misleading information to get a sale. 
Omission of relevant facts: not stating something that is helpful to know; for example, a car may look like new, but that is of little help if the seller omits the fact that there is a problem with the transmission. In journalism Journalistic deception ranges from passive activities (i.e. blending into a civil rights march) to active deception (i.e. falsely identifying oneself over the telephone, getting hired as a worker at a mental hospital). Paul Braun says that the journalist does not stand apart from the rest of the populace in the use of deception. In law For legal purposes, deceit is a tort that occurs when a person makes a factual misrepresentation, knowing that it is false (or having no belief in its truth and being reckless as to whether it is true) and intending it to be relied on by the recipient, and the recipient acts to his or her detriment in reliance on it. Deceit may also be grounds for legal action in contract law (known as misrepresentation, or if deliberate, fraudulent misrepresentation), or a criminal prosecution, on the basis of fraud. In government The use of deception by a government is typically frowned upon unless it is in reference to military operations. These terms refer to the means by which governments employ deception: Subterfuge – in the case of disguise and disguised movement Secrecy – in the protection of communications and in the concealment of documents. Propaganda – somewhat controversial label for what governments produce in the way of controlled information and message in media documents and communications. Fake news – in criminal investigations and the delivery of information to the public, the deliberate alteration of certain key details. Misinformation – similar to the above, but unconfined to criminal investigations. Military secret – secrecy for military operations False flag – military operations that deal with deception as their main component. In religion Deception is a common topic in religious discussions. Some sources focus on how religious texts deal with deception. But other sources focus on the deceptions created by the religions themselves. For example, Ryan McKnight is the founder of an organization called FaithLeaks. He stated that the organization's "goal is to reduce the amount of deception and untruths and unethical behaviors that exist in some facets of religion". Christianity Islam In general, Islam does not permit deception or lying. Prophet Muhammad said, "He who deceives is not of me (is not my follower)". However, there are some exceptions, especially in cases of war, peacemaking, or safeguarding one's faith. For example, taqiya is an Islamic juridical term for cases in which a Muslim is allowed to lie, such as when forced to deny their faith or when faced with persecution. The concept is mainly followed by the Shi'ite sect, but its application varies "significantly among Islamic sects, scholars, countries, and political regimes", and it has been invoked by critics of Islam to portray the faith as allowing dishonesty. In philosophy Deception is a recurring theme in modern philosophy. In 1641 Descartes published his Meditations, in which he introduced the notion of the Deus deceptor, a posited being capable of deceiving the thinking ego about reality. The notion was used as part of his hyperbolic doubt, wherein one decides to doubt everything there is to doubt. The Deus deceptor is a mainstay of so-called skeptical arguments, which purport to put into question our knowledge of reality. 
The punch of the argument is that all we know might be wrong, since we might be deceived. Stanley Cavell has argued that all skepticism has its root in this fear of deception. In psychological research Psychological research often needs to deceive the subjects as to its actual purpose. The rationale for such deception is that humans are sensitive to how they appear to others (and to themselves) and this self-consciousness might interfere with or distort how they actually behave outside of a research context (where they would not feel they were being scrutinized). For example, if a psychologist is interested in learning the conditions under which students cheat on tests, directly asking them, "how often do you cheat?", might result in a high percentage of "socially desirable" answers, and the researcher would, in any case, be unable to verify the accuracy of these responses. In general, then, when it is unfeasible or naive to simply ask people directly why or how often they do what they do, researchers turn to the use of deception to distract their participants from the true behavior of interest. So, for example, in a study of cheating, the participants may be told that the study has to do with how intuitive they are. During the process, they might be given the opportunity to look at (secretly, they think) another participant's [presumably highly intuitively correct] answers before handing in their own. At the conclusion of this or any research involving deception, all participants must be told of the true nature of the study and why deception was necessary (this is called debriefing). Moreover, it is customary to offer to provide a summary of the results to all participants at the conclusion of the research. Though commonly used and allowed by the ethical guidelines of the American Psychological Association, there has been debate about whether or not the use of deception should be permitted in psychological research experiments. Those against deception object to the ethical and methodological issues involved in its use. Dresser (1981) notes that, ethically, researchers are only to use subjects in an experiment after the subject has given informed consent. However, because of its very nature, a researcher conducting a deception experiment cannot reveal its true purpose to the subject, thereby making any consent given by a subject misinformed (p. 3). Baumrind (1964), criticizing the use of deception in the Milgram (1963) obedience experiment, argues that deception experiments inappropriately take advantage of the implicit trust and obedience given by the subject when the subject volunteers to participate (p. 421). From a practical perspective, there are also methodological objections to deception. Ortmann and Hertwig (1998) note that "deception can strongly affect the reputation of individual labs and the profession, thus contaminating the participant pool" (p. 806). If the subjects in the experiment are suspicious of the researcher, they are unlikely to behave as they normally would, and the researcher's control of the experiment is then compromised (p. 807). Those who do not object to the use of deception note that there is always a constant struggle in balancing "the need for conducting research that may solve social problems and the necessity for preserving the dignity and rights of the research participant" (Christensen, 1988, p. 670). 
They also note that, in some cases, using deception is the only way to obtain certain kinds of information, and that prohibiting all deception in research would "have the egregious consequence of preventing researchers from carrying out a wide range of important studies" (Kimmel, 1998, p. 805). Additionally, findings suggest that deception is not harmful to subjects. Christensen's (1988) review of the literature found "that research participants do not perceive that they are harmed and do not seem to mind being misled" (p. 668). Furthermore, those participating in experiments involving deception "reported having enjoyed the experience more and perceived more educational benefit" than those who participated in non-deceptive experiments (p. 668). Lastly, it has also been suggested that an unpleasant treatment used in a deception study or the unpleasant implications of the outcome of a deception study may be the underlying reason that a study using deception is perceived as unethical in nature, rather than the actual deception itself (Bröder, 1998, p. 806; Christensen, 1988, p. 671). In social research Some methodologies in social research, especially in psychology, involve deception. The researchers purposely mislead or misinform the participants about the true nature of the experiment. In an experiment conducted by Stanley Milgram in 1963, the researchers told participants that they would be participating in a scientific study of memory and learning. In reality the study looked at the participants' willingness to obey commands, even when that involved inflicting pain upon another person. After the study, the subjects were informed of the true nature of the study, and steps were taken in order to ensure that the subjects left in a state of well-being. Use of deception raises many problems of research ethics and it is strictly regulated by professional bodies such as the American Psychological Association. In computer security Online disinhibition Deception occurs not only in real life, but also online. Through mediated communication, a type of communication exchanged through online platforms such as social media and mass media like radio and magazines, deceptive messages can be spread online. According to the online disinhibition effect, a person may not feel the need to censor their communication in an online environment, often because of the sense that on the internet no one can physically verify whether one's communication is true or not. This can lead to falsehoods, since communication does not occur face-to-face, making it difficult to judge the words of other people. Online disinhibition typically occurs on social media, such as in group chats or online games. Because of the lack of face-to-face communication, people can (though not always) portray themselves as a different person than they are in reality, which allows them to fit in with a specific group they wish to be a part of. As technology continues to expand, deception online has become common. Digital deception is widely used within different forms of technology to misrepresent someone or something. Through digital deception, people are easily capable of deceiving others, whether it be for their own benefit or to ensure their safety. One form of digital deception is catfishing. By creating a false identity, catfishers deceive those online to build relationships, friendships, or connections without revealing who they truly are as a person. 
They do so by creating an entirely new account with made-up information, allowing them to portray themselves as a different person. Most lies and misinformation are commonly spread through emails and instant messaging, since these messages are erased faster. Without face-to-face communication, it can be easier to deceive others, and it is harder to tell the truth from a lie. These unreliable cues allow digital deception to easily influence and mislead others. Double bluff Double bluffing is a deceptive scenario in which the deceiver tells the truth to a person about some subject, but makes the person think that the deceiver is lying. In poker, the term double bluff refers to a situation in which the deceiving player is trying to bluff with bad cards, then gets re-raised by the opponent, and then re-raises again in the hope that the other player folds. This strategy works best on opponents who easily fold under pressure. Deception detection Deception detection is extremely difficult unless the lie is blatant or obvious or contradicts something the other person knows to be true. While it is difficult to deceive a person over a long period of time, deception often occurs in day-to-day conversations between relational partners. Detecting deception is difficult because there are no known completely reliable indicators of deception and because people often rely on a truth-default state. Deception, however, places a significant cognitive load on the deceiver. He or she must recall previous statements so that his or her story remains consistent and believable. As a result, deceivers often leak important information both verbally and nonverbally. Deception and its detection is a complex, fluid, and cognitive process that is based on the context of the message exchange. The interpersonal deception theory posits that interpersonal deception is a dynamic, iterative process of mutual influence between a sender, who manipulates information to depart from the truth, and a receiver, who attempts to establish the validity of the message. A deceiver's actions are interrelated with the message receiver's actions. It is during this exchange that the deceiver will reveal verbal and nonverbal information about deceit. Some research has found that there are some cues that may be correlated with deceptive communication, but scholars frequently disagree about the effectiveness of many of these cues as reliable indicators. A cross-cultural study conducted to analyze human behavior and deception concluded that detecting deception often has to do with the judgements of a person and how they interpret non-verbal cues. One's personality can also influence these judgements, as some people are more confident deceivers than others. Noted deception scholar Aldert Vrij even states that there is no nonverbal behavior that is uniquely associated with deception. As previously stated, a specific behavioral indicator of deception does not exist. There are, however, some nonverbal behaviors that have been found to be correlated with deception. Vrij found that examining a "cluster" of these cues was a significantly more reliable indicator of deception than examining a single cue. Many people believe that they are good at deception, though this confidence is often misplaced. Deception detection can decrease with increased empathy. Emotion recognition training does not affect the ability to detect deception. Mark Frank proposes that deception is detected at the cognitive level. 
Lying requires deliberate conscious behavior, so listening to speech and watching body language are important factors in detecting lies. If a response to a question has a lot of disturbances, less talking time, repeated words, and poor logical structure, then the person may be lying. Vocal cues such as pitch and its variation may also provide meaningful clues to deceit. Fear specifically causes heightened arousal in liars, which manifests in more frequent blinking, pupil dilation, speech disturbances, and a higher-pitched voice. Liars that experience guilt have been shown to make attempts at putting distance between themselves and the deceptive communication, producing "nonimmediacy cues". These can be verbal or physical, including speaking in more indirect ways and showing an inability to maintain eye contact with their conversation partners. Another cue for detecting deceptive speech is the tone of the speech itself. Streeter, Krauss, Geller, Olson, and Apple (1977) have assessed that fear and anger, two emotions widely associated with deception, cause greater arousal than grief or indifference, and note that the amount of stress one feels is directly related to the frequency of the voice. See also References Citations General and cited sources American Psychological Association. (2010). Ethical principles of psychologists and code of conduct. Retrieved February 7, 2013. Bassett, Rodney L.; Basinger, David; & Livermore, Paul. (1992, December). Lying in the Laboratory: Deception in Human Research from Psychological, Philosophical, and Theological Perspectives. ASA3.org Baumrind, D. (1964). Some thoughts on ethics of research: After reading Milgram's "Behavioral Study of Obedience." American Psychologist, 19(6), 421–423. Retrieved February 21, 2008, from the PsycINFO database. Behrens, Roy R. (2009). Camoupedia: A Compendium of Research on Art, Architecture and Camouflage. Bobolink Books. Bröder, A. (1998). Deception can be acceptable. American Psychologist, 53(7), 805–806. Retrieved February 22, 2008, from the PsycINFO database. Dresser, R. S. (1981). Deception research and the HHS final regulations. IRB: Ethics and Human Research, 3(4), 3–4. Retrieved February 21, 2008, from the JSTOR database. Edelman, Murray. (1988). Constructing the Political Spectacle. Kimmel, A. J. (1998). In defense of deception. American Psychologist, 53(7), 803–805. Retrieved February 22, 2008, from the PsycINFO database. Milgram, S. (1963). Behavioral study of obedience. The Journal of Abnormal and Social Psychology, 67(4), 371–378. Retrieved February 25, 2008, from the PsycARTICLES database. Ortmann, A. & Hertwig, R. (1998). The question remains: Is deception acceptable? American Psychologist, 53(7), 806–807. Retrieved February 22, 2008, from the PsycINFO database. Shaughnessy, J. J., Zechmeister, E. B., & Zechmeister, J. S. (2006). Research Methods in Psychology (7th ed.). Boston: McGraw Hill. Schneier, Bruce. Secrets and Lies. Wright, Robert. (1995). The Moral Animal: Why We Are the Way We Are: The New Science of Evolutionary Psychology. Vintage. Further reading Mitchell, Robert W.; Thompson, Nicholas S., eds. Deception: Perspectives on Human and Nonhuman Deceit. New York: State University of New York Press. Kopp, Carlo. (2011, October). Deception in Biology: Nature's Exploitation of Information to Win Survival Contests. Monash University. 
"Scientists Pick Out Human Lie Detectors", NBC News/Associated Press Zhang Yingyu, The Book of Swindles: Selections from a Late Ming Collection, translated by Christopher Rea and Bruce Rusk (New York: Columbia University Press, 2017). External links Barriers to critical thinking Communication Human behavior Psychology experiments Lying
Deception
[ "Biology" ]
6,274
[ "Behavior", "Human behavior" ]
151,617
https://en.wikipedia.org/wiki/Authorization
Authorization or authorisation (see spelling differences) is the function of specifying rights/privileges for accessing resources, which is related to general information security and computer security, and to IAM (Identity and Access Management) in particular. More formally, "to authorize" is to define an access policy during the configuration of systems and user accounts. For example, user accounts for human resources staff are typically configured with authorization for accessing employee records, and this policy gets formalized as access control rules in a computer system. Authorization must not be confused with access control. During usage, access control enforces the authorization policy by deciding whether access requests to resources from (authenticated) consumers shall be approved (granted) or disapproved (rejected). Resources include individual files or an item's data, computer programs, computer devices and functionality provided by computer applications. Examples of consumers are computer users, computer software and other hardware on the computer. Overview IAM consists of the following two phases: the configuration phase, where a user account is created and its corresponding access authorization policy is defined, and the usage phase, where user authentication takes place, followed by access control to ensure that the user/consumer only gets access to resources for which they are authorized. Hence, access control in computer systems and networks relies on access authorization specified during configuration. Most modern, multi-user operating systems include role-based access control (RBAC), where authorization is implicitly defined by the roles. User authentication is the process of verifying the identity of consumers. When an authenticated consumer tries to access a resource, the access control process checks that the consumer has been authorized to use that resource. Authorization is the responsibility of an authority, such as a department manager, within the application domain, but is often delegated to a custodian such as a system administrator. Authorizations are expressed as access policies in some type of "policy definition application", e.g. in the form of an access control list or a capability, or in a policy administration point, e.g. XACML. On the basis of the "principle of least privilege", consumers should only be authorized to access whatever they need to do their jobs. Older and single-user operating systems often had weak or non-existent authentication and access control systems. "Anonymous consumers" or "guests" are consumers that have not been required to authenticate. They often have limited authorization. On a distributed system, it is often desirable to grant access without requiring a unique identity. Familiar examples of access tokens include keys, certificates and tickets: they grant access without proving identity. Trusted consumers are often authorized for unrestricted access to resources on a system, but must be verified so that the access control system can make the access approval decision. "Partially trusted" consumers and guests will often have restricted authorization in order to protect resources against improper access and usage. The access policy in some operating systems, by default, grants all consumers full access to all resources. Others do the opposite, insisting that the administrator explicitly authorize a consumer to use each resource. 
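The configuration/usage split described above can be made concrete with a short sketch. The following Python fragment is a minimal illustration of a role-based authorization check, not any particular product's API; the role, permission, and user names are hypothetical, echoing the human-resources example above.

# Minimal RBAC sketch: the policy tables are defined at configuration time,
# then enforced by an access control check at usage time.
# All names below are hypothetical examples.
ROLE_PERMISSIONS = {
    "hr_staff": {"employee_records:read", "employee_records:write"},
    "employee": {"employee_records:read_own"},
}
USER_ROLES = {"alice": {"hr_staff"}, "bob": {"employee"}}

def is_authorized(user: str, permission: str) -> bool:
    """Access control: approve or reject a request against the configured policy."""
    roles = USER_ROLES.get(user, set())  # anonymous consumers hold no roles
    return any(permission in ROLE_PERMISSIONS.get(role, set()) for role in roles)

assert is_authorized("alice", "employee_records:write")      # approved
assert not is_authorized("bob", "employee_records:write")    # rejected

Keeping the policy tables separate from the check mirrors the configuration and usage phases: changing what a consumer may do means editing the policy, not the enforcement code.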
Even when access is controlled through a combination of authentication and access control lists, the problem of maintaining the authorization data is not trivial, and it often represents as much administrative burden as managing authentication credentials. It is often necessary to change or remove a user's authorization: this is done by changing or deleting the corresponding access rules on the system. Using atomic authorization is an alternative to per-system authorization management, where a trusted third party securely distributes authorization information. Related interpretations Public policy In public policy, authorization is a feature of trusted systems used for security or social control. Banking In banking, an authorization is a hold placed on a customer's account when a purchase is made using a debit card or credit card. Publishing In publishing, public lectures and other freely available texts are sometimes published without the approval of the author. These are called unauthorized texts. An example is the 2002 'The Theory of Everything: The Origin and Fate of the Universe', which was collected from Stephen Hawking's lectures and published without the permission required under copyright law. See also Access control Authorization hold Authorization OSID Kerberos (protocol) Multi-party authorization OAuth OpenID Connect OpenID Usability of web authentication systems WebFinger WebID XACML References Computer access control Access control Authority
Authorization
[ "Engineering" ]
879
[ "Cybersecurity engineering", "Computer access control" ]
151,629
https://en.wikipedia.org/wiki/Glossary
A glossary (from Greek glossa; language, speech, wording), also known as a vocabulary or clavis, is an alphabetical list of terms in a particular domain of knowledge with the definitions for those terms. Traditionally, a glossary appears at the end of a book and includes terms within that book that are either newly introduced, uncommon, or specialized. While glossaries are most commonly associated with non-fiction books, fiction novels sometimes include a glossary for unfamiliar terms. A bilingual glossary is a list of terms in one language defined in a second language or glossed by synonyms (or at least near-synonyms) in another language. In a general sense, a glossary contains explanations of concepts relevant to a certain field of study or action. In this sense, the term is related to the notion of ontology. Automatic methods have also been provided that transform a glossary into an ontology or a computational lexicon. Core glossary A core glossary is a simple glossary or explanatory dictionary that enables the definition of other concepts, especially for newcomers to a language or field of study. It contains a small working vocabulary and definitions for important or frequently encountered concepts, usually including idioms or metaphors useful in a culture. Automatic extraction of glossaries Computational approaches to the automated extraction of glossaries from corpora or the Web have been developed in recent years. These methods typically start from domain terminology and extract one or more glosses for each term of interest. Glosses can then be analyzed to extract hypernyms of the defined term and other lexical and semantic relations. See also Controlled vocabulary Dictionary Frahang-i Pahlavig, a glossary of Pahlavi logograms Index (publishing) Terminology extraction References External links glossarist.com: The Glossarist - Large list of glossaries www.ontopia.net: The TAO of Topic Maps www.babel-linguistics.com: Babel Linguistics Glossaries Selected Multilingual Glossaries by Industry This provides a detailed description of the development of glossaries in classical languages. Book design Lexicography
Glossary
[ "Engineering" ]
437
[ "Book design", "Design" ]
151,651
https://en.wikipedia.org/wiki/Fomalhaut
Fomalhaut (, ) is the brightest star in the southern constellation of Piscis Austrinus, the Southern Fish, and one of the brightest stars in the night sky. It has the Bayer designation Alpha Piscis Austrini, which is an alternative form of α Piscis Austrini, and is abbreviated Alpha PsA or α PsA. This is a class A star on the main sequence approximately from the Sun as measured by the Hipparcos astrometry satellite. Since 1943, the spectrum of this star has served as one of the stable anchor points by which other stars are classified. It is classified as a Vega-like star that emits excess infrared radiation, indicating it is surrounded by a circumstellar disk. Fomalhaut, the K-type main-sequence star TW Piscis Austrini, and the M-type red dwarf star LP 876-10 constitute a triple system, even though the companions are separated by approximately 8 degrees. Fomalhaut was the first stellar system with an extrasolar planet candidate imaged at visible wavelengths, designated Fomalhaut b. However, analyses in 2019 and 2023 of existing and new observations indicate that Fomalhaut b is not a planet, but rather an expanding region of debris from a massive planetesimal collision. Nomenclature α Piscis Austrini, or Alpha Piscis Austrini, is the system's Bayer designation. It also bears the Flamsteed designation of 24 Piscis Austrini. The classical astronomer Ptolemy included it in the constellation of Aquarius, along with the rest of Piscis Austrinus. In the 17th century, Johann Bayer firmly planted it in the primary position of Piscis Austrinus. Following Ptolemy, John Flamsteed in 1725 additionally denoted it 79 Aquarii. The current designation reflects modern consensus on Bayer's decision that the star belongs in Piscis Austrinus. Under the rules for naming objects in multiple-star systems, the three components – Fomalhaut, TW Piscis Austrini and LP 876-10 – are designated A, B and C, respectively. The star's traditional name derives from Fom al-Haut, scientific Arabic for "the mouth of the [Southern] Fish" (literally, "mouth of the whale"), a translation of how Ptolemy labeled it. Fam in Arabic means "mouth", al "the", and ḥūt "fish" or "whale". In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN's first bulletin of July 2016 included a table of the first two batches of names approved by the WGSN, which included the name "Fomalhaut" for this star. In July 2014, the International Astronomical Union (IAU) launched NameExoWorlds, a process for giving proper names to certain exoplanets. The process involved public nomination and voting for the new names. In December 2015, the IAU announced "Dagon" as the winning name for Fomalhaut b. The winning name was proposed by Todd Vaccaro and forwarded by the St. Cloud State University Planetarium of St. Cloud, Minnesota, United States of America, to the IAU for consideration. Dagon was a Semitic deity, often represented as half-man, half-fish. Fomalhaut A At a declination of −29.6°, Fomalhaut is located south of the celestial equator, and hence is best viewed from the Southern Hemisphere. However, its southerly declination is not as great as that of stars such as Acrux, Alpha Centauri and Canopus, meaning that, unlike them, Fomalhaut is visible from a large part of the Northern Hemisphere as well, being best seen in autumn. Its declination is greater than that of Sirius and similar to that of Antares. 
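For a star that culminates south of the zenith, the maximum altitude above the horizon for an observer at latitude φ is 90° − |φ − δ|, where δ is the star's declination. A quick worked check (our own arithmetic, not a figure from the text):

h_{\max} = 90^\circ - \left|\,40^\circ - (-29.6^\circ)\,\right| = 90^\circ - 69.6^\circ \approx 20.4^\circ,

which is consistent with the roughly 20° peak altitude quoted below for an observer at 40°N.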
At 40°N, Fomalhaut rises above the horizon for eight hours and reaches only 20° above the horizon, while Capella, which rises at approximately the same time, will stay above the horizon for twenty hours. Fomalhaut can be located in northern latitudes by the fact that the western (right-hand) side of the Square of Pegasus points to it. Continuing the line from Beta to Alpha Pegasi towards the southern horizon, Fomalhaut is about 45˚ south of Alpha Pegasi, with no bright stars in between. Properties Fomalhaut is a young star, for many years thought to be only 100 to 300 million years old, with a potential lifespan of a billion years. A 2012 study gave a slightly higher age of . The surface temperature of the star is around . Fomalhaut's mass is about 1.92 times that of the Sun, its luminosity is about 16.6 times greater, and its diameter is roughly 1.84 times as large. Fomalhaut is slightly metal-deficient compared to the Sun, which means it is composed of a smaller percentage of elements other than hydrogen and helium. The metallicity is typically determined by measuring the abundance of iron in the photosphere relative to the abundance of hydrogen. A 1997 spectroscopic study measured a value equal to 93% of the Sun's abundance of iron. A second 1997 study deduced a value of 78% by assuming Fomalhaut has the same metallicity as the neighboring star TW Piscis Austrini, which has since been argued to be a physical companion. In 2004, a stellar evolutionary model of Fomalhaut yielded a metallicity of 79%. Finally, in 2008, a spectroscopic measurement gave a significantly lower value of 46%. Fomalhaut has been claimed to be one of approximately 16 stars belonging to the Castor Moving Group. This is an association of stars which share a common motion through space and have been claimed to be physically associated. Other members of this group include Castor and Vega. The moving group has an estimated age of and originated from the same location. More recent work has found that purported members of the Castor Moving Group appear not only to have a wide range of ages, but also velocities too different for them to have been associated with one another in the distant past. Hence, "membership" in this dynamical group has no bearing on the age of the Fomalhaut system. Debris disks and suspected planets Fomalhaut is surrounded by several debris disks. The inner disk is a high-carbon small-grain (10–300 nm) ash disk, clustering at 0.1 AU from the star. Next is a disk of larger particles, with an inner edge 0.4–1 AU from the star. The innermost disk is unexplained as yet. The outermost disk is at a radial distance of , in a toroidal shape with a very sharp inner edge, all inclined 24 degrees from edge-on. The dust is distributed in a belt about 25 AU wide. The geometric center of the disk is offset by about from Fomalhaut. The disk is sometimes referred to as "Fomalhaut's Kuiper belt". Fomalhaut's dusty disk is believed to be protoplanetary, and emits considerable infrared radiation. Measurements of Fomalhaut's rotation indicate that the disk is located in the star's equatorial plane, as expected from theories of star and planet formation. Herschel Space Observatory images of Fomalhaut, analysed in 2012, reveal that a large amount of fluffy micrometer-sized dust is present in the outer dust belt. Because such dust is expected to be blown out of the system by stellar radiation pressure on short timescales, its presence indicates a constant replenishment by collisions of planetesimals. 
The fluffy morphology of the grains suggests a cometary origin. The collision rate is estimated to be approximately 2000 kilometre-sized comets per day. Observations of this outer dust ring by the Atacama Large Millimeter Array also suggested the possible existence of two planets in the system. If there are additional planets from 4 to 10 AU, they must be under ; if from 2.5 outward, then . On November 13, 2008, astronomers announced an extrasolar planet candidate, orbiting just inside the outer debris ring. This was the first extrasolar orbiting object candidate to be directly imaged in visible light, captured by the Hubble Space Telescope. The mass of the tentative planet, Fomalhaut b, was estimated to be less than three times the mass of Jupiter, and at least the mass of Neptune. However, M-band images taken from the MMT Observatory put strong limits on the existence of gas giants within 40 AU of the star, and Spitzer Space Telescope imaging suggested that the object Fomalhaut b was more likely to be a dust cloud. A later 2019 synthesis of new and existing direct observations of the object confirmed that it is expanding, losing brightness, does not have enough mass to detectably perturb the outer ring while crossing it, and is probably a dispersing cloud of debris from a massive planetesimal collision on a hyperbolic orbit destined to leave the Fomalhaut A system. Further 2022 observations with the James Webb Space Telescope in mid-infrared failed to resolve the object in the MIRI wideband filter wavelength range, reported by the same team to be consistent with the previous result. The same 2022 JWST imaging data discovered another apparent feature in the outer disk, dubbed the "Great Dust Cloud". However, another team's analysis, which included other existing data, preferred its interpretation as a coincident background object, not part of the outer ring. Another 2023 study detected 10 point sources around Fomalhaut; all but one of these are background objects, including the "Great Dust Cloud", but the nature of the last is unclear. It may be a background object, or a planetary companion to Fomalhaut. An outer hot dust disk lies at 0.21–0.62 AU or 0.88–1.08 AU from the star. Fomalhaut B (TW Piscis Austrini) Fomalhaut forms a binary star with the K4-type star TW Piscis Austrini (TW PsA), which lies away from Fomalhaut, and its space velocity agrees with that of Fomalhaut within , consistent with being a bound companion. A recent age estimate for TW PsA agrees very well with the isochronal age for Fomalhaut, further arguing for the two stars forming a physical binary. The designation TW Piscis Austrini is astronomical nomenclature for a variable star. Fomalhaut B is a flare star of the type known as a BY Draconis variable. It varies slightly in apparent magnitude, ranging from 6.44 to 6.49 over a 10.3 day period. While smaller than the Sun, it is relatively large for a flare star. Most flare stars are red M-type dwarfs. In 2019, a team of researchers analyzing the astrometry, radial velocity measurements, and images of Fomalhaut B suggested the existence of a planet orbiting the star with a mass of Jupiter masses, and a poorly defined orbital period with an estimate loosely centering around 25 years. Fomalhaut C (LP 876-10) LP 876-10 is also associated with the Fomalhaut system, making it a trinary star. 
In October 2013, Eric Mamajek and collaborators from the RECONS consortium announced that the previously known high-proper-motion star LP 876-10 had a distance, velocity, and color-magnitude position consistent with being another member of the Fomalhaut system. LP 876-10 was originally catalogued as a high-proper-motion star by Willem Luyten in his 1979 NLTT catalogue; however, a precise trigonometric parallax and radial velocity were only measured quite recently. LP 876-10 is a red dwarf of spectral type M4V, and is located even farther from Fomalhaut A than TW PsA—about 5.7° away from Fomalhaut A in the sky, in the neighbouring constellation Aquarius, whereas both Fomalhaut A and TW PsA are located in the constellation Piscis Austrinus. Its current separation from Fomalhaut A is about , and it is currently located away from TW PsA (Fomalhaut B). LP 876-10 is located well within the tidal radius of the Fomalhaut system, which is . Although LP 876-10 is itself catalogued as a binary star in the Washington Double Star Catalog (called "WSI 138"), there was no sign of a close-in stellar companion in the imaging, spectral, or astrometric data in the Mamajek et al. study. In December 2013, Kennedy et al. reported the discovery of a cold dusty debris disk associated with Fomalhaut C, using infrared images from the Herschel Space Observatory. Multiple-star systems hosting multiple debris disks are exceedingly rare. Etymology and cultural significance Fomalhaut has had various names ascribed to it through time, and has been recognized by many cultures of the northern hemisphere, including the Arabs, Persians, and Chinese. It marked the solstice in 2500 BC. It was also a marker for the worship of Demeter in Eleusis. It is considered to be one of the four "royal stars" of the Persians. The Latin names are "the mouth of the Southern Fish". A folk name among the early Arabs was Difdi' al Awwal, "the first frog" (the second frog is Beta Ceti). The Chinese name (Mandarin: Běiluòshīmén) means "North Gate of the Military Camp", because this star stands alone at the north gate of the Encampment mansion asterism (see: Chinese constellations); it was westernized into Pi Lo Sze Mun by R.H. Allen. To the Moporr Aboriginal people of South Australia, it is a male being called Buunjill. The Wardaman people of the Northern Territory called Fomalhaut Menggen—"white cockatoo". Fomalhaut-Earthwork B, in Mounds State Park near Anderson, Indiana, lines up with the rising of the star Fomalhaut in the fall months, according to the Indiana Department of Natural Resources. In 1980, astronomer Jack Robinson proposed that the rising azimuth of Fomalhaut was marked by cairn placements at both the Bighorn medicine wheel in Wyoming, USA, and the Moose Mountain medicine wheel in Saskatchewan, Canada. New Scientist magazine termed it the "Great Eye of Sauron", comparing its shape and debris ring to the aforementioned "eye" in the Peter Jackson Lord of the Rings films. USS Fomalhaut (AK-22) was a United States Navy amphibious cargo ship. See also Exoasteroid 2M1207 GJ 758 HR 8799 Direct imaging of extrasolar planets Lists of exoplanets List of star systems within 25–30 light-years Notes References External links Astrobites summary of Boley et al. 
2012, the ALMA observations of the Fomalhaut ring system "Eye of Sauron" debris ring Researchers find that bright nearby double star Fomalhaut is actually a triple (Astronomy magazine : October 8, 2013) Fiction set around Fomalhaut A-type main-sequence stars Triple star systems 3 M-type main-sequence stars K-type main-sequence stars BY Draconis variables Hypothetical planetary systems Circumstellar disks Castor Moving Group Piscis Austrinus Piscis Austrini, Alpha 8728 Durchmusterung objects Piscis Austrini, 24 0879 81 216956 113368 Arabic words and phrases
Fomalhaut
[ "Astronomy" ]
3,404
[ "Piscis Austrinus", "Constellations" ]
151,694
https://en.wikipedia.org/wiki/Tar%20%28computing%29
In computing, tar is a computer software utility for collecting many files into one archive file, often referred to as a tarball, for distribution or backup purposes. The name is derived from "tape archive", as it was originally developed to write data to sequential I/O devices with no file system of their own, such as devices that use magnetic tape. The archive data sets created by tar contain various file system parameters, such as name, timestamps, ownership, file-access permissions, and directory organization. POSIX abandoned tar in favor of pax, yet tar sees continued widespread use. History The command-line utility was first introduced in Version 7 Unix in January 1979, replacing the tp program (which in turn replaced "tap"). The file structure to store this information was standardized in POSIX.1-1988 and later POSIX.1-2001, and became a format supported by most modern file archiving systems. The tar command was abandoned in POSIX.1-2001 in favor of the pax command, which was to support the ustar file format; the tar command had been indicated for withdrawal in favor of the pax command at least since 1994. Today, Unix-like operating systems usually include tools to support tar files, as well as utilities commonly used to compress them, such as xz, gzip, and bzip2. The command has also been ported to the IBM i operating system. BSD-tar has been included in Microsoft Windows since the Windows 10 April 2018 Update, and there are otherwise multiple third-party tools available to read and write these formats on Windows. Rationale Many historic tape drives read and write variable-length data blocks, leaving significant wasted space on the tape between blocks (for the tape to physically start and stop moving). Some tape drives (and raw disks) support only fixed-length data blocks. Also, when writing to any medium such as a file system or network, it takes less time to write one large block than many small blocks. Therefore, the tar command writes data in records of many 512 B blocks. The user can specify a blocking factor, which is the number of blocks per record. The default is 20, producing 10 KiB records. File format There are multiple tar file formats, including historical and current ones. Two tar formats are codified in POSIX: ustar and pax. Not codified but still in current use is the GNU tar format. A tar archive consists of a series of file objects, hence the popular term tarball, referencing how a tarball collects objects of all kinds that stick to its surface. Each file object includes any file data, and is preceded by a 512-byte header record. The file data is written unaltered except that its length is rounded up to a multiple of 512 bytes. The original tar implementation did not care about the contents of the padding bytes, and left the buffer data unaltered, but most modern tar implementations fill the extra space with zeros. The end of an archive is marked by at least two consecutive zero-filled records. (The origin of tar's record size appears to be the 512-byte disk sectors used in the Version 7 Unix file system.) The final block of an archive is padded out to full length with zeros. Header The file header record contains metadata about a file. To ensure portability across different architectures with different byte orderings, the information in the header record is encoded in ASCII. Thus if all the files in an archive are ASCII text files, and have ASCII names, then the archive is essentially an ASCII text file (containing many NUL characters). 
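The blocking and padding arithmetic just described is simple enough to state in a few lines of code. The following Python fragment is an illustrative sketch only; the helper names are ours, not part of any tar implementation.

BLOCK = 512  # tar block size in bytes

def record_size(blocking_factor=20):
    # the default blocking factor of 20 gives 20 * 512 = 10240 bytes (10 KiB)
    return blocking_factor * BLOCK

def padded_length(file_size):
    # file data is rounded up to a whole number of 512-byte blocks
    return -(-file_size // BLOCK) * BLOCK  # ceiling division

assert record_size() == 10240
assert padded_length(1) == 512 and padded_length(1024) == 1024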
The fields defined by the original Unix tar format are listed in the table below. The link indicator/file type table includes some modern extensions. When a field is unused it is filled with NUL bytes. The header uses 257 bytes, then is padded with NUL bytes to make it fill a 512-byte record. There is no "magic number" in the header for file identification. Pre-POSIX.1-1988 (i.e. v7) tar header: The pre-POSIX.1-1988 link indicator field can have the following values: Some pre-POSIX.1-1988 tar implementations indicated a directory by having a trailing slash (/) in the name. Numeric values are encoded as octal numbers using ASCII digits, with leading zeroes. For historical reasons, a final NUL or space character should also be used. Thus although there are 12 bytes reserved for storing the file size, only 11 octal digits can be stored. This gives a maximum file size of 8 gigabytes for archived files. To overcome this limitation, in 2001 star introduced a base-256 coding that is indicated by setting the high-order bit of the leftmost byte of a numeric field. GNU-tar and BSD-tar followed this idea. Additionally, versions of tar from before the first POSIX standard from 1988 pad the values with spaces instead of zeroes. The checksum is calculated by taking the sum of the unsigned byte values of the header record with the eight checksum bytes taken to be ASCII spaces (decimal value 32). It is stored as a six-digit octal number with leading zeroes followed by a NUL and then a space. Various implementations do not adhere to this format. In addition, some historic tar implementations treated bytes as signed. Implementations typically calculate the checksum both ways, and treat it as good if either the signed or unsigned sum matches the included checksum. Unix filesystems support multiple links (names) for the same file. If several such files appear in a tar archive, only the first one is archived as a normal file; the rest are archived as hard links, with the "name of linked file" field set to the first one's name. On extraction, such hard links should be recreated in the file system. UStar format Most modern tar programs read and write archives in the UStar (Unix Standard TAR) format, introduced by the POSIX IEEE P1003.1 standard from 1988. It introduced additional header fields. Older tar programs will ignore the extra information (possibly extracting partially named files), while newer programs will test for the presence of the "ustar" string to determine if the new format is in use. The UStar format allows for longer file names and stores additional information about each file. The maximum filename size is 256, but it is split between a preceding "filename prefix" path and the filename itself, so the effective limit can be much less. The type flag field can have the following values: POSIX.1-1988 vendor-specific extensions using link flag values 'A'–'Z' partially have different meanings with different vendors and are thus seen as outdated; they were replaced by the POSIX.1-2001 extensions, which also include a vendor tag. Type '7' (contiguous file) is formally marked as reserved in the POSIX standard, but was meant to indicate files which ought to be contiguously allocated on disk. Few operating systems support creating such files explicitly, and hence most TAR programs do not support them, and will treat type 7 files as if they were type 0 (regular). 
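The checksum rule above can be expressed compactly in code. The sketch below is our own illustration for a raw 512-byte header, assuming the classic v7 layout in which the 8-byte checksum field starts at byte offset 148; the function names are hypothetical.

CHKSUM_OFF, CHKSUM_LEN = 148, 8  # checksum field position in the 512-byte header

def header_checksums(header: bytes):
    """Return (unsigned, signed) sums with the checksum field blanked to spaces."""
    blanked = header[:CHKSUM_OFF] + b" " * CHKSUM_LEN + header[CHKSUM_OFF + CHKSUM_LEN:]
    unsigned = sum(blanked)                                   # bytes as 0..255
    signed = sum(b - 256 if b > 127 else b for b in blanked)  # bytes as -128..127
    return unsigned, signed

def checksum_ok(header: bytes) -> bool:
    # stored as six octal digits, then a NUL and a space (per the rule above)
    stored = int(header[CHKSUM_OFF:CHKSUM_OFF + 6], 8)
    return stored in header_checksums(header)

Accepting either sum reflects the dual calculation described above, which copes with historic implementations that treated bytes as signed.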
An exception is older versions of GNU tar, when running on the MASSCOMP RTU (Real Time Unix) operating system, which supported an O_CTG flag to the open() function to request a contiguous file; however, that support was removed from GNU tar version 1.24 onwards. POSIX.1-2001/pax In 1997, Sun proposed a method for adding extensions to the tar format. This method was later accepted for the POSIX.1-2001 standard. This format is known as extended tar format or pax format. The new tar format allows users to add any type of vendor-tagged vendor-specific enhancements. The following tags are defined by the POSIX standard: atime, mtime: all timestamps of a file in arbitrary resolution (most implementations use nanosecond granularity) path: path names of unlimited length and character set coding linkpath: symlink target names of unlimited length and character set coding uname, gname: user and group names of unlimited length and character set coding size: files with unlimited size (the historic tar format allows 8 GB) uid, gid: user id and group id without size limitation (the historic tar format is limited to a maximum id of 2097151) a character set definition for path names and user/group names (UTF-8) In 2001, the Star program became the first tar to support the new format. In 2004, GNU tar supported the new format, though it does not yet write it as its default output from the tar program. The pax format is designed so that all implementations able to read the UStar format will be able to read the pax format as well. The only exceptions are files that make use of extended features, such as longer file names. For compatibility, these are encoded in the tar files as special x or g type entries, typically under a PaxHeaders directory. A pax-supporting implementation would make use of the information, while non-supporting ones like 7-Zip would process them as additional files. Features of the archival utilities Besides creating and extracting archives, the functionality of the various archival utilities varies. For example, implementations might automatically detect the format of compressed TAR archives for extraction so the user does not have to specify it, and let the user limit adding files to those modified after a specified date. Uses Command syntax tar [-options] <name of the tar archive> [files or directories to add to the archive] Basic options: -c, --create — create a new archive; -a, --auto-compress — additionally compress the archive with a compressor which will be automatically determined by the file name extension of the archive (if the archive's name ends with .gz then use gzip, if .xz then use xz, .zst for Zstandard, etc.); -r, --append — append files to the end of an archive; -x, --extract, --get — extract files from an archive; -f, --file — specify the archive's name; -t, --list — show a list of files and folders in the archive; -v, --verbose — show a list of processed files. Basic usage Create an archive file archive.tar from the file README.txt and the directory src: $ tar -cvf archive.tar README.txt src Extract the contents of archive.tar into the current directory: $ tar -xvf archive.tar Create an archive file from the file README.txt and the directory src and compress it with gzip: $ tar -cavf archive.tar.gz README.txt src Extract the contents of archive.tar.gz into the current directory: $ tar -xvf archive.tar.gz Tarpipe A tarpipe is the method of creating an archive on the standard output file of the tar utility and piping it to another tar process on its standard input, working in another directory, where it is unpacked. 
This process copies an entire source directory tree including all special files, for example: $ tar cf - srcdir | tar x -C destdir Software distribution The tar format continues to be used extensively for open-source software distribution. *NIX distributions use it in various source- and binary-package distribution mechanisms, with most software source code made available in compressed tar archives. Limitations The original tar format was created in the early days of Unix, and despite current widespread use, many of its design features are considered dated. Other formats have been created to address the shortcomings of tar. File names Due to the field size, the original TAR format was unable to store file paths and names in excess of 100 characters. To overcome this problem while maintaining readability by existing TAR utilities, GNU tar stores file paths and names in excess of 100 characters in @LongLink entries, which are seen as ordinary files by TAR utilities unaware of this feature. Similarly, the PAX format uses PaxHeaders entries. Attributes Many older tar implementations do not record or restore extended attributes (xattrs) or access-control lists (ACLs). In 2001, Star introduced support for ACLs and extended attributes, through its own tags for POSIX.1-2001 pax. bsdtar uses the star extensions to support ACLs. More recent versions of GNU tar support Linux extended attributes, reimplementing star extensions. A number of extensions are reviewed in the filetype manual for BSD tar, tar(5). Tarbomb A tarbomb, in hacker slang, is a tarball containing a large number of items whose contents are written to the current directory or some other existing directory when untarred, instead of to a directory created by the tarball specifically for the extracted outputs. It is at best an inconvenience to the user, who is obliged to identify and delete a number of files interspersed with the directory's other contents. Such behavior is considered bad etiquette on the part of the archive's creator. A related problem is the use of absolute paths or parent directory references when creating tar files. Files extracted from such archives will often be created in unusual locations outside the working directory and, like a tarbomb, have the potential to overwrite existing files. However, modern versions of FreeBSD and GNU tar do not create or extract absolute paths and parent-directory references by default, unless this is explicitly allowed (in GNU tar, with the -P or --absolute-names option). The bsdtar program, which is also available on many operating systems and is the default tar utility on Mac OS X v10.6, also does not follow parent-directory references or symbolic links. If a user has only a very old tar available, which does not feature those security measures, these problems can be mitigated by first examining a tar file using the command tar tf archive.tar, which lists the contents and allows the user to exclude problematic files afterwards. These commands do not extract any files, but display the names of all files in the archive. If any are problematic, the user can create a new empty directory and extract the archive into it—or avoid the tar file entirely. Most graphical tools can display the contents of the archive before extracting them. Vim can open tar archives and display their contents. GNU Emacs is also able to open a tar archive and display its contents in a dired buffer. 
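The pre-extraction inspection just described can be automated. The sketch below uses Python's standard-library tarfile module (see Key implementations below) to flag member names that would escape the working directory; the helper name is ours, and newer releases of the module additionally offer built-in extraction filters for the same purpose.

import tarfile

def suspicious_members(path):
    """List member names using absolute paths or parent-directory references."""
    bad = []
    with tarfile.open(path) as tar:  # compression is detected automatically
        for member in tar.getmembers():
            if member.name.startswith("/") or ".." in member.name.split("/"):
                bad.append(member.name)
    return bad

# Usage: refuse to extract when anything looks unsafe.
# if not suspicious_members("archive.tar"):
#     with tarfile.open("archive.tar") as tar:
#         tar.extractall("destdir")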
Random access The tar format was designed without a centralized index or table of contents for files and their properties, for streaming to tape backup devices. The archive must be read sequentially to list or extract files. For large tar archives, this causes a performance penalty, making tar archives unsuitable for situations that often require random access to individual files. With a well-formed tar file stored on a seekable medium (i.e. one that allows efficient random reads), a program can still relatively quickly (in linear time relative to file count) look for a file by skipping file reads according to the "size" field in the file headers. This is the basis for the -n (--seek) option in GNU tar. When a tar file is compressed whole, the compression format, being usually non-seekable, prevents this optimization from being done. A number of "indexed" compressors, which are aware of the tar format, can restore this feature for compressed files. To maintain seekability, tar files must also be concatenated properly, by removing the trailing zero block at the end of each file. Duplicates Another issue with the tar format is that it allows several (possibly different) files in an archive to have identical paths and filenames. When extracting such an archive, usually the later version of a file overwrites the earlier. This can create a non-explicit (unobvious) tarbomb, which technically does not contain files with absolute paths or referring to parent directories, but still causes overwriting of files outside the current directory (for example, the archive may contain two files with the same path and filename, the first of which is a symlink to some location outside the current directory, and the second of which is a regular file; on some tar implementations, extracting such an archive may cause writing to the location pointed to by the symlink). Key implementations Historically, many systems have implemented tar, and many general file archivers have at least partial support for tar (often using one of the implementations below). The history of tar is a story of incompatibilities, known as the "tar wars". Most tar implementations can also read and create cpio and pax (the latter actually is a tar format with POSIX-2001 extensions). Key implementations in order of origin: Solaris tar, based on the original Unix V7 tar and the default on the Solaris operating system GNU tar is the default on most Linux distributions. It is based on the public domain implementation pdtar which started in 1987. Recent versions can use various formats, including ustar, pax, GNU and v7 formats. FreeBSD tar (also BSD tar) has become the default tar on most Berkeley Software Distribution-based operating systems including Mac OS X. The core functionality is available as libarchive for inclusion in other applications. This implementation automatically detects the format of the file and can extract from tar, pax, cpio, zip, rar, ar, xar, rpm and ISO 9660 cdrom images. It also comes with a functionally equivalent cpio command-line interface. Schily tar, better known as star, is historically significant as some of its extensions were quite popular. First published in April 1997, its developer has stated that he began development in 1982. Python tarfile module supports multiple tar formats, including ustar, pax and gnu; it can read but not create V7 format and the SunOS tar extended format; pax is the default format for creation of archives. Available since 2003. Additionally, most pax and cpio implementations can read and create multiple types of tar files. 
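To make the seek-based listing described at the start of this section concrete, the following sketch walks a plain uncompressed archive by reading each 512-byte header, parsing the octal size field at bytes 124–135 of the classic header layout, and seeking past the padded data. It is illustrative only: it handles neither base-256 sizes nor pax extension headers.

import io

def list_names(f):
    """Return member names from an open, seekable, uncompressed tar stream."""
    names = []
    while True:
        header = f.read(512)
        if len(header) < 512 or header == b"\0" * 512:
            break                                    # end-of-archive marker
        name = header[:100].split(b"\0", 1)[0].decode("utf-8", "replace")
        size = int(header[124:136].split(b"\0")[0] or b"0", 8)
        names.append(name)
        f.seek(-(-size // 512) * 512, io.SEEK_CUR)   # skip the padded file data
    return names

with open("archive.tar", "rb") as f:
    print(list_names(f))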
Suffixes for compressed files tar archive files usually have the file suffix .tar (e.g. somefile.tar). A tar archive file contains uncompressed byte streams of the files which it contains. To achieve archive compression, a variety of compression programs are available, such as gzip, bzip2, xz, lzip, lzma, zstd, or compress, which compress the entire tar archive. Typically, the compressed form of the archive receives a filename by appending the format-specific compressor suffix to the archive file name. For example, a tar archive archive.tar, is named archive.tar.gz, when it is compressed by gzip. Popular tar programs like the BSD and GNU versions of tar support the command line options Z (compress), z (gzip), and j (bzip2) to compress or decompress the archive file upon creation or unpacking. Relatively recent additions include --lzma (LZMA), --lzop (lzop), --xz or J (xz), --lzip (lzip), and --zstd. The decompression of these formats is handled automatically if supported filename extensions are used, and compression is handled automatically using the same filename extensions if the option --auto-compress (short form -a) is passed to an applicable version of GNU tar. BSD tar detects an even wider range of compressors (lrzip, lz4), using not the filename but the data within. Unrecognized formats are to be manually compressed or decompressed by piping. MS-DOS's 8.3 filename limitations resulted in additional conventions for naming compressed tar archives. However, this practice has declined with FAT now offering long filenames. See also Comparison of file archivers Comparison of archive formats List of archive formats List of Unix commands References External links X/Open CAE Specification Commands and Utilities Issue 4, Version 2 (pdf), 1994, opengroup.org – indicates tar as to be withdrawn tar in The Single UNIX Specification, Version 2, 1997, opengroup.org – indicates applications should migrate to pax utility C.4 Utilities in The Open Group Base Specifications Issue 6, 2004 Edition, opengroup.org – indicates tar as removed – specifies the ustar and pax file formats – manual from GNU TAR - Windows CMD - SS64.com Archive formats Free backup software GNU Project software Unix archivers and compression-related utilities Plan 9 commands IBM i Qshell commands
Tar (computing)
[ "Technology" ]
4,310
[ "Windows commands", "IBM i Qshell commands", "Computing commands", "Plan 9 commands" ]
151,762
https://en.wikipedia.org/wiki/Zu%20Chongzhi
Zu Chongzhi (429–500), courtesy name Wenyuan, was a Chinese astronomer, inventor, mathematician, politician, and writer during the Liu Song and Southern Qi dynasties. He was most notable for calculating pi as between 3.1415926 and 3.1415927, a record in precision which would not be surpassed for nearly 900 years.

Life and works
Chongzhi's ancestry was from modern Baoding, Hebei. To flee from the ravages of war, Zu's grandfather Zu Chang moved to the Yangtze, as part of the massive population movement during the Eastern Jin. Zu Chang at one point held the position of Chief Minister for the Palace Buildings within the Liu Song and was in charge of government construction projects. Zu's father, Zu Shuozhi, also served the court and was greatly respected for his erudition. Zu was born in Jiankang. His family had historically been involved in astronomical research, and from childhood Zu was exposed to both astronomy and mathematics. When he was only a youth, his talent earned him much repute. When Emperor Xiaowu of Song heard of him, he was sent to the Hualin Xuesheng academy, and later the Imperial Nanjing University (Zongmingguan), to perform research. In 461, in Nanxu (today Zhenjiang, Jiangsu), he was engaged in work at the office of the local governor. In 464, Zu moved to Louxian (today Songjiang district, Shanghai); there he compiled the Daming calendar and calculated π. Zu Chongzhi, along with his son Zu Gengzhi, wrote a mathematical text entitled Zhui Shu ("Methods for Interpolation"). It is said that the treatise contained formulas for the volume of a sphere, cubic equations and an accurate value of pi. This book has been lost since the Song dynasty.
His mathematical achievements included:
the Daming calendar, introduced by him in 465.
distinguishing the sidereal year and the tropical year. He measured 45 years and 11 months per degree between those two; today we know the difference is 70.7 years per degree.
calculating one year as 365.24281481 days, which is very close to 365.24219878 days as we know today.
calculating the number of overlaps between sun and moon as 27.21223, which is very close to 27.21222 as we know today; using this number he successfully predicted an eclipse four times during 23 years (from 436 to 459).
calculating the Jupiter year as about 11.858 Earth years, which is very close to 11.862 as we know of today.
deriving two approximations of π (3.1415926535897932...), which held as the most accurate approximations of π for over nine hundred years. His best approximation was between 3.1415926 and 3.1415927, with 355/113 (milü, close ratio) and 22/7 (yuelü, approximate ratio) being the other notable approximations. He obtained the result by approximating a circle with a 24,576 (= 2¹³ × 3) sided polygon; a numerical sketch of this side-doubling approach follows below. This was an impressive feat for the time, especially considering that the counting rods he used for recording intermediate results were merely a pile of wooden sticks laid out in certain patterns. The Japanese mathematician Yoshio Mikami pointed out, "22/7 was nothing more than the value obtained several hundred years earlier by the Greek mathematician Archimedes; however, milü = 355/113 could not be found in any Greek, Indian or Arabian manuscripts, not until 1585, when the Dutch mathematician Adriaan Anthoniszoon obtained this fraction; the Chinese possessed this most extraordinary fraction over a whole millennium earlier than Europe". Hence Mikami strongly urged that the fraction be named after Zu Chongzhi as Zu's fraction.
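Zu's actual procedure is lost with the Zhui Shu, so the following Python sketch only illustrates the general inscribed/circumscribed side-doubling idea behind such polygon computations; the starting hexagon, the use of the decimal module, and the twelve doublings (6 × 2¹² = 24,576 sides) are choices made for this illustration, not details from the article.

```python
from decimal import Decimal, getcontext

getcontext().prec = 30        # plenty of guard digits for 12 doublings

s = Decimal(1)                # side of a regular hexagon inscribed in a unit circle
n = 6
for _ in range(12):           # double the side count up to 6 * 2**12 = 24,576
    s = (2 - (4 - s * s).sqrt()).sqrt()   # side-doubling recurrence
    n *= 2

lower = n * s / 2                           # half-perimeter, inscribed polygon
upper = lower / (1 - s * s / 4).sqrt()      # half-perimeter, circumscribed polygon
print(n, lower, upper)                      # brackets pi between the two values
print(Decimal(355) / Decimal(113))          # the milü, 3.14159292...
```

Running this shows the two half-perimeters pinning π between roughly 3.1415926 and 3.1415927, matching the bounds attributed to Zu.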
In Chinese literature, this fraction is known as "Zu's ratio". Zu's ratio is a best rational approximation to π, and is the closest rational approximation to π among all fractions with denominator less than 16600.
finding the volume of a sphere as πD³/6, where D is the diameter (equivalent to (4/3)πr³, where r is the radius).

Astronomy
Zu was an accomplished astronomer who calculated time values with unprecedented precision. His methods of interpolation and his use of integration were far ahead of his time. Even the results of the astronomer Yi Xing (who was beginning to utilize foreign knowledge) were not comparable. To the "Northern barbarians", who regulated their daily lives by Zu's Da Ming Li, the Sung dynasty calendar seemed backward. It is said that his methods of calculation were so advanced that scholars of the Sung dynasty, and the Indian-influenced astronomers of the Tang dynasty, found them confusing.

Mathematics
The majority of Zu's great mathematical works are recorded in his lost text, the Zhui Shu. Scholars continue to debate the complexity of his methods, since the Chinese had traditionally developed mathematics along algebraic and equational lines. Logically, scholars assume that the Zhui Shu contains methods for cubic equations. His works on the accurate value of pi describe the lengthy calculations involved. Zu used Liu Hui's π algorithm, described earlier by Liu Hui, to inscribe a 12,288-gon. Zu's value of pi is precise to six decimal places, and for almost nine hundred years thereafter no subsequent mathematician computed a value this precise. Zu also worked on deducing the formula for the volume of a sphere with his son Zu Gengzhi. In their calculation, Zu used the concept that two solids with equal cross-sectional areas at equal heights must also have equal volumes, in order to find the volume of a Steinmetz solid. He then multiplied the volume of the Steinmetz solid by π/4 and so found the volume of a sphere to be πd³/6 (where d is the diameter of the sphere).

Inventions and innovations
Hammer mills
In 488, Zu Chongzhi was responsible for erecting water-powered trip hammer mills, which were inspected by Emperor Wu of Southern Qi during the early 490s.

Paddle boats
Zu is also credited with inventing Chinese paddle boats, or Qianli chuan, in the late 5th century AD during the Southern Qi dynasty. The boats made sailing a more reliable form of transportation. Building on the shipbuilding technology of the day, numerous paddle-wheel ships were constructed during the Tang era, as the boats were able to cruise at faster speeds than the existing vessels of the time, as well as cover hundreds of kilometers of distance without the aid of wind.

South pointing chariot
The south-pointing chariot device was first invented by the Chinese mechanical engineer Ma Jun (c. 200–265 AD). It was a wheeled vehicle that incorporated an early use of differential gears to operate a fixed figurine that would constantly point south, hence enabling one to accurately measure directional bearings. This effect was achieved not by magnetics (as in a compass), but through intricate mechanics, the same design that allows equal amounts of torque to be applied to wheels rotating at different speeds in a modern automobile. After the Three Kingdoms period, the device fell out of use temporarily.
However, it was Zu Chongzhi who successfully re-invented it in 478, as described in the texts of the Book of Song and the Book of Qi, with a passage from the latter below: When Emperor Wu of Liu Song subdued Guanzhong he obtained the south-pointing carriage of Yao Xing, but it was only the shell with no machinery inside. Whenever it moved it had to have a man inside to turn (the figure). In the Sheng-Ming reign period, Gao Di commissioned Zi Zu Chongzhi to reconstruct it according to the ancient rules. He accordingly made new machinery of bronze, which would turn round about without a hitch and indicate the direction with uniformity. Since Ma Jun's time such a thing had not been. (Book of Qi, 52.905)

Literature
Zu's paradoxographical work Accounts of Strange Things survives.

Named after him
The approximation π ≈ 355/113 is known as Zu Chongzhi's ratio.
The lunar crater Tsu Chung-Chi.
1888 Zu Chong-Zhi is the name of asteroid 1964 VO1.
The ZUC stream cipher is a new encryption algorithm.

Notes
References
Needham, Joseph (1986). Science and Civilization in China: Volume 4, Part 2. Cambridge University Press.
Further reading
External links
Encyclopædia Britannica's description of Zu Chongzhi
Zu Chongzhi at Chinaculture.org
Zu Chongzhi at the University of Maine
429 births 500 deaths 5th-century Chinese mathematicians 5th-century Chinese astronomers Ancient Chinese mathematicians Chinese inventors Liu Song government officials Liu Song writers Pi-related people Politicians from Nanjing Scientists from Nanjing Southern Qi government officials Writers from Nanjing Chinese geometers
Zu Chongzhi
[ "Mathematics" ]
1,793
[ "Pi-related people", "Pi" ]
151,783
https://en.wikipedia.org/wiki/Stirling%27s%20approximation
In mathematics, Stirling's approximation (or Stirling's formula) is an asymptotic approximation for factorials. It is a good approximation, leading to accurate results even for small values of n. It is named after James Stirling, though a related but less precise result was first stated by Abraham de Moivre.
One way of stating the approximation involves the logarithm of the factorial:
ln(n!) = n ln n − n + O(ln n),
where the big O notation means that, for all sufficiently large values of n, the difference between ln(n!) and n ln n − n will be at most proportional to the logarithm of n. In computer science applications such as the worst-case lower bound for comparison sorting, it is convenient to instead use the binary logarithm, giving the equivalent form
log₂(n!) = n log₂ n − n log₂ e + O(log₂ n).
The error term in either base can be expressed more precisely as (1/2) log(2πn) + O(1/n), corresponding to an approximate formula for the factorial itself,
n! ~ √(2πn) (n/e)ⁿ.
Here the sign ~ means that the two quantities are asymptotic, that is, their ratio tends to 1 as n tends to infinity.

Derivation
Roughly speaking, the simplest version of Stirling's formula can be quickly obtained by approximating the sum
ln(n!) = ln 1 + ln 2 + ⋯ + ln n
with an integral:
ln(n!) ≈ ∫₁ⁿ ln x dx = n ln n − n + 1.
The full formula, together with precise estimates of its error, can be derived as follows. Instead of approximating n!, one considers its natural logarithm, as this is a slowly varying function. The sum above, minus (1/2) ln n, is the approximation by the trapezoid rule of the integral ∫₁ⁿ ln x dx, and the error in this approximation is given by the Euler–Maclaurin formula, whose correction terms involve the Bernoulli numbers Bₖ and whose remainder term is R(m, n). Take limits to find that the quantity ln(n!) − n ln n + n − (1/2) ln n converges; denote its limit as y. Because the remainder in the Euler–Maclaurin formula satisfies R(m, n) = O(1/n) (where big-O notation is used), combining the equations above yields the approximation formula in its logarithmic form:
ln(n!) = n ln n − n + (1/2) ln n + y + O(1/n).
Taking the exponential of both sides, one obtains a formula involving the unknown quantity e^y:
n! = e^y √n (n/e)ⁿ (1 + O(1/n)).
The quantity e^y can be found by taking the limit on both sides as n tends to infinity and using Wallis' product, which shows that e^y = √(2π). Therefore, one obtains Stirling's formula:
n! = √(2πn) (n/e)ⁿ (1 + O(1/n)).

Alternative derivations
An alternative formula for n! using the gamma function is
n! = ∫₀^∞ xⁿ e^(−x) dx
(as can be seen by repeated integration by parts). Rewriting and changing variables x = ny, one obtains
n! = n^(n+1) ∫₀^∞ e^(n(ln y − y)) dy.
Applying Laplace's method one has
∫₀^∞ e^(n(ln y − y)) dy ~ √(2π/n) e^(−n),
which recovers Stirling's formula:
n! ~ n^(n+1) √(2π/n) e^(−n) = √(2πn) (n/e)ⁿ.

Higher orders
In fact, further corrections can also be obtained using Laplace's method. From the previous result, we know that Γ(x) ~ √(2π/x) (x/e)^x, so we "peel off" this dominant term, then perform two changes of variables, to obtain
x^(−x) e^x Γ(x) = ∫ℝ e^(x(1 + t − e^t)) dt.
To verify this, substitute u = x e^t in Γ(x) = ∫₀^∞ u^(x−1) e^(−u) du. Now the function 1 + t − e^t is unimodal, with maximum value zero at t = 0. Locally around zero, it looks like −t²/2, which is why we are able to perform Laplace's method. In order to extend Laplace's method to higher orders, we perform another change of variables by τ²/2 = e^t − t − 1. This equation cannot be solved in closed form, but it can be solved by series expansion, which gives t as a power series in τ. Now plug back into the equation to obtain a Gaussian integral in τ; notice that we do not need to invert the full series for t(τ), since the odd terms cancel out in the integral. Higher orders can be achieved by computing more terms of the series, which can be obtained programmatically. Thus we get Stirling's formula to two orders:
n! = √(2πn) (n/e)ⁿ (1 + 1/(12n) + O(1/n²)).

Complex-analytic version
A complex-analysis version of this method is to consider 1/n! as a Taylor coefficient of the exponential function e^z = Σ zⁿ/n!, computed by Cauchy's integral formula as
1/n! = (1/(2πi)) ∮ over |z| = r of e^z / z^(n+1) dz.
This line integral can then be approximated using the saddle-point method with an appropriate choice of contour radius r.
The dominant portion of the integral near the saddle point is then approximated by a real integral and Laplace's method, while the remaining portion of the integral can be bounded above to give an error term.

Using the Central Limit Theorem and the Poisson distribution
An alternative version uses the fact that the Poisson distribution converges to a normal distribution by the Central Limit Theorem. Since the Poisson distribution with parameter λ converges to a normal distribution with mean λ and variance λ, their density functions will be approximately the same:
λᵏ e^(−λ) / k! ≈ (1/√(2πλ)) e^(−(k−λ)²/(2λ)).
Evaluating this expression at the mean, k = λ, at which the approximation is particularly accurate, simplifies this expression to:
λ^λ e^(−λ) / λ! ≈ 1/√(2πλ).
Taking logs then results in:
λ ln λ − λ − ln(λ!) ≈ −(1/2) ln(2πλ),
which can easily be rearranged to give:
ln(λ!) ≈ λ ln λ − λ + (1/2) ln(2πλ).
Evaluating at λ = n gives the usual, more precise form of Stirling's approximation.

Speed of convergence and error estimates
Stirling's formula is in fact the first approximation to the following series (now called the Stirling series):
n! ~ √(2πn) (n/e)ⁿ (1 + 1/(12n) + 1/(288n²) − 139/(51840n³) − 571/(2488320n⁴) + ⋯).
An explicit formula for the coefficients in this series was given by G. Nemes. Further terms are listed in the On-Line Encyclopedia of Integer Sequences. The first graph in this section shows the relative error vs. n, for 1 through all 5 terms listed above. Bender and Orszag (p. 218) give an asymptotic formula for the coefficients, which shows that they grow superexponentially, and that by the ratio test the radius of convergence is zero. As n → ∞, the error in the truncated series is asymptotically equal to the first omitted term. This is an example of an asymptotic expansion. It is not a convergent series; for any particular value of n there are only so many terms of the series that improve accuracy, after which accuracy worsens. This is shown in the next graph, which shows the relative error versus the number of terms in the series, for larger numbers of terms. More precisely, let S(n, t) be the Stirling series to t terms evaluated at n. The graphs show |ln(S(n, t)/n!)|, which, when small, is essentially the relative error.
Writing Stirling's series in the form
ln(n!) ~ n ln n − n + (1/2) ln(2πn) + 1/(12n) − 1/(360n³) + 1/(1260n⁵) − ⋯,
it is known that the error in truncating the series is always of the opposite sign and at most the same magnitude as the first omitted term. Other bounds, due to Robbins, valid for all positive integers n, are
√(2πn) (n/e)ⁿ e^(1/(12n+1)) < n! < √(2πn) (n/e)ⁿ e^(1/(12n)).
This upper bound corresponds to stopping the above series for ln(n!) after the 1/(12n) term. The lower bound is weaker than that obtained by stopping the series after the 1/(360n³) term. A looser version of this bound is that √(2π) ≤ n! eⁿ / n^(n+1/2) ≤ e for all n ≥ 1.

Stirling's formula for the gamma function
For all positive integers, n! = Γ(n + 1), where Γ denotes the gamma function. However, the gamma function, unlike the factorial, is more broadly defined for all complex numbers other than non-positive integers; nevertheless, Stirling's formula may still be applied. If Re(z) > 0, then ln Γ(z) equals z ln z − z + (1/2) ln(2π/z) plus an integral remainder term. Repeated integration by parts gives the asymptotic expansion
ln Γ(z) ~ z ln z − z + (1/2) ln(2π/z) + Σ from n = 1 to N−1 of B₂ₙ / (2n(2n−1) z^(2n−1)),
where B₂ₙ is the 2n-th Bernoulli number (note that the limit of the sum as N → ∞ is not convergent, so this formula is just an asymptotic expansion). The formula is valid for z large enough in absolute value, when |arg(z)| < π − ε, where ε is positive, with an error term comparable to the first omitted term. The corresponding approximation may now be written:
Γ(z) ~ √(2π/z) (z/e)^z,
where the expansion is identical to that of Stirling's series above for n!, except that n is replaced with z − 1. A further application of this asymptotic expansion is for complex argument z with constant real part. See for example the Stirling formula applied to Γ(1/4 + it/2) in the Riemann–Siegel theta function, on the straight line with real part 1/4.

Error bounds
For any positive integer N, one writes the expansion truncated after N terms together with an explicit remainder term; explicit bounds on that remainder are known. For further information and other error bounds, see the cited papers.
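As a quick numerical illustration of these error estimates (a sketch, not part of the original article), the following Python snippet compares ln(n!) computed via math.lgamma with the logarithmic Stirling series truncated after the 1/(12n) term. By the sign-and-magnitude rule above, the truncation error should be negative and bounded in size by the first omitted term, 1/(360n³).

```python
import math

def stirling_ln_factorial(n):
    """ln(n!) from the Stirling series, truncated after the 1/(12n) term."""
    return n * math.log(n) - n + 0.5 * math.log(2 * math.pi * n) + 1 / (12 * n)

for n in (1, 2, 5, 10, 100):
    exact = math.lgamma(n + 1)          # lgamma(n+1) == ln(n!)
    err = exact - stirling_ln_factorial(n)
    bound = 1 / (360 * n**3)            # magnitude of the first omitted term
    print(n, err, bound, abs(err) <= bound)
```

Even at n = 1 the truncated series is accurate to about 0.002 in ln(n!), and the printed comparison confirms the first-omitted-term bound at every tested n.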
A convergent version of Stirling's formula
Thomas Bayes showed, in a letter to John Canton published by the Royal Society in 1763, that Stirling's formula did not give a convergent series. Obtaining a convergent version of Stirling's formula entails evaluating Binet's formula, the integral representation of the correction term ln Γ(x) − (x − 1/2) ln x + x − (1/2) ln(2π). One way to do this is by means of a convergent series of inverted rising factorials: expanding in rising factorials x(x + 1)⋯(x + n − 1) produces a series whose coefficients involve the Stirling numbers of the first kind, and from this one obtains a version of Stirling's series that converges for x with positive real part. Stirling's formula may also be given in convergent form by adding such a convergent correction series to the leading term.

Versions suitable for calculators
The approximation
Γ(z) ≈ √(2π/z) ((z/e) √(z sinh(1/z) + 1/(810z⁶)))^z,
and its equivalent form
2 ln Γ(z) ≈ ln(2π) − ln z + z (2 ln z + ln(z sinh(1/z) + 1/(810z⁶)) − 2),
can be obtained by rearranging Stirling's extended formula and observing a coincidence between the resultant power series and the Taylor series expansion of the hyperbolic sine function. This approximation is good to more than 8 decimal digits for z with a real part greater than 8. Robert H. Windschitl suggested it in 2002 for computing the gamma function with fair accuracy on calculators with limited program or register memory.
Gergő Nemes proposed in 2007 an approximation which gives the same number of exact digits as the Windschitl approximation but is much simpler:
Γ(z) ≈ √(2π/z) ((1/e)(z + 1/(12z − 1/(10z))))^z,
or equivalently,
ln Γ(z) ≈ (1/2)(ln(2π) − ln z) + z (ln(z + 1/(12z − 1/(10z))) − 1).
An alternative approximation for the gamma function stated by Srinivasa Ramanujan in Ramanujan's lost notebook is
Γ(1 + x) ≈ √π (x/e)^x (8x³ + 4x² + x + 1/30)^(1/6)
for x ≥ 0. The equivalent approximation for ln n! has an asymptotic error of 1/(1400n³) and is given by
ln n! ≈ n ln n − n + (1/6) ln(8n³ + 4n² + n + 1/30) + (1/2) ln π.
The approximation may be made precise by giving paired upper and lower bounds; one such inequality brackets the constant term 1/30 between explicit upper and lower values.

History
The formula was first discovered by Abraham de Moivre in the form
n! ~ (constant) · n^(n+1/2) e^(−n).
De Moivre gave an approximate rational-number expression for the natural logarithm of the constant. Stirling's contribution consisted of showing that the constant is precisely √(2π).

See also
Lanczos approximation
Spouge's approximation
References
Further reading
External links
Peter Luschny, Approximation formulas for the factorial function n!
Approximations Asymptotic analysis Analytic number theory Gamma and related functions Theorems in analysis
Stirling's approximation
[ "Mathematics" ]
1,852
[ "Theorems in mathematical analysis", "Analytic number theory", "Mathematical analysis", "Mathematical theorems", "Mathematical relations", "Asymptotic analysis", "Mathematical problems", "Approximations", "Number theory" ]
151,828
https://en.wikipedia.org/wiki/Side%20effect
In medicine, a side effect is an effect of the use of a medicinal drug or other treatment, usually adverse but sometimes beneficial, that is unintended. Herbal and traditional medicines also have side effects. A drug or procedure usually used for a specific effect may be used specifically because of a beneficial side-effect; this is termed "off-label use" until such use is approved. For instance, X-rays have long been used as an imaging technique; the discovery of their oncolytic capability led to their use in radiotherapy for ablation of malignant tumours.

Frequency of side effects
The World Health Organization and other health organisations characterise the probability of experiencing side effects as:
Very common, ≥ 1⁄10
Common (frequent), 1⁄10 to 1⁄100
Uncommon (infrequent), 1⁄100 to 1⁄1000
Rare, 1⁄1000 to 1⁄10000
Very rare, < 1⁄10000
The European Commission recommends that the list should contain only effects where there is "at least a reasonable possibility" that they are caused by the drug and the frequency "should represent crude incidence rates (and not differences or relative risks calculated against placebo or other comparator)". The frequency describes how often symptoms appear after taking the drug, without assuming that they were necessarily caused by the drug. Both healthcare providers and lay people misinterpret the frequency of side effects as describing the increase in frequency caused by the drug.

Examples of therapeutic side effects
Most drugs and procedures have a multitude of reported adverse side effects; the information leaflets provided with virtually all drugs list possible side effects. Beneficial side effects are less common; some examples, in many cases of side-effects that ultimately gained regulatory approval as intended effects, are:
Bevacizumab (Avastin), used to slow the growth of blood vessels, has been used against dry age-related macular degeneration, as well as macular edema from diseases such as diabetic retinopathy and central retinal vein occlusion.
Buprenorphine has been shown experimentally (1982–1995) to be effective against severe, refractory depression.
Bupropion (Wellbutrin), an anti-depressant, also helps smoking cessation; this indication was later approved, and the name of the drug as sold for smoking cessation is Zyban. Bupropion branded as Zyban may be sold at a higher price than as Wellbutrin, so some physicians prescribe Wellbutrin for smoking cessation.
Carbamazepine is an approved treatment for bipolar disorder and epileptic seizures, but it has side effects useful in treating attention-deficit hyperactivity disorder (ADHD), schizophrenia, phantom limb syndrome, paroxysmal extreme pain disorder, neuromyotonia, and post-traumatic stress disorder.
Dexamethasone and betamethasone in premature labor, to enhance pulmonary maturation of the fetus.
Doxepin has been used to treat angioedema and severe allergic reactions due to its strong antihistamine properties.
Gabapentin, approved for treatment of seizures and postherpetic neuralgia in adults, has side effects which are useful in treating bipolar disorder, essential tremor, hot flashes, migraine prophylaxis, neuropathic pain syndromes, phantom limb syndrome, and restless leg syndrome.
Hydroxyzine, an antihistamine, is also used as an anxiolytic.
Magnesium sulfate in obstetrics for premature labor and preeclampsia.
Methotrexate (MTX), approved for the treatment of choriocarcinoma, is frequently used for the medical treatment of an unruptured ectopic pregnancy.
The SSRI medication sertraline is approved as an antidepressant but delays sexual climax in men, and can be used to treat premature ejaculation.
Sildenafil was originally intended for pulmonary hypertension; subsequently, it was discovered that it also produces erections, for which it was later approved.
Terazosin, an α1-adrenergic antagonist approved to treat benign prostatic hyperplasia (enlarged prostate) and hypertension, is (one of several drugs) used off-label to treat drug-induced diaphoresis and hyperhidrosis (excessive sweating).
Thalidomide, a drug sold over the counter from 1957 to 1961 as a tranquiliser and treatment for morning sickness of pregnancy, became notorious for causing tens of thousands of babies to be born without limbs and with other conditions, or stillborn. The drug, though still subject to other adverse side-effects, is now used to treat cancers and skin disorders, and is on the World Health Organization's List of Essential Medicines.

See also
Adverse drug reaction (ADR), a harmful unintended result caused by taking medication
Combined drug intoxication
Conservative management
Drug-drug interaction (DDI), an alteration of the action of a drug caused by the administration of other drugs
Paradoxical reaction, an effect of a substance opposite to what would usually be expected
Pharmacogenetics, the use of genetic information to determine which type of drugs will work best for a patient
Unintended consequences

References
External links
Clinical pharmacology
Side effect
[ "Chemistry" ]
1,100
[ "Pharmacology", "Clinical pharmacology" ]
151,864
https://en.wikipedia.org/wiki/Divergence%20theorem
In vector calculus, the divergence theorem, also known as Gauss's theorem or Ostrogradsky's theorem, is a theorem relating the flux of a vector field through a closed surface to the divergence of the field in the volume enclosed. More precisely, the divergence theorem states that the surface integral of a vector field over a closed surface, which is called the "flux" through the surface, is equal to the volume integral of the divergence over the region enclosed by the surface. Intuitively, it states that "the sum of all sources of the field in a region (with sinks regarded as negative sources) gives the net flux out of the region". The divergence theorem is an important result for the mathematics of physics and engineering, particularly in electrostatics and fluid dynamics. In these fields, it is usually applied in three dimensions. However, it generalizes to any number of dimensions. In one dimension, it is equivalent to the fundamental theorem of calculus. In two dimensions, it is equivalent to Green's theorem. Explanation using liquid flow Vector fields are often illustrated using the example of the velocity field of a fluid, such as a gas or liquid. A moving liquid has a velocity—a speed and a direction—at each point, which can be represented by a vector, so that the velocity of the liquid at any moment forms a vector field. Consider an imaginary closed surface S inside a body of liquid, enclosing a volume of liquid. The flux of liquid out of the volume at any time is equal to the volume rate of fluid crossing this surface, i.e., the surface integral of the velocity over the surface. Since liquids are incompressible, the amount of liquid inside a closed volume is constant; if there are no sources or sinks inside the volume then the flux of liquid out of S is zero. If the liquid is moving, it may flow into the volume at some points on the surface S and out of the volume at other points, but the amounts flowing in and out at any moment are equal, so the net flux of liquid out of the volume is zero. However if a source of liquid is inside the closed surface, such as a pipe through which liquid is introduced, the additional liquid will exert pressure on the surrounding liquid, causing an outward flow in all directions. This will cause a net outward flow through the surface S. The flux outward through S equals the volume rate of flow of fluid into S from the pipe. Similarly if there is a sink or drain inside S, such as a pipe which drains the liquid off, the external pressure of the liquid will cause a velocity throughout the liquid directed inward toward the location of the drain. The volume rate of flow of liquid inward through the surface S equals the rate of liquid removed by the sink. If there are multiple sources and sinks of liquid inside S, the flux through the surface can be calculated by adding up the volume rate of liquid added by the sources and subtracting the rate of liquid drained off by the sinks. The volume rate of flow of liquid through a source or sink (with the flow through a sink given a negative sign) is equal to the divergence of the velocity field at the pipe mouth, so adding up (integrating) the divergence of the liquid throughout the volume enclosed by S equals the volume rate of flux through S. This is the divergence theorem. The divergence theorem is employed in any conservation law which states that the total volume of all sinks and sources, that is the volume integral of the divergence, is equal to the net flow across the volume's boundary. 
Mathematical statement
Suppose V is a subset of Rⁿ (in the case of n = 3, V represents a volume in three-dimensional space) which is compact and has a piecewise smooth boundary S (also indicated with ∂V = S). If F is a continuously differentiable vector field defined on a neighborhood of V, then:
∫∫∫ over V of (∇ · F) dV = ∯ over S of (F · n̂) dS.
The left side is a volume integral over the volume V, and the right side is the surface integral over the boundary of the volume V. The closed, measurable set ∂V is oriented by outward-pointing normals, and n̂ is the outward-pointing unit normal at almost each point on the boundary ∂V. (dS may be used as a shorthand for n̂ dS.) In terms of the intuitive description above, the left-hand side of the equation represents the total of the sources in the volume V, and the right-hand side represents the total flow across the boundary S.

Informal derivation
The divergence theorem follows from the fact that if a volume V is partitioned into separate parts, the flux out of the original volume is equal to the algebraic sum of the flux out of each component volume. This is true despite the fact that the new subvolumes have surfaces that were not part of the original volume's surface, because these surfaces are just partitions between two of the subvolumes and the flux through them just passes from one volume to the other and so cancels out when the flux out of the subvolumes is summed. See the diagram. A closed, bounded volume V is divided into two volumes V₁ and V₂ by a surface S₃ (green). The flux out of each component region is equal to the sum of the flux through its two faces, so the sum of the flux out of the two parts is
Φ(V₁) + Φ(V₂) = Φ₁ + Φ₃₁ + Φ₂ + Φ₃₂,
where Φ₁ and Φ₂ are the flux out of surfaces S₁ and S₂, Φ₃₁ is the flux through S₃ out of volume 1, and Φ₃₂ is the flux through S₃ out of volume 2. The point is that surface S₃ is part of the surface of both volumes. The "outward" direction of the normal vector is opposite for each volume, so the flux out of one through S₃ is equal to the negative of the flux out of the other, so these two fluxes cancel in the sum:
Φ(V₁) + Φ(V₂) = Φ₁ + Φ₂.
Since the union of surfaces S₁ and S₂ is S,
Φ(V₁) + Φ(V₂) = Φ(V).
This principle applies to a volume divided into any number of parts, as shown in the diagram. Since the integral over each internal partition (green surfaces) appears with opposite signs in the flux of the two adjacent volumes, they cancel out, and the only contribution to the flux is the integral over the external surfaces (grey). Since the external surfaces of all the component volumes equal the original surface,
Φ(V) = Σ over i of Φ(Vᵢ).
The flux out of each volume is the surface integral of the vector field F(x) over its surface:
Φ(Vᵢ) = ∯ over S(Vᵢ) of F · n̂ dS.
The goal is to divide the original volume into infinitely many infinitesimal volumes. As the volume is divided into smaller and smaller parts, the surface integral on the right, the flux out of each subvolume, approaches zero because the surface area S(Vᵢ) approaches zero. However, from the definition of divergence, the ratio of flux to volume, Φ(Vᵢ)/|Vᵢ|, the part in parentheses below, does not in general vanish but approaches the divergence div F as the volume approaches zero:
Φ(V) = Σ over i of (Φ(Vᵢ)/|Vᵢ|) |Vᵢ|.
As long as the vector field F(x) has continuous derivatives, the sum above holds even in the limit when the volume is divided into infinitely small increments. As |Vᵢ| approaches zero volume, it becomes the infinitesimal dV, the part in parentheses becomes the divergence, and the sum becomes a volume integral over V:
∯ over S(V) of F · n̂ dS = ∫∫∫ over V of div F dV.
Since this derivation is coordinate free, it shows that the divergence does not depend on the coordinates used.
Proofs
For bounded open subsets of Euclidean space
We are going to prove the following: Proof of Theorem.
For compact Riemannian manifolds with boundary We are going to prove the following: Proof of Theorem. We use the Einstein summation convention. By using a partition of unity, we may assume that and have compact support in a coordinate patch . First consider the case where the patch is disjoint from . Then is identified with an open subset of and integration by parts produces no boundary terms: In the last equality we used the Voss-Weyl coordinate formula for the divergence, although the preceding identity could be used to define as the formal adjoint of . Now suppose intersects . Then is identified with an open set in . We zero extend and to and perform integration by parts to obtain where . By a variant of the straightening theorem for vector fields, we may choose so that is the inward unit normal at . In this case is the volume element on and the above formula reads This completes the proof. Corollaries By replacing in the divergence theorem with specific forms, other useful identities can be derived (cf. vector identities). With for a scalar function and a vector field , A special case of this is , in which case the theorem is the basis for Green's identities. With for two vector fields and , where denotes a cross product, With for two vector fields and , where denotes a dot product, With for a scalar function and vector field c: The last term on the right vanishes for constant or any divergence free (solenoidal) vector field, e.g. Incompressible flows without sources or sinks such as phase change or chemical reactions etc. In particular, taking to be constant: With for vector field and constant vector c: By reordering the triple product on the right hand side and taking out the constant vector of the integral, Hence, Example Suppose we wish to evaluate where is the unit sphere defined by and is the vector field The direct computation of this integral is quite difficult, but we can simplify the derivation of the result using the divergence theorem, because the divergence theorem says that the integral is equal to: where is the unit ball: Since the function is positive in one hemisphere of and negative in the other, in an equal and opposite way, its total integral over is zero. The same is true for : Therefore, because the unit ball has volume . Applications Differential and integral forms of physical laws As a result of the divergence theorem, a host of physical laws can be written in both a differential form (where one quantity is the divergence of another) and an integral form (where the flux of one quantity through a closed surface is equal to another quantity). Three examples are Gauss's law (in electrostatics), Gauss's law for magnetism, and Gauss's law for gravity. Continuity equations Continuity equations offer more examples of laws with both differential and integral forms, related to each other by the divergence theorem. In fluid dynamics, electromagnetism, quantum mechanics, relativity theory, and a number of other fields, there are continuity equations that describe the conservation of mass, momentum, energy, probability, or other quantities. Generically, these equations state that the divergence of the flow of the conserved quantity is equal to the distribution of sources or sinks of that quantity. The divergence theorem states that any such continuity equation can be written in a differential form (in terms of a divergence) and an integral form (in terms of a flux). 
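As a concrete numerical check of the theorem (a sketch: the vector field here, F = (x³, y³, z³), is an illustrative choice of our own, not the field from the example above), both sides over the unit ball equal 12π/5, since ∇ · F = 3(x² + y² + z²). The volume side is estimated by Monte Carlo, the surface side by a midpoint rule in spherical coordinates; the sample counts are arbitrary.

```python
import math, random

def div_F(x, y, z):
    # divergence of the illustrative field F = (x**3, y**3, z**3)
    return 3.0 * (x*x + y*y + z*z)

# Volume side: Monte Carlo estimate of the integral of div F over the unit ball.
random.seed(0)
samples, acc = 400_000, 0.0
for _ in range(samples):
    x, y, z = (random.uniform(-1, 1) for _ in range(3))
    if x*x + y*y + z*z <= 1.0:
        acc += div_F(x, y, z)
volume_integral = acc / samples * 8.0       # enclosing cube has volume 8

# Surface side: flux of F through the unit sphere; the outward unit normal
# is (x, y, z) itself, so F . n = x**4 + y**4 + z**4 on the sphere.
n_t, n_p, flux = 400, 800, 0.0
for i in range(n_t):
    th = math.pi * (i + 0.5) / n_t
    st, ct = math.sin(th), math.cos(th)
    for j in range(n_p):
        ph = 2 * math.pi * (j + 0.5) / n_p
        x, y, z = st * math.cos(ph), st * math.sin(ph), ct
        flux += (x**4 + y**4 + z**4) * st   # area element sin(th) dth dph
flux *= (math.pi / n_t) * (2 * math.pi / n_p)

print(volume_integral, flux, 12 * math.pi / 5)   # all approximately 7.5398
```

The two numerically computed sides agree with each other and with the exact value 12π/5, which is the content of the theorem for this field.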
Inverse-square laws Any inverse-square law can instead be written in a Gauss's law-type form (with a differential and integral form, as described above). Two examples are Gauss's law (in electrostatics), which follows from the inverse-square Coulomb's law, and Gauss's law for gravity, which follows from the inverse-square Newton's law of universal gravitation. The derivation of the Gauss's law-type equation from the inverse-square formulation or vice versa is exactly the same in both cases; see either of those articles for details. History Joseph-Louis Lagrange introduced the notion of surface integrals in 1760 and again in more general terms in 1811, in the second edition of his Mécanique Analytique. Lagrange employed surface integrals in his work on fluid mechanics. He discovered the divergence theorem in 1762. Carl Friedrich Gauss was also using surface integrals while working on the gravitational attraction of an elliptical spheroid in 1813, when he proved special cases of the divergence theorem. He proved additional special cases in 1833 and 1839. But it was Mikhail Ostrogradsky, who gave the first proof of the general theorem, in 1826, as part of his investigation of heat flow. Special cases were proven by George Green in 1828 in An Essay on the Application of Mathematical Analysis to the Theories of Electricity and Magnetism, Siméon Denis Poisson in 1824 in a paper on elasticity, and Frédéric Sarrus in 1828 in his work on floating bodies. Worked examples Example 1 To verify the planar variant of the divergence theorem for a region : and the vector field: The boundary of is the unit circle, , that can be represented parametrically by: such that where units is the length arc from the point to the point on . Then a vector equation of is At a point on : Therefore, Because , we can evaluate and because . Thus Example 2 Let's say we wanted to evaluate the flux of the following vector field defined by bounded by the following inequalities: By the divergence theorem, We now need to determine the divergence of . If is a three-dimensional vector field, then the divergence of is given by . Thus, we can set up the following flux integral as follows: Now that we have set up the integral, we can evaluate it. Generalizations Multiple dimensions One can use the generalised Stokes' theorem to equate the -dimensional volume integral of the divergence of a vector field over a region to the -dimensional surface integral of over the boundary of : This equation is also known as the divergence theorem. When , this is equivalent to Green's theorem. When , it reduces to the fundamental theorem of calculus, part 2. Tensor fields Writing the theorem in Einstein notation: suggestively, replacing the vector field with a rank- tensor field , this can be generalized to: where on each side, tensor contraction occurs for at least one index. This form of the theorem is still in 3d, each index takes values 1, 2, and 3. It can be generalized further still to higher (or lower) dimensions (for example to 4d spacetime in general relativity). See also Kelvin–Stokes theorem Generalized Stokes theorem Differential form References External links Differential Operators and the Divergence Theorem at MathPages The Divergence (Gauss) Theorem by Nick Bykov, Wolfram Demonstrations Project. – This article was originally based on the GFDL article from PlanetMath at https://web.archive.org/web/20021029094728/http://planetmath.org/encyclopedia/Divergence.html Theorems in calculus
Divergence theorem
[ "Mathematics" ]
2,914
[ "Theorems in mathematical analysis", "Theorems in calculus", "Calculus" ]
151,925
https://en.wikipedia.org/wiki/Del
Del, or nabla, is an operator used in mathematics (particularly in vector calculus) as a vector differential operator, usually represented by the nabla symbol ∇. When applied to a function defined on a one-dimensional domain, it denotes the standard derivative of the function as defined in calculus. When applied to a field (a function defined on a multi-dimensional domain), it may denote any one of three operations depending on the way it is applied: the gradient or (locally) steepest slope of a scalar field (or sometimes of a vector field, as in the Navier–Stokes equations); the divergence of a vector field; or the curl (rotation) of a vector field. Del is a very convenient mathematical notation for those three operations (gradient, divergence, and curl) that makes many equations easier to write and remember. The del symbol (or nabla) can be formally defined as a vector operator whose components are the corresponding partial derivative operators. As a vector operator, it can act on scalar and vector fields in three different ways, giving rise to three different differential operations: first, it can act on scalar fields by a formal scalar multiplication—to give a vector field called the gradient; second, it can act on vector fields by a formal dot product—to give a scalar field called the divergence; and lastly, it can act on vector fields by a formal cross product—to give a vector field called the curl. These formal products do not necessarily commute with other operators or products. These three uses, detailed below, are summarized as:
Gradient: grad f = ∇f
Divergence: div v = ∇ · v
Curl: curl v = ∇ × v

Definition
In the Cartesian coordinate system with coordinates (x₁, …, xₙ) and standard basis {e₁, …, eₙ}, del is a vector operator whose components are the partial derivative operators ∂/∂x₁, …, ∂/∂xₙ; that is,
∇ = Σ over i of eᵢ ∂/∂xᵢ = (∂/∂x₁, …, ∂/∂xₙ),
where the expression in parentheses is a row vector. In the three-dimensional Cartesian coordinate system with coordinates (x, y, z) and standard basis or unit vectors of axes {i, j, k}, del is written as
∇ = i ∂/∂x + j ∂/∂y + k ∂/∂z.
As a vector operator, del naturally acts on scalar fields via scalar multiplication, and naturally acts on vector fields via dot products and cross products. More specifically, for any scalar field f and any vector field v = vₓ i + v_y j + v_z k, if one defines the action of each component operator in the obvious way, then using the above definition of ∇ one may write
∇f = i ∂f/∂x + j ∂f/∂y + k ∂f/∂z,
and
∇ · v = ∂vₓ/∂x + ∂v_y/∂y + ∂v_z/∂z,
and
∇ × v = (∂v_z/∂y − ∂v_y/∂z) i + (∂vₓ/∂z − ∂v_z/∂x) j + (∂v_y/∂x − ∂vₓ/∂y) k.
Example: for f(x, y, z) = x + y + z, one has ∇f = i + j + k. Del can also be expressed in other coordinate systems; see for example del in cylindrical and spherical coordinates.

Notational uses
Del is used as a shorthand form to simplify many long mathematical expressions. It is most commonly used to simplify expressions for the gradient, divergence, curl, directional derivative, and Laplacian.

Gradient
The vector derivative of a scalar field f is called the gradient, and it can be represented as:
grad f = ∇f.
It always points in the direction of greatest increase of f, and it has a magnitude equal to the maximum rate of increase at the point—just like a standard derivative. In particular, if a hill is defined as a height function h(x, y) over a plane, the gradient at a given location will be a vector in the xy-plane (visualizable as an arrow on a map) pointing along the steepest direction. The magnitude of the gradient is the value of this steepest slope.
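To make the "steepest slope" reading concrete, here is a small finite-difference sketch in Python; the hill function h is an arbitrary illustrative choice, not one from the article.

```python
import numpy as np

def h(x, y):
    # a hypothetical "hill": height exp(-(x^2 + y^2)) over the xy-plane
    return np.exp(-(x**2 + y**2))

def grad_h(x, y, eps=1e-6):
    # central finite differences approximating (dh/dx, dh/dy)
    gx = (h(x + eps, y) - h(x - eps, y)) / (2 * eps)
    gy = (h(x, y + eps) - h(x, y - eps)) / (2 * eps)
    return np.array([gx, gy])

g = grad_h(1.0, 0.5)
print(g)                    # points toward the origin, i.e. uphill
print(np.linalg.norm(g))    # the slope in that steepest direction
```

For this hill, the gradient at (1.0, 0.5) points back toward the summit at the origin, and its norm is the maximum rate of increase there, matching the description above.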
In particular, this notation is powerful because the gradient product rule looks very similar to the 1d-derivative case:
∇(fg) = f ∇g + g ∇f.
However, the rules for dot products do not turn out to be simple, as illustrated by:
∇(u · v) = (u · ∇)v + (v · ∇)u + u × (∇ × v) + v × (∇ × u).

Divergence
The divergence of a vector field v = vₓ i + v_y j + v_z k is a scalar field that can be represented as:
div v = ∂vₓ/∂x + ∂v_y/∂y + ∂v_z/∂z = ∇ · v.
The divergence is roughly a measure of a vector field's increase in the direction it points; but more accurately, it is a measure of that field's tendency to converge toward or diverge from a point. The power of the del notation is shown by the following product rule:
∇ · (f v) = (∇f) · v + f (∇ · v).
The formula for the vector product is slightly less intuitive, because this product is not commutative:
∇ · (u × v) = v · (∇ × u) − u · (∇ × v).

Curl
The curl of a vector field v is a vector function that can be represented as:
curl v = (∂v_z/∂y − ∂v_y/∂z) i + (∂vₓ/∂z − ∂v_z/∂x) j + (∂v_y/∂x − ∂vₓ/∂y) k = ∇ × v.
The curl at a point is proportional to the on-axis torque that a tiny pinwheel would be subjected to if it were centered at that point. The vector product operation can be visualized as a pseudo-determinant with rows (i, j, k), (∂/∂x, ∂/∂y, ∂/∂z), and (vₓ, v_y, v_z). Again the power of the notation is shown by the product rule:
∇ × (f v) = (∇f) × v + f (∇ × v).
The rule for the vector product does not turn out to be simple:
∇ × (u × v) = u (∇ · v) − v (∇ · u) + (v · ∇)u − (u · ∇)v.

Directional derivative
The directional derivative of a scalar field f(x, y, z) in the direction a = aₓ i + a_y j + a_z k is defined as:
(a · ∇)f = aₓ ∂f/∂x + a_y ∂f/∂y + a_z ∂f/∂z,
which is equal to a · (∇f) when the gradient exists. This gives the rate of change of a field f in the direction of a, scaled by the magnitude of a. In operator notation, the element in parentheses can be considered a single coherent unit; fluid dynamics uses this convention extensively, terming it the convective derivative—the "moving" derivative of the fluid. Note that (a · ∇) is an operator that takes a scalar to a scalar. It can be extended to operate on a vector, by separately operating on each of its components.

Laplacian
The Laplace operator is a scalar operator that can be applied to either vector or scalar fields; for Cartesian coordinate systems it is defined as:
∇² = ∂²/∂x² + ∂²/∂y² + ∂²/∂z² = ∇ · ∇,
and the definition for more general coordinate systems is given in vector Laplacian. The Laplacian is ubiquitous throughout modern mathematical physics, appearing for example in Laplace's equation, Poisson's equation, the heat equation, the wave equation, and the Schrödinger equation.

Hessian matrix
While ∇² usually represents the Laplacian, sometimes ∇² also represents the Hessian matrix. The former refers to the inner product of ∇ with itself, while the latter refers to the dyadic product of ∇. So whether ∇² refers to a Laplacian or a Hessian matrix depends on the context.

Tensor derivative
Del can also be applied to a vector field with the result being a tensor. The tensor derivative of a vector field v (in three dimensions) is a 9-term second-rank tensor – that is, a 3×3 matrix – but can be denoted simply as ∇ ⊗ v, where ⊗ represents the dyadic product. This quantity is equivalent to the transpose of the Jacobian matrix of the vector field with respect to space. The divergence of the vector field can then be expressed as the trace of this matrix. For a small displacement δr, the change in the vector field is given by:
δv = (∇ ⊗ v)ᵀ · δr.

Product rules
For vector calculus:
∇(fg) = f ∇g + g ∇f
∇(u · v) = u × (∇ × v) + v × (∇ × u) + (u · ∇)v + (v · ∇)u
∇ · (f v) = f (∇ · v) + v · (∇f)
∇ · (u × v) = v · (∇ × u) − u · (∇ × v)
∇ × (f v) = (∇f) × v + f (∇ × v)
∇ × (u × v) = u (∇ · v) − v (∇ · u) + (v · ∇)u − (u · ∇)v
For matrix calculus (for which u · v can be written uᵀ v), analogous rules hold. Another relation of interest (see e.g. Euler equations) is the following, where u ⊗ v is the outer product tensor:
∇ · (u ⊗ v) = (∇ · u)v + (u · ∇)v.

Second derivatives
When del operates on a scalar or vector, either a scalar or vector is returned. Because of the diversity of vector products (scalar, dot, cross), one application of del already gives rise to three major derivatives: the gradient (scalar product), divergence (dot product), and curl (cross product).
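The three operations can also be checked symbolically; the following sketch uses SymPy's vector module with arbitrary illustrative fields of our own choosing (the last two lines anticipate the second-derivative identities discussed next).

```python
from sympy.vector import CoordSys3D, gradient, divergence, curl

N = CoordSys3D('N')                           # Cartesian coordinates x, y, z
f = N.x * N.y * N.z                           # an arbitrary scalar field
v = N.x*N.y*N.i + N.y*N.z*N.j + N.z*N.x*N.k   # an arbitrary vector field

print(gradient(f))              # del f, a vector field
print(divergence(v))            # del . v, a scalar field
print(curl(v))                  # del x v, a vector field
print(curl(gradient(f)))        # always the zero vector
print(divergence(curl(v)))      # always zero
```

SymPy simplifies the last two expressions to zero for any sufficiently smooth choice of f and v, which is exactly the pair of vanishing second derivatives described below.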
Applying these three sorts of derivatives again to each other gives five possible second derivatives, for a scalar field f or a vector field v; the use of the scalar Laplacian and vector Laplacian gives two more:
∇ · (∇f), ∇ × (∇f), ∇(∇ · v), ∇ · (∇ × v), ∇ × (∇ × v), ∇²f, ∇²v.
These are of interest principally because they are not always unique or independent of each other. As long as the functions are well-behaved (smooth in most cases), two of them are always zero:
∇ × (∇f) = 0 and ∇ · (∇ × v) = 0.
Two of them are always equal:
∇ · (∇f) = ∇²f.
The 3 remaining vector derivatives are related by the equation:
∇ × (∇ × v) = ∇(∇ · v) − ∇²v.
And one of them can even be expressed with the tensor product, if the functions are well-behaved.

Precautions
Most of the above vector properties (except for those that rely explicitly on del's differential properties—for example, the product rule) rely only on symbol rearrangement, and must necessarily hold if the del symbol is replaced by any other vector. This is part of the value to be gained in notationally representing this operator as a vector. Though one can often replace del with a vector and obtain a vector identity, making those identities mnemonic, the reverse is not necessarily reliable, because del does not commute in general. A counterexample demonstrating that the divergence (∇ · v) and the advection operator (v · ∇) are not commutative: applied to a scalar field f, (∇ · v)f and (v · ∇)f differ in general. A second counterexample relies on del's differential properties rather than on symbol rearrangement. Central to these distinctions is the fact that del is not simply a vector; it is a vector operator. Whereas a vector is an object with both a magnitude and direction, del has neither a magnitude nor a direction until it operates on a function. For that reason, identities involving del must be derived with care, using both vector identities and differentiation identities such as the product rule.

See also
Del in cylindrical and spherical coordinates
Notation for differentiation
Vector calculus identities
Maxwell's equations
Navier–Stokes equations
Table of mathematical symbols
Quabla operator
References
Willard Gibbs & Edwin Bidwell Wilson (1901) Vector Analysis, Yale University Press, 1960: Dover Publications.
External links
Vector calculus Mathematical notation Differential operators
Del
[ "Mathematics" ]
1,781
[ "Mathematical analysis", "Differential operators", "nan" ]
151,960
https://en.wikipedia.org/wiki/University%20of%20Agricultural%20Sciences%2C%20Bengaluru
University of Agricultural Sciences, Bangalore (UAS Bangalore) is located in Bengaluru, India. It was established in 1964 as UAS Bangalore by a legislative act. Origin The British government in India, shaken by several famines in India, set up a commission to improve the state of agriculture to reduce the impact of famines. This led to the Famine Commission of 1880 and in 1889 a commission was set up with Voelcker to examine agriculture in India. The report led the rulers of Mysore kingdom (The Wodeyars) to establish research units in the field of agriculture and donated about of land to set up an Experimental Agricultural Station at Hebbal, and appointed German-Canadian chemist Adolf Lehmann in 1900 who began research on soil fertility at a laboratory that now houses the Directorate of Agriculture. About 30 acres of land was then acquired at Hebbal for experimental fields. Later in 1906, Leslie Coleman, a Canadian Entomologist and Mycologist succeeded Lehmann and served for 25 years. What began on a land was soon extended to about . The increasing reputation of this experimental station as a training center led to the foundation of the Mysore Agricultural College at Hebbal in 1946 affiliated to the Mysore University. This was soon followed by the Agricultural College at Dharwad in 1947 which was then affiliated to Karnataka University. In 1958, veterinary science as a discipline was started with the establishment of the Veterinary College at Hebbal also affiliated to Mysore University. Formation In 1948 a University Education Commission had been started under S. Radhakrishnan with members that included Zakir Hussain and the American educationist J.J. Tigert, and Arthur Ernest Morgan. They examined the Land-Grant colleges of the United States as a potential model for rural and agricultural education. This recommendation was taken up in 1961 with the formation of an Indo-American team to develop the universities and their curriculum. This resulted in a first Joint Indo-American team in 1954 that studied American universities leading to US-AID supported collaboration between American and Indian universities. A second team was established in 1959 that made recommendations to make agricultural universities autonomous and to have veterinary, home science, and agricultural education on a single campus with integrated teaching, extension and research. This was followed by a committee headed by Ralph W. Cummings of the Rockefeller Foundation, along with Ephraim Hixon of the USAID, L. Sahai (Animal Husbandry Commissioner) and K.C. Naik as convener. The then Mysore State Government through its Act No. 22 passed in 1963 provided for the creation of the University of Agricultural Sciences. The university came into existence on 21 August 1964 and was meant to serve the agricultural education needs of the state of Karnataka with a campus that included the old experimental stations at Hebbal established by Lehmann and Coleman with an additional campus at GKVK added in 1969. K.C. Naik served as the first Vice Chancellor. The UAS was inaugurated on 21 August 1964 by Vice President of India Zakir Husain in the presence of Chester Bowles, United States Ambassador to India and S. Nijalingappa, Chief Minister of Karnataka. On 12 July 1969, Prime Minister Indira Gandhi inaugurated GKVK campus with its buildings designed by the architect Achyut Kanvinde who was influenced by Walter Gropius. The Ford Foundation made a grant of $331000 in 1966 to develop graduate research in entomology at the university. 
The university initially included the agricultural colleges at Hebbal and Dharwad, the Veterinary College at Hebbal, the fisheries college at Mangalore, and 35 research stations located in different parts of the state, following agroclimatic zonation and a focus on specific crops, along with 45 ICAR projects which had been with the State Department of Agriculture, Horticulture, Animal Husbandry and Fisheries.

Later years and growth
Later on, the Marine Product Processing Training Centre (MPPTC) at Mangalore and the Krishi Vigyan Kendra at Hanumanamatti, Dharwad, were also transferred to the university. The university established the Fisheries College at Mangalore in 1969 to provide degree-level training, and the Agricultural Engineering Institute at Raichur in the same year to offer a three-year diploma course in agricultural engineering. The Home Science College was started at the Dharwad campus in 1974 to impart education on rural-based home science, besides the establishment of a College of Basic Sciences and Humanities and a College of Post Graduate Studies at Hebbal. The phenomenal growth of the university and the differences in agroclimate across the state led to the bifurcation of the university into two agricultural universities. An amendment to the University of Agricultural Sciences Act in 1986 saw the birth of the second university for agriculture in the state. The University of Agricultural Sciences, Bangalore was entrusted with territorial jurisdiction over 15 southern districts of Karnataka, comprising nearly fifty percent of the total area of the state, while the University of Agricultural Sciences, Dharwad was given jurisdiction over the remaining area in the northern districts of the state. In 2005, with the need to provide better autonomy to veterinary education and research in the state, the veterinary and animal sciences faculty was bifurcated from both Universities of Agricultural Sciences (Bangalore and Dharwad) and placed under a single university, the Karnataka Veterinary, Animal and Fisheries Sciences University, with its headquarters in Bidar, a northern district of Karnataka, through the passing of the Karnataka Veterinary, Animal and Fisheries Sciences University Bill, 2004 in the Legislative Assembly on 10 February 2004.

Constituent colleges
The university has constituent colleges at several places in Karnataka:
College of Agriculture, GKVK, Bengaluru
College of Agriculture, Mandya
College of Agriculture, Hassan
College of Sericulture, Chintamani
College of Agriculture, Chamaraja Nagara
College of Agricultural Engineering, GKVK, Bengaluru

Ranking and notability
In 2012, the university was recognised as the best agricultural university in India by the Indian Council of Agricultural Research, for which it was conferred the Sardar Patel Outstanding ICAR Institution Award for excellence in teaching, research and extension. In 2021, the university was recognised as the best agricultural university in South India and the 3rd best state university in India. The NIRF (National Institutional Ranking Framework) ranked it 11th among agriculture institutes in India in 2024.
Notable people
Alumni
Sonny Ramaswamy, former director of the USDA's National Institute of Food and Agriculture
Kalidas Shetty, Professor at North Dakota State University, Fargo
Ananda Nanjundaswamy, Richard and Janice Vetter Endowed Associate Professor at South Dakota State University, Brookings
See also
University of Agricultural Sciences, Dharwad
University of Agricultural and Horticultural Sciences, Shimoga
University of Agricultural Sciences, Raichur
University of Horticultural Sciences, Bagalkot
Karnataka Veterinary, Animal and Fisheries Sciences University
References
External links
UAS, Bengaluru collected news and commentary at The Times of India
Agricultural universities and colleges in Karnataka University of Agricultural Sciences 1964 establishments in Mysore State Educational institutions established in 1964 Biodiversity Heritage Sites of India
University of Agricultural Sciences, Bengaluru
[ "Biology" ]
1,420
[ "Biodiversity Heritage Sites of India", "Biodiversity" ]
151,965
https://en.wikipedia.org/wiki/Vernalization
Vernalization is the induction of a plant's flowering process by exposure to the prolonged cold of winter, or by an artificial equivalent. After vernalization, plants have acquired the ability to flower, but they may require additional seasonal cues or weeks of growth before they will actually do so. The term is sometimes used to refer to the need of herbal (non-woody) plants for a period of cold dormancy in order to produce new shoots and leaves, but this usage is discouraged.
Many plants grown in temperate climates require vernalization and must experience a period of low winter temperature to initiate or accelerate the flowering process. This ensures that reproductive development and seed production occur in spring and summer, rather than in autumn. The needed cold is often expressed in chill hours. Typical vernalization temperatures are between 1 and 7 degrees Celsius (34 and 45 degrees Fahrenheit). For many perennial plants, such as fruit tree species, a period of cold is needed first to induce dormancy and then later, after the requisite period, to re-emerge from that dormancy prior to flowering. Many monocarpic winter annuals and biennials, including some ecotypes of Arabidopsis thaliana and winter cereals such as wheat, must go through a prolonged period of cold before flowering occurs.

History of vernalization research
In the history of agriculture, farmers observed a traditional distinction between "winter cereals", whose seeds require chilling (to trigger their subsequent emergence and growth), and "spring cereals", whose seeds can be sown in spring, germinate, and then flower soon thereafter. Scientists in the early 19th century had discussed how some plants needed cold temperatures to flower. In 1857, the American agriculturist John Hancock Klippart, Secretary of the Ohio Board of Agriculture, reported the importance and effect of winter temperature on the germination of wheat. One of the most significant works was by the German plant physiologist Gustav Gassner, who made a detailed discussion in his 1918 paper. Gassner was the first to systematically differentiate the specific requirements of winter plants from those of summer plants, and also to note that the early swollen germinating seeds of winter cereals are sensitive to cold. In 1928, the Soviet agronomist Trofim Lysenko published his works on the effects of cold on cereal seeds, and coined the term "яровизация" (yarovizatsiya : "jarovization") to describe a chilling process he used to make the seeds of winter cereals behave like spring cereals (from яровой : yarovoy, Tatar root ярый : yaryiy, meaning ardent or fiery, associated with the god of spring). Lysenko himself translated the term into "vernalization" (from the Latin vernum, meaning spring). After Lysenko, the term was used to explain the ability of some plants to flower after a period of chilling, due to physiological changes and external factors. The formal definition was given in 1960 by the French botanist P. Chouard, as "the acquisition or acceleration of the ability to flower by a chilling treatment."
Lysenko's 1928 paper on vernalization and plant physiology drew wide attention due to its practical consequences for Russian agriculture. Severe cold and lack of winter snow had destroyed many early winter wheat seedlings. By treating wheat seeds with moisture as well as cold, Lysenko induced them to bear a crop when planted in spring. Later, however, according to Richard Amasino, Lysenko inaccurately asserted that the vernalized state could be inherited, i.e.
the offspring of a vernalized plant would behave as if they themselves had also been vernalized and would not require vernalization in order to flower quickly. Opposing this assessment and supporting Lysenko's claim, Xiuju Li and Yongsheng Liu have detailed experimental evidence from the USSR, Hungary, Bulgaria and China that shows the conversion between spring wheat and winter wheat, positing that "it is not unreasonable to postulate epigenetic mechanisms that could plausibly result in the conversion of spring to winter wheat or vice versa." Early research on vernalization focused on plant physiology; the increasing availability of molecular-biological techniques has made it possible to unravel its underlying mechanisms. For example, a lengthening daylight period (longer days) as well as cold temperatures are required for winter wheat plants to go from the vegetative to the reproductive state; the three interacting genes are called VRN1, VRN2, and FT (VRN3). In Arabidopsis thaliana Arabidopsis thaliana ("thale cress") is a much-studied model for vernalization. Some ecotypes (varieties), called "winter annuals", have delayed flowering without vernalization; others ("summer annuals") do not. The genes that underlie this difference in plant physiology have been intensively studied. The reproductive phase change of A. thaliana occurs by a sequence of two related events: first, the bolting transition (the flower stalk elongates), then the floral transition (the first flower appears). Bolting is a robust predictor of flower formation, and hence a good indicator for vernalization research. In winter annual Arabidopsis, vernalization of the meristem appears to confer competence to respond to floral inductive signals. A vernalized meristem retains competence for as long as 300 days in the absence of an inductive signal. At the molecular level, flowering is repressed by the protein Flowering Locus C (FLC), which binds to and represses genes that promote flowering, thus blocking flowering. Winter annual ecotypes of Arabidopsis have an active copy of the gene FRIGIDA (FRI), which promotes FLC expression and thus the repression of flowering. Prolonged exposure to cold (vernalization) induces expression of VERNALIZATION INSENSITIVE3, which interacts with the VERNALIZATION2 (VRN2) polycomb-like complex to reduce FLC expression through chromatin remodeling. Levels of VRN2 protein increase during long-term cold exposure as a result of inhibition of VRN2 turnover via its N-degron. Histone deacetylation at lysines 9 and 14, followed by methylation at lysines 9 and 27, is associated with the vernalization response. The epigenetic silencing of FLC by chromatin remodeling is also thought to involve the cold-induced expression of the long non-coding FLC transcripts COOLAIR (antisense) and COLDAIR. Vernalization is registered by the plant through the stable silencing of individual FLC loci. The removal of silent chromatin marks at FLC during embryogenesis prevents the inheritance of the vernalized state. Since vernalization also occurs in flc mutants (lacking FLC), vernalization must also activate a non-FLC pathway. A day-length mechanism is also important. The vernalization response works in concert with the photoperiodic genes CO, FT, PHYA, and CRY2 to induce flowering.
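To make the qualitative logic above concrete, the following is a purely illustrative toy model — not drawn from the literature, with an invented cold threshold, duration, and function name — of how prolonged, but not brief, cold could flip an FLC-like repressor into a silenced state:

```python
# Toy model of vernalization: an FLC-like repressor is silenced only by
# sufficiently prolonged cold. The threshold (7 degrees C) and required
# duration (30 days) are invented for illustration, not measured values.

REQUIRED_COLD_DAYS = 30  # hypothetical length of cold needed to silence FLC

def flowering_competent(daily_temps_c, cold_threshold_c=7.0):
    """Return True if the plant has been 'vernalized' in this toy model.

    The repressor is silenced once enough consecutive cold days have
    accumulated; afterwards the silenced state persists, mimicking
    epigenetic memory at the FLC locus. (The Devernalization section
    below notes that real plants can have this memory reversed by heat.)
    """
    consecutive_cold = 0
    silenced = False
    for temp in daily_temps_c:
        if silenced:
            continue  # stable memory in this simplified model
        if temp <= cold_threshold_c:
            consecutive_cold += 1
            if consecutive_cold >= REQUIRED_COLD_DAYS:
                silenced = True
        else:
            consecutive_cold = 0  # a warm spell resets partial progress
    return silenced

# A brief cold snap is not enough; a full winter is.
print(flowering_competent([4] * 10 + [15] * 50))   # False
print(flowering_competent([4] * 40 + [15] * 50))   # True
```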
Devernalization It is possible to devernalize a plant by exposure to high temperatures subsequent to vernalization. For example, commercial onion growers store sets at low temperatures, but devernalize them before planting, because they want the plant's energy to go into enlarging its bulb (underground stem), not making flowers. See also Stratification (seeds) References External links https://www.jic.ac.uk/staff/caroline-dean/vernalization.htm Article in New Scientist Agricultural terminology Plant physiology Winter phenomena
Vernalization
[ "Biology" ]
1,616
[ "Plant physiology", "Plants" ]
152,036
https://en.wikipedia.org/wiki/S-100%20bus
The S-100 bus or Altair bus, IEEE 696-1983 (inactive-withdrawn), is an early computer bus designed in 1974 as a part of the Altair 8800. The bus was the first industry standard expansion bus for the microcomputer industry. S-100 computers, consisting of processor and peripheral cards, were produced by a number of manufacturers. The bus formed the basis for homebrew computers whose builders (e.g., the Homebrew Computer Club) implemented drivers for CP/M and MP/M. These microcomputers ran the gamut from hobbyist toy to small business workstation and were common in early home computers until the advent of the IBM PC. Architecture The bus is a passive backplane of 100-pin printed circuit board edge connectors wired in parallel. Circuit cards measuring about 5 by 10 inches, serving the functions of CPU, memory, or I/O interface, plugged into these connectors. The bus signal definitions closely follow those of an 8080 microprocessor system, since the Intel 8080 microprocessor was the first microprocessor hosted on the bus. The 100 lines of the bus can be grouped into four types: 1) Power, 2) Data, 3) Address, and 4) Clock and control. Power supplied on the bus is bulk unregulated +8 volts DC and ±16 volts DC, designed to be regulated on the cards to +5 V (used by TTL ICs), −5 V and +12 V (used by the Intel 8080 CPU), ±12 V (used by RS-232 line driver ICs), and +12 V (used by disk drive motors). The onboard voltage regulation is typically performed by devices of the 78xx family (for example, a 7805 device to produce +5 volts). These are linear regulators which are commonly mounted on heat sinks. The bi-directional 8-bit data bus of the Intel 8080 is split into two unidirectional 8-bit data buses. The processor could use only one of these at a time. The Sol-20 used a variation that had only a single 8-bit bus and used the now-unused pins as signal grounds to reduce electronic noise. The direction of the bus, in or out, was signaled using the otherwise unused DBIN pin. This became universal in the market as well, making the second bus superfluous. Later, these two 8-bit buses would be combined to support a 16-bit data width for more advanced processors, using the Sol's system to signal the direction. The address bus is 16 bits wide in the initial implementation and was later extended to 24 bits. A bus control signal can put these lines in a tri-state condition to allow direct memory access. The Cromemco Dazzler, for example, is an early card that retrieved digital images from memory using direct memory access. Clock and control signals are used to manage the traffic on the bus. For example, the DO Disable line will tristate the address lines during direct memory access. Unassigned lines of the original bus specification were later assigned to support more advanced processors. For example, the Zilog Z-80 processor has a non-maskable interrupt line that the Intel 8080 processor does not. One unassigned line of the bus then was reassigned to support the non-maskable interrupt request.
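As a rough illustration of the data-path arrangement just described, the sketch below models how a 16-bit word can be carried over the bus's two 8-bit data groups, with a single direction flag standing in for the DBIN-style signaling. The type and helper names (BusCycle, split_word, join_word) are invented for illustration and are not taken from the IEEE-696 text:

```python
# Simplified model of S-100 style data transfer: two 8-bit line groups
# (originally DATA IN and DATA OUT) carrying a combined 16-bit word,
# with one flag giving the transfer direction, as in the DBIN scheme.

from dataclasses import dataclass

@dataclass
class BusCycle:
    address: int      # 16 bits originally; 24 bits after IEEE-696
    data_low: int     # first 8-bit data group (bits 0-7)
    data_high: int    # second 8-bit data group (bits 8-15)
    read: bool        # True = processor reads (DBIN-style direction flag)

def split_word(word: int) -> tuple[int, int]:
    """Split a 16-bit word across the two 8-bit data groups."""
    return word & 0xFF, (word >> 8) & 0xFF

def join_word(low: int, high: int) -> int:
    """Reassemble a 16-bit word from the two 8-bit data groups."""
    return (high << 8) | low

low, high = split_word(0xBEEF)
cycle = BusCycle(address=0xF4A0, data_low=low, data_high=high, read=False)
assert join_word(cycle.data_low, cycle.data_high) == 0xBEEF
```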
History During the design of the Altair, the hardware required to make a usable machine was not available in time for the January 1975 launch date. The designer, Ed Roberts, also had the problem of the backplane taking up too much room. Attempting to avoid these problems, he placed the existing components in a case with additional "slots", so that the missing components could be plugged in later when they became available. The backplane is split into four separate cards, with the CPU on a fifth. He then looked for an inexpensive source of connectors, and he came across a supply of military surplus 100-pin edge connectors. The 100-pin bus was created by an anonymous draftsman, who selected the connector from a parts catalog and arbitrarily assigned signal names to groups of connector pins. A burgeoning industry of "clone" machines followed the introduction of the Altair in 1975. Most of these used the same bus layout as the Altair, creating a new industry standard. These companies were forced to refer to the system as the "Altair bus", and wanted another name in order to avoid referring to their competitor when describing their own system. The "S-100" name, short for "Standard 100", was coined by Harry Garland and Roger Melen, co-founders of Cromemco. While on a flight to attend the Atlantic City PC '76 microcomputer conference in August 1976, they shared the cabin with Bob Marsh and Lee Felsenstein of Processor Technology. Melen went over to them to convince them to adopt the same name. He had a beer in his hand, and when the plane hit a bump, Melen spilt some of the beer on Marsh. Marsh agreed to use the name, which Melen ascribes to him wanting to get Melen to leave with his beer. The term first appeared in print in a Cromemco advertisement in the November 1976 issue of Byte magazine. The first symposium on the bus, moderated by Jim Warren, was held November 20, 1976 at Diablo Valley College with a panel consisting of Harry Garland, George Morrow, and Lee Felsenstein. Just one year later, the S-100 bus would be described as "the most used busing standard ever developed in the computer industry." Cromemco was the largest of the manufacturers, followed by Vector Graphic and North Star Computers. Other innovators were companies such as Alpha Microsystems, IMS Associates, Inc., Godbout Electronics (later CompuPro), and Ithaca InterSystems. In May 1984, Microsystems published a comprehensive product directory listing over 500 S-100/IEEE-696 products from over 150 companies. The bus signals were simple to create using an 8080 CPU, but increasingly less so when using other processors like the 68000. More board space was occupied by signal conversion logic. Nonetheless by 1984, eleven different processors were hosted on the bus, from the 8-bit Intel 8080 to the 16-bit Zilog Z-8000. In 1986, Cromemco introduced the XXU card, designed by Ed Lupin, utilizing a 32-bit Motorola 68020 processor. IEEE-696 Standard As the bus gained momentum, there was a need to develop a formal specification of the bus to help assure compatibility of products produced by different manufacturers. There was also a need to extend the bus so that it could support processors more capable than the Intel 8080 used in the original Altair Computer. In May 1978, George Morrow and Howard Fullmer published a "Proposed Standard for the S-100 Bus", noting that 150 vendors were already supplying products for the bus. This proposed standard documented the 8-bit data path and 16-bit address path of the bus and stated that consideration was being given to extending the data path to 16 bits and the address path to 24 bits. In July 1979 Kells Elmquist, Howard Fullmer, David Gustavson, and George Morrow published a "Standard Specification for S-100 Bus Interface Devices." In this specification the data path was extended to 16 bits and the address path was extended to 24 bits.
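A quick back-of-the-envelope check (illustrative arithmetic only) shows what those width extensions bought in addressable memory:

```python
# Address-space sizes implied by the bus widths discussed above.
kib = 1024
print(2 ** 16 // kib)        # 64  -> 64 KiB with the original 16 address lines
print(2 ** 24 // kib ** 2)   # 16  -> 16 MiB with the extended 24 address lines
```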
The IEEE 696 Working Group, chaired by Mark Garetz, continued to develop the specification, which was proposed as an IEEE Standard and approved by the IEEE Computer Society on June 10, 1982. The American National Standards Institute (ANSI) approved the IEEE standard on September 8, 1983. The computer bus structure developed by Ed Roberts for the Altair 8800 computer had been extended, rigorously documented, and now designated as the American National Standard IEEE Std 696–1983. Retirement IBM introduced the IBM Personal Computer in 1981 and followed it with increasingly capable models: the XT in 1983 and the AT in 1984. The success of these computers, which used IBM's own, incompatible bus architecture, cut deeply into the market for S-100 bus products. In May 1984, Sol Libes (who had been a member of the IEEE-696 Working Group) wrote in Microsystems: "there is no doubt that the S-100 market can now be considered a mature industry with only moderate growth potential, compared to the IBM PC-compatible market". As the IBM PC products captured the low-end of the market, S-100 machines moved up-scale to more powerful OEM and multiuser systems. Banks of S-100 bus computers were used, for example, to process the trades at the Chicago Mercantile Exchange; the United States Air Force deployed S-100 bus machines for their mission planning systems. However, throughout the 1980s the market for S-100 bus machines for the hobbyist, for personal use, and even for small business was on the decline. The market for S-100 bus products continued to contract through the early 1990s, as IBM-compatible computers became more capable. In 1992, the Chicago Mercantile Exchange, for example, replaced their S-100 bus computers with the IBM model PS/2. By 1994, the S-100 bus industry had contracted sufficiently that the IEEE did not see a need to continue supporting the IEEE-696 standard. The IEEE-696 standard was retired on June 14, 1994. References External links "S100 Computers", A website containing many photos of cards, documentation, and history ""Cromemco" based, S-100 micro-computer" , Robert Kuhmann's images of several cards "Herb's S-100 Stuff", Herbert Johnson's collection of history "IEEE-696 / Bus Documentation and Manuals Archive", Howard Harte's manuals collection Computer buses S-100 IEEE standards Computer-related introductions in 1974 Cromemco
S-100 bus
[ "Technology" ]
2,015
[ "Computer standards", "IEEE standards" ]
152,038
https://en.wikipedia.org/wiki/Karyotype
A karyotype is the general appearance of the complete set of chromosomes in the cells of a species or in an individual organism, mainly including their sizes, numbers, and shapes. Karyotyping is the process by which a karyotype is discerned by determining the chromosome complement of an individual, including the number of chromosomes and any abnormalities. A karyogram or idiogram is a graphical depiction of a karyotype, wherein chromosomes are generally organized in pairs, ordered by size and position of centromere for chromosomes of the same size. Karyotyping generally combines light microscopy and photography in the metaphase of the cell cycle, and results in a photomicrographic (or simply micrographic) karyogram. In contrast, a schematic karyogram is a designed graphic representation of a karyotype. In schematic karyograms, just one of the sister chromatids of each chromosome is generally shown for brevity; in reality they are generally so close together that they look as one on photomicrographs as well, unless the resolution is high enough to distinguish them. The study of whole sets of chromosomes is sometimes known as karyology. Karyotypes describe the chromosome count of an organism and what these chromosomes look like under a light microscope. Attention is paid to their length, the position of the centromeres, banding pattern, any differences between the sex chromosomes, and any other physical characteristics. The preparation and study of karyotypes is part of cytogenetics. The basic number of chromosomes in the somatic cells of an individual or a species is called the somatic number and is designated 2n. In the germ-line (the sex cells) the chromosome number is n (humans: n = 23). Thus, in humans 2n = 46. So, in normal diploid organisms, autosomal chromosomes are present in two copies. There may, or may not, be sex chromosomes. Polyploid cells have multiple copies of chromosomes and haploid cells have single copies. Karyotypes can be used for many purposes, such as to study chromosomal aberrations, cellular function, and taxonomic relationships, for use in medicine, and to gather information about past evolutionary events (karyosystematics). Observations on karyotypes Staining The study of karyotypes is made possible by staining. Usually, a suitable dye, such as Giemsa, is applied after cells have been arrested during cell division by a solution of colchicine, usually in metaphase or prometaphase, when chromosomes are most condensed. In order for the Giemsa stain to adhere correctly, all chromosomal proteins must be digested and removed. For humans, white blood cells are used most frequently because they are easily induced to divide and grow in tissue culture. Sometimes observations may be made on non-dividing (interphase) cells. The sex of an unborn fetus can be predicted by observation of interphase cells (see amniocentesis and Barr body). Observations Six different characteristics of karyotypes are usually observed and compared: Differences in absolute sizes of chromosomes. Chromosomes can vary in absolute size by as much as twenty-fold between genera of the same family. For example, the legumes Lotus tenuis and Vicia faba each have six pairs of chromosomes, yet V. faba chromosomes are many times larger. These differences probably reflect different amounts of DNA duplication. Differences in the position of centromeres. These differences probably came about through translocations. Differences in relative size of chromosomes.
These differences probably arose from segmental interchange of unequal lengths. Differences in basic number of chromosomes. These differences could have resulted from successive unequal translocations which removed all the essential genetic material from a chromosome, permitting its loss without penalty to the organism (the dislocation hypothesis), or through fusion. Humans have one pair fewer chromosomes than the great apes. Human chromosome 2 appears to have resulted from the fusion of two ancestral chromosomes, and many of the genes of those two original chromosomes have been translocated to other chromosomes. Differences in number and position of satellites. Satellites are small bodies attached to a chromosome by a thin thread. Differences in degree and distribution of GC content (Guanine-Cytosine pairs versus Adenine-Thymine). In metaphase, where the karyotype is typically studied, all DNA is condensed, but at other times DNA with a high GC content is usually less condensed; that is, it tends to appear as euchromatin rather than heterochromatin. GC-rich DNA tends to contain more coding DNA and be more transcriptionally active. GC-rich DNA is lighter on Giemsa staining. Euchromatin regions contain larger amounts of Guanine-Cytosine pairs (that is, they have a higher GC content). The staining technique using Giemsa staining is called G banding and therefore produces the typical "G-bands". A full account of a karyotype may therefore include the number, type, shape and banding of the chromosomes, as well as other cytogenetic information. Variation is often found: between the sexes, between the germ-line and soma (between gametes and the rest of the body), between members of a population (chromosome polymorphism), in geographic specialization, and in mosaics or otherwise abnormal individuals. Human karyogram Both the micrographic and schematic karyograms shown in this section have a standard chromosome layout, and display darker and lighter regions as seen on G banding, which is the appearance of the chromosomes after treatment with trypsin (to partially digest the chromosomes) and staining with Giemsa stain. Compared to darker regions, the lighter regions are generally more transcriptionally active, with a greater ratio of coding DNA versus non-coding DNA, and a higher GC content. Both the micrographic and schematic karyograms show the normal human diploid karyotype, which is the typical composition of the genome within a normal cell of the human body, and which contains 22 pairs of autosomal chromosomes and one pair of sex chromosomes (allosomes). A major exception to diploidy in humans is gametes (sperm and egg cells), which are haploid with 23 unpaired chromosomes; this ploidy is not shown in these karyograms. The micrographic karyogram is converted into grayscale, whereas the schematic karyogram shows the purple hue as typically seen on Giemsa stain (a result of its azure B component, which stains DNA purple). The schematic karyogram in this section is a graphical representation of the idealized karyotype. For each chromosome pair, the scale to the left shows the length in terms of million base pairs, and the scale to the right shows the designations of the bands and sub-bands. Such bands and sub-bands are used by the International System for Human Cytogenomic Nomenclature to describe locations of chromosome abnormalities. Each row of chromosomes is vertically aligned at centromere level.
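Band designations of the kind shown on those scales have a regular structure: a chromosome identifier, an arm letter (p for short, q for long), region and band digits, and an optional sub-band after a dot. As an informal illustration of that structure only (this parser is invented for this sketch and is not an official ISCN tool), a designation such as 5p15.2 can be unpacked mechanically:

```python
import re

# Informal parser for cytogenetic band designations such as "5p15.2":
# chromosome, arm (p = short, q = long), region+band digits, and an
# optional sub-band after the dot. Illustrative only.
BAND_RE = re.compile(r"^(\d{1,2}|[XY])([pq])(\d+)(?:\.(\d+))?$")

def parse_band(designation: str) -> dict:
    m = BAND_RE.match(designation)
    if not m:
        raise ValueError(f"not a band designation: {designation!r}")
    chrom, arm, region_band, sub_band = m.groups()
    return {
        "chromosome": chrom,
        "arm": "short (p)" if arm == "p" else "long (q)",
        "region_band": region_band,   # e.g. "15" = region 1, band 5
        "sub_band": sub_band,         # e.g. "2", or None if absent
    }

print(parse_band("5p15.2"))
print(parse_band("Xq28"))
```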
Human chromosome groups Based on the karyogram characteristics of size, position of the centromere and sometimes the presence of a chromosomal satellite (a segment distal to a secondary constriction), the human chromosomes are classified into the following groups: Alternatively, the human genome can be classified as follows, based on pairing, sex differences, as well as location within the cell nucleus versus inside mitochondria: 22 homologous autosomal chromosome pairs (chromosomes 1 to 22). Homologous means that they have the same genes in the same loci, and autosomal means that they are not sex chromosomes. Two sex chromosomes (in the green rectangle at bottom right in the schematic karyogram, with adjacent silhouettes of typical representative phenotypes): The most common karyotypes for females contain two X chromosomes and are denoted 46,XX; males usually have both an X and a Y chromosome, denoted 46,XY. However, approximately 0.018% of humans are intersex, sometimes due to variations in sex chromosomes. The human mitochondrial genome (shown at bottom left in the schematic karyogram, to scale compared to the nuclear DNA in terms of base pairs), although this is not included in micrographic karyograms in clinical practice. Its genome is relatively tiny compared to the rest. Copy number Schematic karyograms generally display a DNA copy number corresponding to the G0 phase of the cellular state (outside of the replicative cell cycle), which is the most common state of cells. The schematic karyogram in this section also shows this state. In this state (as well as during the G1 phase of the cell cycle), each cell has 2 autosomal chromosomes of each kind (designated 2n), where each chromosome has one copy of each locus, making a total copy number of 2 for each locus (2c). At top center, the schematic karyogram also shows the chromosome 3 pair after having undergone DNA synthesis, occurring in the S phase (annotated as S) of the cell cycle. This interval includes the G2 phase and metaphase (annotated as "Meta."). During this interval, there is still 2n, but each chromosome will have 2 copies of each locus, wherein each sister chromatid (chromosome arm) is connected at the centromere, for a total of 4c. The chromosomes on micrographic karyograms are in this state as well, because they are generally micrographed in metaphase, but during this phase the two copies of each chromosome are so close to each other that they appear as one unless the image resolution is high enough to distinguish them. In reality, during the G0 and G1 phases, nuclear DNA is dispersed as chromatin and does not show visually distinguishable chromosomes even on micrography. The copy number of the human mitochondrial genome per human cell varies from 0 (erythrocytes) up to 1,500,000 (oocytes), mainly depending on the number of mitochondria per cell. Diversity and evolution of karyotypes Although the replication and transcription of DNA are highly standardized in eukaryotes, the same cannot be said for their karyotypes, which are highly variable. There is variation between species in chromosome number, and in detailed organization, despite their construction from the same macromolecules. This variation provides the basis for a range of studies in evolutionary cytology. In some cases there is even significant variation within species.
In a review, Godfrey and Masters conclude: Although much is known about karyotypes at the descriptive level, and it is clear that changes in karyotype organization have had effects on the evolutionary course of many species, it is quite unclear what the general significance might be. Changes during development Instead of the usual gene repression, some organisms undergo large-scale elimination of heterochromatin, or other kinds of visible adjustment to the karyotype. Chromosome elimination. In some species, as in many sciarid flies, entire chromosomes are eliminated during development. Chromatin diminution (founding father: Theodor Boveri). In this process, found in some copepods and roundworms such as Ascaris suum, portions of the chromosomes are cast away in particular cells. This process is a carefully organised genome rearrangement where new telomeres are constructed and certain heterochromatin regions are lost. In A. suum, all the somatic cell precursors undergo chromatin diminution. X-inactivation. The inactivation of one X chromosome takes place during the early development of mammals (see Barr body and dosage compensation). In placental mammals, the inactivation is random as between the two Xs; thus the mammalian female is a mosaic in respect of her X chromosomes. In marsupials it is always the paternal X which is inactivated. In human females some 15% of somatic cells escape inactivation, and the number of genes affected on the inactivated X chromosome varies between cells: in fibroblast cells up to about 25% of genes on the Barr body escape inactivation. Number of chromosomes in a set A spectacular example of variability between closely related species is the muntjac, which was investigated by Kurt Benirschke and Doris Wurster. The diploid number of the Chinese muntjac, Muntiacus reevesi, was found to be 46, all telocentric. When they looked at the karyotype of the closely related Indian muntjac, Muntiacus muntjak, they were astonished to find it had female = 6, male = 7 chromosomes. The number of chromosomes in the karyotype between (relatively) unrelated species is hugely variable. The low record is held by the nematode Parascaris univalens, where the haploid number is n = 1, and by an ant, Myrmecia pilosula. The high record would be somewhere amongst the ferns, with the adder's tongue fern Ophioglossum ahead with an average of 1,262 chromosomes. Top score for animals might be the shortnose sturgeon Acipenser brevirostrum at 372 chromosomes. The existence of supernumerary or B chromosomes means that chromosome number can vary even within one interbreeding population; and aneuploids are another example, though in this case they would not be regarded as normal members of the population. Fundamental number The fundamental number, FN, of a karyotype is the number of visible major chromosomal arms per set of chromosomes. Thus, FN ≤ 2 × 2n, the difference depending on the number of chromosomes considered single-armed (acrocentric or telocentric). Humans have FN = 82, due to the presence of five acrocentric chromosome pairs: 13, 14, 15, 21, and 22 (the human Y chromosome is also acrocentric). The fundamental autosomal number or autosomal fundamental number, FNa or AN, of a karyotype is the number of visible major chromosomal arms per set of autosomes (non-sex-linked chromosomes). Ploidy Ploidy is the number of complete sets of chromosomes in a cell. Polyploidy, where there are more than two sets of homologous chromosomes in the cells, occurs mainly in plants.
It has been of major significance in plant evolution according to Stebbins. The proportion of flowering plants which are polyploid was estimated by Stebbins to be 30–35%, but in grasses the average is much higher, about 70%. Polyploidy in lower plants (ferns, horsetails and psilotales) is also common, and some species of ferns have reached levels of polyploidy far in excess of the highest levels known in flowering plants. Polyploidy in animals is much less common, but it has been significant in some groups. Polyploid series in related species which consist entirely of multiples of a single basic number are known as euploid. Haplo-diploidy, where one sex is diploid, and the other haploid, is a common arrangement in the Hymenoptera, and in some other groups. Endopolyploidy occurs when, in adult differentiated tissues, the cells have ceased to divide by mitosis, but the nuclei contain more than the original somatic number of chromosomes. In the endocycle (endomitosis or endoreduplication) chromosomes in a 'resting' nucleus undergo reduplication, the daughter chromosomes separating from each other inside an intact nuclear membrane. In many instances, endopolyploid nuclei contain tens of thousands of chromosomes (which cannot be exactly counted). The cells do not always contain exact multiples (powers of two), which is why the simple definition 'an increase in the number of chromosome sets caused by replication without cell division' is not quite accurate. This process (especially studied in insects and some higher plants such as maize) may be a developmental strategy for increasing the productivity of tissues which are highly active in biosynthesis. The phenomenon occurs sporadically throughout the eukaryote kingdom from protozoa to humans; it is diverse and complex, and serves differentiation and morphogenesis in many ways. Aneuploidy Aneuploidy is the condition in which the chromosome number in the cells is not the typical number for the species. This would give rise to a chromosome abnormality such as an extra chromosome or one or more chromosomes lost. Abnormalities in chromosome number usually cause a defect in development. Down syndrome and Turner syndrome are examples of this. Aneuploidy may also occur within a group of closely related species. Classic examples in plants are the genus Crepis, where the gametic (= haploid) numbers form the series x = 3, 4, 5, 6, and 7; and Crocus, where every number from x = 3 to x = 15 is represented by at least one species. Evidence of various kinds shows that trends of evolution have gone in different directions in different groups. In primates, the great apes have 24×2 chromosomes whereas humans have 23×2. Human chromosome 2 was formed by a merger of ancestral chromosomes, reducing the number. Chromosomal polymorphism Some species are polymorphic for different chromosome structural forms. The structural variation may be associated with different numbers of chromosomes in different individuals, which occurs in the ladybird beetle Chilocorus stigma, some mantids of the genus Ameles, and the European shrew Sorex araneus. There is some evidence from the case of the mollusc Thais lapillus (the dog whelk) on the Brittany coast that the two chromosome morphs are adapted to different habitats. Species trees The detailed study of chromosome banding in insects with polytene chromosomes can reveal relationships between closely related species: the classic example is the study of chromosome banding in Hawaiian drosophilids by Hampton L. Carson.
The Hawaiian Islands have the most diverse collection of drosophilid flies in the world, living from rainforests to subalpine meadows. These roughly 800 Hawaiian drosophilid species are usually assigned to two genera, Drosophila and Scaptomyza, in the family Drosophilidae. The polytene banding of the 'picture wing' group, the best-studied group of Hawaiian drosophilids, enabled Carson to work out the evolutionary tree long before genome analysis was practicable. In a sense, gene arrangements are visible in the banding patterns of each chromosome. Chromosome rearrangements, especially inversions, make it possible to see which species are closely related. The results are clear. The inversions, when plotted in tree form (and independent of all other information), show a clear "flow" of species from older to newer islands. There are also cases of colonization back to older islands, and skipping of islands, but these are much less frequent. Using K-Ar dating, the present islands date from 0.4 million years ago (mya) (Mauna Kea) to 10 mya (Necker). The oldest member of the Hawaiian archipelago still above the sea is Kure Atoll, which can be dated to 30 mya. The archipelago itself (produced by the Pacific Plate moving over a hot spot) has existed for far longer, at least into the Cretaceous. Previous islands now beneath the sea (guyots) form the Emperor Seamount Chain. All of the native Drosophila and Scaptomyza species in Hawaii have apparently descended from a single ancestral species that colonized the islands, probably 20 million years ago. The subsequent adaptive radiation was spurred by a lack of competition and a wide variety of niches. Although it would be possible for a single gravid female to colonise an island, it is more likely to have been a group from the same species. There are other animals and plants on the Hawaiian archipelago which have undergone similar, if less spectacular, adaptive radiations. Chromosome banding Chromosomes display a banded pattern when treated with some stains. Bands are alternating light and dark stripes that appear along the lengths of chromosomes. Unique banding patterns are used to identify chromosomes and to diagnose chromosomal aberrations, including chromosome breakage, loss, duplication, translocation or inverted segments. A range of different chromosome treatments produce a range of banding patterns: G-bands, R-bands, C-bands, Q-bands, T-bands and NOR-bands. Depiction of karyotypes Types of banding Cytogenetics employs several techniques to visualize different aspects of chromosomes: G-banding is obtained with Giemsa stain following digestion of chromosomes with trypsin. It yields a series of lightly and darkly stained bands: the dark regions tend to be heterochromatic, late-replicating and AT rich. The light regions tend to be euchromatic, early-replicating and GC rich. This method will normally produce 300–400 bands in a normal human genome. It is the most common chromosome banding method. R-banding is the reverse of G-banding (the R stands for "reverse"). The dark regions are euchromatic (guanine-cytosine rich regions) and the bright regions are heterochromatic (thymine-adenine rich regions). C-banding: Giemsa binds to constitutive heterochromatin, so it stains centromeres. The name is derived from centromeric or constitutive heterochromatin. The preparations undergo alkaline denaturation prior to staining, leading to an almost complete depurination of the DNA.
After washing the probe, the remaining DNA is renatured again and stained with Giemsa solution consisting of methylene azure, methylene violet, methylene blue, and eosin. Heterochromatin binds a lot of the dye, while the rest of the chromosomes absorb only little of it. C-banding proved to be especially well-suited for the characterization of plant chromosomes. Q-banding is a fluorescent pattern obtained using quinacrine for staining. The pattern of bands is very similar to that seen in G-banding. The bands can be recognized by a yellow fluorescence of differing intensity. Most of the stained DNA is heterochromatin. Quinacrine (atebrin) binds both regions rich in AT and in GC, but only the AT-quinacrine complex fluoresces. Since regions rich in AT are more common in heterochromatin than in euchromatin, these regions are labelled preferentially. The different intensities of the single bands mirror the different contents of AT. Other fluorochromes like DAPI or Hoechst 33258 also lead to characteristic, reproducible patterns. Each of them produces its specific pattern. In other words, the binding properties and the specificity of the fluorochromes are not based exclusively on their affinity to regions rich in AT. Rather, the distribution of AT and the association of AT with other molecules like histones, for example, influences the binding properties of the fluorochromes. T-banding: visualizes telomeres. Silver staining: Silver nitrate stains the nucleolar organization region-associated protein. This yields a dark region where the silver is deposited, denoting the activity of rRNA genes within the NOR. Classic karyotype cytogenetics In the "classic" (depicted) karyotype, a dye, often Giemsa (G-banding), less frequently mepacrine (quinacrine), is used to stain bands on the chromosomes. Giemsa is specific for the phosphate groups of DNA. Quinacrine binds to the adenine-thymine-rich regions. Each chromosome has a characteristic banding pattern that helps to identify them; both chromosomes in a pair will have the same banding pattern. Karyotypes are arranged with the short arm of the chromosome on top, and the long arm on the bottom. Some karyotypes call the short and long arms p and q, respectively. In addition, the differently stained regions and sub-regions are given numerical designations from proximal to distal on the chromosome arms. For example, Cri du chat syndrome involves a deletion on the short arm of chromosome 5. It is written as 46,XX,5p-. The critical region for this syndrome is deletion of p15.2 (the locus on the chromosome), which is written as 46,XX,del(5)(p15.2). Multicolor FISH (mFISH) and spectral karyotype (SKY technique) Multicolor FISH and the older spectral karyotyping are molecular cytogenetic techniques used to simultaneously visualize all the pairs of chromosomes in an organism in different colors. Fluorescently labeled probes for each chromosome are made by labeling chromosome-specific DNA with different fluorophores. Because there are a limited number of spectrally distinct fluorophores, a combinatorial labeling method is used to generate many different colors. Fluorophore combinations are captured and analyzed by a fluorescence microscope using up to 7 narrow-banded fluorescence filters or, in the case of spectral karyotyping, by using an interferometer attached to a fluorescence microscope. In the case of an mFISH image, every combination of fluorochromes from the resulting original images is replaced by a pseudo color in dedicated image analysis software.
Thus, chromosomes or chromosome sections can be visualized and identified, allowing for the analysis of chromosomal rearrangements. In the case of spectral karyotyping, image processing software assigns a pseudo color to each spectrally different combination, allowing the visualization of the individually colored chromosomes. Multicolor FISH is used to identify structural chromosome aberrations in cancer cells and other disease conditions when Giemsa banding or other techniques are not accurate enough. Digital karyotyping Digital karyotyping is a technique used to quantify the DNA copy number on a genomic scale. Short sequences of DNA from specific loci all over the genome are isolated and enumerated. This method is also known as virtual karyotyping. Using this technique, it is possible to detect small alterations in the human genome that cannot be detected through methods employing metaphase chromosomes. Some loci deletions are known to be related to the development of cancer. Such deletions are found through digital karyotyping using the loci associated with cancer development. Chromosome abnormalities Chromosome abnormalities can be numerical, as in the presence of extra or missing chromosomes, or structural, as in derivative chromosomes, translocations, inversions, large-scale deletions or duplications. Numerical abnormalities, also known as aneuploidy, often occur as a result of nondisjunction during meiosis in the formation of a gamete; trisomies, in which three copies of a chromosome are present instead of the usual two, are common numerical abnormalities. Structural abnormalities often arise from errors in homologous recombination. Both types of abnormalities can occur in gametes and therefore will be present in all cells of an affected person's body, or they can occur during mitosis and give rise to a genetic mosaic individual who has some normal and some abnormal cells. In humans Chromosomal abnormalities that lead to disease in humans include Turner syndrome, which results from a single X chromosome (45,X or 45,X0). Klinefelter syndrome, the most common male chromosomal disease, otherwise known as 47,XXY, is caused by an extra X chromosome. Edwards syndrome is caused by trisomy (three copies) of chromosome 18. Down syndrome, a common chromosomal disease, is caused by trisomy of chromosome 21. Patau syndrome is caused by trisomy of chromosome 13. Trisomy 9, believed to be the 4th most common trisomy, has many long-lived affected individuals, but only in a form other than a full trisomy, such as trisomy 9p syndrome or mosaic trisomy 9. They often function quite well, but tend to have trouble with speech. Also documented are trisomy 8 and trisomy 16, although affected individuals generally do not survive to birth. Some disorders arise from loss of just a piece of one chromosome, including Cri du chat (cry of the cat), from a truncated short arm on chromosome 5. The name comes from the babies' distinctive cry, caused by abnormal formation of the larynx. 1p36 Deletion syndrome, from the loss of part of the short arm of chromosome 1. Angelman syndrome – 50% of cases have a segment of the long arm of chromosome 15 missing; a deletion of the maternal genes, an example of an imprinting disorder. Prader-Willi syndrome – 50% of cases have a segment of the long arm of chromosome 15 missing; a deletion of the paternal genes, an example of an imprinting disorder.
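The ISCN-style shorthand used for such conditions (for example 47,XY,+21 for trisomy 21 in a male, or 45,X for Turner syndrome) encodes the total chromosome count, the sex chromosome complement, and any listed gains or losses. As a rough, unofficial illustration of how that shorthand reads (the function name and the very limited parsing logic are inventions of this sketch; real ISCN notation is far richer):

```python
# Informal reader for simple ISCN-style karyotype strings such as
# "47,XY,+21" or "45,X". Handles only the basic numeric cases shown
# here. Illustrative only.

def describe_karyotype(karyotype: str) -> str:
    fields = karyotype.split(",")
    count = int(fields[0])   # total chromosome count
    sex = fields[1]          # sex chromosome complement
    extras = fields[2:]      # e.g. "+21" (gain) or "-18" (loss)
    notes = []
    for item in extras:
        if item.startswith("+"):
            notes.append(f"extra chromosome {item[1:]}")
        elif item.startswith("-"):
            notes.append(f"missing chromosome {item[1:]}")
    detail = "; ".join(notes) if notes else "no gains or losses listed"
    return f"{count} chromosomes, sex complement {sex}; {detail}"

print(describe_karyotype("47,XY,+21"))  # trisomy 21 (Down syndrome)
print(describe_karyotype("45,X"))       # monosomy X (Turner syndrome)
```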
Chromosomal abnormalities can also occur in cancerous cells of an otherwise genetically normal individual; one well-documented example is the Philadelphia chromosome, a translocation mutation commonly associated with chronic myelogenous leukemia and less often with acute lymphoblastic leukemia. History of karyotype studies Chromosomes were first observed in plant cells by Carl Wilhelm von Nägeli in 1842. Their behavior in animal (salamander) cells was described by Walther Flemming, the discoverer of mitosis, in 1882. The name was coined by another German anatomist, Heinrich von Waldeyer, in 1888. It is Neo-Latin, from Ancient Greek κάρυον (karyon), "kernel", "seed", or "nucleus", and τύπος (typos), "general form". The next stage took place after the development of genetics in the early 20th century, when it was appreciated that chromosomes (which can be observed by karyotype) were the carriers of genes. The term karyotype, defined as the phenotypic appearance of the somatic chromosomes in contrast to their genic contents, was introduced by Grigory Levitsky, who worked with Lev Delaunay, Sergei Navashin, and Nikolai Vavilov. The subsequent history of the concept can be followed in the works of C. D. Darlington and Michael JD White. Investigation into the human karyotype took many years to settle the most basic question: how many chromosomes does a normal diploid human cell contain? In 1912, Hans von Winiwarter reported 47 chromosomes in spermatogonia and 48 in oogonia, concluding an XX/XO sex determination mechanism. Painter in 1922 was not certain whether the diploid number of humans was 46 or 48, at first favoring 46, but revised his opinion from 46 to 48, and he correctly insisted on humans having an XX/XY system. Considering the techniques of the time, these results were remarkable. Joe Hin Tjio, working in Albert Levan's lab, found the chromosome count to be 46 using new techniques available at the time: Using cells in tissue culture Pretreating cells in a hypotonic solution, which swells them and spreads the chromosomes Arresting mitosis in metaphase by a solution of colchicine Squashing the preparation on the slide forcing the chromosomes into a single plane Cutting up a photomicrograph and arranging the result into an indisputable karyogram. The work took place in 1955, and was published in 1956. The karyotype of humans includes only 46 chromosomes. The other great apes have 48 chromosomes. Human chromosome 2 is now known to be a result of an end-to-end fusion of two ancestral ape chromosomes. See also References External links Making a karyotype, an online activity from the University of Utah's Genetic Science Learning Center. Karyotyping activity with case histories from the University of Arizona's Biology Project. Printable karyotype project from Biology Corner, a resource site for biology and science teachers. Chromosome Staining and Banding Techniques Bjorn Biosystems for Karyotyping and FISH Cell biology Chromosomes Cytogenetics Evolutionary biology Genetics techniques
Karyotype
[ "Engineering", "Biology" ]
6,794
[ "Genetics techniques", "Evolutionary biology", "Cell biology", "Genetic engineering" ]
152,109
https://en.wikipedia.org/wiki/Scalable%20Coherent%20Interface
The Scalable Coherent Interface or Scalable Coherent Interconnect (SCI) is a high-speed interconnect standard for shared memory multiprocessing and message passing. The goal was to scale well and provide system-wide memory coherence with a simple interface; i.e., a standard to replace existing buses in multiprocessor systems with one free of their inherent scalability and performance limitations. The IEEE Std 1596-1992, IEEE Standard for Scalable Coherent Interface (SCI), was approved by the IEEE standards board on March 19, 1992. It saw some use during the 1990s, but never became widely used and has been replaced by other systems from the early 2000s. History Soon after the Fastbus (IEEE 960) follow-on Futurebus (IEEE 896) project in 1987, some engineers predicted it would already be too slow for the high performance computing marketplace by the time it would be released in the early 1990s. In response, a "Superbus" study group was formed in November 1987. Another working group of the standards association of the Institute of Electrical and Electronics Engineers (IEEE) spun off to form a standard targeted at this market in July 1988. It was essentially a subset of Futurebus features that could be easily implemented at high speed, along with minor additions to make it easier to connect to other systems, such as VMEbus. Most of the developers had backgrounds in high-speed computer buses. Representatives from companies in the computer industry and research community included Amdahl, Apple Computer, BB&N, Hewlett-Packard, CERN, Dolphin Server Technology, Cray Research, Sequent, AT&T, Digital Equipment Corporation, McDonnell Douglas, National Semiconductor, Stanford Linear Accelerator Center, Tektronix, Texas Instruments, Unisys, University of Oslo, and University of Wisconsin. The original intent was a single standard for all buses in the computer. The working group soon came up with the idea of using point-to-point communication in the form of insertion rings. This avoided the lumped capacitance, the limited physical length/speed-of-light problems, and stub reflections, in addition to allowing parallel transactions. The use of insertion rings is credited to Manolis Katevenis, who suggested it at one of the early meetings of the working group. The working group for developing the standard was led by David B. Gustavson (chair) and David V. James (vice chair). David V. James was a major contributor to writing the specifications, including the executable C-code. Stein Gjessing's group at the University of Oslo used formal methods to verify the coherence protocol, and Dolphin Server Technology implemented a node controller chip including the cache coherence logic. Different versions and derivatives of SCI were implemented by companies like Dolphin Interconnect Solutions, Convex, Data General AViiON (using cache controller and link controller chips from Dolphin), Sequent and Cray Research. Dolphin Interconnect Solutions implemented a PCI and PCI-Express connected derivative of SCI that provides non-coherent shared memory access. This implementation was used by Sun Microsystems for its high-end clusters, by Thales Group, and by several others, including volume applications for message passing within HPC clustering and medical imaging. SCI was often used to implement non-uniform memory access architectures. It was also used by Sequent Computer Systems as the processor memory bus in their NUMA-Q systems. Numascale developed a derivative to connect with coherent HyperTransport.
The standard The standard defined two interface levels: the physical level, which deals with electrical signals, connectors, and mechanical and thermal conditions; and the logical level, which describes the address space, data transfer protocols, cache coherence mechanisms, synchronization primitives, control and status registers, and initialization and error recovery facilities. This structure allowed new developments in physical interface technology to be easily adapted without any redesign on the logical level. Scalability for large systems is achieved through a distributed directory-based cache coherence model. (The other popular models for cache coherency are based on system-wide eavesdropping (snooping) of memory transactions – a scheme which is not very scalable.) In SCI each node contains a directory with a pointer to the next node in a linked list that shares a particular cache line. SCI defines a 64-bit flat address space (16 exabytes) where 16 bits are used for identifying a node (65,536 nodes) and 48 bits for the address within the node (256 terabytes). A node can contain many processors and/or memory. The SCI standard defines a packet switched network. Topologies SCI can be used to build systems with different types of switching topologies, from centralized to fully distributed switching: With a central switch, each node is connected to the switch with a ringlet (in this case a two-node ring). In distributed switching systems, each node can be connected to a ring of arbitrary length, and either all or some of the nodes can be connected to two or more rings. The most common way to describe these multi-dimensional topologies is as k-ary n-cubes (or tori). The SCI standard specification mentions several such topologies as examples. The 2-D torus is a combination of rings in two dimensions. Switching between the two dimensions requires a small switching capability in the node. This can be expanded to three or more dimensions. The concept of folding rings can also be applied to the torus topologies to avoid any long connection segments. Transactions SCI sends information in packets. Each packet consists of an unbroken sequence of 16-bit symbols. Each symbol is accompanied by a flag bit. A transition of the flag bit from 0 to 1 indicates the start of a packet. A transition from 1 to 0 occurs 1 (for echoes) or 4 symbols before the packet end. A packet contains a header with address, command, and status information, a payload (from 0 through optional lengths of data), and a CRC check symbol. The first symbol in the packet header contains the destination node address. If the address is not within the domain handled by the receiving node, the packet is passed to the output through the bypass FIFO. Otherwise, the packet is fed to a receive queue and may be transferred to a ring in another dimension. All packets are marked when they pass the scrubber (a node is established as scrubber when the ring is initialized). Packets without a valid destination address will be removed when passing the scrubber for the second time, to avoid filling the ring with packets that would otherwise circulate indefinitely.
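As an informal illustration of the layout just described: the 16-bit node ID, 48-bit intra-node address, and leading destination symbol are stated above, but the exact field placement and the helper names in this sketch are assumptions, not taken from the standard text:

```python
# Sketch of SCI-style addressing and packet framing based on the
# description above: a 64-bit flat address whose top 16 bits name the
# node and whose low 48 bits address memory within the node, and a
# packet built from 16-bit symbols whose first symbol carries the
# destination node. Field ordering is illustrative only.

NODE_BITS, OFFSET_BITS = 16, 48

def split_address(addr64: int) -> tuple[int, int]:
    """Split a 64-bit SCI-style address into (node ID, offset in node)."""
    node = (addr64 >> OFFSET_BITS) & ((1 << NODE_BITS) - 1)
    offset = addr64 & ((1 << OFFSET_BITS) - 1)
    return node, offset

def make_packet(dest_node: int, payload_symbols: list[int]) -> list[int]:
    """Build a packet as a list of 16-bit symbols: destination first,
    payload in the middle, and a placeholder CRC symbol at the end."""
    crc = 0xFFFF  # placeholder; the standard defines a real CRC check symbol
    return [dest_node & 0xFFFF] + [s & 0xFFFF for s in payload_symbols] + [crc]

node, offset = split_address(0x0042_0000_DEAD_BEEF)
print(hex(node), hex(offset))   # 0x42 0xdeadbeef (i.e., within node 0x42)
```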
Cache coherence Cache coherence ensures data consistency in multiprocessor systems. The simplest form, applied in earlier systems, was based on clearing the cache contents between context switches and disabling the cache for data shared between two or more processors. These methods were feasible when the performance difference between the cache and memory was less than one order of magnitude. Modern processors with caches that are more than two orders of magnitude faster than main memory would not perform anywhere near optimally without more sophisticated methods for data consistency. Bus-based systems use eavesdropping (snooping) methods, since buses are inherently broadcast. Modern systems with point-to-point links use broadcast methods with snoop filter options to improve performance. Since broadcast and eavesdropping are inherently non-scalable, they are not used in SCI. Instead, SCI uses a distributed directory-based cache coherence protocol with a linked list of nodes containing processors that share a particular cache line. Each node holds a directory for the main memory of the node with a tag for each line of memory (same line length as the cache line). The memory tag holds a pointer to the head of the linked list and a state code for the line (three states – home, fresh, gone). Associated with each node is also a cache for holding remote data, with a directory containing forward and backward pointers to nodes in the linked list sharing the cache line. The tag for the cache has seven states (invalid, only fresh, head fresh, only dirty, head dirty, mid valid, tail valid). The distributed directory is scalable. The overhead for the directory-based cache coherence is a constant percentage of the node's memory and cache. This percentage is on the order of 4% for the memory and 7% for the cache. Legacy SCI is a standard for connecting the different resources within a multiprocessor computer system, and it is not as widely known to the public as, for example, the Ethernet family for connecting different systems. Different system vendors implemented different variants of SCI for their internal system infrastructure. These different implementations interface to very intricate mechanisms in processors and memory systems, and each vendor has to preserve some degree of compatibility for both hardware and software. Gustavson led a group called the Scalable Coherent Interface and Serial Express Users, Developers, and Manufacturers Association and maintained a web site for the technology starting in 1996. A series of workshops were held through 1999. After the first 1992 edition, follow-on projects defined shared data formats in 1993, a version using low-voltage differential signaling in 1996, and a memory interface known as Ramlink later in 1996. In January 1998, the SLDRAM corporation was formed to hold patents on an attempt to define a new memory interface that was related to another working group called SerialExpress or Local Area Memory Port. However, by early 1999 the new memory standard was abandoned. In 1999 a series of papers was published as a book on SCI. An updated specification was published in July 2000 by the International Electrotechnical Commission (IEC) of the International Organization for Standardization (ISO) as ISO/IEC 13961. See also Dolphin Interconnect Solutions List of device bandwidths NUMAlink QuickRing HIPPI IEEE 1355 RapidIO Myrinet QsNet Futurebus InfiniBand References Supercomputing Computer networks
Scalable Coherent Interface
[ "Technology" ]
2,025
[ "Supercomputing" ]
152,176
https://en.wikipedia.org/wiki/Philosopher%27s%20stone
The philosopher's stone is a mythic alchemical substance capable of turning base metals such as mercury into gold or silver; it was also known as "the tincture" and "the powder". Alchemists additionally believed that it could be used to make an elixir of life, which made rejuvenation and immortality possible. For many centuries, it was the most sought-after goal in alchemy. The philosopher's stone was the central symbol of the mystical terminology of alchemy, symbolizing perfection at its finest, divine illumination, and heavenly bliss. Efforts to discover the philosopher's stone were known as the Magnum Opus ("Great Work"). Antiquity The earliest known written mention of the philosopher's stone is in the Cheirokmeta by Zosimos of Panopolis (). Alchemical writers assign a longer history. Elias Ashmole and the anonymous author of Gloria Mundi (1620) claim that its history goes back to Adam, who acquired the knowledge of the stone directly from God. This knowledge was said to have been passed down through biblical patriarchs, giving them their longevity. The legend of the stone was also compared to the biblical history of the Temple of Solomon and the rejected cornerstone described in Psalm 118. The theoretical roots outlining the stone's creation can be traced to Greek philosophy. Alchemists later used the classical elements, the concept of anima mundi, and Creation stories presented in texts like Plato's Timaeus as analogies for their process. According to Plato, the four elements are derived from a common source or prima materia (first matter), associated with chaos. Prima materia is also the name alchemists assign to the starting ingredient for the creation of the philosopher's stone. The importance of this philosophical first matter persisted throughout the history of alchemy. In the seventeenth century, Thomas Vaughan writes, "the first matter of the stone is the very same with the first matter of all things." Middle Ages In the Byzantine Empire and the Arab empires, early medieval alchemists built upon the work of Zosimos. Byzantine and Muslim alchemists were fascinated by the concept of metal transmutation and attempted to carry out the process. The eighth-century Muslim alchemist Jabir ibn Hayyan (Latinized as Geber) analysed each classical element in terms of the four basic qualities. Fire was both hot and dry, earth cold and dry, water cold and moist, and air hot and moist. He theorized that every metal was a combination of these four principles, two of them interior and two exterior. From this premise, it was reasoned that the transmutation of one metal into another could be effected by the rearrangement of its basic qualities. This change would be mediated by a substance, which came to be called xerion in Greek and al-iksir in Arabic (from which the word elixir is derived). It was often considered to exist as a dry red powder (also known as al-kibrit al-ahmar, red sulfur) made from a legendary stone—the philosopher's stone. The elixir powder came to be regarded as a crucial component of transmutation by later Arab alchemists. In the 11th century, there was a debate among chemists in the Muslim world on whether the transmutation of substances was possible.
A leading opponent was the Persian polymath Avicenna (Ibn Sina), who discredited the theory of the transmutation of substances, stating, "Those of the chemical craft know well that no change can be effected in the different species of substances, though they can produce the appearance of such change." According to legend, the 13th-century scientist and philosopher, Albertus Magnus, is said to have discovered the philosopher's stone. Magnus does not confirm he discovered the stone in his writings, but he did record that he witnessed the creation of gold by "transmutation". Renaissance to early modern period The 16th-century Swiss alchemist Paracelsus (Philippus Aureolus Theophrastus Bombastus von Hohenheim) believed in the existence of alkahest, which he thought to be an undiscovered element from which all other elements (earth, fire, water, air) were simply derivative forms. Paracelsus believed that this element was, in fact, the philosopher's stone. The English philosopher Sir Thomas Browne in his spiritual testament Religio Medici (1643) identified the religious aspect of the quest for the philosopher's Stone when declaring: A mystical text published in the 17th century called the Mutus Liber appears to be a symbolic instruction manual for concocting a philosopher's stone. Called the "wordless book", it was a collection of 15 illustrations. In Buddhism and Hinduism The equivalent of the philosopher's stone in Buddhism and Hinduism is the Cintamani, also spelled as Chintamani. It is also referred to as Paras/Parasmani (, ) or Paris (). In Mahayana Buddhism, Chintamani is held by the bodhisattvas, Avalokiteshvara and Ksitigarbha. It is also seen carried upon the back of the Lung ta (wind horse) which is depicted on Tibetan prayer flags. By reciting the Dharani of Chintamani, Buddhist tradition maintains that one attains the Wisdom of Buddhas, is able to understand the truth of the Buddhas, and turns afflictions into Bodhi. It is said to allow one to see the Holy Retinue of Amitabha and his assembly upon one's deathbed. In Tibetan Buddhist tradition the Chintamani is sometimes depicted as a luminous pearl and is in the possession of several different forms of the Buddha. Within Hinduism, it is connected with the gods Vishnu and Ganesha. In Hindu tradition it is often depicted as a fabulous jewel in the possession of the Nāga king or as on the forehead of the Makara. The Yoga Vasistha, originally written in the tenth century AD, contains a story about the philosopher's stone. A great Hindu sage wrote about the spiritual accomplishment of Gnosis using the metaphor of the philosopher's stone. Sant Jnaneshwar (1275–1296) wrote a commentary with 17 references to the philosopher's stone that explicitly transmutes base metal into gold. The seventh-century Siddhar Thirumoolar in his classic Tirumandhiram explains man's path to immortal divinity. In verse 2709 he declares that the name of God, Shiva, is an alchemical vehicle that turns the body into immortal gold. Another depiction of the philosopher's stone is the Shyāmantaka Mani (). According to Hindu mythology, the Shyāmantaka Mani is a ruby, capable of preventing all natural calamities such as droughts, floods, etc. around its owner, as well as producing eight bhāras (≈170 pounds or 77 kilograms) of gold every day.
Properties The most commonly mentioned properties are the ability to transmute base metals into gold or silver, and the ability to heal all forms of illness and prolong the life of any person who consumes a small part of the philosopher's stone diluted in wine. Other mentioned properties include: creation of perpetually burning lamps, transmutation of common crystals into precious stones and diamonds, reviving of dead plants, creation of flexible or malleable glass, and the creation of a clone or homunculus. Names Numerous synonyms were used to make oblique reference to the stone, such as "white stone" (calculus albus, identified with the calculus candidus of Revelation 2:17 which was taken as a symbol of the glory of heaven), vitriol (as expressed in the backronym Visita Interiora Terrae Rectificando Invenies Occultum Lapidem), also lapis noster, lapis occultus, in water at the box, and numerous oblique, mystical or mythological references such as Adam, Aer, Animal, Alkahest, Antidotus, Antimonium, Aqua benedicta, Aqua volans per aeram, Arcanum, Atramentum, Autumnus, Basilicus, Brutorum cor, Bufo, Capillus, Capistrum auri, Carbones, Cerberus, Chaos, Cinis cineris, Crocus, Dominus philosophorum, Divine quintessence, Draco elixir, Filius ignis, Fimus, Folium, Frater, Granum, Granum frumenti, Haematites, Hepar, Herba, Herbalis, Lac, Melancholia, Ovum philosophorum, Panacea salutifera, Pandora, Phoenix, Philosophic mercury, Pyrites, Radices arboris solares, Regina, Rex regum, Sal metallorum, Salvator terrenus, Talcum, Thesaurus, Ventus hermetis. Many of the medieval allegories of Christ were adopted for the lapis, and the Christ and the Stone were indeed taken as identical in a mystical sense. The name of "Stone" or lapis itself is informed by early Christian allegory, such as Priscillian (4th century), who stated, In some texts, it is simply called "stone", or our stone, or in the case of Thomas Norton's Ordinal, "oure delycious stone". The stone was frequently praised and referred to in such terms. It may be noted that the Latin expression , as well as the Arabic from which the Latin derives, both employ the plural form of the word for philosopher. Thus a literal translation would be philosophers' stone rather than philosopher's stone. Appearance Descriptions of the philosopher's stone are numerous and various. According to alchemical texts, the stone of the philosophers came in two varieties, prepared by an almost identical method: white (for the purpose of making silver), and red (for the purpose of making gold), the white stone being a less matured version of the red stone. Some ancient and medieval alchemical texts leave clues to the physical appearance of the stone of the philosophers, specifically the red stone. It is often said to be orange (saffron coloured) or red when ground to powder. Or in a solid form, an intermediate between red and purple, transparent and glass-like. The weight is spoken of as being heavier than gold, and it is soluble in any liquid, and incombustible in fire. Alchemical authors sometimes suggest that the stone's descriptors are metaphorical. 
The appearance is expressed geometrically in Atalanta Fugiens Emblem XXI : He further describes in greater detail the metaphysical nature of the meaning of the emblem as a divine union of feminine and masculine principles: Rupescissa uses the imagery of the Christian passion, saying that it ascends "from the sepulcher of the Most Excellent King, shining and glorious, resuscitated from the dead and wearing a red diadem...". Interpretations The various names and attributes assigned to the philosopher's stone have led to long-standing speculation on its composition and source. Exoteric candidates have been found in metals, plants, rocks, chemical compounds, and bodily products such as hair, urine, and eggs. Justus von Liebig states that 'it was indispensable that every substance accessible... should be observed and examined'. Alchemists once thought a key component in the creation of the stone was a mythical element named carmot. Esoteric hermetic alchemists may reject work on exoteric substances, instead directing their search for the philosopher's stone inward. Though esoteric and exoteric approaches are sometimes mixed, it is clear that some authors "are not concerned with material substances but are employing the language of exoteric alchemy for the sole purpose of expressing theological, philosophical, or mystical beliefs and aspirations". New interpretations continue to be developed around spagyric, chemical, and esoteric schools of thought. The transmutation mediated by the stone has also been interpreted as a psychological process. Idries Shah devotes a chapter of his book, The Sufis, to provide a detailed analysis of the symbolic significance of alchemical work with the philosopher's stone. His analysis is based in part on a linguistic interpretation through Arabic equivalents of one of the terms for the stone (Azoth) as well as for sulfur, salt, and mercury. Creation The philosopher's stone is created by the alchemical method known as The Magnum Opus or The Great Work. Often expressed as a series of color changes or chemical processes, the instructions for creating the philosopher's stone are varied. When expressed in colours, the work may pass through phases of nigredo, albedo, citrinitas, and rubedo. When expressed as a series of chemical processes it often includes seven or twelve stages concluding in multiplication, and projection. Art and entertainment The philosopher's stone has been an inspiration, plot feature, or subject of innumerable artistic works: animations, comics, films, musical compositions, novels, and video games. Examples include Harry Potter and the Philosopher's Stone, As Above, So Below, Fullmetal Alchemist, The Flash and The Mystery of Mamo. The philosopher's stone is an important motif in Gothic fiction, and originated in William Godwin's 1799 novel St. Leon. See also Angelicall Stone Biological transmutation Cintamani Cupellation Midas Nicolas Flamel Nuclear transmutation Panacea (medicine) Synthesis of precious metals The Net (substance) Unobtainium Footnotes References Further reading Encyclopædia Britannica (2011). "Philosopher's stone" and "Alchemy". Guiley, Rosemary (2006). The Encyclopedia of Magic and Alchemy. New York: Facts on File. . pp. 250–252. Marlan, Stanton (2014). The Philosophers' Stone: Alchemical Imagination and the Soul's Logical Life. Doctoral dissertation. Pittsburgh, Penn.: Duquesne University. Myers, Richard (2003). The Basics of Chemistry. Westport, Conn.: Greenwood Publishing Group, USA. . pp. 11–12. Pagel, Walter (1982). 
Paracelsus: An Introduction to Philosophical Medicine in the Era of the Renaissance. Basel, Switzerland: Karger Publishers. . Thompson, C. J. S. (2002) [1932]. Alchemy and Alchemists. Chapter IX: "The Philosopher's Stone and the Elixir of Life". Mineola, NY: Dover Publications. . pp. 68–76. External links "The Stone of The Philosophers" by Edward Kelly MSS 95, Item 18 Lapis philosophorum at OPenn Alchemical substances Immortality Longevity myths Magic items Medieval legends Mythological substances Pandora Phoenix birds Supernatural legends Stones
Philosopher's stone
[ "Physics", "Chemistry" ]
3,114
[ "Stones", "Alchemical substances", "Magic items", "Mythological substances", "Physical objects", "Matter" ]
152,205
https://en.wikipedia.org/wiki/Forcing%20%28mathematics%29
In the mathematical discipline of set theory, forcing is a technique for proving consistency and independence results. Intuitively, forcing can be thought of as a technique to expand the set-theoretical universe V to a larger universe V* by introducing a new "generic" object G. Forcing was first used by Paul Cohen in 1963, to prove the independence of the axiom of choice and the continuum hypothesis from Zermelo–Fraenkel set theory. It has been considerably reworked and simplified in the following years, and has since served as a powerful technique, both in set theory and in areas of mathematical logic such as recursion theory. Descriptive set theory uses the notions of forcing from both recursion theory and set theory. Forcing has also been used in model theory, but it is common in model theory to define genericity directly without mention of forcing. Intuition Forcing is usually used to construct an expanded universe that satisfies some desired property. For example, the expanded universe might contain many new real numbers (at least ℵ₂ of them), identified with subsets of the set ℕ of natural numbers, that were not there in the old universe, and thereby violate the continuum hypothesis. In order to intuitively justify such an expansion, it is best to think of the "old universe" as a model M of the set theory, which is itself a set in the "real universe" V. By the Löwenheim–Skolem theorem, M can be chosen to be a "bare bones" model that is externally countable, which guarantees that there will be many subsets (in V) of ℕ that are not in M. Specifically, there is an ordinal ℵ₂ᴹ that "plays the role of the cardinal ℵ₂" in M, but is actually countable in V. Working in V, it should be easy to find one distinct subset of ℕ per each element of ℵ₂ᴹ. (For simplicity, this family of subsets can be characterized with a single subset X ⊆ ℵ₂ᴹ × ℕ.) However, in some sense, it may be desirable to "construct the expanded model M[X] within M". This would help ensure that M[X] "resembles" M in certain aspects, such as the ℵ₂ of M[X] being the same as the ℵ₂ of M (more generally, that cardinal collapse does not occur), and allow fine control over the properties of M[X]. More precisely, every member of M[X] should be given a (non-unique) name in M. The name can be thought of as an expression in terms of X, just like in a simple field extension L = K(θ) every element of L can be expressed in terms of θ. A major component of forcing is manipulating those names within M, so sometimes it may help to directly think of M as "the universe", knowing that the theory of forcing guarantees that M[X] will correspond to an actual model. A subtle point of forcing is that, if X is taken to be an arbitrary "missing subset" of some set in M, then the M[X] constructed "within M" may not even be a model. This is because X may encode "special" information about M that is invisible within M (e.g. the countability of M), and thus prove the existence of sets that are "too complex for M to describe". Forcing avoids such problems by requiring the newly introduced set X to be a generic set relative to M. Some statements are "forced" to hold for any generic X: For example, a generic X is "forced" to be infinite. Furthermore, any property (describable in M) of a generic set is "forced" to hold under some forcing condition. The concept of "forcing" can be defined within M, and it gives M enough reasoning power to prove that M[X] is indeed a model that satisfies the desired properties. Cohen's original technique, now called ramified forcing, is slightly different from the unramified forcing expounded here.
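Schematically, and in notation that is standard in the literature rather than fixed by this article, the situation described above can be summarized as follows.

```latex
% The forcing extension sits between the ground model and the background universe:
M \;\subseteq\; M[G] \;\subseteq\; V, \qquad M \models \mathrm{ZFC} \implies M[G] \models \mathrm{ZFC},
% and, for a suitable forcing notion adding at least \aleph_2 new reals,
M[G] \models 2^{\aleph_0} \geq \aleph_2, \quad\text{hence}\quad M[G] \models \neg\mathrm{CH}.
```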
Forcing is also equivalent to the method of Boolean-valued models, which some feel is conceptually more natural and intuitive, but usually much more difficult to apply. The role of the model In order for the above approach to work smoothly, must in fact be a standard transitive model in , so that membership and other elementary notions can be handled intuitively in both and . A standard transitive model can be obtained from any standard model through the Mostowski collapse lemma, but the existence of any standard model of (or any variant thereof) is in itself a stronger assumption than the consistency of . To get around this issue, a standard technique is to let be a standard transitive model of an arbitrary finite subset of (any axiomatization of has at least one axiom schema, and thus an infinite number of axioms), the existence of which is guaranteed by the reflection principle. As the goal of a forcing argument is to prove consistency results, this is enough since any inconsistency in a theory must manifest with a derivation of a finite length, and thus involve only a finite number of axioms. Forcing conditions and forcing posets Each forcing condition can be regarded as a finite piece of information regarding the object adjoined to the model. There are many different ways of providing information about an object, which give rise to different forcing notions. A general approach to formalizing forcing notions is to regard forcing conditions as abstract objects with a poset structure. A forcing poset is an ordered triple, , where is a preorder on , and is the largest element. Members of are the forcing conditions (or just conditions). The order relation means " is stronger than ". (Intuitively, the "smaller" condition provides "more" information, just as the smaller interval provides more information about the number than the interval does.) Furthermore, the preorder must be atomless, meaning that it must satisfy the splitting condition: For each , there are such that , with no such that . In other words, it must be possible to strengthen any forcing condition in at least two incompatible directions. Intuitively, this is because is only a finite piece of information, whereas an infinite piece of information is needed to determine . There are various conventions in use. Some authors require to also be antisymmetric, so that the relation is a partial order. Some use the term partial order anyway, conflicting with standard terminology, while some use the term preorder. The largest element can be dispensed with. The reverse ordering is also used, most notably by Saharon Shelah and his co-authors. Examples Let be any infinite set (such as ), and let the generic object in question be a new subset . In Cohen's original formulation of forcing, each forcing condition is a finite set of sentences, either of the form or , that are self-consistent (i.e. and for the same value of do not appear in the same condition). This forcing notion is usually called Cohen forcing. The forcing poset for Cohen forcing can be formally written as , the finite partial functions from to under reverse inclusion. Cohen forcing satisfies the splitting condition because given any condition , one can always find an element not mentioned in , and add either the sentence or to to get two new forcing conditions, incompatible with each other. Another instructive example of a forcing poset is , where and is the collection of Borel subsets of having non-zero Lebesgue measure. 
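As a concrete rendering of the Cohen forcing poset just described (a sketch in the usual notation, since the displayed formulas of the original did not survive extraction):

```latex
% Conditions: finite partial functions from S to {0,1}, ordered by reverse
% inclusion, with the empty condition as the largest element.
\mathbb{P} \;=\; \bigl(\operatorname{Fin}(S, 2),\ \supseteq,\ \varnothing\bigr).

% Splitting condition: given p, choose any a \in S \setminus \operatorname{dom}(p); then
p \cup \{(a, 0)\} \qquad\text{and}\qquad p \cup \{(a, 1)\}
% are two strengthenings of p that are incompatible with each other.
```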
The generic object associated with this forcing poset is a random real number . It can be shown that falls in every Borel subset of with measure 1, provided that the Borel subset is "described" in the original unexpanded universe (this can be formalized with the concept of Borel codes). Each forcing condition can be regarded as a random event with probability equal to its measure. Due to the ready intuition this example can provide, probabilistic language is sometimes used with other divergent forcing posets. Generic filters Even though each individual forcing condition cannot fully determine the generic object , the set of all true forcing conditions does determine . In fact, without loss of generality, is commonly considered to be the generic object adjoined to , so the expanded model is called . It is usually easy enough to show that the originally desired object is indeed in the model . Under this convention, the concept of "generic object" can be described in a general way. Specifically, the set should be a generic filter on relative to . The "filter" condition means that it makes sense that is a set of all true forcing conditions: if , then if , then there exists an such that For to be "generic relative to " means: If is a "dense" subset of (that is, for each , there exists a such that ), then . Given that is a countable model, the existence of a generic filter follows from the Rasiowa–Sikorski lemma. In fact, slightly more is true: Given a condition , one can find a generic filter such that . Due to the splitting condition on , if is a filter, then is dense. If , then because is a model of . For this reason, a generic filter is never in . P-names and interpretations Associated with a forcing poset is the class of -names. A -name is a set of the form Given any filter on , the interpretation or valuation map from -names is given by The -names are, in fact, an expansion of the universe. Given , one defines to be the -name Since , it follows that . In a sense, is a "name for " that does not depend on the specific choice of . This also allows defining a "name for " without explicitly referring to : so that . Rigorous definitions The concepts of -names, interpretations, and may be defined by transfinite recursion. With the empty set, the successor ordinal to ordinal , the power-set operator, and a limit ordinal, define the following hierarchy: Then the class of -names is defined as The interpretation map and the map can similarly be defined with a hierarchical construction. Forcing Given a generic filter , one proceeds as follows. The subclass of -names in is denoted . Let To reduce the study of the set theory of to that of , one works with the "forcing language", which is built up like ordinary first-order logic, with membership as the binary relation and all the -names as constants. Define (to be read as " forces in the model with poset "), where is a condition, is a formula in the forcing language, and the 's are -names, to mean that if is a generic filter containing , then . The special case is often written as "" or simply "". Such statements are true in , no matter what is. What is important is that this external definition of the forcing relation is equivalent to an internal definition within , defined by transfinite induction (specifically -induction) over the -names on instances of and , and then by ordinary induction over the complexity of formulae. This has the effect that all the properties of are really properties of , and the verification of in becomes straightforward. 
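The valuation map and the canonical names referred to above are standardly given as follows (a reconstruction of the usual definitions, e.g. in Kunen's presentation, since the article's displayed formulas were lost):

```latex
% A P-name is (hereditarily) a set of pairs (v, p) with v a P-name and p \in \mathbb{P}.
% Valuation of a name u by a filter G:
\operatorname{val}(u, G) \;=\; \{\, \operatorname{val}(v, G) : (v, p) \in u \ \text{and}\ p \in G \,\}.

% Canonical name for a ground-model set x, and a name for the generic filter itself:
\check{x} \;=\; \{\, (\check{y}, \mathbf{1}) : y \in x \,\},
\qquad
\dot{G} \;=\; \{\, (\check{p}, p) : p \in \mathbb{P} \,\},
% so that val(\check{x}, G) = x and val(\dot{G}, G) = G for any filter G containing 1.
```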
This is usually summarized as the following three key properties: Truth: if and only if it is forced by , that is, for some condition , we have . Definability: The statement "" is definable in . Coherence: . Internal definition There are many different but equivalent ways to define the forcing relation in . One way to simplify the definition is to first define a modified forcing relation that is strictly stronger than . The modified relation still satisfies the three key properties of forcing, but and are not necessarily equivalent even if the first-order formulae and are equivalent. The unmodified forcing relation can then be defined as In fact, Cohen's original concept of forcing is essentially rather than . The modified forcing relation can be defined recursively as follows: means means means means means Other symbols of the forcing language can be defined in terms of these symbols: For example, means , means , etc. Cases 1 and 2 depend on each other and on case 3, but the recursion always refer to -names with lesser ranks, so transfinite induction allows the definition to go through. By construction, (and thus ) automatically satisfies Definability. The proof that also satisfies Truth and Coherence is by inductively inspecting each of the five cases above. Cases 4 and 5 are trivial (thanks to the choice of and as the elementary symbols), cases 1 and 2 relies only on the assumption that is a filter, and only case 3 requires to be a generic filter. Formally, an internal definition of the forcing relation (such as the one presented above) is actually a transformation of an arbitrary formula to another formula where and are additional variables. The model does not explicitly appear in the transformation (note that within , just means " is a -name"), and indeed one may take this transformation as a "syntactic" definition of the forcing relation in the universe of all sets regardless of any countable transitive model. However, if one wants to force over some countable transitive model , then the latter formula should be interpreted under (i.e. with all quantifiers ranging only over ), in which case it is equivalent to the external "semantic" definition of described at the top of this section: For any formula there is a theorem of the theory (for example conjunction of finite number of axioms) such that for any countable transitive model such that and any atomless partial order and any -generic filter over This the sense under which the forcing relation is indeed "definable in ". Consistency The discussion above can be summarized by the fundamental consistency result that, given a forcing poset , we may assume the existence of a generic filter , not belonging to the universe , such that is again a set-theoretic universe that models . Furthermore, all truths in may be reduced to truths in involving the forcing relation. Both styles, adjoining to either a countable transitive model or the whole universe , are commonly used. Less commonly seen is the approach using the "internal" definition of forcing, in which no mention of set or class models is made. This was Cohen's original method, and in one elaboration, it becomes the method of Boolean-valued analysis. Cohen forcing The simplest nontrivial forcing poset is , the finite partial functions from to under reverse inclusion. That is, a condition is essentially two disjoint finite subsets and of , to be thought of as the "yes" and "no" parts of with no information provided on values outside the domain of . 
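Written out with the usual symbols (again a standard reconstruction of the three key properties stated above):

```latex
% Truth: what holds in M[G] is exactly what some condition in G forces,
M[G] \models \varphi\bigl(\operatorname{val}(u_1, G), \ldots\bigr)
\iff \exists p \in G :\; p \Vdash_{M, \mathbb{P}} \varphi(u_1, \ldots).

% Definability: the relation "p \Vdash \varphi(u_1, \ldots)" is definable within M.

% Coherence: forcing persists under strengthening of the condition,
\bigl( p \Vdash \varphi(u_1, \ldots) \ \text{and}\ q \leq p \bigr)
\implies q \Vdash \varphi(u_1, \ldots).
```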
" is stronger than " means that , in other words, the "yes" and "no" parts of are supersets of the "yes" and "no" parts of , and in that sense, provide more information. Let be a generic filter for this poset. If and are both in , then is a condition because is a filter. This means that is a well-defined partial function from to because any two conditions in agree on their common domain. In fact, is a total function. Given , let . Then is dense. (Given any , if is not in 's domain, adjoin a value for —the result is in .) A condition has in its domain, and since , we find that is defined. Let , the set of all "yes" members of the generic conditions. It is possible to give a name for directly. Let Then Now suppose that in . We claim that . Let Then is dense. (Given any , find that is not in its domain, and adjoin a value for contrary to the status of "".) Then any witnesses . To summarize, is a "new" subset of , necessarily infinite. Replacing with , that is, consider instead finite partial functions whose inputs are of the form , with and , and whose outputs are or , one gets new subsets of . They are all distinct, by a density argument: Given , let then each is dense, and a generic condition in it proves that the αth new set disagrees somewhere with the th new set. This is not yet the falsification of the continuum hypothesis. One must prove that no new maps have been introduced which map onto , or onto . For example, if one considers instead , finite partial functions from to , the first uncountable ordinal, one gets in a bijection from to . In other words, has collapsed, and in the forcing extension, is a countable ordinal. The last step in showing the independence of the continuum hypothesis, then, is to show that Cohen forcing does not collapse cardinals. For this, a sufficient combinatorial property is that all of the antichains of the forcing poset are countable. The countable chain condition An (strong) antichain of is a subset such that if and , then and are incompatible (written ), meaning there is no in such that and . In the example on Borel sets, incompatibility means that has zero measure. In the example on finite partial functions, incompatibility means that is not a function, in other words, and assign different values to some domain input. satisfies the countable chain condition (c.c.c.) if and only if every antichain in is countable. (The name, which is obviously inappropriate, is a holdover from older terminology. Some mathematicians write "c.a.c." for "countable antichain condition".) It is easy to see that satisfies the c.c.c. because the measures add up to at most . Also, satisfies the c.c.c., but the proof is more difficult. Given an uncountable subfamily , shrink to an uncountable subfamily of sets of size , for some . If for uncountably many , shrink this to an uncountable subfamily and repeat, getting a finite set and an uncountable family of incompatible conditions of size such that every is in for at most countable many . Now, pick an arbitrary , and pick from any that is not one of the countably many members that have a domain member in common with . Then and are compatible, so is not an antichain. In other words, -antichains are countable. The importance of antichains in forcing is that for most purposes, dense sets and maximal antichains are equivalent. A maximal antichain is one that cannot be extended to a larger antichain. This means that every element is compatible with some member of . The existence of a maximal antichain follows from Zorn's Lemma. 
Given a maximal antichain , let Then is dense, and if and only if . Conversely, given a dense set , Zorn's Lemma shows that there exists a maximal antichain , and then if and only if . Assume that satisfies the c.c.c. Given , with a function in , one can approximate inside as follows. Let be a name for (by the definition of ) and let be a condition that forces to be a function from to . Define a function , by By the definability of forcing, this definition makes sense within . By the coherence of forcing, a different come from an incompatible . By c.c.c., is countable. In summary, is unknown in as it depends on , but it is not wildly unknown for a c.c.c.-forcing. One can identify a countable set of guesses for what the value of is at any input, independent of . This has the following very important consequence. If in , is a surjection from one infinite ordinal onto another, then there is a surjection in , and consequently, a surjection in . In particular, cardinals cannot collapse. The conclusion is that in . Easton forcing The exact value of the continuum in the above Cohen model, and variants like for cardinals in general, was worked out by Robert M. Solovay, who also worked out how to violate (the generalized continuum hypothesis), for regular cardinals only, a finite number of times. For example, in the above Cohen model, if holds in , then holds in . William B. Easton worked out the proper class version of violating the for regular cardinals, basically showing that the known restrictions, (monotonicity, Cantor's Theorem and König's Theorem), were the only -provable restrictions (see Easton's Theorem). Easton's work was notable in that it involved forcing with a proper class of conditions. In general, the method of forcing with a proper class of conditions fails to give a model of . For example, forcing with , where is the proper class of all ordinals, makes the continuum a proper class. On the other hand, forcing with introduces a countable enumeration of the ordinals. In both cases, the resulting is visibly not a model of . At one time, it was thought that more sophisticated forcing would also allow an arbitrary variation in the powers of singular cardinals. However, this has turned out to be a difficult, subtle and even surprising problem, with several more restrictions provable in and with the forcing models depending on the consistency of various large-cardinal properties. Many open problems remain. Random reals Random forcing can be defined as forcing over the set of all compact subsets of of positive measure ordered by relation (smaller set in context of inclusion is smaller set in ordering and represents condition with more information). There are two types of important dense sets: For any positive integer the set is dense, where is diameter of the set . For any Borel subset of measure 1, the set is dense. For any filter and for any finitely many elements there is such that holds . In case of this ordering, this means that any filter is set of compact sets with finite intersection property. For this reason, intersection of all elements of any filter is nonempty. If is a filter intersecting the dense set for any positive integer , then the filter contains conditions of arbitrarily small positive diameter. Therefore, the intersection of all conditions from has diameter 0. But the only nonempty sets of diameter 0 are singletons. So there is exactly one real number such that . Let be any Borel set of measure 1. If intersects , then . 
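The approximation argument for c.c.c. forcings described above has the following standard form (the article's own displayed formula did not survive, so this is a reconstruction):

```latex
% Suppose p forces that \dot{f} names a function from A to B. Working in M, set
F(a) \;=\; \{\, b \in B : \exists q \leq p \ \ q \Vdash \dot{f}(\check{a}) = \check{b} \,\}.

% Conditions forcing different values \dot{f}(\check{a}) = \check{b} are pairwise
% incompatible, hence form an antichain; by the c.c.c. each F(a) is countable,
% and in M[G] the actual value always lands in the ground-model guess set:
f(a) \in F(a) \quad \text{for all } a \in A.
```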
However, a generic filter over a countable transitive model is not in . The real defined by is provably not an element of . The problem is that if , then " is compact", but from the viewpoint of some larger universe , can be non-compact and the intersection of all conditions from the generic filter is actually empty. For this reason, we consider the set of topological closures of conditions from G (i.e., ). Because of and the finite intersection property of , the set also has the finite intersection property. Elements of the set are bounded closed sets as closures of bounded sets. Therefore, is a set of compact sets with the finite intersection property and thus has nonempty intersection. Since and the ground model inherits a metric from the universe , the set has elements of arbitrarily small diameter. Finally, there is exactly one real that belongs to all members of the set . The generic filter can be reconstructed from as . If is name of , and for holds " is Borel set of measure 1", then holds for some . There is name such that for any generic filter holds Then holds for any condition . Every Borel set can, non-uniquely, be built up, starting from intervals with rational endpoints and applying the operations of complement and countable unions, a countable number of times. The record of such a construction is called a Borel code. Given a Borel set in , one recovers a Borel code, and then applies the same construction sequence in , getting a Borel set . It can be proven that one gets the same set independent of the construction of , and that basic properties are preserved. For example, if , then . If has measure zero, then has measure zero. This mapping is injective. For any set such that and " is a Borel set of measure 1" holds . This means that is "infinite random sequence of 0s and 1s" from the viewpoint of , which means that it satisfies all statistical tests from the ground model . So given , a random real, one can show that Because of the mutual inter-definability between and , one generally writes for . A different interpretation of reals in was provided by Dana Scott. Rational numbers in have names that correspond to countably-many distinct rational values assigned to a maximal antichain of Borel sets – in other words, a certain rational-valued function on . Real numbers in then correspond to Dedekind cuts of such functions, that is, measurable functions. Boolean-valued models Perhaps more clearly, the method can be explained in terms of Boolean-valued models. In these, any statement is assigned a truth value from some complete atomless Boolean algebra, rather than just a true/false value. Then an ultrafilter is picked in this Boolean algebra, which assigns values true/false to statements of our theory. The point is that the resulting theory has a model that contains this ultrafilter, which can be understood as a new model obtained by extending the old one with this ultrafilter. By picking a Boolean-valued model in an appropriate way, we can get a model that has the desired property. In it, only statements that must be true (are "forced" to be true) will be true, in a sense (since it has this extension/minimality property). Meta-mathematical explanation In forcing, we usually seek to show that some sentence is consistent with (or optionally some extension of ). One way to interpret the argument is to assume that is consistent and then prove that combined with the new sentence is also consistent. 
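One standard presentation of the random forcing poset and its dense sets, matching the description above (the choice of the unit interval as ambient space is an assumption; some treatments use all of the reals):

```latex
\mathbb{P} \;=\; \{\, C \subseteq [0,1] : C \ \text{compact},\ \mu(C) > 0 \,\},
\qquad C' \leq C \iff C' \subseteq C.

% The two families of dense sets used above:
D_n \;=\; \{\, C \in \mathbb{P} : \operatorname{diam}(C) < 1/n \,\},
\qquad
D_B \;=\; \{\, C \in \mathbb{P} : C \subseteq B \,\} \ \ \text{for Borel } B \text{ with } \mu(B) = 1,
% so the random real is the unique point in the intersection of the
% (closures of the) conditions in the generic filter.
```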
Each "condition" is a finite piece of information – the idea is that only finite pieces are relevant for consistency, since, by the compactness theorem, a theory is satisfiable if and only if every finite subset of its axioms is satisfiable. Then we can pick an infinite set of consistent conditions to extend our model. Therefore, assuming the consistency of , we prove the consistency of extended by this infinite set. Logical explanation By Gödel's second incompleteness theorem, one cannot prove the consistency of any sufficiently strong formal theory, such as , using only the axioms of the theory itself, unless the theory is inconsistent. Consequently, mathematicians do not attempt to prove the consistency of using only the axioms of , or to prove that is consistent for any hypothesis using only . For this reason, the aim of a consistency proof is to prove the consistency of relative to the consistency of . Such problems are known as problems of relative consistency, one of which proves The general schema of relative consistency proofs follows. As any proof is finite, it uses only a finite number of axioms: For any given proof, can verify the validity of this proof. This is provable by induction on the length of the proof. Then resolve By proving the following it can be concluded that which is equivalent to which gives (*). The core of the relative consistency proof is proving (**). A proof of can be constructed for any given finite subset of the axioms (by instruments of course). (No universal proof of of course.) In , it is provable that for any condition , the set of formulas (evaluated by names) forced by is deductively closed. Furthermore, for any axiom, proves that this axiom is forced by . Then it suffices to prove that there is at least one condition that forces . In the case of Boolean-valued forcing, the procedure is similar: proving that the Boolean value of is not . Another approach uses the Reflection Theorem. For any given finite set of axioms, there is a proof that this set of axioms has a countable transitive model. For any given finite set of axioms, there is a finite set of axioms such that proves that if a countable transitive model satisfies , then satisfies . By proving that there is finite set of axioms such that if a countable transitive model satisfies , then satisfies the hypothesis . Then, for any given finite set of axioms, proves . Sometimes in (**), a stronger theory than is used for proving . Then we have proof of the consistency of relative to the consistency of . Note that , where is (the axiom of constructibility). See also List of forcing notions Nice name Notes References Bibliography
Forcing (mathematics)
[ "Mathematics" ]
5,873
[ "Forcing (mathematics)", "Mathematical logic" ]
152,207
https://en.wikipedia.org/wiki/Compactness%20theorem
In mathematical logic, the compactness theorem states that a set of first-order sentences has a model if and only if every finite subset of it has a model. This theorem is an important tool in model theory, as it provides a useful (but generally not effective) method for constructing models of any set of sentences that is finitely consistent. The compactness theorem for the propositional calculus is a consequence of Tychonoff's theorem (which says that the product of compact spaces is compact) applied to compact Stone spaces, hence the theorem's name. Likewise, it is analogous to the finite intersection property characterization of compactness in topological spaces: a collection of closed sets in a compact space has a non-empty intersection if every finite subcollection has a non-empty intersection. The compactness theorem is one of the two key properties, along with the downward Löwenheim–Skolem theorem, that is used in Lindström's theorem to characterize first-order logic. Although there are some generalizations of the compactness theorem to non-first-order logics, the compactness theorem itself does not hold in them, except for a very limited number of examples. History Kurt Gödel proved the countable compactness theorem in 1930. Anatoly Maltsev proved the uncountable case in 1936. Applications The compactness theorem has many applications in model theory; a few typical results are sketched here. Robinson's principle The compactness theorem implies the following result, stated by Abraham Robinson in his 1949 dissertation. Robinson's principle: If a first-order sentence holds in every field of characteristic zero, then there exists a constant p such that the sentence holds for every field of characteristic larger than p. This can be seen as follows: suppose φ is a sentence that holds in every field of characteristic zero. Then its negation ¬φ, together with the field axioms and the infinite sequence of sentences 1 + 1 ≠ 0, 1 + 1 + 1 ≠ 0, …, is not satisfiable (because there is no field of characteristic 0 in which ¬φ holds, and the infinite sequence of sentences ensures any model would be a field of characteristic 0). Therefore, there is a finite subset A of these sentences that is not satisfiable. A must contain ¬φ because otherwise it would be satisfiable. Because adding more sentences to A does not change unsatisfiability, we can assume that A contains the field axioms and, for some k, the first k sentences of the form 1 + 1 + ⋯ + 1 ≠ 0. Let B contain all the sentences of A except ¬φ. Then any field with a characteristic greater than k is a model of B, and B together with ¬φ is not satisfiable. This means that φ must hold in every model of B, which means precisely that φ holds in every field of characteristic greater than k. This completes the proof. The Lefschetz principle, one of the first examples of a transfer principle, extends this result. A first-order sentence φ in the language of rings is true in some (or equivalently, in every) algebraically closed field of characteristic 0 (such as the complex numbers for instance) if and only if there exist infinitely many primes p for which φ is true in some algebraically closed field of characteristic p, in which case φ is true in all algebraically closed fields of sufficiently large non-0 characteristic. One consequence is the following special case of the Ax–Grothendieck theorem: all injective complex polynomials are surjective (indeed, it can even be shown that the inverse will also be a polynomial). In fact, the surjectivity conclusion remains true for any injective polynomial Fⁿ → Fⁿ, where F is a finite field or the algebraic closure of such a field.
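The compactness argument behind Robinson's principle can be displayed as follows (a worked restatement of the proof above; the symbol FieldAxioms is shorthand introduced here):

```latex
% Sentences asserting that the characteristic is not n:
\varphi_n :\ \underbrace{1 + 1 + \cdots + 1}_{n} \neq 0 \qquad (n = 2, 3, \ldots).

% If \sigma holds in every field of characteristic 0, then
\{\neg\sigma\} \,\cup\, \mathrm{FieldAxioms} \,\cup\, \{\varphi_n : n \geq 2\}
% has no model; by compactness some finite subset, involving only
% \varphi_2, \ldots, \varphi_k, is already unsatisfiable, so \sigma holds in
% every field of characteristic greater than k.
```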
Upward Löwenheim–Skolem theorem A second application of the compactness theorem shows that any theory that has arbitrarily large finite models, or a single infinite model, has models of arbitrary large cardinality (this is the Upward Löwenheim–Skolem theorem). So for instance, there are nonstandard models of Peano arithmetic with uncountably many 'natural numbers'. To achieve this, let be the initial theory and let be any cardinal number. Add to the language of one constant symbol for every element of Then add to a collection of sentences that say that the objects denoted by any two distinct constant symbols from the new collection are distinct (this is a collection of sentences). Since every subset of this new theory is satisfiable by a sufficiently large finite model of or by any infinite model, the entire extended theory is satisfiable. But any model of the extended theory has cardinality at least . Non-standard analysis A third application of the compactness theorem is the construction of nonstandard models of the real numbers, that is, consistent extensions of the theory of the real numbers that contain "infinitesimal" numbers. To see this, let be a first-order axiomatization of the theory of the real numbers. Consider the theory obtained by adding a new constant symbol to the language and adjoining to the axiom and the axioms for all positive integers Clearly, the standard real numbers are a model for every finite subset of these axioms, because the real numbers satisfy everything in and, by suitable choice of can be made to satisfy any finite subset of the axioms about By the compactness theorem, there is a model that satisfies and also contains an infinitesimal element A similar argument, this time adjoining the axioms etc., shows that the existence of numbers with infinitely large magnitudes cannot be ruled out by any axiomatization of the reals. It can be shown that the hyperreal numbers satisfy the transfer principle: a first-order sentence is true of if and only if it is true of Proofs One can prove the compactness theorem using Gödel's completeness theorem, which establishes that a set of sentences is satisfiable if and only if no contradiction can be proven from it. Since proofs are always finite and therefore involve only finitely many of the given sentences, the compactness theorem follows. In fact, the compactness theorem is equivalent to Gödel's completeness theorem, and both are equivalent to the Boolean prime ideal theorem, a weak form of the axiom of choice. Gödel originally proved the compactness theorem in just this way, but later some "purely semantic" proofs of the compactness theorem were found; that is, proofs that refer to but not to . One of those proofs relies on ultraproducts hinging on the axiom of choice as follows: Proof: Fix a first-order language and let be a collection of -sentences such that every finite subcollection of -sentences, of it has a model Also let be the direct product of the structures and be the collection of finite subsets of For each let The family of all of these sets generates a proper filter, so there is an ultrafilter containing all sets of the form Now for any sentence in the set is in whenever then hence holds in the set of all with the property that holds in is a superset of hence also in Łoś's theorem now implies that holds in the ultraproduct So this ultraproduct satisfies all formulas in See also Notes References External links Compactness Theorem, Internet Encyclopedia of Philosophy. 
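The infinitesimal construction sketched above can be written out explicitly (a standard worked example, in the usual notation):

```latex
% Adjoin a fresh constant \varepsilon to a first-order theory of the reals, with axioms
0 < \varepsilon, \qquad \varepsilon < \tfrac{1}{n} \quad (n = 1, 2, 3, \ldots).

% Any finite subset mentions only finitely many bounds, say down to 1/k, and is
% satisfied in \mathbb{R} by interpreting \varepsilon as 1/(k+1); by compactness the
% whole theory has a model, in which \varepsilon is a positive infinitesimal.
```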
Mathematical logic Metatheorems Model theory Theorems in the foundations of mathematics
Compactness theorem
[ "Mathematics" ]
1,476
[ "Foundations of mathematics", "Mathematical logic", "Model theory", "Mathematical problems", "Mathematical theorems", "Theorems in the foundations of mathematics" ]
152,214
https://en.wikipedia.org/wiki/Zermelo%E2%80%93Fraenkel%20set%20theory
In set theory, Zermelo–Fraenkel set theory, named after mathematicians Ernst Zermelo and Abraham Fraenkel, is an axiomatic system that was proposed in the early twentieth century in order to formulate a theory of sets free of paradoxes such as Russell's paradox. Today, Zermelo–Fraenkel set theory, with the historically controversial axiom of choice (AC) included, is the standard form of axiomatic set theory and as such is the most common foundation of mathematics. Zermelo–Fraenkel set theory with the axiom of choice included is abbreviated ZFC, where C stands for "choice", and ZF refers to the axioms of Zermelo–Fraenkel set theory with the axiom of choice excluded. Informally, Zermelo–Fraenkel set theory is intended to formalize a single primitive notion, that of a hereditary well-founded set, so that all entities in the universe of discourse are such sets. Thus the axioms of Zermelo–Fraenkel set theory refer only to pure sets and prevent its models from containing urelements (elements that are not themselves sets). Furthermore, proper classes (collections of mathematical objects defined by a property shared by their members where the collections are too big to be sets) can only be treated indirectly. Specifically, Zermelo–Fraenkel set theory does not allow for the existence of a universal set (a set containing all sets) nor for unrestricted comprehension, thereby avoiding Russell's paradox. Von Neumann–Bernays–Gödel set theory (NBG) is a commonly used conservative extension of Zermelo–Fraenkel set theory that does allow explicit treatment of proper classes. There are many equivalent formulations of the axioms of Zermelo–Fraenkel set theory. Most of the axioms state the existence of particular sets defined from other sets. For example, the axiom of pairing says that given any two sets a and b there is a new set {a, b} containing exactly a and b. Other axioms describe properties of set membership. A goal of the axioms is that each axiom should be true if interpreted as a statement about the collection of all sets in the von Neumann universe (also known as the cumulative hierarchy). The metamathematics of Zermelo–Fraenkel set theory has been extensively studied. Landmark results in this area established the logical independence of the axiom of choice from the remaining Zermelo-Fraenkel axioms and of the continuum hypothesis from ZFC. The consistency of a theory such as ZFC cannot be proved within the theory itself, as shown by Gödel's second incompleteness theorem. History The modern study of set theory was initiated by Georg Cantor and Richard Dedekind in the 1870s. However, the discovery of paradoxes in naive set theory, such as Russell's paradox, led to the desire for a more rigorous form of set theory that was free of these paradoxes. In 1908, Ernst Zermelo proposed the first axiomatic set theory, Zermelo set theory. However, as first pointed out by Abraham Fraenkel in a 1921 letter to Zermelo, this theory was incapable of proving the existence of certain sets and cardinal numbers whose existence was taken for granted by most set theorists of the time, notably the cardinal number ℵω and the set {Z, 𝒫(Z), 𝒫(𝒫(Z)), …}, where Z is any infinite set and 𝒫 is the power set operation. Moreover, one of Zermelo's axioms invoked a concept, that of a "definite" property, whose operational meaning was not clear.
In 1922, Fraenkel and Thoralf Skolem independently proposed operationalizing a "definite" property as one that could be formulated as a well-formed formula in a first-order logic whose atomic formulas were limited to set membership and identity. They also independently proposed replacing the axiom schema of specification with the axiom schema of replacement. Appending this schema, as well as the axiom of regularity (first proposed by John von Neumann), to Zermelo set theory yields the theory denoted by ZF. Adding to ZF either the axiom of choice (AC) or a statement that is equivalent to it yields ZFC. Formal language Formally, ZFC is a one-sorted theory in first-order logic. The equality symbol can be treated as either a primitive logical symbol or a high-level abbreviation for having exactly the same elements. The former approach is the most common. The signature has a single predicate symbol, usually denoted , which is a predicate symbol of arity 2 (a binary relation symbol). This symbol symbolizes a set membership relation. For example, the formula means that is an element of the set (also read as is a member of ). There are different ways to formulate the formal language. Some authors may choose a different set of connectives or quantifiers. For example, the logical connective NAND alone can encode the other connectives, a property known as functional completeness. This section attempts to strike a balance between simplicity and intuitiveness. The language's alphabet consists of: A countably infinite amount of variables used for representing sets The logical connectives , , The quantifier symbols , The equality symbol The set membership symbol Brackets ( ) With this alphabet, the recursive rules for forming well-formed formulae (wff) are as follows: Let and be metavariables for any variables. These are the two ways to build atomic formulae (the simplest wffs): Let and be metavariables for any wff, and be a metavariable for any variable. These are valid wff constructions: A well-formed formula can be thought as a syntax tree. The leaf nodes are always atomic formulae. Nodes and have exactly two child nodes, while nodes , and have exactly one. There are countably infinitely many wffs, however, each wff has a finite number of nodes. Axioms There are many equivalent formulations of the ZFC axioms. The following particular axiom set is from . The axioms in order below are expressed in a mixture of first order logic and high-level abbreviations. Axioms 1–8 form ZF, while the axiom 9 turns ZF into ZFC. Following , we use the equivalent well-ordering theorem in place of the axiom of choice for axiom 9. All formulations of ZFC imply that at least one set exists. Kunen includes an axiom that directly asserts the existence of a set, although he notes that he does so only "for emphasis". Its omission here can be justified in two ways. First, in the standard semantics of first-order logic in which ZFC is typically formalized, the domain of discourse must be nonempty. Hence, it is a logical theorem of first-order logic that something exists — usually expressed as the assertion that something is identical to itself, . Consequently, it is a theorem of every first-order theory that something exists. However, as noted above, because in the intended semantics of ZFC, there are only sets, the interpretation of this logical theorem in the context of ZFC is that some set exists. Hence, there is no need for a separate axiom asserting that a set exists. 
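The formation rules listed above amount to the following grammar (a BNF-style reconstruction; the particular choice of three connectives is an assumption, since the original symbols were stripped):

```latex
% x, y range over variables; \varphi, \psi range over well-formed formulae.
\varphi \;::=\; x \in y \;\mid\; x = y
\;\mid\; \lnot\varphi \;\mid\; (\varphi \land \psi) \;\mid\; (\varphi \lor \psi)
\;\mid\; \exists x\, \varphi \;\mid\; \forall x\, \varphi
```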
Second, however, even if ZFC is formulated in so-called free logic, in which it is not provable from logic alone that something exists, the axiom of infinity asserts that an infinite set exists. This implies that a set exists, and so, once again, it is superfluous to include an axiom asserting as much. Axiom of extensionality Two sets are equal (are the same set) if they have the same elements. The converse of this axiom follows from the substitution property of equality. ZFC is constructed in first-order logic. Some formulations of first-order logic include identity; others do not. If the variety of first-order logic in which you are constructing set theory does not include equality "", may be defined as an abbreviation for the following formula: In this case, the axiom of extensionality can be reformulated as which says that if and have the same elements, then they belong to the same sets. Axiom of regularity (also called the axiom of foundation) Every non-empty set contains a member such that and are disjoint sets. or in modern notation: This (along with the axioms of pairing and union) implies, for example, that no set is an element of itself and that every set has an ordinal rank. Axiom schema of specification (or of separation, or of restricted comprehension) Subsets are commonly constructed using set builder notation. For example, the even integers can be constructed as the subset of the integers satisfying the congruence modulo predicate : In general, the subset of a set obeying a formula with one free variable may be written as: The axiom schema of specification states that this subset always exists (it is an axiom schema because there is one axiom for each ). Formally, let be any formula in the language of ZFC with all free variables among ( is not free in ). Then: Note that the axiom schema of specification can only construct subsets and does not allow the construction of entities of the more general form: This restriction is necessary to avoid Russell's paradox (let then ) and its variants that accompany naive set theory with unrestricted comprehension (since under this restriction only refers to sets within that don't belong to themselves, and has not been established, even though is the case, so stands in a separate position from which it can't refer to or comprehend itself; therefore, in a certain sense, this axiom schema is saying that in order to build a on the basis of a formula , we need to previously restrict the sets will regard within a set that leaves outside so can't refer to itself; or, in other words, sets shouldn't refer to themselves). In some other axiomatizations of ZF, this axiom is redundant in that it follows from the axiom schema of replacement and the axiom of the empty set. On the other hand, the axiom schema of specification can be used to prove the existence of the empty set, denoted , once at least one set is known to exist. One way to do this is to use a property which no set has. For example, if is any existing set, the empty set can be constructed as Thus, the axiom of the empty set is implied by the nine axioms presented here. The axiom of extensionality implies the empty set is unique (does not depend on ). It is common to make a definitional extension that adds the symbol "" to the language of ZFC. 
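For reference, the standard formal statements of the first three axioms discussed above (in a Kunen-style formulation; the article's own displayed formulas were lost in extraction):

```latex
% Extensionality:
\forall x \, \forall y \, [\, \forall z \, (z \in x \Leftrightarrow z \in y) \Rightarrow x = y \,]

% Regularity (foundation):
\forall x \, [\, \exists a \, (a \in x) \Rightarrow
\exists y \, (y \in x \wedge \lnot \exists z \, (z \in y \wedge z \in x)) \,]

% Specification schema, one axiom per formula \varphi with free variables
% among x, w_1, \ldots, w_n, z (y not free in \varphi):
\forall z \, \forall w_1 \ldots \forall w_n \, \exists y \, \forall x \,
[\, x \in y \Leftrightarrow (x \in z \wedge \varphi(x, w_1, \ldots, w_n, z)) \,]
```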
Axiom of pairing If and are sets, then there exists a set which contains and as elements, for example if x = {1,2} and y = {2,3} then z will be {{1,2},{2,3}} The axiom schema of specification must be used to reduce this to a set with exactly these two elements. The axiom of pairing is part of Z, but is redundant in ZF because it follows from the axiom schema of replacement if we are given a set with at least two elements. The existence of a set with at least two elements is assured by either the axiom of infinity, or by the and the axiom of the power set applied twice to any set. Axiom of union The union over the elements of a set exists. For example, the union over the elements of the set is The axiom of union states that for any set of sets , there is a set containing every element that is a member of some member of : Although this formula doesn't directly assert the existence of , the set can be constructed from in the above using the axiom schema of specification: Axiom schema of replacement The axiom schema of replacement asserts that the image of a set under any definable function will also fall inside a set. Formally, let be any formula in the language of ZFC whose free variables are among so that in particular is not free in . Then: (The unique existential quantifier denotes the existence of exactly one element such that it follows a given statement.) In other words, if the relation represents a definable function , represents its domain, and is a set for every then the range of is a subset of some set . The form stated here, in which may be larger than strictly necessary, is sometimes called the axiom schema of collection. Axiom of infinity Let abbreviate where is some set. (We can see that is a valid set by applying the axiom of pairing with so that the set is ). Then there exists a set such that the empty set , defined axiomatically, is a member of and, whenever a set is a member of then is also a member of . or in modern notation: More colloquially, there exists a set having infinitely many members. (It must be established, however, that these members are all different because if two elements are the same, the sequence will loop around in a finite cycle of sets. The axiom of regularity prevents this from happening.) The minimal set satisfying the axiom of infinity is the von Neumann ordinal which can also be thought of as the set of natural numbers Axiom of power set By definition, a set is a subset of a set if and only if every element of is also an element of : The Axiom of power set states that for any set , there is a set that contains every subset of : The axiom schema of specification is then used to define the power set as the subset of such a containing the subsets of exactly: Axioms 1–8 define ZF. Alternative forms of these axioms are often encountered, some of which are listed in . Some ZF axiomatizations include an axiom asserting that the empty set exists. The axioms of pairing, union, replacement, and power set are often stated so that the members of the set whose existence is being asserted are just those sets which the axiom asserts must contain. The following axiom is added to turn ZF into ZFC: Axiom of well-ordering (choice) The last axiom, commonly known as the axiom of choice, is presented here as a property about well-orders, as in . For any set , there exists a binary relation which well-orders . This means is a linear order on such that every nonempty subset of has a least element under the order . 
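The remaining ZF axioms described above, in one standard formal rendering (details such as the exact form of replacement vary between presentations):

```latex
% Pairing:
\forall x \, \forall y \, \exists z \, (x \in z \wedge y \in z)

% Union:
\forall \mathcal{F} \, \exists A \, \forall Y \, \forall x \,
[\, (x \in Y \wedge Y \in \mathcal{F}) \Rightarrow x \in A \,]

% Replacement schema (B not free in \varphi; \exists! is unique existence):
\forall A \, \forall w_1 \ldots \forall w_n \,
[\, \forall x \, (x \in A \Rightarrow \exists! y \, \varphi) \Rightarrow
\exists B \, \forall x \, (x \in A \Rightarrow \exists y \, (y \in B \wedge \varphi)) \,]

% Infinity, with S(w) abbreviating w \cup \{w\}:
\exists X \, [\, \varnothing \in X \wedge \forall y \, (y \in X \Rightarrow S(y) \in X) \,]

% Power set:
\forall x \, \exists y \, \forall z \, [\, z \subseteq x \Rightarrow z \in y \,]
```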
Given axioms 1–8, many statements are provably equivalent to axiom 9. The most common of these goes as follows. Let X be a set whose members are all nonempty. Then there exists a function f from X to the union of the members of X, called a "choice function", such that for all Y ∈ X one has f(Y) ∈ Y. A third version of the axiom, also equivalent, is Zorn's lemma. Since the existence of a choice function when X is a finite set is easily proved from axioms 1–8, AC only matters for certain infinite sets. AC is characterized as nonconstructive because it asserts the existence of a choice function but says nothing about how this choice function is to be "constructed". Motivation via the cumulative hierarchy One motivation for the ZFC axioms is the cumulative hierarchy of sets introduced by John von Neumann. In this viewpoint, the universe of set theory is built up in stages, with one stage for each ordinal number (the stages are written out as a transfinite recursion following this passage). At stage 0, there are no sets yet. At each following stage, a set is added to the universe if all of its elements have been added at previous stages. Thus the empty set is added at stage 1, and the set containing the empty set is added at stage 2. The collection of all sets that are obtained in this way, over all the stages, is known as V. The sets in V can be arranged into a hierarchy by assigning to each set the first stage at which that set was added to V. It is provable that a set is in V if and only if the set is pure and well-founded. And V satisfies all the axioms of ZFC if the class of ordinals has appropriate reflection properties. For example, suppose that a set x is added at stage α, which means that every element of x was added at a stage earlier than α. Then, every subset of x is also added at (or before) stage α, because all elements of any subset of x were also added before stage α. This means that any subset of x which the axiom of separation can construct is added at (or before) stage α, and that the powerset of x will be added at the next stage after α. The picture of the universe of sets stratified into the cumulative hierarchy is characteristic of ZFC and related axiomatic set theories such as Von Neumann–Bernays–Gödel set theory (often called NBG) and Morse–Kelley set theory. The cumulative hierarchy is not compatible with other set theories such as New Foundations. It is possible to change the definition of V so that at each stage, instead of adding all the subsets of the union of the previous stages, subsets are only added if they are definable in a certain sense. This results in a more "narrow" hierarchy, which gives the constructible universe L, which also satisfies all the axioms of ZFC, including the axiom of choice. It is independent of the ZFC axioms whether V = L. Although the structure of L is more regular and well behaved than that of V, few mathematicians argue that V = L should be added to ZFC as an additional "axiom of constructibility". Metamathematics Virtual classes Proper classes (collections of mathematical objects defined by a property shared by their members which are too big to be sets) can only be treated indirectly in ZF (and thus ZFC). An alternative to proper classes while staying within ZF and ZFC is the virtual class notational construct introduced by Quine, where the entire construct y ∈ { x | Fx } is simply defined as Fy. This provides a simple notation for classes that can contain sets but need not themselves be sets, while not committing to the ontology of classes (because the notation can be syntactically converted to one that only uses sets).
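The stage-by-stage construction described in the motivation section above is conventionally written as the following transfinite recursion (a standard formulation, using P for the power set operation and Ord for the class of ordinals):

V0 = ∅
Vα+1 = P(Vα)
Vλ = ⋃α<λ Vα for limit ordinals λ
V = ⋃α∈Ord Vα

The constructible universe L mentioned above is obtained by the same recursion with the full power set P(Lα) replaced by the set of definable subsets of Lα.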
Returning to virtual classes: Quine's approach built on the earlier approach of Bernays. Virtual classes are also used in several later treatments of set theory and in the Metamath implementation of ZFC. Finite axiomatization The axiom schemata of replacement and separation each contain infinitely many instances. Richard Montague included a result first proved in his 1957 Ph.D. thesis: if ZFC is consistent, it is impossible to axiomatize ZFC using only finitely many axioms. On the other hand, von Neumann–Bernays–Gödel set theory (NBG) can be finitely axiomatized. The ontology of NBG includes proper classes as well as sets; a set is any class that can be a member of another class. NBG and ZFC are equivalent set theories in the sense that any theorem not mentioning classes and provable in one theory can be proved in the other. Consistency Gödel's second incompleteness theorem says that a recursively axiomatizable system that can interpret Robinson arithmetic can prove its own consistency only if it is inconsistent. Moreover, Robinson arithmetic can be interpreted in general set theory, a small fragment of ZFC. Hence the consistency of ZFC cannot be proved within ZFC itself (unless it is actually inconsistent). Thus, to the extent that ZFC is identified with ordinary mathematics, the consistency of ZFC cannot be demonstrated in ordinary mathematics. The consistency of ZFC does follow from the existence of a weakly inaccessible cardinal, which is unprovable in ZFC if ZFC is consistent. Nevertheless, it is deemed unlikely that ZFC harbors an unsuspected contradiction; it is widely believed that if ZFC were inconsistent, that fact would have been uncovered by now. This much is certain: ZFC is immune to the classic paradoxes of naive set theory, namely Russell's paradox, the Burali-Forti paradox, and Cantor's paradox. Abian and LaMacchia studied a subtheory of ZFC consisting of the axioms of extensionality, union, powerset, replacement, and choice. Using models, they proved this subtheory consistent, and proved that each of the axioms of extensionality, replacement, and power set is independent of the four remaining axioms of this subtheory. If this subtheory is augmented with the axiom of infinity, each of the axioms of union, choice, and infinity is independent of the five remaining axioms. Because there are non-well-founded models that satisfy each axiom of ZFC except the axiom of regularity, that axiom is independent of the other ZFC axioms. If consistent, ZFC cannot prove the existence of the inaccessible cardinals that category theory requires. Huge sets of this nature are possible if ZF is augmented with Tarski's axiom. Assuming that axiom turns the axioms of infinity, power set, and choice (7–9 above) into theorems. Independence Many important statements are independent of ZFC. The independence is usually proved by forcing, whereby it is shown that every countable transitive model of ZFC (sometimes augmented with large cardinal axioms) can be expanded to satisfy the statement in question. A different expansion is then shown to satisfy the negation of the statement. An independence proof by forcing automatically proves independence from arithmetical statements, other concrete statements, and large cardinal axioms. Some statements independent of ZFC can be proven to hold in particular inner models, such as in the constructible universe. However, some statements that are true about constructible sets are not consistent with hypothesized large cardinal axioms.
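The consistency facts used above and in the independence arguments below can be compressed into two lines of standard notation, writing Con(T) for the arithmetized statement that a theory T is consistent (a sketch; "WI" here abbreviates the assumption that a weakly inaccessible cardinal exists):

Con(ZFC) → ZFC ⊬ Con(ZFC)  (Gödel's second incompleteness theorem)
ZFC + WI ⊢ Con(ZFC), hence ZFC ⊬ WI if ZFC is consistent.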
Forcing proves that the following statements are independent of ZFC: the axiom of constructibility (V=L) (which is also not a ZFC axiom), the continuum hypothesis, the diamond principle, Martin's axiom (which is not a ZFC axiom), and the Suslin hypothesis. Remarks: The consistency of V=L is provable by inner models but not forcing: every model of ZF can be trimmed to become a model of ZFC + V=L. The diamond principle implies the continuum hypothesis and the negation of the Suslin hypothesis. Martin's axiom plus the negation of the continuum hypothesis implies the Suslin hypothesis. The constructible universe satisfies the generalized continuum hypothesis, the diamond principle, Martin's axiom and the Kurepa hypothesis. The failure of the Kurepa hypothesis is equiconsistent with the existence of a strongly inaccessible cardinal. A variation on the method of forcing can also be used to demonstrate the consistency and unprovability of the axiom of choice, i.e., that the axiom of choice is independent of ZF. The consistency of choice can be (relatively) easily verified by proving that the inner model L satisfies choice. (Thus every model of ZF contains a submodel of ZFC, so that Con(ZF) implies Con(ZFC).) Since forcing preserves choice, we cannot directly produce a model contradicting choice from a model satisfying choice. However, we can use forcing to create a model which contains a suitable submodel, namely one satisfying ZF but not C. Another method of proving independence results, one owing nothing to forcing, is based on Gödel's second incompleteness theorem. This approach employs the statement whose independence is being examined to prove the existence of a set model of ZFC, in which case Con(ZFC) is true. Since ZFC satisfies the conditions of Gödel's second theorem, the consistency of ZFC is unprovable in ZFC (provided that ZFC is, in fact, consistent). Hence no statement allowing such a proof can be proved in ZFC. This method can prove that the existence of large cardinals is not provable in ZFC, but cannot prove that assuming such cardinals, given ZFC, is free of contradiction. Proposed additions The project to unify set theorists behind additional axioms to resolve the continuum hypothesis or other meta-mathematical ambiguities is sometimes known as "Gödel's program". Mathematicians currently debate which axioms are the most plausible or "self-evident", which axioms are the most useful in various domains, and the degree to which usefulness should be traded off with plausibility; some "multiverse" set theorists argue that usefulness should be the sole ultimate criterion for which axioms to customarily adopt. One school of thought leans on expanding the "iterative" concept of a set to produce a set-theoretic universe with an interesting and complex but reasonably tractable structure by adopting forcing axioms; another school advocates for a tidier, less cluttered universe, perhaps focused on a "core" inner model. Criticisms ZFC has been criticized both for being excessively strong and for being excessively weak, as well as for its failure to capture objects such as proper classes and the universal set. Many mathematical theorems can be proven in much weaker systems than ZFC, such as Peano arithmetic and second-order arithmetic (as explored by the program of reverse mathematics). Saunders Mac Lane and Solomon Feferman have both made this point.
Some of "mainstream mathematics" (mathematics not directly connected with axiomatic set theory) is beyond Peano arithmetic and second-order arithmetic, but still, all such mathematics can be carried out in ZC (Zermelo set theory with choice), another theory weaker than ZFC. Much of the power of ZFC, including the axiom of regularity and the axiom schema of replacement, is included primarily to facilitate the study of the set theory itself. On the other hand, among axiomatic set theories, ZFC is comparatively weak. Unlike New Foundations, ZFC does not admit the existence of a universal set. Hence the universe of sets under ZFC is not closed under the elementary operations of the algebra of sets. Unlike von Neumann–Bernays–Gödel set theory (NBG) and Morse–Kelley set theory (MK), ZFC does not admit the existence of proper classes. A further comparative weakness of ZFC is that the axiom of choice included in ZFC is weaker than the axiom of global choice included in NBG and MK. There are numerous mathematical statements independent of ZFC. These include the continuum hypothesis, the Whitehead problem, and the normal Moore space conjecture. Some of these conjectures are provable with the addition of axioms such as Martin's axiom or large cardinal axioms to ZFC. Some others are decided in ZF+AD where AD is the axiom of determinacy, a strong supposition incompatible with choice. One attraction of large cardinal axioms is that they enable many results from ZF+AD to be established in ZFC adjoined by some large cardinal axiom. The Mizar system and metamath have adopted Tarski–Grothendieck set theory, an extension of ZFC, so that proofs involving Grothendieck universes (encountered in category theory and algebraic geometry) can be formalized. See also Foundations of mathematics Inner model Large cardinal axiom Related axiomatic set theories: Morse–Kelley set theory Von Neumann–Bernays–Gödel set theory Tarski–Grothendieck set theory Constructive set theory Internal set theory Notes Bibliography . Fraenkel's final word on ZF and ZFC. Includes annotated English translations of the classic articles by Zermelo, Fraenkel, and Skolem bearing on ZFC. . English translation in External links Stanford Encyclopedia of Philosophy articles by Joan Bagaria: Metamath version of the ZFC axioms — A concise and nonredundant axiomatization. The background first order logic is defined especially to facilitate machine verification of proofs. A derivation in Metamath of a version of the separation schema from a version of the replacement schema. Foundations of mathematics Systems of set theory Z notation
Zermelo–Fraenkel set theory
[ "Mathematics" ]
5,879
[ "Z notation", "Foundations of mathematics" ]
152,260
https://en.wikipedia.org/wiki/Rooibos
Rooibos, or Aspalathus linearis, is a broom-like member of the plant family Fabaceae that grows in South Africa's Fynbos biome. The leaves are used to make a caffeine-free herbal tea that is called rooibos (especially in Southern Africa), bush tea, red tea, or redbush tea (predominantly in Great Britain). The tea has been popular in Southern Africa for generations, and since the 2000s has gained popularity internationally. The tea has an earthy flavour that is similar to yerba mate or tobacco. Rooibos was formerly classified in the genus Psoralea but is now thought to be part of Aspalathus following Dahlgren (1980). The specific name linearis was given by Burman (1759) for the plant's linear growing structure and needle-like leaves. The name rooibos is Afrikaans for "red bush". The name is protected in South Africa and has protected designation of origin status in the EU. Production and processing Rooibos is usually grown in the Cederberg, a small mountainous area in the West Coast District of the Western Cape province of South Africa. Generally, the leaves undergo oxidation. This process produces the distinctive reddish-brown colour of rooibos and enhances the flavour. Unoxidised green rooibos is also produced, but the more demanding production process for green rooibos (similar to the method by which green tea is produced) makes it more expensive than traditional rooibos. It carries a malty and slightly grassy flavour somewhat different from its red counterpart. Use Rooibos is commonly prepared as a tisane by steeping in hot water, in the same manner as black tea. The infusion is consumed on its own or flavoured by the addition of milk, lemon, sugar or honey. It is also served as a latte, cappuccino or iced tea. Chemical composition As a fresh leaf, rooibos contains a high content of ascorbic acid (vitamin C). Rooibos tea does not contain caffeine and has low tannin levels compared to black tea or green tea. Rooibos contains polyphenols, including flavanols, flavones, flavanones, dihydrochalcones, aspalathin and nothofagin. The processed leaves and stems contain benzoic and cinnamic acids. Grading Rooibos grades are largely related to the percentage of needle or leaf content relative to stem content in the mix. A higher leaf content results in a darker liquor, richer flavour and less "dusty" aftertaste. The high-grade rooibos is exported and does not reach local markets, with major consumers being the EU, particularly Germany, where it is used in creating flavoured blends for loose-leaf tea markets. History Three species of the Borboniae group of Aspalathus, namely A. angustifolia, A. cordata and A. crenata, were once used as tea. These plants have simple, rigid, spine-tipped leaves, hence the common name 'stekeltee'. The earliest record of the use of Aspalathus as a source of tea was that of Carl Peter Thunberg, who wrote about the use of A. cordata as tea: "Of the leaves of Borbonia cordata the country people make tea." (Thunberg, July 1772, at Paarl). This anecdote is sometimes erroneously associated with rooibos tea (Aspalathus linearis). Archaeological records suggest that Aspalathus linearis could have been used thousands of years ago, but that does not imply rooibos tea was made in precolonial times. The traditional method of harvesting and processing rooibos (for making rooibos infusion or decoction tea) could have, at least partly, originated in precolonial times. However, it does not necessarily follow that the San and Khoikhoi used that method to prepare a beverage that they consumed for pleasure as tea.
The earliest available ethnobotanical records of rooibos tea originate in the late 19th century. No Khoi or San vernacular names for the species have been recorded. Several authors have assumed that the tea originated with the local inhabitants of the Cederberg. Apparently, rooibos tea is a traditional drink of Khoi-descended people of the Cederberg (and "poor whites"). However, that tradition has not been traced further back than the last quarter of the 19th century. Traditionally, the local people would climb the mountains and cut the fine needle-like leaves from wild rooibos plants. They then rolled the bunches of leaves into hessian bags and brought them down the steep slopes using donkeys. Rooibos tea was traditionally processed by beating the material on a flat rock with a heavy wooden pole or club or a large wooden hammer. The historical record of the use of rooibos in precolonial and early colonial times is mostly a record of absence. Colonial-era settlers could have learnt about some properties of Aspalathus linearis from pastoralists and hunter-gatherers of the Cederberg region. The nature of that knowledge was not documented. Given the available data, the origin of rooibos tea can be viewed in the context of the global expansion of the tea trade and the colonial habit of drinking Chinese and later Ceylon tea. In that case, the rooibos infusion or decoction served as a local replacement for the expensive Asian product. It appears that both the indigenous (San and Khoikhoi) and the colonial inhabitants of rooibos-growing areas contributed to the traditional knowledge of rooibos in some way. For instance, medicinal uses might have been introduced before the 18th century by Khoisan pastoralists or San hunter-gatherers. Also, the use of Aspalathus linearis to make tea, including production processes such as bruising and oxidising the leaves, is more likely to have been introduced in colonial times by settlers who were accustomed to drinking Asian tea or its substitutes. In 1904, Benjamin Ginsberg ran a variety of experiments at Rondegat Farm and finally cured rooibos. He simulated the traditional Chinese method of making Keemun by fermenting the tea in barrels. The major hurdle in growing rooibos commercially was that the seeds were hard to find and could not be made to germinate. Pieter le Fras Nortier, a medical doctor by profession and business partner to Ginsberg, ascertained that the seeds require a process of scarification before they are planted in acidic, sandy soil. By the late 1920s, growing demand for the tea had led to problems with the supply of wild rooibos plants. As a remedy, Nortier, a district surgeon in Clanwilliam and an avid naturalist, proposed to develop a cultivated variety of rooibos to be raised on appropriately-situated land. Nortier worked on cultivation of the rooibos species in partnership with the farmers Oloff Bergh and William Riordan and with the encouragement of Benjamin Ginsberg. Bergh harvested a large amount of rooibos in 1925 on his farm Kleinvlei, in the Pakhuis Mountains. Nortier collected seeds in the Pakhuis Mountains (Rocklands) and in a large valley, called Grootkloof, and those first selected seeds are known as the Nortier-type and Redtea-type. In 1930, Nortier began conducting experiments with the commercial cultivation of the rooibos plant. He cultivated the first plants at Clanwilliam on his farm of Eastside and on the farm of Klein Kliphuis.
The tiny seeds were very difficult to come by; Nortier paid the local villagers £5 per matchbox of seeds collected. An aged Khoi woman found an unusual seed source: having chanced upon ants dragging seed, she followed them back to their nest and, on breaking it open, found a granary. Nortier's research was ultimately successful, and he subsequently showed all the local farmers how to germinate their own seeds. The secret lay in scarifying the seed pods. Nortier placed a layer of seeds between two mill stones and ground away some of the seed pod wall. Thereafter the seeds were easily propagated. Over the next decade the price of seeds rose to £80 per pound, the most expensive vegetable seed in the world, as farmers rushed to plant rooibos. Today, the seed is gathered by special sifting processes. Nortier is today accepted as the father of the rooibos tea industry. The variety developed by Nortier has become the mainstay of the rooibos industry, enabling it to expand and create income and jobs for inhabitants of rooibos-growing regions. Thanks to Nortier's research, rooibos tea became an iconic national beverage and then a globalised commodity. Production is today the economic mainstay of the Clanwilliam district. In 1948, the University of Stellenbosch awarded Nortier an honorary doctorate, D.Sc. (Agria), in recognition of his valuable contribution to South African agriculture. Life history and reproduction Aspalathus linearis has a small endemic range in the wild, but horticultural techniques to maximise production have been effective at maintaining cultivation as a semi-wild crop to supply the new demands of the broadening rooibos tea industry. A. linearis is often grouped with the honeybush (Cyclopia), another plant from the Fynbos region of Southern Africa, which is also used to make tea. Like other members of the genus, A. linearis is considered a part of the Fynbos ecoregion in the Cape Floristic Region, whose plants often depend on fire for reproduction. A. linearis is a legume, and thus an angiosperm, and produces an indehiscent fruit. Its flowers make up a raceme inflorescence. Seed germination can be slow, but sprouting can be induced by acid treatment. The seeds are hard-shelled and often need scarification. For A. linearis, fire can stimulate resprouting in the species, but the sprouting is less than that of other plants in the Fynbos ecoregion. A. linearis populations include both facultative and obligate sprouters, with lignotuber development allowing regrowth after fires. Typically, there are two classifications of A. linearis in response to fire: reseeders and resprouters. Reseeders are killed by fire, but it stimulates their seeds' germination. Resprouters are not completely killed during a fire and grow back from established lignotubers. Seeds of wild populations are dispersed by species of ants, whose use as dispersers reduces parent-offspring and sibling-sibling competition. Ants are also helpful in dispersion as they reduce the susceptibility of seeds to other herbivores. Like most other legumes, there is a symbiotic relationship between rhizobia and the underground lignotuber structure that promotes nitrogen fixation and growth. The nitrogen content in the soil is an important environmental factor for growth, development, and reproduction. Hawkins, Malgas, & Biénabe (2011) suggested that there are multiple ecotypes of A. linearis that have different selected methods of growth and morphology and are dependent on the environment.
It is unclear how many ecotypes there might be, given their limited geographic range and the limited literature about genetic diversity. Van der Bank, Van der Bank, & Van Wyk (1999) suggest that resprouting populations and reseeding populations have been selected for based on the environment as a way to reduce genetic bottlenecks; however, whether that promotes certain reproductive strategies over others was unclear. Wild populations can contain both sprouting and non-sprouting individuals, but cultivated rooibos are typically reseeders, not resprouters, and have higher growth rates. Cultivated A. linearis can be selected for certain traits that are desirable for human use. Cultivated plants are diploid with a base chromosome number of 9 (2n = 18), but the understanding of how this might differ among ecotypes is limited. The selection process can include human-mediated pollination, fire suppression, and supplementing soil contents. Like many other Fynbos plants, A. linearis is not significantly pollinated by Cape honey bees, which suggests that primary pollination occurs by some other means. Some wasps likely play an important role in pollinating the flowers, and some wasp species are thought to be specially adapted to accessing the A. linearis flower. US trademark controversy In 1994, Burke International registered the name "Rooibos" with the US Patent and Trademark Office and so established a monopoly on the name in the United States when the plant was virtually unknown there. When it later entered more widespread use, Burke demanded that companies pay fees to use the name or cease using it. In 2005, the American Herbal Products Association and a number of import companies succeeded in defeating the trademark through petitions and lawsuits. After losing one of the cases, Burke surrendered the name to the public domain. Legal protection of the name rooibos The South African Department of Trade and Industry issued final rules on 6 September 2013 that protect and restrict the use of the names "rooibos", "red bush", "rooibostee", "rooibos tea", "rooitee", and "rooibosch" in the country, so that the name cannot be used for products unless they are derived from the Aspalathus linearis plant. The rules also provide guidance and restrictions on how, and in what proportions, products that include rooibos may use the name in their branding. In May 2021, the European Union conferred protected designation of origin (PDO) status on "rooibos". Any foodstuff sold as "rooibos" in the EU and several countries outside the bloc must be made using only Aspalathus linearis leaves that are cultivated in the Cederberg region of South Africa. Environmental concerns The rooibos plant is endemic to a small part of the Western Cape Province, South Africa. It grows in a symbiotic relationship with local micro-organisms. A 2012 South African news item cited concerns regarding the prospects of rooibos farming in the face of climate change. The use of rooibos and the expansion of its cultivation are threatening other local species of plants endemic to the area, such as Protea convexa, Roridula dentata and P. scolymocephala. See also Cyclopia (plant) Rooibos wine References External links Crotalarieae Endemic flora of the Cape Provinces Crops originating from South Africa Fynbos Herbal teas Medicinal plants of Africa Nitrogen-fixing crops Afrikaans words and phrases South African cuisine Products with protected designation of origin Taxa named by Nicolaas Laurens Burman Biopiracy
Rooibos
[ "Biology" ]
3,090
[ "Biopiracy", "Biodiversity" ]
152,262
https://en.wikipedia.org/wiki/Will-o%27-the-wisp
In folklore, a will-o'-the-wisp, will-o'-wisp, or ignis fatuus, is an atmospheric ghost light seen by travellers at night, especially over bogs, swamps or marshes. The phenomenon is known in the United Kingdom by a variety of names, including jack-o'-lantern, friar's lantern, and hinkypunk, and is said to mislead and/or guide travellers by resembling a flickering lamp or lantern. Equivalents of the will-o'-the-wisps appear in European folklore by various names, e.g., ignis fatuus in Latin, feu follet in French, Irrlicht in Germany, and the Hessdalen light in Norway. Equivalents occur in traditions of cultures worldwide; e.g., the Naga fireballs on the Mekong in Thailand. In North America the phenomenon is known as the Paulding Light in the Upper Peninsula of Michigan, the Spooklight in Southwestern Missouri and Northeastern Oklahoma, and the St. Louis Light in Saskatchewan. Equivalents also appear in Arab folklore. In folklore, will-o'-the-wisps are typically attributed to ghosts, fairies or elemental spirits meant to reveal a path or direction. These wisps are portrayed as dancing or flowing in a static form, until noticed or followed, in which case they visually fade or disappear. Modern science explains the light aspect as natural phenomena such as bioluminescence or chemiluminescence, caused by the oxidation of phosphine (PH3), diphosphane (P2H4) and methane (CH4), produced by organic decay. Nomenclature Etymology The term will-o'-the-wisp comes from wisp, a bundle of sticks or paper sometimes used as a torch, and the name Will, thus meaning 'Will of the torch'. The term jack-o'-lantern ('Jack of the lantern') originally referred to a will-o'-the-wisp. In the United States, they are often called spook-lights, ghost-lights, or orbs by folklorists. The Latin name ignis fatuus is composed of ignis, meaning 'fire', and fatuus, an adjective meaning 'foolish', 'silly' or 'simple'; it can thus be literally translated into English as 'foolish fire' or more idiomatically as 'giddy flame'. Despite its Latin origins, the term is not attested in antiquity, and the name for the will-o'-the-wisp used by the ancient Romans is uncertain. The term is not attested in the Middle Ages either. Instead, the Latin ignis fatuus is documented no earlier than the 16th century in Germany, where it was coined by a German humanist, and appears to be a free translation of the long-existing German name Irrlicht ('wandering light' or 'deceiving light'), conceived of in German folklore as a mischievous spirit of nature; the Latin translation was made to lend the German name intellectual credibility. Besides Irrlicht, the will-o'-the-wisp has also been called Irrwisch in German (where Wisch translates to 'wisp'), as found in e.g. Martin Luther's writings of the same 16th century. Synonyms The names will-o'-the-wisp and jack-o'-lantern are used in etiological folk-tales, recorded in many variant forms in Ireland, Scotland, England, Wales, Appalachia, and Newfoundland. Folk belief is reflected explicitly in the terms hob lantern and hobby lantern (var. 'Hob and his Lantern', 'hob-and-lanthorns'). In her book A Dictionary of Fairies, K. M. Briggs provides an extensive list of other names for the same phenomenon, though the place where they are observed (graveyard, bogs, etc.) influences the naming considerably. When observed in graveyards, it is known as a ghost candle or corpse candle. Folklore In the etiological (origin) tales, protagonists named either Will or Jack are doomed to haunt the marshes with a light for some misdeed. One version from Shropshire is recounted by Briggs in A Dictionary of Fairies and refers to Will Smith.
Will is a wicked blacksmith who is given a second chance by Saint Peter at the gates of heaven, but leads such a bad life that he ends up being doomed to wander the earth. The Devil provides him with a single burning coal with which to warm himself, which he then uses to lure foolish travellers into the marshes. An Irish version of the tale has a ne'er-do-well named Drunk Jack or Stingy Jack who, when the Devil comes to collect his soul, tricks him into turning into a coin so he can pay for his one last drink. When the Devil obliges, Jack places him in his pocket next to a crucifix, preventing him from returning to his original form. In exchange for his freedom, the Devil grants Jack ten more years of life. When the term expires, the Devil comes to collect his due. But Jack tricks him again by making him climb a tree and then carving a cross underneath, preventing him from climbing down. In exchange for removing the cross, the Devil forgives Jack's debt. However, no one as bad as Jack would ever be allowed into heaven, so Jack is forced upon his death to travel to hell and ask for a place there. The Devil denies him entrance in revenge but grants him an ember from the fires of hell to light his way through the twilight world to which lost souls are forever condemned. Jack places it in a carved turnip to serve as a lantern. Another version of the tale is "Willy the Whisp", related in Irish Folktales by Henry Glassie. Séadna by Peadar Ua Laoghaire is yet another version, and also the first modern novel in the Irish language. Global folklore Americas Mexico has its equivalents as well. Folklore explains the phenomenon as witches who transformed into these lights. Another explanation refers to the lights as indicators of places where gold or hidden treasures are buried, which can be found only with the help of children. In this telling, they are called luces del dinero (money lights) or luces del tesoro (treasure lights). The swampy area of Massachusetts known as the Bridgewater Triangle has folklore of ghostly orbs of light, and there have been modern observations of these ghost-lights in this area as well. The fifollet (or feu-follet) of Louisiana derives from the French. The legend says that the fifollet is a soul sent back from the dead to do God's penance, but it instead attacks people for vengeance. While it mostly takes part in harmless mischievous acts, the fifollet sometimes sucks the blood of children. Some legends say that it was the soul of a child who died before baptism. Boi-tatá is the Brazilian equivalent of the will-o'-the-wisp. Regionally it is called Boitatá, Baitatá, Batatá, Bitatá, Batatão, Biatatá, M'boiguaçu, Mboitatá and Mbaê-Tata. The name comes from the Old Tupi language and means "fiery serpent" (mboî tatá). Its great fiery eyes leave it almost blind by day, but by night, it can see everything. According to legend, Boi-tatá was a big serpent which survived a great deluge. A "boiguaçu" (cave anaconda) left its cave after the deluge and, in the dark, went through the fields preying on the animals and corpses, eating exclusively its favourite morsel, the eyes. The collected light from the eaten eyes gave "Boitatá" its fiery gaze. It is not really a dragon but a giant snake (in the native language, boa or mboi or mboa). In Argentina and Uruguay, the will-o'-the-wisp phenomenon is known as luz mala (evil light) and is one of the most important myths in both countries' folklore. This phenomenon is quite feared and is mostly seen in rural areas.
It consists of an extremely shiny ball of light floating a few inches from the ground. In Colombia, la Bolefuego or Candileja is the will-o'-the-wisp ghost of a vicious grandmother who raised her grandchildren without morals, and as such they became thieves and murderers. In the afterlife, the grandmother's spirit was condemned to wander the world surrounded in flames. In Trinidad and Tobago, a soucouyant is a "fireball witch", an evil spirit that takes on the form of a flame at night. It enters homes through any gap it can find and drinks the blood of its victims. Asia Aleya (or marsh ghost-light) is the name given to a strange light phenomenon occurring over the marshes as observed by Bengalis, especially the fishermen of Bangladesh and West Bengal. This marsh light is attributed to some kind of marsh gas apparitions that confuse fishermen, make them lose their bearings, and may even lead to drowning if one decides to follow them over the marshes. Local communities in the region believe that these strange hovering marsh-lights are in fact ghost-lights representing the ghosts of fishermen who died fishing. Sometimes they confuse the fishermen, and sometimes they help them avoid future dangers. Chir batti (ghost-light), also spelled "chhir batti" or "cheer batti", is a dancing light phenomenon occurring on dark nights reported from the Banni grasslands, its seasonal marshy wetlands and the adjoining desert of the marshy salt flats of the Rann of Kutch. Other varieties (and sources) of ghost-lights appear in folklore across India, including the Kollivay Pey of Tamil Nadu and Karnataka, the Kuliyande Choote of Kerala, and many variants from different tribes in Northeast India. In Kashmir, the Bramrachokh carries a pot of fire on its head. Similar phenomena are described in Japanese folklore, including hitodama, hi no tama ("ball of flame"), aburagae, ushionibi, etc. All these phenomena are described as associated with graveyards. Kitsune, mythical yokai demons, are also associated with the will-o'-the-wisp, with the marriage of two kitsune producing kitsune-bi (狐火), literally meaning 'fox-fire'. These phenomena are described in Shigeru Mizuki's 1985 book Graphic World of Japanese Phantoms (妖怪伝 in Japanese). In Korea the lights are associated with rice paddies, old trees, mountains or even some houses, and were called 'dokkebi bul' (Hangul: 도깨비 불), meaning goblin fire (or goblin light). They were deemed malevolent and impish, as they confused and lured passersby to lose their way or fall into pits at night. The earliest Chinese reference to a will-o'-the-wisp appears to be the Chinese character 粦 lín, attested as far back as the Shang dynasty oracle bones, depicting a human-like figure surrounded by dots presumably representing the glowing lights of the will-o'-the-wisp, to which feet such as those under 舞 wǔ, 'to dance', were added in bronze script. Before the Han dynasty the top had evolved or been corrupted to represent fire (later further corrupted to resemble 米 mǐ, rice), as the small seal script graph in Shuowen Jiezi, compiled in the Han dynasty, shows. Although no longer in use alone, 粦 lín is in the character 磷 lín phosphorus, an element involved in scientific explanations of the will-o'-the-wisp phenomenon, and is also a phonetic component in other common characters with the same pronunciation.
Chinese polymath Shen Gua may have recorded such a phenomenon in the Book of Dreams, stating, "In the middle of the reign of emperor Jia You, at Yanzhou, in the Jiangsu province, an enormous pearl was seen especially in gloomy weather. At first it appeared in the marsh… and disappeared finally in the Xinkai Lake." It was described as very bright, illuminating the surrounding countryside, and was a reliable phenomenon over ten years, an elaborate Pearl Pavilion being built by local inhabitants for those who wished to observe it. Europe In European folklore the lights are often believed to be the spirits of un-baptised or stillborn children, flitting between heaven and hell (purgatory). In Germany there was a belief that an Irrlicht was the soul of an unbaptised child, but that it could be redeemed if the remains were first buried near the eaves of the church, so that at the moment rainwater splashed onto this grave, the churchman could pronounce the baptismal formula to sanctify the child. In Sweden also, the will-o'-the-wisp represents the soul of an unbaptised person "trying to lead travellers to water in the hope of being baptized". Danes, Finns, Swedes, Estonians, Latvians, Lithuanians, and Irish people, amongst other groups, believed that a will-o'-the-wisp also marked the location of a treasure deep in ground or water, which could be taken only when the fire was there. Sometimes magical procedures, and even a dead man's hand, were required as well, to uncover the treasure. In Finland and several other northern countries, it was believed that early autumn was the best time to search for will-o'-the-wisps and treasures below them. It was believed that when someone hid treasure in the ground, he made the treasure available only at the summer solstice (Midsummer, or Saint John's Day), and set a will-o'-the-wisp to mark the exact place and time so that he could reclaim the treasure. The Aarnivalkea (also known as virvatuli, aarretuli and aarreliekki), in Finnish mythology, are spots where an eternal flame associated with will-o'-the-wisps burns. They are claimed to mark the places where faerie gold is buried. They are protected by a glamour that would prevent anyone finding them by pure chance. However, if one finds a fern seed from a mythical flowering fern, the magical properties of that seed will lead the fortunate person to these treasures, in addition to providing one with a glamour of invisibility. Since in reality the fern produces no flower and reproduces via spores under the leaves, the myth specifies that it blooms only extremely rarely. Britain In Welsh folklore, it is said that the light is "fairy fire" held in the hand of a púca, or pwca, a small goblin-like fairy that mischievously leads lone travellers off the beaten path at night. As the traveller follows the púca through the marsh or bog, the fire is extinguished, leaving them lost. The púca is said to be one of the Tylwyth Teg, or fairy family. In Wales the light predicts a funeral that will take place soon in the locality. Wirt Sikes in his book British Goblins mentions the following Welsh tale about a púca. A peasant travelling home at dusk sees a bright light travelling along ahead of him. Looking closer, he sees that the light is a lantern held by a "dusky little figure", which he follows for several miles. All of a sudden he finds himself standing on the edge of a vast chasm with a roaring torrent of water rushing below him.
At that precise moment the lantern-carrier leaps across the gap, lifts the light high over its head, lets out a malicious laugh and blows out the light, leaving the poor peasant a long way from home, standing in pitch darkness at the edge of a precipice. This is a fairly common cautionary tale concerning the phenomenon; however, the ignis fatuus was not always considered dangerous. Some tales present the will-o'-the-wisp as a treasure-guardian, leading those brave enough to follow it to certain riches, a form of behaviour sometimes ascribed also to the Irish leprechaun. Other stories tell of travellers surprising a will-o'-the-wisp while lost in the woods and being either guided out or led further astray, depending on whether they treated the spirit kindly or harshly. Also related is the pixy-light from Devon and Cornwall, which leads travellers away from the safe and reliable route and into the bogs with glowing lights. "Like Poltergeist they can generate uncanny sounds. They were less serious than their German Weiße Frauen kin, frequently blowing out candles on unsuspecting courting couples or producing obscene kissing sounds, which were always misinterpreted by parents." Pixy-light was also associated with "lambent light" which the Old Norse might have seen guarding their tombs. In Cornish folklore, pixy-light also has associations with the Colt pixie. "A colt pixie is a pixie that has taken the shape of a horse and enjoys playing tricks such as neighing at the other horses to lead them astray". In Guernsey, the light is known as the faeu boulanger (rolling fire), and is believed to be a lost soul. For one confronted with the spectre, tradition prescribes two remedies. The first is to turn one's cap or coat inside out. This has the effect of stopping the faeu boulanger in its tracks. The other solution is to stick a knife into the ground, blade up. The faeu, in an attempt to kill itself, will attack the blade. The will-o'-the-wisp was also known as the Spunkie in the Scottish Highlands, where it would take the form of a linkboy (a boy who carried a flaming torch to light the way for pedestrians in exchange for a fee), or else simply a light that always seemed to recede, in order to lead unwary travellers to their doom. The spunkie has also been blamed for shipwrecks at night after being spotted on land and mistaken for a harbour light. Other tales of Scottish folklore regard these mysterious lights as omens of death or the ghosts of once living human beings. They often appeared over lochs or on roads along which funeral processions were known to travel. A strange light sometimes seen in the Hebrides is referred to as the teine sith, or "fairy light", though there was no formal connection between it and the fairy race. Oceania The Australian equivalent, known as the Min Min light, is reportedly seen in parts of the outback after dark. The majority of sightings are reported to have occurred in the Channel Country region. Stories about the lights can be found in Aboriginal myth pre-dating western settlement of the region and have since become part of wider Australian folklore. Indigenous Australians hold that the number of sightings has increased alongside the increasing ingression of Europeans into the region. According to folklore, the lights sometimes followed or approached people and have disappeared when fired upon, only to reappear later on.
Scientific explanations Science proposes that will-o'-the-wisp phenomena (ignis fatuus) are caused by the oxidation of phosphine (PH3), diphosphane (P2H4), and methane (CH4). These compounds, produced by organic decay, can cause photon emissions. Since phosphine and diphosphane mixtures spontaneously ignite on contact with the oxygen in air, only small quantities of them would be needed to ignite the much more abundant methane to create ephemeral fires. Furthermore, burning phosphine produces phosphorus pentoxide as a by-product, which forms phosphoric acid upon contact with water vapor; this can explain the "viscous moisture" sometimes described as accompanying ignis fatuus (representative reaction equations are written out at the end of this passage). Historical explanations The idea of the will-o'-the-wisp phenomena being caused by natural gases can be found as early as 1596, as mentioned in the works of Ludwig Lavater. In 1776 Alessandro Volta first proposed that natural electrical phenomena (like lightning) interacting with methane marsh gas may be the cause of ignis fatuus. This was supported by the British polymath Joseph Priestley in his series of works Experiments and Observations on Different Kinds of Air (1772–1790), and by the French physicist Pierre Bertholon de Saint-Lazare in De l'électricité des météores (1787). Early critics of the marsh gas hypothesis often dismissed it on various grounds, including the unlikeliness of spontaneous combustion, the absence of warmth in some observed ignis fatuus, the odd behaviour of ignis fatuus receding upon being approached, and the differing accounts of ball lightning (which was also classified as a kind of ignis fatuus). An example of such criticism is found in Folk-Lore from Buffalo Valley (1891) by the American anthropologist John G. Owens. The apparent retreat of ignis fatuus upon being approached might be explained simply by the agitation of the air by nearby moving objects, causing the gases to disperse. This was observed in the very detailed accounts of several close interactions with ignis fatuus published in 1832 by Major Louis Blesson after a series of experiments in various localities where they were known to occur. Of note is his first encounter with ignis fatuus in a marshland between a deep valley in the forest of Gorbitz, Neumark, Germany. Blesson observed that the water was covered by an iridescent film, and during day-time, bubbles could be observed rising abundantly from certain areas. At night, Blesson observed bluish-purple flames in the same areas and concluded that it was connected to the rising gas. He spent several days investigating the phenomenon, finding to his dismay that the flames retreated every time he tried to approach them. He eventually succeeded and was able to confirm that the lights were indeed caused by ignited gas. The British scientist Charles Tomlinson in On Certain Low-Lying Meteors (1893) described Blesson's experiments. Blesson also observed differences in the colour and heat of the flames in different marshes. The ignis fatuus in Malapane, Upper Silesia (now Ozimek, Poland) could be ignited and extinguished, but were unable to burn pieces of paper or wood shavings. Similarly, the ignis fatuus in another forest in Poland coated pieces of paper and wood shavings with an oily viscous fluid instead of burning them. Blesson also accidentally created ignis fatuus in the marshes of Porta Westfalica, Germany, while launching fireworks.
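The chemical mechanism summarized at the start of this section can be written as conventional reaction equations. These are textbook-standard combustion and hydration reactions offered as an illustrative sketch, not equations quoted from the sources above:

4 PH3 + 8 O2 → P4O10 + 6 H2O  (spontaneous ignition of phosphine, yielding phosphorus pentoxide)
CH4 + 2 O2 → CO2 + 2 H2O  (combustion of the more abundant methane)
P4O10 + 6 H2O → 4 H3PO4  (hydration of the pentoxide to phosphoric acid, the proposed "viscous moisture")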
20th century A description of the will-o'-the-wisp appeared in a 1936 UK publication, The Scout's Book of Gadgets and Dodges, where the author, Sam F. Braham, describes it as follows: 'This is an uncertain light which may sometimes be seen dancing over churchyards and marshy places. No one really know how it is produced, and chemists are continually experimenting to discover its nature. It is thought that it is formed by the mixing of marsh gas, which is giving off decaying vegetable matter, with phosphoretted hydrogen, a gas which ignites instantly. But this theory has not been definitely proved.' One attempt to replicate ignis fatuus under laboratory conditions was in 1980 by British geologist Alan A. Mills of Leicester University. Though he did succeed in creating a cool glowing cloud by mixing crude phosphine and natural gas, the colour of the light was green and it produced copious amounts of acrid smoke. This was contrary to most eyewitness accounts of ignis fatuus. As an alternative, Mills proposed in 2000 that ignis fatuus may instead be cold flames. These are luminescent pre-combustion halos that occur when various compounds are heated to just below ignition point. Cold flames are indeed typically bluish in colour and, as their name suggests, they generate very little heat. Cold flames occur in a wide variety of compounds, including hydrocarbons (including methane), alcohols, aldehydes, oils, acids, and even waxes. However it is unknown if cold flames occur naturally, though many compounds which exhibit cold flames are the natural byproducts of organic decay. A related hypothesis involves the natural chemiluminescence of phosphine. In 2008 the Italian chemists Luigi Garlaschelli and Paolo Boschetti attempted to recreate Mills' experiments. They successfully created a faint cool light by mixing phosphine with air and nitrogen. Though the glow was still greenish in colour, Garlaschelli and Boschetti noted that under low-light conditions, the human eye cannot easily distinguish between colours. Furthermore, by adjusting the concentrations of the gases and the environmental conditions (temperature, humidity, etc.), it was possible to eliminate the smoke and smell, or at least render them undetectable. Garlaschelli and Boschetti also agreed with Mills that cold flames may be a plausible explanation for other instances of ignis fatuus. In 1993 professors Derr and Persinger proposed that some ignis fatuus may be geologic in origin, piezoelectrically generated under tectonic strain. The strains that move faults would also heat up the rocks, vaporizing the water in them. Rock or soil containing something piezoelectric, like quartz, silicon, or arsenic, may also produce electricity, channelled up to the surface through the soil via a column of vaporized water, there somehow appearing as earth lights. This would explain why the lights appear electrical, erratic, or even intelligent in their behaviour. The will-o'-the-wisp phenomena may also occur due to the bioluminescence of various forest-dwelling micro-organisms and insects. The eerie glow emitted from certain fungal species, such as the honey fungus, during chemical reactions to form white rot could be mistaken for the mysterious will-o'-the-wisp or foxfire lights. There are many other bioluminescent organisms that could create the illusion of fairy lights, such as fireflies. Light reflecting off larger forest-dwelling creatures could explain the phenomenon of will-o'-the-wisps moving and reacting to other lights.
The white plumage of barn owls may reflect enough light from the Moon to appear as a will-o'-the-wisp; hence the possibility of the lights moving, reacting to other lights, etc. Ignis fatuus sightings are rarely reported today. The decline is believed to be the result of the draining and reclamation of swamplands in recent centuries, such as the formerly vast Fenlands of eastern England, which have now been converted to farmlands. Global terms Americas Canada Fireship of Baie des Chaleurs in New Brunswick United States Arbyrd/Senath Light of Missouri Bragg Road ghost light (Light of Saratoga) of Texas Brown Mountain Lights of North Carolina Devil's Torchlight or Devil's Lantern in the Southern United States and Deep South Gurdon light of Arkansas Hornet ghost light (The Spooklight) of Missouri-Oklahoma state line Maco light of North Carolina Marfa lights of Texas Paulding Light of Michigan's Upper Peninsula Cohoke Light of eastern Virginia's Cohoke Swamp wetlands Argentina and Uruguay Luz Mala Asia Chir batti in Gujarat Naga fireballs on the Mekong in Thailand Aleya in Bengal Dhon guloi in Assam Europe Hessdalen light, Norway Martebo lights, Sweden Paasselkä devil, Finland Lidércfény, Hungary Ballybar, near Carlow, Ireland Ferbane, County Offaly, Ireland Dwaallichtjes in the Netherlands and Belgium Sheeries, Ireland Liam na lasóige, Ireland Fuego fatuo, Spain Fuoco fatuo, Italy Irrlicht, Germany Oceania Min Min light of the Outback Australia See also Chir Batti Corpse road Feuermann (ghost) Foo fighter Hessdalen Lights Kitsunebi Lantern man Lidérc Mãe-do-Ouro Omphalotus olearius Santelmo Shiranui Simonside Dwarfs St. Elmo's fire Yan-gant-y-tan Explanatory notes References External links The Ignis Erraticus – A Bibliographic Survey of the names of the Will-'o-the-wisp Atmospheric ghost lights European folklore European ghosts Wetlands in folklore Methane Pixies Supernatural legends Swamp monsters Swamps in fiction Wetlands
Will-o'-the-wisp
[ "Chemistry", "Environmental_science" ]
5,932
[ "Greenhouse gases", "Hydrology", "Methane", "Wetlands" ]
152,277
https://en.wikipedia.org/wiki/Jane%20Addams
Laura Jane Addams (September 6, 1860 – May 21, 1935) was an American settlement activist, reformer, social worker, sociologist, public administrator, philosopher, and author. She was a leader in the history of social work and women's suffrage. In 1889, Addams co-founded Hull House, one of America's most famous settlement houses, in Chicago, Illinois, providing extensive social services to poor, largely immigrant families. Philosophically a "radical pragmatist", she was arguably the first woman public philosopher in the United States. In the Progressive Era, when even presidents such as Theodore Roosevelt and Woodrow Wilson identified themselves as reformers and might be seen as social activists, Addams was one of the most prominent reformers. An advocate for world peace, and recognized as the founder of the social work profession in the United States, in 1931 Addams became the first American woman to be awarded the Nobel Peace Prize. Earlier, Addams was awarded an honorary Master of Arts degree from Yale University in 1910, becoming the first woman to receive an honorary degree from the school. In 1920, she was a co-founder of the American Civil Liberties Union (ACLU). Addams helped America address and focus on issues that were of concern to mothers or extensions of the domestic work assigned to women, such as the needs of children, local public health, and world peace. In her essay "Utilization of Women in City Government", Addams noted the connection between the workings of government and the household, stating that many departments of government, such as sanitation and the schooling of children, could be traced back to traditional women's roles in the private sphere. When she died in 1935, Addams was the best-known female public figure in the United States. Early life Born in Cedarville, Illinois, Jane Addams was the youngest of eight children born into a prosperous northern Illinois family of English-American descent which traced back to colonial Pennsylvania. In 1863, when Addams was two years old, her mother, Sarah Addams (née Weber), died while pregnant with her ninth child. Thereafter Addams was cared for mostly by her older sisters. By the time Addams was eight, four of her siblings had died: three in infancy and one at the age of 16. Addams spent her childhood playing outdoors, reading indoors, and attending Sunday school. When she was four she contracted tuberculosis of the spine, known as Pott's disease, which caused a curvature in her spine and lifelong health problems. As a child this made it difficult for her to keep up with other children, as she had a limp and could not run well. As a child, she thought she was ugly and later remembered wanting not to embarrass her father, when he was dressed in his Sunday best, by walking down the street with him. Jane Addams adored her father, John H. Addams, when she was a child, as she made clear in the stories in her memoir, Twenty Years at Hull House (1910). He was a founding member of the Illinois Republican Party, served as an Illinois State Senator (1855–70), and supported his friend Abraham Lincoln in his candidacies for senator (1854) and the presidency (1860). He kept a letter from Lincoln in his desk, and Addams loved to look at it as a child. Her father was an agricultural businessman with large timber, cattle, and agricultural holdings, flour and timber mills, and a wool factory. He was the president of The Second National Bank of Freeport, Illinois. He remarried in 1868 when Addams was eight years old.
His second wife was Anna Hosteler Haldeman, the widow of a miller in Freeport. During her childhood, Addams had big dreams of doing something useful in the world. A voracious reader, she became interested in the poor through her reading of Charles Dickens. Inspired by his works and by her own mother's kindness to the Cedarville poor, Addams decided to become a doctor so that she could live and work among the poor. Addams's father encouraged her to pursue higher education, but close to home. She was eager to attend the new college for women, Smith College in Massachusetts; but her father required her to attend nearby Rockford Female Seminary (now Rockford University), in Rockford, Illinois. Her experience at Rockford put her in the first wave of U.S. women to receive a college education. She excelled in this all-women environment: she edited the college newspaper, was the valedictorian, participated in the debate club, and led the class of 1881. Addams recognized that she and others who were engaged in post-secondary education would have new opportunities and challenges. She expressed this in Bread Givers (1880), a speech she gave in her junior year. She noted the "change which has taken place... in the ambition and aspirations of women." As educated women of her generation developed their intellects and engaged in direct labor, something new was emerging. They wished "not to be a man nor like a man" but claimed "the same right to independent thought and action." Each young woman was gaining "a new confidence in her possibilities, and a fresher hope in her steady progress." At 20, Addams recognized a changing cultural environment and was learning the skills at Rockford to lead the future settlement movement. While at Rockford, her readings of Thomas Carlyle, John Ruskin, Leo Tolstoy and others became significant influences. After graduating from Rockford in 1881, with a collegiate certificate and membership in Phi Beta Kappa, she still hoped to attend Smith to earn a proper B.A. That summer, her father died unexpectedly from a sudden case of appendicitis. Each child inherited roughly $50,000. That fall, Addams, her sister Alice, Alice's husband Harry, and their stepmother, Anna Haldeman Addams, moved to Philadelphia so that the three young people could pursue medical educations. Harry was already trained in medicine and did further studies at the University of Pennsylvania. Jane and Alice completed their first year of medical school at the Woman's Medical College of Pennsylvania, but Jane's health problems, including a spinal operation and a nervous breakdown, prevented her from completing the degree. She was filled with sadness at her failure. Her stepmother Anna was also ill, so the entire family canceled their plans to stay two years and returned to Cedarville, where her brother-in-law Harry performed surgery on her back to straighten it. He then advised that she not pursue studies but, instead, travel. In August 1883, she set off for a two-year tour of Europe with her stepmother, traveling some of the time with friends and family who joined them. Addams decided that she did not have to become a doctor to be able to help the poor. Upon her return home in June 1885, she lived with her stepmother in Cedarville and spent winters with her in Baltimore. Addams, still filled with vague ambition, sank into depression, unsure of her future and feeling useless leading the conventional life expected of a well-to-do young woman. 
She wrote long letters to her friend from Rockford Seminary, Ellen Gates Starr, mostly about Christianity and books but sometimes about her despair. Her nephew was James Weber Linn (1876–1939), who taught English at the University of Chicago and served in the Illinois General Assembly. Linn also wrote books and newspaper articles. Settlement house Meanwhile, Addams gathered inspiration from what she read. Fascinated by the early Christians and Tolstoy's book My Religion, she was baptized a Christian in the Cedarville Presbyterian Church in the summer of 1886. Reading Giuseppe Mazzini's Duties of Man, she began to be inspired by the idea of democracy as a social ideal. Yet she felt confused about her role as a woman. John Stuart Mill's The Subjection of Women made her question the social pressures on a woman to marry and devote her life to family. In the summer of 1887, Addams read in a magazine about the new idea of starting a settlement house. She decided to visit the world's first, Toynbee Hall, in London. She and several friends, including Ellen Gates Starr, traveled in Europe from December 1887 through the summer of 1888. After watching a bullfight in Madrid, fascinated by what she saw as an exotic tradition, Addams later condemned her own fascination and her inability to feel outraged at the suffering of the horses and bulls. At first, Addams told no one about her dream to start a settlement house; but, she felt increasingly guilty for not acting on her dream. Believing that sharing her dream might help her to act on it, she told Ellen Gates Starr. Starr loved the idea and agreed to join Addams in starting a settlement house. Addams and another friend traveled to London without Starr, who was busy. Visiting Toynbee Hall, Addams was enchanted. She described it as "a community of University men who live there, have their recreation clubs and society all among the poor people, yet, in the same style in which they would live in their own circle. It is so free of 'professional doing good,' so unaffectedly sincere and so productive of good results in its classes and libraries [that it] seems perfectly ideal." Addams's dream of the classes mingling socially to mutual benefit, as they had in early Christian circles, seemed embodied in the new type of institution. The settlement house, as Addams discovered, was a space within which unexpected cultural connections could be made and where the narrow boundaries of culture, class, and education could be expanded. Settlement houses doubled as community arts centers and social service facilities. They laid the foundations for American civil society, a neutral space within which different communities and ideologies could learn from each other and seek common grounds for collective action. The role of the settlement house was an "unending effort to make culture and 'the issue of things' go together." The unending effort was the story of her own life, a struggle to reinvigorate her own culture by reconnecting with the diversity and conflict of the immigrant communities in America's cities and with the necessities of social reform. Hull House In 1889 Addams and her college friend and paramour Ellen Gates Starr co-founded Hull House, a settlement house in Chicago. The run-down mansion had been built by Charles Hull in 1856 and needed repairs and upgrading. Addams at first paid for all of the capital expenses (repairing the roof of the porch, repainting the rooms, buying furniture) and most of the operating costs. 
However, gifts from individuals supported the House beginning in its first year, and Addams was able to reduce the proportion of her contributions, although the annual budget grew rapidly. Some wealthy women became long-term donors to the House, including Helen Culver, who managed her first cousin Charles Hull's estate and eventually allowed the settlement to use the house rent-free. Other contributors included Louise DeKoven Bowen, Mary Rozet Smith, and Mary Wilmarth. Addams and Starr were the first two occupants of the house, which would later become the residence of about 25 women. At its height, Hull House was visited each week by some 2,000 people. Hull House was a center for research, empirical analysis, study, and debate, as well as a pragmatic center for living in and establishing good relations with the neighborhood. Among the aims of Hull House was to give privileged, educated young people contact with the real life of the majority of the population. Residents of Hull House conducted investigations on housing, midwifery, fatigue, tuberculosis, typhoid, garbage collection, cocaine, and truancy. The core Hull House residents were well-educated women bound together by their commitment to labor unions, the National Consumers League and the suffrage movement. Dr. Harriett Alleyne Rice joined Hull House to provide medical treatment for poor families. Its facilities included a night school for adults, clubs for older children, a public kitchen, an art gallery, a gym, a girls' club, a bathhouse, a book bindery, a music school, a drama group and a theater, apartments, a library, meeting rooms for discussion, clubs, an employment bureau, and a lunchroom. Its adult night school was a forerunner of the continuing education classes offered by many universities today. In addition to making available social services and cultural events for the largely immigrant population of the neighborhood, Hull House afforded an opportunity for young social workers to acquire training. Eventually, Hull House became a 13-building settlement complex, which included a playground and a summer camp (known as Bowen Country Club). One aspect of Hull House that was very important to Jane Addams was its art program. The art program allowed Addams to challenge the system of industrialized education, which "fitted" the individual to a specific job or position. She wanted the house to provide a space, time and tools to encourage people to think independently. She saw art as the key to unlocking the diversity of the city through collective interaction, mutual self-discovery, recreation and the imagination. Art was integral to her vision of community, disrupting fixed ideas and stimulating the diversity and interaction on which a healthy society depends, based on a continual rewriting of cultural identities through variation and interculturalism. With funding from Edward Butler, Addams opened an art exhibition and studio space as one of the first additions to Hull House. On the first floor of the new addition was a branch of the Chicago Public Library, and on the second was the Butler Art Gallery, which featured reproductions of famous artwork as well as the work of local artists. Studio space within the art gallery provided both Hull House residents and the entire community with the opportunity to take art classes or to come in and hone their craft whenever they liked. 
As Hull House grew, and the relationship with the neighborhood deepened, that opportunity became less of a comfort to the poor and more of an outlet for expression and exchange among different cultures and diverse communities. Art and culture were becoming a bigger and more important part of the lives of immigrants within the 19th ward, and soon children caught on to the trend. These working-class children were offered instruction in all forms and levels of art. Places such as the Butler Art Gallery or the Bowen Country Club often hosted these classes, but more informal lessons would often be taught outdoors. Addams, with the help of Ellen Gates Starr, founded the Chicago Public School Art Society (CPSAS) in response to the positive reception of the children's art classes. The CPSAS provided public schools with reproductions of world-renowned pieces of art, hired artists to teach children how to create art, and also took the students on field trips to Chicago's many art museums. Near west side neighborhood The Hull House neighborhood was a mix of European ethnic groups that had immigrated to Chicago around the start of the 20th century. That mix was the ground on which Hull House's social and philanthropic elite tested their theories and challenged the establishment. The ethnic mix is recorded by the Bethlehem-Howard Neighborhood Center: "Germans and Jews resided south of that inner core (south of Twelfth Street) ... The Greek delta formed by Harrison, Halsted Street, and Blue Island Streets served as a buffer to the Irish residing to the north and the French Canadians to the northwest." Italians resided within the inner core of the Hull House Neighborhood ... from the river on the east end, on out to the western ends of what came to be known as Little Italy. Greeks and Jews, along with the remnants of other immigrant groups, began their exodus from the neighborhood in the early 20th century. Only Italians continued as an intact and thriving community through the Great Depression, World War II, and well beyond the ultimate demise of Hull House proper in 1963. Hull House became America's best-known settlement house. Addams used it to generate system-directed change, on the principle that to keep families safe, community and societal conditions had to be improved. The neighborhood was controlled by local political bosses. Ethics Starr and Addams developed three "ethical principles" for social settlements: "to teach by example, to practice cooperation, and to practice social democracy, that is, egalitarian, or democratic, social relations across class lines." Thus Hull House offered a comprehensive program of civic, cultural, recreational, and educational activities and attracted admiring visitors from all over the world, including William Lyon Mackenzie King, a graduate student from Harvard University who later became prime minister of Canada. In the 1890s Julia Lathrop, Florence Kelley, and other residents of the house made it a world center of social reform activity. Hull House used the latest methodology (pioneering in statistical mapping) to study overcrowding, truancy, typhoid fever, cocaine, children's reading, newsboys, infant mortality, and midwifery. Starting with efforts to improve the immediate neighborhood, the Hull House group became involved in city and statewide campaigns for better housing, improvements in public welfare, stricter child-labor laws, and protection of working women. 
Addams brought in prominent visitors from around the world and had close links with leading Chicago intellectuals and philanthropists. In 1912, she helped start the new Progressive Party and supported the presidential campaign of Theodore Roosevelt. "Addams' philosophy combined feminist sensibilities with an unwavering commitment to social improvement through cooperative efforts. Although she sympathized with feminists, socialists, and pacifists, Addams refused to be labeled. This refusal was pragmatic rather than ideological." Emphasis on children Hull House stressed the importance of the role of children in the Americanization process of new immigrants. This philosophy also fostered the play movement and the research and service fields of leisure, youth, and human services. Addams argued in The Spirit of Youth and the City Streets (1909) that play and recreation programs are needed because cities are destroying the spirit of youth. Hull House featured multiple programs in art and drama, kindergarten classes, boys' and girls' clubs, language classes, reading groups, college extension courses, along with public baths, a gymnasium, a labor museum and playground, all within a free-speech atmosphere. They were all designed to foster democratic cooperation and collective action and to downplay individualism. She helped pass the first model tenement code and the first factory laws. Along with her colleagues from Hull House, in 1901 Jane Addams founded what would become the Juvenile Protective Association. JPA provided the first probation officers for the first Juvenile Court in the United States until this became a government function. From 1907 until the 1940s, JPA engaged in many studies examining such subjects as racism, child labor and exploitation, drug abuse and prostitution in Chicago and their effects on child development. Over the years, its mission has become improving the social and emotional well-being and functioning of vulnerable children so that they can reach their fullest potential at home, in school, and in their communities. Documenting social illnesses Addams and her colleagues documented the communal geography of typhoid fever and reported that poor workers were bearing the brunt of the illness. She identified the political corruption and business avarice that caused the city bureaucracy to ignore health, sanitation, and building codes. Linking environmental justice and municipal reform, she eventually defeated the bosses and fostered a more equitable distribution of city services and modernized inspection practices. Addams spoke of the "undoubted powers of public recreation to bring together the classes of a community" that other forces were keeping apart. Addams worked with the Chicago Board of Health and served as the first vice-president of the Playground Association of America. Emphasis on prostitution In 1912, Addams published A New Conscience and an Ancient Evil, about prostitution. The book was extremely popular. Addams believed that women entered prostitution only as a result of coercion or kidnapping. Her book later inspired Stella Wynne Herron's 1916 short story Shoes, which Lois Weber adapted into a groundbreaking 1916 film of the same name. 
Feminine ideals Addams and her colleagues originally intended Hull House as a transmission device to bring the values of the college-educated high culture to the masses, including the Efficiency Movement, a major movement in industrial nations in the early 20th century that sought to identify and eliminate waste in the economy and society, and to develop and implement best practices. However, over time, the focus changed from bringing art and culture to the neighborhood (as evidenced in the construction of the Butler Building) to responding to the needs of the community by providing childcare, educational opportunities, and large meeting spaces. Hull House became more than a proving ground for the new generation of college-educated, professional women: it also became part of the community in which it was founded, and its development reveals a shared history. Addams called on women, especially middle-class women with leisure time and energy as well as rich philanthropists, to exercise their civic duty to become involved in municipal affairs as a matter of "civic housekeeping". Addams thereby enlarged the concept of civic duty to include roles for women beyond motherhood (which involved child rearing). Women's lives revolved around "responsibility, care, and obligation", which represented the source of women's power. This notion provided the foundation for the municipal or civil housekeeping role that Addams defined and gave added weight to the women's suffrage movement that Addams supported. Addams argued that women, as opposed to men, were trained in the delicate matters of human welfare and needed to build upon their traditional roles of housekeeping to be civic housekeepers. Enlarged housekeeping duties involved reform efforts regarding poisonous sewage, impure milk (which often carried tuberculosis), smoke-laden air, and unsafe factory conditions. Addams led the "garbage wars"; in 1894 she became the first woman appointed as sanitary inspector of Chicago's 19th Ward. With the help of the Hull House Women's Club, over 1,000 health department violations were reported to the city council within a year, and improved garbage collection reduced death and disease in the ward. Addams had long discussions with philosopher John Dewey in which they redefined democracy in terms of pragmatism and civic activism, with an emphasis more on duty and less on rights. Two leading perspectives distinguished Addams and her coalition from the modernizers more concerned with efficiency: first, the need to extend to social and economic life the democratic structures and practices that had been limited to the political sphere, as in Addams's programmatic support of trade unions; and second, their call for a new social ethic to supplant the individualist outlook, which they considered no longer adequate in modern society. Addams's construction of womanhood involved daughterhood, sexuality, wifehood, and motherhood. In both of her autobiographical volumes, Twenty Years at Hull-House (1910) and The Second Twenty Years at Hull-House (1930), Addams's gender constructions parallel the Progressive-Era ideology she championed. In A New Conscience and an Ancient Evil (1912) she dissected the social pathology of sex slavery, prostitution and other sexual behaviors among working-class women in American industrial centers from 1890 to 1910. 
Addams's autobiographical persona manifests her ideology and supports her popularized public activist persona as the "Mother of Social Work", in the sense that she represents herself as a celibate matron who served the suffering immigrant masses through Hull House, as if they were her own children. Although not a mother herself, Addams became the "mother to the nation", identified with motherhood in the sense of protective care of her people. Teaching Addams kept up her heavy schedule of public lectures around the country, especially on college campuses. In addition, she offered college courses through the Extension Division of the University of Chicago. She declined offers from the university to become directly affiliated with it, including an offer from Albion Small, chair of the Department of Sociology, of a graduate faculty position, in order to maintain her independent role outside of academia. Her goal was to teach adults who, because of poverty or lack of credentials, were not enrolled in formal academic institutions. Furthermore, she wanted no university controls over her political activism. Addams was appointed to serve on the Chicago Board of Education. Addams was a charter member of the American Sociological Society, founded in 1905. She presented papers to it in 1912, 1915, and 1919. She was the most prominent woman member during her lifetime. Relationships Addams was close to a wide circle of women and was very good at drawing women of different classes into Hull House's programs. Throughout her life Addams had romantic relationships with a few of these women, including Mary Rozet Smith and Ellen Starr. These relationships gave her the emotional and romantic support that sustained her in her social work. Because her romantic relationships were exclusively with women, she would most likely be described in contemporary terms as a lesbian, like many leading figures of the time in the Women's International League for Peace and Freedom. Her first romantic partner was Ellen Starr, with whom she founded Hull House and whom she had met when both were students at Rockford Female Seminary. In 1889, the two started their settlement house project, leasing a house in Chicago. Her second romantic partner was Mary Rozet Smith, who was wealthy and supported Addams's work at Hull House, and with whom she shared a house. Historian Lillian Faderman wrote that Addams was in love with Smith, addressing her as "My Ever Dear", "Darling" and "Dearest One", and concluded that the two shared the intimacy of a married couple. They remained together for 40 years, until Mary died of pneumonia in 1934. It was said that "Mary Smith became and always remained the highest and clearest note in the music that was Jane Addams' personal life". Together they owned a summer house in Bar Harbor, Maine. When apart, they would write to each other at least once a day – sometimes twice. Addams would write to Smith, "I miss you dreadfully and am yours 'til death". The letters also show that the women saw themselves as a married couple: "There is reason in the habit of married folks keeping together", Addams wrote to Smith. Religion and religious motives Addams's religious beliefs were shaped by her wide reading and life experience. She saw her settlement work as part of the "social Christian" movement. Addams learned about social Christianity from the co-founders of Toynbee Hall, Samuel and Henrietta Barnett. 
The Barnetts held a great interest in converting others to Christianity, but they believed that Christians should be more engaged with the world and, in the words of one of the leaders of the social Christian movement in England, W. H. Fremantle, "imbue all human relations with the spirit of Christ's self-renouncing love". According to Christie and Gauvreau (2001), while the Christian settlement houses sought to Christianize, Jane Addams "had come to epitomize the force of secular humanism." Her image was, however, "reinvented" by the Christian churches. According to Joslin (2004), "The new humanism, as [Addams] interprets it comes from a secular, and not a religious, pattern of belief". According to the Jane Addams Hull-House Museum, "Some social settlements were linked to religious institutions. Others, like Hull-House [co-founded by Addams], were secular." Hilda Satt Polacheck, a former resident of Hull House, stated that Addams firmly believed in religious freedom and in bringing people of all faiths into the social, secular fold of Hull House. The one exception, she noted, was the annual Christmas party, although Addams left the religious side of that to the church. The Bible served Addams as both a source of inspiration for her life of service and a manual for pursuing her calling. The emphasis on following Jesus' example and actively advancing the establishment of God's Kingdom on earth is also evident in Addams's work and the Social Gospel movement. Politics Peace movement In 1898, Addams joined the Anti-Imperialist League, in opposition to the U.S. annexation of the Philippines. A staunch supporter of the Progressive Party, she seconded the nomination of Theodore Roosevelt for the presidency during the Party Convention, held in Chicago in August 1912. She endorsed the party platform, even though it called for building more battleships. She went on to speak and campaign extensively for Roosevelt in 1912. In January 1915, she became involved in the Woman's Peace Party and was elected national chairman. Addams was invited by European women peace activists to preside over the International Congress of Women in The Hague, April 28–30, 1915, and was chosen to head the commission to find an end to the war. This included meeting with ten leaders, in neutral countries as well as those at war, to discuss mediation. This was the first significant international effort against the war. Addams, along with co-delegates Emily Balch and Alice Hamilton, documented their experiences of this venture, published as a book, Women at The Hague (University of Illinois). In her journal, Balch recorded her impression of Jane Addams (April 1915): Miss Addams shines, so respectful of everyone's views, so eager to understand and sympathize, so patient of anarchy and even ego, yet always there, strong, wise and in the lead. No 'managing', no keeping dark and bringing things subtly to pass, just a radiating wisdom and power of judgement. Addams was elected president of the International Committee of Women for a Permanent Peace, established to continue the work of the Hague Congress, at a conference in 1919 in Zürich, Switzerland. The International Committee developed into the Women's International League for Peace and Freedom (WILPF). Addams continued as president, a position that entailed frequent travel to Europe and Asia. 
In 1917, she also became a member of the Fellowship of Reconciliation USA (which became the American branch of the International Fellowship of Reconciliation, founded in 1919) and was a member of the Fellowship Council until 1933. When the US entered the war in 1917, Addams came under increasingly harsh criticism and rebuke as a pacifist. Her 1915 speech on pacifism at Carnegie Hall received negative coverage by newspapers such as The New York Times, which branded her as unpatriotic. Later, during her travels, she spent time meeting with a wide variety of diplomats and civic leaders and reiterating her Victorian belief in women's special mission to preserve peace. Recognition of these efforts came with the award of the Nobel Peace Prize to Addams in 1931. As the first U.S. woman to win the prize, Addams was applauded for her "expression of an essentially American democracy." She donated her share of the prize money to the Women's International League for Peace and Freedom. Pacifism Addams was a major synthesizing figure in the domestic and international peace movements, serving as both a figurehead and leading theoretician; she was influenced especially by Russian novelist Leo Tolstoy and by the pragmatism of philosophers John Dewey and George Herbert Mead. Her books, particularly Newer Ideals of Peace and Peace and Bread in Time of War, and her peace activism informed early feminist theories and perspectives on peace and war. She envisioned democracy, social justice and peace as mutually reinforcing; they all had to advance together to achieve any one. Addams became an anti-war activist in 1899, as part of the anti-imperialist movement that followed the Spanish–American War. Her book Newer Ideals of Peace (1907) reshaped the peace movement worldwide to include ideals of social justice. She recruited social justice reformers like Alice Hamilton, Lillian Wald, Florence Kelley, and Emily Greene Balch to join her in the new international women's peace movement after 1914. Addams's work came to fruition after World War I, when major institutional bodies began to link peace with social justice and probe the underlying causes of war and conflict. In 1899 and 1907, world leaders sought peace by convening innovative and influential peace conferences at The Hague, which produced the Hague Conventions of 1899 and 1907. A planned 1914 conference was canceled due to World War I. The void was filled by the unofficial congress of women convened at The Hague. At the time, both the US and the Netherlands were neutral. Jane Addams chaired this pathbreaking International Congress of Women at The Hague, which included almost 1,200 participants from 12 warring and neutral countries. Their goal was to develop a framework to end the violence of war. Both national and international political systems excluded women's voices. The women delegates argued that the exclusion of women from policy discourse and decisions around war and peace resulted in flawed policy. The delegates adopted a series of resolutions addressing these problems and called for extending the franchise and for women's meaningful inclusion in formal international peace processes at war's end. Following the conference, Addams and a delegation from the congress traveled throughout Europe meeting with leaders, citizen groups, and wounded soldiers from both sides. Her leadership during the conference and her travels to the capitals of the war-torn regions were cited in nominations for the Nobel Peace Prize. Addams was opposed to U.S. 
interventionism and expansionism and was ultimately against those who sought American dominance abroad. In 1915, she gave a speech at Carnegie Hall and was booed offstage for opposing U.S. intervention in World War I. Addams damned war as a cataclysm that undermined human kindness, solidarity, and civic friendship, and caused families across the world to struggle. In turn, her views were denounced by patriotic groups and newspapers during World War I (1917–18). Oswald Garrison Villard came to her defense when she suggested that armies gave liquor to soldiers just before major ground attacks. "Take the case of Jane Addams for one. With what abuse did not the [New York] Times cover her, one of the noblest of our women, because she told the simple truth that the Allied troops were often given liquor or drugs before charging across No Man's Land. Yet when the facts came out at the hands of Sir Philip Gibbs and others not one word of apology was ever forthcoming." Even after the war, the WILPF's program of peace and disarmament was characterized by opponents as radical, Communist-influenced, unpatriotic, and unfeminine. Young veterans in the American Legion, supported by some members of the Daughters of the American Revolution (DAR) and the League of Women Voters, were ill-prepared to confront the older, better-educated, more financially secure and nationally famous women of the WILPF. Nevertheless, the DAR could and did expel Addams from membership in the organization. The Legion's efforts to portray the WILPF members as dangerously naive females resonated with working-class audiences, but President Calvin Coolidge and the middle classes supported Addams and her WILPF efforts in the 1920s to prohibit poison gas and outlaw war. After 1920, however, she was widely regarded as the greatest woman of the Progressive Era. In 1931, the award of the Nobel Peace Prize earned her near-unanimous acclaim. Philosophy and "peaceweaving" Jane Addams was also a philosopher of peace. Peace theorists often distinguish between negative and positive peace. Negative peace deals with the absence of violence or war. Positive peace is more complicated. It deals with the kind of society we aspire to, and can take into account concepts like justice, cooperation, the quality of relationships, freedom, order and harmony. Jane Addams's philosophy of peace is a type of positive peace. Patricia Shields and Joseph Soeters (2017) have summarized her ideas of peace using the term peaceweaving. They use weaving as a metaphor because it denotes connection. Fibers come together to form a cloth, which is both flexible and strong. Further, weaving is an activity in which men and women have historically engaged. Addams's peaceweaving is a process which builds "the fabric of peace by emphasizing relationships. Peaceweaving builds these relationships by working on practical problems, engaging people widely with sympathetic understanding while recognizing that progress is measured by the welfare of the vulnerable". Eugenics Addams supported eugenics and was vice president of the American Social Hygiene Association, which advocated eugenics in an effort to improve the social 'hygiene' of American society. She was a close friend of noted eugenicists David Starr Jordan and Charlotte Perkins Gilman, and was an avid proponent of the ideas of G. Stanley Hall. 
Addams's belief in eugenics was tied to her desire to eliminate what she perceived to be 'social ills'. Prohibition While "no record is available of any speech she ever made on behalf of the eighteenth amendment", she nonetheless supported prohibition on the basis that alcohol "was of course a leading lure and a necessary element in houses of prostitution, both from a financial and a social standpoint." She repeated the claim that "professional houses of prostitution could not sustain themselves without the 'vehicle of alcohol.'" Death While Addams was often troubled by health problems in her youth and throughout her life, her health began to take a more serious decline after she suffered a heart attack in 1926. She died on May 21, 1935, at the age of 74, in Chicago and is buried at Cedarville Cemetery in her hometown of Cedarville, Illinois. Adult life and legacy Hull House and the Peace Movement are widely recognized as the key tangible pillars of Addams's legacy. While her life focused on the development of individuals, her ideas continue to influence social, political and economic reform in the United States, as well as internationally. Addams and Starr's creation of the settlement house, Hull House, impacted the community, immigrant residents, and social work. Willard Motley, a resident artist of Hull House, extracting from Addams's central theory of symbolic interactionism, used the neighborhood and its people to write his 1948 best seller, Knock on Any Door. The novel was adapted into a well-known courtroom film in 1949. The book and film brought attention to everyday life inside a settlement house and to Motley's relationship with Jane Addams. Addams's role as reformer enabled her to petition the establishment and alter the social and physical geography of her Chicago neighborhood. Although contemporary academic sociologists defined her engagement as "social work", Addams's efforts differed significantly from activities typically labeled as "social work" during that time period. Before Addams's powerful influence on the profession, social work was largely informed by a "friendly visitor" model in which typically wealthy women of high public stature visited impoverished individuals and, through systematic assessment and intervention, aimed to improve the lives of the poor. Addams rejected the friendly visitor model in favor of a model of social reform/social theory-building, thereby introducing the now-central tenets of social justice and reform to the field of social work. Addams worked with other reform groups toward goals including the first juvenile court law, tenement-house regulation, an eight-hour working day for women, factory inspection, and workers' compensation. She advocated research aimed at determining the causes of poverty and crime, and she supported women's suffrage. She was a strong advocate of justice for immigrants, African Americans, and other minority groups, becoming a charter member of the NAACP. Among the projects that the members of Hull House opened were the Immigrants' Protective League, the Juvenile Protective Association, the first juvenile court in the United States, and a juvenile psychopathic clinic. Addams's influential writings and speeches, on behalf of the formation of the League of Nations and as a peace advocate, influenced the later shape of the United Nations. 
Jane Addams also sponsored the work of Neva Boyd, who founded the Recreational Training School at Hull House, a one-year educational program in group games, gymnastics, dancing, dramatic arts, play theory, and social problems. At Hull House, Neva Boyd ran movement and recreational groups for children, using games and improvisation to teach language skills, problem-solving, self-confidence and social skills. During the Great Depression, Boyd worked with the Recreational Project of the Works Progress Administration (WPA) through the Chicago Training School for Playground Workers, which subsequently became the foundation for the Recreational Therapy and Educational Drama movements in the U.S. One of her best-known disciples, Viola Spolin, taught in the Recreational Theater Program at Hull House during the WPA era. Spolin went on to be a pioneer in the improvisational theater movement in the US and the inventor of Theater Games. Jane Addams's main legacy includes her role in the creation of Hull House, her impact on communities and the broader social structure, her outreach to colleges and universities in hopes of bettering the educational system, and the knowledge she passed on to others through speeches and books. She paved the way for women by publishing several books and by winning the Nobel Peace Prize in 1931, which she shared with Nicholas Murray Butler. The Jane Addams Papers Project, originally housed at the University of Illinois at Chicago and Duke University, was relocated to Ramapo College in 2015. The project's digital edition actively engages students and the world with the work and correspondence of Jane Addams. The Addams neighborhood and elementary school in Long Beach, California, are named for her. Sociology Jane Addams was intimately involved in the founding of sociology as a field in the United States. Hull House enabled Addams to befriend and become a colleague of early members of the Chicago School of Sociology. She actively contributed to the academic sociology literature, publishing five articles in the American Journal of Sociology between 1896 and 1914. Her influence, through her work in applied sociology, impacted the thought and direction of the Chicago School of Sociology's members. In 1893, she co-authored the compilation of essays written by Hull House residents and workers titled Hull-House Maps and Papers. These ideas helped shape and define the interests and methodologies of the Chicago School. She worked with American philosopher George Herbert Mead and John Dewey on social reform issues, including promoting women's rights, ending child labor, and mediating during the 1910 Garment Workers' Strike. This strike in particular reshaped thinking about protest because it involved women workers, ethnicity, and working conditions, all issues that Addams wanted society to address. The University of Chicago Sociology department was established in 1892, three years after Hull House was established (1889). Members of Hull House welcomed the first group of professors, who soon were "intimately involved with Hull House" and assiduously engaged with applied social reform and philanthropy. 
In 1893, for example, faculty (Vincent, Small and Bennis) worked with Jane Addams and fellow Hull House resident Florence Kelley to pass legislation "banning sweat shops and employment of children". Albion Small, chair of the Chicago Department of Sociology and founder of the American Journal of Sociology, called for a sociology that was active "in the work of perfecting and applying plans and devices for social improvement and amelioration", which took place in the "vast sociological laboratory" that was 19th-century Chicago. Although untenured, women residents of Hull House taught classes in the Chicago Sociology Department. During and after World War I, the focus of the Chicago Sociology Department shifted away from social activism toward a more scholarly orientation. Social activism was also associated with Communism and a "weaker" woman's work orientation. In response to this change, women sociologists in the department "were moved en masse out of sociology and into social work" in 1920. The contributions of Jane Addams and other Hull House residents were buried in history. Mary Jo Deegan, in her 1988 book Jane Addams and the Men of the Chicago School, 1892–1918, was the first to recover Addams's influence on sociology. Deegan's work has led to recognition of Addams's place in sociology. In a 2001 address, for example, Joe Feagin, then president of the American Sociological Association, identified Addams as a "key founder" and called for sociology to again claim its activist roots and commitment to social justice. Remembrances On December 10, 2007, Illinois celebrated the first annual Jane Addams Day. Jane Addams Day was initiated by a dedicated school teacher from Dongola, Illinois, assisted by the Illinois Division of the American Association of University Women (AAUW). Chicago activist Jan Lisa Huttner traveled throughout Illinois as Director of International Relations for AAUW-Illinois to help publicize the date, and later gave annual presentations about Jane Addams Day in costume as Jane Addams. In 2010, Huttner appeared as Jane Addams at a 150th Birthday Party sponsored by Rockford University (Jane Addams's alma mater), and in 2011, she appeared as Jane Addams at an event sponsored by the Chicago Park District. There is a Jane Addams Memorial Park located near Navy Pier in Chicago. A six-piece sculptural grouping honoring Addams by Louise Bourgeois, called "Helping Hands", was originally installed in 1993 at Addams Memorial Park; it was relocated to Chicago Women's Park and Gardens in 2011 after being vandalized. The Jane Addams memorial sculpture was Chicago's first major artwork to honor an important woman. In 2007, the state of Illinois renamed the Northwest Tollway as the Jane Addams Memorial Tollway. Most of the Hull House buildings were demolished in 1963 to make way for the campus of the University of Illinois at Chicago; others were relocated. The Hull residence itself and a related building are preserved as a museum and monument to Jane Addams. The Jane Addams College of Social Work is a professional school at the University of Illinois at Chicago. Jane Addams Business Careers Center is a high school in Cleveland, Ohio. Jane Addams High School For Academic Careers is a high school in the Bronx, New York. Jane Addams House is a residence hall built in 1936 at Connecticut College. In 1973, Jane Addams was inducted into the National Women's Hall of Fame. In 2008 Jane Addams was inducted into the Chicago Gay and Lesbian Hall of Fame. 
Addams was inducted into the Chicago Literary Hall of Fame in 2012. Also in 2012, she was inducted into the Legacy Walk, an outdoor public display which celebrates LGBTQ history and people. In 2014, Jane Addams was one of the first 20 honorees awarded a 3-foot by 3-foot bronze plaque on San Francisco's Rainbow Honor Walk (www.rainbowhonorwalk.org), which pays tribute to LGBT heroes and heroines. In 2015, Addams was named by Equality Forum as one of their 31 Icons of the 2015 LGBT History Month. Works by Jane Addams Books Democracy and Social Ethics. New York, The Macmillan Company, 1902. Newer Ideals of Peace. New York, The Macmillan Company, 1907. The Spirit of Youth and the City Streets. New York, The Macmillan Company, 1909. Twenty Years at Hull House. With autobiographical notes. New York, The New American Library, 1910. Symposium: child labor on the stage. National Child Labor Committee, New York [1911?]. A New Conscience and an Ancient Evil. New York, The Macmillan Company, 1912. The Long Road of Woman's Memory. New York, The Macmillan Company, 1916. Peace and Bread in Time of War. New York, The Macmillan Company, 1922. The Second Twenty Years at Hull House. New York, The Macmillan Company, 1930. The Excellent Becomes the Permanent. New York, The Macmillan Company, 1932. My Friend Julia Lathrop. New York, The Macmillan Company, 1935. (ed. 2004, Urbana, University of Illinois Press) Collaborative Works Women at The Hague: The International Congress of Women, with Alice Hamilton and Emily Greene Balch, Macmillan Company 1915. Personal Papers Jane Addams Digital Edition Jane Addams Papers Project, Ramapo College of New Jersey. See also Jane Addams Burial Site Jane Addams School for Democracy Jane Addams Middle School Jane Addams Children's Book Award John H. Addams Homestead List of American philosophers List of female Nobel laureates List of peace activists List of suffragists and suffragettes List of women's rights activists John Dewey Florence Kelley Flora Dunlap Mary Treglia Elizabeth Harrison (educator) Community practice social work Stanton Street Settlement Progressive Party (United States, 1912) American philosophy International Fellowship of Reconciliation Addams (crater) References Further reading Archival resources Jane Addams Collection, 1838–date (bulk 1880–1935) is housed at Swarthmore College Peace Collection. Jane Addams Papers, 1904–1960 (bulk 1904–1936) is housed at Smith College Sophia Smith Collection. In 2015, the Jane Addams Papers Project relaunched at Ramapo College, led by Cathy Moran Hajo and others: https://janeaddams.ramapo.edu For more information on the history and current archival efforts see Moran Hajo, Cathy (2023). "Making the Jane Addams Papers Accessible to New Audiences", in Patricia M. Shields, Maurice Hamington, and Joseph Soeters (eds), The Oxford Handbook of Jane Addams. Oxford Academic. Jane Addams Correspondence, 1872–1935 (inclusive) (23 reels) is housed at Harvard University Radcliffe Institute for Advanced Study. Biographies Davis, Allen F. American Heroine: The Life and Legend of Jane Addams (1973), 339pp, solid scholarship but tends toward debunking Diliberto, Gioia. A Useful Woman: The Early Life of Jane Addams. (1999). 318 pp. Elshtain, Jean Bethke. Jane Addams and the Dream of American Democracy: A Life. Basic Books: 2002. online edition, by a leading conservative scholar Haldeman-Julius, Marcet. Jane Addams As I Knew Her. Girard, Kansas: Haldeman-Julius Publications, ca. 1936. Marcet was Addams's niece. Knight, Louise W. 
Citizen: Jane Addams and the Struggle for Democracy. (2005). 582 pp.; biography to 1899 online edition Knight, Louise W. Jane Addams: Spirit in Action. (2010). 334 pp., complete biography aimed at a broader audience. Joslin, Katherine. Jane Addams: A Writer's Life. (2004). 306 pp. Linn, James W. Jane Addams: A Biography. (1935) 457 pp, by her admiring nephew Specialty studies Agnew, Elizabeth N. "A Will to Peace: Jane Addams, World War I, and 'Pacifism in Practice'" Peace & Change (2017) 42#1 pp 5–31 Alonso, Harriet Hyman. "Nobel Peace Laureates, Jane Addams And Emily Greene Balch: Two Women of the Women's International League for Peace and Freedom". Journal of Women's History 1995 7(2): 6–26. Beauboeuf-Lafontant, Tamara. "Becoming Jane Addams: Feminist Developmental Theory and 'The College Woman'" Girlhood Studies (2014) 7#2 pp: 61–78. Beer, Janet and Joslin, Katherine. "Diseases of the Body Politic: White Slavery in Jane Addams' "A New Conscience and an Ancient Evil" and "Selected Short Stories" by Charlotte Perkins Gilman". Journal of American Studies 1999 33(1): 1–18. Bowen, Louise de Koven. Growing Up with a City. New York: The Macmillan Company, 1926. Brinkmann, Tobias. Sundays at Sinai: A Jewish Congregation in Chicago (2012), on Addams's relationship with Chicago Jews. Bryan, Mary Lynn McCree, and Allen F. Davis. One Hundred Years at Hull-House (1990), a history of the programs there Burnier, D. (2022) The long road of administrative memory: Jane Addams, Frances Perkins, and care-centered administration. In Shields, P. and Elias, N. eds. The Handbook of Gender and Public Administration. pp. 53–67. Edward Elgar. https://www.elgaronline.com/display/edcoll/9781789904727/9781789904727.00012.xml Cracraft, James. Two Shining Souls: Jane Addams, Leo Tolstoy, and the Quest for Global Peace (Lanham: Lexington, 2012). 179 pp. Carson, Mina. Settlement Folk: Social Thought and the American Settlement Movement, 1885–1930 (1990) Chansky, Dorothy. "Re-visioning Reform", American Quarterly vol 55 #3 (2003) 515–523 online at Project MUSE Curti, Merle. "Jane Addams on Human Nature", Journal of the History of Ideas Vol. 22, No. 2 (Apr. 1961), pp. 240–253 in JSTOR Danielson, Caroline Page. "Citizen Acts: Citizenship and Political Agency in the Works of Jane Addams, Charlotte Perkins Gilman, and Emma Goldman". PhD dissertation U. of Michigan 1996. 331 pp. DAI 1996 57(6): 2651-A. DA9635502 Fulltext: ProQuest Dissertations & Theses Dawley, Alan. Changing the World: American Progressives in War and Revolution (2003) Deegan, Mary Jo. "Jane Addams, the Hull-House School of Sociology, and Social Justice, 1892 to 1935". Humanity & Society (2013) 37#3 pp: 248–258. Deegan, Mary Jo. Jane Addams and the Men of the Chicago School, 1892–1918. (Transaction, Inc., 1988). Donovan, Brian. White Slave Crusades: Race, Gender, and Anti-Vice Activism, 1887–1917. (U of Illinois Press. 2006). 186 pp. Duffy, William. "Remembering is the Remedy: Jane Addams's Response to Conflicted Discourse". Rhetoric Review (2011) 30#2 pp: 135–152. Fischer, Marilyn; Nackenoff, Carol; Chmielewski, Wendy eds. Jane Addams and the Practice of Democracy (2009), 230 pp; 11 specialized essays by scholars. Foust, Mathew A. "Perplexities of Filiality: Confucius and Jane Addams on the Private/Public Distinction", Asian Philosophy (2008) 18(2): 149–166. Grimm, Robert Thornton Jr. "Forerunners for a Domestic Revolution: Jane Addams, Charlotte Perkins Gilman, and the Ideology Of Childhood, 1900–1916". Illinois Historical Journal 1997 90(1): 47–64. 
Gustafson, Melanie. Women and the Republican Party, 1854–1924 (University of Illinois Press, 2001). Hamington, Maurice. "Jane Addams", Stanford Encyclopedia of Philosophy (2007) online edition, Addams as philosopher Hamington, Maurice. Embodied Care: Jane Addams, Maurice Merleau-Ponty, and Feminist Ethics (2004) excerpt and online search at amazon.com Hamington, Maurice. "Jane Addams and a Politics of Embodied Care", The Journal of Speculative Philosophy v 15 #2 2001, pp. 105–121 online at Project MUSE Hamington, Maurice. "Public Pragmatism: Jane Addams and Ida B. Wells on Lynching", The Journal of Speculative Philosophy v. 19#2 (2005), pp. 167–174 online at Project MUSE Hansen, Jonathan M. "Fighting Words: The Transnational Patriotism of Eugene V. Debs, Jane Addams, and W. E. B. Du Bois". PhD dissertation Boston U. 1997. 286 pp. DAI 1997 57(10): 4511-A. DA9710148 Fulltext: ProQuest Dissertations & Theses Henderson, Karla A. "Jane Addams: Leisure Services Pioneer". Journal of Physical Education, Recreation & Dance, (1982) 53#2 pp. 42–45 Imai, Konomi (今井小の実). "The Women's Movement and the Settlement Movement in Early Twentieth-Century Japan: The Impact of Hull House and Jane Addams on Hiratsuka Raichō". Kwansei Gakuin University humanities review 17 (2013): 85–109. online Jackson, Shannon. Lines of Activity: Performance, Historiography, Hull-House Domesticity (2000). 384 pp. Joslin, Katherine. Jane Addams: A Writer's Life (2009) excerpt and text search Krysiak, Barbara H. "Full-Service Community Schools: Jane Addams Meets John Dewey". School Business Affairs, v67 n8, pp. 4–8, Aug 2001. Knight, Louise W. "An Authoritative Voice: Jane Addams and the Oratorical Tradition". Gender & History 1998 10(2): 217–251. Fulltext: Ebsco Knight, Louise W. "Biography's Window on Social Change: Benevolence and Justice in Jane Addams's 'A Modern Lear.'" Journal of Women's History 1997 9(1): 111–138. Fulltext: Ebsco Knight, Louise W. (2023). "A Biographer's Angle on Jane Addams's Feminism", in P. Shields, M. Hamington, and J. Soeters (eds), The Oxford Handbook of Jane Addams. pp. 279–304. Oxford Academic, https://doi.org/10.1093/oxfordhb/9780197544518.013.2 Lissak, R. S. Pluralism and Progressives: Hull-House and the New Immigrants. (1989) Matassarin, Kat. "Jane Addams of Hull-House: Creative Drama at the Turn of the Century". Children's Theatre Review, Oct 1983. v32 n4 pp 13–15 Morton, Keith. "Addams, Day, and Dewey: The Emergence of Community Service in American Culture". Michigan Journal of Community Service Learning, Fall 1997 v4 pp 137–49 Oakes, Jeannie. Becoming Good American Schools: The Struggle for Civic Virtue in Education Reform. (2000). Ostman, Heather Elaine. "Social Activist Visions: Constructions of Womanhood in the Autobiographies of Jane Addams and Emma Goldman". PhD dissertation Fordham U. 2004. 240 pp. DAI 2004 65(3): 934-A. DA3125022 Fulltext: ProQuest Dissertations & Theses Packard, Sandra. "Jane Addams: Contributions and Solutions for Art Education". Art Education, 29, 1, 9–12, Jan 1976. Phillips, J. O. C. "The Education of Jane Addams". History of Education Quarterly, 14, 1, 49–68, Spring 1974. Philpott, Thomas L. The Slum and the Ghetto: Immigrants, Blacks, and Reformers in Chicago, 1880–1930. (1991). Platt, Harold. "Jane Addams and the Ward Boss Revisited: Class, Politics, and Public Health in Chicago, 1890–1930". Environmental History 2000 5(2): 194–222. Polacheck, Hilda Satt. I Came a Stranger: The Story of a Hull-House Girl. Chicago, Illinois: University of Illinois Press, 1989. 
Sargent, David Kevin. "Jane Addams's Rhetorical Ethic". PhD dissertation Northwestern U. 1996. 275 pp. DAI 1997 57(11): 4597-A. DA9714673 Fulltext: ProQuest Dissertations & Theses Scherman, Rosemarie Redlich. "Jane Addams and the Chicago Social Justice Movement, 1889–1912". PhD dissertation City U. of New York 1999. 337 pp. DAI 1999 60(4): 1297-A. DA9924849 Fulltext: ProQuest Dissertations & Theses Schott, Linda. "Jane Addams and William James on Alternatives to War". Journal of the History of Ideas 1993 54(2): 241–254. in JSTOR Seigfried, Charlene H. "A Pragmatist Response to Death: Jane Addams on the Permanent and the Transient". Journal of Speculative Philosophy (2007) 21(2): 133–141. Shields, Patricia M. 2006. "Democracy and the Social Feminist Ethics of Jane Addams: A Vision for Public Administration". Administrative Theory & Praxis, vol. 28, no. 3, September, pp. 418–443. Shields, Patricia M. 2011. "Jane Addams' Theory of Democracy and Social Ethics: Incorporating a Feminist Perspective". In Women in Public Administration: Theory and Practice. Edited by Maria D'Agostino and Helisse Levine, Sudbury, MA: Jones and Bartlett. Shields, Patricia M. 2017. "Jane Addams: Progressive Pioneer of Peace, Philosophy, Sociology, Social Work and Public Administration". New York: Springer. Shields, Patricia M. and Soeters, Joseph. 2017. Peaceweaving: Jane Addams, Positive Peace and Public Administration. The American Review of Public Administration Vol. 47, no. 3, pp. 323–339. doi:10.1177/0275074015589629. Shields, Patricia M., Maurice Hamington, and Joseph Soeters (eds). (2023) The Oxford Handbook of Jane Addams. Oxford Academic. https://doi.org/10.1093/oxfordhb/9780197544518.001.0001 Sklar, Kathryn Kish. "Hull House in the 1890s: A Community of Women Reformers", Signs, Vol. 10, No. 4, (Summer, 1985), pp. 658–677 in JSTOR Sklar, Kathryn Kish. "'Some of us who deal with the Social Fabric': Jane Addams Blends Peace and Social Justice, 1907–1919". Journal of the Gilded Age and Progressive Era 2003 2(1): 80–96. Soeters, Joseph. 2018. "Jane Addams: From Peace Activism to Pragmatic Peacekeeper". Chapter 5 in Sociology and Military Studies: Classical and Current Foundations. New York: Routledge Stebner, E. J. The Women of Hull-House: A Study in Spirituality, Vocation, and Friendship. (1997). Stiehm, Judith Hicks. Champions for Peace: Women Winners of the Nobel Peace Prize. Rowman and Littlefield, 2006. Sullivan, M. "Social work's legacy of peace: Echoes from the early 20th century". Social Work, Sep. 93; 38(5): 513–520. EBSCO Toft, Jessica and Abrams, Laura S. "Progressive Maternalists and the Citizenship Status of Low-Income Single Mothers". Social Service Review 2004 78(3): 447–465. Fulltext: Ebsco Primary sources Addams, Jane. "A Belated Industry" The American Journal of Sociology Vol. 1, No. 5 (Mar. 1896), pp. 536–550 in JSTOR Addams, Jane. The subjective value of a social settlement (1892) online Addams, Jane, ed. Hull-House Maps and Papers: A Presentation of Nationalities and Wages in a Congested District of Chicago, Together with Comments and Essays on Problems Growing Out of the Social Conditions (1896; reprint 2007) excerpts and online search from amazon.com full text Kelley, Florence. "Hull House" The New England Magazine. Volume 24, Issue 5. (July 1898) pp. 550–566 online at MOA Addams, Jane. "Ethical Survivals in Municipal Corruption", International Journal of Ethics Vol. 8, No. 3 (Apr. 1898), pp. 
273–291 in JSTOR Addams, Jane. "Trades Unions and Public Duty", The American Journal of Sociology Vol. 4, No. 4 (Jan. 1899), pp. 448–462 in JSTOR Addams, Jane. "The Subtle Problems of Charity", The Atlantic Monthly. Volume 83, Issue 496 (February 1899) pp. 163–179 online at MOA Addams, Jane. Democracy and Social Ethics (1902) online at Internet Archive online at Harvard Library 23 editions published between 1902 and 2006 in English and held by 1,570 libraries worldwide Addams, Jane. Child labor 1905 Harvard Library online Addams, Jane. "Problems of Municipal Administration", The American Journal of Sociology Vol. 10, No. 4 (Jan. 1905), pp. 425–444 JSTOR Addams, Jane. "Child Labor Legislation – A Requisite for Industrial Efficiency", Annals of the American Academy of Political and Social Science Vol. 25, Child Labor (May 1905), pp. 128–136 in JSTOR Addams, Jane. The operation of the Illinois child labor law, (1906) online at Harvard Library Addams, Jane. Newer Ideals of Peace (1906) online at Internet Archive 13 editions published between 1906 and 2007 in English and held by 686 libraries worldwide Addams, Jane. National protection for children 1907 online at Harvard Library Addams, Jane. The Spirit of Youth and the City Streets (1909) online at books.google.com, online at Harvard Library 16 editions published between 1909 and 1972 in English and held by 1,094 libraries worldwide Addams, Jane. Twenty Years at Hull-House: With Autobiographical Notes, 1910 online at A Celebration of Women Writers online at Harvard Library 72 editions published between 1910 and 2007 in English and held by 3,250 libraries worldwide Addams, Jane. A new conscience and an ancient evil (1912) online at Harvard Library 14 editions published between 1912 and 2003 in English and held by 912 libraries worldwide Addams, Jane; Balch, Emily Greene; and Hamilton, Alice. Women at the Hague: The International Congress of Women and Its Results. (1915) reprint ed by Harriet Hyman Alonso, (2003). 91 pp. online at Harvard Library Addams, Jane. The Long Road of Woman's Memory (1916) online at Internet Archive online at Harvard Library, also reprint U. of Illinois Press, 2002. 84 pp. Addams, Jane. Peace and Bread in Time of War 1922 online edition , online at Harvard Library 12 editions published between 1922 and 2002 in English and held by 835 libraries worldwide Addams, Jane. My Friend, Julia Lathrop. (1935; reprint U. of Illinois Press, 2004) 166 pp. Addams, Jane. Jane Addams: A Centennial Reader (1960) online edition Bryan, Mary Lynn McCree, Barbara Bair, and Maree De Angury. eds., The Selected Papers of Jane Addams Volume 1: Preparing to Lead, 1860–1881. University of Illinois Press, 2002. online excerpt and text search Elshtain, Jean B. ed. The Jane Addams Reader (2002), 488pp Lasch, Christopher, ed. (1965). The Social Thought of Jane Addams. External links Digital collections Harvard University Library Open Collections Program. Women Working, 1870–1930. Jane Addams (1860–1935). A full-text searchable online database with complete access to publications written by Jane Addams. Jane Addams Digital Edition, Ramapo College of New Jersey Jane Addams: bibliographical and biographical references. 
- Center for the History of Women Philosophers and Scientists Physical collections Online photograph exhibit of Jane Addams from Swarthmore College's Peace Collection Guide to the Jane Addams Collection 1894–1919 at the University of Chicago Special Collections Research Center Jane Addams Papers at the Sophia Smith Collection, Smith College Ellen Gates Starr Papers at the Sophia Smith Collection, Smith College Biographical information FBI file on Jane Addams Jane Addams on the history of social work timeline Jane Addams National Women's Hall of Fame Kathi Coon Badertscher: "Jane Addams", In: 1914–1918-online. International Encyclopedia of the First World War Hull House links Jane Addams Hull-House Museum Jane Addams's Hull-House Taylor Street Archives; Hull House: Bowen Country Club Scholarship and analysis Michals, Debra "Jane Addams". National Women's History Museum. 2017. Sklar, Kathryn Kish et al. "How Did Changes in the Built Environment at Hull-House Reflect the Settlement's Interaction with Its Neighbors, 1889–1912?" Sklar, Women and Social Movements in the United States, 1600–2000 Looks at her as "the first woman 'public philosopher' in United States history". American Commission for Peace in Ireland Interim Report Other links The Bitter Cry of Outcast London by Rev. Andrew Mearns International Fellowship of Reconciliation Short historical film showing Jane Addams in Berlin in 1915, on her peace mission with Aletta Jacobs and Alice Hamilton. 1860 births 1935 deaths 19th-century American LGBTQ people 19th-century American non-fiction writers 19th-century American women writers 19th-century Presbyterians 20th-century American LGBTQ people 20th-century American memoirists 20th-century American philosophers 20th-century American women writers 20th-century Presbyterians Activists from Chicago American anti–World War I activists American Civil Liberties Union people American community activists American eugenicists American humanists American Nobel laureates American pacifists American political activists American political writers American Presbyterians American social workers American sociologists American temperance activists American anti-poverty advocates Child labor in the United States American children's rights activists Deaths from cancer in Illinois Hall of Fame for Great Americans inductees Illinois Progressives (1912) American LGBTQ academics LGBTQ Christians LGBTQ memoirists LGBTQ Nobel laureates LGBTQ people from Illinois American LGBTQ writers Nobel Peace Prize laureates American nonviolence advocates People from Stephenson County, Illinois Philosophers from Illinois Progressive Era in the United States Rockford University alumni American women memoirists Women Nobel laureates American women sociologists Women's International League for Peace and Freedom people Daughters of the American Revolution people Writers from Chicago International Congress of Women people Members of the Chicago Board of Education LGBTQ social workers 19th-century feminists Suffragists from Illinois Settlement workers Alpha Kappa Alpha members Women's firsts American women founders
Jane Addams
[ "Technology" ]
14,490
[ "Women Nobel laureates", "Women in science and technology" ]
152,323
https://en.wikipedia.org/wiki/Babylonian%20cuneiform%20numerals
Babylonian cuneiform numerals, also used in Assyria and Chaldea, were written in cuneiform, using a wedge-tipped reed stylus to print a mark on a soft clay tablet which would be exposed in the sun to harden to create a permanent record. The Babylonians, who were famous for their astronomical observations, as well as their calculations (aided by their invention of the abacus), used a sexagesimal (base-60) positional numeral system inherited from either the Sumerian or the Akkadian civilizations. Neither of the predecessors was a positional system (having a convention for which 'end' of the numeral represented the units). Origin This system first appeared around 2000 BC; its structure reflects the decimal lexical numerals of Semitic languages rather than Sumerian lexical numbers. However, the use of a special Sumerian sign for 60 (beside two Semitic signs for the same number) attests to a relation with the Sumerian system. Symbols The Babylonian system is credited as being the first known positional numeral system, in which the value of a particular digit depends both on the digit itself and its position within the number. This was an extremely important development because non-place-value systems require unique symbols to represent each power of a base (ten, one hundred, one thousand, and so forth), which can make calculations more difficult. Only two symbols (𒁹 to count units and 𒌋 to count tens) were used to notate the 59 non-zero digits. These symbols and their values were combined to form a digit in a sign-value notation quite similar to that of Roman numerals; for example, the combination 𒌋𒌋𒁹𒁹𒁹 represented the digit for 23. These digits were used to represent larger numbers in the base 60 (sexagesimal) positional system. For example, 𒁹𒁹 𒌋𒌋𒁹𒁹𒁹 𒁹𒁹𒁹 would represent 2×60^2 + 23×60 + 3 = 8583. A space was left to indicate a place without value, similar to the modern-day zero. Babylonians later devised a sign to represent this empty place. They lacked a symbol to serve the function of radix point, so the place of the units had to be inferred from context: 𒌋𒌋𒁹𒁹𒁹 could have represented 23, 23×60 (𒌋𒌋𒁹𒁹𒁹␣), 23×60×60 (𒌋𒌋𒁹𒁹𒁹␣␣), or 23/60, etc. Their system clearly used internal decimal to represent digits, but it was not really a mixed-radix system of bases 10 and 6, since the ten sub-base was used merely to facilitate the representation of the large set of digits needed, while the place-values in a digit string were consistently 60-based and the arithmetic needed to work with these digit strings was correspondingly sexagesimal. The legacy of sexagesimal still survives to this day, in the form of degrees (360° in a circle or 60° in an angle of an equilateral triangle), arcminutes, and arcseconds in trigonometry and the measurement of time, although both of these systems are actually mixed radix. A common theory is that 60, a superior highly composite number (the previous and next in the series being 12 and 120), was chosen due to its prime factorization: 2×2×3×5, which makes it divisible by 1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, and 60. Integers and fractions were represented identically—a radix point was not written but rather made clear by context. Zero The Babylonians did not technically have a digit for, nor a concept of, the number zero. Although they understood the idea of nothingness, it was not seen as a number—merely the lack of a number.
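To make the place-value arithmetic above concrete, here is a minimal Python sketch (not part of the original article; the function name is an illustrative choice) that converts a list of base-60 digits to an integer, with 0 standing in for an empty place:

```python
def from_sexagesimal(digits):
    """Convert a list of base-60 digits (most significant first) to an int.

    An empty place -- the gap the Babylonians left, and later marked with
    a placeholder sign -- is represented here by the digit 0.
    """
    value = 0
    for d in digits:
        if not 0 <= d <= 59:
            raise ValueError("each sexagesimal digit must be 0..59")
        value = value * 60 + d
    return value

# The worked example from the text: digits (2, 23, 3) -> 2*60**2 + 23*60 + 3
assert from_sexagesimal([2, 23, 3]) == 8583

# Without a radix point, the same digit string can mean 23, 23*60, and so on
assert from_sexagesimal([23]) == 23
assert from_sexagesimal([23, 0]) == 23 * 60
```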
Later Babylonian texts used a placeholder sign to represent zero, but only in medial positions, never on the right-hand side of a number in the way the trailing zeros of a modern numeral such as 100 are written. See also Babylon Babylonia Babylonian mathematics Cuneiform (Unicode block) History of zero Numeral system References Bibliography External links Babylonian numerals Cuneiform numbers Babylonian Mathematics High resolution photographs, descriptions, and analysis of the root(2) tablet (YBC 7289) from the Yale Babylonian Collection Photograph, illustration, and description of the root(2) tablet from the Yale Babylonian Collection Babylonian Numerals by Michael Schreiber, Wolfram Demonstrations Project. CESCNC – a handy and easy-to-use numeral converter Babylonian mathematics Non-standard positional numeral systems Numeral systems Numerals
Babylonian cuneiform numerals
[ "Mathematics" ]
957
[ "Numeral systems", "Numerals", "Mathematical objects", "Numbers" ]
152,420
https://en.wikipedia.org/wiki/Passphrase
A passphrase is a sequence of words or other text used to control access to a computer system, program or data. It is similar to a password in usage, but a passphrase is generally longer for added security. Passphrases are often used to control both access to, and the operation of, cryptographic programs and systems, especially those that derive an encryption key from a passphrase. The origin of the term is by analogy with password. The modern concept of passphrases is believed to have been invented by Sigmund N. Porter in 1982. Security Considering that the entropy of written English is less than 1.1 bits per character, passphrases can be relatively weak. NIST has estimated that the 23-character passphrase "IamtheCapitanofthePina4" has an estimated strength of 45 bits. The equation employed here is: 4 bits (1st character) + 14 bits (characters 2–8) + 18 bits (characters 9–20) + 3 bits (characters 21–23) + 6 bits (bonus for upper case, lower case, and alphanumeric) = 45 bits (This calculation does not take into account that this is a well-known quote from the operetta H.M.S. Pinafore. An MD5 hash of this passphrase can be cracked in 4 seconds using crackstation.net, indicating that the phrase is found in password cracking databases.) Using this guideline, to achieve the 80-bit strength recommended for high security (non-military) by NIST, a passphrase would need to be 58 characters long, assuming a composition that includes uppercase and alphanumeric. There is room for debate regarding the applicability of this equation, depending on the number of bits of entropy assigned. For example, the characters in five-letter words each contain 2.3 bits of entropy, which would mean only a 35-character passphrase is necessary to achieve 80-bit strength. If the words or components of a passphrase may be found in a language dictionary—especially one available as electronic input to a software program—the passphrase is rendered more vulnerable to dictionary attack. This is a particular issue if the entire phrase can be found in a book of quotations or phrase compilations. However, the required effort (in time and cost) can be made impracticably high if there are enough words in the passphrase and if they are randomly chosen and ordered in the passphrase. The number of combinations which would have to be tested under sufficient conditions makes a dictionary attack so difficult as to be infeasible. These are difficult conditions to meet, and selecting at least one word that cannot be found in any dictionary significantly increases passphrase strength. If passphrases are chosen by humans, they are usually biased by the frequency of particular words in natural language. In the case of four-word phrases, actual entropy rarely exceeds 30 bits. On the other hand, user-selected passwords tend to be much weaker than that, and encouraging users to use even 2-word passphrases may be able to raise entropy from below 10 bits to over 20 bits. For example, the widely used cryptography standard OpenPGP requires that a user make up a passphrase that must be entered whenever decrypting or signing messages. Internet services like Hushmail provide free encrypted e-mail or file sharing services, but the security provided depends almost entirely on the quality of the chosen passphrase. Compared to passwords Passphrases differ from passwords. A password is usually short—six to ten characters.
Such passwords may be adequate for various applications if frequently changed, chosen using an appropriate policy, not found in dictionaries, sufficiently random, and/or if the system prevents online guessing, etc., such as: Logging onto computer systems Negotiating keys in an interactive setting such as using password-authenticated key agreement Enabling a smart-card or PIN for an ATM card where the password data (hopefully) cannot be extracted But passwords are typically not safe to use as keys for standalone security systems such as encryption systems that expose data to enable offline password guessing by an attacker. Passphrases are theoretically stronger, and so should make a better choice in these cases. First, they usually are and always should be much longer—20 to 30 characters or more is typical—making some kinds of brute force attacks entirely impractical. Second, if well chosen, they will not be found in any phrase or quote dictionary, so such dictionary attacks will be almost impossible. Third, they can be structured to be more easily memorable than passwords without being written down, reducing the risk of hardcopy theft. However, if a passphrase is not protected appropriately by the authenticator and the clear-text passphrase is revealed, its use is no better than other passwords. For this reason it is recommended that passphrases not be reused across different or unique sites and services. In 2012, two Cambridge University researchers analyzed passphrases from the Amazon PayPhrase system and found that a significant percentage are easy to guess due to common cultural references such as movie names and sports teams, losing much of the potential of using long passwords. When used in cryptography, commonly the passphrase protects a long machine-generated key, and the key protects the data. The key is so long that a brute-force attack directly on the data is impossible. A key derivation function is used, involving many thousands of iterations (salted & hashed), to slow down password cracking attacks. Passphrase selection Typical advice about choosing a passphrase includes suggestions that it should be: Long enough to be hard to guess Not a famous quotation from literature, holy books, et cetera Hard to guess by intuition—even by someone who knows the user well Easy to remember and type accurately For better security, any easily memorable encoding at the user's own level can be applied. Not reused between sites, applications and other different sources Example methods One method to create a strong passphrase is to use dice to select words at random from a long list, a technique often referred to as diceware. While such a collection of words might appear to violate the "not from any dictionary" rule, the security is based entirely on the large number of possible ways to choose from the list of words and not from any secrecy about the words themselves. For example, if there are 7776 words in the list and six words are chosen randomly, then there are 7776^6 = 221,073,919,720,733,357,899,776 combinations, providing about 78 bits of entropy. (The number 7776 was chosen to allow words to be selected by throwing five dice. 7776 = 6^5.) Random word sequences may then be memorized using techniques such as the memory palace. Another is to choose two phrases, turn one into an acronym, and include it in the second, making the final passphrase. For instance, using two English language typing exercises, we have the following. The quick brown fox jumps over the lazy dog, becomes tqbfjotld.
Including it in, Now is the time for all good men to come to the aid of their country, might produce, Now is the time for all good tqbfjotld to come to the aid of their country as the passphrase. There are several points to note here, all relating to why this example passphrase is not a good one. It has appeared in public and so should be avoided by everyone. It is long (which is a considerable virtue in theory) and requires a good typist as typing errors are much more likely for extended phrases. Individuals and organizations serious about cracking computer security have compiled lists of passwords derived in this manner from the most common quotations, song lyrics, and so on. The PGP Passphrase FAQ suggests a procedure that attempts a better balance between theoretical security and practicality than this example. All procedures for picking a passphrase involve a tradeoff between security and ease of use; security should be at least "adequate" while not "too seriously" annoying users. Both criteria should be evaluated to match particular situations. Another supplementary approach to frustrating brute-force attacks is to derive the key from the passphrase using a deliberately slow hash function, such as PBKDF2 as described in RFC 2898. Windows support If backward compatibility with Microsoft LAN Manager is not needed, in versions of Windows NT (including Windows 2000, Windows XP and later), a passphrase can be used as a substitute for a Windows password. If the passphrase is longer than 14 characters, this will also avoid the generation of a very weak LM hash. Unix support In recent versions of Unix-like operating systems such as Linux, OpenBSD, NetBSD, Solaris and FreeBSD, up to 255-character passphrases can be used. See also Keyfile Password-based cryptography Password psychology References External links Diceware page xkcd Password Strength, a commonly cited explanation of the concept Cryptography Password authentication
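As a sketch of the arithmetic and mechanisms described in this article (the entropy estimates, diceware-style word selection, and PBKDF2 key stretching), the following Python uses only the standard math, secrets, and hashlib modules. The six-word stand-in list, function names, and iteration count are illustrative assumptions, not recommendations from the article.

```python
import hashlib
import math
import secrets

# Entropy arithmetic from the Security section: n items, each drawn
# uniformly from c possibilities, give n * log2(c) bits.
print(6 * math.log2(7776))   # six diceware words -> ~77.5 bits ("about 78")
print(80 / 2.3)              # ~35 chars of dictionary-word text for 80 bits

# Stand-in for a real 7776-entry diceware word list (illustrative only).
WORDLIST = ["correct", "horse", "battery", "staple", "cloud", "anvil"]

def diceware_passphrase(n_words, wordlist):
    """Pick words uniformly at random with a CSPRNG; the security rests on
    the number of possible combinations, not on secrecy of the list."""
    return " ".join(secrets.choice(wordlist) for _ in range(n_words))

def derive_key(passphrase, salt, iterations=600_000, length=32):
    """Stretch a passphrase into a fixed-length key with PBKDF2-HMAC-SHA256,
    deliberately slow to frustrate offline guessing (cf. RFC 2898)."""
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode("utf-8"),
                               salt, iterations, dklen=length)

phrase = diceware_passphrase(6, WORDLIST)
salt = secrets.token_bytes(16)   # random per-use salt, stored with the key
key = derive_key(phrase, salt)
print(phrase)
print(key.hex())
```

With a genuine 7776-word list, each word drawn this way adds about 12.9 bits of entropy regardless of which words happen to come out.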
Passphrase
[ "Mathematics", "Engineering" ]
1,888
[ "Applied mathematics", "Cryptography", "Cybersecurity engineering" ]
152,428
https://en.wikipedia.org/wiki/Ceva%27s%20theorem
In Euclidean geometry, Ceva's theorem is a theorem about triangles. Given a triangle ABC, let the lines AO, BO, CO be drawn from the vertices to a common point O (not on one of the sides of ABC), to meet opposite sides at D, E, F respectively. (The segments AD, BE, CF are known as cevians.) Then, using signed lengths of segments, (AF/FB)·(BD/DC)·(CE/EA) = 1. In other words, the length XY is taken to be positive or negative according to whether X is to the left or right of Y in some fixed orientation of the line. For example, AF/FB is defined as having positive value when F is between A and B and negative otherwise. Ceva's theorem is a theorem of affine geometry, in the sense that it may be stated and proved without using the concepts of angles, areas, and lengths (except for the ratio of the lengths of two line segments that are collinear). It is therefore true for triangles in any affine plane over any field. A slightly adapted converse is also true: If points D, E, F are chosen on BC, AC, AB respectively so that (AF/FB)·(BD/DC)·(CE/EA) = 1, then AD, BE, CF are concurrent, or all three parallel. The converse is often included as part of the theorem. The theorem is often attributed to Giovanni Ceva, who published it in his 1678 work De lineis rectis. But it was proven much earlier by Yusuf Al-Mu'taman ibn Hud, an eleventh-century king of Zaragoza. Associated with the figures are several terms derived from Ceva's name: cevian (the lines AD, BE, CF are the cevians of O), cevian triangle (the triangle DEF is the cevian triangle of O); cevian nest, anticevian triangle, Ceva conjugate. (Ceva is pronounced Chay'va; cevian is pronounced chev'ian.) The theorem is very similar to Menelaus' theorem in that their equations differ only in sign. By re-writing each in terms of cross-ratios, the two theorems may be seen as projective duals. Proofs Several proofs of the theorem have been created. Two proofs are given in the following. The first one is very elementary, using only basic properties of triangle areas. However, several cases have to be considered, depending on the position of the point O. The second proof uses barycentric coordinates and vectors, but is more natural and not case dependent. Moreover, it works in any affine plane over any field. Using triangle areas First, the sign of the left-hand side is positive since either all three of the ratios are positive, the case where O is inside the triangle, or one is positive and the other two are negative, the case where O is outside the triangle. To check the magnitude, note that the area of a triangle of a given height is proportional to its base. So, writing |XYZ| for the area of triangle XYZ, |BOD|/|COD| = BD/DC = |BAD|/|CAD|. Therefore, BD/DC = (|BAD| − |BOD|)/(|CAD| − |COD|) = |ABO|/|CAO|. (Replace the minus with a plus if A and O are on opposite sides of BC.) Similarly, CE/EA = |BCO|/|ABO|, and AF/FB = |CAO|/|BCO|. Multiplying these three equations gives (AF/FB)·(BD/DC)·(CE/EA) = 1, as required. The theorem can also be proven easily using Menelaus's theorem. From the transversal BOE of triangle ACF, (AB/BF)·(FO/OC)·(CE/EA) = −1, and from the transversal AOD of triangle BCF, (BA/AF)·(FO/OC)·(CD/DB) = −1. The theorem follows by dividing these two equations. The converse follows as a corollary. Let D, E, F be given on the lines BC, AC, AB so that the equation holds. Let AD and BE meet at O and let F′ be the point where CO crosses AB. Then by the theorem, the equation also holds for D, E, F′. Comparing the two, AF/FB = AF′/F′B. But at most one point can cut a segment in a given ratio so F = F′. Using barycentric coordinates Given three points A, B, C that are not collinear, and a point O that belongs to the same plane, the barycentric coordinates of O with respect to A, B, C are the unique three numbers λA, λB, λC such that λA + λB + λC = 1 and XO→ = λA·XA→ + λB·XB→ + λC·XC→, for every point X (for the definition of this arrow notation and further details, see Affine space). For Ceva's theorem, the point O is supposed to not belong to any line passing through two vertices of the triangle.
This implies that λA·λB·λC ≠ 0. If one takes for X the intersection F of the lines AB and OC, the last equation may be rearranged into FO→ − λC·FC→ = λA·FA→ + λB·FB→. The left-hand side of this equation is a vector that has the same direction as the line CF, and the right-hand side has the same direction as the line AB. These lines have different directions since A, B, C are not collinear. It follows that the two members of the equation equal the zero vector, and λA·FA→ + λB·FB→ = 0. It follows that AF/FB = λB/λA, where the left-hand-side fraction is the signed ratio of the lengths of the collinear line segments AF and FB. The same reasoning shows BD/DC = λC/λB and CE/EA = λA/λC. Ceva's theorem results immediately by taking the product of the three last equations. Generalizations The theorem can be generalized to higher-dimensional simplexes using barycentric coordinates. Define a cevian of an n-simplex as a ray from each vertex to a point on the opposite (n−1)-face (facet). Then the cevians are concurrent if and only if a mass distribution can be assigned to the vertices such that each cevian intersects the opposite facet at its center of mass. Moreover, the intersection point of the cevians is the center of mass of the simplex. Another generalization to higher-dimensional simplexes extends the conclusion of Ceva's theorem that the product of certain ratios is 1. Starting from a point in a simplex, a point is defined inductively on each k-face. This point is the foot of a cevian that goes from the vertex opposite the k-face, in a (k+1)-face that contains it, through the point already defined on this (k+1)-face. Each of these points divides the face on which it lies into lobes. Given a cycle of pairs of lobes, the product of the ratios of the volumes of the lobes in each pair is 1. Routh's theorem gives the area of the triangle formed by three cevians in the case that they are not concurrent. Ceva's theorem can be obtained from it by setting the area equal to zero and solving. The analogue of the theorem for general polygons in the plane has been known since the early nineteenth century. The theorem has also been generalized to triangles on other surfaces of constant curvature. The theorem also has a well-known generalization to spherical and hyperbolic geometry, replacing the lengths in the ratios with their sines and hyperbolic sines, respectively. See also Projective geometry Median (geometry) – an application Circumcevian triangle Menelaus's theorem Triangle Stewart's theorem Cevian References Further reading External links Menelaus and Ceva at MathPages Derivations and applications of Ceva's Theorem at cut-the-knot Trigonometric Form of Ceva's Theorem at cut-the-knot Glossary of Encyclopedia of Triangle Centers includes definitions of cevian triangle, cevian nest, anticevian triangle, Ceva conjugate, and cevapoint Conics Associated with a Cevian Nest, by Clark Kimberling Ceva's Theorem by Jay Warendorff, Wolfram Demonstrations Project. Experimentally finding the centroid of a triangle with different weights at the vertices: a practical application of Ceva's theorem at Dynamic Geometry Sketches, an interactive dynamic geometry sketch using the gravity simulator of Cinderella. Affine geometry Theorems about triangles Articles containing proofs Euclidean plane geometry
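Because the theorem is purely affine, it can be spot-checked with exact rational arithmetic. The following Python sketch (the helper functions and names are ours, for illustration) picks a point inside a triangle, computes the three cevian feet by line intersection, and verifies that the product of signed ratios equals 1:

```python
from fractions import Fraction as F

def line_intersect(p1, p2, p3, p4):
    """Intersection of line p1p2 with line p3p4 (assumed non-parallel)."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    den = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    a = x1 * y2 - y1 * x2
    b = x3 * y4 - y3 * x4
    return ((a * (x3 - x4) - (x1 - x2) * b) / den,
            (a * (y3 - y4) - (y1 - y2) * b) / den)

def signed_ratio(p, q, r):
    """Signed ratio PQ/QR for collinear points P, Q, R."""
    if p[0] != r[0]:                 # use a coordinate axis with nonzero span
        return (q[0] - p[0]) / (r[0] - q[0])
    return (q[1] - p[1]) / (r[1] - q[1])

A, B, C = (F(0), F(0)), (F(7), F(1)), (F(2), F(6))
O = (F(3), F(2))                     # any point not on a side of ABC

D = line_intersect(A, O, B, C)       # foot of the cevian from A
E = line_intersect(B, O, C, A)       # foot of the cevian from B
Fp = line_intersect(C, O, A, B)      # foot of the cevian from C ("F" above)

product = (signed_ratio(A, Fp, B) *  # AF/FB
           signed_ratio(B, D, C) *   # BD/DC
           signed_ratio(C, E, A))    # CE/EA
assert product == 1
```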
Ceva's theorem
[ "Mathematics" ]
1,478
[ "Articles containing proofs", "Planes (geometry)", "Euclidean plane geometry" ]
152,440
https://en.wikipedia.org/wiki/Stellar%20nucleosynthesis
In astrophysics, stellar nucleosynthesis is the creation of chemical elements by nuclear fusion reactions within stars. Stellar nucleosynthesis has occurred since the original creation of hydrogen, helium and lithium during the Big Bang. As a predictive theory, it yields accurate estimates of the observed abundances of the elements. It explains why the observed abundances of elements change over time and why some elements and their isotopes are much more abundant than others. The theory was initially proposed by Fred Hoyle in 1946, who later refined it in 1954. Further advances were made, especially to nucleosynthesis by neutron capture of the elements heavier than iron, by Margaret and Geoffrey Burbidge, William Alfred Fowler and Fred Hoyle in their famous 1957 B2FH paper, which became one of the most heavily cited papers in astrophysics history. Stars evolve because of changes in their composition (the abundance of their constituent elements) over their lifespans, first by burning hydrogen (main sequence star), then helium (horizontal branch star), and progressively burning higher elements. However, this does not by itself significantly alter the abundances of elements in the universe as the elements are contained within the star. Later in its life, a low-mass star will slowly eject its atmosphere via stellar wind, forming a planetary nebula, while a higher–mass star will eject mass via a sudden catastrophic event called a supernova. The term supernova nucleosynthesis is used to describe the creation of elements during the explosion of a massive star or white dwarf. The advanced sequence of burning fuels is driven by gravitational collapse and its associated heating, resulting in the subsequent burning of carbon, oxygen and silicon. However, most of the nucleosynthesis in the mass range (from silicon to nickel) is actually caused by the upper layers of the star collapsing onto the core, creating a compressional shock wave rebounding outward. The shock front briefly raises temperatures by roughly 50%, thereby causing furious burning for about a second. This final burning in massive stars, called explosive nucleosynthesis or supernova nucleosynthesis, is the final epoch of stellar nucleosynthesis. A stimulus to the development of the theory of nucleosynthesis was the discovery of variations in the abundances of elements found in the universe. The need for a physical description was already inspired by the relative abundances of the chemical elements in the solar system. Those abundances, when plotted on a graph as a function of the atomic number of the element, have a jagged sawtooth shape that varies by factors of tens of millions (see history of nucleosynthesis theory). This suggested a natural process that is not random. A second stimulus to understanding the processes of stellar nucleosynthesis occurred during the 20th century, when it was realized that the energy released from nuclear fusion reactions accounted for the longevity of the Sun as a source of heat and light. History In 1920, Arthur Eddington, on the basis of the precise measurements of atomic masses by F.W. Aston and a preliminary suggestion by Jean Perrin, proposed that stars obtained their energy from nuclear fusion of hydrogen to form helium and raised the possibility that the heavier elements are produced in stars. This was a preliminary step toward the idea of stellar nucleosynthesis. 
In 1928 George Gamow derived what is now called the Gamow factor, a quantum-mechanical formula yielding the probability for two contiguous nuclei to overcome the electrostatic Coulomb barrier between them and approach each other closely enough to undergo nuclear reaction due to the strong nuclear force which is effective only at very short distances. In the following decade the Gamow factor was used by Atkinson and Houtermans and later by Edward Teller and Gamow himself to derive the rate at which nuclear reactions would occur at the high temperatures believed to exist in stellar interiors. In 1939, in a Nobel lecture entitled "Energy Production in Stars", Hans Bethe analyzed the different possibilities for reactions by which hydrogen is fused into helium. He defined two processes that he believed to be the sources of energy in stars. The first one, the proton–proton chain reaction, is the dominant energy source in stars with masses up to about the mass of the Sun. The second process, the carbon–nitrogen–oxygen cycle, which was also considered by Carl Friedrich von Weizsäcker in 1938, is more important in more massive main-sequence stars. These works concerned the energy generation capable of keeping stars hot. A clear physical description of the proton–proton chain and of the CNO cycle appears in a 1968 textbook. Bethe's two papers did not address the creation of heavier nuclei, however. That theory was begun by Fred Hoyle in 1946 with his argument that a collection of very hot nuclei would assemble thermodynamically into iron. Hoyle followed that in 1954 with a paper describing how advanced fusion stages within massive stars would synthesize the elements from carbon to iron in mass. Hoyle's theory was extended to other processes, beginning with the publication of the 1957 review paper "Synthesis of the Elements in Stars" by Burbidge, Burbidge, Fowler and Hoyle, more commonly referred to as the B2FH paper. This review paper collected and refined earlier research into a heavily cited picture that gave promise of accounting for the observed relative abundances of the elements; but it did not itself enlarge Hoyle's 1954 picture for the origin of primary nuclei as much as many assumed, except in the understanding of nucleosynthesis of those elements heavier than iron by neutron capture. Significant improvements were made by Alastair G. W. Cameron and by Donald D. Clayton. In 1957 Cameron presented his own independent approach to nucleosynthesis, informed by Hoyle's example, and introduced computers into time-dependent calculations of evolution of nuclear systems. Clayton calculated the first time-dependent models of the s-process in 1961 and of the r-process in 1965, as well as of the burning of silicon into the abundant alpha-particle nuclei and iron-group elements in 1968, and discovered radiogenic chronologies for determining the age of the elements. 
Key reactions The most important reactions in stellar nucleosynthesis: Hydrogen fusion: Deuterium fusion The proton–proton chain The carbon–nitrogen–oxygen cycle Helium fusion: The triple-alpha process The alpha process Fusion of heavier elements: Lithium burning: a process found most commonly in brown dwarfs Carbon-burning process Neon-burning process Oxygen-burning process Silicon-burning process Production of elements heavier than iron: Neutron capture: The r-process The s-process Proton capture: The rp-process The p-process Photodisintegration Hydrogen fusion Hydrogen fusion (nuclear fusion of four protons to form a helium-4 nucleus) is the dominant process that generates energy in the cores of main-sequence stars. It is also called "hydrogen burning", which should not be confused with the chemical combustion of hydrogen in an oxidizing atmosphere. There are two predominant processes by which stellar hydrogen fusion occurs: the proton–proton chain and the carbon–nitrogen–oxygen (CNO) cycle. Ninety percent of all stars, with the exception of white dwarfs, are fusing hydrogen by these two processes. In the cores of lower-mass main-sequence stars such as the Sun, the dominant energy production process is the proton–proton chain reaction. This creates a helium-4 nucleus through a sequence of reactions that begin with the fusion of two protons to form a deuterium nucleus (one proton plus one neutron) along with an ejected positron and neutrino. In each complete fusion cycle, the proton–proton chain reaction releases about 26.2 MeV. The rate of the proton–proton chain scales with temperature approximately as T^4, so the cycle is comparatively insensitive to temperature: a 10% rise of temperature would increase energy production by this method by 46%; hence, this hydrogen fusion process can occur in up to a third of the star's radius and occupy half the star's mass. For stars above 35% of the Sun's mass, the energy flux toward the surface is sufficiently low that energy transfer from the core region remains by radiative heat transfer, rather than by convective heat transfer. As a result, there is little mixing of fresh hydrogen into the core or fusion products outward. In higher-mass stars, the dominant energy production process is the CNO cycle, which is a catalytic cycle that uses nuclei of carbon, nitrogen and oxygen as intermediaries and in the end produces a helium nucleus as with the proton–proton chain. During a complete CNO cycle, 25.0 MeV of energy is released. The difference in energy production of this cycle, compared to the proton–proton chain reaction, is accounted for by the energy lost through neutrino emission. The CNO cycle is highly sensitive to temperature, with rates proportional to T^16–T^20: a 10% rise of temperature would produce a 350% rise in energy production. About 90% of the CNO cycle energy generation occurs within the inner 15% of the star's mass, hence it is strongly concentrated at the core. This results in such an intense outward energy flux that convective energy transfer becomes more important than does radiative transfer. As a result, the core region becomes a convection zone, which stirs the hydrogen fusion region and keeps it well mixed with the surrounding proton-rich region. This core convection occurs in stars where the CNO cycle contributes more than 20% of the total energy. As the star ages and the core temperature increases, the region occupied by the convection zone slowly shrinks from 20% of the mass down to the inner 8% of the mass.
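The quoted sensitivities follow directly from the power-law scalings above, as a quick check shows (the CNO exponent 16 is taken from the low end of its quoted range):

```python
# Energy generation scales roughly as a power of temperature:
#   proton-proton chain ~ T^4, CNO cycle ~ T^16..T^20
for label, exponent in (("p-p chain", 4), ("CNO cycle", 16)):
    rise = 1.10 ** exponent - 1.0      # effect of a 10% temperature rise
    print(f"{label}: +{rise:.0%} energy production")
# -> p-p chain: +46% energy production
# -> CNO cycle: +359% energy production (the ~350% quoted above)
```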
The Sun produces on the order of 1% of its energy from the CNO cycle. The type of hydrogen fusion process that dominates in a star is determined by the temperature dependency differences between the two reactions. The proton–proton chain reaction starts at temperatures about 4×10^6 K, making it the dominant fusion mechanism in smaller stars. A self-maintaining CNO chain requires a higher temperature of approximately 16×10^6 K, but thereafter it increases more rapidly in efficiency as the temperature rises, than does the proton–proton reaction. Above approximately 17×10^6 K, the CNO cycle becomes the dominant source of energy. This temperature is achieved in the cores of main-sequence stars with at least 1.3 times the mass of the Sun. The Sun itself has a core temperature of about 15.7×10^6 K. As a main-sequence star ages, the core temperature will rise, resulting in a steadily increasing contribution from its CNO cycle. Helium fusion Main sequence stars accumulate helium in their cores as a result of hydrogen fusion, but the core does not become hot enough to initiate helium fusion. Helium fusion first begins when a star leaves the red giant branch after accumulating sufficient helium in its core to ignite it. In stars around the mass of the Sun, this begins at the tip of the red giant branch with a helium flash from a degenerate helium core, and the star moves to the horizontal branch where it burns helium in its core. More massive stars ignite helium in their core without a flash and execute a blue loop before reaching the asymptotic giant branch. Such a star initially moves away from the AGB toward bluer colours, then loops back again to what is called the Hayashi track. An important consequence of blue loops is that they give rise to classical Cepheid variables, of central importance in determining distances in the Milky Way and to nearby galaxies. Despite the name, stars on a blue loop from the red giant branch are typically not blue in colour but are rather yellow giants, possibly Cepheid variables. They fuse helium until the core is largely carbon and oxygen. The most massive stars become supergiants when they leave the main sequence and quickly start helium fusion as they become red supergiants. After the helium is exhausted in the core of a star, helium fusion will continue in a shell around the carbon–oxygen core. In all cases, helium is fused to carbon via the triple-alpha process, i.e., three helium nuclei are transformed into carbon via 8Be. This can then form oxygen, neon, and heavier elements via the alpha process. In this way, the alpha process preferentially produces elements with even numbers of protons by the capture of helium nuclei. Elements with odd numbers of protons are formed by other fusion pathways. Reaction rate The reaction rate density between species A and B, having number densities nA and nB, is given by: r = nA·nB·k, where k is the reaction rate constant of each single elementary binary reaction composing the nuclear fusion process: k = ⟨σ(v)·v⟩; here, σ(v) is the cross-section at relative velocity v, and averaging is performed over all velocities. Semi-classically, the cross section is proportional to πλ^2, where λ = h/(mv) is the de Broglie wavelength; thus semi-classically the cross section is proportional to 1/E. However, since the reaction involves quantum tunneling, there is an exponential damping at low energies that depends on the Gamow factor EG, giving an Arrhenius equation: σ(E) = (S(E)/E)·e^(−√(EG/E)), where S(E) depends on the details of the nuclear interaction, and has the dimensions of an energy multiplied by a cross section.
One then integrates over all energies to get the total reaction rate, using the Maxwell–Boltzmann distribution and the relation: k = ⟨σv⟩ = √(8/(π·mR)) · (kBT)^(−3/2) · ∫ S(E)·e^(−√(EG/E))·e^(−E/(kBT)) dE, where mR = m1·m2/(m1 + m2) is the reduced mass. Since this integration has an exponential damping at high energies of the form e^(−E/(kBT)) and at low energies from the Gamow factor e^(−√(EG/E)), the integral almost vanishes everywhere except around the peak, called the Gamow peak, at E0, where: d/dE (−E/(kBT) − √(EG/E)) = 0. Thus: E0 = (√EG · kBT/2)^(2/3). The exponent can then be approximated around E0 as: −E/(kBT) − √(EG/E) ≈ −3E0/(kBT) − 3(E − E0)^2/(4·E0·kBT). And the reaction rate is approximated as: k ≈ (4√2/√3) · √(E0/mR) · (S(E0)/(kBT)) · e^(−3E0/(kBT)). Values of S(E0) are typically 10^(−3)–10^3 keV·b, but are damped by a huge factor when involving a beta decay, due to the relation between the intermediate bound state (e.g. diproton) half-life and the beta decay half-life, as in the proton–proton chain reaction. Note that typical core temperatures in main-sequence stars give kBT of the order of keV. Thus, the limiting reaction in the CNO cycle, proton capture by 14N, has S(E0) ~ S(0) = 3.5 keV·b, while the limiting reaction in the proton–proton chain reaction, the creation of deuterium from two protons, has a much lower S(E0) ~ S(0) = 4×10^(−22) keV·b. Incidentally, since the former reaction has a much higher Gamow factor, and due to the relative abundance of elements in typical stars, the two reaction rates are equal at a temperature value that is within the core temperature ranges of main-sequence stars. References Notes Citations Further reading External links "How the Sun Shines", by John N. Bahcall (Nobel prize site, accessed 6 January 2020) Nucleosynthesis in NASA's Cosmicopia Nucleosynthesis Nuclear physics Nucleosynthesis, Stellar Concepts in stellar astronomy Concepts in astronomy
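To put numbers on the Gamow-peak formulas reconstructed above, the following sketch evaluates them for two protons at a solar-core temperature. The formulas are the standard ones, but treat the script and its constants as an illustration rather than part of the original article:

```python
import math

ALPHA = 1.0 / 137.036        # fine-structure constant
MP_C2 = 938.272e3            # proton rest energy, keV
K_B = 8.617e-8               # Boltzmann constant, keV per kelvin

def gamow_energy(z1, z2, m_reduced_c2):
    """Gamow energy E_G = 2 m_R c^2 (pi * alpha * Z1 * Z2)^2, in keV."""
    return 2.0 * m_reduced_c2 * (math.pi * ALPHA * z1 * z2) ** 2

def gamow_peak(e_gamow, kT):
    """Peak of the integrand, E0 = (sqrt(E_G) * kT / 2)^(2/3), in keV."""
    return (math.sqrt(e_gamow) * kT / 2.0) ** (2.0 / 3.0)

kT = K_B * 15.7e6                    # solar core temperature -> kT ~ 1.35 keV
eg = gamow_energy(1, 1, MP_C2 / 2)   # p + p: reduced mass is half a proton
e0 = gamow_peak(eg, kT)
width = 4.0 / math.sqrt(3.0) * math.sqrt(e0 * kT)   # Gaussian width of peak
print(f"E_G ~ {eg:.0f} keV, kT ~ {kT:.2f} keV, "
      f"E0 ~ {e0:.1f} keV, width ~ {width:.1f} keV")
# -> E_G ~ 493 keV, kT ~ 1.35 keV, E0 ~ 6.1 keV, width ~ 6.6 keV
```

This makes concrete the remark in the text that kT is of order keV while the most effective fusion energies sit a few times higher, at the Gamow peak.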
Stellar nucleosynthesis
[ "Physics", "Chemistry", "Astronomy" ]
3,098
[ "Nuclear fission", "Concepts in astrophysics", "Concepts in astronomy", "Astrophysics", "Nucleosynthesis", "Nuclear physics", "Concepts in stellar astronomy", "Nuclear fusion", "Astronomical sub-disciplines", "Stellar astronomy" ]
152,457
https://en.wikipedia.org/wiki/Futurebus
Futurebus (IEEE 896) is a computer bus standard designed to replace all local bus connections in a computer, including the CPU, plug-in cards, and even some LAN links between machines. The project started in 1979 and was completed in 1987, but then went through a redesign until 1994. It has seen little real-world use, although custom implementations are still designed. History In the late 1970s, VMEbus was faster than the parts plugged into it, so a computer could be built by placing the CPU and RAM on separate cards connected by the bus. However, as the speed of CPUs and RAM rapidly increased, the Futurebus effort set out to create a successor to VMEbus using asynchronous links. The ability to have several cards in the system act as "masters", allowing Futurebus to support multiprocessor machines, required some form of "distributed arbitration" to allow the various cards to gain access to the bus at any point, as opposed to VME, which put a single master in slot 0 with overall control. Typical IEEE standards start with a company building a device, then submitting it to the IEEE for the standardization effort. In the case of Futurebus, the whole system was being designed during the standardization effort. It took eight years before the specification was finally agreed on in 1987. Tektronix did make a few workstations based on Futurebus. It took another four years for the Futurebus+ Standard to be released. The IEEE 896 committee later split from the IEEE Microcomputer Standards Committee and formed the IEEE Bus Architecture Standards Committee (BASC). Futurebus+ transceivers that meet the IEEE Standard 1194.1-1991 Backplane Transceiver Logic (BTL) standard are still made by Texas Instruments. Futurebus+ was used as the I/O bus in the DEC 4000 AXP and DEC 10000 AXP systems. Futurebus+ FDDI boards are still supported in the OpenVMS operating system. See also InfiniBand QuickRing Scalable Coherent Interconnect (SCI) Bus topology FASTBUS References Further reading The Futurebus+ Handbook, John Theus, VITA Futurebus+ Handbook for Digital Systems, Digital Equipment Corporation IEEE Standard Backplane Bus Specifications for Multiprocessor Architectures: Futurebus+, IEEE Std 896.1-1987 IEEE Standard for Futurebus+(R) -- Logical Protocol Specification, IEEE Std 896.1-1991 IEEE Standard Backplane Bus Specification for Multiprocessor Architectures: Futurebus+, IEEE Std 896.2-1991 External links Computer buses IEEE standards
Futurebus
[ "Technology" ]
542
[ "Computer standards", "IEEE standards" ]
152,464
https://en.wikipedia.org/wiki/Nuclide
Nuclides (or nucleides, from nucleus, also known as nuclear species) are a class of atoms characterized by their number of protons, Z, their number of neutrons, N, and their nuclear energy state. The word nuclide was coined by the American nuclear physicist Truman P. Kohman in 1947. Kohman defined nuclide as a "species of atom characterized by the constitution of its nucleus" containing a certain number of neutrons and protons. The term thus originally focused on the nucleus. Nuclides vs isotopes A nuclide is a species of an atom with a specific number of protons and neutrons in the nucleus, for example carbon-13 with 6 protons and 7 neutrons. The nuclide concept (referring to individual nuclear species) emphasizes nuclear properties over chemical properties, while the isotope concept (grouping all atoms of each element) emphasizes chemical over nuclear. The neutron number has large effects on nuclear properties, but its effect on chemical reactions is negligible for most elements. Even in the case of the very lightest elements, where the ratio of neutron number to atomic number varies the most between isotopes, it usually has only a small effect, but it matters in some circumstances. For hydrogen, the lightest element, the isotope effect is large enough to affect biological systems strongly. In the case of helium, helium-4 obeys Bose–Einstein statistics, while helium-3 obeys Fermi–Dirac statistics. Since isotope is the older term, it is better known than nuclide, and is still occasionally used in contexts in which nuclide might be more appropriate, such as nuclear technology and nuclear medicine. Types of nuclides Although the words nuclide and isotope are often used interchangeably, being isotopes is actually only one relation between nuclides. Some other relations between nuclides are named as follows. A set of nuclides with equal proton number (atomic number), i.e., of the same chemical element but different neutron numbers, are called isotopes of the element. Particular nuclides are still often loosely called "isotopes", but the term "nuclide" is the correct one in general (i.e., when Z is not fixed). In similar manner, a set of nuclides with equal mass number A, but different atomic number, are called isobars (isobar = equal in weight), and isotones are nuclides of equal neutron number but different proton numbers. Likewise, nuclides with the same neutron excess (N − Z) are called isodiaphers. The name isotone was derived from the name isotope to emphasize that in the first group of nuclides it is the number of neutrons (n) that is constant, whereas in the second the number of protons (p). See Isotope#Notation for an explanation of the notation used for different nuclide or isotope types. Nuclear isomers are members of a set of nuclides with equal proton number and equal mass number (thus making them by definition the same isotope), but different states of excitation. An example is the two states of the single isotope technetium-99. Each of these two states (technetium-99m and technetium-99) qualifies as a different nuclide, illustrating one way that nuclides may differ from isotopes (an isotope may consist of several different nuclides of different excitation states). The longest-lived non-ground state nuclear isomer is the nuclide tantalum-180m, which has a half-life in excess of 1,000 trillion years. This nuclide occurs primordially, and has never been observed to decay to the ground state.
(In contrast, the ground state nuclide tantalum-180 does not occur primordially, since it decays with a half-life of only 8 hours to 180Hf (86%) or 180W (14%).) There are 251 nuclides in nature that have never been observed to decay. They occur among the 80 different elements that have one or more stable isotopes. See stable nuclide and primordial nuclide. Unstable nuclides are radioactive and are called radionuclides. Their decay products ('daughter' products) are called radiogenic nuclides. Origins of naturally occurring radionuclides Natural radionuclides may be conveniently subdivided into three types. First, those whose half-lives t1/2 are at least 2% as long as the age of the Earth (for practical purposes, these are difficult to detect with half-lives less than 10% of the age of the Earth). These are remnants of nucleosynthesis that occurred in stars before the formation of the Solar System. For example, the isotope 238U (t1/2 = 4.47×10^9 years) of uranium is still fairly abundant in nature, but the shorter-lived isotope 235U (t1/2 = 7.04×10^8 years) is 138 times rarer. About 34 of these nuclides have been discovered (see List of nuclides and Primordial nuclide for details). The second group of radionuclides that exist naturally consists of radiogenic nuclides such as 226Ra (t1/2 = 1,600 years), an isotope of radium, which are formed by radioactive decay. They occur in the decay chains of primordial isotopes of uranium or thorium. Some of these nuclides are very short-lived, such as isotopes of francium. There exist about 51 of these daughter nuclides that have half-lives too short to be primordial, and which exist in nature solely due to decay from longer-lived radioactive primordial nuclides. The third group consists of nuclides that are continuously being made in another fashion that is not simple spontaneous radioactive decay (i.e., only one atom involved with no incoming particle) but instead involves a natural nuclear reaction. These occur when atoms react with natural neutrons (from cosmic rays, spontaneous fission, or other sources), or are bombarded directly with cosmic rays. The latter, if non-primordial, are called cosmogenic nuclides. Other types of natural nuclear reactions produce nuclides that are said to be nucleogenic nuclides. Examples of nuclides made by nuclear reactions are cosmogenic 14C (radiocarbon), which is made by cosmic ray bombardment of other elements, and nucleogenic 239Pu, which is still being created by neutron bombardment of natural 238U as a result of natural fission in uranium ores. Cosmogenic nuclides may be either stable or radioactive. If they are stable, their existence must be deduced against a background of stable nuclides, since every known stable nuclide is present on Earth primordially. Artificially produced nuclides Beyond the naturally occurring nuclides, more than 3000 radionuclides of varying half-lives have been artificially produced and characterized. The known nuclides are shown in Table of nuclides. A list of primordial nuclides is given sorted by element, at List of elements by stability of isotopes. List of nuclides is sorted by half-life, for the 905 nuclides with half-lives longer than one hour. Summary table for numbers of each class of nuclides This is a summary table for the 905 nuclides with half-lives longer than one hour, given in list of nuclides. Note that numbers are not exact, and may change slightly in the future, if some "stable" nuclides are observed to be radioactive with very long half-lives.
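The 138-fold rarity of 235U quoted above is mostly a decay effect, which a short back-of-envelope script makes visible. The half-lives are standard values, and the inferred primordial ratio is only as good as the assumed age of the Earth:

```python
AGE_OF_EARTH = 4.54e9          # years

def surviving_fraction(half_life_years, elapsed_years=AGE_OF_EARTH):
    """Fraction of a primordial nuclide remaining after exponential decay."""
    return 2.0 ** (-elapsed_years / half_life_years)

u238 = surviving_fraction(4.468e9)   # ~0.49 of the original U-238 remains
u235 = surviving_fraction(7.04e8)    # ~0.011 of the original U-235 remains

# Work back from today's abundance ratio (0.72% vs 99.27%) to the
# primordial one: roughly 0.3, i.e. U-235 was once far more abundant.
today = 0.0072 / 0.9927
print(today * (u238 / u235))         # -> about 0.31
```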
Nuclear properties and stability Atomic nuclei other than hydrogen have protons and neutrons bound together by the residual strong force. Because protons are positively charged, they repel each other. Neutrons, which are electrically neutral, stabilize the nucleus in two ways. Their copresence pushes protons slightly apart, reducing the electrostatic repulsion between the protons, and they exert the attractive nuclear force on each other and on protons. For this reason, one or more neutrons are necessary for two or more protons to be bound into a nucleus. As the number of protons increases, so does the ratio of neutrons to protons necessary to ensure a stable nucleus. For example, although the neutron–proton ratio of 3He is 1:2, the neutron–proton ratio of 238U is greater than 3:2. A number of lighter elements have stable nuclides with the ratio 1:1 (Z = N). The nuclide 40Ca (calcium-40) is observationally the heaviest stable nuclide with the same number of neutrons and protons. All stable nuclides heavier than calcium-40 contain more neutrons than protons. Even and odd nucleon numbers The proton–neutron ratio is not the only factor affecting nuclear stability. Stability depends also on the even or odd parity of the atomic number Z, the neutron number N and, consequently, their sum, the mass number A. Oddness of both Z and N tends to lower the nuclear binding energy, making odd nuclei, generally, less stable. This remarkable difference of nuclear binding energy between neighbouring nuclei, especially of odd-A isobars, has important consequences: unstable isotopes with a nonoptimal number of neutrons or protons decay by beta decay (including positron decay), electron capture or more exotic means, such as spontaneous fission and cluster decay. The majority of stable nuclides are even-proton–even-neutron, where all numbers Z, N, and A are even. The odd-A stable nuclides are divided (roughly evenly) into odd-proton–even-neutron, and even-proton–odd-neutron nuclides. Odd-proton–odd-neutron nuclides (and nuclei) are the least common. See also Isotope (much more information on abundance of stable nuclides) List of elements by stability of isotopes List of nuclides (sorted by half-life) Table of nuclides Alpha nuclide Monoisotopic element Mononuclidic element Primordial element Radionuclide Hypernucleus References External links Livechart - Table of Nuclides at The International Atomic Energy Agency Nuclear physics
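The nuclide relations defined earlier (isotopes, isobars, isotones, isodiaphers) are simple equalities among Z, N and their combinations, which a few lines of Python make explicit; the function name is an illustrative choice:

```python
def relations(z1, n1, z2, n2):
    """Name the relations holding between two nuclides given as (Z, N)."""
    found = []
    if z1 == z2:
        found.append("isotopes")      # equal proton number Z
    if n1 == n2:
        found.append("isotones")      # equal neutron number N
    if z1 + n1 == z2 + n2:
        found.append("isobars")       # equal mass number A = Z + N
    if n1 - z1 == n2 - z2:
        found.append("isodiaphers")   # equal neutron excess N - Z
    return found

print(relations(6, 7, 6, 8))    # carbon-13 vs carbon-14   -> ['isotopes']
print(relations(6, 8, 7, 7))    # carbon-14 vs nitrogen-14 -> ['isobars']
print(relations(6, 7, 7, 7))    # carbon-13 vs nitrogen-14 -> ['isotones']
print(relations(6, 8, 8, 10))   # carbon-14 vs oxygen-18   -> ['isodiaphers']
```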
Nuclide
[ "Physics", "Chemistry" ]
2,140
[ "Isotopes", "Nuclear physics" ]
152,465
https://en.wikipedia.org/wiki/Optic%20nerve
In neuroanatomy, the optic nerve, also known as the second cranial nerve, cranial nerve II, or simply CN II, is a paired cranial nerve that transmits visual information from the retina to the brain. In humans, the optic nerve is derived from optic stalks during the seventh week of development and is composed of retinal ganglion cell axons and glial cells; it extends from the optic disc to the optic chiasma and continues as the optic tract to the lateral geniculate nucleus, pretectal nuclei, and superior colliculus. Structure The optic nerve has been classified as the second of twelve paired cranial nerves, but it is technically a myelinated tract of the central nervous system, rather than a classical nerve of the peripheral nervous system because it is derived from an out-pouching of the diencephalon (optic stalks) during embryonic development. As a consequence, the fibers of the optic nerve are covered with myelin produced by oligodendrocytes, rather than Schwann cells of the peripheral nervous system, and are encased within the meninges. Peripheral neuropathies like Guillain–Barré syndrome do not affect the optic nerve. However, most typically, the optic nerve is grouped with the other eleven cranial nerves and is considered to be part of the peripheral nervous system. The optic nerve is ensheathed in all three meningeal layers (dura, arachnoid, and pia mater) rather than the epineurium, perineurium, and endoneurium found in peripheral nerves. Fiber tracts of the mammalian central nervous system have only limited regenerative capabilities compared to the peripheral nervous system. Therefore, in most mammals, optic nerve damage results in irreversible blindness. The fibers from the retina run along the optic nerve to nine primary visual nuclei in the brain, from which a major relay inputs into the primary visual cortex. The optic nerve is composed of retinal ganglion cell axons and glia. Each human optic nerve contains between 770,000 and 1.7 million nerve fibers, which are axons of the retinal ganglion cells of one retina. In the fovea, which has high acuity, these ganglion cells connect to as few as 5 photoreceptor cells; in other areas of the retina, they connect to thousands of photoreceptors. The optic nerve leaves the orbit (eye socket) via the optic canal, running postero-medially towards the optic chiasm, where there is a partial decussation (crossing) of fibers from the temporal visual fields (the nasal hemi-retina) of both eyes. The proportion of decussating fibers varies between species, and is correlated with the degree of binocular vision enjoyed by a species. Most of the axons of the optic nerve terminate in the lateral geniculate nucleus from where information is relayed to the visual cortex, while other axons terminate in the pretectal area and are involved in reflexive eye movements. Other axons terminate in the suprachiasmatic nucleus and are involved in regulating the sleep-wake cycle. Its diameter increases from about 1.6 mm within the eye to 3.5 mm in the orbit to 4.5 mm within the cranial space. The optic nerve component lengths are 1 mm in the globe, 24 mm in the orbit, 9 mm in the optic canal, and 16 mm in the cranial space before joining the optic chiasm. There, partial decussation occurs, and about 53% of the fibers cross to form the optic tracts. Most of these fibers terminate in the lateral geniculate body. 
Based on this anatomy, the optic nerve may be divided into four parts: 1. the optic nerve head (where it begins in the eyeball (globe) with fibers from the retina); 2. the orbital part (the part within the orbit); 3. the intracanalicular part (the part within a bony canal known as the optic canal); and 4. the cranial part (the part within the cranial cavity, which ends at the optic chiasm). From the lateral geniculate body, fibers of the optic radiation pass to the visual cortex in the occipital lobe of the brain. In more specific terms, fibers carrying information from the contralateral superior visual field traverse Meyer's loop to terminate in the lingual gyrus below the calcarine fissure in the occipital lobe, and fibers carrying information from the contralateral inferior visual field terminate more superiorly, to the cuneus. Function The optic nerve transmits all visual information including brightness perception, color perception and contrast (visual acuity). It also conducts the visual impulses that are responsible for two important neurological reflexes: the light reflex and the accommodation reflex. The light reflex refers to the constriction of both pupils that occurs when light is shone into either eye. The accommodation reflex refers to the thickening (increased curvature) of the lens of the eye that occurs when one looks at a near object (for example: when reading, the lens adjusts to near vision). The eye's blind spot is a result of the absence of photoreceptors in the area of the retina where the optic nerve leaves the eye. Clinical significance Disease Damage to the optic nerve typically causes permanent and potentially severe loss of vision, as well as an abnormal pupillary reflex, which is important for the diagnosis of nerve damage. The type of visual field loss will depend on which portions of the optic nerve were damaged. In general, the location of the damage in relation to the optic chiasm will affect the areas of vision loss. Damage to the optic nerve that is anterior, or in front of the optic chiasm (toward the face) causes loss of vision in the eye on the same side as the damage. Damage at the optic chiasm itself typically causes loss of vision laterally in both visual fields, or bitemporal hemianopsia. Such damage may occur with large pituitary tumors, such as pituitary adenoma. Finally, damage to the optic tract, which is posterior to, or behind the chiasm, causes loss of the entire visual field from the side opposite the damage, e.g. if the left optic tract were cut, there would be a loss of vision from the entire right visual field. Injury to the optic nerve can be the result of congenital or inheritable problems like Leber's hereditary optic neuropathy, glaucoma, trauma, toxicity, inflammation, ischemia, infection (very rarely), or compression from tumors or aneurysms. By far, the three most common injuries to the optic nerve are from glaucoma; optic neuritis, especially in those younger than 50 years of age; and anterior ischemic optic neuropathy, usually in those older than 50. Glaucoma is a group of diseases involving loss of retinal ganglion cells causing optic neuropathy in a pattern of peripheral vision loss, initially sparing central vision. Glaucoma is frequently associated with increased intraocular pressure that damages the optic nerve as it exits the eyeball. 
The trabecular meshwork assists the drainage of aqueous humor. Excess aqueous humor increases intraocular pressure, producing the signs and symptoms of glaucoma. Optic neuritis is inflammation of the optic nerve. It is associated with a number of diseases, the most notable one being multiple sclerosis. The patient will likely experience varying vision loss and eye pain. The condition tends to be episodic. Anterior ischemic optic neuropathy is commonly known as a "stroke of the optic nerve" and affects the optic nerve head (where the nerve exits the eyeball). There is usually a sudden loss of blood supply and nutrients to the optic nerve head. Vision loss is typically sudden and most commonly occurs upon waking up in the morning. This condition is most common in diabetic patients 40–70 years old. Other optic nerve problems are less common. Optic nerve hypoplasia is the underdevelopment of the optic nerve resulting in little to no vision in the affected eye. Tumors, especially those of the pituitary gland, can put pressure on the optic nerve causing various forms of visual loss. Similarly, cerebral aneurysms, a swelling of blood vessel(s), can also affect the nerve. Trauma can cause serious injury to the nerve. Direct optic nerve injury can occur from a penetrating injury to the orbit, but the nerve can also be injured by indirect trauma in which severe head impact or movement stretches or even tears the nerve. Ophthalmologists and optometrists can detect and diagnose some optic nerve diseases but neuro-ophthalmologists are often best suited to diagnose and treat diseases of the optic nerve. The International Foundation for Optic Nerve Diseases (IFOND) sponsors research and provides information on a variety of optic nerve disorders. See also Ophthalmic nerve (CN V1) Bistratified cell References External links The optic nerve on MRI IFOND online case history – Optic nerve analysis with both scanning laser polarimetry with variable corneal compensation (GDx VCC) and confocal scanning laser ophthalmoscopy (HRT II - Heidelberg Retina Tomograph). Also includes actual fundus photos. Animations of extraocular cranial nerve and muscle function and damage (University of Liverpool) Cranial nerves Human head and neck Nervous system Neurology Nerves of the head and neck Ophthalmology Otorhinolaryngology Visual system
Optic nerve
[ "Biology" ]
2,064
[ "Organ systems", "Nervous system" ]
152,518
https://en.wikipedia.org/wiki/Abel%E2%80%93Ruffini%20theorem
In mathematics, the Abel–Ruffini theorem (also known as Abel's impossibility theorem) states that there is no solution in radicals to general polynomial equations of degree five or higher with arbitrary coefficients. Here, general means that the coefficients of the equation are viewed and manipulated as indeterminates. The theorem is named after Paolo Ruffini, who made an incomplete proof in 1799 (which was refined and completed in 1813 and accepted by Cauchy) and Niels Henrik Abel, who provided a proof in 1824. Abel–Ruffini theorem refers also to the slightly stronger result that there are equations of degree five and higher that cannot be solved by radicals. This does not follow from Abel's statement of the theorem, but is a corollary of his proof, as his proof is based on the fact that some polynomials in the coefficients of the equation are not the zero polynomial. This improved statement follows directly from Galois theory. Galois theory implies also that x^5 − x − 1 = 0 is the simplest equation that cannot be solved in radicals, and that almost all polynomials of degree five or higher cannot be solved in radicals. The impossibility of solving in radicals in degree five or higher contrasts with the case of lower degree: one has the quadratic formula, the cubic formula, and the quartic formula for degrees two, three, and four, respectively. Context Polynomial equations of degree two can be solved with the quadratic formula, which has been known since antiquity. Similarly the cubic formula for degree three, and the quartic formula for degree four, were found during the 16th century. At that time a fundamental problem was whether equations of higher degree could be solved in a similar way. The fact that every polynomial equation of positive degree has solutions, possibly non-real, was asserted during the 17th century, but completely proved only at the beginning of the 19th century. This is the fundamental theorem of algebra, which does not provide any tool for computing exactly the solutions, although Newton's method allows approximating the solutions to any desired accuracy. From the 16th century to the beginning of the 19th century, the main problem of algebra was to search for a formula for the solutions of polynomial equations of degree five and higher, hence the name the "fundamental theorem of algebra". This meant a solution in radicals, that is, an expression involving only the coefficients of the equation, and the operations of addition, subtraction, multiplication, division, and nth root extraction. The Abel–Ruffini theorem proves that this is impossible. However, this impossibility does not imply that a specific equation of any degree cannot be solved in radicals. On the contrary, there are equations of any degree that can be solved in radicals. This is the case of the equation x^n = 1 for any n, and the equations defined by cyclotomic polynomials, all of whose solutions can be expressed in radicals. Abel's proof of the theorem does not explicitly contain the assertion that there are specific equations that cannot be solved by radicals. Such an assertion is not a consequence of Abel's statement of the theorem, as the statement does not exclude the possibility that "every particular quintic equation might be soluble, with a special formula for each equation." 
However, the existence of specific equations that cannot be solved in radicals seems to be a consequence of Abel's proof, as the proof uses the fact that some polynomials in the coefficients are not the zero polynomial, and, given a finite number of polynomials, there are values of the variables at which none of the polynomials takes the value zero. Soon after Abel's publication of his proof, Évariste Galois introduced a theory, now called Galois theory, that allows deciding, for any given equation, whether it is solvable in radicals. This was purely theoretical before the rise of electronic computers. With modern computers and programs, deciding whether a polynomial is solvable by radicals can be done for polynomials of degree greater than 100. Computing the solutions in radicals of solvable polynomials requires huge computations. Even for the degree five, the expression of the solutions is so huge that it has no practical interest. Proof The proof of the Abel–Ruffini theorem predates Galois theory. However, Galois theory allows a better understanding of the subject, and modern proofs are generally based on it, while the original proofs of the Abel–Ruffini theorem are still presented for historical purposes. The proofs based on Galois theory comprise four main steps: the characterization of solvable equations in terms of field theory; the use of the Galois correspondence between subfields of a given field and the subgroups of its Galois group for expressing this characterization in terms of solvable groups; the proof that the symmetric group is not solvable if its degree is five or higher; and the existence of polynomials with a symmetric Galois group. Algebraic solutions and field theory An algebraic solution of a polynomial equation is an expression involving the four basic arithmetic operations (addition, subtraction, multiplication, and division), and root extractions. Such an expression may be viewed as the description of a computation that starts from the coefficients of the equation to be solved and proceeds by computing some numbers, one after the other. At each step of the computation, one may consider the smallest field that contains all numbers that have been computed so far. This field is changed only for the steps involving the computation of an nth root. So, an algebraic solution produces a sequence of fields F_0 ⊆ F_1 ⊆ ⋯ ⊆ F_k and elements x_1, …, x_k such that F_i = F_(i−1)(x_i) for i = 1, …, k, with x_i^(n_i) ∈ F_(i−1) for some integer n_i. An algebraic solution of the initial polynomial equation exists if and only if there exists such a sequence of fields such that F_k contains a solution. For having normal extensions, which are fundamental for the theory, one must refine the sequence of fields as follows. If F_(i−1) does not contain all n_i-th roots of unity, one introduces the field K_i that extends F_(i−1) by a primitive root of unity, and one redefines F_i as K_i(x_i). So, if one starts from a solution in terms of radicals, one gets an increasing sequence of fields such that the last one contains the solution, and each is a normal extension of the preceding one with a Galois group that is cyclic. Conversely, if one has such a sequence of fields, the equation is solvable in terms of radicals. For proving this, it suffices to prove that a normal extension with a cyclic Galois group can be built from a succession of radical extensions. Galois correspondence The Galois correspondence establishes a one to one correspondence between the subextensions of a normal field extension E/F and the subgroups of the Galois group of the extension. 
This correspondence maps a field K such that E ⊇ K ⊇ F to the Galois group Gal(E/K) of the automorphisms of E that leave K fixed, and, conversely, maps a subgroup H of Gal(E/F) to the field of the elements of E that are fixed by H. The preceding section shows that an equation is solvable in terms of radicals if and only if the Galois group of its splitting field (the smallest field that contains all the roots) is solvable, that is, it contains a sequence of subgroups 1 = G_0 ⊂ G_1 ⊂ ⋯ ⊂ G_k = G such that each G_(i−1) is normal in G_i, with a quotient group G_i/G_(i−1) that is cyclic. (Solvable groups are commonly defined with abelian instead of cyclic quotient groups, but the fundamental theorem of finite abelian groups shows that the two definitions are equivalent). So, for proving the Abel–Ruffini theorem, it remains to show that the symmetric group is not solvable, and that there are polynomials with symmetric Galois groups. Solvable symmetric groups For n > 4, the symmetric group S_n of degree n has only the alternating group A_n as a nontrivial normal subgroup. For n > 4, the alternating group A_n is simple (that is, it does not have any nontrivial normal subgroup) and not abelian. This implies that both S_n and A_n are not solvable for n > 4. Thus, the Abel–Ruffini theorem results from the existence of polynomials with a symmetric Galois group; this will be shown in the next section. On the other hand, for n ≤ 4, the symmetric group and all its subgroups are solvable. This explains the existence of the quadratic, cubic, and quartic formulas, since a major result of Galois theory is that a polynomial equation has a solution in radicals if and only if its Galois group is solvable (the term "solvable group" takes its origin from this theorem). Polynomials with symmetric Galois groups General equation The general or generic polynomial equation of degree n is the equation x^n + a_1 x^(n−1) + ⋯ + a_(n−1) x + a_n = 0, where a_1, …, a_n are distinct indeterminates. This is an equation defined over the field F = Q(a_1, …, a_n) of the rational fractions in a_1, …, a_n with rational number coefficients. The original Abel–Ruffini theorem asserts that, for n ≥ 5, this equation is not solvable in radicals. In view of the preceding sections, this results from the fact that the Galois group over F of the equation is the symmetric group S_n (this Galois group is the group of the field automorphisms of the splitting field of the equation that fix the elements of F, where the splitting field is the smallest field containing all the roots of the equation). For proving that the Galois group is S_n, it is simpler to start from the roots. Let x_1, …, x_n be new indeterminates, aimed to be the roots, and consider the polynomial P(x) = x^n + b_1 x^(n−1) + ⋯ + b_(n−1) x + b_n = (x − x_1)(x − x_2) ⋯ (x − x_n). Let H = Q(x_1, …, x_n) be the field of the rational fractions in x_1, …, x_n, and K = Q(b_1, …, b_n) be its subfield generated by the coefficients of P(x). The permutations of the x_i induce automorphisms of H. Vieta's formulas imply that every element of K is a symmetric function of the x_i and is thus fixed by all these automorphisms. It follows that the Galois group Gal(H/K) is the symmetric group S_n. The fundamental theorem of symmetric polynomials implies that the b_i are algebraically independent, and thus that the map that sends each a_i to the corresponding b_i is a field isomorphism from F to K. This means that one may consider P(x) = 0 as a generic equation. This finishes the proof that the Galois group of a general equation is the symmetric group, and thus proves the original Abel–Ruffini theorem, which asserts that the general polynomial equation of degree n cannot be solved in radicals for n ≥ 5. Explicit example The equation x^5 − x − 1 = 0 is not solvable in radicals, as will be explained below. Let q be the polynomial x^5 − x − 1. Let G be its Galois group, which acts faithfully on the set of complex roots of q. 
Numbering the roots lets one identify G with a subgroup of the symmetric group S_5. Since q factors as (x^2 + x + 1)(x^3 + x^2 + 1) in F_2[x] (the polynomials with coefficients taken modulo 2), the group G contains a permutation g that is a product of disjoint cycles of lengths 2 and 3 (in general, when a monic integer polynomial reduces modulo a prime to a product of distinct monic irreducible polynomials, the degrees of the factors give the lengths of the disjoint cycles in some permutation belonging to the Galois group); then G also contains g^3, which is a transposition. Since q is irreducible in F_3[x], the same principle shows that G contains a 5-cycle. Because 5 is prime, any transposition and 5-cycle in S_5 generate the whole group. Thus G = S_5. Since the group S_5 is not solvable, the equation x^5 − x − 1 = 0 is not solvable in radicals. Cayley's resolvent Testing whether a specific quintic is solvable in radicals can be done by using Cayley's resolvent. This is a univariate polynomial of degree six whose coefficients are polynomials in the coefficients of a generic quintic. A specific irreducible quintic is solvable in radicals if and only if, when its coefficients are substituted in Cayley's resolvent, the resulting sextic polynomial has a rational root. History Around 1770, Joseph Louis Lagrange began the groundwork that unified the many different methods that had been used up to that point to solve equations, relating them to the theory of groups of permutations, in the form of Lagrange resolvents. This innovative work by Lagrange was a precursor to Galois theory, and its failure to develop solutions for equations of fifth and higher degrees hinted that such solutions might be impossible, but it did not provide conclusive proof. The first person who conjectured that the problem of solving quintics by radicals might be impossible to solve was Carl Friedrich Gauss, who wrote in 1798 in section 359 of his book Disquisitiones Arithmeticae (which would be published only in 1801) that "there is little doubt that this problem does not so much defy modern methods of analysis as that it proposes the impossible". The next year, in his thesis, he wrote "After the labors of many geometers left little hope of ever arriving at the resolution of the general equation algebraically, it appears more and more likely that this resolution is impossible and contradictory." And he added "Perhaps it will not be so difficult to prove, with all rigor, the impossibility for the fifth degree. I shall set forth my investigations of this at greater length in another place." Actually, Gauss published nothing else on this subject. The theorem was first nearly proved by Paolo Ruffini in 1799. He sent his proof to several mathematicians to get it acknowledged, amongst them Lagrange (who did not reply) and Augustin-Louis Cauchy, who sent him a letter saying: "Your memoir on the general solution of equations is a work which I have always believed should be kept in mind by mathematicians and which, in my opinion, proves conclusively the algebraic unsolvability of general equations of higher than fourth degree." However, in general, Ruffini's proof was not considered convincing. Abel wrote: "The first and, if I am not mistaken, the only one who, before me, has sought to prove the impossibility of the algebraic solution of general equations is the mathematician Ruffini. But his memoir is so complicated that it is very difficult to determine the validity of his argument. It seems to me that his argument is not completely satisfying." The proof also, as it was discovered later, was incomplete. 
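The factorizations used in the explicit example above are easy to verify with a computer algebra system. Here is a small sketch using SymPy (an assumption: SymPy is available; any CAS would do):

```python
# Verify the two reductions used for q(x) = x^5 - x - 1:
# q factors with degrees (2, 3) modulo 2, and stays irreducible modulo 3.
from sympy import symbols, factor

x = symbols("x")
q = x**5 - x - 1

print(factor(q, modulus=2))  # (x**2 + x + 1)*(x**3 + x**2 + 1)
print(factor(q, modulus=3))  # x**5 - x - 1 (no factorization: irreducible mod 3)
```

The (2, 3) cycle pattern modulo 2 yields a permutation whose cube is a transposition, and irreducibility modulo 3 yields a 5-cycle; as argued above, these together generate S_5.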
Ruffini assumed that all radicals that he was dealing with could be expressed from the roots of the polynomial using field operations alone; in modern terms, he assumed that the radicals belonged to the splitting field of the polynomial. To see why this is really an extra assumption, consider, for instance, the polynomial p(x) = x^3 − 15x − 20. According to Cardano's formula, one of its roots (all of them, actually) can be expressed as the sum of a cube root of 10 + 5i with a cube root of 10 − 5i. On the other hand, since p(−3) < 0, p(−2) > 0, p(−1) < 0, and p(5) > 0, the roots r_1, r_2, and r_3 of p(x) are all real and therefore the field Q(r_1, r_2, r_3) is a subfield of the real numbers. But then the numbers 10 ± 5i cannot belong to Q(r_1, r_2, r_3). While Cauchy either did not notice Ruffini's assumption or felt that it was a minor one, most historians believe that the proof was not complete until Abel proved the theorem on natural irrationalities, which asserts that the assumption holds in the case of general polynomials. The Abel–Ruffini theorem is thus generally credited to Abel, who published a proof compressed into just six pages in 1824. (Abel adopted a very terse style to save paper and money: the proof was printed at his own expense.) A more elaborated version of the proof would be published in 1826. Proving that the general quintic (and higher) equations were unsolvable by radicals did not completely settle the matter, because the Abel–Ruffini theorem does not provide necessary and sufficient conditions for saying precisely which quintic (and higher) equations are unsolvable by radicals. Abel was working on a complete characterization when he died in 1829. According to Nathan Jacobson, "The proofs of Ruffini and of Abel [...] were soon superseded by the crowning achievement of this line of research: Galois' discoveries in the theory of equations." In 1830, Galois (at the age of 18) submitted to the Paris Academy of Sciences a memoir on his theory of solvability by radicals, which was ultimately rejected in 1831 as being too sketchy and for giving a condition in terms of the roots of the equation instead of its coefficients. Galois was aware of the contributions of Ruffini and Abel, since he wrote "It is a common truth, today, that the general equation of degree greater than 4 cannot be solved by radicals... this truth has become common (by hearsay) despite the fact that geometers have ignored the proofs of Abel and Ruffini..." Galois then died in 1832 and his paper Mémoire sur les conditions de résolubilité des équations par radicaux remained unpublished until 1846, when it was published by Joseph Liouville accompanied by some of his own explanations. Prior to this publication, Liouville announced Galois' result to the academy in a speech he gave on 4 July 1843. A simplification of Abel's proof was published by Pierre Wantzel in 1845. When Wantzel published it, he was already aware of the contributions by Galois and he mentions that, whereas Abel's proof is valid only for general polynomials, Galois' approach can be used to provide a concrete polynomial of degree 5 whose roots cannot be expressed in radicals from its coefficients. In 1963, Vladimir Arnold discovered a topological proof of the Abel–Ruffini theorem, which served as a starting point for topological Galois theory. References External links Articles containing proofs Galois theory Niels Henrik Abel Solvable groups Theorems about polynomials
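Ruffini's hidden assumption, discussed above, can also be made concrete numerically: the Cardano expression for a root of p(x) = x^3 − 15x − 20 passes through the non-real numbers 10 ± 5i even though all three roots are real. A quick check in Python (using principal cube roots, whose arguments cancel here):

```python
# The real root of p(x) = x^3 - 15x - 20 from Cardano's formula:
# a cube root of 10 + 5i plus a cube root of 10 - 5i.
import cmath  # complex math; ** on complex numbers gives the principal branch

u = (10 + 5j) ** (1 / 3)      # principal cube root of 10 + 5i
v = (10 - 5j) ** (1 / 3)      # principal cube root of 10 - 5i
root = u + v                  # imaginary parts cancel

print(root)                                 # approximately (4.4188...+0j)
print(root.real**3 - 15 * root.real - 20)   # approximately 0
```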
Abel–Ruffini theorem
[ "Mathematics" ]
3,520
[ "Theorems in algebra", "Articles containing proofs", "Theorems about polynomials" ]
152,547
https://en.wikipedia.org/wiki/Bisection
In geometry, bisection is the division of something into two equal or congruent parts (having the same shape and size). Usually it involves a bisecting line, also called a bisector. The most often considered types of bisectors are the segment bisector, a line that passes through the midpoint of a given segment, and the angle bisector, a line that passes through the apex of an angle (that divides it into two equal angles). In three-dimensional space, bisection is usually done by a bisecting plane, also called the bisector. Perpendicular line segment bisector Definition The perpendicular bisector of a line segment is a line which meets the segment at its midpoint perpendicularly. The perpendicular bisector of a line segment AB also has the property that each of its points X is equidistant from the segment's endpoints: (D) |XA| = |XB|. The proof follows from |MA| = |MB| (where M is the midpoint) and Pythagoras' theorem: |XA|^2 = |XM|^2 + |MA|^2 = |XM|^2 + |MB|^2 = |XB|^2. Property (D) is usually used for the construction of a perpendicular bisector: Construction by straight edge and compass In classical geometry, the bisection is a simple compass and straightedge construction, whose possibility depends on the ability to draw arcs of equal radii and different centers: The segment AB is bisected by drawing intersecting circles of equal radius, whose centers are the endpoints of the segment. The line determined by the points of intersection of the two circles is the perpendicular bisector of the segment. Because the construction of the bisector is done without the knowledge of the segment's midpoint M, the construction is used for determining M as the intersection of the bisector and the line segment. This construction is in fact used when constructing a line perpendicular to a given line g at a given point P: drawing a circle whose center is P such that it intersects the line g in two points A and B, and the perpendicular to be constructed is the one bisecting segment AB. Equations If a and b are the position vectors of two points A and B, then the midpoint M has position vector (a + b)/2, and the vector a − b is a normal vector of the perpendicular line segment bisector. Hence its vector equation is (x − (a + b)/2) · (a − b) = 0. Expanding the equation leads to the vector equation (V) x · (a − b) = (|a|^2 − |b|^2)/2. With A = (a_1, a_2) and B = (b_1, b_2) one gets the equation in coordinate form: (C) (a_1 − b_1)x + (a_2 − b_2)y = (a_1^2 − b_1^2 + a_2^2 − b_2^2)/2. Or explicitly: (E) y = m(x − x_0) + y_0, where m = −(a_1 − b_1)/(a_2 − b_2), x_0 = (a_1 + b_1)/2, and y_0 = (a_2 + b_2)/2. Applications Perpendicular line segment bisectors were used solving various geometric problems: Construction of the center of a Thales' circle, Construction of the center of the Excircle of a triangle, Voronoi diagram boundaries consist of segments of such lines or planes. Perpendicular line segment bisectors in space The perpendicular bisector of a line segment is a plane, which meets the segment at its midpoint perpendicularly. Its vector equation is literally the same as in the plane case: (V) x · (a − b) = (|a|^2 − |b|^2)/2. With A = (a_1, a_2, a_3) and B = (b_1, b_2, b_3) one gets the equation in coordinate form: (C3) (a_1 − b_1)x + (a_2 − b_2)y + (a_3 − b_3)z = (a_1^2 − b_1^2 + a_2^2 − b_2^2 + a_3^2 − b_3^2)/2. Property (D) (see above) is literally true in space, too: (D) The perpendicular bisector plane of a segment AB has for any point X the property |XA| = |XB|. Angle bisector An angle bisector divides the angle into two angles with equal measures. An angle only has one bisector. Each point of an angle bisector is equidistant from the sides of the angle. The 'interior' or 'internal bisector' of an angle is the line, half-line, or line segment that divides an angle of less than 180° into two equal angles. The 'exterior' or 'external bisector' is the line that divides the supplementary angle (of 180° minus the original angle), formed by one side forming the original angle and the extension of the other side, into two equal angles. 
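The coordinate form (C) above translates directly into code. A minimal Python sketch (the function name and sample points are illustrative):

```python
# Perpendicular bisector of segment AB in coordinate form (C):
# (a1 - b1)*x + (a2 - b2)*y = (a1^2 - b1^2 + a2^2 - b2^2) / 2

def perpendicular_bisector(A, B):
    """Return (p, q, r) such that the bisector is the line p*x + q*y = r."""
    (a1, a2), (b1, b2) = A, B
    p = a1 - b1
    q = a2 - b2
    r = (a1**2 - b1**2 + a2**2 - b2**2) / 2
    return p, q, r

p, q, r = perpendicular_bisector((0, 0), (4, 2))
print(p, q, r)                 # -4 -2 -10.0, i.e. the line 2x + y = 5

# The midpoint (2, 1) lies on the bisector, as the construction requires:
print(p * 2 + q * 1 == r)      # True
```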
To bisect an angle with straightedge and compass, one draws a circle whose center is the vertex. The circle meets the angle at two points: one on each leg. Using each of these points as a center, draw two circles of the same size. The intersection of the circles (two points) determines a line that is the angle bisector. The proof of the correctness of this construction is fairly intuitive, relying on the symmetry of the problem. The trisection of an angle (dividing it into three equal parts) cannot be achieved with the compass and ruler alone (this was first proved by Pierre Wantzel). The internal and external bisectors of an angle are perpendicular. If the angle is formed by the two lines given algebraically as l_1 x + m_1 y + n_1 = 0 and l_2 x + m_2 y + n_2 = 0, then the internal and external bisectors are given by the two equations (l_1 x + m_1 y + n_1)/√(l_1^2 + m_1^2) = ±(l_2 x + m_2 y + n_2)/√(l_2^2 + m_2^2). Triangle Concurrencies and collinearities The bisectors of two exterior angles and the bisector of the other interior angle are concurrent. Three intersection points, each of an external angle bisector with the opposite extended side, are collinear (fall on the same line as each other). Three intersection points, two of them between an interior angle bisector and the opposite side, and the third between the other exterior angle bisector and the opposite side extended, are collinear. Angle bisector theorem The angle bisector theorem is concerned with the relative lengths of the two segments that a triangle's side is divided into by a line that bisects the opposite angle. It equates their relative lengths to the relative lengths of the other two sides of the triangle. Lengths If the side lengths of a triangle are a, b, c, the semiperimeter is s = (a + b + c)/2, and A is the angle opposite side a, then the length of the internal bisector of angle A is t_a = 2√(bcs(s − a))/(b + c), or in trigonometric terms, t_a = 2bc cos(A/2)/(b + c). If the internal bisector of angle A in triangle ABC has length t_a and if this bisector divides the side opposite A into segments of lengths m and n, then t_a^2 + mn = bc, where b and c are the side lengths opposite vertices B and C; and the side opposite A is divided in the proportion b:c. If the internal bisectors of angles A, B, and C have lengths t_a, t_b, and t_c, then ((b + c)^2/(bc)) t_a^2 + ((c + a)^2/(ca)) t_b^2 + ((a + b)^2/(ab)) t_c^2 = (a + b + c)^2. No two non-congruent triangles share the same set of three internal angle bisector lengths. Integer triangles There exist integer triangles with a rational angle bisector. Quadrilateral The internal angle bisectors of a convex quadrilateral either form a cyclic quadrilateral (that is, the four intersection points of adjacent angle bisectors are concyclic), or they are concurrent. In the latter case the quadrilateral is a tangential quadrilateral. Rhombus Each diagonal of a rhombus bisects opposite angles. Ex-tangential quadrilateral The excenter of an ex-tangential quadrilateral lies at the intersection of six angle bisectors. These are the internal angle bisectors at two opposite vertex angles, the external angle bisectors (supplementary angle bisectors) at the other two vertex angles, and the external angle bisectors at the angles formed where the extensions of opposite sides intersect. Parabola The tangent to a parabola at any point bisects the angle between the line joining the point to the focus and the line from the point and perpendicular to the directrix. Bisectors of the sides of a polygon Triangle Medians Each of the three medians of a triangle is a line segment going through one vertex and the midpoint of the opposite side, so it bisects that side (though not in general perpendicularly). 
The three medians intersect each other at a point which is called the centroid of the triangle, which is its center of mass if it has uniform density; thus any line through a triangle's centroid and one of its vertices bisects the opposite side. The centroid is twice as close to the midpoint of any one side as it is to the opposite vertex. Perpendicular bisectors The interior perpendicular bisector of a side of a triangle is the segment, falling entirely on and inside the triangle, of the line that perpendicularly bisects that side. The three perpendicular bisectors of a triangle's three sides intersect at the circumcenter (the center of the circle through the three vertices). Thus any line through a triangle's circumcenter and perpendicular to a side bisects that side. In an acute triangle the circumcenter divides the interior perpendicular bisectors of the two shortest sides in equal proportions. In an obtuse triangle the two shortest sides' perpendicular bisectors (extended beyond their opposite triangle sides to the circumcenter) are divided by their respective intersecting triangle sides in equal proportions. For any triangle the lengths of the interior perpendicular bisectors can be expressed in terms of the side lengths and the area of the triangle. Quadrilateral The two bimedians of a convex quadrilateral are the line segments that connect the midpoints of opposite sides, hence each bisecting two sides. The two bimedians and the line segment joining the midpoints of the diagonals are concurrent at a point called the "vertex centroid" and are all bisected by this point. The four "maltitudes" of a convex quadrilateral are the perpendiculars to a side through the midpoint of the opposite side, hence bisecting the latter side. If the quadrilateral is cyclic (inscribed in a circle), these maltitudes are concurrent at (all meet at) a common point called the "anticenter". Brahmagupta's theorem states that if a cyclic quadrilateral is orthodiagonal (that is, has perpendicular diagonals), then the perpendicular to a side from the point of intersection of the diagonals always bisects the opposite side. The perpendicular bisector construction forms a quadrilateral from the perpendicular bisectors of the sides of another quadrilateral. Area bisectors and perimeter bisectors Triangle There is an infinitude of lines that bisect the area of a triangle. Three of them are the medians of the triangle (which connect the sides' midpoints with the opposite vertices), and these are concurrent at the triangle's centroid; indeed, they are the only area bisectors that go through the centroid. Three other area bisectors are parallel to the triangle's sides; each of these intersects the other two sides so as to divide them into segments with the proportions √2 + 1 : 1. These six lines are concurrent three at a time: in addition to the three medians being concurrent, any one median is concurrent with two of the side-parallel area bisectors. The envelope of the infinitude of area bisectors is a deltoid (broadly defined as a figure with three vertices connected by curves that are concave to the exterior of the deltoid, making the interior points a non-convex set). The vertices of the deltoid are at the midpoints of the medians; all points inside the deltoid are on three different area bisectors, while all points outside it are on just one. The sides of the deltoid are arcs of hyperbolas that are asymptotic to the extended sides of the triangle. 
The ratio of the area of the envelope of area bisectors to the area of the triangle is invariant for all triangles, and equals (3/4)ln(2) − 1/2, i.e. 0.019860... or less than 2%. A cleaver of a triangle is a line segment that bisects the perimeter of the triangle and has one endpoint at the midpoint of one of the three sides. The three cleavers concur at (all pass through) the center of the Spieker circle, which is the incircle of the medial triangle. The cleavers are parallel to the angle bisectors. A splitter of a triangle is a line segment having one endpoint at one of the three vertices of the triangle and bisecting the perimeter. The three splitters concur at the Nagel point of the triangle. Any line through a triangle that splits both the triangle's area and its perimeter in half goes through the triangle's incenter (the center of its incircle). There are either one, two, or three of these for any given triangle. A line through the incenter bisects one of the area or perimeter if and only if it also bisects the other. Parallelogram Any line through the midpoint of a parallelogram bisects the area and the perimeter. Circle and ellipse All area bisectors and perimeter bisectors of a circle or other ellipse go through the center, and any chords through the center bisect the area and perimeter. In the case of a circle they are the diameters of the circle. Bisectors of diagonals Parallelogram The diagonals of a parallelogram bisect each other. Quadrilateral If a line segment connecting the diagonals of a quadrilateral bisects both diagonals, then this line segment (the Newton Line) is itself bisected by the vertex centroid. Volume bisectors A plane that divides two opposite edges of a tetrahedron in a given ratio also divides the volume of the tetrahedron in the same ratio. Thus any plane containing a bimedian (connector of opposite edges' midpoints) of a tetrahedron bisects the volume of the tetrahedron. References External links The Angle Bisector at cut-the-knot Angle Bisector definition. Math Open Reference With interactive applet Line Bisector definition. Math Open Reference With interactive applet Perpendicular Line Bisector. With interactive applet Animated instructions for bisecting an angle and bisecting a line Using a compass and straightedge Elementary geometry
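Two of the numeric claims in the area-bisector section above are easy to check. A short Python sketch:

```python
# 1. The envelope-to-triangle area ratio equals 3/4 * ln(2) - 1/2.
# 2. A side-parallel area bisector cuts each crossed side in proportion
#    sqrt(2) + 1 : 1, since halving the area scales the similar triangle
#    at the apex by 1/sqrt(2).
import math

print(0.75 * math.log(2) - 0.5)         # 0.019860... (less than 2%)

t = 1 / math.sqrt(2)                    # fraction of the side above the cut
print(t / (1 - t), math.sqrt(2) + 1)    # both 2.4142...
```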
Bisection
[ "Mathematics" ]
2,824
[ "Elementary mathematics", "Elementary geometry" ]
152,552
https://en.wikipedia.org/wiki/Aerospike%20engine
The aerospike engine is a type of rocket engine that maintains its aerodynamic efficiency across a wide range of altitudes. It belongs to the class of altitude compensating nozzle engines. Aerospike engines were proposed for many single-stage-to-orbit (SSTO) designs. They were a contender for the Space Shuttle main engine. However, as of 2023 no such engine was in commercial production, although some large-scale aerospikes were in testing phases. The term aerospike was originally used for a truncated plug nozzle with a rough conical taper and some gas injection, forming an "air spike" to help make up for the absence of the plug tail. However, a full-length plug nozzle may also be called an aerospike. Principles The purpose of any engine bell is to direct the exhaust of a rocket engine in one direction, generating thrust in the opposite direction. The exhaust, a high-temperature mix of gases, has an effectively random momentum distribution (i.e., the exhaust pushes in any direction it can). If the exhaust is allowed to escape in this form, only a small part of the flow will be moving in the correct direction and thus contribute to forward thrust. The bell redirects exhaust moving in the wrong direction so that it generates thrust in the correct direction. Ambient air pressure also imparts a small pressure against the exhaust, helping to keep it moving in the "right" direction as it exits the engine. As the vehicle travels upward through the atmosphere, ambient air pressure is reduced. This causes the thrust-generating exhaust to begin to expand outside the edge of the bell. Since this exhaust begins traveling in the "wrong" direction (i.e., outward from the main exhaust plume), the efficiency of the engine is reduced as the rocket travels because this escaping exhaust is no longer contributing to the thrust of the engine. An aerospike rocket engine seeks to eliminate this loss of efficiency. Instead of firing the exhaust out of a small hole in the middle of a bell, an aerospike engine avoids this random distribution by firing along the outside edge of a wedge-shaped protrusion, the "spike", which serves the same function as a traditional engine bell. The spike forms one side of a "virtual" bell, with the other side being formed by the outside air. The idea behind the aerospike design is that at low altitude the ambient pressure compresses the exhaust against the spike. Exhaust recirculation in the base zone of the spike can raise the pressure in that zone to nearly ambient. Since the pressure in front of the vehicle is ambient, this means that the exhaust at the base of the spike nearly balances out with the drag experienced by the vehicle. It gives no overall thrust, but this part of the nozzle also doesn't lose thrust by forming a partial vacuum. The thrust at the base part of the nozzle can be ignored at low altitude. As the vehicle climbs to higher altitudes, the air pressure holding the exhaust against the spike decreases, as does the drag in front of the vehicle. The recirculation zone at the base of the spike maintains the pressure in that zone to a fraction of 1 bar, higher than the near-vacuum in front of the vehicle, thus giving extra thrust as altitude increases. This effectively behaves like an "altitude compensator" in that the size of the bell automatically compensates as air pressure falls. The disadvantages of aerospikes seem to be extra weight for the spike. 
Furthermore, the larger cooled area can reduce performance below theoretical levels by reducing the pressure against the nozzle. Aerospikes work relatively poorly between Mach 1–3, where the airflow around the vehicle has reduced the pressure, thus reducing the thrust. Variations Several versions of the design exist, differentiated by their shapes. In the toroidal aerospike the spike is bowl-shaped with the exhaust exiting in a ring around the outer rim. In theory this requires an infinitely long spike for best efficiency, but by blowing a small amount of gas out of the center of a shorter truncated spike (like base bleed in an artillery shell), something similar can be achieved. In the linear aerospike the spike consists of a tapered wedge-shaped plate, with exhaust exiting on either side at the "thick" end. This design has the advantage of being stackable, allowing several smaller engines to be placed in a row to make one larger engine while augmenting steering performance with the use of individual engine throttle control. Performance Rocketdyne conducted a lengthy series of tests in the 1960s on various designs. Later models of these engines were based on their highly reliable J-2 engine machinery and provided the same sort of thrust levels as the conventional engines they were based on; 200,000 lbf (890 kN) in the J-2T-200k, and 250,000 lbf (1.1 MN) in the J-2T-250k (the T refers to the toroidal combustion chamber). Thirty years later their work was revived for use in NASA's X-33 project. In this case the slightly upgraded J-2S engine machinery was used with a linear spike, creating the XRS-2200. After more development and considerable testing, this project was cancelled when the X-33's composite fuel tanks repeatedly failed. Three XRS-2200 engines were built during the X-33 program and underwent testing at NASA's Stennis Space Center. The single-engine tests were a success, but the program was halted before the testing for the two-engine setup could be completed. The XRS-2200 produces 204,420 lbf (909 kN) of thrust with an Isp of 339 seconds at sea level, and 266,230 lbf (1,184 kN) of thrust with an Isp of 436.5 seconds in a vacuum. The RS-2200 Linear Aerospike Engine was derived from the XRS-2200. The RS-2200 was to power the VentureStar single-stage-to-orbit vehicle. In the latest design, seven RS-2200s would boost the VentureStar into low Earth orbit. The development on the RS-2200 was formally halted in early 2001 when the X-33 program did not receive Space Launch Initiative funding. Lockheed Martin chose not to continue the VentureStar program without any funding support from NASA. An engine of this type is on outdoor display on the grounds of the NASA Marshall Space Flight Center in Huntsville, Alabama. The cancellation of the Lockheed Martin X-33 by the federal government in 2001 decreased funding availability, but aerospike engines remain an area of active research. For example, a milestone was achieved when a joint academic/industry team from California State University, Long Beach (CSULB) and Garvey Spacecraft Corporation successfully conducted a flight test of a liquid-propellant powered aerospike engine in the Mojave Desert on September 20, 2003. CSULB students had developed their Prospector 2 (P-2) rocket using a 1,000 lbf (4.4 kN) LOX/ethanol aerospike engine. This work on aerospike engines continues; Prospector-10, a ten-chamber aerospike engine, was test-fired June 25, 2008. 
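The altitude sensitivity that aerospikes are meant to avoid, described in the Principles section above, can be illustrated with the pressure term of the standard one-dimensional thrust equation F = mdot·ve + (pe − pa)·Ae. The exit pressure and exit area below are made-up illustrative values, not data for any engine discussed in this article:

```python
# How much thrust a fixed bell nozzle gains or loses away from its design
# altitude, via the pressure term of F = mdot*ve + (pe - pa)*Ae.
# PE and AE are illustrative values only.

PE = 40_000.0   # exit pressure of a bell sized for roughly 7 km altitude, Pa
AE = 1.5        # nozzle exit area, m^2

for label, pa in [("sea level", 101_325.0),
                  ("design altitude", 40_000.0),
                  ("vacuum", 0.0)]:
    term_kN = (PE - pa) * AE / 1e3
    print(f"{label:15s}: pressure-thrust term = {term_kN:+7.1f} kN")
```

The sea-level line shows the large penalty an over-expanded fixed bell pays (about −92 kN with these numbers). An aerospike keeps the effective exit pressure close to ambient during ascent, avoiding that penalty; a full comparison of the two nozzle types would require isentropic-flow relations beyond this sketch.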
Further progress came in March 2004, when two successful tests sponsored by the NASA Dryden Flight Research Center were flown using high-power rockets manufactured by Blacksky Corporation, based in Carlsbad, California. The aerospike nozzles and solid rocket motors were developed and built by the rocket motor division of Cesaroni Technology Incorporated, north of Toronto, Ontario. The two rockets were solid-fuel powered and fitted with non-truncated toroidal aerospike nozzles. Flown at the Pecos County Aerospace Development Center, Fort Stockton, Texas, the rockets achieved speeds of about Mach 1.5. Small-scale aerospike engine development using a hybrid rocket propellant configuration has been ongoing by members of the Reaction Research Society. In 2020 TU Dresden and Fraunhofer IWS started their CFDμSAT project for research on additively manufactured aerospike engines. A prototype has already been tested in a test cell at TU Dresden's Institute of Aerospace Engineering, achieving a burn time of 30 seconds. Implementations Firefly Aerospace In July 2014 Firefly Space Systems announced its planned Alpha launcher that uses an aerospike engine for its first stage. Intended for the small satellite launch market, it is designed to launch satellites into low-Earth orbit (LEO) at a price of US$8–9 million, much lower than with conventional launchers. Firefly Alpha 1.0 was designed to carry payloads of up to 400 kg. It uses carbon composite materials and the same basic design for both stages. The plug-cluster aerospike engine provides the first stage's thrust. The engine has a bell-shaped nozzle that has been cut in half, then stretched to form a ring with the half-nozzle now forming the profile of a plug. This rocket design was never launched. The design was abandoned after Firefly Space Systems went bankrupt. A new company, Firefly Aerospace, has replaced the aerospike engine with a conventional engine in the Alpha 2.0 design. However, the company has proposed Firefly Gamma, a partially reusable spaceplane with aerospike engines. ARCA Space In March 2017 ARCA Space Corporation announced their intention to build a single-stage-to-orbit (SSTO) rocket, named Haas 2CA, using a linear aerospike engine. The rocket is designed to send up to 100 kg into low-Earth orbit, at a price of US$1 million per launch. They later announced sea-level and vacuum thrust figures for their Executor Aerospike engine. In June 2017, ARCA announced that they would fly their Demonstrator3 rocket to space, also using a linear aerospike engine. This rocket was designed to test several components of their Haas 2CA at lower cost. They announced a flight for August 2017. In September 2017, ARCA announced that, after being delayed, their linear aerospike engine was ready to perform ground tests and flight tests on a Demonstrator3 rocket. On December 20, 2019, ARCA tested the LAS 25DA aerospike steam rocket engine for the Launch Assist System. KSF Space and Interstellar Space Another spike engine concept model, by KSF Space and Interstellar Space in Los Angeles, was designed for an orbital vehicle named SATORI. Due to lack of funding, the concept is still undeveloped. Rocketstar Rocketstar planned to launch its 3D-printed aerospike rocket to an altitude of 50 miles in February 2019 but canceled the mission three days ahead of liftoff, citing safety concerns. They are working on a second launch attempt. 
Pangea Aerospace In November 2021, Spain-based Pangea Aerospace began hot-fire testing of its small-scale demonstration methane-oxygen aerospike engine DemoP1. After successfully testing the demonstrator DemoP1, Pangea plans to scale up to the 300 kN ARCOS engine. Stoke Space Headquartered in Kent, Washington, Stoke Space is building and testing a distributed architecture LH2/LOX aerospike system for its reusable second stage. Polaris Spaceplanes The Bremen-based German startup POLARIS Raumflugzeuge GmbH received a Bundeswehr contract to design and flight test a linear aerospike engine in April 2023. The company is set to test this new engine on board its fourth spaceplane demonstrator, DEMO-4 MIRA, in late 2023 at Peenemünde, where the V-2 rockets were developed. The original MIRA demonstrator was catastrophically damaged in a runway accident in February 2024. On 29 October 2024, the company was the first ever to ignite an aerospike engine in flight over the Baltic Sea, on board the four-engine, kerosene-fueled turbojet MIRA-II demonstrator. The test involved a three-second burn to collect data with minimal engine stress. The vehicle achieved an acceleration of 4 m/s², producing 900 newtons of thrust. Bath Rocket Team Based at the University of Bath, the Bath Rocket Team has been developing their own hybrid rocket engine with an aerospike nozzle since 2020. The engine was first tested at the UK Race to Space National Propulsion Competition in 2023. The team is developing a flight-ready version of the engine they are planning to fly for the first time at EuRoC24. SpaceFields SpaceFields, incubated at IISc, successfully tested India's first aerospike rocket engine at its Challakere facility on 11 September 2024. The engine achieved a peak thrust of 2,000 N and featured altitude compensation for optimal efficiency. LEAP 71 LEAP 71, a company based in Dubai, successfully hot-fired a 5,000 N aerospike engine powered by cryogenic liquid oxygen (LOX) and kerosene at the test stand of Airborne Engineering in Westcott, UK. The engine was created through the Noyron Large Computational Engineering Model, and 3D-printed using selective laser melting as a single monolithic part from copper (CuCrZr). The central spike was cooled using LOX, whereas the outer jacket was cooled using the kerosene fuel. See also Expanding nozzle Expansion deflection nozzle References External links Aerospike Engine Advanced Engines planned for uprated Saturn and Nova boosters — includes the J-2T Linear Aerospike Engine — Propulsion for the X-33 Vehicle Dryden Flight Research Center Aerospike Engine Control System Features And Performance X-33 Attitude Control Using The XRS-2200 Linear Aerospike Engine Are Aerospikes Better Than Bell Nozzles? Rocket propulsion Rocket engines Industrial design Engineering
Aerospike engine
[ "Technology", "Engineering" ]
2,835
[ "Industrial design", "Design engineering", "Engines", "Rocket engines", "Design" ]
152,567
https://en.wikipedia.org/wiki/Generalized%20Riemann%20hypothesis
The Riemann hypothesis is one of the most important conjectures in mathematics. It is a statement about the zeros of the Riemann zeta function. Various geometrical and arithmetical objects can be described by so-called global L-functions, which are formally similar to the Riemann zeta-function. One can then ask the same question about the zeros of these L-functions, yielding various generalizations of the Riemann hypothesis. Many mathematicians believe these generalizations of the Riemann hypothesis to be true. The only cases of these conjectures which have been proven occur in the algebraic function field case (not the number field case). Global L-functions can be associated to elliptic curves, number fields (in which case they are called Dedekind zeta-functions), Maass forms, and Dirichlet characters (in which case they are called Dirichlet L-functions). When the Riemann hypothesis is formulated for Dedekind zeta-functions, it is known as the extended Riemann hypothesis (ERH) and when it is formulated for Dirichlet L-functions, it is known as the generalized Riemann hypothesis or generalised Riemann hypothesis (GRH). These two statements will be discussed in more detail below. (Many mathematicians use the label generalized Riemann hypothesis to cover the extension of the Riemann hypothesis to all global L-functions, not just the special case of Dirichlet L-functions.) Generalized Riemann hypothesis (GRH) The generalized Riemann hypothesis (for Dirichlet L-functions) was probably formulated for the first time by Adolf Piltz in 1884. Like the original Riemann hypothesis, it has far reaching consequences about the distribution of prime numbers. The formal statement of the hypothesis follows. A Dirichlet character is a completely multiplicative arithmetic function χ such that there exists a positive integer k with χ(n + k) = χ(n) for all n and χ(n) = 0 whenever gcd(n, k) > 1. If such a character is given, we define the corresponding Dirichlet L-function by L(χ, s) = Σ_(n≥1) χ(n)/n^s for every complex number s such that Re(s) > 1. By analytic continuation, this function can be extended to a meromorphic function (only when χ is primitive) defined on the whole complex plane. The generalized Riemann hypothesis asserts that, for every Dirichlet character χ and every complex number s with L(χ, s) = 0, if s is not a negative real number, then the real part of s is 1/2. The case χ(n) = 1 for all n yields the ordinary Riemann hypothesis. Consequences of GRH Dirichlet's theorem states that if a and d are coprime natural numbers, then the arithmetic progression a, a + d, a + 2d, a + 3d, ... contains infinitely many prime numbers. Let π(x, a, d) denote the number of prime numbers in this progression which are less than or equal to x. If the generalized Riemann hypothesis is true, then for every coprime a and d and for every ε > 0, π(x, a, d) = (1/φ(d)) ∫_2^x dt/ln(t) + O(x^(1/2 + ε)) as x → ∞, where φ is Euler's totient function and O is the Big O notation. This is a considerable strengthening of the prime number theorem. If GRH is true, then every proper subgroup of the multiplicative group (Z/nZ)× omits a number less than 2(ln n)^2, as well as a number coprime to n less than 3(ln n)^2. In other words, (Z/nZ)× is generated by a set of numbers less than 2(ln n)^2. This is often used in proofs, and it has many consequences, for example (assuming GRH): The Miller–Rabin primality test is guaranteed to run in polynomial time. (A polynomial-time primality test which does not require GRH, the AKS primality test, was published in 2002.) The Shanks–Tonelli algorithm is guaranteed to run in polynomial time. 
The Ivanyos–Karpinski–Saxena deterministic algorithm for factoring polynomials over finite fields with prime constant-smooth degrees is guaranteed to run in polynomial time. If GRH is true, then for every prime p there exists a primitive root mod p (a generator of the multiplicative group of integers modulo p) that is less than O((ln p)^6). Goldbach's weak conjecture also follows from the generalized Riemann hypothesis. The yet to be verified proof of Harald Helfgott of this conjecture verifies the GRH for several thousand small characters up to a certain imaginary part to obtain sufficient bounds that prove the conjecture for all integers above 10^29, integers below which have already been verified by calculation. Assuming the truth of the GRH, the estimate of the character sum in the Pólya–Vinogradov inequality can be improved to O(√q log log q), q being the modulus of the character. Extended Riemann hypothesis (ERH) Suppose K is a number field (a finite-dimensional field extension of the rationals Q) with ring of integers O_K (this ring is the integral closure of the integers Z in K). If a is an ideal of O_K, other than the zero ideal, we denote its norm by Na. The Dedekind zeta-function of K is then defined by ζ_K(s) = Σ_a 1/(Na)^s for every complex number s with real part > 1. The sum extends over all non-zero ideals a of O_K. The Dedekind zeta-function satisfies a functional equation and can be extended by analytic continuation to the whole complex plane. The resulting function encodes important information about the number field K. The extended Riemann hypothesis asserts that for every number field K and every complex number s with ζ_K(s) = 0: if the real part of s is between 0 and 1, then it is in fact 1/2. The ordinary Riemann hypothesis follows from the extended one if one takes the number field to be Q, with ring of integers Z. The ERH implies an effective version of the Chebotarev density theorem: if L/K is a finite Galois extension with Galois group G, and C a union of conjugacy classes of G, the number of unramified primes of K of norm below x with Frobenius conjugacy class in C is (|C|/|G|)(li(x) + O(√x (n log x + log |Δ|))), where the constant implied in the big-O notation is absolute, n is the degree of L over Q, and Δ its discriminant. See also Artin's conjecture Dirichlet L-function Selberg class Grand Riemann hypothesis References Further reading Zeta and L-functions Algebraic geometry Conjectures Unsolved problems in mathematics Bernhard Riemann
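To make the definition of a Dirichlet L-function given above concrete, here is a small Python sketch for the nontrivial character mod 4 (χ(n) = 1, 0, −1, 0 for n ≡ 1, 2, 3, 0), whose L-function at s = 1 is the Leibniz series for π/4:

```python
# Partial sums of L(chi, s) for the nontrivial Dirichlet character mod 4:
# chi is completely multiplicative and periodic with k = 4, as required.
import math

def chi(n: int) -> int:
    return {0: 0, 1: 1, 2: 0, 3: -1}[n % 4]

def L_partial(s: float, terms: int = 10**6) -> float:
    """Truncated Dirichlet series  sum over n >= 1 of chi(n) / n**s."""
    return sum(chi(n) / n**s for n in range(1, terms + 1))

print(L_partial(1.0))   # 0.785398..., i.e. approximately pi/4
print(math.pi / 4)
```

GRH asserts that the nontrivial zeros of this and every other Dirichlet L-function lie on the line Re(s) = 1/2; the sketch only illustrates the series that defines the function for Re(s) > 1.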
Generalized Riemann hypothesis
[ "Mathematics" ]
1,278
[ "Unsolved problems in mathematics", "Fields of abstract algebra", "Conjectures", "Algebraic geometry", "Mathematical problems" ]
152,611
https://en.wikipedia.org/wiki/Cellular%20differentiation
Cellular differentiation is the process in which a stem cell changes from one type to a differentiated one. Usually, the cell changes to a more specialized type. Differentiation happens multiple times during the development of a multicellular organism as it changes from a simple zygote to a complex system of tissues and cell types. Differentiation continues in adulthood as adult stem cells divide and create fully differentiated daughter cells during tissue repair and during normal cell turnover. Some differentiation occurs in response to antigen exposure. Differentiation dramatically changes a cell's size, shape, membrane potential, metabolic activity, and responsiveness to signals. These changes are largely due to highly controlled modifications in gene expression and are the study of epigenetics. With a few exceptions, cellular differentiation almost never involves a change in the DNA sequence itself. Metabolic composition, however, is dramatically altered: stem cells are characterized by abundant metabolites with highly unsaturated structures, whose levels decrease upon differentiation. Thus, different cells can have very different physical characteristics despite having the same genome. A specialized type of differentiation, known as terminal differentiation, is of importance in some tissues, including vertebrate nervous system, striated muscle, epidermis and gut. During terminal differentiation, a precursor cell formerly capable of cell division permanently leaves the cell cycle, dismantles the cell cycle machinery and often expresses a range of genes characteristic of the cell's final function (e.g. myosin and actin for a muscle cell). Differentiation may continue to occur after terminal differentiation if the capacity and functions of the cell undergo further changes. Among dividing cells, there are multiple levels of cell potency, which is the cell's ability to differentiate into other cell types. A greater potency indicates a larger number of cell types that can be derived. A cell that can differentiate into all cell types, including the placental tissue, is known as totipotent. In mammals, only the zygote and subsequent blastomeres are totipotent, while in plants, many differentiated cells can become totipotent with simple laboratory techniques. A cell that can differentiate into all cell types of the adult organism is known as pluripotent. Such cells are called meristematic cells in higher plants and embryonic stem cells in animals, though some groups report the presence of adult pluripotent cells. Virally induced expression of four transcription factors, Oct4, Sox2, c-Myc, and Klf4 (Yamanaka factors), is sufficient to create pluripotent (iPS) cells from adult fibroblasts. A multipotent cell is one that can differentiate into multiple different, but closely related cell types. Oligopotent cells are more restricted than multipotent, but can still differentiate into a few closely related cell types. Finally, unipotent cells can differentiate into only one cell type, but are capable of self-renewal. In cytopathology, the level of cellular differentiation is used as a measure of cancer progression. "Grade" is a marker of how differentiated a cell in a tumor is. Mammalian cell types Three basic categories of cells make up the mammalian body: germ cells, somatic cells, and stem cells. 
Each of the approximately 37.2 trillion (3.72×10¹³) cells in an adult human has its own copy or copies of the genome, except certain cell types, such as red blood cells, that lack nuclei in their fully differentiated state. Most cells are diploid; they have two copies of each chromosome. Such cells, called somatic cells, make up most of the human body, such as skin and muscle cells. Cells differentiate to specialize for different functions.
Germ line cells are any line of cells that give rise to gametes—eggs and sperm—and thus are continuous through the generations. Stem cells, on the other hand, have the ability to divide for indefinite periods and to give rise to specialized cells. They are best described in the context of normal human development.
Development begins when a sperm fertilizes an egg and creates a single cell that has the potential to form an entire organism. In the first hours after fertilization, this cell divides into identical cells. In humans, approximately four days after fertilization and after several cycles of cell division, these cells begin to specialize, forming a hollow sphere of cells, called a blastocyst. The blastocyst has an outer layer of cells, and inside this hollow sphere, there is a cluster of cells called the inner cell mass. The cells of the inner cell mass go on to form virtually all of the tissues of the human body. Although the cells of the inner cell mass can form virtually every type of cell found in the human body, they cannot form an organism. These cells are referred to as pluripotent.
Pluripotent stem cells undergo further specialization into multipotent progenitor cells that then give rise to functional cells. Examples of stem and progenitor cells include:
Radial glial cells (embryonic neural stem cells) that give rise to excitatory neurons in the fetal brain through the process of neurogenesis.
Hematopoietic stem cells (adult stem cells) from the bone marrow that give rise to red blood cells, white blood cells, and platelets.
Mesenchymal stem cells (adult stem cells) from the bone marrow that give rise to stromal cells, fat cells, and types of bone cells.
Epithelial stem cells (progenitor cells) that give rise to the various types of skin cells.
Muscle satellite cells (progenitor cells) that contribute to differentiated muscle tissue.
A pathway that is guided by the cell adhesion molecules consisting of four amino acids, arginine, glycine, asparagine, and serine, is created as the cellular blastomere differentiates from the single-layered blastula to the three primary layers of germ cells in mammals, namely the ectoderm, mesoderm and endoderm (listed from most distal (exterior) to proximal (interior)). The ectoderm ends up forming the skin and the nervous system, the mesoderm forms the bones and muscular tissue, and the endoderm forms the internal organ tissues.
Dedifferentiation
Dedifferentiation, or integration, is a cellular process seen in more basal animal life forms, such as worms and amphibians, in which a differentiated cell reverts to an earlier developmental stage, usually as part of a regenerative process. Dedifferentiation also occurs in plant cells. In cell culture in the laboratory, cells can also change shape or lose specific properties such as protein expression; these processes are likewise termed dedifferentiation. Some hypothesize that dedifferentiation is an aberration that likely results in cancers, but others explain it as a natural part of the immune response that was lost to humans at some point of evolution.
A newly discovered molecule dubbed reversine, a purine analog, has proven to induce dedifferentiation in myotubes. These manifestly dedifferentiated cells, now performing essentially as stem cells, could then redifferentiate into osteoblasts and adipocytes.
Mechanisms
Each specialized cell type in an organism expresses a subset of all the genes that constitute the genome of that species. Each cell type is defined by its particular pattern of regulated gene expression. Cell differentiation is thus a transition of a cell from one cell type to another, and it involves a switch from one pattern of gene expression to another. Cellular differentiation during development can be understood as the result of a gene regulatory network. A regulatory gene and its cis-regulatory modules are nodes in a gene regulatory network; they receive input and create output elsewhere in the network. The systems biology approach to developmental biology emphasizes the importance of investigating how developmental mechanisms interact to produce predictable patterns (morphogenesis). However, an alternative view has been proposed recently. Based on stochastic gene expression, cellular differentiation is the result of a Darwinian selective process occurring among cells. In this frame, protein and gene networks are the result of cellular processes and not their cause.
While evolutionarily conserved molecular processes are involved in the cellular mechanisms underlying these switches, in animal species these are very different from the well-characterized gene regulatory mechanisms of bacteria, and even from those of the animals' closest unicellular relatives. Specifically, cell differentiation in animals is highly dependent on biomolecular condensates of regulatory proteins and enhancer DNA sequences.
Cellular differentiation is often controlled by cell signaling. Many of the signal molecules that convey information from cell to cell during the control of cellular differentiation are called growth factors. Although the details of specific signal transduction pathways vary, these pathways often share the following general steps. A ligand produced by one cell binds to a receptor in the extracellular region of another cell, inducing a conformational change in the receptor. The shape of the cytoplasmic domain of the receptor changes, and the receptor acquires enzymatic activity. The receptor then catalyzes reactions that phosphorylate other proteins, activating them. A cascade of phosphorylation reactions eventually activates a dormant transcription factor or cytoskeletal protein, thus contributing to the differentiation process in the target cell. Cells and tissues can vary in competence, their ability to respond to external signals.
Signal induction refers to cascades of signaling events, during which a cell or tissue signals to another cell or tissue to influence its developmental fate. Yamamoto and Jeffery investigated the role of the lens in eye formation in cave- and surface-dwelling fish, a striking example of induction. Through reciprocal transplants, Yamamoto and Jeffery found that the lens vesicle of surface fish can induce other parts of the eye to develop in cave- and surface-dwelling fish, while the lens vesicle of the cave-dwelling fish cannot. Other important mechanisms fall under the category of asymmetric cell divisions, divisions that give rise to daughter cells with distinct developmental fates.
Asymmetric cell divisions can occur because of asymmetrically expressed maternal cytoplasmic determinants or because of signaling. In the former mechanism, distinct daughter cells are created during cytokinesis because of an uneven distribution of regulatory molecules in the parent cell; the distinct cytoplasm that each daughter cell inherits results in a distinct pattern of differentiation for each daughter cell. A well-studied example of pattern formation by asymmetric divisions is body axis patterning in Drosophila. RNA molecules are an important type of intracellular differentiation control signal. The molecular and genetic basis of asymmetric cell divisions has also been studied in green algae of the genus Volvox, a model system for studying how unicellular organisms can evolve into multicellular organisms. In Volvox carteri, the 16 cells in the anterior hemisphere of a 32-cell embryo divide asymmetrically, each producing one large and one small daughter cell. The size of the cell at the end of all cell divisions determines whether it becomes a specialized germ or somatic cell.
Epigenetic control
Since each cell, regardless of cell type, possesses the same genome, determination of cell type must occur at the level of gene expression. While the regulation of gene expression can occur through cis- and trans-regulatory elements, including a gene's promoter and enhancers, the problem arises as to how this expression pattern is maintained over numerous generations of cell division. As it turns out, epigenetic processes play a crucial role in regulating the decision to adopt a stem, progenitor, or mature cell fate. This section will focus primarily on mammalian stem cells.
In systems biology and mathematical modeling of gene regulatory networks, cell-fate determination is predicted to exhibit certain dynamics, such as attractor convergence (the attractor can be an equilibrium point, limit cycle or strange attractor) or oscillatory behaviour; a toy illustration of this dynamical view appears below.
Importance of epigenetic control
The first question that can be asked is the extent and complexity of the role of epigenetic processes in the determination of cell fate. A clear answer to this question can be seen in the 2011 paper by Lister R, et al. on aberrant epigenomic programming in human induced pluripotent stem cells. As induced pluripotent stem cells (iPSCs) are thought to mimic embryonic stem cells in their pluripotent properties, few epigenetic differences should exist between them. To test this prediction, the authors conducted whole-genome profiling of DNA methylation patterns in several human embryonic stem cell (ESC), iPSC, and progenitor cell lines. Female adipose cells, lung fibroblasts, and foreskin fibroblasts were reprogrammed into an induced pluripotent state with the OCT4, SOX2, KLF4, and MYC genes. Patterns of DNA methylation in ESCs, iPSCs, and somatic cells were compared. Lister R, et al. observed significant resemblance in methylation levels between embryonic and induced pluripotent cells: around 80% of CG dinucleotides in ESCs and iPSCs were methylated, whereas the same was true of only 60% of CG dinucleotides in somatic cells. In addition, somatic cells possessed minimal levels of cytosine methylation in non-CG dinucleotides, while induced pluripotent cells possessed levels of methylation similar to those of embryonic stem cells, between 0.5 and 1.5%. Thus, consistent with their respective transcriptional activities, DNA methylation patterns, at least on the genomic level, are similar between ESCs and iPSCs.
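As an aside on the dynamical-systems view mentioned above, the following is a minimal, purely illustrative sketch of cell-fate determination as attractor convergence; it is not taken from the study under discussion, and all parameter values and names are arbitrary assumptions. Two mutually repressing genes form a bistable "toggle switch", and two nearly identical initial states settle into opposite stable expression patterns.

# Toy model (illustration only): a two-gene mutual-repression switch,
# a standard minimal caricature of attractor convergence in a gene
# regulatory network. Parameters are arbitrary assumptions.
import numpy as np

def toggle_switch(state, alpha=4.0, n=2.0, k=1.0, decay=1.0):
    # Time derivatives for two mutually repressing genes x and y.
    x, y = state
    dx = alpha * k**n / (k**n + y**n) - decay * x   # y represses x
    dy = alpha * k**n / (k**n + x**n) - decay * y   # x represses y
    return np.array([dx, dy])

def simulate(state, dt=0.01, steps=5000):
    # Forward-Euler integration until the trajectory settles.
    for _ in range(steps):
        state = state + dt * toggle_switch(state)
    return state

# Two nearly identical "progenitors" converge to opposite attractors
# (high-x/low-y versus low-x/high-y), i.e. to distinct stable states.
print(simulate(np.array([1.1, 0.9])))   # -> x high, y low
print(simulate(np.array([0.9, 1.1])))   # -> x low, y high

In this caricature, the two attractors stand in for alternative cell fates; real gene regulatory networks are far higher-dimensional, but the qualitative picture of convergence to discrete stable states is the one invoked above.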
However, upon examining methylation patterns more closely, the authors discovered 1175 regions of differential CG dinucleotide methylation in at least one ES or iPS cell line. By comparing these regions of differential methylation with regions of cytosine methylation in the original somatic cells, 44-49% of differentially methylated regions reflected methylation patterns of the respective progenitor somatic cells, while 51-56% of these regions were dissimilar to both the progenitor and embryonic cell lines. In vitro-induced differentiation of iPSC lines saw transmission of 88% and 46% of hyper- and hypo-methylated differentially methylated regions, respectively.
Two conclusions are readily apparent from this study. First, epigenetic processes are heavily involved in cell fate determination, as seen from the similar levels of cytosine methylation between induced pluripotent and embryonic stem cells, consistent with their respective patterns of transcription. Second, the mechanisms of reprogramming (and by extension, differentiation) are very complex and cannot be easily duplicated, as seen by the significant number of differentially methylated regions between ES and iPS cell lines. Now that these two points have been established, we can examine some of the epigenetic mechanisms that are thought to regulate cellular differentiation.
Mechanisms of epigenetic regulation
Pioneer factors (Oct4, Sox2, Nanog)
Three transcription factors, OCT4, SOX2, and NANOG – the first two of which are used in induced pluripotent stem cell (iPSC) reprogramming, along with Klf4 and c-Myc – are highly expressed in undifferentiated embryonic stem cells and are necessary for the maintenance of their pluripotency. It is thought that they achieve this through alterations in chromatin structure, such as histone modification and DNA methylation, to restrict or permit the transcription of target genes. While highly expressed, their levels require a precise balance to maintain pluripotency, perturbation of which will promote differentiation towards different lineages based on how the gene expression levels change. Differential regulation of Oct4 and SOX2 levels has been shown to precede germ layer fate selection. Increased levels of Oct4 and decreased levels of Sox2 promote a mesendodermal fate, with Oct4 actively suppressing genes associated with a neural ectodermal fate. Similarly, increased levels of Sox2 and decreased levels of Oct4 promote differentiation towards a neural ectodermal fate, with Sox2 inhibiting differentiation towards a mesendodermal fate. Regardless of the lineage down which cells differentiate, suppression of NANOG has been identified as a necessary prerequisite for differentiation.
Polycomb repressive complex (PRC2)
In the realm of gene silencing, Polycomb repressive complex 2, one of two classes of the Polycomb group (PcG) family of proteins, catalyzes the di- and tri-methylation of histone H3 lysine 27 (H3K27me2/me3). By binding to the H3K27me2/3-tagged nucleosome, PRC1 (also a complex of PcG family proteins) catalyzes the mono-ubiquitinylation of histone H2A at lysine 119 (H2AK119Ub1), blocking RNA polymerase II activity and resulting in transcriptional suppression. PcG knockout ES cells do not differentiate efficiently into the three germ layers, and deletion of the PRC1 and PRC2 genes leads to increased expression of lineage-affiliated genes and unscheduled differentiation.
Presumably, PcG complexes are responsible for transcriptionally repressing differentiation and development-promoting genes.
Trithorax group proteins (TrxG)
Alternately, upon receiving differentiation signals, PcG proteins are recruited to promoters of pluripotency transcription factors. PcG-deficient ES cells can begin differentiation but cannot maintain the differentiated phenotype. Simultaneously, differentiation and development-promoting genes are activated by Trithorax group (TrxG) chromatin regulators and lose their repression. TrxG proteins are recruited at regions of high transcriptional activity, where they catalyze the trimethylation of histone H3 lysine 4 (H3K4me3) and promote gene activation through histone acetylation. PcG and TrxG complexes engage in direct competition and are thought to be functionally antagonistic, creating at differentiation and development-promoting loci what is termed a "bivalent domain" and rendering these genes sensitive to rapid induction or repression.
DNA methylation
Regulation of gene expression is further achieved through DNA methylation, in which the DNA methyltransferase-mediated methylation of cytosine residues in CpG dinucleotides maintains heritable repression by controlling DNA accessibility. The majority of CpG sites in embryonic stem cells are unmethylated and appear to be associated with H3K4me3-carrying nucleosomes. Upon differentiation, a small number of genes, including OCT4 and NANOG, are methylated and their promoters repressed to prevent their further expression. Consistently, DNA methylation-deficient embryonic stem cells rapidly enter apoptosis upon in vitro differentiation.
Nucleosome positioning
While the DNA sequence of most cells of an organism is the same, the binding patterns of transcription factors and the corresponding gene expression patterns are different. To a large extent, differences in transcription factor binding are determined by the chromatin accessibility of their binding sites through histone modification and/or pioneer factors. In particular, it is important to know whether a nucleosome is covering a given genomic binding site or not. This can be determined using a chromatin immunoprecipitation assay.
Histone acetylation and methylation
DNA-nucleosome interactions are characterized by two states: either tightly bound by nucleosomes and transcriptionally inactive, called heterochromatin, or loosely bound and usually, but not always, transcriptionally active, called euchromatin. The epigenetic processes of histone methylation and acetylation, and their inverses demethylation and deacetylation, primarily account for these changes. The effects of acetylation and deacetylation are more predictable. An acetyl group is either added to or removed from the positively charged lysine residues in histones by enzymes called histone acetyltransferases or histone deacetylases, respectively. The acetyl group prevents lysine's association with the negatively charged DNA backbone. Methylation is not as straightforward, as neither methylation nor demethylation consistently correlates with either gene activation or repression. However, certain methylations have been repeatedly shown to either activate or repress genes. The trimethylation of lysine 4 on histone 3 (H3K4me3) is associated with gene activation, whereas trimethylation of lysine 27 on histone 3 represses genes.
In stem cells
During differentiation, stem cells change their gene expression profiles.
Recent studies have implicated a role for nucleosome positioning and histone modifications during this process. There are two components of this process: turning off the expression of embryonic stem cell (ESC) genes, and the activation of cell fate genes. Lysine-specific demethylase 1 (KDM1A) is thought to prevent the use of enhancer regions of pluripotency genes, thereby inhibiting their transcription. It interacts with the Mi-2/NuRD (nucleosome remodelling and histone deacetylase) complex, giving an instance where methylation and acetylation are not discrete and mutually exclusive, but intertwined processes.
Role of signaling in epigenetic control
A final question to ask concerns the role of cell signaling in influencing the epigenetic processes governing differentiation. Such a role should exist, as it would be reasonable to think that extrinsic signaling can lead to epigenetic remodeling, just as it can lead to changes in gene expression through the activation or repression of different transcription factors. Little direct data is available concerning the specific signals that influence the epigenome, and the majority of current knowledge about the subject consists of speculations on plausible candidate regulators of epigenetic remodeling. We will first discuss several major candidates thought to be involved in the induction and maintenance of both embryonic stem cells and their differentiated progeny, and then turn to one example of a specific signaling pathway for which more direct evidence of a role in epigenetic change exists.
The first major candidate is the Wnt signaling pathway. The Wnt pathway is involved in all stages of differentiation, and the ligand Wnt3a can substitute for the overexpression of c-Myc in the generation of induced pluripotent stem cells. On the other hand, disruption of β-catenin, a component of the Wnt signaling pathway, leads to decreased proliferation of neural progenitors.
Growth factors comprise the second major set of candidate epigenetic regulators of cellular differentiation. These morphogens are crucial for development, and include bone morphogenetic proteins, transforming growth factors (TGFs), and fibroblast growth factors (FGFs). TGFs and FGFs have been shown to sustain expression of OCT4, SOX2, and NANOG by downstream signaling to Smad proteins. Depletion of growth factors promotes the differentiation of ESCs, while genes with bivalent chromatin can become either more restrictive or permissive in their transcription.
Several other signaling pathways are also considered to be primary candidates. The cytokine leukemia inhibitory factor (LIF) is associated with the maintenance of mouse ESCs in an undifferentiated state. This is achieved through its activation of the Jak-STAT3 pathway, which has been shown to be necessary and sufficient towards maintaining mouse ESC pluripotency. Retinoic acid can induce differentiation of human and mouse ESCs, and Notch signaling is involved in the proliferation and self-renewal of stem cells. Finally, Sonic hedgehog, in addition to its role as a morphogen, promotes embryonic stem cell differentiation and the self-renewal of somatic stem cells.
The problem, of course, is that the candidacy of these signaling pathways was inferred primarily on the basis of their role in development and cellular differentiation. While epigenetic regulation is necessary for driving cellular differentiation, it is certainly not sufficient for this process.
Direct modulation of gene expression through modification of transcription factors plays a key role that must be distinguished from heritable epigenetic changes that can persist even in the absence of the original environmental signals. Only a few examples of signaling pathways leading to epigenetic changes that alter cell fate currently exist, and we will focus on one of them.
Expression of Shh (Sonic hedgehog) upregulates the production of BMI1, a component of the PcG complex that recognizes H3K27me3. This occurs in a Gli-dependent manner, as Gli1 and Gli2 are downstream effectors of the Hedgehog signaling pathway. In culture, Bmi1 mediates the Hedgehog pathway's ability to promote human mammary stem cell self-renewal. In both humans and mice, researchers showed Bmi1 to be highly expressed in proliferating immature cerebellar granule cell precursors. When Bmi1 was knocked out in mice, impaired cerebellar development resulted, leading to significant reductions in postnatal brain mass along with abnormalities in motor control and behavior. A separate study showed a significant decrease in neural stem cell proliferation along with increased astrocyte proliferation in Bmi-null mice.
An alternative model of cellular differentiation during embryogenesis is that positional information is based on mechanical signalling by the cytoskeleton using embryonic differentiation waves. The mechanical signal is then epigenetically transduced via signal transduction systems (of which specific molecules such as Wnt are part) to result in differential gene expression.
In summary, the role of signaling in the epigenetic control of cell fate in mammals is largely unknown, but distinct examples exist that indicate the likely existence of further such mechanisms.
Effect of matrix elasticity
In order to fulfill the purpose of regenerating a variety of tissues, adult stem cells are known to migrate from their niches, adhere to new extracellular matrices (ECM) and differentiate. The ductility of these microenvironments is unique to different tissue types: the ECM surrounding brain, muscle and bone tissues ranges from soft to stiff. The transduction of the stem cells into these cell types is not directed solely by chemokine cues and cell-to-cell signaling. The elasticity of the microenvironment can also affect the differentiation of mesenchymal stem cells (MSCs), which originate in bone marrow. When MSCs are placed on substrates of the same stiffness as brain, muscle and bone ECM, the MSCs take on properties of those respective cell types. Matrix sensing requires the cell to pull against the matrix at focal adhesions, which triggers a cellular mechano-transducer to generate a signal indicating what force is needed to deform the matrix. To determine the key players in matrix-elasticity-driven lineage specification in MSCs, different matrix microenvironments were mimicked. From these experiments, it was concluded that the focal adhesions of the MSCs were the cellular mechano-transducer sensing the differences in matrix elasticity. The non-muscle myosin IIa-c isoforms generate the forces in the cell that lead to signaling of early commitment markers. Non-muscle myosin IIa generates the least force, increasing up to non-muscle myosin IIc. There are also factors in the cell that inhibit non-muscle myosin II, such as blebbistatin. This makes the cell effectively blind to the surrounding matrix.
Researchers have achieved some success in inducing stem cell-like properties in HEK 293 cells by providing a soft matrix without the use of diffusing factors. The stem-cell properties appear to be linked to tension in the cells' actin network. One identified mechanism for matrix-induced differentiation is tension-induced proteins, which remodel chromatin in response to mechanical stretch. The RhoA pathway is also implicated in this process.
Evolutionary history
A billion-year-old, likely holozoan, protist, Bicellum brasieri, with two types of cells, shows that the evolution of differentiated multicellularity, possibly but not necessarily of animal lineages, occurred at least 1 billion years ago, and possibly mainly in freshwater lakes rather than the ocean.
See also
Interbilayer Forces in Membrane Fusion
Fusion mechanism
Lipid bilayer fusion
Cell-cell fusogens
CAF-1
List of human cell types derived from the germ layers
References
Cellular processes
Developmental biology
Induced stem cells
Cellular differentiation
[ "Biology" ]
5,938
[ "Behavior", "Developmental biology", "Stem cell research", "Reproduction", "Cellular processes", "Induced stem cells" ]
152,623
https://en.wikipedia.org/wiki/Radiology
Radiology is the medical specialty that uses medical imaging to diagnose diseases and guide treatment within the bodies of humans and other animals. It began with radiography (which is why its name has a root referring to radiation), but today it includes all imaging modalities. This includes technologies that use no ionizing electromagnetic radiation (such as ultrasonography and magnetic resonance imaging), as well as others that do use radiation, such as computed tomography (CT), fluoroscopy, and nuclear medicine including positron emission tomography (PET). Interventional radiology is the performance of usually minimally invasive medical procedures with the guidance of imaging technologies such as those mentioned above.
The modern practice of radiology involves a team of several different healthcare professionals. A radiologist, who is a medical doctor with specialized post-graduate training, interprets medical images, communicates these findings to other physicians through reports or verbal communication, and uses imaging to perform minimally invasive medical procedures. The nurse is involved in the care of patients before and after imaging or procedures, including administration of medications, monitoring of vital signs and monitoring of sedated patients. The radiographer, also known as a "radiologic technologist" in some countries such as the United States and Canada, is a specially trained healthcare professional who uses sophisticated technology and positioning techniques to produce medical images for the radiologist to interpret. Depending on the individual's training and country of practice, the radiographer may specialize in one of the above-mentioned imaging modalities or have expanded roles in image reporting.
Diagnostic imaging modalities
Projection (plain) radiography
Radiographs (originally called roentgenographs, named after the discoverer of X-rays, Wilhelm Conrad Röntgen) are produced by transmitting X-rays through a patient. The X-rays are projected through the body onto a detector; an image is formed based on which rays pass through (and are detected) versus those that are absorbed or scattered in the patient (and thus are not detected). Röntgen discovered X-rays on November 8, 1895, and received the first Nobel Prize in Physics for his discovery in 1901.
In film-screen radiography, an X-ray tube generates a beam of X-rays, which is aimed at the patient. The X-rays that pass through the patient are filtered through a device called a grid or X-ray filter, to reduce scatter, and strike an undeveloped film, which is held tightly to a screen of light-emitting phosphors in a light-tight cassette. The film is then developed chemically and an image appears on the film. Film-screen radiography has been replaced first by phosphor plate radiography and more recently by digital radiography (DR) and EOS imaging. In the two latter systems, the X-rays strike sensors that convert the signals generated into digital information, which is transmitted and converted into an image displayed on a computer screen. In digital radiography the sensors form a plate, but in the EOS system, which is a slot-scanning system, a linear sensor vertically scans the patient.
Plain radiography was the only imaging modality available during the first 50 years of radiology. Due to its availability, speed, and lower costs compared to other modalities, radiography is often the first-line test of choice in radiologic diagnosis.
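The transmission physics sketched above can be made concrete with a back-of-envelope calculation using the Beer-Lambert law, I = I0 * exp(-mu * x). The attenuation coefficients below are rough order-of-magnitude assumptions for diagnostic-energy photons, not reference data; the point is only that bone and soft tissue transmit very different fractions of the beam, which is what creates image contrast.

# Illustrative numbers only: transmitted fraction of a monoenergetic
# X-ray beam under the Beer-Lambert law. The linear attenuation
# coefficients (mu, per cm) are rough assumptions, not reference data.
import math

def transmitted_fraction(mu_per_cm, thickness_cm):
    # Fraction of the incident beam surviving the stated thickness.
    return math.exp(-mu_per_cm * thickness_cm)

for tissue, mu, x in [("soft tissue", 0.2, 15.0),   # ~15 cm of torso
                      ("bone",        0.5,  2.0)]:  # ~2 cm of cortical bone
    print(tissue, f"{transmitted_fraction(mu, x):.3f}")

With these assumed values, roughly 5% of the beam survives 15 cm of soft tissue, while about 37% survives 2 cm of bone alone; paths through the body that include bone therefore arrive noticeably dimmer at the detector than paths through soft tissue only.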
Also, despite the large amount of data in CT scans, MR scans and other digital-based imaging, there are many disease entities in which the classic diagnosis is obtained by plain radiographs. Examples include various types of arthritis and pneumonia, bone tumors (especially benign bone tumors), fractures, congenital skeletal anomalies, and certain kidney stones. Mammography and DXA are two applications of low-energy projectional radiography, used for the evaluation of breast cancer and osteoporosis, respectively.
Fluoroscopy
Fluoroscopy and angiography are special applications of X-ray imaging, in which a fluorescent screen and image intensifier tube are connected to a closed-circuit television system. This allows real-time imaging of structures in motion or augmented with a radiocontrast agent. Radiocontrast agents are usually administered by swallowing or injecting into the body of the patient to delineate anatomy and functioning of the blood vessels, the genitourinary system, or the gastrointestinal tract (GI tract). Two radiocontrast agents are presently in common use. Barium sulfate (BaSO4) is given orally or rectally for evaluation of the GI tract. Iodine, in multiple proprietary forms, is given by oral, rectal, vaginal, intra-arterial or intravenous routes. These radiocontrast agents strongly absorb or scatter X-rays, and in conjunction with the real-time imaging, allow demonstration of dynamic processes, such as peristalsis in the digestive tract or blood flow in arteries and veins. Iodine contrast may also be concentrated in abnormal areas more or less than in normal tissues and make abnormalities (tumors, cysts, inflammation) more conspicuous. Additionally, in specific circumstances, air can be used as a contrast agent for the gastrointestinal system and carbon dioxide can be used as a contrast agent in the venous system; in these cases, the contrast agent attenuates the X-ray radiation less than the surrounding tissues.
Computed tomography
CT imaging uses X-rays in conjunction with computing algorithms to image the body. In CT, an X-ray tube and an opposing X-ray detector (or detectors) in a ring-shaped apparatus rotate around a patient, producing a computer-generated cross-sectional image (tomogram). CT is acquired in the axial plane, with coronal and sagittal images produced by computer reconstruction. Radiocontrast agents are often used with CT for enhanced delineation of anatomy. Although radiographs provide higher spatial resolution, CT can detect more subtle variations in attenuation of X-rays (higher contrast resolution). CT exposes the patient to significantly more ionizing radiation than a radiograph. Spiral multidetector CT uses 16, 64, 256 or more detectors during continuous motion of the patient through the radiation beam to obtain fine-detail images in a short exam time. With rapid administration of intravenous contrast during the CT scan, these fine-detail images can be reconstructed into three-dimensional (3D) images of carotid, cerebral, coronary or other arteries.
The introduction of computed tomography in the early 1970s revolutionized diagnostic radiology by providing front-line clinicians with detailed images of anatomic structures in three dimensions. CT scanning has become the test of choice in diagnosing some urgent and emergent conditions, such as cerebral hemorrhage, pulmonary embolism (clots in the arteries of the lungs), aortic dissection (tearing of the aortic wall), appendicitis, diverticulitis, and obstructing kidney stones.
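To give a sense of what the "computing algorithms" behind CT actually do, the following is a deliberately minimal, unfiltered parallel-beam back-projection sketch. It is a toy version only: real scanners use filtered back projection or iterative methods, and the phantom, angles and helper names here are all illustrative assumptions.

# Toy parallel-beam CT: simulate projections of a phantom at many
# angles, then reconstruct by smearing each profile back across the
# image (unfiltered back projection, hence a deliberately blurry result).
import numpy as np
from scipy.ndimage import rotate

def forward_project(image, angles_deg):
    # One projection per angle: rotate the object, sum along columns.
    return [rotate(image, a, reshape=False, order=1).sum(axis=0)
            for a in angles_deg]

def back_project(projections, angles_deg, size):
    recon = np.zeros((size, size))
    for proj, a in zip(projections, angles_deg):
        smear = np.tile(proj, (size, 1))            # smear each profile
        recon += rotate(smear, -a, reshape=False, order=1)
    return recon / len(angles_deg)

phantom = np.zeros((64, 64))
phantom[24:40, 28:36] = 1.0                          # a simple "organ"
angles = np.linspace(0.0, 180.0, 60, endpoint=False)
sinogram = forward_project(phantom, angles)
recon = back_project(sinogram, angles, 64)
print(recon[31, 31] > recon[5, 5])                   # True: object recovered

Even this crude reconstruction recovers the rough location of the object; in practice, the missing ramp filter is what removes the characteristic blur of plain back projection.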
Before the development of CT imaging, risky and painful exploratory surgery was often the only way to obtain a definitive diagnosis of the cause of severe abdominal pain which could not otherwise be ascertained from external observation. Continuing improvements in CT technology, including faster scanning times and improved resolution, have dramatically increased the accuracy and usefulness of CT scanning, which may partially account for its increased use in medical diagnosis.
Ultrasound
Medical ultrasonography uses ultrasound (high-frequency sound waves) to visualize soft tissue structures in the body in real time. No ionizing radiation is involved, but the quality of the images obtained using ultrasound is highly dependent on the skill of the person (ultrasonographer) performing the exam and on the patient's body size. Examinations of larger, overweight patients may have a decrease in image quality as their subcutaneous fat absorbs more of the sound waves. This results in fewer sound waves penetrating to organs and reflecting back to the transducer, resulting in loss of information and a poorer-quality image. Ultrasound is also limited by its inability to image through air pockets (lungs, bowel loops) or bone. Its use in medical imaging has developed mostly within the last 30 years. The first ultrasound images were static and two-dimensional (2D), but with modern ultrasonography, 3D reconstructions can be observed in real time, effectively becoming "4D".
Because ultrasound imaging techniques do not employ ionizing radiation to generate images (unlike radiography and CT scans), they are generally considered safer and are therefore more common in obstetrical imaging. The progression of pregnancies can be thoroughly evaluated with less concern about damage from the techniques employed, allowing early detection and diagnosis of many fetal anomalies. Growth can be assessed over time, important in patients with chronic disease or pregnancy-induced disease, and in multiple pregnancies (twins, triplets, etc.). Color-flow Doppler ultrasound measures the severity of peripheral vascular disease and is used by cardiologists for dynamic evaluation of the heart, heart valves and major vessels. Stenosis, for example, of the carotid arteries may be a warning sign for an impending stroke. A clot, embedded deep in one of the inner veins of the legs, can be found via ultrasound before it dislodges and travels to the lungs, resulting in a potentially fatal pulmonary embolism. Ultrasound is useful as a guide to performing biopsies to minimize damage to surrounding tissues and in drainages such as thoracentesis. Small, portable ultrasound devices now replace peritoneal lavage in trauma wards by non-invasively assessing for the presence of internal bleeding and any internal organ damage. Extensive internal bleeding or injury to the major organs may require surgery and repair.
Magnetic resonance imaging
MRI uses strong magnetic fields to align atomic nuclei (usually hydrogen protons) within body tissues, then uses a radio signal to disturb the axis of rotation of these nuclei and observes the radio frequency signal generated as the nuclei return to their baseline states. The radio signals are collected by small antennae, called coils, placed near the area of interest. An advantage of MRI is its ability to produce images in axial, coronal, sagittal and multiple oblique planes with equal ease. MRI scans give the best soft tissue contrast of all the imaging modalities.
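A back-of-envelope calculation connects the physics just described to the radio frequencies involved: nuclei precess at the Larmor frequency f = gamma_bar * B0, where gamma_bar (the gyromagnetic ratio divided by 2*pi) is about 42.58 MHz per tesla for hydrogen. The snippet below simply evaluates this relation at common clinical field strengths.

# Larmor frequency of hydrogen (1H) at common clinical field strengths.
GAMMA_BAR_MHZ_PER_T = 42.58   # 1H gyromagnetic ratio / (2*pi), in MHz/T

for b0_tesla in (1.5, 3.0):   # typical clinical magnets, in tesla
    print(f"{b0_tesla} T -> {GAMMA_BAR_MHZ_PER_T * b0_tesla:.1f} MHz")

This gives about 63.9 MHz at 1.5 T and 127.7 MHz at 3 T, i.e. the "radio signal" in the description above is literally in the radio-frequency range.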
With advances in scanning speed and spatial resolution, and improvements in computer 3D algorithms and hardware, MRI has become an important tool in musculoskeletal radiology and neuroradiology. One disadvantage is that the patient has to hold still for long periods of time in a noisy, cramped space while the imaging is performed. Claustrophobia (fear of closed spaces) severe enough to terminate the MRI exam is reported in up to 5% of patients. Recent improvements in magnet design, including stronger magnetic fields (3 teslas), shortened exam times, wider and shorter magnet bores, and more open magnet designs, have brought some relief for claustrophobic patients. However, for magnets with equivalent field strengths, there is often a trade-off between image quality and open design. MRI has great benefit in imaging the brain, spine, and musculoskeletal system. The use of MRI is currently contraindicated for patients with pacemakers, cochlear implants, some indwelling medication pumps, certain types of cerebral aneurysm clips, metal fragments in the eyes and some metallic hardware, due to the powerful magnetic fields and strong fluctuating radio signals to which the body is exposed. Areas of potential advancement include functional imaging, cardiovascular MRI, and MRI-guided therapy.
Nuclear medicine
Nuclear medicine imaging involves the administration into the patient of radiopharmaceuticals consisting of substances with affinity for certain body tissues labeled with radioactive tracer. The most commonly used tracers are technetium-99m, iodine-123, iodine-131, gallium-67, indium-111, thallium-201 and fludeoxyglucose (18F) (18F-FDG). The heart, lungs, thyroid, liver, brain, gallbladder, and bones are commonly evaluated for particular conditions using these techniques. While anatomical detail is limited in these studies, nuclear medicine is useful in displaying physiological function. The excretory function of the kidneys, the iodine-concentrating ability of the thyroid, blood flow to heart muscle, etc. can be measured. The principal imaging devices are the gamma camera and the PET scanner, which detect the radiation emitted by the tracer in the body and display it as an image. With computer processing, the information can be displayed as axial, coronal and sagittal images (single-photon emission computed tomography - SPECT, or positron-emission tomography - PET). In the most modern devices, nuclear medicine images can be fused with a CT scan taken quasi-simultaneously, so the physiological information can be overlaid or coregistered with the anatomical structures to improve diagnostic accuracy.
Positron emission tomography (PET) scanning deals with positrons instead of the gamma rays detected by gamma cameras. The positrons annihilate to produce two gamma rays traveling in opposite directions, which are detected coincidentally, thus improving resolution. In PET scanning, a radioactive, biologically active substance, most often 18F-FDG, is injected into a patient and the radiation emitted by the patient is detected to produce multiplanar images of the body. Metabolically more active tissues, such as cancer, concentrate the active substance more than normal tissues. PET images can be combined (or "fused") with anatomic (CT) imaging, to more accurately localize PET findings and thereby improve diagnostic accuracy. The fusion technology has gone further, combining PET and MRI in the same way as PET and CT.
PET/MRI fusion, largely practiced in academic and research settings, could potentially play a crucial role in fine-detail brain imaging, breast cancer screening, and small-joint imaging of the foot. The technology recently blossomed after passing the technical hurdle of altered positron movement in a strong magnetic field, which had affected the resolution of PET images and their attenuation correction.
Interventional radiology
Interventional radiology (IR, or sometimes VIR for vascular and interventional radiology) is a subspecialty of radiology in which minimally invasive procedures are performed using image guidance. Some of these procedures are done for purely diagnostic purposes (e.g., angiogram), while others are done for treatment purposes (e.g., angioplasty). The basic concept behind interventional radiology is to diagnose or treat pathologies with the most minimally invasive technique possible. Minimally invasive procedures are currently performed more than ever before. These procedures are often performed with the patient fully awake, with little or no sedation required. Interventional radiologists and interventional radiographers diagnose and treat several disorders, including peripheral vascular disease, renal artery stenosis, inferior vena cava filter placement, gastrostomy tube placements, biliary stents and hepatic interventions. Radiographic images, fluoroscopy, and ultrasound modalities are used for guidance, and the primary instruments used during the procedure are specialized needles and catheters. The images provide maps that allow the clinician to guide these instruments through the body to the areas containing disease. By minimizing the physical trauma to the patient, peripheral interventions can reduce infection rates and recovery times, as well as hospital stays. To be a trained interventionalist in the United States, an individual completes a five-year residency in radiology and a one- or two-year fellowship in IR.
Analysis of images
Plain, or general, radiography
The basic technique is optical density evaluation (i.e. histogram analysis): a region is described as having a different optical density; for example, a cancer metastasis to bone can cause radiolucency. A development of this technique is digital radiological subtraction. It consists in overlapping two radiographs of the same examined region and subtracting the optical densities (see, e.g., "Comparison of changes in dental and bone radiographic densities in the presence of different soft-tissue simulators using pixel intensity and digital subtraction analyses"). The resultant image only contains the time-dependent differences between the two examined radiographs. The advantage of this technique is the precise determination of the dynamics of density changes and the place of their occurrence. However, geometrical adjustment and general alignment of optical density should be done beforehand (see "Noise in subtraction images made from pairs of intraoral radiographs: a comparison between four methods of geometric alignment"). Another possibility of radiographic image analysis is to study second-order features, e.g. digital texture analysis (see "Textural entropy as a potential feature for quantitative assessment of jaw bone healing process" and "Comparative Analysis of Three Bone Substitute Materials Based on Co-Occurrence Matrix") or fractal dimension (see "Using fractal dimension to evaluate alveolar bone defects treated with various bone substitute materials"). A simplified sketch of subtraction and texture analysis follows below.
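The code below assumes the two radiographs are already geometrically registered arrays of equal shape (the alignment step the cited work emphasizes); all function and variable names are illustrative, not taken from any radiology package.

# A simplified sketch of digital subtraction and one second-order
# texture feature (co-occurrence-matrix entropy); synthetic data only.
import numpy as np

def subtract_radiographs(follow_up, baseline):
    # Crude density alignment: match mean and spread before subtracting,
    # so only time-dependent differences remain.
    f = (follow_up - follow_up.mean()) / (follow_up.std() + 1e-9)
    b = (baseline - baseline.mean()) / (baseline.std() + 1e-9)
    return f - b

def texture_entropy(roi, levels=16):
    # Entropy of the grey-level co-occurrence matrix (GLCM) computed
    # for horizontally adjacent pixel pairs within a region of interest.
    q = np.floor(roi / roi.max() * (levels - 1)).astype(int)
    glcm = np.zeros((levels, levels))
    for left, right in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[left, right] += 1
    p = glcm / glcm.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

rng = np.random.default_rng(0)
baseline = rng.integers(60, 200, size=(64, 64)).astype(float)
follow_up = baseline.copy()
follow_up[20:30, 20:30] += 40.0          # a simulated density change
diff = subtract_radiographs(follow_up, baseline)
print(abs(diff[22, 22]) > abs(diff[50, 50]))   # True: change localized
print(round(texture_entropy(baseline[0:32, 0:32]), 2))

Here the subtraction image is near zero wherever nothing changed and large in the simulated region of density change, while the co-occurrence-matrix entropy is one example of the second-order texture features mentioned above.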
On this basis, it is possible to assess the places where biomaterials are implanted into the bone for the purpose of guided bone regeneration. An intact bone image sample (region of interest, ROI, reference site) and a sample of the implantation site (second ROI, test site) can be assessed numerically and objectively, to determine to what extent the implantation site imitates healthy bone and how advanced the process of bone regeneration is (see "Fast- Versus Slow-Resorbable Calcium Phosphate Bone Substitute Materials—Texture Analysis after 12 Months of Observation" and "New Oral Surgery Materials for Bone Reconstruction—A Comparison of Five Bone Substitute Materials for Dentoalveolar Augmentation"). It is also possible to check whether the bone healing process is influenced by some systemic factors (see "Influence of General Mineral Condition on Collagen-Guided Alveolar Crest Augmentation").
Teleradiology
Teleradiology is the transmission of radiographic images from one location to another for interpretation by an appropriately trained professional, usually a radiologist or reporting radiographer. It is most often used to allow rapid interpretation of emergency room, ICU and other emergent examinations after hours of usual operation, at night and on weekends. In these cases, the images can be sent across time zones (e.g. to Spain, Australia, India) with the receiving clinician working their normal daylight hours. However, at present, large private teleradiology companies in the U.S. provide most after-hours coverage, employing night-working radiologists in the U.S. Teleradiology can also be used to obtain consultation with an expert or subspecialist about a complicated or puzzling case. In the U.S., many hospitals outsource their radiology departments to radiologists in India due to the lowered cost and availability of high-speed internet access.
Teleradiology requires a sending station, a high-speed internet connection, and a high-quality receiving station. At the transmission station, plain radiographs are passed through a digitizing machine before transmission, while CT, MRI, ultrasound and nuclear medicine scans can be sent directly, as they are already digital data. The computer at the receiving end will need to have a high-quality display screen that has been tested and cleared for clinical purposes. Reports are then transmitted to the requesting clinician.
The major advantage of teleradiology is the ability to use different time zones to provide real-time emergency radiology services around the clock. The disadvantages include higher costs, limited contact between the referrer and the reporting clinician, and the inability to cover for procedures requiring an onsite reporting clinician. Laws and regulations concerning the use of teleradiology vary among the states, with some requiring a license to practice medicine in the state sending the radiologic exam. In the U.S., some states require the teleradiology report to be preliminary, with the official report issued by a hospital staff radiologist. Lastly, a further benefit of teleradiology is that it might be automated with modern machine learning techniques.
Patient interaction
Some radiologists, like teleradiologists, have no interaction with patients. Other radiologists, like interventional radiologists, primarily interact with patients and spend less time analyzing images. Diagnostic radiologists tend to spend the majority of their time analyzing images and a minority of their time interacting with patients.
Compared to the healthcare provider who sends the patient to have images interpreted by a diagnostic radiologist, the radiologist usually does not know as much about the patient's clinical status or have as much influence on what action should be taken based on the images. Thus, the diagnostic radiologist reports image findings directly to that healthcare provider, often together with recommendations, and the provider then takes the appropriate next steps in the patient's medical management. Because radiologists undergo training regarding the risks associated with different types of imaging tests and image-guided procedures, it is generally the radiologist, not the healthcare provider requesting the test or procedure, who educates patients about those risks to enable informed consent.
Professional training
United States
Radiology is a field in medicine that has expanded rapidly after 2000 due to advances in computer technology, which is closely linked to modern imaging techniques. Applying for residency positions in radiology has become highly competitive. Applicants are often near the top of their medical school classes, with high USMLE (board) examination scores. Diagnostic radiologists must complete prerequisite undergraduate education, four years of medical school to earn a medical degree (D.O. or M.D.), one year of internship, and four years of residency training. After residency, most radiologists pursue one or two years of additional specialty fellowship training.
The American Board of Radiology (ABR) administers professional certification in Diagnostic Radiology, Radiation Oncology and Medical Physics, as well as subspecialty certification in neuroradiology, nuclear radiology, pediatric radiology, and vascular and interventional radiology. "Board Certification" in diagnostic radiology requires successful completion of two examinations. The Core Exam is given after 36 months of residency. Although previously taken in Chicago or Tucson, Arizona, beginning in February 2021 the computer test transitioned permanently to a remote format. It encompasses 18 categories. A passing score is 350 or above. A fail on one to five categories previously resulted in a "Conditioned" exam; however, beginning in June 2021, the conditioned category no longer exists and the test is graded as a whole. The Certification Exam can be taken 15 months after completion of the radiology residency. This computer-based examination consists of five modules and is graded pass-fail. It is given twice a year in Chicago and Tucson. Recertification examinations are taken every 10 years, with additional required continuing medical education as outlined in the Maintenance of Certification document. Certification may also be obtained from the American Osteopathic Board of Radiology (AOBR) and the American Board of Physician Specialties.
Following completion of residency training, radiologists may either begin practicing as a general diagnostic radiologist or enter into subspecialty training programs known as fellowships. Examples of subspecialty training in radiology include abdominal imaging, thoracic imaging, cross-sectional/ultrasound, MRI, musculoskeletal imaging, interventional radiology, neuroradiology, interventional neuroradiology, paediatric radiology, nuclear medicine, emergency radiology, breast imaging and women's imaging. Fellowship training programs in radiology are usually one or two years in length.
Some medical schools in the US have started to incorporate a basic radiology introduction into their core MD training. New York Medical College, the Wayne State University School of Medicine, Weill Cornell Medicine, the Uniformed Services University, and the University of South Carolina School of Medicine offer an introduction to radiology during their respective MD programs. Campbell University School of Osteopathic Medicine also integrates imaging material into their curriculum early in the first year.
Radiographic exams are usually performed by radiographers. Qualifications for radiographers vary by country, but many radiographers now are required to hold a degree.
Veterinary radiologists are veterinarians who specialize in the use of X-rays, ultrasound, MRI and nuclear medicine for diagnostic imaging or treatment of disease in animals. They are certified in either diagnostic radiology or radiation oncology by the American College of Veterinary Radiology.
United Kingdom
Radiology is an extremely competitive speciality in the UK, attracting applicants from a broad range of backgrounds. Applicants are welcomed directly from the Foundation Programme, as well as those who have completed higher training. Recruitment and selection into clinical radiology training posts in England, Scotland and Wales is done by an annual nationally coordinated process lasting from November to March. In this process, all applicants are required to pass a Specialty Recruitment Assessment (SRA) test. Those with a test score above a certain threshold are offered a single interview at the London and the South East Recruitment Office. At a later stage, applicants declare which programmes they prefer, but may in some cases be placed in a neighbouring region. The training programme lasts for a total of five years. During this time, doctors rotate through different subspecialities, such as paediatrics, musculoskeletal or neuroradiology, and breast imaging. During the first year of training, radiology trainees are expected to pass the first part of the Fellowship of the Royal College of Radiologists (FRCR) exam. This comprises a medical physics and anatomy examination. Following completion of their part 1 exam, they are then required to pass six written exams (part 2A), which cover all the subspecialities. Successful completion of these allows them to complete the FRCR by completing part 2B, which includes rapid reporting and a long case discussion. After achieving a certificate of completion of training (CCT), many fellowship posts exist in specialities such as neurointervention and vascular intervention, which allow the doctor to work as an interventional radiologist. In some cases, the CCT date can be deferred by a year to include these fellowship programmes.
UK radiology registrars are represented by the Society of Radiologists in Training (SRT), which was founded in 1993 under the auspices of the Royal College of Radiologists. The society is a nonprofit organisation, run by radiology registrars specifically to promote radiology training and education in the UK. Annual meetings are held, which trainees across the country are encouraged to attend. Currently, a shortage of radiologists in the UK has created opportunities in all specialities, and with the increased reliance on imaging, demand is expected to increase in the future. Radiographers, and less frequently nurses, are often trained to undertake many of these opportunities in order to help meet demand.
Radiographers may often control a "list" of a particular set of procedures after being approved locally and signed off by a consultant radiologist. Similarly, radiographers may simply operate a list for a radiologist or other physician on their behalf. Most often, if a radiographer operates a list autonomously, they are acting as the operator and practitioner under the Ionising Radiation (Medical Exposures) Regulations 2000. Radiographers are represented by a variety of bodies; most often this is the Society and College of Radiographers. Collaboration with nurses is also common, where a list may be jointly organised between the nurse and radiographer.
Germany
After obtaining medical licensure, German radiologists complete a five-year residency, culminating with a board examination (known as Facharztprüfung).
Italy
Italian radiologists complete a four-year residency program, after completing the six-year MD program.
The Netherlands
Dutch radiologists complete a five-year residency program, after completing the six-year MD program.
India
In India, one must first obtain a bachelor's degree, which requires 4.5 years of training along with a 1-year internship, followed by the NEET PG examination, one of the hardest examinations in India. Previous rank data show that only top rankers take radiology, which means that with a lower score one might be accepted into other branches, but not radiology. The radiology program is a post-graduate 3-year program (MD/DNB Radiology) or a 2-year diploma (DMRD).
Singapore
Radiologists in Singapore complete a five-year undergraduate MD program, followed by a one-year internship, and then a five-year residency program. Some radiologists may elect to complete a one- or two-year fellowship for further sub-specialization in fields such as interventional radiology.
Slovenia
After finishing a six-year study of medicine and passing the emergency medicine internship, MDs can apply for radiology residency. Radiology is a five-year post-graduate program that involves all fields of radiology, with a final board exam.
France
To become a radiologist, after having validated the common core of medical studies, one must obtain a DES (Specialized Studies Diploma) in radiology and medical imaging (five years of specialized studies), or a DES in advanced interventional radiology (six years of specialized studies). At the end of the DES, once validated, the future doctor has to defend a "practice thesis" in order to validate the DE (State Diploma) as a doctor of medicine (common to all doctors of medicine) and to be able to practice in France.
Specialty training for interventional radiology
Training for interventional radiology occurs in the residency portion of medical education and has evolved over time. In 2000, the Society of Interventional Radiology (SIR) created a program named "Clinical Pathway in IR", which modified the "Holman Pathway" that was already accepted by the American Board of Radiology to include training in IR; this was accepted by the ABR but was not widely adopted. In 2005, SIR proposed and the ABR accepted another pathway, called the "DIRECT (Diagnostic and Interventional Radiology Enhanced Clinical Training) Pathway", to help trainees coming from other specialities learn IR; this too was not widely adopted.
In 2006, SIR proposed a pathway resulting in certification in IR as a specialty; this was eventually accepted by the ABR in 2007 and was presented to the American Board of Medical Specialties (ABMS) in 2009, which rejected it because it did not include enough diagnostic radiology (DR) training. The proposal was reworked, at the same time that overall DR training was being revamped, and a new proposal that would lead to a dual DR/IR specialization was presented to the ABMS; it was accepted in 2012 and eventually implemented in 2014. By 2016 the field had determined that the old IR fellowships would be terminated by 2020. A handful of programs have offered interventional radiology fellowships that focus on training in the treatment of children.
In Europe the field followed its own pathway; for example, in Germany the parallel interventional society began to break free of the DR society in 2008. In the UK, interventional radiology was approved as a sub-specialty of clinical radiology in 2010. While many countries have an interventional radiology society, there is also the Europe-wide Cardiovascular and Interventional Radiological Society of Europe, whose aim is to support teaching, science, research and clinical practice in the field by hosting meetings and educational workshops and promoting patient safety initiatives. Furthermore, the Society provides an examination, the European Board of Interventional Radiology (EBIR), which is a highly valuable qualification in interventional radiology based on the European Curriculum and Syllabus for IR.
See also
Digital mammography: use of a computer to produce images of the breast
Global radiology: improving access to radiology resources in poor and developing countries
Medical radiography: the use of ionizing electromagnetic radiation, such as X-rays, in medicine
Radiation protection: the science of preventing people and the environment from suffering harmful effects from ionizing radiation
Radiologists Without Borders
Radiosensitivity: measure of the susceptibility of organic tissues to the harmful effects of radiation
X-ray image intensifier: equipment that uses x-rays to produce an image feed displayed on a TV screen
International Day of Radiology: an awareness day for medical imaging
Electrogram
References
External links
Medical imaging
Medical physics
Radiology
[ "Physics" ]
6,611
[ "Applied and interdisciplinary physics", "Medical physics" ]
152,643
https://en.wikipedia.org/wiki/Hand%20fan
A handheld fan, or simply hand fan, is a broad, flat surface that is waved back and forth to create an airflow. Generally, purpose-made handheld fans are folding fans, which are shaped like a sector of a circle and made of a thin material (such as paper or feathers) mounted on slats which revolve around a pivot so that the fan can be closed when not in use. Hand fans were used before mechanical fans were invented. Fans work by exploiting basic thermodynamics. On human skin, the airflow from a hand fan increases the evaporation rate of sweat, lowering body temperature due to the latent heat of evaporation of water. It also increases heat convection by displacing the warmer air produced by body heat that surrounds the skin, which has an additional cooling effect, provided that the ambient air temperature is lower than the skin temperature, which is typically about 33 °C (91 °F). Next to the folding fan, the rigid hand screen fan was also a highly decorative and desired object among the higher social classes. They served a different purpose from the lighter, easier-to-carry hand fans: hand screen fans were mostly used to shield a lady's face against the glare of the sun or fire. History Africa Hand fans originated about 4000 years ago in Egypt. Egyptians viewed them as sacred objects, and the tomb of Tutankhamun contained two elaborate hand fans. Ancient Europe Archaeological ruins and ancient texts show that the hand fan was used in ancient Greece at least from the 4th century BC, where it was known as a rhipis (ῥιπίς). Fans were also used to keep flies away (like a fly-flapper); this kind of fan was less stiff and was named μυιοσόβη. Another use for a fan was to fan a flame, e.g. in cookery or at the altar. Christian Europe's earliest known fan was the flabellum (ceremonial fan), which dates from the 6th century. It was used during services to drive insects away from the consecrated bread and wine. Its use died out in western Europe, but continues in the Eastern Orthodox and Ethiopian Churches. East Asia China There were many kinds of fans in ancient China. The Chinese character for "fan" (扇) is etymologically composed of the characters for "door" (戶) and "feather" (羽). Historically, fans have played an important role in the life of the Chinese people. The Chinese have used hand-held fans as a way to relieve themselves during hot days since ancient times; the fans are also an embodiment of the wisdom of Chinese culture and art. They were also used for ceremonial and ritual purposes and as a sartorial accessory when wearing hanfu. They were also carriers of Chinese traditional arts and literature and were representative of their user's personal aesthetic sense and social status. Specific concepts of status and gender were associated with types of fans in Chinese history, but generally folding fans were reserved for males while rigid fans were for females. In ancient China, fans came in various shapes and forms (such as in a leaf, oval or a half-moon shape), and were made of different materials such as silk, bamboo, and feathers. So far, the earliest fans that have been found date to the Spring and Autumn and Warring States periods. It was suggested by the Cultural Relics Archaeology Institute of Hubei Province that these fans were made of either bamboo or feathers and were oftentimes used as burial objects in the State of Chu. The oldest existing Chinese fans are a pair of woven bamboo, wood or paper side-mounted fans from the 2nd century BC. 
The Chinese form of the feather fan, known as the yushan (羽扇), was a row of feathers mounted in the end of a handle. The art of fan making eventually progressed to the point that, by the Jin dynasty, fans could come in different shapes and be made of different materials. The sale of hexagonal-shaped fans was also recorded in the Book of Jin. In later centuries, Chinese poems and four-word idioms were used to decorate fans, written with Chinese calligraphy pens. The Chinese dancing fan was developed in the 7th century. The most ancient ritual Chinese fan, believed to have been invented by Emperor Shun, is characterized by a long handle and a fan that looks like a door in shape. This type of fan was used for ceremonial purposes. While its shape evolved throughout the millennia, it remained in use as a symbol of imperial power and authority; it continued to be used until the fall of the Qing dynasty. Silk round fans are called tuanshan (團扇), also known as "fans of reunion"; they are a type of "rigid fan". These fans were mostly used by women in the Tang dynasty and were later introduced into Japan. Round fans remained mainstream even after the growing popularity of folding fans. Round fans with Chinese paintings and with calligraphy became very popular in the Song dynasty. During the Song dynasty, famous artists were often commissioned to paint fans. Lacquer fans were also one of the unique handcrafts of the Song dynasty. Chinese brides also used a type of moon-shaped round fan in the traditional Chinese wedding rite called queshan (卻扇, "putting aside the fan"). The rite of queshan was an important ceremony in a Chinese wedding: the bride would hold the fan in front of her face to hide her shyness, to remain mysterious, and as a way to exorcise evil spirits. After all the other wedding ceremonies were completed and after the groom had impressed the bride, the bride would then proceed to reveal her face to the groom by removing the fan. Another popular type of Chinese fan was the palm-leaf fan, which was made of the leaves and stalks of the Chinese fan palm (Livistona chinensis). The folding fan (摺扇), invented in Japan, was later introduced to China in the 10th century. In 988 AD, folding fans were first introduced into China by a Japanese monk as a tribute during the Northern Song dynasty; these folding fans became very fashionable in China by the Southern Song dynasty. The folding fans were referred to as "Japanese fans" by the Chinese. While the folding fans gained popularity, the traditional silk round fans continued to remain mainstream in the Song dynasty. The folding fan later became very fashionable in the Ming dynasty, although it had initially met with resistance because it was believed to be intended for lower-class people and servants. The Chinese also innovated on the design of the folding fan, creating a variant known as the "broken fan". Foreign export From the late 18th century until 1845, trade between America and China flourished. During this period, Chinese fans reached the peak of their popularity in America; popular fans among American women included fans made of palm leaf, feather, and paper. The most popular type during this period appears to have been the palm-leaf fan. The custom of using fans among the American middle class and among ladies was attributed to this Chinese influence. Japan In ancient Japan, hand fans, such as oval and silk fans, were greatly influenced by Chinese fans. 
The earliest visual depiction of fans in Japan dates back to the 6th century AD, with tomb paintings showing drawings of fans. The folding fan was invented in Japan, with dates ranging from the 6th to 9th centuries; it was a court fan called the akomeōgi, after the court women's dress named akome. According to the Song Shi (History of Song), a Japanese monk offered folding fans (twenty wooden-bladed fans and two paper fans) to the emperor of China in 988. Later, in the 11th century, Korean envoys brought along Korean folding fans, which were of Japanese origin, as gifts to the Chinese court. The popularity of folding fans was such that sumptuary laws were passed during the Heian period restricting the decoration of both hiōgi and paper folding fans. The earliest fans in Japan were made by tying thin strips of hinoki (Japanese cypress) together with thread. The number of strips of wood differed according to the person's rank. Later, in the 16th century, Portuguese traders introduced the folding fan to the West, and soon both men and women throughout the continent adopted it. Such fans are used today by Shinto priests in formal costume and in the formal costume of the Japanese court (they can be seen used by the Emperor and Empress during enthronement and marriage) and are brightly painted with long tassels. Simple Japanese paper fans are sometimes known as harisen. Printed fan leaves and painted fans are done on a paper ground. The paper was originally handmade and displayed characteristic watermarks. Machine-made paper fans, introduced in the 19th century, are smoother, with an even texture. Even today, geisha and maiko use folding fans in their fan dances. Japanese fans are made of paper on a bamboo frame, usually with a design painted on them. In addition to folding fans (ōgi), non-bending fans (uchiwa) are popular and commonplace. The fan is primarily used for fanning oneself in hot weather. The fan subsequently spread to other parts of Asia, including Burma, Thailand, Cambodia and Sri Lanka, and such fans are still used by Buddhist monks as "ceremonial fans". Fans were also used in the military as a way of sending signals on the field of battle. However, fans were mainly used for social and court activities. In Japan, fans were variously used by warriors as a form of weapon, by actors and dancers for performances, and by children as a toy. Traditionally, the rigid fan (also called the fixed fan) was the most popular form in China, although the folding fan came into popularity during the Ming dynasty, between 1368 and 1644, and many beautiful examples of these folding fans still remain. The mai ōgi (or Japanese dancing fan) has ten sticks and a thick paper mount showing the family crest, and Japanese painters made a large variety of designs and patterns. The slats, of ivory, bone, mica, mother of pearl, sandalwood, or tortoise shell, were carved and covered with paper or fabric. Folding fans have "montures", which are the sticks and guards, and the leaves were usually painted by craftsmen. Social significance was attached to the fan in the Far East as well, and the management of the fan became a highly regarded feminine art. Fans were even used as a weapon – the iron fan, or tessen in Japanese. See also the gunbai, a military leader's fan in old Japan, used in the modern day as the umpire's fan in sumo wrestling; it is a type of Japanese war fan. Korea Every Dano (the fifth day of the fifth month of the lunar calendar), when the heat began, there was a custom in which the king distributed hand fans to his vassals. 
A vassal who received a hand fan from the king would decorate it with an ink-and-wash painting and hand out white fans to his elders and to people to whom he was indebted, which made the practice of exchanging hand fans widely popular. These cultural factors also contributed to the creation of various types of hand fan in Korea. Vietnam The hand fan (quạt) is an integral part of Vietnamese culture. According to the Vân Đài Loại Ngữ, a book written by Lê Quý Đôn, in the old times Vietnamese people used hand fans made from bird feathers, as well as a type of fan made from the leaves of the taraw palm tree. Folding fans only started appearing in Vietnam in the 10th century. The Christian missionary Christoforo Borri recorded that in 1621, both Vietnamese men and women frequently held hand fans as part of their daily dress. Many villages in Vietnam, such as Canh Hoạch village and Đào Xá village, have long-standing traditions of making exquisite hand fans, with fan-making dating back to the early 19th century. Simple handheld fans, such as the quạt mo, are commonly found in the Vietnamese countryside and are popularly used among farmers and working people. The quạt mo has the simplest design, cut directly from the dried areca leaf stem and then pressed flat. It appears in "Thằng Bờm", a well-known Vietnamese ca dao (a type of Vietnamese folk song). Another simple design is made by sewing a half-moon-shaped Maclurochloa leaf onto a straight bamboo stick. Re-introduction in Europe Hand fans were absent from Europe during the High Middle Ages until they were reintroduced in the 13th and 14th centuries. Fans from the Middle East were brought back by Crusaders and by refugees from the Byzantine Empire. In the 15th and early 16th centuries, Chinese folding fans were introduced into Europe, and they later played an important role in European social circles in the 18th century. Portuguese traders first opened up the sea route to China in the 15th century and reached Japan in the mid-16th century; they appear to have been the first to introduce Oriental (Chinese and Japanese) fans into Europe, which led to the fans' popularity and to increased imports of Oriental fans into Europe. The fan became especially popular in Spain, where flamenco dancers used the fan and extended its use to the nobility. European fan-makers have since introduced more modern designs and have enabled the hand fan to work with modern fashion. 17th century In the 17th century the folding fan, and its attendant semiotic culture, were introduced from China and Japan. By the end of the 17th century, Chinese folding fans were being imported into Europe in enormous numbers owing to their popularity; to a lesser extent, Japanese folding fans were also reaching Europe by that period. These fans are particularly well displayed in the portraits of the high-born women of the era. Queen Elizabeth I of England can be seen carrying both folding fans decorated with pom-poms on their guardsticks and the older-style rigid fan, usually decorated with feathers and jewels. These rigid fans often hung from ladies' skirts, but of the fans of this era only the more exotic folding ones have survived. Those folding fans of the 15th century found in museums today have either leather leaves with cut-out designs forming a lace-like pattern or a more rigid leaf with inlays of more exotic materials like mica. 
One of the characteristics of these fans is the rather crude bone or ivory sticks and the way the leather leaves are often slotted onto the sticks rather than glued, as with later folding fans. Fans made entirely of decorated sticks, without a fan "leaf", were known as brisé fans; this type of fan originated in China. However, despite the relatively crude methods of construction, folding fans were in this era high-status, exotic items on a par with elaborate gloves as gifts to royalty. In the 17th century the rigid fan seen in portraits of the previous century fell out of favour as folding fans gained dominance in Europe. Fans started to display well-painted leaves, often with a religious or classical subject. The reverse side of these early fans also started to display elaborate flower designs. The sticks are often plain ivory or tortoiseshell, sometimes inlaid with gold or silver piqué work. The way the sticks sit close to each other, often with little or no space between them, is one of the distinguishing characteristics of fans of this era. In 1685 the Edict of Nantes was revoked in France. This caused large-scale emigration of many fan craftsmen from France to the surrounding Protestant countries (such as England). This dispersion of skill is reflected in the growing quality of many fans from these non-French countries after this date. 18th century In the 18th century, fans reached a high degree of artistry and were being made throughout Europe, often by specialized craftsmen working on either leaves or sticks. Folded fans of silk or parchment were decorated and painted by artists. Fans were also imported from China by the East India Companies at this time. Around the middle of the 18th century, inventors started designing mechanical fans. Wind-up fans (similar to wind-up clocks) were popular in the 18th century. 19th century In the 19th century in the West, European fashion caused fan decoration and size to vary. It has been said that in the courts of England, Spain and elsewhere, fans were used in a more or less secret, unspoken code of messages. These fan languages were a way to cope with restricting social etiquette. However, modern research has proved that this was a marketing ploy developed in the 19th century – one that has kept its appeal remarkably over the succeeding centuries. This is still used for marketing by fan makers like Cussons & Sons & Co. Ltd, who produced a series of advertisements in 1954 showing "the language of the fan" with fans supplied by the well-known French fan maker Duvelleroy. The rigid or screen fan also became fashionable during the 18th and 19th centuries. It never reached the same level of popularity as the easy-to-carry folding fan, which became almost an integral part of women's dress. The screen fan was mainly used inside the house. In 18th- and 19th-century paintings of interiors one sometimes sees one lying on a chimney mantel. They were mainly used to protect a woman's face against the glare and heat of the fire, and to avoid getting ruddy cheeks from the heat. Not least, the screen fan served to keep the heat from spoiling carefully applied make-up, which in those days was often wax-based. Until the 20th century houses were heated by open fires in chimneys or by stoves, and the lack of insulation made many a house very draughty and cold during winter. Therefore, any social or family gathering would take place in close proximity to the fireplace. 
The design of the screen fan is a fixed handle, most often made of exquisitely turned (painted or gilded) wood, fixed to a flat screen. The screen could be made of silk stretched on a frame, or of thin wood, leather or papier-mâché. The surface was often exquisitely painted, with subjects ranging from flowers and birds of paradise to religious scenes. At the end of the 19th century screen fans disappeared, when the need for them ceased to exist. During the 19th century, firms like the Birmingham-based Jennens and Bettridge produced many papier-mâché fans. Modern day Modern-day hand fans are less popular than in the past, but are still used by many. Drag subculture A large group that continues to use folding hand fans for cultural and fashion purposes are drag queens. Stemming from ideas of imitating and appropriating cultural notions of excess, wealth, status and elegance, large folding hand fans, sometimes of considerable radius, are used to punctuate speech, as part of performances, or as accessories to an outfit. Fans may have phrases taken from the lexicon of drag and LGBTQ+ culture written on them, and may be decorated in other ways, such as the addition of sequins or tassels. Folding fans are often used to emphasize a point in a person's speech, rather than expressly for fanning oneself. A person might harshly snap open the fan when "throwing shade" at (comically insulting) another person, creating a loud snapping noise that punctuates the insult. Drag dance numbers also utilise larger hand fans to add flair and as a prop, used to emphasise movements in the dance. The popular drag comedy webshow UNHhhh has used folding fans as a point of humour, with the sound made by a folding fan unfolding rendered onomatopoeically as a "thworp" by the editors. Categories Hand fans have three general categories: Fixed (or rigid, flat) fans (Japanese: uchiwa, 団扇): circular fans, palm-leaf fans, straw fans, feather fans Folding fans (Chinese: zheshan, 摺扇; Japanese: sensu, 扇子): silk folding fans, paper folding fans, sandalwood fans Modern powered mechanical hand fans: hand fans in the form of miniature mechanical rotating fans with blades. These are usually axial fans, and often use blades made from a soft material for safety. They are usually battery operated, but can be hand cranked as well. Gallery See also Church fan – Fans used in churches in the United States Use in fashion Abaniko Use in dance – Korean fan dance Cariñosa – national dance of the Philippines Singkil – traditional Maranao dance from the Philippines Pagapir – a traditional fan dance in Mindanao, Philippines – traditional dance originating from Japan Use as weapons Princess Iron Fan Japanese war fan Korean fighting fan Use in comedy Use in politics Islami Andolan Bangladesh – an Islamic political party in Bangladesh that uses a hand fan as its electoral symbol Museums Musée de l'Éventail (Paris) The Fan Museum in Greenwich (Greenwich, London) The Hand Fan Museum in Healdsburg, California References Sources Nussbaum, Louis Frédéric and Käthe Roth. (2005). Japan Encyclopedia. Cambridge: Harvard University Press. ; OCLC 48943301 Books Alexander, Helene. The Fan Museum, Third Millennium Publishing, 2001 Alexander, Helene & Hovinga-Van Eijsden, Fransje. A Touch of Dutch - Fans from the Royal House of Orange-Nassau, The Fan Museum, February 2008, Armstrong, Nancy. Book of Fans. Smithmark Publishing, 1984. Armstrong, Nancy. Fans, Souvenir Press, 1984 Bennett, Anna G. 
Unfolding beauty: The art of the fan: the collection of Esther Oldham and the Museum of Fine Arts, Boston. Thames and Hudson (1988). Bennett, Anna G. & Berson, Ruth. Fans in fashion. Publisher - Charles E. Tuttle Co. Inc & The Fine Arts Museums of San Francisco (1981) Biger, Pierre-Henri. Sens et sujets de l'éventail européen de Louis XIV à Louis-Philippe. Art history thesis, Rennes 2 University, 2015. (https://tel.archives-ouvertes.fr/tel-01220297) Checcoli, Anna. "Il ventaglio e i suoi segreti", Tassinari, 2009 Checcoli, Anna. "Ventagli Cinesi Giapponesi ed Orientali", Tassinari, 2009 Cowen, Pamela. A Fanfare for the Sun King: Unfolding Fans for Louis XIV, Third Millennium Publishing (September 2003) Das, Justin. Pankha - Traditional crafted hand fans of the Indian Subcontinent from the collection of Justin Das - The Fan Museum, Greenwich (2004) Faulkner, Rupert. Hiroshige Fan Prints, V&A Publications, 2001 Fendel, Cynthia. Novelty Hand Fans, Fashionable Functional Fun Accessories of the Past. Hand Fan Productions, 2006 Fendel, Cynthia. Celluloid Hand Fans. Hand Fan Productions, 2001. Gitter, Kurt A. Japanese fan paintings from western collections. Publisher - New Orleans Museum of Art (1985). Hart, Avril & Taylor, Emma. Fans (V & A Fashion Accessories Series). Publisher - V & A Publications. Hutt, Julia & Alexander, Helene. Ogi: A History of the Japanese Fan. Art Media Resources; Bilingual edition (February 1, 1992) Irons, Neville John. Fans of Imperial China. Kaiserreich Kunst Ltd, 1982 Letourmy-Bordier, Georgina & Le Guen, Sylvain. L'éventail, matières d'excellence : La nature sublimée par les mains de l'artisan, Musée de la Nacre et de la Tabletterie (September 2015) Mayor, Susan. A Collectors Guide to Fans, Charles Letts, 1990 Mayor, Susan. The Letts Guide to Collecting Fans. Charles Letts, 1991 North, Audrey. Australia's fan heritage. Boolarong Publications (1985). Qian, Gonglin. Chinese Fans: Artistry and Aesthetics (Arts of China, #2). Long River Press (August 31, 2004) Rhead, G. Wooliscroft. The History of the Fan, Kegan Paul, 1910 Roberts, Jane. Unfolding Pictures: Fans in the Royal Collection. Publisher - Royal Collection (January 30, 2006). Tam, C.S. Fan Paintings by Late Ch'ing Shanghai Masters. Publisher - Urban Council for an exhibition in the Hong Kong Museum of Art (1977) Vannotti, Franco. Peinture Chinoise de la Dynastie Ts'ing (1644–1912). Collections Baur, Geneve (1974) External links A visual guide to Victorian fan language, photos by Fabio and Gabrielle Arciniegas mm's fan collection with monographies on love symbols on fans, celluloid fans, George Barbier and more Hand fan collection Anna Checcoli All About Hand Fans with Cynthia Fendel Hand Fan Museum The Fan Circle International Tessen warrior fan The Fan Museum in Greenwich, London Fan Association of North America Fans in the Staten Island Historical Society Online Collections Database La Place de l'Eventail (French website dedicated to the European hand fan; most pages in English) Galerie Le Curieux, Paris Fans in the 16th and 17th Centuries Variety of Hand Held fans in different colour and styles Maison Sylvain Le Guen - contemporary hand fans by Sylvain Le Guen Allhandfans - Site entirely dedicated to the hand fan Museu Tèxtil i d'Indumentària in Barcelona Articles containing video clips Ancient Egyptian technology Ancient Greek technology Chinese culture Chinese inventions Cooling technology Greek inventions Ventilation fans Fashion accessories Hand tools Culture of Japan Japanese inventions
Hand fan
[ "Engineering" ]
5,183
[ "Human–machine interaction", "Hand tools" ]
152,664
https://en.wikipedia.org/wiki/Loop%20quantum%20gravity
Loop quantum gravity (LQG) is a theory of quantum gravity that incorporates matter of the Standard Model into the framework established for the intrinsic quantum gravity case. It is an attempt to develop a quantum theory of gravity based directly on Albert Einstein's geometric formulation rather than on the treatment of gravity as a mysterious mechanism (force). As a theory, LQG postulates that the structure of space and time is composed of finite loops woven into an extremely fine fabric or network. These networks of loops are called spin networks. The evolution of a spin network, or spin foam, has a scale on the order of a Planck length, approximately 10⁻³⁵ meters, and smaller scales are meaningless. Consequently, not just matter, but space itself, has an atomic structure. The areas of research, which involve about 30 research groups worldwide, share the basic physical assumptions and the mathematical description of quantum space. Research has evolved in two directions: the more traditional canonical loop quantum gravity, and the newer covariant loop quantum gravity, called spin foam theory. The most well-developed theory that has been advanced as a direct result of loop quantum gravity is called loop quantum cosmology (LQC). LQC advances the study of the early universe, incorporating the concept of the Big Bang into the broader theory of the Big Bounce, which envisions the Big Bang as the beginning of a period of expansion that follows a period of contraction, often described as the Big Crunch. History In 1986, Abhay Ashtekar reformulated Einstein's general relativity in a language closer to that of the rest of fundamental physics, specifically Yang–Mills theory. Shortly after, Ted Jacobson and Lee Smolin realized that the formal equation of quantum gravity, called the Wheeler–DeWitt equation, admitted solutions labelled by loops when rewritten in the new Ashtekar variables. Carlo Rovelli and Smolin defined a nonperturbative and background-independent quantum theory of gravity in terms of these loop solutions. Jorge Pullin and Jerzy Lewandowski understood that the intersections of the loops are essential for the consistency of the theory, and that the theory should be formulated in terms of intersecting loops, or graphs. In 1994, Rovelli and Smolin showed that the quantum operators of the theory associated to area and volume have a discrete spectrum. That is, geometry is quantized. This result defines an explicit basis of states of quantum geometry, which turned out to be labelled by Roger Penrose's spin networks, which are graphs labelled by spins. The canonical version of the dynamics was established by Thomas Thiemann, who defined an anomaly-free Hamiltonian operator and showed the existence of a mathematically consistent background-independent theory. The covariant, or "spin foam", version of the dynamics was developed jointly over several decades by research groups in France, Canada, UK, Poland, and Germany. It was completed in 2008, leading to the definition of a family of transition amplitudes, which in the classical limit can be shown to be related to a family of truncations of general relativity. The finiteness of these amplitudes was proven in 2011. The proof requires the existence of a positive cosmological constant, which is consistent with the observed acceleration in the expansion of the Universe. Background independence LQG is formally background independent, meaning that the equations of LQG are not embedded in, or dependent on, space and time (except for its invariant topology). 
Instead, they are expected to give rise to space and time at distances which are 10 times the Planck length. The issue of background independence in LQG still has some unresolved subtleties. For example, some derivations require a fixed choice of the topology, while any consistent quantum theory of gravity should include topology change as a dynamical process. Spacetime as a "container" over which physics takes place has no objective physical meaning; instead, the gravitational interaction is represented as just one of the fields forming the world. This is known as the relationalist interpretation of spacetime. In LQG this aspect of general relativity is taken seriously, and this symmetry is preserved by requiring that the physical states remain invariant under the generators of diffeomorphisms. The interpretation of this condition is well understood for purely spatial diffeomorphisms. However, the understanding of diffeomorphisms involving time (the Hamiltonian constraint) is more subtle because it is related to dynamics and the so-called "problem of time" in general relativity. A generally accepted calculational framework to account for this constraint has yet to be found. A plausible candidate for the quantum Hamiltonian constraint is the operator introduced by Thiemann. Constraints and their Poisson bracket algebra Dirac observables The constraints define a constraint surface in the original phase space. The gauge motions of the constraints apply to all of phase space but have the feature that they leave the constraint surface where it is; thus the orbit of a point in the hypersurface under gauge transformations is an orbit entirely within it. Dirac observables are defined as phase space functions \(O\) that Poisson-commute with all the constraints when the constraint equations are imposed; that is, they are quantities defined on the constraint surface that are invariant under the gauge transformations of the theory. Then, solving only the Gauss constraint and determining the Dirac observables with respect to it leads us back to the Arnowitt–Deser–Misner (ADM) phase space with the constraints \(H\) and \(H_a\). The dynamics of general relativity is generated by the constraints; it can be shown that six Einstein equations describing time evolution (really a gauge transformation) can be obtained by calculating the Poisson brackets of the three-metric and its conjugate momentum with a linear combination of the spatial diffeomorphism and Hamiltonian constraints. The vanishing of the constraints, giving the physical phase space, yields the four other Einstein equations. Quantization of the constraints – the equations of quantum general relativity Pre-history and Ashtekar new variables Many of the technical problems in canonical quantum gravity revolve around the constraints. Canonical general relativity was originally formulated in terms of metric variables, but there seemed to be insurmountable mathematical difficulties in promoting the constraints to quantum operators because of their highly non-linear dependence on the canonical variables. The equations were much simplified with the introduction of Ashtekar's new variables. Ashtekar variables describe canonical general relativity in terms of a new pair of canonical variables closer to those of gauge theories. 
The first step consists of using densitized triads (a triad \(e^a_i\) is simply three orthogonal vector fields labeled by \(i = 1, 2, 3\), and the densitized triad is defined by \(\tilde{E}^a_i = \sqrt{\det(q)}\, e^a_i\)) to encode information about the spatial metric, \(\det(q)\, q^{ab} = \tilde{E}^a_i \tilde{E}^b_j \delta^{ij}\) (where \(\delta^{ij}\) is the flat space metric, and the above equation expresses that \(q^{ab}\), when written in terms of the basis \(e^a_i\), is locally flat). (Formulating general relativity with triads instead of metrics was not new.) The densitized triads are not unique, and in fact one can perform a local in space rotation with respect to the internal indices \(i\). The canonically conjugate variable is related to the extrinsic curvature by \(K^i_a = K_{ab}\tilde{E}^{bi}/\sqrt{\det(q)}\). But problems similar to those of the metric formulation arise when one tries to quantize the theory. Ashtekar's new insight was to introduce a new configuration variable, \(A^i_a = \Gamma^i_a - i K^i_a\), that behaves as a complex connection, where \(\Gamma^i_a\) is related to the so-called spin connection via \(\Gamma^i_a = \tfrac{1}{2}\epsilon^{ijk}\Gamma_{ajk}\). Here \(A^i_a\) is called the chiral spin connection. It defines a covariant derivative \(\mathcal{D}_a\). It turns out that \(\tilde{E}^a_i\) is the conjugate momentum of \(A^i_a\), and together these form Ashtekar's new variables. The expressions for the constraints in Ashtekar variables, the Gauss law, the spatial diffeomorphism constraint and the (densitized) Hamiltonian constraint, then read: \(G^j = \mathcal{D}_a \tilde{E}^a_j = 0\), \(C_a = \tilde{E}^b_j F^j_{ab} - A^j_a G^j = 0\), \(\tilde{H} = \epsilon_{jkl}\, \tilde{E}^a_j \tilde{E}^b_k F^l_{ab} = 0\), respectively, where \(F^j_{ab}\) is the field strength tensor of the connection \(A^j_a\) and where \(V_a = \tilde{E}^b_j F^j_{ab}\) is referred to as the vector constraint. The above-mentioned local in space rotational invariance is the origin of the gauge invariance here expressed by the Gauss law. Note that these constraints are polynomial in the fundamental variables, unlike the constraints in the metric formulation. This dramatic simplification seemed to open up the way to quantizing the constraints. (See the article Self-dual Palatini action for a derivation of Ashtekar's formalism.) With Ashtekar's new variables, given the configuration variable \(A^i_a\), it is natural to consider wavefunctions \(\Psi(A^i_a)\). This is the connection representation. It is analogous to ordinary quantum mechanics with configuration variable \(q\) and wavefunctions \(\psi(q)\). The configuration variable gets promoted to a quantum operator via \(\hat{A}^i_a \Psi(A) = A^i_a \Psi(A)\) (analogous to \(\hat{q}\psi(q) = q\psi(q)\)), and the triads are (functional) derivatives, \(\hat{\tilde{E}}^a_i \Psi(A) = -i\,\delta\Psi(A)/\delta A^i_a\) (analogous to \(\hat{p}\psi(q) = -i\hbar\, d\psi(q)/dq\)). In passing over to the quantum theory the constraints become operators on a kinematic Hilbert space (the unconstrained Yang–Mills Hilbert space). Note that different orderings of the \(A\)'s and \(\tilde{E}\)'s when replacing the \(\tilde{E}\)'s with derivatives give rise to different operators – the choice made is called the factor ordering and should be chosen via physical reasoning. Formally the constraint operators read \(\hat{G}^j \Psi(A) = \mathcal{D}_a \frac{\delta \Psi(A)}{\delta A^j_a}\), \(\hat{C}_a \Psi(A) = F^j_{ab} \frac{\delta \Psi(A)}{\delta A^j_b}\), \(\hat{\tilde{H}} \Psi(A) = \epsilon_{jkl} F^l_{ab} \frac{\delta}{\delta A^j_a} \frac{\delta}{\delta A^k_b} \Psi(A)\). There are still problems in properly defining all these equations and solving them. For example, the Hamiltonian constraint Ashtekar worked with was the densitized version instead of the original Hamiltonian; that is, he worked with \(\tilde{H} = \sqrt{\det(q)}\, H\). There were serious difficulties in promoting this quantity to a quantum operator. Moreover, although Ashtekar variables had the virtue of simplifying the Hamiltonian, they are complex. When one quantizes the theory, it is difficult to ensure that one recovers real general relativity as opposed to complex general relativity. 
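Because the index gymnastics above is easy to misread, a small numerical check may help. The following is a sketch only, written in plain NumPy with illustrative variable names of our own choosing (it is not code from any LQG package): it verifies that the densitized triad reproduces \(\det(q)\, q^{ab}\) and that a local rotation of the internal index leaves this information untouched.

import numpy as np

rng = np.random.default_rng(0)

# A random invertible co-triad e^i_a (rows: internal index i, columns: spatial index a).
e_co = rng.normal(size=(3, 3))
q = e_co.T @ e_co                  # spatial metric  q_ab = e^i_a e^i_b  (positive definite)
det_q = np.linalg.det(q)
e = np.linalg.inv(e_co)            # triad e^a_i, the inverse of the co-triad
E = np.sqrt(det_q) * e             # densitized triad  E~^a_i = sqrt(det q) e^a_i

# Defining identity:  E~^a_i E~^b_j delta^ij = det(q) q^{ab}
lhs = E @ E.T
rhs = det_q * np.linalg.inv(q)
assert np.allclose(lhs, rhs)

# An internal SO(3) rotation changes E but not the metric information it encodes.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])    # rotation acting on the internal index i
assert np.allclose((E @ R) @ (E @ R).T, lhs)
print("densitized triad encodes det(q) q^{ab}, up to internal rotations")

The rotation check is the numerical counterpart of the statement that the densitized triads are not unique: the whole internal-rotation orbit of \(\tilde{E}^a_i\) corresponds to one and the same spatial metric.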
Quantum constraints as the equations of quantum general relativity The classical result of the Poisson bracket of the smeared Gauss law \(G(\lambda) = \int d^3x\, \lambda^j (\mathcal{D}_a \tilde{E}^a)^j\) with the connection is \(\{G(\lambda), A^j_a\} = (\mathcal{D}_a \lambda)^j\). The quantum Gauss law reads \(\hat{G}^j \Psi(A) = -i \mathcal{D}_a \frac{\delta \Psi[A]}{\delta A^j_a} = 0\). If one smears the quantum Gauss law and studies its action on the quantum state, one finds that the action of the constraint on the quantum state is equivalent to shifting the argument of \(\Psi\) by an infinitesimal (in the sense of the parameter \(\lambda\) small) gauge transformation, \(\big[1 + \int d^3x\, \lambda^j(x) \hat{G}^j(x)\big] \Psi(A) = \Psi(A + \mathcal{D}\lambda) = \Psi(A)\), and the last identity comes from the fact that the constraint annihilates the state. So the constraint, as a quantum operator, imposes the same symmetry that its vanishing imposed classically: it tells us that the functions \(\Psi(A)\) have to be gauge-invariant functions of the connection. The same idea is true for the other constraints. Therefore, the two-step process in the classical theory of solving the constraints (equivalent to solving the admissibility conditions for the initial data) and looking for the gauge orbits (solving the 'evolution' equations) is replaced by a one-step process in the quantum theory, namely looking for solutions \(\Psi\) of the quantum equations \(\hat{C}_I \Psi = 0\). This is because it solves the constraint at the quantum level and simultaneously looks for states that are gauge invariant, because \(\hat{C}_I\) is the quantum generator of gauge transformations (gauge-invariant functions are constant along the gauge orbits and thus characterize them). Recall that, at the classical level, solving the admissibility conditions and evolution equations was equivalent to solving all of Einstein's field equations; this underlines the central role of the quantum constraint equations in canonical quantum gravity. Introduction of the loop representation It was in particular the inability to have good control over the space of solutions to the Gauss law and spatial diffeomorphism constraints that led Rovelli and Smolin to consider the loop representation in gauge theories and quantum gravity. LQG includes the concept of a holonomy. A holonomy is a measure of how much the initial and final values of a spinor or vector differ after parallel transport around a closed loop; it is denoted \(h_\gamma[A]\). Knowledge of the holonomies is equivalent to knowledge of the connection, up to gauge equivalence. Holonomies can also be associated with an edge; under a Gauss law transformation these transform as \(h'_e = U_{s(e)}\, h_e\, U^{-1}_{t(e)}\), where \(s(e)\) and \(t(e)\) denote the source and target points of the edge \(e\). For a closed loop \(\gamma\) the source and target coincide, and assuming \(U_{s(\gamma)} = U_{t(\gamma)} = U\), this yields \(h'_\gamma = U h_\gamma U^{-1}\), or \(\mathrm{Tr}\, h'_\gamma = \mathrm{Tr}(U h_\gamma U^{-1}) = \mathrm{Tr}\, h_\gamma\). The trace of a holonomy around a closed loop is written \(W_\gamma[A] = \mathrm{Tr}\, h_\gamma[A]\) and is called a Wilson loop. Thus Wilson loops are gauge invariant. The explicit form of the holonomy is \(h_\gamma[A] = \mathcal{P} \exp\big\{ -\int_\gamma ds\, \dot{\gamma}^a A^i_a(\gamma(s))\, T_i \big\}\), where \(\gamma\) is the curve along which the holonomy is evaluated, \(s\) is a parameter along the curve, \(\mathcal{P}\) denotes path ordering, meaning factors for smaller values of \(s\) appear to the left, and the \(T_i\) are matrices that satisfy the algebra \(T^i T^j - T^j T^i = 2 i \epsilon^{ijk} T^k\). The Pauli matrices satisfy the above relation. It turns out that there are infinitely many more examples of sets of matrices that satisfy these relations, where each set comprises \(N \times N\) matrices with \(N \geq 2\), and where none of these can be thought to 'decompose' into two or more examples of lower dimension. They are called different irreducible representations of the algebra, the most fundamental representation being the Pauli matrices. The holonomy is labelled by a half-integer \(j\) according to the irreducible representation used. The use of Wilson loops explicitly solves the Gauss gauge constraint. The loop representation is required to handle the spatial diffeomorphism constraint. 
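The gauge behaviour of holonomies and Wilson loops can be illustrated numerically in a crude, lattice-style discretization, where each segment of a closed loop carries one SU(2) matrix standing in for the path-ordered exponential of the connection along it. This is a sketch under stated assumptions: the helpers "su2" and "holonomy" are invented here for illustration, and random link matrices replace an actual connection. The check is that the trace (the Wilson loop) is unchanged by gauge transformations acting at the points of the loop.

import numpy as np

rng = np.random.default_rng(1)
PAULI = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def su2(v):
    # exp(i v.sigma): an SU(2) element built from a real 3-vector v.
    th = np.linalg.norm(v)
    if th == 0:
        return np.eye(2, dtype=complex)
    X = sum(vi * s for vi, s in zip(v, PAULI))
    return np.cos(th) * np.eye(2) + 1j * np.sin(th) * X / th

# Discretize a closed loop into N links; each link carries an SU(2) holonomy.
N = 12
links = [su2(rng.normal(size=3)) for _ in range(N)]

def holonomy(links):
    # Path-ordered product of the link holonomies around the loop.
    h = np.eye(2, dtype=complex)
    for U in links:
        h = h @ U
    return h

W = np.trace(holonomy(links))   # the Wilson loop

# A gauge transformation acts at the points of the loop: U_k -> g_k U_k g_{k+1}^dagger,
# with the point after the last link identified with the first (closed loop).
g = [su2(rng.normal(size=3)) for _ in range(N)]
gauged = [g[k] @ links[k] @ g[(k + 1) % N].conj().T for k in range(N)]
assert np.isclose(np.trace(holonomy(gauged)), W)
print("Wilson loop trace is gauge invariant:", W)

In the gauged product every internal \(g_{k+1}^{\dagger} g_{k+1}\) pair cancels, leaving \(g_0\, h_\gamma\, g_0^{\dagger}\), whose trace equals that of \(h_\gamma\); this is exactly the mechanism by which Wilson loops solve the Gauss constraint.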
With Wilson loops as a basis, any Gauss gauge invariant function expands as \(\Psi[A] = \sum_\gamma \Psi[\gamma]\, W_\gamma[A]\). This is called the loop transform and is analogous to the momentum representation in quantum mechanics (see Position and momentum space). The QM representation has a basis of states \(\exp(ikx)\) labelled by a number \(k\), expands as \(\psi(x) = \int dk\, \psi(k)\exp(ikx)\), and works with the coefficients of the expansion, \(\psi(k)\). The inverse loop transform is defined by \(\Psi[\gamma] = \int [dA]\, W_\gamma[A]\, \Psi[A]\). This defines the loop representation. Given an operator \(\hat{O}\) in the connection representation, \(\Phi[A] = \hat{O}\Psi[A]\), one should define the corresponding operator \(\hat{O}'\) on \(\Psi[\gamma]\) in the loop representation via \(\Phi[\gamma] = \hat{O}'\Psi[\gamma]\), where \(\Phi[\gamma]\) is defined by the usual inverse loop transform, \(\Phi[\gamma] = \int [dA]\, W_\gamma[A]\, \Phi[A]\). A transformation formula giving the action of the operator \(\hat{O}'\) on \(\Psi[\gamma]\) in terms of the action of the operator \(\hat{O}\) on \(\Psi[A]\) is then obtained by equating the right-hand sides of these expressions, namely \(\hat{O}'\Psi[\gamma] = \int [dA]\, \big(\hat{O}^{\dagger} W_\gamma[A]\big)\, \Psi[A]\), where \(\hat{O}^{\dagger}\) means the operator \(\hat{O}\) but with the reverse factor ordering (remember from simple quantum mechanics that the product of operators is reversed under conjugation). The action of this operator on the Wilson loop is evaluated as a calculation in the connection representation, and the result is rearranged purely as a manipulation in terms of loops (with regard to the action on the Wilson loop, the chosen transformed operator is the one with the opposite factor ordering compared to the one used for its action on wavefunctions \(\Psi[A]\)). This gives the physical meaning of the operator \(\hat{O}'\). For example, if \(\hat{O}^{\dagger}\) corresponded to a spatial diffeomorphism, then this can be thought of as keeping the connection field \(A\) of \(W_\gamma[A]\) where it is while performing a spatial diffeomorphism on \(\gamma\) instead. Therefore, the meaning of \(\hat{O}'\) is a spatial diffeomorphism on \(\gamma\), the argument of \(\Psi[\gamma]\). In the loop representation, the spatial diffeomorphism constraint is solved by considering functions of loops \(\Psi[\gamma]\) that are invariant under spatial diffeomorphisms of the loop \(\gamma\). That is, knot invariants are used. This opens up an unexpected connection between knot theory and quantum gravity. Any collection of non-intersecting Wilson loops satisfies Ashtekar's quantum Hamiltonian constraint. Using a particular ordering of terms and replacing \(\tilde{E}^a_i\) by a derivative, the action of the quantum Hamiltonian constraint on a Wilson loop is, schematically, \(\hat{\tilde{H}}\, W_\gamma[A] \sim \epsilon_{jkl}\, F^l_{ab}\, \frac{\delta}{\delta A^j_a} \frac{\delta}{\delta A^k_b}\, W_\gamma[A]\). When a derivative is taken it brings down the tangent vector \(\dot{\gamma}^a\) of the loop \(\gamma\). So \(\hat{\tilde{H}}\, W_\gamma[A] \sim F^l_{ab}\, \dot{\gamma}^a \dot{\gamma}^b\). However, as \(F^l_{ab}\) is anti-symmetric in the indices \(a\) and \(b\), this vanishes (this assumes that \(\gamma\) is not discontinuous anywhere, so that the tangent vector is unique). With regard to the loop representation, the wavefunctions \(\Psi[\gamma]\) vanish when the loop has discontinuities and are knot invariants. Such functions solve the Gauss law, the spatial diffeomorphism constraint and (formally) the Hamiltonian constraint. This yields an infinite set of exact (if only formal) solutions to all the equations of quantum general relativity! This generated a lot of interest in the approach and eventually led to LQG. Geometric operators, the need for intersecting Wilson loops and spin network states The easiest geometric quantity is the area. Let us choose coordinates so that the surface \(\Sigma\) is characterized by \(x^3 = 0\). The area of a small parallelogram of the surface is the product of the length of each side times \(\sin\theta\), where \(\theta\) is the angle between the sides. Say one edge is given by the vector \(\vec{u}\) and the other by \(\vec{v}\); then \(A = \|\vec{u}\|\, \|\vec{v}\|\, \sin\theta = \sqrt{\|\vec{u}\|^2 \|\vec{v}\|^2 - (\vec{u} \cdot \vec{v})^2}\). In the space spanned by \(x^1\) and \(x^2\) there is an infinitesimal parallelogram described by \(\vec{u} = \vec{e}_1\, dx^1\) and \(\vec{v} = \vec{e}_2\, dx^2\). Using \(q_{AB} = \vec{e}_A \cdot \vec{e}_B\) (where the indices \(A\) and \(B\) run from 1 to 2), this yields the area of the surface \(\Sigma\) given by \(A_\Sigma = \int_\Sigma dx^1 dx^2\, \sqrt{\det\big(q^{(2)}\big)}\), where \(\det(q^{(2)}) = q_{11} q_{22} - q_{12}^2\) and \(q^{(2)}\) is the determinant of the metric induced on \(\Sigma\). The latter can be rewritten \(\det(q^{(2)}) = \tfrac{1}{2}\epsilon^{AB}\epsilon^{CD} q_{AC} q_{BD}\), where the indices \(A, \dots, D\) go from 1 to 2. 
This can be further rewritten as \(\det(q^{(2)}) = \tfrac{1}{2}\epsilon^{3ab}\epsilon^{3cd} q_{ac} q_{bd}\). The standard formula for an inverse matrix is \(q^{ab} = \tfrac{1}{2}\, \epsilon^{acd}\epsilon^{bef} q_{ce} q_{df} / \det(q)\). There is a similarity between this and the expression for \(\det(q^{(2)})\); indeed \(\det(q^{(2)}) = \det(q)\, q^{33}\). But in Ashtekar variables, \(\tilde{E}^a_i \tilde{E}^{bi} = \det(q)\, q^{ab}\). Therefore, \(A_\Sigma = \int_\Sigma dx^1 dx^2\, \sqrt{\tilde{E}^3_i \tilde{E}^{3i}}\). According to the rules of canonical quantization the triads should be promoted to quantum operators, \(\hat{\tilde{E}}^3_i \sim \delta / \delta A^i_3\). The area \(A_\Sigma\) can be promoted to a well-defined quantum operator despite the fact that it contains a product of two functional derivatives and a square root. Putting the holonomy in the \(j\)-th representation, the Casimir \(\sum_i T^i T^i\) is proportional to \(j(j+1)\); this quantity is important in the final formula for the area spectrum. The result is \(\hat{A}_\Sigma\, W_\gamma[A] = 8\pi \ell^2_{\mathrm{Planck}}\, \beta \sum_I \sqrt{j_I (j_I + 1)}\; W_\gamma[A]\), where the sum is over all edges \(I\) of the Wilson loop that pierce the surface \(\Sigma\) and \(\beta\) is the Immirzi parameter. The formula for the volume of a region \(R\) is given by \(V = \int_R d^3x\, \sqrt{\det(q)} = \int_R d^3x\, \sqrt{\tfrac{1}{3!}\, \epsilon_{abc}\, \epsilon^{ijk}\, \tilde{E}^a_i \tilde{E}^b_j \tilde{E}^c_k}\). The quantization of the volume proceeds the same way as with the area. Each time a derivative is taken, it brings down a tangent vector \(\dot{\gamma}^a\), and when the volume operator acts on non-intersecting Wilson loops the result vanishes. Quantum states with non-zero volume must therefore involve intersections. Given that the anti-symmetric summation is taken over all three spatial indices in the formula for the volume, it needs intersections with at least three non-coplanar lines. At least four-valent vertices are needed for the volume operator to be non-vanishing. Assuming the real representation where the gauge group is \(SU(2)\), Wilson loops are an over-complete basis, as there are identities relating different Wilson loops. These occur because Wilson loops are based on matrices (the holonomy), and these matrices satisfy identities. Given any two \(SU(2)\) matrices \(\mathbb{A}\) and \(\mathbb{B}\), \(\mathrm{Tr}(\mathbb{A})\, \mathrm{Tr}(\mathbb{B}) = \mathrm{Tr}(\mathbb{A}\mathbb{B}) + \mathrm{Tr}(\mathbb{A}\mathbb{B}^{-1})\). This implies that, given two loops \(\gamma\) and \(\eta\) that intersect, \(W_\gamma[A]\, W_\eta[A] = W_{\gamma \circ \eta}[A] + W_{\gamma \circ \eta^{-1}}[A]\), where by \(\eta^{-1}\) we mean the loop traversed in the opposite direction and \(\gamma \circ \eta\) means the loop obtained by going around \(\gamma\) and then along \(\eta\). Given that the matrices are unitary, one has that \(W_\gamma[A] = W_{\gamma^{-1}}[A]\). Also, given the cyclic property of matrix traces (i.e. \(\mathrm{Tr}(\mathbb{A}\mathbb{B}) = \mathrm{Tr}(\mathbb{B}\mathbb{A})\)), one has that \(W_{\gamma \circ \eta}[A] = W_{\eta \circ \gamma}[A]\). These identities can be combined with each other into further identities of increasing complexity, adding more loops. These identities are the so-called Mandelstam identities. Spin networks are certain linear combinations of intersecting Wilson loops designed to address the over-completeness introduced by the Mandelstam identities (for trivalent intersections they eliminate the over-completeness entirely), and they actually constitute a basis for all gauge invariant functions. As mentioned above, the holonomy tells one how to propagate test spin-half particles. A spin network state assigns an amplitude to a set of spin-half particles tracing out a path in space, merging and splitting. These are described by spin networks: the edges are labelled by spins together with 'intertwiners' at the vertices, which are prescriptions for how to sum over the different ways the spins are rerouted. The sums over reroutings are chosen so as to make the form of the intertwiner invariant under Gauss gauge transformations. Hamiltonian constraint of LQG In the long history of canonical quantum gravity, formulating the Hamiltonian constraint as a quantum operator (the Wheeler–DeWitt equation) in a mathematically rigorous manner has been a formidable problem. It was in the loop representation that a mathematically well-defined Hamiltonian constraint was finally formulated in 1996. We leave further details of its construction to the article Hamiltonian constraint of LQG. This, together with the quantum versions of the Gauss law and spatial diffeomorphism constraints written in the loop representation, are the central equations of LQG (modern canonical quantum general relativity). 
Finding the states that are annihilated by these constraints (the physical states), together with the corresponding physical inner product and observables, is the main goal of the technical side of LQG. An important aspect of the Hamiltonian operator is that it only acts at vertices (a consequence of this is that Thiemann's Hamiltonian operator, like Ashtekar's operator, annihilates non-intersecting loops, except that now this is not just formal but has rigorous mathematical meaning). More precisely, its action is non-zero on vertices of valence three and greater, and it results in a linear combination of new spin networks in which the original graph has been modified by the addition of lines at the vertex together with a change in the labels of the adjacent links of the vertex. Chiral fermions and the fermion doubling problem A significant challenge in theoretical physics lies in unifying LQG, a theory of quantum spacetime, with the Standard Model of particle physics, which describes fundamental forces and particles. A major obstacle in this endeavor is the fermion doubling problem, which arises when incorporating chiral fermions into the LQG framework. Chiral fermions, such as electrons and quarks, are fundamental particles characterized by their "handedness" or chirality. This property dictates that a particle and its mirror image behave differently under weak interactions. This asymmetry is fundamental to the Standard Model's success in explaining numerous physical phenomena. However, attempts to integrate chiral fermions into LQG often result in the appearance of spurious, mirror-image particles. Instead of a single left-handed fermion, for instance, the theory predicts the existence of both a left-handed and a right-handed version. This "doubling" contradicts the observed chirality of the Standard Model and disrupts its predictive power. The fermion doubling problem poses a significant hurdle in constructing a consistent theory of quantum gravity. The Standard Model's accuracy in describing the universe at the smallest scales relies heavily on the unique properties of chiral fermions. Without a solution to this problem, incorporating matter and its interactions into a unified framework of quantum gravity remains a significant challenge. Therefore, resolving the fermion doubling problem is crucial for advancing our understanding of the universe at its most fundamental level and for developing a complete theory that unites gravity with the quantum world. Spin foams In loop quantum gravity (LQG), a spin network represents a "quantum state" of the gravitational field on a 3-dimensional hypersurface. The set of all possible spin networks (or, more accurately, "s-knots" – that is, equivalence classes of spin networks under diffeomorphisms) is countable; it constitutes a basis of the LQG Hilbert space. In physics, a spin foam is a topological structure made out of two-dimensional faces that represents one of the configurations that must be summed over to obtain a Feynman path integral (functional integration) description of quantum gravity. It is closely related to loop quantum gravity. Spin foam derived from the Hamiltonian constraint operator The Hamiltonian constraint generates 'time' evolution. Solving the Hamiltonian constraint should tell us how quantum states evolve in 'time' from an initial spin network state to a final spin network state. One approach to solving the Hamiltonian constraint starts with what is called the Dirac delta function, \(\delta(\hat{H})\), formally expanded as a power series in the Hamiltonian constraint. 
The summation of this series over different sequences of actions of the Hamiltonian constraint can be visualized as a summation over different histories of 'interaction vertices' in the 'time' evolution sending the initial spin network to the final spin network. Each time the Hamiltonian operator acts, it does so by adding a new edge at a vertex. This then naturally gives rise to the two-complex (a combinatorial set of faces that join along edges, which in turn join at vertices) underlying the spin foam description: we evolve forward an initial spin network sweeping out a surface, and the action of the Hamiltonian constraint operator is to produce a new planar surface starting at the vertex. We are able to use the action of the Hamiltonian constraint on the vertex of a spin network state to associate an amplitude to each "interaction" (in analogy to Feynman diagrams). This opens a way of trying to directly link canonical LQG to a path integral description. Just as spin networks describe quantum space, each configuration contributing to these path integrals, or sums over histories, describes 'quantum spacetime'. Because of their resemblance to soap foams and the way they are labeled, John Baez gave these 'quantum spacetimes' the name 'spin foams'. There are, however, severe difficulties with this particular approach. For example, the Hamiltonian operator is not self-adjoint; in fact it is not even a normal operator (i.e. the operator does not commute with its adjoint), and so the spectral theorem cannot be used to define the exponential in general. The most serious problem is that the operators \(\hat{H}(x)\) are not mutually commuting; it can then be shown that the formal quantity \(\delta(\hat{H})\) cannot even define a (generalized) projector. The master constraint (see below) does not suffer from these problems and as such offers a way of connecting the canonical theory to the path integral formulation. Spin foams from BF theory It turns out there are alternative routes to formulating the path integral; however, their connection to the Hamiltonian formalism is less clear. One way is to start with BF theory. This is a simpler theory than general relativity: it has no local degrees of freedom and as such depends only on topological aspects of the fields. BF theory is what is known as a topological field theory. Surprisingly, it turns out that general relativity can be obtained from BF theory by imposing a constraint. BF theory involves a field \(B\), and if one chooses the field to be the (anti-symmetric) product of two tetrads (tetrads are like triads but in four spacetime dimensions), one recovers general relativity. The condition that the \(B\) field be given by the product of two tetrads is called the simplicity constraint. The spin foam dynamics of the topological field theory is well understood. Given the spin foam 'interaction' amplitudes for this simple theory, one then tries to implement the simplicity conditions to obtain a path integral for general relativity. The non-trivial task of constructing a spin foam model is then reduced to the question of how this simplicity constraint should be imposed in the quantum theory. The first attempt at this was the famous Barrett–Crane model. However, this model was shown to be problematic: for example, there did not seem to be enough degrees of freedom to ensure the correct classical limit. 
It has been argued that the simplicity constraint was imposed too strongly at the quantum level and should only be imposed in the sense of expectation values, just as with the Lorenz gauge condition in the Gupta–Bleuler formalism of quantum electrodynamics. New models have now been put forward, sometimes motivated by imposing the simplicity conditions in a weaker sense. Another difficulty here is that spin foams are defined on a discretization of spacetime. While this presents no problems for a topological field theory, as it has no local degrees of freedom, it presents problems for GR. This is known as the problem of triangulation dependence. Modern formulation of spin foams Just as imposing the classical simplicity constraint recovers general relativity from BF theory, it is expected that an appropriate quantum simplicity constraint will recover quantum gravity from quantum BF theory. Progress has been made with regard to this issue by Engle, Pereira, and Rovelli, by Freidel and Krasnov, and by Livine and Speziale in defining spin foam interaction amplitudes with better behaviour. An attempt to make contact between the EPRL-FK spin foam and the canonical formulation of LQG has been made. Spin foam derived from the master constraint operator See below. The semiclassical limit and loop quantum gravity The classical limit is the ability of a physical theory to approximate classical mechanics. It is used with physical theories that predict non-classical behavior. Any candidate theory of quantum gravity must be able to reproduce Einstein's theory of general relativity as a classical limit of a quantum theory. This is not guaranteed, because of a feature of quantum field theories: they have different sectors, which are analogous to the different phases that come about in the thermodynamical limit of statistical systems. Just as different phases are physically different, so are different sectors of a quantum field theory. It may turn out that LQG belongs to an unphysical sector – one in which one does not recover general relativity in the semiclassical limit – or there might not be any physical sector at all. Moreover, the physical Hilbert space must contain enough semiclassical states to guarantee that the quantum theory obtained can return to the classical theory, and quantum anomalies must be avoided; otherwise there will be restrictions on the physical Hilbert space that have no counterpart in the classical theory, implying that the quantum theory has fewer degrees of freedom than the classical theory. Theorems establishing the uniqueness of the loop representation as defined by Ashtekar et al. (i.e. a certain concrete realization of a Hilbert space and associated operators reproducing the correct loop algebra) have been given by two groups (Lewandowski, Okołów, Sahlmann and Thiemann; and Christian Fleischhack). Before this result was established, it was not known whether there could be other examples of Hilbert spaces with operators invoking the same loop algebra – other realizations not equivalent to the one that had been used. These uniqueness theorems imply no others exist, so if LQG does not have the correct semiclassical limit then the theorems would mean the end of the loop representation of quantum gravity. 
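The notion of a semiclassical state invoked above is not specific to gravity, and what is being asked of such states can be seen in ordinary quantum mechanics. The following toy sketch (plain NumPy; the variable names are ours, and nothing here is LQG-specific) builds a harmonic-oscillator coherent state in a truncated Fock basis and checks that the expectation value of position tracks the classical trajectory, which is the kind of behaviour one demands of candidate semiclassical states in LQG.

import numpy as np

N = 60                                    # Fock-space truncation (units: hbar = m = omega = 1)
n = np.arange(N)
a = np.diag(np.sqrt(n[1:]), k=1)          # annihilation operator
x = (a + a.T) / np.sqrt(2)                # position operator

alpha = 2.0                               # coherent-state amplitude (real)
coeff = np.zeros(N)
coeff[0] = np.exp(-alpha**2 / 2)
for k in range(1, N):                     # |alpha> = e^{-|a|^2/2} sum alpha^n/sqrt(n!) |n>
    coeff[k] = coeff[k - 1] * alpha / np.sqrt(k)

for t in np.linspace(0.0, 2 * np.pi, 5):
    psi_t = coeff * np.exp(-1j * (n + 0.5) * t)     # exact time evolution in the Fock basis
    x_mean = np.real(np.conj(psi_t) @ x @ psi_t)
    print(f"t={t:5.2f}  <x>={x_mean:+.4f}  classical={np.sqrt(2)*alpha*np.cos(t):+.4f}")

The printed expectation values follow \(\sqrt{2}\,\alpha \cos t\) to high accuracy: the quantum state "returns" the classical trajectory. The open question for LQG, discussed next, is whether its physical Hilbert space contains states playing this role with respect to classical general relativity.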
Difficulties and progress checking the semiclassical limit There are a number of difficulties in trying to establish that LQG gives Einstein's theory of general relativity in the semiclassical limit: 1. There is no operator corresponding to infinitesimal spatial diffeomorphisms (it is not surprising that the theory has no generator of infinitesimal spatial 'translations', as it predicts spatial geometry has a discrete nature; compare to the situation in condensed matter). Instead it must be approximated by finite spatial diffeomorphisms, and so the Poisson bracket structure of the classical theory is not exactly reproduced. This problem can be circumvented with the introduction of the so-called master constraint (see below). 2. There is the problem of reconciling the discrete combinatorial nature of the quantum states with the continuous nature of the fields of the classical theory. 3. There are serious difficulties arising from the structure of the Poisson brackets involving the spatial diffeomorphism and Hamiltonian constraints. In particular, the algebra of (smeared) Hamiltonian constraints does not close: it is proportional to a sum over infinitesimal spatial diffeomorphisms (which, as noted above, do not exist in the quantum theory) where the coefficients of proportionality are not constants but have non-trivial phase space dependence – as such it does not form a Lie algebra. However, the situation is improved by the introduction of the master constraint. 4. The semiclassical machinery developed so far is only appropriate to non-graph-changing operators; however, Thiemann's Hamiltonian constraint is a graph-changing operator – the new graph it generates has degrees of freedom upon which the coherent state does not depend, and so their quantum fluctuations are not suppressed. There is also the restriction, so far, that these coherent states are only defined at the kinematic level, and one now has to lift them to the level of the diffeomorphism-invariant and physical Hilbert spaces. It can be shown that Thiemann's Hamiltonian constraint is required to be graph-changing in order to resolve problem 3 in some sense. The master constraint algebra, however, is trivial, and so the requirement that it be graph-changing can be lifted; indeed, non-graph-changing master constraint operators have been defined. As far as is currently known, this problem is still out of reach. 5. Formulating observables for classical general relativity is a formidable problem in itself because of its non-linear nature and spacetime diffeomorphism invariance. A systematic approximation scheme to calculate observables has recently been developed. Difficulties in trying to examine the semiclassical limit of the theory should not be confused with it having the wrong semiclassical limit. Concerning issue number 2 above, consider so-called weave states. Ordinary measurements of geometric quantities are macroscopic, and Planckian discreteness is smoothed out. The fabric of a T-shirt is analogous: at a distance it is a smooth curved two-dimensional surface, but on closer inspection we see that it is actually composed of thousands of one-dimensional linked threads. The image of space given in LQG is similar. Consider a large spin network formed by a large number of nodes and links, each of Planck scale. Probed at a macroscopic scale, it appears as a three-dimensional continuous metric geometry. 
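The Planckian discreteness that gets smoothed out at macroscopic scales can be made quantitative using the area spectrum quoted earlier, \(A = 8\pi \ell^2_{\mathrm{Planck}}\, \beta \sum_I \sqrt{j_I(j_I+1)}\). The sketch below simply tabulates the lowest eigenvalues; note that the numerical value of the Immirzi parameter \(\beta\) is an input assumption here (one value quoted in the black-hole-entropy literature), not something the code derives.

import itertools
import math

PLANCK_LENGTH = 1.616255e-35    # metres (CODATA value)
IMMIRZI_BETA = 0.2375           # assumed input; one value quoted in the literature

def area_eigenvalue(spins):
    # A = 8*pi*beta*l_P^2 * sum_I sqrt(j_I (j_I + 1)) over the punctures I.
    return 8 * math.pi * IMMIRZI_BETA * PLANCK_LENGTH**2 * sum(
        math.sqrt(j * (j + 1)) for j in spins)

# Lowest part of the spectrum: one, two or three edges with spins in {1/2, 1, 3/2}
# puncturing the surface.
levels = set()
for n_edges in (1, 2, 3):
    for spins in itertools.combinations_with_replacement((0.5, 1.0, 1.5), n_edges):
        levels.add(round(area_eigenvalue(spins) / PLANCK_LENGTH**2, 6))
for a in sorted(levels)[:8]:
    print(f"A = {a:.4f} * l_P^2")

The gaps between successive eigenvalues are of order \(\ell^2_{\mathrm{Planck}}\), which is why any area measured at laboratory scales, involving enormous numbers of punctures, looks continuous.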
To make contact with low energy physics it is mandatory to develop approximation schemes both for the physical inner product and for Dirac observables; the spin foam models that have been intensively studied can be viewed as avenues toward approximation schemes for said physical inner product. Markopoulou et al. adopted the idea of noiseless subsystems in an attempt to solve the problem of the low energy limit in background independent quantum gravity theories. The idea has led to the possibility of matter of the standard model being identified with emergent degrees of freedom from some versions of LQG (see section below: LQG and related research programs). As Wightman emphasized in the 1950s, in Minkowski QFTs the $n$-point functions completely determine the theory. In particular, one can calculate the scattering amplitudes from these quantities. As explained below in the section on background-independent scattering amplitudes, in the background-independent context the $n$-point functions refer to a state, and in gravity that state can naturally encode information about a specific geometry, which can then appear in the expressions of these quantities. To leading order, LQG calculations have been shown to agree in an appropriate sense with the $n$-point functions calculated in effective low-energy quantum general relativity. Improved dynamics and the master constraint The master constraint Thiemann's Master Constraint Programme for Loop Quantum Gravity (LQG) was proposed as a classically equivalent way to impose the infinite number of Hamiltonian constraint equations in terms of a single master constraint, $M := \int_\Sigma d^3x\, \frac{[C(x)]^2}{\sqrt{\det q(x)}}$, which involves the square of the constraints in question. An initial objection to the use of the master constraint was that on first sight it did not seem to encode information about the observables; because the master constraint is quadratic in the constraint, when one computes its Poisson bracket with any quantity, the result is proportional to the constraint, therefore it vanishes when the constraints are imposed and as such does not select out particular phase space functions. However, it was realized that the condition $\{O, \{O, M\}\}\big|_{M=0} = 0$, where $O$ is at least a twice differentiable function on phase space, is equivalent to $O$ being a weak Dirac observable with respect to the constraints in question. So the master constraint does capture information about the observables. Because of its significance this is known as the master equation. That the master constraint Poisson algebra is an honest Lie algebra opens the possibility of using a method, known as group averaging, in order to construct solutions of the infinite number of Hamiltonian constraints, a physical inner product thereon and Dirac observables via what is known as refined algebraic quantization, or RAQ. The quantum master constraint Define the quantum master constraint (regularisation issues aside) as $\hat{M} := \int_\Sigma d^3x\, \hat{C}^\dagger(x)\,\hat{C}(x)$. Obviously, $\hat{C}(x)|\psi\rangle = 0$ for all $x$ implies $\hat{M}|\psi\rangle = 0$. Conversely, if $\hat{M}|\psi\rangle = 0$ then $0 = \langle\psi|\hat{M}|\psi\rangle = \int_\Sigma d^3x\, \big\|\hat{C}(x)|\psi\rangle\big\|^2$ implies $\hat{C}(x)|\psi\rangle = 0$ for all $x$. First compute the matrix elements of the would-be operator $\hat{M}$, that is, the quadratic form $Q_M$. $Q_M$ is a graph-changing, diffeomorphism-invariant quadratic form that cannot exist on the kinematic Hilbert space $\mathcal{H}_{\text{Kin}}$, and must be defined on $\mathcal{H}_{\text{Diff}}$. Since the master constraint operator is densely defined on $\mathcal{H}_{\text{Diff}}$, it is a positive and symmetric operator in $\mathcal{H}_{\text{Diff}}$. Therefore, the quadratic form $Q_M$ associated with it is closable. The closure of $Q_M$ is the quadratic form of a unique self-adjoint operator $\hat{\hat{M}}$, called the Friedrichs extension of $\hat{M}$. We relabel $\hat{\hat{M}}$ as $\hat{M}$ for simplicity.
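The step from the master equation to weak Dirac observables can be made explicit. The following display is a hedged reconstruction of the standard argument, suppressing distributional subtleties: since $M$ is quadratic in $C$, a short computation at $M = 0$ gives

```latex
\{O,\{O,M\}\}\Big|_{M=0}
  \;=\; 2\int_\Sigma d^3x\,\frac{\{O,C(x)\}^2}{\sqrt{\det q(x)}}
  \;=\; 0
\quad\Longleftrightarrow\quad
\{O,C(x)\}\Big|_{C=0} = 0 \ \ \text{for all } x,
```

because the integrand is a sum of non-negative terms; this is precisely the statement that $O$ weakly commutes with every Hamiltonian constraint, i.e. that it is a weak Dirac observable.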
Note that the presence of an inner product, viz Eq 4, means there are no superfluous solutions, i.e. there are no $|\psi\rangle$ such that $\hat{M}|\psi\rangle = 0$ but for which $\hat{C}(x)|\psi\rangle \neq 0$ for some $x$. It is also possible to construct a quadratic form for what is called the extended master constraint (discussed below) on $\mathcal{H}_{\text{Kin}}$, which also involves the weighted integral of the square of the spatial diffeomorphism constraint (this is possible because that quadratic form is not graph-changing). The spectrum of the master constraint may not contain zero due to normal or factor ordering effects which are finite but similar in nature to the infinite vacuum energies of background-dependent quantum field theories. In this case it turns out to be physically correct to replace $\hat{M}$ with $\hat{M}' := \hat{M} - \min(\operatorname{spec}(\hat{M}))\,\hat{1}$, provided that the 'normal ordering constant' vanishes in the classical limit, that is, $\lim_{\hbar \to 0} \min(\operatorname{spec}(\hat{M})) = 0$, so that $\hat{M}'$ is a valid quantisation of $M$. Testing the master constraint The constraints in their primitive form are rather singular; this was the reason for integrating them over test functions to obtain smeared constraints. However, it would appear that the equation for the master constraint, given above, is even more singular, involving the product of two primitive constraints (although integrated over space). Squaring the constraint is dangerous, as it could lead to worsened ultraviolet behaviour of the corresponding operator, and hence the master constraint programme must be approached with care. In doing so the master constraint programme has been satisfactorily tested in a number of model systems with non-trivial constraint algebras, free and interacting field theories. The master constraint for LQG was established as a genuine positive self-adjoint operator and the physical Hilbert space of LQG was shown to be non-empty, a consistency test LQG must pass to be a viable theory of quantum general relativity. Applications of the master constraint The master constraint has been employed in attempts to approximate the physical inner product and define more rigorous path integrals. The Consistent Discretizations approach to LQG is an application of the master constraint program to construct the physical Hilbert space of the canonical theory. Spin foam from the master constraint The master constraint is easily generalized to incorporate the other constraints. It is then referred to as the extended master constraint, denoted $M_E$. We can define the extended master constraint, which imposes both the Hamiltonian constraint and spatial diffeomorphism constraint as a single operator, as $M_E := \int_\Sigma d^3x\, \frac{[C(x)]^2 + q^{ab}\,C_a(x)\,C_b(x)}{\sqrt{\det q(x)}}$. Setting this single constraint to zero is equivalent to $C(x) = 0$ and $C_a(x) = 0$ for all $x$ in $\Sigma$. This constraint implements the spatial diffeomorphism and Hamiltonian constraint at the same time on the kinematic Hilbert space. The physical inner product is then defined as $\langle \phi, \psi \rangle_{\text{Phys}} := \lim_{T \to \infty} \big\langle \phi, \int_{-T}^{T} dt\, e^{i t \hat{M}_E}\, \psi \big\rangle$ (as $\delta(\hat{M}_E) = \lim_{T\to\infty} \frac{1}{2T} \int_{-T}^{T} dt\, e^{i t \hat{M}_E}$). A spin foam representation of this expression is obtained by splitting the $t$-parameter into discrete steps and writing $e^{i t \hat{M}_E} = \lim_{n \to \infty} \big[ e^{i t \hat{M}_E / n} \big]^n \approx \lim_{n \to \infty} \big[ 1 + i t \hat{M}_E / n \big]^n$. The spin foam description then follows from the application of $1 + i t \hat{M}_E / n$ on a spin network, resulting in a linear combination of new spin networks whose graph and labels have been modified. Obviously an approximation is made by truncating the value of $n$ to some finite integer. An advantage of the extended master constraint is that we are working at the kinematic level, and so far it is only here that we have access to semiclassical coherent states. Moreover, one can find non-graph-changing versions of this master constraint operator, which are the only type of operators appropriate for these coherent states.
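The splitting of the $t$-parameter can be checked numerically on a toy model. In the sketch below a small random Hermitian matrix stands in for the (truncated) matrix of the extended master constraint operator; this is purely illustrative and makes no claim about the actual LQG operator:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
M = (A + A.T) / 2.0   # Hermitian stand-in for the master constraint matrix
t = 1.0

# Exact exp(i t M) via eigendecomposition (valid because M is Hermitian).
w, V = np.linalg.eigh(M)
exact = V @ np.diag(np.exp(1j * t * w)) @ V.conj().T

def split_step(M, t, n):
    """Approximate exp(i t M) by [1 + i t M / n]^n, as in the spin foam expansion."""
    step = np.eye(M.shape[0]) + 1j * t / n * M
    return np.linalg.matrix_power(step, n)

for n in (10, 100, 1000):
    err = np.linalg.norm(split_step(M, t, n) - exact)
    print(f"n = {n:5d}  error = {err:.2e}")  # error shrinks roughly like 1/n
```

The truncation to finite $n$ mirrors the finite number of spin foam 'time steps' retained in practical calculations.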
Algebraic quantum gravity (AQG) The master constraint programme has evolved into a fully combinatorial treatment of gravity known as algebraic quantum gravity (AQG). The non-graph-changing master constraint operator is adapted in the framework of algebraic quantum gravity. While AQG is inspired by LQG, it differs drastically from it because in AQG there is fundamentally no topology or differential structure – it is background independent in a more generalized sense and could possibly have something to say about topology change. In this new formulation of quantum gravity, AQG semiclassical states always control the fluctuations of all present degrees of freedom. This makes the AQG semiclassical analysis superior to that of LQG, and progress has been made in establishing that it has the correct semiclassical limit and providing contact with familiar low energy physics. Physical applications of LQG Black hole entropy Black hole thermodynamics is the area of study that seeks to reconcile the laws of thermodynamics with the existence of black hole event horizons. The no-hair conjecture of general relativity states that a black hole is characterized only by its mass, its charge, and its angular momentum; hence, it has no entropy. It appears, then, that one can violate the second law of thermodynamics by dropping an object with nonzero entropy into a black hole. Work by Stephen Hawking and Jacob Bekenstein showed that the second law of thermodynamics can be preserved by assigning to each black hole a black-hole entropy $S_{\text{BH}} = \frac{k_B A}{4 \ell_P^2}$, where $A$ is the area of the hole's event horizon, $k_B$ is the Boltzmann constant, and $\ell_P = \sqrt{G\hbar/c^3}$ is the Planck length. The fact that the black hole entropy is also the maximal entropy that can be obtained by the Bekenstein bound (wherein the bound becomes an equality) was the main observation that led to the holographic principle. An oversight in the application of the no-hair theorem is the assumption that the relevant degrees of freedom accounting for the entropy of the black hole must be classical in nature; what if they were purely quantum mechanical instead and had non-zero entropy? This is what is realized in the LQG derivation of black hole entropy, and it can be seen as a consequence of its background-independence – the classical black hole spacetime comes about from the semiclassical limit of the quantum state of the gravitational field, but there are many quantum states that have the same semiclassical limit. Specifically, in LQG it is possible to associate a quantum geometrical interpretation to the microstates: these are the quantum geometries of the horizon which are consistent with the area, $A$, of the black hole and the topology of the horizon (i.e. spherical). LQG offers a geometric explanation of the finiteness of the entropy and of the proportionality of the entropy to the area of the horizon. These calculations have been generalized to rotating black holes. It is possible to derive, from the covariant formulation of the full quantum theory (spinfoam), the correct relation between energy and area (the first law), the Unruh temperature and the distribution that yields Hawking entropy. The calculation makes use of the notion of dynamical horizon and is done for non-extremal black holes. A recent success of the theory in this direction is the computation of the entropy of all non-singular black holes directly from theory and independent of the Immirzi parameter. The result is the expected formula $S = A/4$ (in Planck units), where $S$ is the entropy and $A$ the area of the black hole, derived by Bekenstein and Hawking on heuristic grounds.
This is the only known derivation of this formula from a fundamental theory, for the case of generic non-singular black holes. Older attempts at this calculation had difficulties. The problem was that although loop quantum gravity predicted that the entropy of a black hole is proportional to the area of the event horizon, the result depended on a crucial free parameter in the theory, the above-mentioned Immirzi parameter. However, there is no known computation of the Immirzi parameter, so it was fixed by demanding agreement with Bekenstein and Hawking's calculation of the black hole entropy. Hawking radiation in loop quantum gravity A detailed study of the quantum geometry of a black hole horizon has been made using loop quantum gravity. Loop-quantization does not reproduce the result for black hole entropy originally discovered by Bekenstein and Hawking, unless one chooses the value of the Immirzi parameter to cancel out another constant that arises in the derivation. However, it led to the computation of higher-order corrections to the entropy and radiation of black holes. Based on the fluctuations of the horizon area, a quantum black hole exhibits deviations from the Hawking spectrum that could be detected were X-rays from the Hawking radiation of evaporating primordial black holes to be observed. The quantum effects are centered at a set of discrete and unblended frequencies highly pronounced on top of the Hawking radiation spectrum. Planck star In 2014 Carlo Rovelli and Francesca Vidotto proposed that there is a Planck star inside every black hole. Based on LQG, the theory states that as stars collapse into black holes, the energy density reaches the Planck energy density, causing a repulsive force that creates a star. Furthermore, the existence of such a star would resolve the black hole firewall and black hole information paradoxes. Loop quantum cosmology The popular and technical literature makes extensive references to the LQG-related topic of loop quantum cosmology. LQC was mainly developed by Martin Bojowald. It was popularized in Scientific American for predicting a Big Bounce prior to the Big Bang. Loop quantum cosmology (LQC) is a symmetry-reduced model of classical general relativity quantized using methods that mimic those of loop quantum gravity (LQG), and it predicts a "quantum bridge" between contracting and expanding cosmological branches. Achievements of LQC have been the resolution of the Big Bang singularity, the prediction of a Big Bounce, and a natural mechanism for inflation. LQC models share features of LQG and so provide useful toy models. However, the results obtained are subject to the usual restriction that a truncated classical theory, then quantized, might not display the true behaviour of the full theory due to artificial suppression of degrees of freedom that might have large quantum fluctuations in the full theory. It has been argued that singularity avoidance in LQC is achieved by mechanisms only available in these restrictive models, and that singularity avoidance in the full theory can still be obtained, but by a more subtle feature of LQG. Loop quantum gravity phenomenology Quantum gravity effects are difficult to measure because the Planck length is so small. However, recently physicists, such as Jack Palmer, have started to consider the possibility of measuring quantum gravity effects, mostly from astrophysical observations and gravitational wave detectors.
The energy of such fluctuations at scales this small causes perturbations of space which are visible at higher scales. Background-independent scattering amplitudes Loop quantum gravity is formulated in a background-independent language. No spacetime is assumed a priori, but rather it is built up by the states of the theory themselves – however, scattering amplitudes are derived from $n$-point functions (correlation functions), and these, formulated in conventional quantum field theory, are functions of points of a background spacetime. The relation between the background-independent formalism and the conventional formalism of quantum field theory on a given spacetime is not obvious, and it is not obvious how to recover low-energy quantities from the full background-independent theory. One would like to derive the $n$-point functions of the theory from the background-independent formalism, in order to compare them with the standard perturbative expansion of quantum general relativity and therefore check that loop quantum gravity yields the correct low-energy limit. A strategy for addressing this problem has been suggested: to study the boundary amplitude, namely a path integral over a finite spacetime region, seen as a function of the boundary value of the field. In conventional quantum field theory, this boundary amplitude is well-defined and codes the physical information of the theory; it does so in quantum gravity as well, but in a fully background-independent manner. A generally covariant definition of $n$-point functions can then be based on the idea that the distance between physical points – the arguments of the $n$-point function – is determined by the state of the gravitational field on the boundary of the spacetime region considered. Progress has been made in calculating background-independent scattering amplitudes this way with the use of spin foams. This is a way to extract physical information from the theory. Claims to have reproduced the correct behaviour for graviton scattering amplitudes and to have recovered classical gravity have been made. "We have calculated Newton's law starting from a world with no space and no time." – Carlo Rovelli. Gravitons, string theory, supersymmetry, extra dimensions in LQG Some quantum theories of gravity posit a spin-2 quantum field that is quantized, giving rise to gravitons. In string theory, one generally starts with quantized excitations on top of a classically fixed background. The theory is thus described as background dependent. Particles like photons as well as changes in the spacetime geometry (gravitons) are both described as excitations on the string worldsheet. The background dependence of string theory can have physical consequences, such as determining the number of quark generations. In contrast, loop quantum gravity, like general relativity, is manifestly background independent, eliminating the background required in string theory. Loop quantum gravity, like string theory, also aims to overcome the nonrenormalizable divergences of quantum field theories. LQG does not introduce a background and excitations living on such a background, so LQG does not use gravitons as building blocks. Instead one expects that one may recover a kind of semiclassical limit or weak field limit where something like "gravitons" will show up again. In contrast, gravitons play a key role in string theory, where they are among the first (massless) level of excitations of a superstring.
LQG differs from string theory in that it is formulated in 3 and 4 dimensions and without supersymmetry or Kaluza–Klein extra dimensions, while the latter requires both to be true. There is no experimental evidence to date that confirms string theory's predictions of supersymmetry and Kaluza–Klein extra dimensions. In a 2003 paper "A Dialog on Quantum Gravity", Carlo Rovelli regards the fact that LQG is formulated in 4 dimensions and without supersymmetry as a strength of the theory, as it represents the most parsimonious explanation, consistent with current experimental results, over its rival string/M-theory. Proponents of string theory will often point to the fact that, among other things, it demonstrably reproduces the established theories of general relativity and quantum field theory in the appropriate limits, which loop quantum gravity has struggled to do. In that sense string theory's connection to established physics may be considered more reliable and less speculative, at the mathematical level. Loop quantum gravity has nothing to say about the matter (fermions) in the universe. Since LQG has been formulated in 4 dimensions (with and without supersymmetry), and M-theory requires supersymmetry and 11 dimensions, a direct comparison between the two has not been possible. It is possible to extend the mainstream LQG formalism to higher-dimensional supergravity (general relativity with supersymmetry and Kaluza–Klein extra dimensions), should experimental evidence establish their existence. It would therefore be desirable to have higher-dimensional supergravity loop quantizations at one's disposal in order to compare these approaches. A series of papers have been published attempting this. Most recently, Thiemann (and alumni) have made progress toward calculating black hole entropy for supergravity in higher dimensions. It will be useful to compare these results to the corresponding superstring calculations. LQG and related research programs Several research groups have attempted to combine LQG with other research programs: Johannes Aastrup, Jesper M. Grimstrup et al. combine noncommutative geometry with canonical quantum gravity and Ashtekar variables; Laurent Freidel, Simone Speziale, et al. combine spinors and twistor theory with loop quantum gravity; and Lee Smolin et al. combine it with Verlinde entropic gravity and loop gravity. Stephon Alexander, Antonino Marciano and Lee Smolin have attempted to explain the origins of weak force chirality in terms of Ashtekar's variables, which describe gravity as chiral, and LQG with Yang–Mills theory fields in four dimensions. Sundance Bilson-Thompson, Hackett et al. have attempted to introduce the standard model via LQG's degrees of freedom as an emergent property (by employing the idea of noiseless subsystems, a notion introduced in a more general situation for constrained systems by Fotini Markopoulou-Kalamara et al.) Furthermore, LQG has drawn philosophical comparisons with causal dynamical triangulation and asymptotically safe gravity, and the spinfoam with group field theory and the AdS/CFT correspondence. Smolin and Wen have suggested combining LQG with string-net liquids and tensors, and Smolin and Fotini Markopoulou-Kalamara have suggested quantum graphity. There is also the consistent discretizations approach. Also, Pullin and Gambini provide a framework to connect the path integral and canonical approaches to quantum gravity. They may help reconcile the spin foam and canonical loop representation approaches.
Recent research by Chris Duston and Matilde Marcolli introduces topology change via topspin networks. Problems and comparisons with alternative approaches Some of the major unsolved problems in physics are theoretical, meaning that existing theories seem incapable of explaining a certain observed phenomenon or experimental result. The others are experimental, meaning that there is a difficulty in creating an experiment to test a proposed theory or investigate a phenomenon in greater detail. Many of these problems apply to LQG, including: Can quantum mechanics and general relativity be realized as a fully consistent theory (perhaps as a quantum field theory)? Is spacetime fundamentally continuous or discrete? Would a consistent theory involve a force mediated by a hypothetical graviton, or be a product of a discrete structure of spacetime itself (as in loop quantum gravity)? Are there deviations from the predictions of general relativity at very small or very large scales or in other extreme circumstances that flow from a quantum gravity theory? The theory of LQG is one possible solution to the problem of quantum gravity, as is string theory. There are substantial differences, however. For example, string theory also addresses unification, the understanding of all known forces and particles as manifestations of a single entity, by postulating extra dimensions and so-far unobserved additional particles and symmetries. Contrary to this, LQG is based only on quantum theory and general relativity, and its scope is limited to understanding the quantum aspects of the gravitational interaction. On the other hand, the consequences of LQG are radical, because they fundamentally change the nature of space and time and provide a tentative but detailed physical and mathematical picture of quantum spacetime. Presently, no semiclassical limit recovering general relativity has been shown to exist. This means it remains unproven that LQG's description of spacetime at the Planck scale has the right continuum limit (described by general relativity with possible quantum corrections). Specifically, the dynamics of the theory are encoded in the Hamiltonian constraint, but there is no candidate Hamiltonian. Other technical problems include finding off-shell closure of the constraint algebra and the physical inner product vector space, coupling to the matter fields of quantum field theory, and the fate of the renormalization of the graviton in perturbation theory, which leads to ultraviolet divergences beyond two loops (see one-loop Feynman diagram). While there has been a proposal relating to observation of naked singularities, and doubly special relativity as a part of a program called loop quantum cosmology, there is no experimental observation for which loop quantum gravity makes a prediction not made by the Standard Model or general relativity (a problem that plagues all current theories of quantum gravity). Because of the above-mentioned lack of a semiclassical limit, LQG has not yet even reproduced the predictions made by general relativity. An alternative criticism is that general relativity may be an effective field theory, and therefore quantization ignores the fundamental degrees of freedom. ESA's INTEGRAL satellite measured the polarization of photons of different wavelengths and was able to place a limit on the granularity of space of less than 10⁻⁴⁸ m, or 13 orders of magnitude below the Planck scale.
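The quoted "13 orders of magnitude" can be checked with a one-line calculation; the Planck length value below is the standard CODATA figure:

```python
import math

PLANCK_LENGTH = 1.616e-35  # metres
INTEGRAL_BOUND = 1e-48     # metres, the granularity limit quoted above

orders = math.log10(PLANCK_LENGTH / INTEGRAL_BOUND)
print(f"bound is {orders:.1f} orders of magnitude below the Planck scale")
# prints about 13.2, consistent with the figure quoted in the text
```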
See also Notes Citations Works cited (available here as of 2 May 2017) Further reading Popular books: Rodolfo Gambini and Jorge Pullin, Loop Quantum Gravity for Everyone, World Scientific, 2020. Carlo Rovelli, "Reality is not what it seems", Penguin, 2016. Martin Bojowald, Once Before Time: A Whole Story of the Universe 2010. Carlo Rovelli, What is Time? What is space?, Di Renzo Editore, Roma, 2006. Lee Smolin, Three Roads to Quantum Gravity, 2001. Magazine articles: Lee Smolin, "Atoms of Space and Time", Scientific American, January 2004. Martin Bojowald, "Following the Bouncing Universe", Scientific American, October 2008. Easier introductory, expository or critical works: Abhay Ashtekar, Gravity and the quantum, e-print available as gr-qc/0410054 (2004). John C. Baez and Javier P. Muniain, Gauge Fields, Knots and Quantum Gravity, World Scientific (1994). Carlo Rovelli, A Dialog on Quantum Gravity, e-print available as hep-th/0310077 (2003). Carlo Rovelli and Francesca Vidotto, Covariant Loop Quantum Gravity, Cambridge (2014); draft available online. More advanced introductory/expository works: Carlo Rovelli, Quantum Gravity, Cambridge University Press (2004); draft available online. Abhay Ashtekar, New Perspectives in Canonical Gravity, Bibliopolis (1988). Abhay Ashtekar, Lectures on Non-Perturbative Canonical Gravity, World Scientific (1991). Rodolfo Gambini and Jorge Pullin, Loops, Knots, Gauge Theories and Quantum Gravity, Cambridge University Press (1996). T. Thiemann The LQG – String: Loop Quantum Gravity Quantization of String Theory (2004). Topical reviews Carlo Rovelli and Marcus Gaul, Loop Quantum Gravity and the Meaning of Diffeomorphism Invariance, e-print available as gr-qc/9910079. Lee Smolin, The case for background independence, e-print available as hep-th/0507235. Alejandro Corichi, Loop Quantum Geometry: A primer, e-print available as Loop Quantum Geometry: A primer. Alejandro Perez, Introduction to loop quantum gravity and spin foams, e-print available as Introduction to Loop Quantum Gravity and Spin Foams. Fundamental research papers: Roger Penrose, Angular momentum: an approach to combinatorial space-time in Quantum Theory and Beyond, ed. Ted Bastin, Cambridge University Press, 1971. Carlo Rovelli and Lee Smolin, Discreteness of area and volume in quantum gravity, Nuclear Physics, B442 (1995). pp. 593–622, e-print available as . External links Introduction to Loop Quantum Gravity Online lectures by Carlo Rovelli Covariant Loop Quantum Gravity by Carlo Rovelli and Francesca Vidotto "Loop Quantum Gravity" by Carlo Rovelli Physics World, November 2003 Quantum Foam and Loop Quantum Gravity Abhay Ashtekar: Semi-Popular Articles. Some excellent popular articles suitable for beginners about space, time, GR, and LQG Loop Quantum Gravity: Lee Smolin Loop Quantum Gravity Lectures Online by Lee Smolin Spin networks, spin foams and loop quantum gravity Wired magazine, News: Moving Beyond String Theory April 2006 Scientific American Special Issue, A Matter of Time, has Lee Smolin LQG Article Atoms of Space and Time September 2006, The Economist, article Looping the loop Gamma-ray Large Area Space Telescope: The Fermi Gamma-ray Space Telescope Zeno meets modern science. Article from Acta Physica Polonica B by Z.K. Silagadze. Did pre-big bang universe leave its mark on the sky? 
– According to a model based on "loop quantum gravity" theory, a parent universe that existed before ours may have left an imprint (New Scientist, 10 April 2008) Physics beyond the Standard Model Theories of gravity
Loop quantum gravity
[ "Physics" ]
12,357
[ "Theoretical physics", "Unsolved problems in physics", "Particle physics", "Theories of gravity", "Physics beyond the Standard Model" ]
152,671
https://en.wikipedia.org/wiki/Z3%20%28computer%29
The Z3 was a German electromechanical computer designed by Konrad Zuse in 1938, and completed in 1941. It was the world's first working programmable, fully automatic digital computer. The Z3 was built with 2,600 relays, implemented a 22-bit word length, and operated at a clock frequency of about 5–10 Hz. Program code was stored on punched film. Initial values were entered manually. The Z3 was completed in Berlin in 1941. It was not considered vital, so it was never put into everyday operation. Based on the work of the German aerodynamics engineer Hans Georg Küssner (known for the Küssner effect), a "Program to Compute a Complex Matrix" was written and used to solve wing flutter problems. Zuse asked the German government for funding to replace the relays with fully electronic switches, but funding was denied during World War II since such development was deemed "not war-important". The original Z3 was destroyed on 21 December 1943 during an Allied bombardment of Berlin. That Z3 was originally called V3 (Versuchsmodell 3 or Experimental Model 3) but was renamed so that it would not be confused with Germany's V-weapons. A fully functioning replica was built in 1961 by Zuse's company, Zuse KG, and is now on permanent display at the Deutsches Museum in Munich. The Z3 was demonstrated in 1998 to be, in principle, Turing-complete. However, because it lacked conditional branching, the Z3 only meets this definition by speculatively computing all possible outcomes of a calculation. Thanks to this machine and its predecessors, Konrad Zuse has often been suggested as the inventor of the computer. Design and development Zuse designed the Z1 in 1935 to 1936 and built it from 1936 to 1938. The Z1 was wholly mechanical and only worked for a few minutes at a time at most. Helmut Schreyer advised Zuse to use a different technology. As a doctoral student at the Technische Hochschule in Charlottenburg (now Technische Universität Berlin) in 1937, he worked on the implementation of Boolean operations and (in today's terminology) flip-flops on the basis of vacuum tubes. In 1938, Schreyer demonstrated a circuit on this basis to a small audience, and explained his vision of an electronic computing machine – but since the largest operational electronic devices contained far fewer tubes, this was considered practically infeasible. That year, when presenting the plan for a computer with 2,000 electron tubes, Zuse and Schreyer, who was an assistant at the Telecommunication Institute at Technische Universität Berlin, were discouraged by members of the institute who knew about the problems with electron tube technology. Zuse later recalled: "They smiled at us in 1939, when we wanted to build electronic machines ... We said: The electronic machine is great, but first the components have to be developed." In 1940, Zuse and Schreyer managed to arrange a meeting at the Oberkommando der Wehrmacht (OKW) to discuss a potential project for developing an electronic computer, but when they estimated a duration of two or three years, the proposal was rejected. Zuse decided to implement the next design based on relays. The realization of the Z2 was helped financially by Kurt Pannke, who manufactured small calculating machines. The Z2 was completed and presented to an audience of the Deutsche Versuchsanstalt für Luftfahrt ("German Laboratory for Aviation", DVL) in 1940 in Berlin-Adlershof. Zuse was lucky – this presentation was one of the few instances where the Z2 actually worked and could convince the DVL to partly finance the next design.
In 1941, improving on the basic Z2 machine, he built the Z3 in a highly secret project of the German government. Joseph Jennissen (1905–1977), a member of the "Research-Leadership" (Forschungsführung) in the Reich Air Ministry, acted as a government supervisor for orders of the ministry to Zuse's company ZUSE Apparatebau. A further intermediary between Zuse and the Reich Air Ministry was the aerodynamicist Herbert A. Wagner. The Z3 was completed in 1941 and was faster and far more reliable than the Z1 and Z2. The Z3 floating-point arithmetic was improved over that of the Z1 in that it implemented exception handling "using just a few relays"; the exceptional values (plus infinity, minus infinity and undefined) could be generated and passed through operations. It further added a square root instruction. The Z3, like its predecessors, stored its program on an external punched tape, so no rewiring was necessary to change programs. However, it did not have the conditional branching found in later universal computers. On 12 May 1941, the Z3 was presented to an audience of scientists including the professors Alfred Teichmann and Curt Schmieden of the Deutsche Versuchsanstalt für Luftfahrt ("German Laboratory for Aviation") in Berlin, today known as the German Aerospace Center in Cologne. Zuse moved on to the Z4 design, which he completed in a bunker in the Harz mountains, alongside Wernher von Braun's ballistic missile development. When World War II ended, Zuse retreated to Hinterstein in the Alps with the Z4, where he remained for several years. Instruction set The Z3 operated as a stack machine with a stack of two registers, R1 and R2. The first load operation in a program would load the contents of a memory location into R1; the next load operation would load the contents of a memory location into R2. Arithmetic instructions would operate on the contents of R1 and R2, leaving the result in R1 and clearing R2; the next load operation would load into R2. A store operation would store the contents of R1 into a memory location, and clear R1; the next load operation would load the contents of a memory location into R1. A read-keyboard operation would read a number from the keyboard into R1 and clear R2. A display instruction would display the contents of R1 and clear R2; the next load instruction would load into R2. Z3 as a universal Turing machine It was possible to construct loops on the Z3, but there was no conditional branch instruction. Nevertheless, the Z3 was Turing-complete – how to implement a universal Turing machine on the Z3 was shown in 1998 by Raúl Rojas. He proposed that the tape program would have to be long enough to execute every possible path through both sides of every branch. It would compute all possible answers, but the unneeded results would be canceled out (a kind of speculative execution). Rojas concludes, "We can therefore say that, from an abstract theoretical perspective, the computing model of the Z3 is equivalent to the computing model of today's computers. From a practical perspective, and in the way the Z3 was really programmed, it was not equivalent to modern computers." This seeming limitation belies the fact that the Z3 provided a practical instruction set for the typical engineering applications of the 1940s. Mindful of the existing hardware restrictions, Zuse's main goal at the time was to have a workable device to facilitate his work as a civil engineer. Relation to other work The success of Zuse's Z3 is often attributed to its use of the simple binary system.
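The two-register discipline described under "Instruction set" can be made concrete with a toy interpreter. This is an illustrative sketch only: the method names are invented for the example and do not reflect Zuse's actual punched-film opcodes, and ordinary Python floats stand in for the Z3's 22-bit binary floating-point words.

```python
class Z3Toy:
    """Toy model of the Z3's two-register (R1/R2) load/store discipline."""

    def __init__(self):
        self.r1 = None           # top of the two-register stack
        self.r2 = None
        self.mem = [0.0] * 64    # the Z3 had 64 words of data memory

    def load(self, addr):
        # The first load fills R1; while R1 is occupied, loads fill R2.
        if self.r1 is None:
            self.r1 = self.mem[addr]
        else:
            self.r2 = self.mem[addr]

    def add(self):
        # Arithmetic combines R1 and R2, leaves the result in R1
        # and clears R2, so the next load goes to R2.
        self.r1 = self.r1 + self.r2
        self.r2 = None

    def store(self, addr):
        # Store writes R1 to memory and clears R1,
        # so the next load goes to R1 again.
        self.mem[addr] = self.r1
        self.r1 = None

m = Z3Toy()
m.mem[0], m.mem[1] = 2.5, 4.0
m.load(0)        # R1 <- mem[0]
m.load(1)        # R2 <- mem[1]
m.add()          # R1 <- R1 + R2; R2 cleared
m.store(2)       # mem[2] <- R1; R1 cleared
print(m.mem[2])  # 6.5
```

All of this arithmetic was carried out internally in binary floating point, which is one reason the machine's reliance on the binary system is so often emphasized.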
The binary system was invented roughly three centuries earlier by Gottfried Leibniz; Boole later used it to develop his Boolean algebra. Zuse was inspired by Hilbert's and Ackermann's book on elementary mathematical logic, Principles of Mathematical Logic. In 1937, Claude Shannon introduced the idea of mapping Boolean algebra onto electronic relays in a seminal work on digital circuit design. Zuse, however, did not know of Shannon's work and developed the groundwork independently for his first computer Z1, which he designed and built from 1935 to 1938. Zuse's coworker Helmut Schreyer built an electronic digital experimental model of a computer using 100 vacuum tubes in 1942, but it was lost at the end of the war. An analog computer was built by the rocket scientist Helmut Hölzer in 1942 at the Peenemünde Army Research Center to simulate V-2 rocket trajectories. The Colossus (1943), built by Tommy Flowers, and the Atanasoff–Berry computer (1942) used thermionic valves (vacuum tubes) and binary representation of numbers. Programming was by means of re-plugging patch panels and setting switches. The ENIAC computer, completed after the war, used vacuum tubes to implement switches and used decimal representation for numbers. Until 1948, programming was, as with Colossus, by patch leads and switches. The Manchester Baby of 1948, along with the Manchester Mark 1 and EDSAC, both of 1949, were the world's earliest working computers that stored program instructions and data in the same space. In this they implemented the stored-program concept which is frequently (but erroneously) attributed to a 1945 paper by John von Neumann and colleagues. Von Neumann is said to have given due credit to Alan Turing, and the concept had actually been mentioned earlier by Konrad Zuse himself, in a 1936 patent application (that was rejected). Konrad Zuse himself remembered in his memoirs: "During the war it would have barely been possible to build efficient stored program devices anyway." Friedrich L. Bauer later wrote: "His visionary ideas (live programs) which were only to be published years afterwards aimed at the right practical direction but were never implemented by him." Specifications Average calculation speed: addition – 0.8 seconds, multiplication – 3 seconds Arithmetic unit: Binary floating-point, 22-bit, add, subtract, multiply, divide, square root Data memory: 64 22-bit words Program memory: Punched celluloid tape Input: Decimal floating-point numbers Output: Decimal floating-point numbers Input and output were facilitated by a terminal, with a special keyboard for input and a row of lamps to show results Elements: Around 2,000 relays (1,400 for the memory) Frequency: 5–10 hertz Power consumption: Around 4,000 watts Weight: Around Modern reconstructions A modern reconstruction directed by Raúl Rojas and Horst Zuse started in 1997 and finished in 2003. It is now in the Konrad Zuse Museum in Hünfeld, Germany. Memory was halved to 32 words. Power consumption is about 400 W, and weight is about . In 2008, Horst Zuse started a reconstruction of the Z3 by himself. It was presented in 2010 in the Konrad Zuse Museum in Hünfeld. See also History of computing hardware Reverse Polish notation (RPN) Notes References Further reading External links Z3 page at Horst Zuse's website The Life and Work of Konrad Zuse Paul E. Ceruzzi Collection on Konrad Zuse (CBI 219). Charles Babbage Institute, University of Minnesota. Collection contains published reports, articles, product literature, and other materials.
1940s computers Z3 One-of-a-kind computers German inventions of the Nazi period World War II German electronics Computer-related introductions in 1941 Konrad Zuse Computers designed in Germany Serial computers
Z3 (computer)
[ "Technology" ]
2,297
[ "Serial computers", "Computers" ]
152,692
https://en.wikipedia.org/wiki/Tractor
A tractor is an engineering vehicle specifically designed to deliver a high tractive effort (or torque) at slow speeds, for the purposes of hauling a trailer or machinery such as that used in agriculture, mining or construction. Most commonly, the term is used to describe a farm vehicle that provides the power and traction to mechanize agricultural tasks, especially (and originally) tillage, and now many more. Agricultural implements may be towed behind or mounted on the tractor, and the tractor may also provide a source of power if the implement is mechanised. Etymology The word tractor was taken from Latin, being the agent noun of trahere "to pull". The first recorded use of the word meaning "an engine or vehicle for pulling wagons or plows" occurred in 1896, from the earlier term "traction motor" (1859). National variations In the UK, Ireland, Australia, India, Spain, Argentina, Slovenia, Serbia, Croatia, the Netherlands, and Germany, the word "tractor" usually means "farm tractor", and the use of the word "tractor" to mean other types of vehicles is familiar to the vehicle trade, but unfamiliar to much of the general public. In Canada and the US, the word may also refer to the road tractor portion of a tractor trailer truck, but also usually refers to the piece of farm equipment. History Traction engines The first powered farm implements in the early 19th century were portable engines – steam engines on wheels that could be used to drive mechanical farm machinery by way of a flexible belt. Richard Trevithick designed the first 'semi-portable' stationary steam engine for agricultural use, known as a "barn engine" in 1812, and it was used to drive a corn threshing machine. The truly portable engine was invented in 1839 by William Tuxford of Boston, Lincolnshire who started manufacture of an engine built around a locomotive-style boiler with horizontal smoke tubes. A large flywheel was mounted on the crankshaft, and a stout leather belt was used to transfer the drive to the equipment being driven. In the 1850s, John Fowler used a Clayton & Shuttleworth portable engine to drive apparatus in the first public demonstrations of the application of cable haulage to cultivation. In parallel with the early portable engine development, many engineers attempted to make them self-propelled – the fore-runners of the traction engine. In most cases this was achieved by fitting a sprocket on the end of the crankshaft, and running a chain from this to a larger sprocket on the rear axle. These experiments met with mixed success. The first proper traction engine, in the form recognisable today, was developed in 1859 when British engineer Thomas Aveling modified a Clayton & Shuttleworth portable engine, which had to be hauled from job to job by horses, into a self-propelled one. The alteration was made by fitting a long driving chain between the crankshaft and the rear axle. The first half of the 1860s was a period of great experimentation but by the end of the decade the standard form of the traction engine had evolved and changed little over the next sixty years. It was widely adopted for agricultural use. The first tractors were steam-powered plowing engines. They were used in pairs, placed on either side of a field to haul a plow back and forth between them using a wire cable. In Britain Mann's and Garrett developed steam tractors for direct ploughing, but the heavy, wet soil of England meant that these designs were less economical than a team of horses. 
In the United States, where soil conditions permitted, steam tractors were used to direct-haul plows. Steam-powered agricultural engines remained in use well into the 20th century until reliable internal combustion engines had been developed. Fuel The first gasoline-powered tractors were built in Illinois in 1889 by John Charter, who combined single-cylinder Otto engines with a Rumely steam engine chassis. In 1892, John Froelich built a gasoline-powered tractor in Clayton County, Iowa, US. A Van Duzen single-cylinder gasoline engine was mounted on a Robinson engine chassis, which could be controlled and propelled by Froelich's gear box. After receiving a patent, Froelich started up the Waterloo Gasoline Engine Company and invested all of his assets. The venture was very unsuccessful, and by 1895 all was lost and he went out of business. Richard Hornsby & Sons are credited with producing and selling the first oil-engined tractor in Britain, invented by Herbert Akroyd Stuart. The Hornsby-Akroyd Patent Safety Oil Traction Engine was made in 1896 with an engine. In 1897, it was bought by Mr. Locke-King, the first recorded British tractor sale. That year, it won a Silver Medal from the Royal Agricultural Society of England. It later returned to the factory for a caterpillar track fitting. The first commercially successful light-weight petrol-powered general purpose tractor was built by Dan Albone, a British inventor, in 1901. He filed for a patent on 15 February 1902 for his tractor design and then formed Ivel Agricultural Motors Limited. The other directors were Selwyn Edge, Charles Jarrott, John Hewitt and Lord Willoughby. He called his machine the Ivel Agricultural Motor; the word "tractor" came into common use after Hart-Parr created it. The Ivel Agricultural Motor was light, powerful and compact. It had one front wheel, with a solid rubber tyre, and two large rear wheels like a modern tractor. The engine used water cooling, utilizing the thermo-syphon effect. It had one forward and one reverse gear. A pulley wheel on the left hand side allowed it to be used as a stationary engine, driving a wide range of agricultural machinery. The 1903 sale price was £300. His tractor won a medal at the Royal Agricultural Show in 1903 and 1904. About 500 were built, and many were exported all over the world. The original engine was made by Payne & Co. of Coventry. After 1906, French Aster engines were used. The first successful American tractor was built by Charles W. Hart and Charles H. Parr. They developed a two-cylinder gasoline engine and set up their business in Charles City, Iowa. In 1903, the firm built 15 tractors. Their #3 is the oldest surviving internal combustion engine tractor in the United States, and is on display at the Smithsonian National Museum of American History in Washington, D.C. The two-cylinder engine has a unique hit-and-miss firing cycle that produced at the belt and at the drawbar. In 1908, the Saunderson Tractor and Implement Co. of Bedford introduced a four-wheel design, and became the largest tractor manufacturer in Britain at the time. While the earlier, heavier tractors were initially very successful, it became increasingly apparent at this time that the weight of a large supporting frame was less efficient than lighter designs. Henry Ford introduced a light-weight, mass-produced design which largely displaced the heavier designs.
Some companies halfheartedly followed suit with mediocre designs, as if to disprove the concept, but they were largely unsuccessful. While unpopular at first, these gasoline-powered machines began to catch on in the 1910s, when they became smaller and more affordable. Henry Ford introduced the Fordson, a wildly popular mass-produced tractor, in 1917. They were built in the U.S., Ireland, England and Russia, and by 1923, Fordson had 77% of the U.S. market. The Fordson dispensed with a frame, using the strength of the engine block to hold the machine together. By the 1920s, tractors with gasoline-powered internal combustion engines had become the norm. The first three-point hitches were experimented with in 1917. After Harry Ferguson applied for a British patent for his three-point hitch in 1926, they became popular. A three-point attachment of the implement to the tractor is the simplest and the only statically determinate way of joining two bodies in engineering. The Ferguson-Brown Company produced the Model A Ferguson-Brown tractor with a Ferguson-designed hydraulic hitch. In 1938 Ferguson entered into a collaboration with Henry Ford to produce the Ford-Ferguson 9N tractor. The three-point hitch soon became the favorite hitch attachment system among farmers around the world. This tractor model also included a rear power take-off (PTO) shaft that could be used to power three-point-hitch-mounted implements such as sickle-bar mowers. Electric In 1969, General Electric introduced the Elec-Trak, the first commercial electric tractor (an electric-powered garden tractor). The Elec-Trak was manufactured by General Electric until 1975. Electric tractors are manufactured by a German company, Fendt, and by US companies, Solectrac and Monarch Tractor. John Deere's prototype electric tractor is a plug-in, powered by an electrical cable. Kubota is prototyping an autonomous electric tractor. Design, power and transmission Configuration Tractors can be generally classified by number of axles or wheels, with main categories of two-wheel tractors (single-axle tractors) and four-wheel tractors (two-axle tractors); more axles are possible but uncommon. Among four-wheel tractors (two-axle tractors), most are two-wheel drive (usually at the rear); but many are two-wheel drive with front wheel assist, four-wheel drive (often with articulated steering), or track crawler (with steel or rubber tracks). The classic farm tractor is a simple open vehicle, with two very large driving wheels on an axle below a single seat (the seat and steering wheel consequently are in the center), and the engine in front of the driver, with two steerable wheels below the engine compartment. This basic design has remained unchanged for a number of years after being pioneered by Wallis, but enclosed cabs are fitted on almost all modern models, for operator safety and comfort. In some localities with heavy or wet soils, notably in the Central Valley of California, the "Caterpillar" or "crawler" type of tracked tractor became popular due to superior traction and flotation. These were usually maneuvered through the use of turning brake pedals and separate track clutches operated by levers rather than a steering wheel. Four-wheel drive tractors began to appear in the 1960s. Some four-wheel drive tractors have the standard "two large, two small" configuration typical of smaller tractors, while some have four large, powered wheels.
The larger tractors are typically an articulated, center-hinged design steered by hydraulic cylinders that move the forward power unit while the trailing unit is not steered separately. In the early 21st century, articulated or non-articulated, steerable multitrack tractors have largely supplanted the Caterpillar type for farm use. Larger types of modern farm tractors include articulated four-wheel or eight-wheel drive units with one or two power units which are hinged in the middle and steered by hydraulic clutches or pumps. A relatively recent development is the replacement of wheels or steel crawler-type tracks with flexible, steel-reinforced rubber tracks, usually powered by hydrostatic or completely hydraulic driving mechanisms. The configuration of these tractors bears little resemblance to the classic farm tractor design. Engine and fuels The predecessors of modern tractors, traction engines, used steam engines for power. Gasoline and kerosene Since the turn of the 20th century, internal combustion engines have been the power source of choice. Between 1900 and 1960, gasoline was the predominant fuel, with kerosene (the Rumely Oil Pull being the most notable of this kind) a common alternative. Generally, one engine could burn any of those, although cold starting was easiest on gasoline. Often, a small auxiliary fuel tank was available to hold gasoline for cold starting and warm-up, while the main fuel tank held whatever fuel was most convenient or least expensive for the particular farmer. In the United Kingdom, a gasoline-kerosene engine is known as a petrol-paraffin engine. Diesel Dieselisation gained momentum starting in the 1960s, and modern farm tractors usually employ diesel engines, which range in power output from 18 to 575 horsepower (15 to 480 kW). Size and output are dependent on application, with smaller tractors used for lawn mowing, landscaping, orchard work, and truck farming, and larger tractors for vast fields of wheat, corn, soy, and other bulk crops. Liquefied petroleum gas Liquefied petroleum gas (LPG) or propane have also been used as tractor fuels, but they require special pressurized fuel tanks and filling equipment and produce less power, so they are less prevalent in most markets. Because they burn cleanly, such tractors are mostly confined to indoor work. Wood During the Second World War, petroleum-based fuel was scarce in many European nations, so wood gasifiers were fitted to vehicles of every kind, including tractors. Biodiesel In some countries such as Germany, biodiesel is often used. Some other biofuels such as straight vegetable oil are also being used by some farmers. Electric powered Prototype battery-powered electric tractors are being developed by a German company, Fendt, and by two US companies, Solectrac and Monarch Tractor. John Deere's prototype electric tractor is a plug-in, powered by an electrical cable. Kubota is prototyping an autonomous electric tractor. Transmission Most older farm tractors use a manual transmission with several gear ratios, typically three to six, sometimes multiplied into two or three ranges. This arrangement provides a set of discrete ratios that, combined with the varying of the throttle, allow final-drive speeds from less than one up to about 25 miles per hour (40 km/h), with the lower speeds used for working the land and the highest speed used on the road. Slow, controllable speeds are necessary for most of the operations performed with a tractor.
They help give the farmer a larger degree of control in certain situations, such as field work. When travelling on public roads, the slow operating speeds can cause problems, such as long queues or tailbacks, which can delay or annoy motorists in cars and trucks. These motorists are responsible for being duly careful around farm tractors and sharing the road with them, but many shirk this responsibility, so various ways to minimize the interaction or minimize the speed differential are employed where feasible. Some countries (for example the Netherlands) employ a road sign on some roads that means "no farm tractors". Some modern tractors, such as the JCB Fastrac, are now capable of much higher road speeds of around 50 mph (80 km/h). Older tractors usually have unsynchronized transmission designs, which often require the operator to engage the clutch to shift between gears. This mode of use is inherently unsuited to some of the work tractors do, and has been circumvented in various ways over the years. For existing unsynchronized tractors, the methods of circumvention are double clutching or power-shifting, both of which require the operator to rely on skill to speed-match the gears while shifting, and are undesirable from a risk-mitigation standpoint because of what can go wrong if the operator makes a mistake – transmission damage is possible, and loss of vehicle control can occur if the tractor is towing a heavy load either uphill or downhill – something that tractors often do. Therefore, operator's manuals for most of these tractors state one must always stop the tractor before shifting. In newer designs, unsynchronized transmissions were replaced with synchronization or with continuously variable transmissions (CVTs). Either a synchronized manual transmission with enough available gear ratios (often achieved with dual ranges, high and low) or a CVT allows the engine speed to be matched to the desired final-drive speed, while keeping engine speed within the appropriate speed (as measured in rotations per minute or rpm) range for power generation (the working range), whereas throttling back to achieve the desired final-drive speed is a trade-off that leaves the working range. The problems, solutions, and developments described here also describe the history of transmission evolution in semi-trailer trucks. The biggest difference is fleet turnover; whereas most of the old road tractors have long since been scrapped, many of the old farm tractors are still in use. Therefore, old transmission design and operation is primarily just of historical interest in trucking, whereas in farming it still often affects daily life. Hitches and power applications The power produced by the engine must be transmitted to the implement or equipment to do the actual work intended. This may be accomplished via a drawbar or hitch system if the implement is to be towed or otherwise pulled through the tractive power of the engine, or via a pulley or power takeoff system if the implement is stationary, or a combination of the two. Drawbars Plows and other tillage equipment are most commonly connected to the tractor via a drawbar. The classic drawbar is simply a steel bar attached to the tractor (or in some cases, as in the early Fordsons, cast as part of the rear transmission housing) to which the hitch of the implement was attached with a pin or by a loop and clevis. The implement could be readily attached and removed, allowing the tractor to be used for other purposes on a daily basis.
If the tractor was equipped with a swinging drawbar, then it could be set at the center or offset from center to allow the tractor to run outside the path of the implement. The drawbar system necessitated the implement having its own running gear (usually wheels) and, in the case of a plow, chisel cultivator or harrow, some sort of lift mechanism to raise it out of the ground at turns or for transport. Drawbars necessarily posed a rollover risk depending on how the tractive torque was applied. The Fordson tractor was prone to roll backward due to an excessively short wheelbase. The linkage between the implement and the tractor usually had some slack, which could lead to jerky starts and greater wear and tear on the tractor and the equipment. Drawbars were appropriate to the dawn of mechanization, because they were very simple in concept and because, as the tractor replaced the horse, existing horse-drawn implements usually already had running gear. As the history of mechanization progressed, the advantages of other hitching systems became apparent, leading to new developments (see below). Depending on the function for which a tractor is used, though, the drawbar is still one of the usual means of attaching an implement to a tractor. Fixed mounts Some tractor manufacturers produced matching equipment that could be directly mounted on the tractor. Examples included front-end loaders, belly mowers, row crop cultivators, corn pickers and corn planters. In most cases, these fixed mounts were proprietary and unique to each make of tractor, so an implement produced by John Deere, for example, could not be attached to a Minneapolis Moline tractor. Another disadvantage was that mounting usually required some time and labor, resulting in the implement being semi-permanently attached with bolts or other mounting hardware. Usually, it was impractical to remove the implement and reinstall it on a day-to-day basis. As a result, the tractor was unavailable for other uses and dedicated to a single use for an appreciable period of time. An implement was generally mounted at the beginning of its season of use (such as tillage, planting or harvesting) and removed when the season ended. Three-point and quick The drawbar system was virtually the exclusive method of attaching implements (other than direct attachment to the tractor) before Harry Ferguson developed the three-point hitch. Equipment attached to the three-point hitch can be raised or lowered hydraulically with a control lever. The equipment attached to the three-point hitch is usually completely supported by the tractor. Another way to attach an implement is via a quick hitch, which is attached to the three-point hitch. This enables a single person to attach an implement more quickly and with less danger. The three-point hitch revolutionized farm tractors and their implements. While the Ferguson System was still under patent, other manufacturers developed new hitching systems to try to fend off some of Ferguson's competitive advantage. For example, International Harvester's Farmall tractors gained a two-point "Fast Hitch", and John Deere had a power lift that was somewhat similar to the more flexible Ferguson invention. Once the patent protection expired on the three-point hitch, it became an industry standard. Almost every tractor today features Ferguson's three-point linkage or a derivative of it.
This hitch allows for easy attachment and detachment of implements while allowing the implement to function as a part of the tractor, almost as if it were attached by a fixed mount. Previously, when the implement hit an obstacle, the towing link broke or the tractor flipped over. Ferguson's idea was to combine a connection via two lower and one upper lift arms that were connected to a hydraulic lifting ram. The ram was, in turn, connected to the upper of the three links so that increased drag (as when a plough hits a rock) caused the hydraulics to lift the implement until the obstacle was passed. Recently, Bobcat's patent on its front loader connection (inspired by these earlier systems) has expired, and compact tractors are now being outfitted with quick-connect attachments for their front-end loaders. Power take-off systems and hydraulics In addition to towing an implement or supplying tractive power through the wheels, most tractors have a means to transfer power to another machine such as a baler, swather, or mower. Unless an implement functions solely by being pulled through or over the ground, it needs its own power source (such as a baler or combine with a separate engine) or else a means of transmitting power from the tractor to its mechanical operations. Early tractors used belts or cables wrapped around the flywheel or a separate belt pulley to power stationary equipment, such as a threshing machine, buzz saw, silage blower, or stationary baler. In most cases, it was impractical for the tractor and equipment to move with a flexible belt or cable between them, so this system required the tractor to remain in one location, with the work brought to the equipment, or the tractor to be relocated at each turn and the power set-up reapplied (as in cable-drawn plowing systems used in early steam tractor operations). Modern tractors use a power take-off (PTO) shaft to provide rotary power to machinery that may be stationary or pulled. The PTO shaft generally is at the rear of the tractor, and can be connected to an implement that is either towed by a drawbar or a three-point hitch. This eliminates the need for a separate, implement-mounted power source, which as a result is almost never seen in modern farm equipment. A front PTO is also available as an option on many new tractors. Virtually all modern tractors can also provide external hydraulic fluid and electrical power to the equipment they are towing, either by hoses or wires. Operation Modern tractors have many electrical switches and levers in the cab for controlling the multitude of different functions available on the tractor. Pedals Some modern farm tractors retain a traditional manual transmission; increasingly they have hydraulically driven powershift transmissions and CVTs, which vastly simplify operation. Those with powershift transmissions have an otherwise identical pedal arrangement on the floor for the operator to actuate, with the clutch pedal on the far left replaced by an inching pedal that cuts off hydraulic flow to the clutches. Twinned brake pedals – one each for left and right side wheels – are placed together on the right side. Some have a pedal for a foot throttle on the far right. Unlike in automobiles, throttle speed can also be controlled by a hand-operated lever ("hand throttle"), which may be set to a fixed position. This helps provide a constant speed in field work. It also helps provide continuous power for stationary tractors that are operating an implement by PTO shaft or axle driven belt.
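To make the constant-speed point concrete, here is a small illustrative sketch (not part of the original article; the function names, the 3.6:1 PTO reduction, and the pulley diameters are hypothetical example values). It shows why holding the engine at a fixed rpm with the hand throttle also holds the PTO shaft or belt-driven implement at a fixed speed: both are simple fixed multiples of engine speed.

```python
# Illustrative sketch only: constant engine rpm gives constant PTO or
# belt speed. The gear ratio and pulley diameters are hypothetical.

def pto_speed(engine_rpm: float, pto_reduction: float) -> float:
    """PTO shaft speed (rpm) from engine speed and a fixed gear reduction."""
    return engine_rpm / pto_reduction

def belt_driven_speed(drive_pulley_dia_cm: float,
                      driven_pulley_dia_cm: float,
                      drive_rpm: float) -> float:
    """Speed (rpm) of a belt-driven implement pulley, set by the diameter ratio."""
    return drive_rpm * drive_pulley_dia_cm / driven_pulley_dia_cm

if __name__ == "__main__":
    # A hypothetical 3.6:1 reduction yields the common 540 rpm PTO standard
    # when the hand throttle holds the engine at 1944 rpm.
    print(pto_speed(engine_rpm=1944.0, pto_reduction=3.6))   # -> 540.0
    # A 30 cm tractor belt pulley driving a 20 cm implement pulley at 600 rpm:
    print(belt_driven_speed(30.0, 20.0, drive_rpm=600.0))    # -> 900.0
```

Because each output is a fixed ratio of engine rpm, any drift in engine speed shows up proportionally at the implement, which is why a locked hand throttle (rather than a foot pedal) suits stationary PTO and belt work.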
The foot throttle gives the operator more automobile-like control over the speed of a mobile tractor in any operation. Some modern tractors also have (or offer as optional equipment) a button on the gear stick for controlling the clutch, in addition to the standard pedal, allowing gear changes to be made and the tractor to be brought to a stop without using the foot pedal to engage the clutch. Others have a button for temporarily increasing throttle speed to improve hydraulic flow to implements, such as a front end loader bucket. Independent left and right brake pedals are provided to allow improved steering (by engaging the side one wishes to turn to, slowing or stopping its wheel) and improved traction in soft and slippery conditions (by transferring rotation to the wheel with better grip). Some users prefer to lock both pedals together, or utilize a partial lock that allows the left pedal to be depressed independently but engages both when the right is applied. This may be in the form of a swinging or sliding bolt that may be readily engaged or disengaged in the field without tools. Foot-pedal throttle control has largely returned as a feature of newer tractors. In the UK, use of a foot pedal to control engine speed while travelling on the road is mandatory. Some tractors, especially those designed for row-crop work, have a 'de-accelerator' pedal, which operates in the reverse fashion of an automobile throttle, slowing the engine when applied. This allows control over the speed of a tractor with its throttle set high for work, as when repeatedly slowing to make U-turns at the end of crop rows in fields. A front-facing foot button is traditionally included just ahead of the driver's seat (designed to be pressed by the operator's heel) to engage the rear differential lock (diff-lock), which prevents wheel slip. The differential normally allows driving wheels to operate at their own speeds, as required, for example, by the different radius each takes in a turn. This allows the outside wheel to travel faster than the inside wheel, thereby traveling further during a turn. In low-traction conditions on a soft surface, the same mechanism can allow one wheel to slip, wasting its torque and further reducing traction. The differential lock overrides this, forcing both wheels to turn at the same speed, reducing wheel slip and improving traction. Care must be taken to unlock the differential before turning, usually by hitting the pedal a second time, since a tractor with good traction cannot perform a turn with the diff-lock engaged. In many modern tractors, this pedal is replaced with an electrical switch. Levers and switches Many functions once controlled with levers have been replaced by electrical switches with the rise of indirect, computer-mediated control of functions in modern tractors. Until the late 1950s, tractors had a single register of gears, hence one gear stick, often with three to five forward gears and one reverse. Then, group gears were introduced, and another gear stick was added. Later, control of the forward-reverse direction was moved to a special stick attached at the side of the steering wheel, which allowed forward or reverse travel in any gear. Now, with CVTs or other gear types, fewer sticks control the transmission, and some are replaced with electrical switches or are totally computer-controlled. The three-point hitch was controlled with a lever for adjusting the position, or, as with the earliest ones, with just a function for raising or lowering the hitch.
With modern electrical systems, it is often replaced with a potentiometer for the lower bound position and another one for the upper bound, and a switch allowing automatic adjustment of the hitch between these settings. The external hydraulics also originally had levers, but these are now often replaced with some form of electrical switch; the same is true for the power take-off shaft. Safety Agriculture in the United States is one of the most hazardous industries, surpassed only by mining and construction. No other farm machine is so identified with the hazards of production agriculture as the tractor. Tractor-related injuries account for approximately 32% of the fatalities and 6% of the nonfatal injuries in agriculture. Over 50% of these are attributed to tractor overturns. The roll-over protection structure (ROPS) and seat belt, when worn, are the most important safety devices to protect operators from death during tractor overturns. Modern tractors have a ROPS to prevent an operator from being crushed if the tractor overturns. This is especially important in open-air tractors, where the ROPS is a steel beam that extends above the operator's seat. For tractors with operator cabs, the ROPS is part of the frame of the cab. A ROPS with enclosed cab further reduces the likelihood of serious injury because the operator is protected by the sides and windows of the cab. These structures were first required by legislation in Sweden in 1959. Before they were required, some farmers died when their tractors rolled on top of them. Row-crop tractors, before ROPS, were particularly dangerous because of their 'tricycle' design with the two front wheels spaced close together and angled inward toward the ground. Some farmers were killed by rollovers while operating tractors along steep slopes. Others have been killed while attempting to tow or pull an excessive load from above axle height, or when cold weather caused the tires to freeze to the ground, in both cases causing the tractor to pivot around the rear axle. ROPS were first required in the United States in 1986, non-retroactively. ROPS adoption by farmers is thus incomplete. To address this problem, CROPS (cost-effective roll-over protection structures) have been developed to encourage farmers to retrofit older tractors. For the ROPS to work as designed, the operator must stay within its protective frame and wear the seat belt. In addition to ROPS, U.S. manufacturers add instructional seats on tractors with enclosed cabs. These tractors have a ROPS with seat belts for both the operator and the passenger. The instructional seat is intended to be used for training new tractor operators, but can also be used when diagnosing machine problems. The misuse of an instructional seat increases the likelihood of injury, especially when children are transported. The International Organization for Standardization's ISO standard 23205:2014 specifies the minimum design and performance requirements for an instructional seat and states that the instructional seat is neither intended nor designed for use by children. Despite this, upwards of 40% of farm families give their children rides on tractors, often using these instructional seats. Applications and variations Farm The most common use of the term "tractor" is for the vehicles used on farms. The farm tractor is used for pulling or pushing agricultural machinery or trailers, for plowing, tilling, disking, harrowing, planting, and similar tasks. A variety of specialty farm tractors have been developed for particular uses.
These include "row crop" tractors with adjustable tread width to allow the tractor to pass down rows of cereals, maize, tomatoes or other crops without crushing the plants, "wheatland" or "standard" tractors with fixed wheels and a lower center of gravity for plowing and other heavy field work for broadcast crops, and "high crop" tractors with adjustable tread and increased ground clearance, often used in the cultivation of cotton and other high-growing row crop plant operations, and "utility tractors", typically smaller tractors with a low center of gravity and short turning radius, used for general purposes around the farmstead. Many utility tractors are used for nonfarm grading, landscape maintenance and excavation purposes, particularly with loaders, backhoes, pallet forks and similar devices. Small garden or lawn tractors designed for suburban and semirural gardening and landscape maintenance are produced in a variety of configurations, and also find numerous uses on a farmstead. Some farm-type tractors are found elsewhere than on farms: with large universities' gardening departments, in public parks, or for highway workman use with blowtorch cylinders strapped to the sides and a pneumatic drill air compressor permanently fastened over the power take-off. These are often fitted with grass (turf) tyres which are less damaging to soft surfaces than agricultural tires. Precision Space technology has been incorporated into agriculture in the form of GPS devices, and robust on-board computers installed as optional features on farm tractors. These technologies are used in modern, precision farming techniques. The spin-offs from the space race have actually facilitated automation in plowing and the use of autosteer systems (drone on tractors that are manned but only steered at the end of a row), the idea being to neither overlap and use more fuel nor leave streaks when performing jobs such as cultivating. Several tractor companies have also been working on producing a driverless tractor. Engineering The durability and engine power of tractors made them very suitable for engineering tasks. Tractors can be fitted with engineering tools such as dozer blades, buckets, hoes, rippers, etc. The most common attachments for the front of a tractor are dozer blades or buckets. When attached to engineering tools, the tractor is called an engineering vehicle. A bulldozer is a track-type tractor with a blade attached in the front and a rope-winch behind. Bulldozers are very powerful tractors and have excellent ground-hold, as their main tasks are to push or drag. Bulldozers have been further modified over time to evolve into new machines which are capable of working in ways that the original bulldozer can not. One example is that loader tractors were created by removing the blade and substituting a large volume bucket and hydraulic arms which can raise and lower the bucket, thus making it useful for scooping up earth, rock and similar loose material to load it into trucks. A front-loader or loader is a tractor with an engineering tool which consists of two hydraulic powered arms on either side of the front engine compartment and a tilting implement. This is usually a wide-open box called a bucket, but other common attachments are a pallet fork and a bale grappler. Other modifications to the original bulldozer include making the machine smaller to let it operate in small work areas where movement is limited. 
Also, tiny wheeled loaders, officially called skid-steer loaders, but nicknamed "Bobcat" after the original manufacturer, are particularly suited for small excavation projects in confined areas. Backhoe The most common variation of the classic farm tractor is the backhoe, also called a backhoe-loader. As the name implies, it has a loader assembly on the front and a backhoe on the back. Backhoes attach to a three-point hitch on farm or industrial tractors. Industrial tractors are often heavier in construction, particularly with regard to the use of a steel grille for protection from rocks and the use of construction tires. When the backhoe is permanently attached, the machine usually has a seat that can swivel to the rear to face the hoe controls. Removable backhoe attachments almost always have a separate seat on the attachment. Backhoe-loaders are very common and can be used for a wide variety of tasks: construction, small demolitions, light transportation of building materials, powering building equipment, digging holes, loading trucks, breaking asphalt and paving roads. Some buckets have retractable bottoms, enabling them to empty their loads more quickly and efficiently. Buckets with retractable bottoms are also often used for grading and scratching off sand. The front assembly may be a removable attachment or permanently mounted. Often the bucket can be replaced with other devices or tools. Their relatively small frames and precise controls make backhoe-loaders very useful and common in urban engineering projects, such as construction and repairs in areas too small for larger equipment. Their versatility and compact size make them one of the most popular urban construction vehicles. In the UK and Ireland, the word "JCB" is used colloquially as a genericized trademark for any such type of engineering vehicle. The term JCB now appears in the Oxford English Dictionary, although it is still legally a trademark of J. C. Bamford Ltd. The term "digger" is also commonly used. Compact utility A compact utility tractor (CUT) is a smaller version of an agricultural tractor, but designed primarily for landscaping and estate management tasks, rather than for planting and harvesting on a commercial scale. Typical CUTs range from with available power take-off (PTO) power ranging from . CUTs are often equipped with both a mid-mounted and a standard rear PTO, especially those below . The mid-mount PTO shaft typically rotates at/near 2000 rpm and is typically used to power mid-mount finish mowers, front-mounted snow blowers or front-mounted rotary brooms. The rear PTO is standardized at 540 rpm for the North American markets, but in some parts of the world, a dual 540/1000 rpm PTO is standard, and implements are available for either standard in those markets. One of the most common attachments for a CUT is the front-end loader or FEL. Like the larger agricultural tractors, a CUT will have an adjustable, hydraulically controlled three-point hitch. Typically, a CUT will have four-wheel drive, or more correctly four-wheel assist. Modern CUTs often feature hydrostatic transmissions, but many variants of gear-drive transmissions are also offered, from low-priced, simple gear transmissions to synchronized transmissions to advanced glide-shift transmissions. All modern CUTs feature government-mandated roll-over protection structures just like agricultural tractors. The best-known brands in North America include Kubota, John Deere, New Holland Ag, Case-Farmall and Massey Ferguson.
Although less common, compact backhoes are often attached to compact utility tractors. Compact utility tractors require special, smaller implements than full-sized agricultural tractors. Very common implements include the box blade, the grader blade, the landscape rake, the post hole digger (or post hole auger), the rotary cutter (slasher or a brush hog), a mid- or rear-mount finish mower, a broadcast seeder, a subsoiler and the rototiller (rotary tiller). In northern climates, a rear-mounted snow blower is very common; some smaller CUT models are available with front-mounted snow blowers powered by mid-PTO shafts. Implement brands outnumber tractor brands, so CUT owners have a wide selection of implements. For small-scale farming or large-scale gardening, some planting and harvesting implements are sized for CUTs. One- and two-row planting units are commonly available, as are cultivators, sprayers and different types of seeders (slit, rotary and drop). One of the first CUTs offered for small farms of three to 30 acres and for small jobs on larger farms was a three-wheeled unit, with the rear wheel being the drive wheel, offered by Sears & Roebuck in 1954 and priced at $598 for the basic model. An even smaller variant of the compact utility tractor is the subcompact utility tractor. Although these tractors are often barely larger than a riding lawn mower, they have all the same features as a compact tractor, such as a three-point hitch, power steering, four-wheel drive, and a front-end loader. These tractors are generally marketed towards homeowners who intend to use them mostly for lawn mowing, with the occasional light landscaping task. Standard The earliest tractors were called "standard" tractors, and were intended almost solely for plowing and harrowing before planting, which were difficult tasks for humans and draft animals. They were characterized by a low, rearward seating position, fixed-width tread, and low ground clearance. These early tractors were cumbersome, and ill-suited to enter a field of planted row crops for weed control. The "standard" tractor definition is no longer in current use. However, tractors with fixed wheel spacing and a low center of gravity are well-suited as loaders, forklifts and backhoes, so the configuration continues in use without the "standard" nomenclature. Row-crop A general-purpose or row-crop tractor is tailored specifically to growing crops in rows, and most especially to cultivating these crops. These tractors are universal machines, capable of both primary tillage and cultivation of a crop. The row-crop tractor category evolved rather than appearing overnight, but the International Harvester (IH) Farmall is often considered the "first" tractor of the category. Some earlier tractors of the 1910s and 1920s approached the form factor from the heavier side, as did motorized cultivators from the lighter side, but the Farmall brought all of the salient features together into one package, with a capable distribution network to ensure its commercial success. In the new form factor that the Farmall popularized, the cultivator was mounted in the front so it was easily visible. Additionally, the tractor had a narrow front end; the front tires were spaced very closely and angled in toward the bottom. The back wheels straddled two rows, with their spacing adjustable depending on row spacing, and the unit could cultivate four rows at once. Where wide front wheels were used, they often could be adjusted as well.
Tractors with non-adjustable spacing were called "standard" or "wheatland", and were chiefly meant for pulling plows or other towed implements, typically with a lower overall tractor height than row-crop models. From 1924 until 1963, Farmalls were the largest-selling row-crop tractors. To compete, John Deere designed the Model C, which had a wide front and could cultivate three rows at once. Only 112 prototypes were made, as Deere realized it would lose sales to Farmall if its model did less. In 1928, Deere released the Model C anyway, but as the Model GP (General Purpose) to avoid confusion with the Model D when orders were placed over then-indistinct telephone connections. Oliver refined its "Row Crop" model early in 1930. Until 1935, the 18–27 was Oliver–Hart-Parr's only row-crop tractor. Many Oliver row-crop models are referred to as "Oliver Row Crop 77", "Oliver Row Crop 88", etc. Many early row-crop tractors had a tricycle design with two closely spaced front tires, and some even had a single front tire. This made it dangerous to operate on the side of a steep hill; as a result, many farmers died from tractor rollovers. Also, early row-crop tractors had no rollover protection system (ROPS), meaning that if the tractor flipped back, the operator could be crushed. Sweden was the first country to pass legislation requiring ROPS, in 1959. Over 50% of tractor-related injuries and deaths are attributed to tractor rollover. Canadian agricultural equipment manufacturer Versatile makes row-crop tractors that are ; powered by an 8.9-liter Cummins diesel engine. Case IH and New Holland of CNH Industrial both produce high-horsepower front-wheel-assist row crop tractors with available rear tracks. Case IH also has a four-wheel drive track system called Rowtrac. John Deere has an extensive line of available row crop tractors ranging from . Modern row crop tractors have rollover protection systems in the form of a reinforced cab or a roll bar. Garden Garden tractors, sometimes called lawn tractors, are small, light tractors designed for use in domestic gardens, lawns, and small estates. Lawn tractors are designed for cutting grass and snow removal, while garden tractors are for small property cultivation. In the U.S., the term riding lawn mower today often is used to refer to mid- or rear-engined machines. Front-engined tractor layout machines designed primarily for cutting grass and light towing are called lawn tractors; heavier-duty tractors of similar size are garden tractors. Garden tractors are capable of mounting a wider array of attachments than lawn tractors. Unlike lawn tractors and rear-engined riding mowers, garden tractors are powered by horizontal-crankshaft engines with a belt drive to transaxle-type transmissions (usually of four or five speeds, although some may also have two-speed reduction gearboxes, drive-shafts, or hydrostatic or hydraulic drives). Garden tractors from Wheel Horse, Cub Cadet, Economy (Power King), John Deere, Massey Ferguson and Case Ingersoll are built in this manner. The engines are generally one- or two-cylinder petrol (gasoline) engines, although diesel engine models are also available, especially in Europe. Typically, diesel-powered garden tractors are larger and heavier-duty than gasoline-powered units and compare more similarly to compact utility tractors.
Visually, the distinction between a garden tractor and a lawn tractor is often hard to make – generally, garden tractors are more sturdily built, with stronger frames, 12-inch or larger wheels mounted with multiple lugs (most lawn tractors have a single bolt or clip on the hub), heavier transaxles, and the ability to accommodate a wide range of front, belly, and rear-mounted attachments. Two-wheel Although most people think primarily of four-wheel vehicles when they think of tractors, a tractor may have one or more axles. The key benefit is the engine's power, which takes only one axle to deliver. Single-axle tractors, more often called two-wheel tractors or walk-behind tractors, have had many users since the introduction of internal combustion engine tractors. They tend to be small and affordable; this was especially true before the 1960s, when a walk-behind tractor could often be more affordable than a two-axle tractor of comparable power. Today's compact utility tractors and advanced garden tractors may negate most of that market advantage, but two-wheel tractors still have a following, especially among those who already own one. Countries where two-wheel tractors are especially prevalent today include Thailand, China, Bangladesh, India, and other Southeast Asian countries. Most two-wheel tractors today are specialty tractors made for one purpose, such as snow blowers, push tillers, and self-propelled push mowers. Orchard Tractors tailored to use in fruit orchards typically have features suited to passing under tree branches with impunity. These include a lower overall profile; reduced tree-branch-snagging risk (via underslung exhaust pipes rather than smoke-stack-style exhaust, and large sheetmetal cowlings and fairings that allow branches to deflect and slide off rather than catch); spark arrestors on the exhaust tips; and often wire cages to protect the operator from snags. Automobile conversions and other homemade versions The ingenuity of farm mechanics, coupled in some cases with OEM or aftermarket assistance, has often resulted in the conversion of automobiles for use as farm tractors. In the United States, this trend was especially strong from the 1910s through the 1950s. It began early in the development of vehicles powered by internal combustion engines, with blacksmiths and amateur mechanics tinkering in their shops. Especially during the interwar period, dozens of manufacturers (Montgomery Ward among them) marketed aftermarket kits for converting Ford Model Ts for use as tractors. (These were sometimes called 'Hoover wagons' during the Great Depression, although this term was usually reserved for automobiles converted to horse-drawn buggy use when gasoline was unavailable or unaffordable. During the same period, another common name was "Doodlebug", after the popular kit by the same name.) Ford even considered producing an "official" optional kit. Many Model A Fords also were converted for this purpose. In later years, some farm mechanics have been known to convert more modern trucks or cars for use as tractors, more often as curiosities or for recreational purposes (rather than out of the earlier motives of pure necessity or frugality). During World War II, a shortage of tractors in Sweden led to the development of the so-called "EPA" tractor (EPA was a chain of discount stores and its name was often used to signify something lacking in quality).
An EPA tractor was simply an automobile, truck or lorry, with the passenger space cut off behind the front seats, equipped with two gearboxes in a row. When done to an older car with a ladder frame, the result was similar to a tractor and could be used as one. After the war it remained popular as a way for young people without a driver's license to own something similar to a car. Since it was legally seen as a tractor, it could be driven from 16 years of age and only required a tractor license. Eventually, the legal loophole was closed and no new EPA tractors were allowed to be made, but the remaining ones were still legal, which led to inflated prices and many protests from people who preferred EPA tractors to ordinary cars. The Swedish government eventually replaced them with the so-called "A-tractor", which had its speed limited to 30 km/h and allowed people aged 16 and older to drive the cars with a moped license. The German occupation of Italy during World War II resulted in a severe shortage of mechanized farm equipment. The destruction of tractors was a sort of scorched-earth strategy used to reduce the independence of the conquered. The shortage of tractors in that area of Europe was the origin of Lamborghini. The war was also the inspiration for dual-purpose vehicles such as the Land Rover. Based on the Jeep, the Rover company made a vehicle that combined PTO, tillage, 4wd, and transportation. In March 1975, a similar type of vehicle was introduced in Sweden, the A tractor [from arbetstraktor (work tractor)]; the main difference is that an A tractor has a top speed of 30 km/h. This is usually achieved by fitting two gearboxes in a row and only using one. The Volvo Duett was, for a long time, the primary choice for conversion to an EPA or A tractor, but since supplies have dried up, other cars have been used, in most cases another Volvo. The SFRO is a Swedish organization advocating homebuilt and modified vehicles. Another type of homemade tractor is one fabricated from scratch. The "from scratch" description is relative, as often individual components will be repurposed from earlier vehicles or machinery (e.g., engines, gearboxes, axle housings), but the tractor's overall chassis is essentially designed and built by the owner (e.g., a frame is welded from bar stock, channel stock, angle stock, flat stock, etc.). As with automobile conversions, the heyday of this type of tractor, at least in developed economies, lies in the past, when there were large populations of blue-collar workers for whom metalworking and farming were prevalent parts of their lives. (For example, many 19th- and 20th-century New England and Midwestern machinists and factory workers had grown up on farms.) Backyard fabrication was a natural activity to them (whereas it might seem daunting to most people today). Nomenclature The term "tractor" (US and Canada) or "tractor unit" (UK) is also applied to: Road tractors, tractor units or traction heads, familiar as the front end of an articulated lorry / semi-trailer truck. They are heavy-duty vehicles with large engines and several axles. The majority of these tractors are designed to pull long semi-trailers, most often to transport freight over a significant distance, and are connected to the trailer with a fifth-wheel coupling. In England, this type of "tractor" is often called an "artic cab" (short for "articulated" cab). A minority is the ballast tractor, whose load is hauled from a drawbar.
Pushback tractors are used at airports to move aircraft on the ground, most commonly pushing aircraft away from their parking stands. Locomotive tractors (engines) or rail car movers – the amalgamation of machines, electrical generators, controls and devices that comprise the traction component of railway vehicles Artillery tractors – vehicles used to tow artillery pieces of varying weights. NASA and other space agencies use very large tractors to move large launch vehicles and Space Shuttles between their hangars and launch pads. A pipe-tractor is a device used for conveying advanced instruments into pipes for measurement and data logging, and the purging of well holes, sewer pipes and other inaccessible tubes. Nebraska tests Nebraska tractor tests are tests mandated by the Nebraska Tractor Test Law and administered by the University of Nebraska that objectively test the performance of all brands of tractors, 40 horsepower or more, sold in Nebraska. In the 1910s and 1920s, an era of snake oil sales and advertising tactics, the Nebraska tests helped farmers throughout North America to see through marketing claims and make informed buying decisions. The tests continue today, making sure tractors fulfill the manufacturer's advertised claims. Manufacturers Some of the many tractor manufacturers and brands worldwide include: Belarus Case IH Caterpillar Claas Challenger Deutz-Fahr Fendt ITMCO Iseki JCB John Deere Lamborghini Landini Kubota Mahindra Tractors Massey Ferguson McCormick Mercedes-Benz New Holland SAME Steyr TAFE Ursus Valtra Zetor In addition to commercial manufacturers, the Open Source Ecology group has developed several working prototypes of an open source hardware tractor called the LifeTrac as part of its Global Village Construction Set. See also Agricultural machinery Artillery tractor Ballast tractor Big Bud 747, the world's largest farm tractor Driverless tractor Heavy equipment Lester F. Larsen Tractor Museum Non-road engine Power take-off Railcar mover Terminal tractor Tractor pulling Tractor unit Two-wheel tractor Unimog 70200 DT-20 References External links Tractor information Purdue University Tractor Safety Article re: ROPS, PTO, etc Nebraska Tractor Test Laboratory Historical Tractor Test Reports and Manufacturers' Literature Reports on 400+ models 1903–2006 A History of Tractors at the Canada Agriculture Museum Tractor safety EU Working Group on Agricultural Tractors – Work Safety EU Directives on tractor design: (Mapped Index), or (Numerical Index) Tractor Safety (National Agricultural Safety Database) Tractor Safety (National Safety Council) Adaptive Tractor Overturn Prediction System Tractor Overturn Protection and Prevention ACC: Farm safety: Vehicles, machinery and equipment. CDC – Agricultural Safety: Cost-effective Rollover Protective Structures – NIOSH Workplace Safety and Health Topic Agricultural machinery Engineering vehicles Heavy equipment Vehicles introduced in 1901
Tractor
[ "Engineering" ]
11,145
[ "Engineering vehicles", "Tractors" ]
152,702
https://en.wikipedia.org/wiki/BARK%20%28computer%29
BARK () was an early electromechanical computer. BARK was built using standard telephone relays, implementing a 32-bit binary machine. It could perform addition in 150 ms and multiplication in 250 ms. It had a memory with 50 registers and 100 constants. It was later expanded to double the memory. Howard Aiken stated in reference to BARK, "This is the first computer I have seen outside Harvard that actually works." History BARK was developed by Matematikmaskinnämnden (Swedish Board for Computing Machinery) a few years before BESK. The machine was built with 8,000 standard telephone relays, 80 km of cable and 175,000 soldering points. Programming was done by plugboard. It was completed in February 1950 at a cost of 400,000 Swedish kronor (less than $100,000), became operational on April 28, 1950, and was taken offline on September 22, 1954. The engineers on the team led by Conny Palm were Harry Freese, Gösta Neovius, Olle Karlqvist, Carl-Erik Fröberg, G. Kellberg, Björn Lind, Arne Lindberger, P. Petersson and Madeline Wallmark. See also BESK - Binär Elektronisk Sekvens-Kalkylator - Sweden's second computer. Elsa-Karin Boestad-Nilsson, a programmer on BARK and BESK SMIL - SifferMaskinen I Lund (The Number Machine in Lund) History of computing hardware References External links Tekn. lic. Olle Karlqvist in memoriam (in Swedish), Google translation, memorial site of one of the engineers behind BARK and BESK. On the BARK page there is a technical PDF document (in English): The BARK, A Swedish General Purpose Relay Computer One-of-a-kind computers Electro-mechanical computers Science and technology in Sweden
BARK (computer)
[ "Technology" ]
393
[ "Computing stubs", "Computer hardware stubs" ]
152,703
https://en.wikipedia.org/wiki/Hilbert%27s%20third%20problem
The third of Hilbert's list of mathematical problems, presented in 1900, was the first to be solved. The problem is related to the following question: given any two polyhedra of equal volume, is it always possible to cut the first into finitely many polyhedral pieces which can be reassembled to yield the second? Based on earlier writings by Carl Friedrich Gauss, David Hilbert conjectured that this is not always possible. This was confirmed within the year by his student Max Dehn, who proved that the answer in general is "no" by producing a counterexample. The answer for the analogous question about polygons in 2 dimensions is "yes" and had been known for a long time; this is the Wallace–Bolyai–Gerwien theorem. Unknown to Hilbert and Dehn, Hilbert's third problem was also proposed independently by Władysław Kretkowski for a math contest of 1882 by the Academy of Arts and Sciences of Kraków, and was solved by Ludwik Antoni Birkenmajer with a different method than Dehn's. Birkenmajer did not publish the result, and the original manuscript containing his solution was rediscovered years later. History and motivation The formula for the volume of a pyramid, $V = \tfrac{1}{3}(\text{base area})(\text{height})$, had been known to Euclid, but all proofs of it involve some form of limiting process or calculus, notably the method of exhaustion or, in more modern form, Cavalieri's principle. Similar formulas in plane geometry can be proven with more elementary means. Gauss regretted this defect in two of his letters to Christian Ludwig Gerling, who proved that two symmetric tetrahedra are equidecomposable. Gauss's letters were the motivation for Hilbert: is it possible to prove the equality of volume using elementary "cut-and-glue" methods? Because if not, then an elementary proof of Euclid's result is also impossible. Dehn's proof Dehn's proof is an instance in which abstract algebra is used to prove an impossibility result in geometry. Other examples are doubling the cube and trisecting the angle. Two polyhedra are called scissors-congruent if the first can be cut into finitely many polyhedral pieces that can be reassembled to yield the second. Any two scissors-congruent polyhedra have the same volume. Hilbert asks about the converse. For every polyhedron $P$, Dehn defines a value, now known as the Dehn invariant $\operatorname{D}(P)$, with the property that, if $P$ is cut into polyhedral pieces $P_1, P_2, \ldots, P_n$, then $\operatorname{D}(P) = \operatorname{D}(P_1) + \operatorname{D}(P_2) + \cdots + \operatorname{D}(P_n)$. In particular, if two polyhedra are scissors-congruent, then they have the same Dehn invariant. He then shows that every cube has Dehn invariant zero while every regular tetrahedron has non-zero Dehn invariant. Therefore, these two shapes cannot be scissors-congruent. A polyhedron's invariant is defined based on the lengths of its edges and the angles between its faces. If a polyhedron is cut into two, some edges are cut into two, and the corresponding contributions to the Dehn invariants should therefore be additive in the edge lengths. Similarly, if a polyhedron is cut along an edge, the corresponding angle is cut into two. Cutting a polyhedron typically also introduces new edges and angles; their contributions must cancel out. The angles introduced when a cut passes through a face add to $\pi$, and the angles introduced around an edge interior to the polyhedron add to $2\pi$. Therefore, the Dehn invariant is defined in such a way that integer multiples of angles of $\pi$ give a net contribution of zero.
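To make the cube and tetrahedron claim concrete, here is a short worked computation (an illustrative addition, not part of the original article). It uses the Dehn invariant formula $\operatorname{D}(P) = \sum_e \ell(e) \otimes \theta(e)$, which the next paragraph makes precise, together with the classical fact that the regular tetrahedron's dihedral angle $\arccos(1/3)$ is an irrational multiple of $\pi$.

```latex
% Worked example (illustrative). Angles live in the quotient R/Q·pi,
% so every rational multiple of pi is identified with zero.

% Cube of side s: 12 edges, each with dihedral angle pi/2, a rational
% multiple of pi, so every summand vanishes:
\[
\operatorname{D}(\text{cube}) = \sum_{i=1}^{12} s \otimes \tfrac{\pi}{2}
  = 12\,(s \otimes 0) = 0 .
\]

% Regular tetrahedron of edge length l: 6 edges, each with dihedral
% angle arccos(1/3), which is not a rational multiple of pi:
\[
\operatorname{D}(T) = \sum_{i=1}^{6} \ell \otimes \arccos\tfrac{1}{3}
  = 6\ell \otimes \arccos\tfrac{1}{3} \neq 0 .
\]

% Since scissors-congruent polyhedra have equal Dehn invariants, a cube
% and a regular tetrahedron of equal volume are not scissors-congruent.
```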
All of the above requirements can be met by defining $\operatorname{D}(P)$ as an element of the tensor product $\mathbb{R} \otimes (\mathbb{R}/\mathbb{Q}\pi)$ of the real numbers $\mathbb{R}$ (representing lengths of edges) and the quotient space $\mathbb{R}/\mathbb{Q}\pi$ (representing angles, with all rational multiples of $\pi$ replaced by zero). For some purposes, this definition can be made using the tensor product of modules over $\mathbb{Z}$ (or equivalently of abelian groups), while other aspects of this topic make use of a vector space structure on the invariants, obtained by considering the two factors $\mathbb{R}$ and $\mathbb{R}/\mathbb{Q}\pi$ to be vector spaces over $\mathbb{Q}$ and taking the tensor product of vector spaces over $\mathbb{Q}$. This choice of structure in the definition does not make a difference in whether two Dehn invariants, defined in either way, are equal or unequal. For any edge $e$ of a polyhedron $P$, let $\ell(e)$ be its length and let $\theta(e)$ denote the dihedral angle of the two faces of $P$ that meet at $e$, measured in radians and considered modulo rational multiples of $\pi$. The Dehn invariant is then defined as $$\operatorname{D}(P) = \sum_{e} \ell(e) \otimes \theta(e),$$ where the sum is taken over all edges $e$ of the polyhedron $P$. It is a valuation. Further information In light of Dehn's theorem above, one might ask "which polyhedra are scissors-congruent?" Sydler (1965) showed that two polyhedra are scissors-congruent if and only if they have the same volume and the same Dehn invariant. Børge Jessen later extended Sydler's results to four dimensions. In 1990, Dupont and Sah provided a simpler proof of Sydler's result by reinterpreting it as a theorem about the homology of certain classical groups. Debrunner showed in 1980 that the Dehn invariant of any polyhedron with which all of three-dimensional space can be tiled periodically is zero. Jessen also posed the question of whether the analogue of these results remained true for spherical geometry and hyperbolic geometry. In these geometries, Dehn's method continues to work, and shows that when two polyhedra are scissors-congruent, their Dehn invariants are equal. However, it remains an open problem whether pairs of polyhedra with the same volume and the same Dehn invariant, in these geometries, are always scissors-congruent. Original question Hilbert's original question was more complicated: given any two tetrahedra $T_1$ and $T_2$ with equal base area and equal height (and therefore equal volume), is it always possible to find a finite number of tetrahedra, so that when these tetrahedra are glued in some way to $T_1$ and also glued to $T_2$, the resulting polyhedra are scissors-congruent? Dehn's invariant can be used to yield a negative answer also to this stronger question. See also Hill tetrahedron Onorato Nicoletti References Further reading External links Proof of Dehn's Theorem at Everything2 Dehn Invariant at Everything2 Euclidean solid geometry Geometric dissection Geometry problems
Hilbert's third problem
[ "Physics", "Mathematics" ]
1,348
[ "Geometry problems", "Euclidean solid geometry", "Hilbert's problems", "Space", "Geometry", "Spacetime", "Mathematical problems" ]
152,710
https://en.wikipedia.org/wiki/Sculptor%20%28constellation%29
Sculptor is a faint constellation in the southern sky. It represents a sculptor. It was introduced by Nicolas Louis de Lacaille in the 18th century. He originally named it Apparatus Sculptoris (the sculptor's studio), but the name was later shortened. History The region to the south of Cetus and Aquarius had been named by Aratus in 270 BC as The Waters – an area of scattered faint stars with two brighter stars standing out. Professor of astronomy Bradley Schaefer has proposed that these stars were most likely Alpha and Delta Sculptoris. The French astronomer Nicolas-Louis de Lacaille first described the constellation in French as l'Atelier du Sculpteur (the sculptor's studio) in 1751–52, depicting a three-legged table with a carved head on it, and an artist's mallet and two chisels on a block of marble alongside it. Lacaille had observed and catalogued almost 10,000 southern stars during a two-year stay at the Cape of Good Hope, devising fourteen new constellations in uncharted regions of the Southern Celestial Hemisphere not visible from Europe. He named all but one in honour of instruments that symbolised the Age of Enlightenment. Characteristics Sculptor is a small constellation bordered by Aquarius and Cetus to the north, Fornax to the east, Phoenix to the south, Grus to the southwest, and Piscis Austrinus to the west. The bright star Fomalhaut is nearby. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is "Scl". The official constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of 6 segments. In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between −24.80° and −39.37°. The whole constellation is visible to observers south of latitude 50°N. Notable features Stars No stars brighter than 3rd magnitude are located in Sculptor. This is explained by the fact that Sculptor contains the south galactic pole where stellar density is very low. Overall, there are 56 stars within the constellation's borders brighter than or equal to apparent magnitude 6.5. The brightest star is Alpha Sculptoris, an SX Arietis-type variable star with a spectral type B7IIIp and an apparent magnitude of 4.3. It is 780 ± 30 light-years distant from Earth. Eta Sculptoris is a red giant of spectral type M4III that varies between magnitudes 4.8 and 4.9, pulsating with multiple periods of 22.7, 23.5, 24.6, 47.3, 128.7 and 158.7 days. Estimated to be around 1,082 times as luminous as the Sun, it is 460 ± 20 light-years distant from Earth. R Sculptoris is a red giant that has been found to be surrounded by spirals of matter likely ejected around 1800 years ago. It is 1,440 ± 90 light-years distant from Earth. The Astronomical Society of Southern Africa in 2003 reported that observations of the Mira variable stars T, U, V and X Sculptoris were very urgently needed as data on their light curves was incomplete. Deep sky objects The constellation also contains the Sculptor Dwarf, a dwarf galaxy which is a member of the Local Group, as well as the Sculptor Group, the group of galaxies closest to the Local Group. The Sculptor Galaxy (NGC 253), a barred spiral galaxy and the largest member of the group, lies near the border between Sculptor and Cetus. Another prominent member of the group is the irregular galaxy NGC 55. One unique galaxy in Sculptor is the Cartwheel Galaxy, at a distance of 500 million light-years. 
The result of a merger around 300 million years ago, the Cartwheel Galaxy has a core of older, yellow stars, and an outer ring of younger, blue stars, which has a diameter of 100,000 light-years. The smaller galaxy in the collision is now incorporated into the core, after moving from a distance of 250,000 light-years. The shock waves from the collision sparked extensive star formation in the outer ring. Namesakes Sculptor (AK-103) was a United States Navy Crater class cargo ship named after the constellation. See also Sculptor (Chinese astronomy) References Notes Citations Sources External links The Deep Photographic Guide to the Constellations: Sculptor The clickable Sculptor Southern constellations Constellations listed by Lacaille
Sculptor (constellation)
[ "Astronomy" ]
926
[ "Sculptor (constellation)", "Southern constellations", "Constellations", "Constellations listed by Lacaille" ]
152,759
https://en.wikipedia.org/wiki/Hilbert%27s%20second%20problem
In mathematics, Hilbert's second problem was posed by David Hilbert in 1900 as one of his 23 problems. It asks for a proof that arithmetic is consistent – free of any internal contradictions. Hilbert stated that the axioms he considered for arithmetic were the ones given in , which include a second-order completeness axiom. In the 1930s, Kurt Gödel and Gerhard Gentzen proved results that cast new light on the problem. Some feel that Gödel's theorems give a negative solution to the problem, while others consider Gentzen's proof a partial positive solution. Hilbert's problem and its interpretation In one English translation, Hilbert asks: "When we are engaged in investigating the foundations of a science, we must set up a system of axioms which contains an exact and complete description of the relations subsisting between the elementary ideas of that science. ... But above all I wish to designate the following as the most important among the numerous questions which can be asked with regard to the axioms: To prove that they are not contradictory, that is, that a definite number of logical steps based upon them can never lead to contradictory results. In geometry, the proof of the compatibility of the axioms can be effected by constructing a suitable field of numbers, such that analogous relations between the numbers of this field correspond to the geometrical axioms. ... On the other hand a direct method is needed for the proof of the compatibility of the arithmetical axioms." Hilbert's statement is sometimes misunderstood, because by the "arithmetical axioms" he did not mean a system equivalent to Peano arithmetic, but a stronger system with a second-order completeness axiom. The system Hilbert asked for a consistency proof of is more like second-order arithmetic than first-order Peano arithmetic. Under a now-common interpretation, a positive solution to Hilbert's second question would in particular provide a proof that Peano arithmetic is consistent. There are many known proofs that Peano arithmetic is consistent that can be carried out in strong systems such as Zermelo–Fraenkel set theory. These do not provide a resolution to Hilbert's second question, however, because someone who doubts the consistency of Peano arithmetic is unlikely to accept the axioms of set theory (which are much stronger) to prove its consistency. Thus a satisfactory answer to Hilbert's problem must be carried out using principles that would be acceptable to someone who does not already believe PA is consistent. Such principles are often called finitistic because they are completely constructive and do not presuppose a completed infinity of natural numbers. Gödel's second incompleteness theorem (see Gödel's incompleteness theorems) places a severe limit on how weak a finitistic system can be while still proving the consistency of Peano arithmetic. Gödel's incompleteness theorem Gödel's second incompleteness theorem shows that it is not possible for any proof that Peano arithmetic is consistent to be carried out within Peano arithmetic itself. This theorem shows that if the only acceptable proof procedures are those that can be formalized within arithmetic then Hilbert's call for a consistency proof cannot be answered. However, as Nagel and Newman explain, there is still room for a proof that cannot be formalized in arithmetic: "This imposing result of Gödel's analysis should not be misunderstood: it does not exclude a meta-mathematical proof of the consistency of arithmetic.
What it excludes is a proof of consistency that can be mirrored by the formal deductions of arithmetic. Meta-mathematical proofs of the consistency of arithmetic have, in fact, been constructed, notably by Gerhard Gentzen, a member of the Hilbert school, in 1936, and by others since then. ... But these meta-mathematical proofs cannot be represented within the arithmetical calculus; and, since they are not finitistic, they do not achieve the proclaimed objectives of Hilbert's original program. ... The possibility of constructing a finitistic absolute proof of consistency for arithmetic is not excluded by Gödel’s results. Gödel showed that no such proof is possible that can be represented within arithmetic. His argument does not eliminate the possibility of strictly finitistic proofs that cannot be represented within arithmetic. But no one today appears to have a clear idea of what a finitistic proof would be like that is not capable of formulation within arithmetic." Gentzen's consistency proof In 1936, Gentzen published a proof that Peano Arithmetic is consistent. Gentzen's result shows that a consistency proof can be obtained in a system that is much weaker than set theory. Gentzen's proof proceeds by assigning to each proof in Peano arithmetic an ordinal number, based on the structure of the proof, with each of these ordinals less than ε0. He then proves by transfinite induction on these ordinals that no proof can conclude in a contradiction. The method used in this proof can also be used to prove a cut elimination result for Peano arithmetic in a stronger logic than first-order logic, but the consistency proof itself can be carried out in ordinary first-order logic using the axioms of primitive recursive arithmetic and a transfinite induction principle. gives a game-theoretic interpretation of Gentzen's method. Gentzen's consistency proof initiated the program of ordinal analysis in proof theory. In this program, formal theories of arithmetic or set theory are assigned ordinal numbers that measure the consistency strength of the theories. A theory will be unable to prove the consistency of another theory with a higher proof theoretic ordinal. Modern viewpoints on the status of the problem While the theorems of Gödel and Gentzen are now well understood by the mathematical logic community, no consensus has formed on whether (or in what way) these theorems answer Hilbert's second problem. argues that Gödel's incompleteness theorem shows that it is not possible to produce finitistic consistency proofs of strong theories. states that although Gödel's results imply that no finitistic syntactic consistency proof can be obtained, semantic (in particular, second-order) arguments can be used to give convincing consistency proofs. argues that Gödel's theorem does not prevent a consistency proof because its hypotheses might not apply to all the systems in which a consistency proof could be carried out. calls the belief that Gödel's theorem eliminates the possibility of a persuasive consistency proof "erroneous", citing the consistency proof given by Gentzen and a later one given by Gödel in 1958. See also Takeuti conjecture Notes References External links Original text of Hilbert's talk, in German English translation of Hilbert's 1900 address
Hilbert's second problem
[ "Mathematics" ]
1,396
[ "Hilbert's problems", "Mathematical problems" ]
152,760
https://en.wikipedia.org/wiki/Hilbert%27s%20fifth%20problem
Hilbert's fifth problem is the fifth mathematical problem from the problem list publicized in 1900 by mathematician David Hilbert, and concerns the characterization of Lie groups. The theory of Lie groups describes continuous symmetry in mathematics; its importance there and in theoretical physics (for example quark theory) grew steadily in the twentieth century. In rough terms, Lie group theory is the common ground of group theory and the theory of topological manifolds. The question Hilbert asked was an acute one of making this precise: is there any difference if a restriction to smooth manifolds is imposed? The expected answer was in the negative (the classical groups, the most central examples in Lie group theory, are smooth manifolds). This was eventually confirmed in the early 1950s. Since the precise notion of "manifold" was not available to Hilbert, there is room for some debate about the formulation of the problem in contemporary mathematical language. Formulation of the problem A modern formulation of the problem (in its simplest interpretation) is as follows: if a topological group $G$ is also a topological manifold (that is, locally Euclidean), is $G$ necessarily a Lie group? In other words, can $G$ be given a smooth structure under which the group operations are smooth? An equivalent formulation of this problem closer to that of Hilbert, in terms of composition laws, goes as follows: given a continuous function on a Euclidean domain satisfying the axioms of a (local) group operation, can new coordinates be chosen so that the operation becomes real-analytic? In this form the problem was solved by Montgomery–Zippin and Gleason. A stronger interpretation (viewing $G$ as a transformation group rather than an abstract group) results in the Hilbert–Smith conjecture about group actions on manifolds, which in full generality is still open. It is known classically for actions on 2-dimensional manifolds and has recently been solved for three dimensions by John Pardon. Solution The first major result was that of John von Neumann in 1933, giving an affirmative answer for compact groups. The locally compact abelian group case was solved in 1934 by Lev Pontryagin. The final resolution, at least in the interpretation of what Hilbert meant given above, came with the work of Andrew Gleason, Deane Montgomery and Leo Zippin in the 1950s. In 1953, Hidehiko Yamabe obtained further results about topological groups that may not be manifolds: every connected locally compact group is the projective limit of a sequence of Lie groups, and, if it has no small subgroups (in the sense defined below), it is itself a Lie group. It follows that every locally compact group contains an open subgroup that is a projective limit of Lie groups, by van Dantzig's theorem (this last statement is called the Gleason–Yamabe Theorem in ). No small subgroups An important condition in the theory is no small subgroups. A topological group $G$, or a partial piece of a group like those above, is said to have no small subgroups if there is a neighbourhood $N$ of the identity $e$ containing no subgroup bigger than $\{e\}$. For example, the circle group satisfies the condition, while the additive group of $p$-adic integers $\mathbb{Z}_p$ does not, because any such neighbourhood $N$ will contain the subgroups $p^n\mathbb{Z}_p$ for all large integers $n$. This gives an idea of what the difficulty is like in the problem. In the Hilbert–Smith conjecture case it is a matter of a known reduction to whether $\mathbb{Z}_p$ can act faithfully on a closed manifold. Gleason, Montgomery and Zippin characterized Lie groups amongst locally compact groups, as those having no small subgroups. Infinite dimensions Researchers have also considered Hilbert's fifth problem without supposing finite dimensionality. This was the subject of Per Enflo's doctoral thesis; his work is discussed in . See also Totally disconnected group Notes References Yamabe, Hidehiko, On an arcwise connected subgroup of a Lie group, Osaka Mathematical Journal v.2, no. 1 Mar. (1950), 13–14. Irving Kaplansky, Lie Algebras and Locally Compact Groups, Chicago Lectures in Mathematics, 1971. Enflo, Per.
(1970) Investigations on Hilbert's fifth problem for non-locally compact groups. (Ph.D. thesis, comprising five articles of Enflo from 1969 to 1970.) Enflo, Per (1969a): Topological groups in which multiplication on one side is differentiable or linear. Math. Scand. 24, 195–197. Enflo, Per (1969b): On a problem of Smirnov. Ark. Mat. 8, 107–109. Lie groups Differential structures
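As promised in the section on small subgroups above, the dichotomy between the circle group and the p-adic integers can be made concrete with a short calculation. The following display is an editorial illustration appended to this entry, not part of Hilbert's problem statement; it assumes only the standard description of the circle group and the p-adic absolute value on Z_p.

```latex
% Circle group: no small subgroups. Take the identity neighbourhood
\[ U = \{\, e^{i\theta} : |\theta| < \tfrac{2\pi}{3} \,\} \subset S^1 . \]
% If a subgroup H \subseteq U contains h = e^{i\theta} with 0 < \theta < 2\pi/3,
% let k be least with k\theta \ge 2\pi/3; minimality of k forces
\[ k\theta \in \bigl[\tfrac{2\pi}{3}, \tfrac{4\pi}{3}\bigr), \]
% so the power h^k \in H lies outside U, a contradiction; hence H = \{1\}.
% p-adic integers: small subgroups everywhere. Every neighbourhood of 0
% in (\mathbb{Z}_p, +) contains the nontrivial open subgroups
\[ p^n\mathbb{Z}_p = \{\, x \in \mathbb{Z}_p : |x|_p \le p^{-n} \,\}
   \quad \text{for all sufficiently large } n . \]
```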
Hilbert's fifth problem
[ "Mathematics" ]
806
[ "Lie groups", "Mathematical structures", "Hilbert's problems", "Algebraic structures", "Mathematical problems" ]
152,772
https://en.wikipedia.org/wiki/Intensive%20farming
Intensive agriculture, also known as intensive farming (as opposed to extensive farming), conventional, or industrial agriculture, is a type of agriculture, both of crop plants and of animals, with higher levels of input and output per unit of agricultural land area. It is characterized by a low fallow ratio, higher use of inputs such as capital, labour, agrochemicals and water, and higher crop yields per unit land area. Most commercial agriculture is intensive in one or more ways. Forms that rely heavily on industrial methods are often called industrial agriculture, which is characterized by technologies designed to increase yield. Techniques include planting multiple crops per year, reducing the frequency of fallow years, improving cultivars, and mechanising agriculture, all guided by increasingly detailed analysis of growing conditions, including weather, soil, water, weeds, and pests. Modern methods frequently involve increased use of non-biotic inputs, such as fertilizers, plant growth regulators, pesticides, and antibiotics for livestock. Intensive farms are widespread in developed nations and increasingly prevalent worldwide. Most of the meat, dairy products, eggs, fruits, and vegetables available in supermarkets are produced by such farms. Some intensive farms can use sustainable methods, although this typically necessitates higher inputs of labor or lower yields. Sustainably increasing agricultural productivity, especially on smallholdings, is an important way to decrease the amount of land needed for farming and to slow and reverse environmental degradation caused by processes such as deforestation. Intensive animal farming involves large numbers of animals raised on a relatively small area of land, for example by rotational grazing, or sometimes as concentrated animal feeding operations. These methods increase the yields of food and fiber per unit land area compared to those of extensive animal husbandry; concentrated feed is brought to seldom-moved animals, or, with rotational grazing, the animals are repeatedly moved to fresh forage. History Agricultural development in Britain between the 16th century and the mid-19th century saw a massive increase in agricultural productivity and net output. This in turn contributed to unprecedented population growth, freeing up a significant percentage of the workforce, and thereby helped enable the Industrial Revolution. Historians have cited enclosure, mechanization, four-field crop rotation, and selective breeding as the most important innovations. Industrial agriculture arose in the Industrial Revolution. By the early 19th century, agricultural techniques, implements, seed stocks, and cultivars had so improved that yield per land unit was many times that seen in the Middle Ages. The first phase involved a continuing process of mechanization. Horse-drawn machinery such as the McCormick reaper revolutionized harvesting, while inventions such as the cotton gin reduced the cost of processing. During this same period, farmers began to use steam-powered threshers and tractors. In 1892, the first gasoline-powered tractor was successfully developed, and in 1923, the International Harvester Farmall tractor became the first all-purpose tractor, marking an inflection point in the replacement of draft animals with machines. Mechanical harvesters (combines), planters, transplanters, and other equipment were then developed, further revolutionizing agriculture.
These inventions increased yields and allowed individual farmers to manage increasingly large farms. The identification of nitrogen, phosphorus, and potassium (NPK) as critical factors in plant growth led to the manufacture of synthetic fertilizers, further increasing crop yields. In 1909, the Haber–Bosch process for synthesizing ammonia was first demonstrated. NPK fertilizers stimulated the first concerns about industrial agriculture, owing to side effects such as soil compaction, soil erosion, and declines in overall soil fertility, along with health concerns about toxic chemicals entering the food supply. The discovery of vitamins and their role in nutrition, in the first two decades of the 20th century, led to vitamin supplements, which in the 1920s allowed some livestock to be raised indoors, reducing their exposure to adverse natural elements. Following World War II, synthetic fertilizer use increased rapidly. The discovery of antibiotics and vaccines facilitated raising livestock by reducing diseases. Developments in logistics and refrigeration as well as processing technology made long-distance distribution feasible. Integrated pest management is the modern method of keeping pesticide use to more sustainable levels. There are concerns over the sustainability of industrial agriculture, and over the environmental effects of fertilizers and pesticides, which have given rise to the organic movement and have built a market for sustainable intensive farming, as well as funding for the development of appropriate technology. Techniques and technologies Livestock Pasture intensification Pasture intensification is the improvement of pasture soils and grasses to increase the food production potential of livestock systems. It is commonly used to reverse pasture degradation, a process characterized by loss of forage and decreased animal carrying capacity which results from overgrazing, poor nutrient management, and lack of soil conservation. This degradation leads to poor pasture soils with decreased fertility and water availability and increased rates of erosion, compaction, and acidification. Degraded pastures have significantly lower productivity and higher carbon footprints compared to intensified pastures. Management practices which improve soil health and consequently grass productivity include irrigation, soil scarification, and the application of lime, fertilizers, and pesticides. Depending on the productivity goals of the target agricultural system, more involved restoration projects can be undertaken to replace invasive and under-productive grasses with grass species that are better suited to the soil and climate conditions of the region. These intensified grass systems allow higher stocking rates with faster animal weight gain and reduced time to slaughter, resulting in more productive, carbon-efficient livestock systems. Another technique to optimize yield while maintaining the carbon balance is the use of integrated crop-livestock (ICL) and crop-livestock-forestry (ICLF) systems, which combine several ecosystems into one optimized agricultural framework. Correctly performed, such production systems are able to create synergies potentially providing benefits to pastures through optimal plant usage, improved feed and fattening rates, increased soil fertility and quality, intensified nutrient cycling, integrated pest control, and improved biodiversity.
The introduction of certain legume crops to pastures can increase carbon accumulation and nitrogen fixation in soils, while their digestibility helps animal fattening and reduces methane emissions from enteric fermentation. ICLF systems yield beef cattle productivity up to ten times that of degraded pastures; additional crop production from maize, sorghum, and soybean harvests; and greatly reduced greenhouse gas balances due to forest carbon sequestration. In the Twelve Aprils grazing program for dairy production, developed by the USDA-SARE, forage crops for dairy herds are planted into a perennial pasture. Rotational grazing Rotational grazing is a variety of foraging in which herds or flocks are regularly and systematically moved to fresh, rested grazing areas (sometimes called paddocks) to maximize the quality and quantity of forage growth. It can be used with cattle, sheep, goats, pigs, chickens, turkeys, ducks, and other animals. The herds graze one portion of pasture, or a paddock, while allowing the others to recover. Resting grazed lands allows the vegetation to renew energy reserves, rebuild shoot systems, and deepen root systems, resulting in long-term maximum biomass production. Pasture systems alone can allow grazers to meet their energy requirements, but rotational grazing is especially effective because grazers thrive on the more tender younger plant stems. Parasites are also left behind to die off, minimizing or eliminating the need for de-wormers. With the increased productivity of rotational systems, the animals may need less supplemental feed than in continuous grazing systems. Farmers can therefore increase stocking rates (a minimal scheduling sketch illustrating the rest-and-graze arithmetic is given at the end of this article). Concentrated animal feeding operations Intensive livestock farming or "factory farming" is the process of raising livestock in confinement at high stocking density. "Concentrated animal feeding operations" (CAFO), or "intensive livestock operations", can hold large numbers (some up to hundreds of thousands) of cows, hogs, turkeys, or chickens, often indoors. The essence of such farms is the concentration of livestock in a given space. The aim is to provide maximum output at the lowest possible cost and with the greatest level of food safety. The term is often used pejoratively. CAFOs have dramatically increased the production of food from animal husbandry worldwide, both in terms of total food produced and efficiency. Food and water are delivered to the animals, and therapeutic use of antimicrobial agents, vitamin supplements, and growth hormones is often employed. Growth hormones are not used on chickens, nor on any animal in the European Union. Undesirable behaviors, often related to the stress of confinement, led to a search for docile breeds (e.g., with natural dominance behaviors bred out), to physical restraints that stop interaction (such as individual cages for chickens), and to physical modifications (such as the debeaking of chickens to reduce the harm of fighting). The CAFO designation resulted from the 1972 U.S. Federal Clean Water Act, which was enacted to protect and restore lakes and rivers to a "fishable, swimmable" quality. The United States Environmental Protection Agency identified certain animal feeding operations, along with many other types of industry, as "point source" groundwater polluters. These operations were subjected to regulation. In 17 states in the U.S., isolated cases of groundwater contamination were linked to CAFOs. The U.S. federal government acknowledges the waste disposal issue and requires that animal waste be stored in lagoons.
These lagoons can be extremely large. Lagoons not protected with an impermeable liner can leak into groundwater under some conditions, as can runoff from manure used as fertilizer. A lagoon that burst in 1995 released 25 million gallons of nitrous sludge into North Carolina's New River. The spill allegedly killed eight to ten million fish. The large concentration of animals, animal waste, and dead animals in a small space poses ethical issues to some consumers. Animal rights and animal welfare activists have charged that intensive animal rearing is cruel to animals. Crops The Green Revolution transformed farming in many developing countries. It spread technologies that had already existed, but had not been widely used outside of industrialized nations. These technologies included "miracle seeds", pesticides, irrigation, and synthetic nitrogen fertilizer. Seeds In the 1960s and 1970s, scientists created high-yielding varieties of maize, wheat, and rice. These have an increased nitrogen-absorbing potential compared to other varieties. Since cereals that absorbed extra nitrogen would typically lodge (fall over) before harvest, semi-dwarfing genes were bred into their genomes. Norin 10 wheat, a variety developed by Orville Vogel from Japanese dwarf wheat varieties, was instrumental in developing wheat cultivars. IR8, the first widely implemented high-yielding rice to be developed by the International Rice Research Institute, was created through a cross between an Indonesian variety named "Peta" and a Chinese variety named "Dee Geo Woo Gen". With the availability of molecular genetics in Arabidopsis and rice, the mutant genes responsible (reduced height (rht), gibberellin insensitive (gai1) and slender rice (slr1)) have been cloned and identified as cellular signalling components of gibberellic acid, a phytohormone involved in regulating stem growth via its effect on cell division. Photosynthate investment in the stem is reduced dramatically in shorter plants and nutrients become redirected to grain production, amplifying in particular the yield effect of chemical fertilizers. High-yielding varieties outperformed traditional varieties several fold and responded better to the addition of irrigation, pesticides, and fertilizers. Hybrid vigour is utilized in many important crops to greatly increase yields for farmers. However, the advantage is lost for the progeny of the F1 hybrids, meaning that seeds for annual crops need to be purchased every season, increasing costs for farmers and profits for seed producers. Crop rotation Crop rotation or crop sequencing is the practice of growing a series of dissimilar types of crops in the same space in sequential seasons for benefits such as avoiding pathogen and pest buildup that occurs when one species is continuously cropped. Crop rotation also seeks to balance the nutrient demands of various crops to avoid soil nutrient depletion. A traditional component of crop rotation is the replenishment of nitrogen through the use of legumes and green manure in sequence with cereals and other crops. Crop rotation can also improve soil structure and fertility by alternating deep-rooted and shallow-rooted plants. A related technique is to plant multi-species cover crops between commercial crops. This combines the advantages of intensive farming with continuous cover and polyculture. Irrigation Crop irrigation accounts for 70% of the world's fresh water use.
Flood irrigation, the oldest and most common type, typically distributes water unevenly, as parts of a field may receive excess water in order to deliver sufficient quantities to other parts. Overhead irrigation, using center-pivot or lateral-moving sprinklers, gives a much more equal and controlled distribution pattern. Drip irrigation is the most expensive and least-used type, but delivers water to plant roots with minimal losses. Water catchment management measures include recharge pits, which capture rainwater and runoff and use it to recharge groundwater supplies. This helps in the replenishment of groundwater wells and eventually reduces soil erosion. Dammed rivers creating reservoirs store water for irrigation and other uses over large areas. Smaller areas sometimes use irrigation ponds or groundwater. Weed control In agriculture, systematic weed management is usually required, often performed by machines such as cultivators or liquid herbicide sprayers. Herbicides kill specific targets while leaving the crop relatively unharmed. Some of these act by interfering with the growth of the weed and are often based on plant hormones. Weed control through herbicide is made more difficult when the weeds become resistant to the herbicide. Solutions include: cover crops (especially those with allelopathic properties) that out-compete weeds or inhibit their regeneration; multiple herbicides, applied in combination or in rotation; strains genetically engineered for herbicide tolerance; locally adapted strains that tolerate or out-compete weeds; tilling; ground cover such as mulch or plastic; manual removal; mowing; grazing; and burning. Terracing In agriculture, a terrace is a leveled section of a hilly cultivated area, designed as a method of soil conservation to slow or prevent the rapid surface runoff of irrigation water. Often such land is formed into multiple terraces, giving a stepped appearance. The human landscapes of rice cultivation in terraces that follow the natural contours of the escarpments, like contour ploughing, are a classic feature of the island of Bali and the Banaue Rice Terraces in Banaue, Ifugao, Philippines. In Peru, the Inca made use of otherwise unusable slopes by building drystone walls to create terraces known as andenes. Rice paddies A paddy field is a flooded parcel of arable land used for growing rice and other semiaquatic crops. Paddy fields are a typical feature of rice-growing countries of east and southeast Asia, including Malaysia, China, Sri Lanka, Myanmar, Thailand, Korea, Japan, Vietnam, Taiwan, Indonesia, India, and the Philippines. They are also found in other rice-growing regions such as Piedmont (Italy), the Camargue (France), and the Artibonite Valley (Haiti). They can occur naturally along rivers or marshes, or can be constructed, even on hillsides. They require large water quantities for irrigation, much of it from flooding. Flooding provides an environment favourable to the strain of rice being grown, and is hostile to many species of weeds. As the only draft animal species which is comfortable in wetlands, the water buffalo is in widespread use in Asian rice paddies. A recent development in the intensive production of rice is the System of Rice Intensification. Developed in 1983 by the French Jesuit Father Henri de Laulanié in Madagascar, by 2013 the system was in use by between 4 and 5 million smallholder farmers. Aquaculture Aquaculture is the cultivation of the natural products of water (fish, shellfish, algae, seaweed, and other aquatic organisms).
Intensive aquaculture takes place on land using tanks, ponds, or other controlled systems, or in the ocean, using cages. Sustainability Intensive farming practices which are thought to be sustainable have been developed to slow the deterioration of agricultural land and even regenerate soil health and ecosystem services. These developments may fall in the category of organic farming, or the integration of organic and conventional agriculture. Pasture cropping involves planting grain crops directly into grassland without first applying herbicides. The perennial grasses form a living mulch understory to the grain crop, eliminating the need to plant cover crops after harvest. The pasture is intensively grazed both before and after grain production. This intensive system yields equivalent farmer profits (partly from increased livestock forage) while building new topsoil and sequestering up to 33 tons of CO2/ha/year. Biointensive agriculture focuses on maximizing efficiency per unit area and per unit of energy and water input. Agroforestry combines agriculture and orchard/forestry technologies to create more integrated, diverse, productive, profitable, healthy and sustainable land-use systems. Intercropping can increase yields or reduce inputs and thus represents (potentially sustainable) agricultural intensification. However, while total yield per unit land area is often increased, yields of any single crop often decrease. There are also challenges for farmers who rely on farming equipment optimized for monoculture, often resulting in increased labor inputs. Vertical farming is intensive crop production on a large scale in urban centers, in multi-story, artificially-lit structures, for the production of low-calorie foods like herbs, microgreens, and lettuce. An integrated farming system is a progressive, sustainable agriculture system such as zero waste agriculture or integrated multi-trophic aquaculture, which involves the interactions of multiple species. Elements of this integration can include: intentionally introducing flowering plants into agricultural ecosystems to increase the pollen and nectar resources required by the natural enemies of insect pests; using crop rotation and cover crops to suppress nematodes in potatoes; and practising integrated multi-trophic aquaculture, in which the by-products (wastes) from one species are recycled to become inputs (fertilizers, food) for another. Challenges Environmental impact Industrial agriculture uses huge amounts of water, energy, and industrial chemicals, increasing pollution in the arable land, usable water, and atmosphere. Herbicides, insecticides, and fertilizers accumulate in ground and surface waters. Industrial agricultural practices are one of the main drivers of global warming, accounting for 14–28% of net greenhouse gas emissions. Many of the negative effects of industrial agriculture may emerge at some distance from fields and farms. Nitrogen compounds from the Midwest, for example, travel down the Mississippi to degrade coastal fisheries in the Gulf of Mexico, causing so-called oceanic dead zones. Many wild plant and animal species have become extinct on a regional or national scale, and the functioning of agro-ecosystems has been profoundly altered. Agricultural intensification includes a variety of factors, including the loss of landscape elements, increased farm and field sizes, and increased usage of insecticides and herbicides.
The large-scale use of insecticides and herbicides leads to rapidly developing resistance among pests, rendering these chemicals increasingly ineffective. Agrochemicals may also be involved in colony collapse disorder, in which the individual members of bee colonies disappear. (Agricultural production is highly dependent on bees to pollinate many varieties of fruits and vegetables.) Intensive farming creates conditions for parasite growth and transmission that are vastly different from what parasites encounter in natural host populations, potentially altering selection on a variety of traits such as life-history traits and virulence. Some recent epidemic outbreaks have highlighted the association with intensive agricultural farming practices. For example, the infectious salmon anaemia (ISA) virus is causing significant economic loss for salmon farms. The ISA virus is an orthomyxovirus with two distinct clades, one European and one North American, that diverged before 1900 (Krossøy et al. 2001). This divergence suggests that an ancestral form of the virus was present in wild salmonids prior to the introduction of cage-cultured salmonids, with the virus subsequently spreading through vertical transmission (parent to offspring). Intensive monoculture increases the risk of failures due to pests, adverse weather and disease. Social impact A study for the U.S. Office of Technology Assessment concluded that regarding industrial agriculture, there is a "negative relationship between the trend toward increasing farm size and the social conditions in rural communities" on a "statistical level". Agricultural monoculture can entail social and economic risks. See also Convertible husbandry Dryland farming Environmental impact of agriculture Green Revolution Industrial crop Pekarangan Small-scale agriculture Intensive animal farming References External links Commercial farming
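As promised in the rotational grazing section above, the rest-and-graze arithmetic can be illustrated with a short script. This is a minimal editorial sketch, not an agronomic tool: the rule of thumb paddocks = rest/graze + 1 and the round-number periods below are illustrative assumptions, not values drawn from the sources discussed in this article.

```python
import math

# Minimal rotational-grazing sketch (illustrative assumptions only).
# Premise: after `graze_days` of grazing, a paddock needs `rest_days`
# of recovery; the herd occupies exactly one paddock at a time.

def min_paddocks(graze_days: int, rest_days: int) -> int:
    """Smallest paddock count giving each paddock at least `rest_days`
    of rest between grazings (rule of thumb: rest/graze + 1, rounded up)."""
    return math.ceil(rest_days / graze_days) + 1

def schedule(paddocks: int, graze_days: int, total_days: int):
    """Yield (day, paddock) pairs for a simple fixed rotation."""
    for day in range(total_days):
        yield day, (day // graze_days) % paddocks

if __name__ == "__main__":
    n = min_paddocks(graze_days=3, rest_days=30)  # hypothetical periods
    print(n)  # 11 paddocks; each then rests 10 * 3 = 30 days per cycle
    for day, paddock in schedule(n, graze_days=3, total_days=9):
        print(f"day {day}: herd in paddock {paddock}")
```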
Intensive farming
[ "Chemistry" ]
4,256
[ "Eutrophication", "Intensive farming" ]
152,776
https://en.wikipedia.org/wiki/Organ%20%28biology%29
In a multicellular organism, an organ is a collection of tissues joined in a structural unit to serve a common function. In the hierarchy of life, an organ lies between tissue and an organ system. Tissues are formed from cells of the same type that act together to perform a function, and tissues of different types combine to form an organ with a specific function. The intestinal wall, for example, is formed by epithelial tissue and smooth muscle tissue. Two or more organs working together in the execution of a specific body function form an organ system, also called a biological system or body system. An organ's tissues can be broadly categorized as parenchyma, the functional tissue, and stroma, the structural tissue with supportive, connective, or ancillary functions. For example, the gland tissue that makes the hormones is the parenchyma, whereas the stroma includes the nerves that innervate the parenchyma, the blood vessels that oxygenate and nourish it and carry away its metabolic wastes, and the connective tissues that provide a suitable place for it to be situated and anchored. The main tissues that make up an organ tend to have common embryologic origins, such as arising from the same germ layer. Organs exist in most multicellular organisms. In single-celled eukaryotic organisms, the functional analogue of an organ is known as an organelle. In plants, there are three main organs: roots, stems, and leaves. The number of organs in any organism depends on the definition used. The human body is often said to contain approximately 79 organs, but this count is debated, as not all scientists agree on what counts as an organ. Animals Except for placozoans, multicellular animals including humans have a variety of organ systems. These specific systems are widely studied in human anatomy. The functions of these organ systems often share significant overlap. For instance, the nervous and endocrine system both operate via a shared organ, the hypothalamus. For this reason, the two systems are combined and studied as the neuroendocrine system. The same is true for the musculoskeletal system because of the relationship between the muscular and skeletal systems. The major organ systems are:
Cardiovascular system: pumping and channeling blood to and from the body and lungs with heart, blood and blood vessels.
Digestive system: digestion and processing food with salivary glands, esophagus, stomach, liver, gallbladder, pancreas, intestines, colon, mesentery, rectum and anus.
Endocrine system: communication within the body using hormones made by endocrine glands such as the hypothalamus, pituitary gland, pineal body or pineal gland, thyroid, parathyroids and adrenals, i.e., adrenal glands.
Excretory system: kidneys, ureters, bladder and urethra involved in fluid balance, electrolyte balance and excretion of urine.
Lymphatic system: structures involved in the transfer of lymph between tissues and the blood stream, the lymph and the nodes and vessels that transport it, including the immune system: defending against disease-causing agents with leukocytes, tonsils, adenoids, thymus and spleen.
Integumentary system: skin, hair and nails of mammals; also scales of fish, reptiles, and birds, and feathers of birds.
Muscular system: movement with muscles.
Nervous system: collecting, transferring and processing information with brain, spinal cord and nerves.
Reproductive system: the sex organs, such as ovaries, oviducts, uterus, vulva, vagina, testicles, vasa deferentia, seminal vesicles, prostate and penis.
Respiratory system: the organs used for breathing, the pharynx, larynx, trachea, bronchi, lungs and diaphragm.
Skeletal system: structural support and protection with bones, cartilage, ligaments and tendons.
Viscera In the study of anatomy, viscera (singular: viscus) refers to the internal organs of the abdominal, thoracic, and pelvic cavities. The abdominal organs may be classified as solid organs or hollow organs. The solid organs are the liver, pancreas, spleen, kidneys, and adrenal glands. The hollow organs of the abdomen are the stomach, intestines, gallbladder, bladder, and rectum. In the thoracic cavity, the heart is a hollow, muscular organ. Splanchnology is the study of the viscera. The term "visceral" is contrasted with the term "parietal", meaning "of or relating to the wall of a body part, organ or cavity". The two terms are often used in describing a membrane or piece of connective tissue, referring to the opposing sides. Origin and evolution The organ level of organisation in animals can be first detected in flatworms and the more derived phyla, i.e. the bilaterians. The less-advanced taxa (i.e. Placozoa, Porifera, Ctenophora and Cnidaria) do not show unification of their tissues into organs. More complex animals are composed of different organs, which have evolved over time. For example, the liver and heart evolved in the chordates about 550–500 million years ago, while the gut and brain are even more ancient, arising in the ancestor of vertebrates, insects, molluscs, and worms about 700–650 million years ago. Given the ancient origin of most vertebrate organs, researchers have looked for model systems where organs have evolved more recently, and ideally have evolved multiple times independently. An outstanding model for this kind of research is the placenta, which has evolved more than 100 times independently in vertebrates, has evolved relatively recently in some lineages, and exists in intermediate forms in extant taxa. Studies on the evolution of the placenta have identified a variety of genetic and physiological processes that contribute to the origin and evolution of organs; these include the re-purposing of existing animal tissues, the acquisition of new functional properties by these tissues, and novel interactions of distinct tissue types. Plants The study of plant organs is covered in plant morphology. Organs of plants can be divided into vegetative and reproductive. Vegetative plant organs include roots, stems, and leaves. The reproductive organs are variable. In flowering plants, they are represented by the flower, seed and fruit. In conifers, the organ that bears the reproductive structures is called a cone. In other divisions (phyla) of plants, the reproductive organs are called strobili, in Lycopodiophyta, or simply gametophores in mosses. Common organ system designations in plants include the differentiation of shoot and root. All parts of the plant above ground (in non-epiphytes), including the functionally distinct leaf and flower organs, may be classified together as the shoot organ system. The vegetative organs are essential for maintaining the life of a plant. While there can be 11 organ systems in animals, there are far fewer in plants, where some organs perform the vital functions, such as photosynthesis, while the reproductive organs are essential in reproduction. However, if there is asexual vegetative reproduction, the vegetative organs are those that create the new generation of plants (see clonal colony).
Society and culture Many societies have a system for organ donation, in which organs from a living or deceased donor are transplanted into a person with a failing organ. The transplantation of larger solid organs often requires immunosuppression to prevent organ rejection or graft-versus-host disease. There is considerable interest throughout the world in creating laboratory-grown or artificial organs. Organ transplants Beginning in the 20th century, organ transplants began to take place as scientists learned more about the anatomy of organs. Transplants came relatively late in medical history because the procedures were often dangerous and difficult. Both the source and method of obtaining the organ to transplant are major ethical issues to consider, and because organs as resources for transplant are always more limited than demand for them, various notions of justice, including distributive justice, have been developed in the ethical analysis. This situation continues as long as transplantation relies upon organ donors rather than technological innovation, testing, and industrial manufacturing. History The English word "organ" dates back to the twelfth century, when it referred to any musical instrument. By the late 14th century, the musical term's meaning had narrowed to refer specifically to the keyboard-based instrument. At the same time, a second meaning arose, in reference to a "body part adapted to a certain function". Plant organs are made from more than one type of tissue; the three tissue types are ground, vascular, and dermal. Two or more organs working together form an organ system. The adjective visceral, also splanchnic, is used for anything pertaining to the internal organs. Historically, viscera of animals were examined by Roman pagan priests like the haruspices or the augurs in order to divine the future by their shape, dimensions or other factors. This practice remains an important ritual in some remote, tribal societies. Antiquity Aristotle used the word frequently in his philosophy, both to describe the organs of plants and of animals (e.g. the roots of a tree, the heart or liver of an animal), because in ancient Greek the word 'organon' means 'tool', and Aristotle believed that the organs of the body were tools by means of which we can do things. For similar reasons, his logical works, taken as a whole, are referred to as the Organon, because logic is a tool for philosophical thinking. Earlier thinkers, such as those who wrote texts in the Hippocratic corpus, generally did not believe that there were organs of the body, but only different parts of the body. Some alchemists (e.g. Paracelsus) adopted the Hermetic Qabalah assignment of the seven vital organs to the seven classical planets. Chinese traditional medicine recognizes eleven organs, associated with the five Chinese traditional elements and with yin and yang. The Chinese associated the five elements with the five planets (Jupiter, Mars, Venus, Saturn, and Mercury), in a manner similar to the way the classical planets were associated with different metals. The yin and yang distinction approximates the modern notion of solid and hollow organs. See also List of organs of the human body Organoid Organ-on-a-chip Situs inversus References External links Levels of organization (Biology)
Organ (biology)
[ "Biology" ]
2,252
[ "Organ systems", "Levels of organization (Biology)" ]
152,900
https://en.wikipedia.org/wiki/Finitism
Finitism is a philosophy of mathematics that accepts the existence only of finite mathematical objects. It is best understood in comparison to the mainstream philosophy of mathematics, where infinite mathematical objects (e.g., infinite sets) are accepted as existing. Main idea The main idea of finitistic mathematics is not accepting the existence of infinite objects such as infinite sets. While all natural numbers are accepted as existing, the set of all natural numbers is not considered to exist as a mathematical object. Therefore, quantification over infinite domains is not considered meaningful. The mathematical theory often associated with finitism is Thoralf Skolem's primitive recursive arithmetic (a sample of its finitistic style of definition is given at the end of this article). History The introduction of infinite mathematical objects occurred a few centuries ago, when the use of infinite objects was already a controversial topic among mathematicians. The issue entered a new phase when Georg Cantor in 1874 introduced what is now called naive set theory and used it as a base for his work on transfinite numbers. When paradoxes such as Russell's paradox, Berry's paradox and the Burali-Forti paradox were discovered in Cantor's naive set theory, the issue became a heated topic among mathematicians. There were various positions taken by mathematicians. All agreed about finite mathematical objects such as natural numbers. However, there were disagreements regarding infinite mathematical objects. One position was the intuitionistic mathematics that was advocated by L. E. J. Brouwer, which rejected the existence of infinite objects until they are constructed. Another position was endorsed by David Hilbert: finite mathematical objects are concrete objects, infinite mathematical objects are ideal objects, and accepting ideal mathematical objects does not cause a problem regarding finite mathematical objects. More formally, Hilbert believed that it is possible to show that any theorem about finite mathematical objects that can be obtained using ideal infinite objects can also be obtained without them. Therefore, allowing infinite mathematical objects would not cause a problem regarding finite objects. This led to Hilbert's program of proving both consistency and completeness of set theory using finitistic means, as this would imply that adding ideal mathematical objects is conservative over the finitistic part. Hilbert's views are also associated with the formalist philosophy of mathematics. Hilbert's goal of proving the consistency and completeness of set theory or even arithmetic through finitistic means turned out to be an impossible task due to Kurt Gödel's incompleteness theorems. However, Harvey Friedman's grand conjecture would imply that most mathematical results are provable using finitistic means. Hilbert did not give a rigorous explanation of what he considered finitistic and referred to as elementary. However, based on his work with Paul Bernays, some experts, such as William Tait, have argued that primitive recursive arithmetic can be considered an upper bound on what Hilbert considered finitistic mathematics. As a result of Gödel's theorems, as it became clear that there is no hope of proving both the consistency and completeness of mathematics, and with the development of seemingly consistent axiomatic set theories such as Zermelo–Fraenkel set theory, most modern mathematicians do not focus on this topic. Classical finitism vs.
strict finitism In her book The Philosophy of Set Theory, Mary Tiles characterized those who allow potentially infinite objects as classical finitists, and those who do not allow potentially infinite objects as strict finitists: for example, a classical finitist would allow statements such as "every natural number has a successor" and would accept the meaningfulness of infinite series in the sense of limits of finite partial sums, while a strict finitist would not. On this reading, the written history of mathematics was classically finitist until Cantor created the hierarchy of transfinite cardinals at the end of the 19th century. Views regarding infinite mathematical objects Leopold Kronecker remained a strident opponent of Cantor's set theory, famously remarking that "God made the integers; all else is the work of man." Reuben Goodstein was another proponent of finitism. Some of his work involved building up to analysis from finitist foundations. Although he denied it, much of Ludwig Wittgenstein's writing on mathematics has a strong affinity with finitism. If finitists are contrasted with transfinitists (proponents of e.g. Georg Cantor's hierarchy of infinities), then Aristotle, too, may be characterized as a finitist. Aristotle especially promoted the potential infinity as a middle option between strict finitism and actual infinity (the latter being an actualization of something never-ending in nature, in contrast with the Cantorist actual infinity consisting of the transfinite cardinal and ordinal numbers, which have nothing to do with the things in nature). Other related philosophies of mathematics Ultrafinitism (also known as ultraintuitionism) has an even more conservative attitude towards mathematical objects than finitism, and raises objections to the existence of finite mathematical objects when they are too large. Towards the end of the 20th century, John Penn Mayberry developed a system of finitary mathematics which he called "Euclidean Arithmetic". The most striking tenet of his system is a complete and rigorous rejection of the special foundational status normally accorded to iterative processes, including in particular the construction of the natural numbers by the iteration "+1". Consequently, Mayberry is in sharp dissent from those who would seek to equate finitary mathematics with Peano arithmetic or any of its fragments such as primitive recursive arithmetic. See also Temporal finitism Transcomputational problem Rational trigonometry Notes Further reading References Constructivism (mathematics) Infinity Epistemological theories
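As promised above, here is a sample of the style of definition used in primitive recursive arithmetic. This display is an editorial illustration appended to the entry, using the standard textbook recursion equations for addition and multiplication; it is not drawn from the works cited here.

```latex
% Addition and multiplication defined by primitive recursion,
% from zero and the successor operation S alone:
\[ \begin{aligned}
  x + 0      &= x,  &\qquad x + S(y)     &= S(x + y),\\
  x \cdot 0  &= 0,  &\qquad x \cdot S(y) &= x \cdot y + x.
\end{aligned} \]
% Each particular value unwinds in finitely many steps, e.g.
\[ 2 + 2 = 2 + S(1) = S(2+1) = S(2+S(0)) = S(S(2+0)) = S(S(2)) = 4, \]
% with no quantification over a completed infinite set of numbers.
```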
Finitism
[ "Mathematics" ]
1,125
[ "Mathematical objects", "Mathematical logic", "Infinity", "Constructivism (mathematics)" ]
152,928
https://en.wikipedia.org/wiki/Bluebell%20wood
A bluebell wood is a woodland that in springtime has a carpet of flowering bluebells (Hyacinthoides non-scripta) underneath a newly forming leaf canopy. The thicker the summer canopy, the more the competitive ground-cover is suppressed, encouraging a dense carpet of bluebells, whose leaves mature and die down by early summer. Other common woodland plants which accompany bluebells include the yellow rattle and the wood anemone. Locations Bluebell woods are found in all parts of Great Britain and Ireland, as well as elsewhere in Europe. Bluebells are a common indicator species for ancient woodlands, so bluebell woods are likely to date back to at least 1600. Introduced bluebells can also carpet woodland in places where they have become heavily naturalised, such as the Pacific Northwest, the Mid-Atlantic region of the United States, and British Columbia. Literature Gerard Manley Hopkins, an English poet, was very keen on the plant, as revealed by these lines of his poem "May Magnificat": And azuring-over greybell makes Wood banks and brakes wash wet like lakes In his journal entry for 9 May 1871, Hopkins says: In the little wood opposite the light they stood in blackish spreads or sheddings like spots on a snake. The heads are then like thongs and solemn in grain and grape-colour. But in the clough through the light they come in falls of sky-colour washing the brows and slacks of the ground with vein-blue, thickening at the double, vertical themselves and the young grass and brake-fern combed vertical, but the brake struck the upright of all this with winged transomes. It was a lovely sight. - The bluebells in your hand baffle you with their inscape, made to every sense. If you draw your fingers through them they are lodged and struggle with a shock of wet heads; the long stalks rub and click and flatten to a fan on one another like your fingers themselves would when you passed the palms hard across one another, making a brittle rub and jostle like the noise of a hurdle strained by leaning against; then there is the faint honey smell and in the mouth the sweet gum when you bite them. See also Ancient woodland In and Out the Dusty Bluebells - children's rhyme and dance. References Forests
Bluebell wood
[ "Biology" ]
472
[ "Forests", "Ecosystems" ]
152,952
https://en.wikipedia.org/wiki/Portuguese%20man%20o%27%20war
The Portuguese man o' war (Physalia physalis), also known as the man-of-war or bluebottle, is a marine hydrozoan found in the Atlantic Ocean and the Indian Ocean. It is considered to be the same species as the Pacific man o' war or bluebottle, which is found mainly in the Pacific Ocean. The Portuguese man o' war is the only species in the genus Physalia, which in turn is the only genus in the family Physaliidae. The Portuguese man o' war is a conspicuous member of the neuston, the community of organisms that live at the surface of the ocean. It has numerous microscopic venomous cnidocytes which deliver a painful sting powerful enough to kill fish, and even, in some cases, humans. Although it superficially resembles a jellyfish, the Portuguese man o' war is in fact a siphonophore. Like all siphonophores, it is a colonial organism, made up of many smaller units called zooids. Although they are morphologically quite different, all of the zooids in a single specimen are genetically identical. These different types of zooids fulfill specialized functions, such as hunting, digestion and reproduction, and together they allow the colony to operate as a single individual. Etymology The name man o' war comes from the man-of-war, a sailing warship, and the animal's resemblance to the Portuguese version (the caravel) at full sail. Taxonomy The bluebottle, Pacific man o' war or Indo-Pacific Portuguese man o' war, distinguished by a smaller float and a single long fishing tentacle, was originally considered a separate species in the same genus (P. utriculus). The name was synonymized with P. physalis in 2007, and it is now considered a regional form of the same species. Coloniality The man o' war is described as a colonial organism because the individual zooids in a colony are evolutionarily derived from either polyps or medusae, i.e. the two basic body plans of cnidarians. Both of these body plans comprise entire individuals in non-colonial cnidarians (for example, a jellyfish is a medusa, while a sea anemone is a polyp). All zooids in a man o' war develop from the same single fertilized egg and are therefore genetically identical. They remain physiologically connected throughout life, and essentially function as organs in a shared body. Hence, a Portuguese man o' war constitutes a single organism from an ecological perspective, but is made up of many individuals from an embryological perspective. Most species of siphonophores are fragile and difficult to collect intact. However, P. physalis is the most accessible, conspicuous, and robust of the siphonophores, and much has been written about this species. The development, morphology, and colony organization of P. physalis is very different from that of other siphonophores. Its structure, embryological development, and histology have been examined by several authors. These studies provide an important foundation for understanding the morphology, cellular anatomy, and development of this species. Description Like all siphonophores, P. physalis is a colonial organism: each animal is composed of many smaller units (zooids) that hang in clusters from under a large, gas-filled structure called the pneumatophore. Seven different types of zooids have been described in the man o' war, and all of them are interdependent for survival, performing different functions such as digestion (gastrozooids), reproduction (gonozooids) and hunting (dactylozooids). A fourth type of zooid is the pneumatophore.
Three of these types of zooids are of the medusoid type (gonophores, nectophores, and vestigial nectophores), while the remaining four are of the polypoid type (free gastrozooids, tentacle-bearing zooids, gonozooids and gonopalpons). However, naming and categorization of zooids varies between authors, and much of the embryonic and evolutionary relationships of zooids remains unclear. The pneumatophore or bladder is the most conspicuous part of the man o' war. This large, gas-filled, translucent structure is pink, purple or blue in color; it is elongated and can rise well above the water. The pneumatophore functions as both a flotation device and a sail, allowing the animal to move with the prevailing wind. The gas in the pneumatophore is mostly air which diffuses in from the surrounding atmosphere, but it also contains as much as 13% carbon monoxide, which is actively produced by the animal. In the event of a surface attack, the pneumatophore can be deflated, allowing the animal to temporarily submerge. New zooids are added by budding as the colony grows. Long tentacles hang below the float as the animal drifts, fishing for prey to sting and drag up to its digestive zooids. The colony hunts and feeds through the cooperation of two types of zooids: tentacle-bearing zooids known as dactylozooids (or palpons), and gastrozooids. The palpons are equipped with long trailing tentacles. Each tentacle bears tiny, coiled, thread-like structures called nematocysts. Nematocysts trigger and inject venom on contact, stinging, paralyzing, and killing molluscs and fishes. Large groups of Portuguese man o' war, sometimes over 1,000 individuals, may deplete fisheries. Contraction of the tentacles drags the prey upward and into range of the gastrozooids. The gastrozooids surround and digest the food by secreting digestive enzymes. P. physalis typically has multiple stinging tentacles, but a regional form (previously known as a separate species, P. utriculus) has only a single stinging tentacle. The main reproductive zooids, the gonophores, are situated on branching structures called gonodendra. Gonophores produce sperm or eggs. Besides gonophores, each gonodendron also contains several other types of specialized zooids: gonozooids (which are accessory gastrozooids), nectophores (which have been speculated to allow detached gonodendra to swim), and vestigial nectophores (also called jelly polyps; the function of these is unclear). Life cycle Man o' war individuals are dioecious, meaning each colony is either male or female. Gonophores producing either sperm or eggs (depending on the sex of the colony) sit on a tree-like structure called a gonodendron, which is believed to drop off from the colony during reproduction. Mating takes place primarily in the autumn, when eggs and sperm are shed from gonophores into the water. As neither fertilization nor early development has been directly observed in the wild, it is not yet known at what depth these occur. A fertilized man o' war egg develops into a planula that buds off new zooids as it grows, gradually forming a new colony. This development initially occurs under the water, and has been reconstructed by comparing different stages of planulae collected at sea. The first two structures to emerge are the pneumatophore (sail) and a single, early feeding zooid called a protozooid. Later, gastrozooids and tentacle-bearing zooids are added.
Eventually, the growing pneumatophore becomes buoyant enough to carry the immature colony on the surface of the water. Ecology Predators and prey The Portuguese man o' war is a carnivore. Using its venomous tentacles, it traps and paralyzes its prey while reeling it inwards to its digestive polyps. It typically feeds on small fish, molluscs, shrimp and other small crustaceans, and zooplankton. The organism has few predators; one example is the loggerhead sea turtle, which feeds on the Portuguese man o' war as a common part of its diet. The turtle's skin, including that of its tongue and throat, is too thick for the stings to penetrate. Also, the blue sea slug specializes in feeding on the Portuguese man o' war, as does the violet sea snail. The ocean sunfish's diet, once thought to consist mainly of jellyfish, has been found to include many species, including the Portuguese man o' war. The man-of-war fish, Nomeus gronovii, is a driftfish native to the Atlantic, Pacific and Indian Oceans. It is notable for its ability to live within the deadly tentacles of the Portuguese man o' war, upon whose tentacles and gonads it feeds. Rather than using mucus to prevent nematocysts from firing, as is seen in some of the clownfish sheltering among sea anemones, the man-of-war fish appears to use highly agile swimming to physically avoid tentacles. The fish has a very high number of vertebrae (41), which may add to its agility, and primarily uses its pectoral fins for swimming, a feature of fish that specialize in maneuvering in tight spaces. It also has a complex skin design and at least one antibody to the man o' war's toxins. Although the fish seems to be 10 times more resistant to the toxin than other fish, it can be stung by the dactylozooids (large tentacles), which it actively avoids. The smaller gonozooids do not seem to sting the fish, and the fish is reported to frequently nibble on these tentacles. Commensalism and symbiosis The Portuguese man o' war is often found with a variety of other marine fish, including yellow jack. These fish benefit from the shelter from predators provided by the stinging tentacles, and for the Portuguese man o' war, the presence of these species may attract other fish to eat. The blanket octopus is immune to the venom of the Portuguese man o' war. Individuals have been observed to carry broken man o' war tentacles, which males and immature females rip off and use for offensive and defensive purposes. Venom The stinging, venom-filled nematocysts in the tentacles of the Portuguese man o' war can paralyze small fish and other prey. Detached tentacles and dead specimens (including those that wash up on shore) can sting just as painfully as those of the live organism in the water and may remain potent for hours or even days after the death of the organism or the detachment of the tentacle. Stings usually cause severe pain to humans, lasting one to three hours. Red, whip-like welts appear on the skin that last two or three days after the sting. In some cases, the venom may travel to the lymph nodes and may cause symptoms that mimic an allergic reaction, including swelling of the larynx, airway blockage, cardiac distress and shortness of breath. Other symptoms may include fever, circulatory shock and, in extreme cases, even death, although this is extremely rare. Medical attention for those exposed to large numbers of tentacles may become necessary to relieve pain or open airways if the pain becomes excruciating or lasts for more than three hours, or if breathing becomes difficult.
Instances in which the stings completely surround the trunk of a young child are among those that may be fatal. The species is responsible for up to 10,000 human stings in Australia each summer, particularly on the east coast, with some others occurring off the coast of South Australia and Western Australia. Treatment of stings Stings from a Portuguese man o' war can result in severe dermatitis characterized by long, thin, open wounds that resemble those caused by a whip. These are not caused by any impact or cutting action, but by irritating urticariogenic substances in the tentacles. Treatment for sting pain is immersion in hot water for 20 minutes. The stinging cells of the box jellyfish react differently from those of the Portuguese man o' war: box jellyfish cnidocytes are inhibited by the application of vinegar, whereas the man o' war's nematocysts can discharge more venom if vinegar is applied. Distribution The species is found throughout the world's oceans, mainly in tropical and subtropical regions, but occasionally also in temperate regions. Habitat P. physalis is a member of the neuston (the floating community of organisms that live at the interface between water and air). This community is exposed to a unique set of environmental conditions including prolonged exposure to intense ultraviolet light, risk of desiccation, and rough sea conditions. The gas-filled bladder, or pneumatophore, remains at the surface, while the remainder is submerged. The animal has no means of propulsion; it moves passively, driven by the winds, currents, and tides. Winds can drive them into bays or onto beaches. Often, finding a single Portuguese man o' war is followed by finding many others in the vicinity. The Portuguese man o' war is well known to beachgoers for the painful stings delivered by its tentacles. Because they can sting while beached, the discovery of a man o' war washed up on a beach may lead to the closure of the beach. Drifting dynamics P. physalis uses a float filled with carbon monoxide and air as a sail to travel by wind for thousands of miles, dragging behind long tentacles that deliver a deadly venomous sting to fish. This sailing ability, combined with a painful sting and a life cycle with seasonal blooms, results in periodic mass beach strandings and occasional human envenomations, making P. physalis the most infamous of the siphonophores. Despite being a common occurrence, the origin of the man o' war or bluebottle before reaching the coastline is not well understood, and neither is the way it drifts at the surface of the ocean. Left- and right-handedness The Portuguese man o' war is asymmetrically shaped: the zooids hang down from either the right or left side of the midline of the pneumatophore or bladder. The pneumatophore can be oriented towards the left or the right. This phenomenon may be an adaptation that prevents an entire population from being washed on shore to die. The "left-handed" animals sail to the right of the wind, while the "right-handed" animals sail to the left. The wind will always push the two types in opposite directions, so at most half the population will be pushed towards the coast. Regional populations can have substantial differences in float size and the number of tentacles used for hunting. The regional form previously known as P. utriculus has a smaller bladder and a single long hunting tentacle.
In comparison, the typical man o' war has a larger float and several hunting tentacles that can reach considerable lengths in mature colonies when fully extended. When combined with the trailing action of the tentacles, this left- or right-handedness makes the colony sail sideways relative to the wind, by about 45° in either direction. Colony handedness has therefore been theorized to influence man o' war migration, with left-handed or right-handed colonies potentially being more likely to drift down particular respective sea routes. Handedness develops early in the colony's life, while it is still living below the surface of the sea. Mathematical modelling Since they have no propulsion system, the movement of the man o' war can be modelled mathematically by calculating the forces acting on it, or by advecting virtual particles in ocean and atmospheric circulation models. Earlier studies modelled the movement of the man o' war with Lagrangian particle tracking to explain major beaching events. In 2017, Ferrer and Pastor were able to estimate the region of origin of a significant beaching event on the southeastern Bay of Biscay. They ran a Lagrangian model backwards in time, using wind velocity and a wind drag coefficient as drivers of the man o' war motion. They found that the region of origin was the North Atlantic subtropical gyre. In 2015, Prieto et al. included both the effect of the surface currents and wind to predict the initial colony position prior to major beaching events in the Mediterranean. This model assumed the man o' war was advected by the surface currents, with the effect of the wind being added with a much higher wind drag coefficient of 10%. Similarly, in 2020, Headlam et al. used beaching and offshore observations to identify a region of origin, using the joint effects of surface currents and wind drag, for the largest mass man o' war beaching on the Irish coastline in over 150 years. These earlier studies used numerical models in combination with simple assumptions to calculate the drift of this species, excluding complex drifting dynamics. In 2021, Lee et al. provided a parameterisation for Lagrangian modelling of the bluebottle by considering the similarities between the bluebottle and a sailboat. This allowed them to compute the hydrodynamic and aerodynamic forces acting on the bluebottle and use an equilibrium condition to create a generalised model for calculating the drifting speed and course of the bluebottle under any wind and ocean current conditions. (An illustrative code sketch of this kind of drift parameterisation is given at the end of this article.) Gallery See also Chondrophore References External links Siphonophores.org General information Portuguese Man-of-War National Geographic Bluebottle Life In The Fast Lane PortugueseManOfWar.com Physalia physalis discussed on RNZ Critter of the Week, 24 December 2021. Physaliidae Animals described in 1758 Cnidarians of the Atlantic Ocean Cnidarians of the Caribbean Sea Cnidarians of the Indian Ocean Cnidarians of the Pacific Ocean Taxa named by Carl Linnaeus Venomous animals Colonial animals
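As promised in the mathematical modelling section above, the wind-plus-current drift parameterisation can be sketched in code. This is an illustrative editorial sketch, not an implementation of any cited model: the Euler time-stepping, the 4% wind-drag coefficient, and the fixed 45-degree handedness deviation are all assumptions chosen only to show the general form velocity = current + alpha × (rotated wind).

```python
# Illustrative Lagrangian drift sketch for a man o' war colony.
# Assumed model form (simplified, not taken from the cited studies):
#   velocity = surface_current + alpha * R(theta) @ wind
# where alpha is a wind-drag coefficient and R(theta) rotates the wind
# vector to mimic the ~45 degree sailing deviation of a handed colony.
import numpy as np

def rotate(v, theta_deg):
    """Rotate a 2-D vector counter-clockwise by theta_deg degrees."""
    t = np.radians(theta_deg)
    R = np.array([[np.cos(t), -np.sin(t)],
                  [np.sin(t),  np.cos(t)]])
    return R @ v

def drift(pos, current, wind, alpha=0.04, handedness=+1, hours=24, dt=3600.0):
    """Euler-integrate position (metres) under steady current and wind.

    alpha and the 45-degree deviation are illustrative choices;
    handedness=+1 rotates the wind vector counter-clockwise, -1 clockwise.
    """
    track = [np.asarray(pos, dtype=float)]
    sail = alpha * rotate(np.asarray(wind, dtype=float), handedness * 45.0)
    v = np.asarray(current, dtype=float) + sail
    for _ in range(hours):
        track.append(track[-1] + v * dt)
    return np.array(track)

# Example: 0.2 m/s eastward current, 8 m/s northward wind.
path = drift(pos=(0.0, 0.0), current=(0.2, 0.0), wind=(0.0, 8.0))
print(path[-1] / 1000.0)  # displacement after 24 h, in km
```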
Portuguese man o' war
[ "Biology" ]
3,712
[ "Colonial animals", "Animals" ]
152,958
https://en.wikipedia.org/wiki/Hexokinase
A hexokinase is an enzyme that irreversibly phosphorylates hexoses (six-carbon sugars), forming hexose phosphate. In most organisms, glucose is the most important substrate for hexokinases, and glucose-6-phosphate is the most important product. Hexokinase possesses the ability to transfer an inorganic phosphate group from ATP to a substrate. Hexokinases should not be confused with glucokinase, which is a specific hexokinase found in the liver. All hexokinases are capable of phosphorylating several hexoses, but hexokinase IV (D) is often misleadingly called glucokinase, though it is no more specific for glucose than the other mammalian isoenzymes. Variation Genes that encode hexokinase have been discovered in every domain of life, and exist among a variety of species that range from bacteria, yeast, and plants to humans and other vertebrates. The enzymes from yeast, plants and vertebrates all show clear sequence evidence of homology, but those of bacteria may not be related. They are categorized as actin fold proteins, sharing a common ATP binding site core that is surrounded by more variable sequences which determine substrate affinities and other properties. Several hexokinase isoenzymes that provide different functions can occur in a single species. Reaction The intracellular reactions mediated by hexokinases can be typified as: Hexose-CH2OH + MgATP^2− → Hexose-CH2O-PO3^2− + MgADP^− + H^+ where hexose-CH2OH represents any of several hexoses (like glucose) that contain an accessible -CH2OH moiety. Consequences of hexose phosphorylation Phosphorylation of a hexose such as glucose often commits it to a number of intracellular metabolic processes, such as glycolysis or glycogen synthesis. This is because phosphorylated hexoses are charged, and thus more difficult to transport out of a cell. In patients with essential fructosuria, metabolism of fructose by hexokinase to fructose-6-phosphate is the primary method of metabolizing dietary fructose; this pathway is not significant in normal individuals. Size of different isoforms Most bacterial hexokinases are approximately 50 kDa in size. Multicellular organisms including plants and animals often have more than one hexokinase isoform. Most are about 100 kDa in size and consist of two halves (N and C terminal), which share much sequence homology. This suggests an evolutionary origin by duplication and fusion of a 50 kDa ancestral hexokinase similar to those of bacteria. Types of mammalian hexokinase There are four important mammalian hexokinase isozymes (EC 2.7.1.1) that vary in subcellular locations and kinetics with respect to different substrates and conditions, and physiological function. They were designated hexokinases A, B, C, and D on the basis of their electrophoretic mobility. The alternative names hexokinases I, II, III, and IV (respectively) proposed later are widely used. Hexokinases I, II, and III Hexokinases I, II, and III are referred to as low-Km isoenzymes because of a high affinity for glucose (Km below 1 mM). Hexokinases I and II follow Michaelis-Menten kinetics at physiological concentrations of substrates. All three are strongly inhibited by their product, glucose-6-phosphate. Molecular masses are around 100 kDa. Each consists of two similar 50 kDa halves, but only in hexokinase II do both halves have functional active sites. Hexokinase I/A is found in all mammalian tissues, and is considered a "housekeeping enzyme," unaffected by most physiological, hormonal, and metabolic changes.
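The Michaelis-Menten behaviour described above for the low-Km isoenzymes can be illustrated with a short calculation. The sketch below is illustrative only: the rate law itself is standard, but the Km values and the normalised Vmax are example numbers chosen for clarity, not measured constants for any particular isoenzyme.

```python
def michaelis_menten_rate(s, vmax, km):
    """Michaelis-Menten rate law: v = Vmax * [S] / (Km + [S])."""
    return vmax * s / (km + s)

# Compare a low-Km, hexokinase-like enzyme with a high-half-saturation,
# glucokinase-like enzyme at a typical 5 mM glucose concentration.
# Both Km values and the unit Vmax are illustrative assumptions.
for km in (0.1, 8.0):  # mM
    v = michaelis_menten_rate(5.0, 1.0, km)
    print(f"Km = {km} mM -> v/Vmax = {v:.2f}")
# The low-Km enzyme is nearly saturated at 5 mM (v/Vmax ~ 0.98),
# while the high-Km enzyme is far from saturation (v/Vmax ~ 0.38),
# which is why the latter can act as a glucose sensor.
```

Note that hexokinase IV actually displays sigmoidal (cooperative) kinetics, as discussed below, so the hyperbolic curve is only a first approximation for that isoenzyme.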
Hexokinase II/B constitutes the principal regulated isoenzyme in many cell types and is increased in many cancers. It is the hexokinase found in muscle and heart. Hexokinase II is also located at the outer mitochondrial membrane so it can have direct access to ATP. The relative specific activity of hexokinase II increases with pH, at least in a pH range from 6.9 to 8.5. Hexokinase III/C is substrate-inhibited by glucose at physiological concentrations. Little is known about the regulatory characteristics of this isoenzyme. Hexokinase IV ("glucokinase") Mammalian hexokinase IV, also referred to as glucokinase, differs from other hexokinases in kinetics and functions. At the subcellular level, the site of phosphorylation shifts as glucokinase translocates between the cytoplasm and nucleus of liver cells. Glucokinase can only phosphorylate glucose if the concentration of this substrate is high enough; it does not follow Henri–Michaelis–Menten kinetics, and has no true Km; it is half-saturated at glucose concentrations 100 times higher than those of hexokinases I, II, and III. Hexokinase IV is monomeric, about 50 kDa, displays positive cooperativity with glucose, and is not allosterically inhibited by its product, glucose-6-phosphate. Hexokinase IV is present in the liver, pancreas, hypothalamus, small intestine, and perhaps certain other neuroendocrine cells, and plays an important regulatory role in carbohydrate metabolism. In the β cells of the pancreatic islets, it serves as a glucose sensor to control insulin release, and similarly controls glucagon release in the α cells. In hepatocytes of the liver, glucokinase responds to changes in ambient glucose levels by increasing or reducing glycogen synthesis. In glycolysis Glucose is unique in that it can be used to produce ATP by all cells in both the presence and absence of molecular oxygen (O2). The first step in glycolysis is the phosphorylation of glucose by hexokinase. By catalyzing the phosphorylation of glucose to yield glucose 6-phosphate, hexokinases maintain the downhill concentration gradient that favors the facilitated transport of glucose into cells. This reaction also initiates all physiologically relevant pathways of glucose utilization, including glycolysis and the pentose phosphate pathway. The addition of a charged phosphate group at the 6-position of hexoses also ensures 'trapping' of glucose and 2-deoxyhexose analogs of glucose (e.g. 2-deoxyglucose and 2-fluoro-2-deoxyglucose) within cells, as charged hexose phosphates cannot easily cross the cell membrane. Association with mitochondria Hexokinases I and II can associate physically with the outer surface of the external membrane of mitochondria through specific binding to a porin, or voltage dependent anion channel. This association gives hexokinase direct access to ATP generated by mitochondria, which is one of the two substrates of hexokinase. Mitochondrial hexokinase is highly elevated in rapidly growing malignant tumor cells, with levels up to 200 times higher than in normal tissues. Mitochondrially bound hexokinase has been demonstrated to be the driving force for the extremely high glycolytic rates that take place aerobically in tumor cells (the so-called Warburg effect described by Otto Heinrich Warburg in 1930). Deficiency Hexokinase deficiency is a genetic autosomal recessive disease that causes chronic haemolytic anaemia. Chronic haemolytic anaemia is caused by a mutation in the gene that codes for hexokinase.
The mutation causes a reduction in hexokinase activity, and hence hexokinase deficiency. See also References Glycolysis enzymes EC 2.7.1 Moonlighting proteins Glycolysis
Hexokinase
[ "Chemistry" ]
1,744
[ "Carbohydrate metabolism", "Glycolysis" ]
152,969
https://en.wikipedia.org/wiki/Eutectic%20system
A eutectic system or eutectic mixture is a type of homogeneous mixture that has a melting point lower than those of the constituents. The lowest possible melting point over all of the mixing ratios of the constituents is called the eutectic temperature. On a phase diagram, the eutectic temperature is seen as the eutectic point (see plot on the right). Non-eutectic mixture ratios have different melting temperatures for their different constituents, since one component's lattice will melt at a lower temperature than the other's. Conversely, as a non-eutectic mixture cools down, each of its components solidifies into a lattice at a different temperature, until the entire mass is solid. A non-eutectic mixture thus does not have a single melting/freezing point temperature at which it changes phase, but rather a temperature at which it changes between liquid and slush (known as the liquidus) and a lower temperature at which it changes between slush and solid (the solidus). In the real world, eutectic properties can be used to advantage in such processes as eutectic bonding, where silicon chips are bonded to gold-plated substrates with ultrasound, and eutectic alloys prove valuable in such diverse applications as soldering, brazing, metal casting, electrical protection, fire sprinkler systems, and nontoxic mercury substitutes. The term was coined in 1884 by British physicist and chemist Frederick Guthrie (1833–1886). The word originates from the Greek εὔτηκτος (eútēktos), meaning 'easily melted'. Before his studies, chemists assumed "that the alloy of minimum fusing point must have its constituents in some simple atomic proportions", which was indeed proven not to be the case. Eutectic phase transition The eutectic solidification is defined as follows: Liquid → α solid solution + β solid solution (on cooling). This type of reaction is an invariant reaction, because it is in thermal equilibrium; another way to define this is that the change in Gibbs free energy equals zero. Tangibly, this means the liquid and two solid solutions all coexist at the same time and are in chemical equilibrium. There is also a thermal arrest for the duration of the phase change during which the temperature of the system does not change. The resulting solid macrostructure from a eutectic reaction depends on a few factors, with the most important factor being how the two solid solutions nucleate and grow. The most common structure is a lamellar structure, but other possible structures include rodlike, globular, and acicular. Non-eutectic compositions Compositions of eutectic systems that are not at the eutectic point can be classified as hypoeutectic or hypereutectic: Hypoeutectic compositions are those with a greater composition of species α and a smaller percent composition of species β than the eutectic composition (E). Hypereutectic compositions are characterized as those with a higher composition of species β and a lower composition of species α than the eutectic composition. As the temperature of a non-eutectic composition is lowered, the liquid mixture will precipitate one component of the mixture before the other. In a hypereutectic solution, there will be a proeutectic phase of species β, whereas a hypoeutectic solution will have a proeutectic α phase. Types Alloys Eutectic alloys have two or more materials and have a eutectic composition. When a non-eutectic alloy solidifies, its components solidify at different temperatures, exhibiting a plastic melting range. Conversely, when a well-mixed, eutectic alloy melts, it does so at a single, sharp temperature.
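To make the idea of a computed eutectic point concrete, here is a minimal numerical sketch based on the ideal-solution liquidus model derived in the 'Eutectic calculation' section below. The melting points and enthalpies of fusion used are illustrative assumptions rather than data for any real alloy, and real systems with non-ideal mixing will deviate from this model.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def liquidus_temperature(x, t_melt, h_fus):
    """Ideal-solution liquidus temperature (K) for a component at
    liquid mole fraction x, from R ln x = -H/T + H/T_melt."""
    return 1.0 / (1.0 / t_melt - R * math.log(x) / h_fus)

def eutectic_point(t_a, h_a, t_b, h_b):
    """Bisect on the mole fraction of A until the two liquidus
    branches meet; returns (x_A, T_eutectic)."""
    lo, hi = 1e-6, 1.0 - 1e-6
    for _ in range(100):
        x = 0.5 * (lo + hi)
        diff = liquidus_temperature(x, t_a, h_a) - liquidus_temperature(1.0 - x, t_b, h_b)
        if diff > 0.0:
            hi = x  # A branch is still higher: eutectic lies at lower x_A
        else:
            lo = x
    x = 0.5 * (lo + hi)
    return x, liquidus_temperature(x, t_a, h_a)

# Hypothetical components: A melts at 600 K, B at 500 K.
x_e, t_e = eutectic_point(600.0, 10_000.0, 500.0, 8_000.0)
print(f"eutectic at x_A = {x_e:.3f}, T = {t_e:.1f} K")
```

The bisection works because each liquidus branch is monotonic in composition: the branch of A falls as A is diluted, the branch of B falls as B is diluted, and the eutectic is the single crossing of the two curves.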
The various phase transformations that occur during the solidification of a particular alloy composition can be understood by drawing a vertical line from the liquid phase to the solid phase on the phase diagram for that alloy. Some uses for eutectic alloys include: NEMA eutectic alloy overload relays for electrical protection of three-phase motors for pumps, fans, conveyors, and other factory process equipment. Eutectic alloys for soldering, both traditional alloys composed of lead (Pb) and tin (Sn), sometimes with additional silver (Ag) or gold (Au) — especially the SnPb and SnPbAg formulations for electronics - and newer lead-free soldering alloys, in particular ones composed of tin, silver, and copper (Cu), such as Sn-Ag-Cu alloys. Casting alloys, such as aluminium-silicon and cast iron (at the composition of 4.3% carbon in iron producing an austenite-cementite eutectic) Silicon chips are eutectic bonded to gold-plated substrates through a silicon-gold eutectic by the application of ultrasonic energy to the chip. Brazing, where diffusion can remove alloying elements from the joint, so that eutectic melting is only possible early in the brazing process Temperature response, e.g., Wood's metal and Field's metal for fire sprinklers Non-toxic mercury replacements, such as galinstan Experimental glassy metals, with extremely high strength and corrosion resistance Eutectic alloys of sodium and potassium (NaK) that are liquid at room temperature and used as coolant in experimental fast neutron nuclear reactors. Others Sodium chloride and water form a eutectic mixture whose eutectic point is −21.2 °C and 23.3% salt by mass. The eutectic nature of salt and water is exploited when salt is spread on roads to aid snow removal, or mixed with ice to produce low temperatures (for example, in traditional ice cream making). Ethanol–water has an unusually biased eutectic point, i.e. it is close to pure ethanol, which sets the maximum proof obtainable by fractional freezing. "Solar salt", 60% NaNO3 and 40% KNO3, forms a eutectic molten salt mixture which is used for thermal energy storage in concentrated solar power plants. To reduce the eutectic melting point in the solar molten salts, calcium nitrate is used in the following proportion: 42% Ca(NO3)2, 43% KNO3, and 15% NaNO3. Lidocaine and prilocaine—both are solids at room temperature—form a eutectic that is an oil with a melting point below room temperature, which is used in eutectic mixture of local anesthetic (EMLA) preparations. Menthol and camphor, both solids at room temperature, form a eutectic that is a liquid at room temperature in the following proportions: 8:2, 7:3, 6:4, and 5:5. Both substances are common ingredients in pharmacy extemporaneous preparations. Minerals may form eutectic mixtures in igneous rocks, giving rise to characteristic intergrowth textures exhibited, for example, by granophyre. Some inks are eutectic mixtures, allowing inkjet printers to operate at lower temperatures. Choline chloride produces eutectic mixtures with many natural products such as citric acid, malic acid and sugars. These liquid mixtures can be used, for example, to obtain antioxidant and antidiabetic extracts from natural products. Strengthening mechanisms Alloys The primary strengthening mechanism of the eutectic structure in metals is composite strengthening (See strengthening mechanisms of materials). This deformation mechanism works through load transfer between the two constituent phases, where the more compliant phase transfers stress to the stiffer phase.
By taking advantage of the strength of the stiff phase and the ductility of the compliant phase, the overall toughness of the material increases. As the composition is varied to either hypoeutectic or hypereutectic formations, the load transfer mechanism becomes more complex as there is a load transfer between the eutectic phase and the secondary phase as well as the load transfer within the eutectic phase itself. A second tunable strengthening mechanism of eutectic structures is the spacing of the secondary phase. By changing the spacing of the secondary phase, the fraction of contact between the two phases through shared phase boundaries is also changed. By decreasing the spacing of the eutectic phase, creating a fine eutectic structure, more surface area is shared between the two constituent phases, resulting in more effective load transfer. On the micro-scale, the additional boundary area acts as a barrier to dislocations, further strengthening the material. As a result of this strengthening mechanism, coarse eutectic structures tend to be less stiff but more ductile, while fine eutectic structures are stiffer but more brittle. The spacing of the eutectic phase can be controlled during processing as it is directly related to the cooling rate during solidification of the eutectic structure. For example, for a simple lamellar eutectic structure, the minimal lamellae spacing \(\lambda^*\) is: \(\lambda^* = \frac{2 \gamma V_m T_E}{\Delta H \, \Delta T_0}\), where \(\gamma\) is the surface energy of the two-phase boundary, \(V_m\) is the molar volume of the eutectic phase, \(T_E\) is the solidification temperature of the eutectic phase, \(\Delta H\) is the enthalpy of formation of the eutectic phase, and \(\Delta T_0\) is the undercooling of the material. So, by altering the undercooling, and by extension the cooling rate, the minimal achievable spacing of the secondary phase is controlled. Strengthening metallic eutectic phases to resist deformation at high temperatures (see creep deformation) is more convoluted as the primary deformation mechanism changes depending on the level of stress applied. At high temperatures where deformation is dominated by dislocation movement, the strengthening from load transfer and secondary phase spacing remain as they continue to resist dislocation motion. At lower stresses, where Nabarro-Herring creep is dominant, the shape and size of the eutectic phase structure plays a significant role in material deformation as it affects the available boundary area for vacancy diffusion to occur. Other critical points Eutectoid When the solution above the transformation point is solid, rather than liquid, an analogous eutectoid transformation can occur. For instance, in the iron-carbon system, the austenite phase can undergo a eutectoid transformation to produce ferrite and cementite, often in lamellar structures such as pearlite and bainite. This eutectoid point occurs at 727 °C and 0.76 wt% carbon. Peritectoid A peritectoid transformation is a type of isothermal reversible reaction that has two solid phases reacting with each other upon cooling of a binary, ternary, ..., n-ary alloy to create a completely different and single solid phase. The reaction plays a key role in the order and decomposition of quasicrystalline phases in several alloy types. A similar structural transition is also predicted for rotating columnar crystals. Peritectic Peritectic transformations are also similar to eutectic reactions. Here, a liquid and solid phase of fixed proportions react at a fixed temperature to yield a single solid phase.
Since the solid product forms at the interface between the two reactants, it can form a diffusion barrier and generally causes such reactions to proceed much more slowly than eutectic or eutectoid transformations. Because of this, when a peritectic composition solidifies it does not show the lamellar structure that is found with eutectic solidification. Such a transformation exists in the iron-carbon system, as seen near the upper-left corner of the figure. It resembles an inverted eutectic, with the δ phase combining with the liquid to produce pure austenite at 1495 °C and 0.17% carbon. At the peritectic decomposition temperature the compound, rather than melting, decomposes into another solid compound and a liquid. The proportion of each is determined by the lever rule. In the Al-Au phase diagram, for example, it can be seen that only two of the phases melt congruently, AuAl2 and Au2Al, while the rest peritectically decompose. "Bad solid solution" Not all minimum melting point systems are "eutectic". The alternative of "poor solid solution" can be illustrated by comparing the common precious metal systems Cu-Ag and Cu-Au. Cu-Ag (see, for example, https://himikatus.ru/art/phase-diagr1/Ag-Cu.php) is a true eutectic system. The eutectic melting point is at 780 °C, with solid solubility limits at fineness 80 and 912 by weight, and the eutectic composition at fineness 719. Since Cu-Ag is a true eutectic, any silver with fineness anywhere between 80 and 912 will reach the solidus line, and therefore melt at least partly, at exactly 780 °C. The eutectic alloy with fineness exactly 719 will reach the liquidus line, and therefore melt entirely, at that exact temperature without any further rise of temperature till all of the alloy has melted. Any silver with fineness between 80 and 912 but not exactly 719 will also reach the solidus line at exactly 780 °C, but will melt partly. It will leave a solid residue with fineness of either exactly 912 or exactly 80, but never some of both. It will melt at constant temperature without further rise of temperature until the exact amount of eutectic (fineness 719) alloy has melted off to divide the alloy into eutectic melt and solid solution residue. On further heating, the solid solution residue dissolves in the melt and changes its composition until the liquidus line is reached and the whole residue has dissolved away. Cu-Au (see, for example, https://himikatus.ru/art/phase-diagr1/Au-Cu.php) does display a melting point minimum at 910 °C, given as 44 atom % Cu, which converts to about 20 weight percent Cu - about 800 fineness of gold. But this is not a true eutectic. 800 fine gold melts at 910 °C, to a melt of exactly the same composition, and the whole alloy will melt at exactly the same temperature. But the differences happen away from the minimum composition. Unlike silver with fineness other than 719 (which melts partly at exactly 780 °C through a wide fineness range), gold with fineness other than 800 will reach the solidus and start partial melting at a temperature different from and higher than 910 °C, depending on the alloy fineness. The partial melting does cause some composition changes - the liquid will be closer in fineness towards 800 than the remaining solid, but the liquid will not have fineness of exactly 800 and the fineness of the remaining solid will depend on the fineness of the liquid. The underlying reason is that for a eutectic system like Cu-Ag, the solubility in the liquid phase is good but the solubility in the solid phase is limited.
Therefore, when a silver-copper alloy is frozen, it actually separates into crystals of fineness-912 silver and fineness-80 silver - both are saturated and always have the same compositions at the freezing point of 780 °C. Thus the alloy just below 780 °C consists of two types of crystals of fixed compositions regardless of the total alloy composition; only the relative amount of each type of crystal differs. Therefore they always melt at 780 °C until one or the other type of crystal, or both, is exhausted. In contrast, in the Cu-Au system the components are miscible at the melting point in all compositions, even in the solid state. There can be crystals of any composition, which will melt at different temperatures depending on composition. However, the Cu-Au system is a "poor" solid solution. There is a substantial misfit between the atoms in the solid which, near the melting point, is overcome by the entropy of thermal motion mixing the atoms. That misfit, however, disfavours the Cu-Au solution relative to phases in which the atoms are better fitted, such as the melt, and causes the melting point to fall below the melting points of the components. Eutectic calculation The composition and temperature of a eutectic can be calculated from the enthalpy and entropy of fusion of each component. The Gibbs free energy G depends on its own differential: \( G = H - TS \Rightarrow \left(\frac{\partial G}{\partial T}\right)_P = -S \Rightarrow H = G - T \left(\frac{\partial G}{\partial T}\right)_P \). Thus, the G/T derivative at constant pressure is calculated by the following equation: \( \left(\frac{\partial (G/T)}{\partial T}\right)_P = \frac{1}{T} \left(\frac{\partial G}{\partial T}\right)_P - \frac{G}{T^2} = -\frac{H}{T^2} \). The chemical potential \(\mu_i\) is calculated if we assume that the activity is equal to the concentration: \( \mu_i = \mu_i^\circ + RT \ln a_i \approx \mu_i^\circ + RT \ln x_i \). At the equilibrium, \(\mu_i = 0\), thus \(\mu_i^\circ\) is obtained as \( \mu_i^\circ = -RT \ln x_i \). Using the relation for the G/T derivative above and integrating gives \( \frac{\partial (\mu_i^\circ / T)}{\partial T} = -\frac{H_i^\circ}{T^2} \Rightarrow R \ln x_i = -\frac{H_i^\circ}{T} + K \). The integration constant K may be determined for a pure component with a melting temperature \(T_i^\circ\) and an enthalpy of fusion \(H_i^\circ\): \( x_i = 1 \Rightarrow T = T_i^\circ \Rightarrow K = \frac{H_i^\circ}{T_i^\circ} \). We obtain a relation that determines the molar fraction as a function of the temperature for each component: \( R \ln x_i = -\frac{H_i^\circ}{T} + \frac{H_i^\circ}{T_i^\circ} \). The mixture of n components is described by the system \( \ln x_i + \frac{H_i^\circ}{RT} - \frac{H_i^\circ}{RT_i^\circ} = 0 \) for each component, together with \( \sum_{i=1}^{n} x_i = 1 \), which can be solved numerically, for example by Newton-Raphson iteration. See also Azeotrope, or constant boiling mixture Freezing-point depression Fusible alloy References Bibliography Further reading Materials science Chemistry Phase transitions
Eutectic system
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
3,434
[ "Physical phenomena", "Phase transitions", "Applied and interdisciplinary physics", "Phases of matter", "Materials science", "Critical phenomena", "nan", "Statistical mechanics", "Matter" ]
152,970
https://en.wikipedia.org/wiki/Euglenid
Euglenids or euglenoids are one of the best-known groups of eukaryotic flagellates: single-celled organisms with flagella, or whip-like tails. They are classified in the phylum Euglenophyta, class Euglenida or Euglenoidea. Euglenids are commonly found in fresh water, especially when it is rich in organic materials, but they have a few marine and endosymbiotic members. Many euglenids feed by phagocytosis, or strictly by diffusion. A monophyletic subgroup known as Euglenophyceae have chloroplasts and produce their own food through photosynthesis. This group contains the carbohydrate paramylon. Euglenids split from other Euglenozoa (a larger group of flagellates) more than a billion years ago. The plastids (membranous organelles) in all extant photosynthetic species result from secondary endosymbiosis between a euglenid and a green alga. Structure Euglenoids are distinguished mainly by the presence of a type of cell covering called a pellicle. Within its taxon, the pellicle is one of the euglenoids' most diverse morphological features. The pellicle is composed of proteinaceous strips underneath the cell membrane, supported by dorsal and ventral microtubules. This varies from rigid to flexible, and gives the cell its shape, often giving it distinctive striations. In many euglenids, the strips can slide past one another, causing an inching motion called metaboly. Otherwise, they move using their flagella. Classification The first attempt at classifying euglenids was done by Ehrenberg in 1830, when he described the genus Euglena and placed it in the Polygastrica of family Astasiae, containing other creatures of variable body shape and lacking pseudopods or lorica. Later, various biologists described additional characteristics for Euglena and established different classification systems for euglenids based on nutrition modes, the presence and number of flagella, and the degree of metaboly. The 1942 revision by A. Hollande distinguished three groups, Peranemoidées (flexible phagotrophs), Petalomonadinées (rigid phagotrophs) and Euglenidinées (phototrophs), and was widely accepted as the best reflection of the natural relationships between euglenids, adopted by many other authors. Gordon F. Leedale expanded on Hollande's system, establishing six orders (Eutreptiales, Euglenales, Rhabdomonadales, Sphenomonadales, Heteronematales and Euglenamorphales) and taking into account new data on their physiology and ultrastructure. This scheme endured until 1986, with the sequencing of the SSU rRNA gene from Euglena gracilis. Euglenids are currently regarded as a highly diverse clade within Euglenozoa, in the eukaryotic supergroup Discoba. They are traditionally organized into three categories based on modes of nutrition: the phototrophs (Euglenophyceae), the osmotrophs (mainly the 'primary osmotrophs' known as Aphagea), and the phagotrophs, from which the first two groups have evolved. The phagotrophs, although paraphyletic, have historically been classified under the name of Heteronematina. In addition, euglenids can be divided into inflexible or rigid euglenids, and flexible or metabolic euglenids which are capable of 'metaboly' or 'euglenid motion'. Only those with more than 18 protein strips in their pellicle gain this flexibility. Phylogenetic studies show that various clades of rigid phagotrophic euglenids compose the base of the euglenid tree, namely Petalomonadida and the paraphyletic 'Ploeotiida'. 
In contrast, all flexible euglenids belong to a monophyletic group known as Spirocuta, which includes Euglenophyceae, Aphagea and various phagotrophs (Peranemidae, Anisonemidae and Neometanemidae). The current classification of class Euglenida, as a result of these studies, is as follows: Euglenida incertae sedis: Atraktomonas, Calycimonas, Dolium, Dylakosoma, Tropidoscyphus, Michajlowastasia, Parastasiella, Dinemula, Paradinemula, Mononema, Ovicola, Naupliicola, Embryocola, Copromonas. Order Petalomonadida Order "Ploeotiida" (paraphyletic) Clade Alistosa Entosiphon Gaulosia Clade Karavia Chelandium Olkasia Clade Spirocuta [Helicales ] Clade Anisonemia Order Anisonemida Family Anisonemidae Order Natomonadida Suborder Metanemina Family Neometanemidae Suborder Aphagea [Rhabdomonadina ] Family Astasiidae Family Distigmidae Order Peranemida Family Peranemidae Clade Euglenophyceae [Euglenea ] Euglenophyceae incertae sedis: Ascoglena, Euglenamorpha, Euglenopsis, Glenoclosteroium, Hegneria, Klebsina, Euglenocapsa. Order Rapazida Family Rapazidae Order Eutreptiales Family Eutreptiaceae Order Euglenales Family Phacaceae Family Euglenaceae Nutrition The classification of euglenids is still variable, as groups are being revised to conform with their molecular phylogeny. Classifications have fallen in line with the traditional groups based on differences in nutrition and number of flagella; these provide a starting point for considering euglenid diversity. Different characteristics of the euglenids' pellicles can provide insight into their modes of movement and nutrition. As with other Euglenozoa, the primitive mode of nutrition is phagocytosis. Prey such as bacteria and smaller flagellates is ingested through a cytostome, supported by microtubules. These are often packed together to form two or more rods, which function in ingestion, and in Entosiphon form an extendable siphon. Most phagotrophic euglenids have two flagella, one leading and one trailing. The latter is used for gliding along the substrate. In some, such as Peranema, the leading flagellum is rigid and beats only at its tip. Osmotrophic euglenoids Osmotrophic euglenids are euglenids which have undergone osmotrophy. Due to a lack of characteristics that are useful for taxonomical purposes, the origin of osmotrophic euglenids is unclear, though certain morphological characteristics reveal a small fraction of osmotrophic euglenids are derived from phototrophic and phagotrophic ancestors. A prolonged absence of light or exposure to harmful chemicals may cause atrophy and absorption of the chloroplasts without otherwise harming the organism. A number of species exists where a chloroplast's absence was formerly marked with separate genera such as Astasia (colourless Euglena) and Hyalophacus (colourless Phacus). Due to the lack of a developed cytostome, these forms feed exclusively by osmotrophic absorption. Reproduction Although euglenids share several common characteristics with animals, which is why they were originally classified as so, no evidence has been found of euglenids ever using sexual reproduction. This is one of the reasons they could no longer be classified as animals. For euglenids to reproduce, asexual reproduction takes place in the form of binary fission, and the cells replicate and divide during mitosis and cytokinesis. This process occurs in a very distinct order. First, the basal bodies and flagella replicate, then the cytostome and microtubules (the feeding apparatus), and finally the nucleus and remaining cytoskeleton. 
Once this occurs, the organism begins to cleave at the basal bodies, and this cleavage line moves towards the center of the organism until two separate euglenids are evident. Because of the way that this reproduction takes place and the axis of separation, it is called longitudinal cell division or longitudinal binary fission. Evolution The earliest fossil of euglenids is attributed to Moyeria, which is interpreted as possessing a pellicle composed of proteinaceous strips, the defining characteristic of euglenids. It is found in Middle Ordovician and Silurian rocks, making it the oldest fossil evidence of euglenids. Gallery References Bibliography External links The Euglenoid Project Tree of Life: Euglenida Algal taxonomy Euglenozoa Extant Ypresian first appearances
Euglenid
[ "Biology" ]
1,922
[ "Algae", "Algal taxonomy" ]
153,008
https://en.wikipedia.org/wiki/Knot%20theory
In topology, knot theory is the study of mathematical knots. While inspired by knots which appear in daily life, such as those in shoelaces and rope, a mathematical knot differs in that the ends are joined so it cannot be undone, the simplest knot being a ring (or "unknot"). In mathematical language, a knot is an embedding of a circle in 3-dimensional Euclidean space, \(\mathbb{R}^3\). Two mathematical knots are equivalent if one can be transformed into the other via a deformation of \(\mathbb{R}^3\) upon itself (known as an ambient isotopy); these transformations correspond to manipulations of a knotted string that do not involve cutting it or passing it through itself. Knots can be described in various ways. Using different description methods, there may be more than one description of the same knot. For example, a common method of describing a knot is a planar diagram called a knot diagram, in which any knot can be drawn in many different ways. Therefore, a fundamental problem in knot theory is determining when two descriptions represent the same knot. A complete algorithmic solution to this problem exists, which has unknown complexity. In practice, knots are often distinguished using a knot invariant, a "quantity" which is the same when computed from different descriptions of a knot. Important invariants include knot polynomials, knot groups, and hyperbolic invariants. The original motivation for the founders of knot theory was to create a table of knots and links, which are knots of several components entangled with each other. More than six billion knots and links have been tabulated since the beginnings of knot theory in the 19th century. To gain further insight, mathematicians have generalized the knot concept in several ways. Knots can be considered in other three-dimensional spaces and objects other than circles can be used; see knot (mathematics). For example, a higher-dimensional knot is an n-dimensional sphere embedded in (n+2)-dimensional Euclidean space. History Archaeologists have discovered that knot tying dates back to prehistoric times. Besides their uses such as recording information and tying objects together, knots have interested humans for their aesthetics and spiritual symbolism. Knots appear in various forms of Chinese artwork dating from several centuries BC (see Chinese knotting). The endless knot appears in Tibetan Buddhism, while the Borromean rings have made repeated appearances in different cultures, often representing strength in unity. The Celtic monks who created the Book of Kells lavished entire pages with intricate Celtic knotwork. A mathematical theory of knots was first developed in 1771 by Alexandre-Théophile Vandermonde who explicitly noted the importance of topological features when discussing the properties of knots related to the geometry of position. Mathematical studies of knots began in the 19th century with Carl Friedrich Gauss, who defined the linking integral. In the 1860s, Lord Kelvin's theory that atoms were knots in the aether led to Peter Guthrie Tait's creation of the first knot tables for complete classification. Tait, in 1885, published a table of knots with up to ten crossings, and what came to be known as the Tait conjectures. This record motivated the early knot theorists, but knot theory eventually became part of the emerging subject of topology. These topologists in the early part of the 20th century—Max Dehn, J. W. Alexander, and others—studied knots from the point of view of the knot group and invariants from homology theory such as the Alexander polynomial.
This would be the main approach to knot theory until a series of breakthroughs transformed the subject. In the late 1970s, William Thurston introduced hyperbolic geometry into the study of knots with the hyperbolization theorem. Many knots were shown to be hyperbolic knots, enabling the use of geometry in defining new, powerful knot invariants. The discovery of the Jones polynomial by Vaughan Jones in 1984, and subsequent contributions from Edward Witten, Maxim Kontsevich, and others, revealed deep connections between knot theory and mathematical methods in statistical mechanics and quantum field theory. A plethora of knot invariants have been invented since then, utilizing sophisticated tools such as quantum groups and Floer homology. In the last several decades of the 20th century, scientists became interested in studying physical knots in order to understand knotting phenomena in DNA and other polymers. Knot theory can be used to determine if a molecule is chiral (has a "handedness") or not. Tangles, strings with both ends fixed in place, have been effectively used in studying the action of topoisomerase on DNA. Knot theory may be crucial in the construction of quantum computers, through the model of topological quantum computation. Knot equivalence A knot is created by beginning with a one-dimensional line segment, wrapping it around itself arbitrarily, and then fusing its two free ends together to form a closed loop. Simply, we can say a knot is a "simple closed curve" (see Curve) — that is: a "nearly" injective and continuous function \(f\colon [0,1] \to \mathbb{R}^3\), with the only "non-injectivity" being \(f(0) = f(1)\). Topologists consider knots and other entanglements such as links and braids to be equivalent if the knot can be pushed about smoothly, without intersecting itself, to coincide with another knot. The idea of knot equivalence is to give a precise definition of when two knots should be considered the same even when positioned quite differently in space. A formal mathematical definition is that two knots \(K_1\) and \(K_2\) are equivalent if there is an orientation-preserving homeomorphism \(h\colon \mathbb{R}^3 \to \mathbb{R}^3\) with \(h(K_1) = K_2\). What this definition of knot equivalence means is that two knots are equivalent when there is a continuous family of homeomorphisms of space onto itself, such that the last one of them carries the first knot onto the second knot. (In detail: Two knots \(K_1\) and \(K_2\) are equivalent if there exists a continuous mapping \(H\colon \mathbb{R}^3 \times [0,1] \to \mathbb{R}^3\) such that a) for each \(t \in [0,1]\) the mapping taking \(x \in \mathbb{R}^3\) to \(H(x,t)\) is a homeomorphism of \(\mathbb{R}^3\) onto itself; b) \(H(x,0) = x\) for all \(x \in \mathbb{R}^3\); and c) \(H(K_1,1) = K_2\). Such a function \(H\) is known as an ambient isotopy.) These two notions of knot equivalence agree exactly about which knots are equivalent: Two knots that are equivalent under the orientation-preserving homeomorphism definition are also equivalent under the ambient isotopy definition, because any orientation-preserving homeomorphism of \(\mathbb{R}^3\) to itself is the final stage of an ambient isotopy starting from the identity. Conversely, two knots equivalent under the ambient isotopy definition are also equivalent under the orientation-preserving homeomorphism definition, because the (final) stage of the ambient isotopy must be an orientation-preserving homeomorphism carrying one knot to the other. The basic problem of knot theory, the recognition problem, is determining the equivalence of two knots. Algorithms exist to solve this problem, with the first given by Wolfgang Haken in the late 1960s. Nonetheless, these algorithms can be extremely time-consuming, and a major issue in the theory is to understand how hard this problem really is.
The special case of recognizing the unknot, called the unknotting problem, is of particular interest. In February 2021 Marc Lackenby announced a new unknot recognition algorithm that runs in quasi-polynomial time. Knot diagrams A useful way to visualise and manipulate knots is to project the knot onto a plane—think of the knot casting a shadow on the wall. A small change in the direction of projection will ensure that it is one-to-one except at the double points, called crossings, where the "shadow" of the knot crosses itself once transversely. At each crossing, to be able to recreate the original knot, the over-strand must be distinguished from the under-strand. This is often done by creating a break in the strand going underneath. The resulting diagram is an immersed plane curve with the additional data of which strand is over and which is under at each crossing. (These diagrams are called knot diagrams when they represent a knot and link diagrams when they represent a link.) Analogously, knotted surfaces in 4-space can be related to immersed surfaces in 3-space. A reduced diagram is a knot diagram in which there are no reducible crossings (also nugatory or removable crossings), or in which all of the reducible crossings have been removed. A petal projection is a type of projection in which, instead of forming double points, all strands of the knot meet at a single crossing point, connected to it by loops forming non-nested "petals". Reidemeister moves In 1927, working with this diagrammatic form of knots, J. W. Alexander and Garland Baird Briggs, and independently Kurt Reidemeister, demonstrated that two knot diagrams belonging to the same knot can be related by a sequence of three kinds of moves on the diagram. These operations, now called the Reidemeister moves, are: (I) twist and untwist in either direction; (II) move one strand completely over another; and (III) move a strand completely over or under a crossing. The proof that diagrams of equivalent knots are connected by Reidemeister moves relies on an analysis of what happens under the planar projection of the movement taking one knot to another. The movement can be arranged so that almost all of the time the projection will be a knot diagram, except at finitely many times when an "event" or "catastrophe" occurs, such as when more than two strands cross at a point or multiple strands become tangent at a point. A close inspection will show that complicated events can be eliminated, leaving only the simplest events: (1) a "kink" forming or being straightened out; (2) two strands becoming tangent at a point and passing through; and (3) three strands crossing at a point. These are precisely the Reidemeister moves. Knot invariants A knot invariant is a "quantity" that is the same for equivalent knots. For example, if the invariant is computed from a knot diagram, it should give the same value for two knot diagrams representing equivalent knots. An invariant may take the same value on two different knots, so by itself may be incapable of distinguishing all knots. An elementary invariant is tricolorability. "Classical" knot invariants include the knot group, which is the fundamental group of the knot complement, and the Alexander polynomial, which can be computed from the Alexander invariant, a module constructed from the infinite cyclic cover of the knot complement. In the late 20th century, invariants such as "quantum" knot polynomials, Vassiliev invariants and hyperbolic invariants were discovered. These aforementioned invariants are only the tip of the iceberg of modern knot theory. Knot polynomials A knot polynomial is a knot invariant that is a polynomial.
Well-known examples include the Jones polynomial, the Alexander polynomial, and the Kauffman polynomial. A variant of the Alexander polynomial, the Alexander–Conway polynomial, is a polynomial in the variable z with integer coefficients. The Alexander–Conway polynomial is actually defined in terms of links, which consist of one or more knots entangled with each other. The concepts explained above for knots, e.g. diagrams and Reidemeister moves, also hold for links. Consider an oriented link diagram, i.e. one in which every component of the link has a preferred direction indicated by an arrow. For a given crossing of the diagram, let \(L_+\), \(L_-\), and \(L_0\) be the oriented link diagrams resulting from changing that crossing to a positive crossing, a negative crossing, and a smoothed crossing with no intersection, respectively. The original diagram might be either \(L_+\) or \(L_-\), depending on the chosen crossing's configuration. Then the Alexander–Conway polynomial, \(C(L)\), is recursively defined according to the rules: \(C(O) = 1\) (where \(O\) is any diagram of the unknot) and \(C(L_+) = C(L_-) + z\,C(L_0)\). The second rule is what is often referred to as a skein relation. To check that these rules give an invariant of an oriented link, one should determine that the polynomial does not change under the three Reidemeister moves. Many important knot polynomials can be defined in this way. The following is an example of a typical computation using a skein relation. It computes the Alexander–Conway polynomial of the trefoil knot. Applying the relation at one crossing of the trefoil, \(C(\text{trefoil}) = C(\text{unknot}) + z\,C(\text{Hopf link})\), gives the unknot and the Hopf link. Applying the relation to the Hopf link, \(C(\text{Hopf link}) = C(\text{unlink}) + z\,C(\text{unknot})\), gives a link deformable to one with 0 crossings (it is actually the unlink of two components) and an unknot. The unlink takes a bit of sneakiness: applying the relation to a one-crossing diagram of the unknot, \(C(\text{unknot}) = C(\text{unknot}) + z\,C(\text{unlink})\), which implies that C(unlink of two components) = 0, since the first two polynomials are of the unknot and thus equal. Putting all this together will show: \(C(\text{trefoil}) = 1 + z(0 + z \cdot 1) = 1 + z^2\). Since the Alexander–Conway polynomial is a knot invariant, this shows that the trefoil is not equivalent to the unknot. So the trefoil really is "knotted". Actually, there are two trefoil knots, called the right and left-handed trefoils, which are mirror images of each other (take a diagram of the trefoil given above and change each crossing to the other way to get the mirror image). These are not equivalent to each other, meaning that they are not amphichiral. This was shown by Max Dehn, before the invention of knot polynomials, using group theoretical methods. But the Alexander–Conway polynomial of each kind of trefoil will be the same, as can be seen by going through the computation above with the mirror image. The Jones polynomial can in fact distinguish between the left- and right-handed trefoil knots. Hyperbolic invariants William Thurston proved many knots are hyperbolic knots, meaning that the knot complement (i.e., the set of points of 3-space not on the knot) admits a geometric structure, in particular that of hyperbolic geometry. The hyperbolic structure depends only on the knot so any quantity computed from the hyperbolic structure is then a knot invariant. Geometry lets us visualize what the inside of a knot or link complement looks like by imagining light rays as traveling along the geodesics of the geometry. An example is provided by the picture of the complement of the Borromean rings. The inhabitant of this link complement is viewing the space from near the red component. The balls in the picture are views of horoball neighborhoods of the link.
By thickening the link in a standard way, the horoball neighborhoods of the link components are obtained. Even though the boundary of a neighborhood is a torus, when viewed from inside the link complement, it looks like a sphere. Each link component shows up as infinitely many spheres (of one color) as there are infinitely many light rays from the observer to the link component. The fundamental parallelogram (which is indicated in the picture) tiles both vertically and horizontally and shows how to extend the pattern of spheres infinitely. This pattern, the horoball pattern, is itself a useful invariant. Other hyperbolic invariants include the shape of the fundamental parallelogram, length of shortest geodesic, and volume. Modern knot and link tabulation efforts have utilized these invariants effectively. Fast computers and clever methods of obtaining these invariants make calculating these invariants, in practice, a simple task. Higher dimensions A knot in three dimensions can be untied when placed in four-dimensional space. This is done by changing crossings. Suppose one strand is behind another as seen from a chosen point. Lift it into the fourth dimension, so there is no obstacle (the front strand having no component there); then slide it forward, and drop it back, now in front. Analogies for the plane would be lifting a string up off the surface, or removing a dot from inside a circle. In fact, in four dimensions, any non-intersecting closed loop of one-dimensional string is equivalent to an unknot. First "push" the loop into a three-dimensional subspace, which is always possible, though technical to explain. Four-dimensional space occurs in classical knot theory, however, and an important topic is the study of slice knots and ribbon knots. A notorious open problem asks whether every slice knot is also ribbon. Knotting spheres of higher dimension Since a knot can be considered topologically a 1-dimensional sphere, the next generalization is to consider a two-dimensional sphere (\(S^2\)) embedded in 4-dimensional Euclidean space (\(\mathbb{R}^4\)). Such an embedding is knotted if there is no homeomorphism of \(\mathbb{R}^4\) onto itself taking the embedded 2-sphere to the standard "round" embedding of the 2-sphere. Suspended knots and spun knots are two typical families of such 2-sphere knots. The mathematical technique called "general position" implies that for a given n-sphere in m-dimensional Euclidean space, if m is large enough (depending on n), the sphere should be unknotted. In general, piecewise-linear n-spheres form knots only in (n + 2)-dimensional space, although this is no longer a requirement for smoothly knotted spheres. In fact, there are smoothly knotted \((4k-1)\)-spheres in 6k-dimensional space; e.g., there is a smoothly knotted 3-sphere in \(\mathbb{R}^6\). Thus the codimension of a smooth knot can be arbitrarily large when not fixing the dimension of the knotted sphere; however, any smooth k-sphere embedded in \(\mathbb{R}^n\) with \(2n - 3k - 3 > 0\) is unknotted. The notion of a knot has further generalisations in mathematics, see: Knot (mathematics), isotopy classification of embeddings. Every knot in the n-sphere \(S^n\) is the link of a real-algebraic set with isolated singularity in \(\mathbb{R}^{n+1}\). An n-knot is a single \(S^n\) embedded in \(\mathbb{R}^m\). An n-link consists of k copies of \(S^n\) embedded in \(\mathbb{R}^m\), where k is a natural number. Both the \(n = 1\) and the \(n = 2\) cases are well studied, and so is the \(n = 3\) case. Adding knots Two knots can be added by cutting both knots and joining the pairs of ends. The operation is called the knot sum, or sometimes the connected sum or composition of two knots.
This can be formally defined as follows: consider a planar projection of each knot and suppose these projections are disjoint. Find a rectangle in the plane where one pair of opposite sides are arcs along each knot while the rest of the rectangle is disjoint from the knots. Form a new knot by deleting the first pair of opposite sides and adjoining the other pair of opposite sides. The resulting knot is a sum of the original knots. Depending on how this is done, two different knots (but no more) may result. This ambiguity in the sum can be eliminated by regarding the knots as oriented, i.e. having a preferred direction of travel along the knot, and requiring that the arcs of the knots in the sum are oriented consistently with the oriented boundary of the rectangle. The knot sum of oriented knots is commutative and associative. A knot is prime if it is non-trivial and cannot be written as the knot sum of two non-trivial knots. A knot that can be written as such a sum is composite. There is a prime decomposition for knots, analogous to prime and composite numbers. For oriented knots, this decomposition is also unique. Higher-dimensional knots can also be added but there are some differences. While you cannot form the unknot in three dimensions by adding two non-trivial knots, you can in higher dimensions, at least when one considers smooth knots in codimension at least 3. Knots can also be constructed using the circuit topology approach. This is done by combining basic units called soft contacts using five operations (Parallel, Series, Cross, Concerted, and Sub). The approach is applicable to open chains as well and can also be extended to include the so-called hard contacts. Tabulating knots Traditionally, knots have been catalogued in terms of crossing number. Knot tables generally include only prime knots, and only one entry for a knot and its mirror image (even if they are different). The number of nontrivial knots of a given crossing number increases rapidly, making tabulation computationally difficult. Tabulation efforts have succeeded in enumerating over 6 billion knots and links. The sequence of the number of prime knots of a given crossing number, up to crossing number 16, is 0, 0, 1, 1, 2, 3, 7, 21, 49, 165, 552, 2176, 9988, 46972, 253293, 1388705, ... While exponential upper and lower bounds for this sequence are known, it has not been proven that this sequence is strictly increasing. The first knot tables by Tait, Little, and Kirkman used knot diagrams, although Tait also used a precursor to the Dowker notation. Different notations have been invented for knots which allow more efficient tabulation. The early tables attempted to list all knots of at most 10 crossings, and all alternating knots of 11 crossings. The development of knot theory due to Alexander, Reidemeister, Seifert, and others eased the task of verification, and tables of knots up to and including 9 crossings were published by Alexander–Briggs and Reidemeister in the late 1920s. The first major verification of this work was done in the 1960s by John Horton Conway, who not only developed a new notation but also the Alexander–Conway polynomial. This verified the list of knots of at most 11 crossings and a new list of links up to 10 crossings. Conway found a number of omissions but only one duplication in the Tait–Little tables; however he missed the duplicates called the Perko pair, which would only be noticed in 1974 by Kenneth Perko.
This famous error would propagate when Dale Rolfsen added a knot table in his influential text, based on Conway's work. Conway's 1970 paper on knot theory also contains a typographical duplication on its non-alternating 11-crossing knots page and omits 4 examples — 2 previously listed in D. Lombardero's 1968 Princeton senior thesis and 2 more subsequently discovered by Alain Caudron. [see Perko (1982), Primality of certain knots, Topology Proceedings] Less famous is the duplicate in his 10 crossing link table: 2.-2.-20.20 is the mirror of 8*-20:-20. [See Perko (2016), Historical highlights of non-cyclic knot theory, J. Knot Theory Ramifications]. In the late 1990s Hoste, Thistlethwaite, and Weeks tabulated all the knots through 16 crossings. In 2003 Rankin, Flint, and Schermann tabulated the alternating knots through 22 crossings. In 2020 Burton tabulated all prime knots with up to 19 crossings. Alexander–Briggs notation This is the most traditional notation, due to the 1927 paper of James W. Alexander and Garland B. Briggs and later extended by Dale Rolfsen in his knot table (see image above and List of prime knots). The notation simply organizes knots by their crossing number. One writes the crossing number with a subscript to denote its order amongst all knots with that crossing number. This order is arbitrary and so has no special significance (though in each number of crossings the twist knot comes after the torus knot). Links are written by the crossing number with a superscript to denote the number of components and a subscript to denote its order within the links with the same number of components and crossings. Thus the trefoil knot is notated \(3_1\) and the Hopf link is \(2^2_1\). Alexander–Briggs names in the range \(10_{162}\) to \(10_{166}\) are ambiguous, due to the discovery of the Perko pair in Charles Newton Little's original and subsequent knot tables, and differences in approach to correcting this error in knot tables and other publications created after this point. Dowker–Thistlethwaite notation The Dowker–Thistlethwaite notation, also called the Dowker notation or code, for a knot is a finite sequence of even integers. The numbers are generated by following the knot and marking the crossings with consecutive integers. Since each crossing is visited twice, this creates a pairing of even integers with odd integers. An appropriate sign is given to indicate over and undercrossing. For example, in this figure the knot diagram has crossings labelled with the pairs (1,6) (3,−12) (5,2) (7,8) (9,−4) and (11,−10). The Dowker–Thistlethwaite notation for this labelling is the sequence: 6, −12, 2, 8, −4, −10. A knot diagram has more than one possible Dowker notation, and there is a well-understood ambiguity when reconstructing a knot from a Dowker–Thistlethwaite notation. Conway notation The Conway notation for knots and links, named after John Horton Conway, is based on the theory of tangles. The advantage of this notation is that it reflects some properties of the knot or link. The notation describes how to construct a particular link diagram of the link. Start with a basic polyhedron, a 4-valent connected planar graph with no digon regions. Such a polyhedron is denoted first by the number of vertices, then a number of asterisks which determine the polyhedron's position on a list of basic polyhedra. For example, 10** denotes the second 10-vertex polyhedron on Conway's list.
Each vertex then has an algebraic tangle substituted into it (each vertex is oriented so there is no arbitrary choice in substitution). Each such tangle has a notation consisting of numbers and + or − signs. An example is 1*2 −3 2. The 1* denotes the only 1-vertex basic polyhedron. The 2 −3 2 is a sequence describing the continued fraction associated to a rational tangle. One inserts this tangle at the vertex of the basic polyhedron 1*. A more complicated example is 8*3.1.2 0.1.1.1.1.1 Here again 8* refers to a basic polyhedron with 8 vertices. The periods separate the notation for each tangle. Any link admits such a description, and it is clear this is a very compact notation even for very large crossing number. There are some further shorthands usually used. The last example is usually written 8*3:2 0, where the ones are omitted while keeping the number of dots, except for the dots at the end. For an algebraic knot such as in the first example, 1* is often omitted. Conway's pioneering paper on the subject lists the basic polyhedra of up to 10 vertices which he uses to tabulate links, and these tabulations have become standard for those links. For a further listing of higher vertex polyhedra, there are nonstandard choices available. Gauss code Gauss code, similar to the Dowker–Thistlethwaite notation, represents a knot with a sequence of integers. However, rather than every crossing being represented by two different numbers, crossings are labeled with only one number. When the crossing is an overcrossing, a positive number is listed. At an undercrossing, a negative number. For example, the trefoil knot in Gauss code can be given as: 1,−2,3,−1,2,−3 Gauss code is limited in its ability to identify knots. This problem is partially addressed by the extended Gauss code. See also Arithmetic rope Circuit topology Lamp cord trick Legendrian submanifolds and knots List of knot theory topics Molecular knot Quantum topology Ribbon theory References Sources Footnotes Further reading Introductory textbooks There are a number of introductions to knot theory, ranging from classical introductions for graduate students or advanced undergraduates to more informal texts. Adams is informal and accessible for the most part to high schoolers. Lickorish is a rigorous introduction for graduate students, covering a nice mix of classical and modern topics. Others are suitable for undergraduates who know point-set topology; knowledge of algebraic topology is not required. Surveys Menasco and Thistlethwaite's handbook surveys a mix of topics relevant to current research trends in a manner accessible to advanced undergraduates but of interest to professional researchers. External links "Mathematics and Knots" This is an online version of an exhibition developed for the 1989 Royal Society "PopMath RoadShow". Its aim was to use knots to present methods of mathematics to the general public. History Movie of a modern recreation of Tait's smoke ring experiment History of knot theory (on the home page of Andrew Ranicki) Knot tables and software KnotInfo: Table of Knot Invariants and Knot Theory Resources The Knot Atlas — detailed info on individual knots in knot tables KnotPlot — software to investigate geometric properties of knots Knotscape — software to create images of knots Knotilus — online database and image generator of knots KnotData.html — Wolfram Mathematica function for investigating knots Regina — software for low-dimensional topology with native support for knots and links.
See also
Arithmetic rope
Circuit topology
Lamp cord trick
Legendrian submanifolds and knots
List of knot theory topics
Molecular knot
Quantum topology
Ribbon theory

Further reading
Introductory textbooks
There are a number of introductions to knot theory. Adams is informal and accessible for the most part to high schoolers; Lickorish is a rigorous introduction for graduate students, covering a nice mix of classical and modern topics; a third is suitable for undergraduates who know point-set topology, with no knowledge of algebraic topology required.

Surveys
Menasco and Thistlethwaite's handbook surveys a mix of topics relevant to current research trends in a manner accessible to advanced undergraduates but of interest to professional researchers.

External links
"Mathematics and Knots" – an online version of an exhibition developed for the 1989 Royal Society "PopMath RoadShow". Its aim was to use knots to present methods of mathematics to the general public.

History
Movie of a modern recreation of Tait's smoke ring experiment
History of knot theory (on the home page of Andrew Ranicki)

Knot tables and software
KnotInfo: Table of Knot Invariants and Knot Theory Resources
The Knot Atlas – detailed info on individual knots in knot tables
KnotPlot – software to investigate geometric properties of knots
Knotscape – software to create images of knots
Knoutilus – online database and image generator of knots
KnotData.html – Wolfram Mathematica function for investigating knots
Regina – software for low-dimensional topology with native support for knots and links
Tables of prime knots with up to 19 crossings

Low-dimensional topology
Knot theory
[ "Mathematics" ]
5,887
[ "Topology", "Low-dimensional topology" ]
153,095
https://en.wikipedia.org/wiki/Radio%20navigation
Radio navigation or radionavigation is the application of radio waves to determine the position of an object on the Earth, whether the navigator's own vessel or an obstruction. Like radiolocation, it is a type of radiodetermination.

The basic principles are measurements from/to electric beacons, especially:
Angular directions, e.g. by bearing, radio phases or interferometry,
Distances, e.g. ranging by measurement of time of flight between one transmitter and multiple receivers or vice versa,
Distance differences, by measurement of times of arrival of signals from one transmitter at multiple receivers or vice versa,
Partly also velocity, e.g. by means of radio Doppler shift.

Combinations of these measurement principles also are important; e.g., many radars measure range and azimuth of a target.

Bearing-measurement systems
These systems used some form of directional radio antenna to determine the location of a broadcast station on the ground. Conventional navigation techniques are then used to take a radio fix. These were introduced prior to World War I, and remain in use today.

Radio direction finding
The first system of radio navigation was the Radio Direction Finder, or RDF. By tuning in a radio station and then using a directional antenna, one could determine the direction to the broadcasting antenna. A second measurement using another station was then taken. Using triangulation, the two directions can be plotted on a map, where their intersection reveals the location of the navigator. Commercial AM radio stations can be used for this task due to their long range and high power, but strings of low-power radio beacons were also set up specifically for navigation, especially near airports and harbours.

Early RDF systems normally used a loop antenna, a small loop of metal wire that is mounted so it can be rotated around a vertical axis. At most angles the loop has a fairly flat reception pattern, but when it is aligned perpendicular to the station the signal received on one side of the loop cancels the signal in the other, producing a sharp drop in reception known as the "null". By rotating the loop and looking for the angle of the null, the relative bearing of the station can be determined. Loop antennas can be seen on most pre-1950s aircraft and ships.

Reverse RDF
The main problem with RDF is that it required a special antenna on the vehicle, which may not be easy to mount on smaller vehicles or single-crew aircraft. A smaller problem is that the accuracy of the system depends to a degree on the size of the antenna, but larger antennas would likewise make the installation more difficult.

During the era between World War I and World War II, a number of systems were introduced that placed the rotating antenna on the ground. As the antenna rotated through a fixed position, typically due north, it was keyed with the Morse code signal of the station's identification letters so the receiver could ensure they were listening to the right station. Then they waited for the signal to either peak or disappear as the antenna briefly pointed in their direction. By timing the delay between the Morse signal and the peak/null, then dividing by the known rotational rate of the station, the bearing of the station could be calculated, as sketched below.

The first such system was the German Telefunken Kompass Sender, which began operations in 1907 and was used operationally by the Zeppelin fleet until 1918. An improved version was introduced by the UK as the Orfordness Beacon in 1929 and used until the mid-1930s.
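As a minimal sketch (plain Python; station coordinates, rotation period and function names are illustrative assumptions, not real beacon parameters), the timing computation above and a two-bearing fix by triangulation look like this:

```python
# Reverse-RDF bearing: the fraction of a rotation elapsed between the
# north-marking signal and the observed peak/null, scaled to 360 degrees.
# The fix function intersects two bearing lines, each leaving a station
# toward the receiver, on a flat-earth (x = east, y = north) plane.
import math

def bearing_from_timing(delay_s, rotation_period_s):
    return (delay_s / rotation_period_s) * 360.0 % 360.0

def fix_from_two_bearings(sta_a, brg_a, sta_b, brg_b):
    ax, ay = sta_a
    bx, by = sta_b
    # Direction vectors; bearings are measured clockwise from north.
    dax, day = math.sin(math.radians(brg_a)), math.cos(math.radians(brg_a))
    dbx, dby = math.sin(math.radians(brg_b)), math.cos(math.radians(brg_b))
    # Solve a + t*da = b + s*db for t by Cramer's rule.
    denom = dax * dby - day * dbx
    t = ((bx - ax) * dby - (by - ay) * dbx) / denom
    return ax + t * dax, ay + t * day

print(bearing_from_timing(3.0, 60.0))  # 3 s delay on a 60 s rotation -> 18.0
print(fix_from_two_bearings((0, 0), 45.0, (10, 0), 315.0))  # -> approx (5, 5)
```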
A number of improved versions followed, replacing the mechanical motion of the antennas with phasing techniques that produced the same output pattern with no moving parts. One of the longest-lasting examples was Sonne, which went into operation just before World War II and was used operationally under the name Consol until 1991. The modern VOR system is based on the same principles (see below).

ADF and NDB
A great advance in the RDF technique was introduced in the form of phase comparisons of a signal as measured on two or more small antennas, or a single highly directional solenoid. These receivers were smaller, more accurate, and simpler to operate. Combined with the introduction of the transistor and integrated circuit, RDF systems were so reduced in size and complexity that they once again became quite common during the 1960s, and were known by the new name automatic direction finder, or ADF.

This also led to a revival in the operation of simple radio beacons for use with these RDF systems, now referred to as non-directional beacons (NDB). As the LF/MF signals used by NDBs can follow the curvature of the Earth, an NDB has a much greater range than a VOR, whose signals travel only by line of sight. NDBs can be categorized as long range or short range depending on their power. The frequency band allotted to non-directional beacons is 190–1750 kHz, but the same system can be used with any common AM-band commercial station.

VOR
VHF omnidirectional range, or VOR, is an implementation of the reverse-RDF system, but one that is more accurate and able to be completely automated. The VOR station transmits two audio signals on a VHF carrier: one is Morse code at 1020 Hz to identify the station, the other is a continuous 9960 Hz audio signal modulated at 30 Hz, with 0 degrees referenced to magnetic north. This signal is rotated mechanically or electrically at 30 Hz, which appears as a 30 Hz AM signal added to the previous two signals, whose phasing depends on the position of the aircraft relative to the VOR station.

The VOR signal is a single RF carrier that is demodulated into a composite audio signal composed of a 9960 Hz reference signal frequency modulated at 30 Hz, a 30 Hz AM variable signal, and a 1020 Hz "marker" signal for station identification. Conversion from this audio signal into a usable navigation aid is done by a navigation converter, which takes the reference signal and compares its phasing with the variable signal. The phase difference in degrees is provided to navigational displays (a phase-comparison sketch appears at the end of this section). Station identification is by listening to the audio directly, as the 9960 Hz and 30 Hz signals are filtered out of the aircraft internal communication system, leaving only the 1020 Hz Morse-code station identification.

The system may be used with a compatible glideslope and marker beacon receiver, making the aircraft ILS-capable (Instrument Landing System). Once the aircraft's approach is accurate (the aircraft is in the "right place"), the VOR receiver will be used on a different frequency to determine if the aircraft is pointed in the "right direction". Some aircraft employ two VOR receiver systems, one in VOR-only mode to determine the "right place" and another in ILS mode in conjunction with a glideslope receiver to determine the "right direction". The combination of both allows for a precision approach in foul weather.
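The core of the VOR conversion is measuring the phase of one 30 Hz tone relative to another. The following is a minimal, idealized sketch (plain Python): it assumes the two 30 Hz tones have already been recovered from the composite audio, which a real receiver must first demodulate (9960 Hz FM reference, 30 Hz AM variable):

```python
# Estimate the radial as the phase difference (degrees) between the 30 Hz
# variable and reference tones, by correlating each against quadrature
# references at 30 Hz. Signal generation below is idealized.
import math

def vor_radial(reference, variable, sample_rate, f=30.0):
    def phase(signal):
        i = sum(s * math.cos(2 * math.pi * f * k / sample_rate)
                for k, s in enumerate(signal))
        q = sum(s * math.sin(2 * math.pi * f * k / sample_rate)
                for k, s in enumerate(signal))
        return math.atan2(q, i)
    return math.degrees(phase(variable) - phase(reference)) % 360.0

# Simulate an aircraft on the 120-degree radial:
sr, n = 3000, 3000  # one second of samples
ref = [math.cos(2 * math.pi * 30 * k / sr) for k in range(n)]
var = [math.cos(2 * math.pi * 30 * k / sr - math.radians(120)) for k in range(n)]
print(round(vor_radial(ref, var, sr)))  # -> 120
```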
Beam systems
Beam systems broadcast narrow signals in the sky, and navigation is accomplished by keeping the aircraft centred in the beam. A number of stations are used to create an airway, with the navigator tuning in different stations along the direction of travel. These systems were common in the era when electronics were large and expensive, as they placed minimum requirements on the receivers: they were simply voice radio sets tuned to the selected frequencies. However, they did not provide navigation outside of the beams, and were thus less flexible in use. The rapid miniaturization of electronics during and after World War II made systems like VOR practical, and most beam systems rapidly disappeared.

Lorenz
In the post-World War I era, the Lorenz company of Germany developed a means of projecting two narrow radio signals with a slight overlap in the centre. By broadcasting different audio signals in the two beams, the receiver could position themselves very accurately down the centreline by listening to the signal in their headphones. The system was accurate to less than a degree in some forms.

Originally known as "Ultrakurzwellen-Landefunkfeuer" (LFF), or simply "Leitstrahl" (guiding beam), little money was available to develop a network of stations. The first widespread radio navigation network, using low and medium frequencies, was instead led by the US (see LFR, below). Development was restarted in Germany in the 1930s as a short-range system deployed at airports as a blind landing aid. Although there was some interest in deploying a medium-range system like the US LFR, deployment had not yet started when the beam system was combined with the Orfordness timing concepts to produce the highly accurate Sonne system. In all of these roles, the system was generically known simply as a "Lorenz beam". Lorenz was an early predecessor to the modern instrument landing system.

In the immediate pre-World War II era the same concept was also developed as a blind-bombing system. This used very large antennas to provide the required accuracy at long distances (over England), and very powerful transmitters. Two such beams were used, crossing over the target to triangulate it. Bombers would enter one of the beams and use it for guidance until they heard the second one in a second radio receiver, using that signal to time the dropping of their bombs. The system was highly accurate, and the "Battle of the Beams" broke out when United Kingdom intelligence services attempted, and then succeeded in, rendering the system useless through electronic warfare.

Low-frequency radio range
The low-frequency radio range (LFR, also "four course radio range" among other names) was the main navigation system used by aircraft for instrument flying in the 1930s and 1940s in the U.S. and other countries, until the advent of the VOR in the late 1940s. It was used both for en route navigation and for instrument approaches.

The ground stations consisted of a set of four antennas that projected two overlapping directional figure-eight signal patterns at a 90-degree angle to each other. One of these patterns was "keyed" with the Morse code signal "A", dit-dah, and the second pattern "N", dah-dit. This created two opposed "A" quadrants and two opposed "N" quadrants around the station. The borders between these quadrants created four course legs or "beams": if the pilot flew down these lines, the "A" and "N" signals merged into a steady "on course" tone and the pilot was "on the beam". If the pilot deviated to either side, the "A" or "N" tone would become louder, and the pilot knew to make a correction, as the sketch below illustrates.
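A minimal sketch of the interlocking keying (plain Python; the 8-slot unit timing is an illustrative simplification): the "A" on/off pattern and its exact complement are keyed into opposite lobes, and the complement, read cyclically, is heard as dah-dit, i.e. the letter "N". At equal lobe strengths the two patterns fill each other in:

```python
A = [1, 0, 1, 1, 1, 0, 0, 0]   # "A": dit, gap, dah (1 = tone on)
N = [1 - x for x in A]         # complement; cyclically reads as dah, gap, dit: "N"

def heard(a_gain, n_gain):
    """Audio envelope when the two lobes are received at given strengths."""
    return [a_gain * a + n_gain * n for a, n in zip(A, N)]

print(heard(1.0, 1.0))  # on the beam: all slots filled -> a steady tone
print(heard(1.0, 0.3))  # off to the "A" side: the "A" pattern dominates
```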
The beams were typically aligned with other stations to produce a set of airways, allowing an aircraft to travel from airport to airport by following a selected set of stations. Effective course accuracy was about three degrees, which near the station provided sufficient safety margins for instrument approaches down to low minimums. At its peak deployment, there were over 400 LFR stations in the US.

Glide path and the localizer of ILS
The remaining widely used beam systems are the glide path and the localizer of the instrument landing system (ILS). ILS uses a localizer to provide horizontal position and a glide path to provide vertical positioning. ILS can provide enough accuracy and redundancy to allow automated landings.

Transponder systems
Positions can be determined with any two measures of angle or distance. The introduction of radar in the 1930s provided a way to directly determine the distance to an object even at long distances. Navigation systems based on these concepts soon appeared, and remained in widespread use until recently. Today they are used primarily for aviation, although GPS has largely supplanted this role.

Radar and transponders
Early radar systems, like the UK's Chain Home, consisted of large transmitters and separate receivers. The transmitter periodically sends out a short pulse of a powerful radio signal, which is sent into space through broadcast antennas. When the signal reflects off a target, some of it is reflected back in the direction of the station, where it is received. The received signal is a tiny fraction of the broadcast power, and has to be powerfully amplified in order to be used.

The same signals are also sent over local electrical wiring to the operator's station, which is equipped with an oscilloscope. Electronics attached to the oscilloscope provide a signal that increases in voltage over a short period of time, a few microseconds. When sent to the X input of the oscilloscope, this causes a horizontal line to be displayed on the scope. This "sweep" is triggered by a signal tapped off the broadcaster, so the sweep begins when the pulse is sent. Amplified signals from the receiver are then sent to the Y input, where any received reflection causes the beam to move upward on the display. This causes a series of "blips" to appear along the horizontal axis, indicating reflected signals. By measuring the distance from the start of the sweep to the blip, which corresponds to the time between broadcast and reception, the distance to the object can be determined.

Soon after the introduction of radar, the radio transponder appeared. Transponders are a combination of receiver and transmitter whose operation is automated: upon reception of a particular signal, normally a pulse on a particular frequency, the transponder sends out a pulse in response, typically delayed by some very short time. Transponders were initially used as the basis for early IFF systems; aircraft with the proper transponder would appear on the display as part of the normal radar operation, but then the signal from the transponder would cause a second blip to appear a short time later. Single blips were enemies, double blips friendly.
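The underlying arithmetic is simple. As a minimal sketch (plain Python; the pulse timings and the fixed reply delay are illustrative numbers, not the parameters of any particular system):

```python
# Pulse-timing range measurement. For primary radar, range is half the
# round-trip time multiplied by the speed of light; for a transponder,
# the transponder's fixed reply delay must be subtracted first.
C = 299_792_458.0  # speed of light, m/s

def radar_range_m(round_trip_s):
    return C * round_trip_s / 2.0

def transponder_range_m(round_trip_s, reply_delay_s):
    return C * (round_trip_s - reply_delay_s) / 2.0

print(radar_range_m(200e-6))               # ~30 km echo
print(transponder_range_m(250e-6, 50e-6))  # the same ~30 km via a transponder
```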
Transponder-based distance-distance navigation systems have a significant advantage in terms of positional accuracy. Any radio signal spreads out over distance, forming, for instance, the fan-like beams of the Lorenz signal. As the distance between the broadcaster and receiver grows, the area covered by the fan increases, decreasing the accuracy of location within it. In comparison, transponder-based systems measure the timing between two signals, and the accuracy of that measure is largely a function of the equipment and nothing else. This allows these systems to remain accurate over very long range. The latest transponder systems (mode S) can also provide position information, possibly derived from GNSS, allowing for even more precise positioning of targets.

Bombing systems
The first distance-based navigation system was the German Y-Gerät blind-bombing system. This used a Lorenz beam for horizontal positioning, and a transponder for ranging. A ground-based system periodically sent out pulses which the airborne transponder returned. By measuring the total round-trip time on a radar's oscilloscope, the aircraft's range could be accurately determined even at very long ranges. An operator then relayed this information to the bomber crew over voice channels, and indicated when to drop the bombs.

The British introduced similar systems, notably Oboe. This used two stations in England that operated on different frequencies and allowed the aircraft to be triangulated in space. To ease pilot workload only one of these was used for navigation: prior to the mission a circle was drawn over the target from one of the stations, and the aircraft was directed to fly along this circle on instructions from the ground operator. The second station was used, as in Y-Gerät, to time the bomb drop. Unlike Y-Gerät, Oboe was deliberately built to offer very high accuracy, as good as 35 m, much better than even the best optical bombsights.

One problem with Oboe was that it allowed only one aircraft to be guided at a time. This was addressed in the later Gee-H system by placing the transponder on the ground and the broadcaster in the aircraft. The signals were then examined on existing Gee display units in the aircraft (see below). Gee-H did not offer the accuracy of Oboe, but could be used by as many as 90 aircraft at once. This basic concept has formed the basis of most distance-measuring navigation systems to this day.

Beacons
The key to the transponder concept is that it can be used with existing radar systems. The ASV radar introduced by RAF Coastal Command was designed to track down submarines and ships by displaying the signal from two antennas side by side and allowing the operator to compare their relative strength. Adding a ground-based transponder immediately turned the same display into a system able to guide the aircraft towards a transponder, or "beacon" in this role, with high accuracy.

The British put this concept to use in their Rebecca/Eureka system, where battery-powered "Eureka" transponders were triggered by airborne "Rebecca" radios and then displayed on ASV Mk. II radar sets. Eurekas were provided to French resistance fighters, who used them to call in supply drops with high accuracy. The US quickly adopted the system for paratroop operations, dropping the Eureka with pathfinder forces or partisans, and then homing in on those signals to mark the drop zones.

The beacon system was widely used in the post-war era for blind bombing systems. Of particular note were systems used by the US Marines that allowed the signal to be delayed in such a way as to offset the drop point. These systems allowed the troops at the front line to direct the aircraft to points in front of them, directing fire on the enemy.
Beacons were widely used for temporary or mobile navigation as well, as the transponder systems were generally small and low-powered, able to be man-portable or mounted on a Jeep.

DME
In the post-war era, a general navigation system using transponder-based systems was deployed as the distance measuring equipment (DME) system. DME was identical to Gee-H in concept, but used new electronics to automatically measure the time delay and display it as a number, rather than having the operator time the signals manually on an oscilloscope. This led to the possibility that DME interrogation pulses from different aircraft might be confused, but this was solved by having each aircraft send out a different series of pulses which the ground-based transponder repeated back.

DME is almost always used in conjunction with VOR, and is normally co-located at a VOR station. This combination allows a single VOR/DME station to provide both angle and distance, and thereby a single-station fix. DME is also used as the distance-measuring basis for the military TACAN system, and its DME signals can be used by civilian receivers.

Hyperbolic systems
Hyperbolic navigation systems are a modified form of transponder system that eliminates the need for an airborne transponder. The name refers to the fact that they do not produce a single distance or angle, but instead indicate a location along any number of hyperbolic lines in space. Two such measurements produce a fix. As these systems are almost always used with a specific navigational chart with the hyperbolic lines plotted on it, they generally reveal the receiver's location directly, eliminating the need for manual triangulation. As these charts were digitized, they became the first true location-indication navigational systems, outputting the location of the receiver as latitude and longitude. Hyperbolic systems were introduced during World War II and remained the main long-range advanced navigation systems until GPS replaced them in the 1990s.

Gee
The first hyperbolic system to be developed was the British Gee system, developed during World War II. Gee used a series of transmitters sending out precisely timed signals, with the signals leaving the stations at fixed delays. Gee was used by RAF Bomber Command's heavy bombers, whose navigators examined the times of arrival on an oscilloscope at the navigator's station. If the signal from two stations arrived at the same time, the aircraft must be an equal distance from both transmitters, allowing the navigator to determine a line of position on his chart of all the positions at that distance from both stations. More typically, the signal from one station would be received earlier than the other. The difference in timing between the two signals would reveal them to be along a curve of possible locations (see the sketch below). By making similar measurements with other stations, additional lines of position can be produced, leading to a fix. Gee was accurate to about 165 yards (150 m) at short ranges, and up to a mile (1.6 km) at longer ranges over Germany. Gee remained in use long after World War II, and equipped RAF aircraft as late as the 1960s (the approximate frequency by then was 68 MHz).
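The following is a minimal sketch of the hyperbolic line-of-position idea (plain Python; the station coordinates are illustrative): a fixed difference in arrival times from two stations confines the receiver to one branch of a hyperbola with the stations as foci.

```python
# Arrival-time difference (microseconds) of simultaneous pulses from two
# stations, as seen at a receiver. All points with the same difference lie
# on the same hyperbolic line of position.
import math

C = 299_792.458  # speed of light, km/s

def time_difference_us(receiver, sta_a, sta_b):
    da = math.dist(receiver, sta_a)
    db = math.dist(receiver, sta_b)
    return (da - db) / C * 1e6

a, b = (0.0, 0.0), (300.0, 0.0)  # two stations 300 km apart
p1 = (200.0, 0.0)                # a point where da - db = 100 km
p2 = (227.15, 166.20)            # a different point on the same hyperbola
print(time_difference_us(p1, a, b))  # ~333.6 us
print(time_difference_us(p2, a, b))  # ~333.6 us: same line of position
```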
LORAN
With Gee entering operation in 1942, similar US efforts were seen to be superfluous. They turned their development efforts towards a much longer-ranged system based on the same principles, using much lower frequencies that allowed coverage across the Atlantic Ocean. The result was LORAN, for "LOng RAnge Navigation". The downside to the long-wavelength approach was that accuracy was greatly reduced compared to the high-frequency Gee. LORAN was widely used during convoy operations in the late-war period.

Decca
Another British system from the same era was the Decca Navigator. This differed from Gee primarily in that the signals were not pulses delayed in time, but continuous signals delayed in phase. By comparing the phase of the two signals, the same time-difference information as Gee was obtained. However, this was far easier to display: the system could output the phase angle to a pointer on a dial, removing any need for visual interpretation. As the circuitry for driving this display was quite small, Decca systems normally used three such displays, allowing quick and accurate reading of multiple fixes. Decca found its greatest use post-war on ships, and remained in use into the 1990s.

LORAN-C
Almost immediately after the introduction of LORAN, in 1952 work started on a greatly improved version. LORAN-C (the original retroactively became LORAN-A) combined the techniques of pulse timing in Gee with the phase comparison of Decca. The resulting system, operating in the low frequency (LF) radio spectrum from 90 to 110 kHz, was both long-ranged (for 60 kW stations, up to 3400 miles) and accurate. To do this, LORAN-C sent a pulsed signal, but modulated the pulses with an AM signal within it. Gross positioning was determined using the same methods as Gee, locating the receiver within a wide area. Finer accuracy was then provided by measuring the phase difference of the signals, overlaying that second measure on the first (a code sketch of this two-level scheme appears below). By 1962, high-power LORAN-C was in place in at least 15 countries.

LORAN-C was fairly complex to use, requiring a room of equipment to pull out the different signals. However, with the introduction of integrated circuits, this was reduced further and further. By the late 1970s, LORAN-C units were the size of a stereo amplifier and were commonly found on almost all commercial ships as well as some larger aircraft. By the 1980s, this had been further reduced to the size of a conventional radio, and it became common even on pleasure boats and personal aircraft. It was the most popular navigation system in use through the 1980s and 1990s, and its popularity led to many older systems being shut down, like Gee and Decca. However, like the beam systems before it, civilian use of LORAN-C was short-lived when GPS technology drove it from the market.

Other hyperbolic systems
Similar hyperbolic systems included the US global-wide VLF/Omega Navigation System and the similar Alpha deployed by the USSR. These systems determined pulse timing not by comparison of two signals, but by comparison of a single signal with a local atomic clock. The expensive-to-maintain Omega system was shut down in 1997 as the US military migrated to using GPS. Alpha is still in use.
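As a minimal sketch of the LORAN-C two-level measurement (plain Python; the numbers are illustrative): coarse pulse timing picks the correct cycle of the 100 kHz carrier, and the measured carrier phase refines the reading within that cycle.

```python
# Combine a coarse envelope timing with a fine carrier-phase measurement.
CYCLE_US = 10.0  # one cycle of the 100 kHz carrier, in microseconds

def refined_delay_us(coarse_us, phase_deg):
    """Snap the coarse timing to the nearest whole carrier cycle, then
    add the fraction of a cycle given by the phase measurement."""
    fine = (phase_deg / 360.0) * CYCLE_US
    n_cycles = round((coarse_us - fine) / CYCLE_US)
    return n_cycles * CYCLE_US + fine

# Coarse timing says ~2347.2 us; the phase says 0.73 of a cycle:
print(refined_delay_us(2347.2, 0.73 * 360.0))  # -> 2347.3
```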
Satellite navigation
Since the 1960s, navigation has increasingly moved to satellite navigation systems. These are essentially hyperbolic systems whose transmitters are in orbit. Because the satellites move with respect to the receiver, the calculation of the satellites' positions must be taken into account, which can only be handled effectively with a computer.

Satellite navigation systems send several signals that are used to decode the satellite's position, the distance between the user and the satellite, and the user's precise time. One signal encodes the satellite's ephemeris data, which is used to accurately calculate the satellite's location at any time. Space weather and other effects cause the orbit to change over time, so the ephemeris has to be updated periodically. Other signals send out the time as measured by the satellite's onboard atomic clock. By measuring signal times of arrival (TOAs) from at least four satellites, the user's receiver can rebuild an accurate clock signal of its own, allowing hyperbolic navigation to be carried out, as the sketch below illustrates.

Satellite navigation systems offer better accuracy than any land-based system, are available at almost all locations on the Earth, can be implemented (receiver-side) at modest cost and complexity with modern electronics, and require only a few dozen satellites to provide worldwide coverage. As a result of these advantages, satellite navigation has led to almost all previous systems falling from use. LORAN, Omega, Decca, Consol and many other systems disappeared during the 1990s and 2000s. The only other systems still in use are aviation aids, which are also being turned off for long-range navigation while new differential GPS systems are being deployed to provide the local accuracy needed for blind landings.
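The following is a minimal sketch of TOA positioning with an unknown receiver clock bias, the core computation of satellite navigation. It is a 2-D toy with fixed "satellite" positions (real systems solve in 3-D with satellite positions computed from the ephemeris), and it assumes NumPy is available:

```python
# Gauss-Newton solve for (x, y, clock_bias_m) from pseudoranges
# rho_i = |p - s_i| + clock_bias_m, one per satellite.
import numpy as np

C = 299_792_458.0  # m/s

def solve_fix(sat_pos, pseudoranges, iters=10):
    x = np.zeros(3)  # initial guess: origin, zero clock bias
    for _ in range(iters):
        d = np.linalg.norm(sat_pos - x[:2], axis=1)
        residual = pseudoranges - (d + x[2])
        # Jacobian rows: [unit vector from satellite toward receiver, 1]
        J = np.hstack([-(sat_pos - x[:2]) / d[:, None], np.ones((len(d), 1))])
        x += np.linalg.lstsq(J, residual, rcond=None)[0]
    return x

sats = np.array([[0.0, 20e6], [15e6, 15e6], [-15e6, 15e6], [5e6, 25e6]])
truth, bias_m = np.array([1000.0, 2000.0]), C * 1e-6  # 1 microsecond of bias
rho = np.linalg.norm(sats - truth, axis=1) + bias_m
print(solve_fix(sats, rho))  # -> approx [1000, 2000, 299.79]
```

Solving for the clock bias alongside position is why four measurements are needed for a 3-D fix: three position coordinates plus one time unknown.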
International regulation
Radionavigation service (short: RNS) is – according to Article 1.42 of the International Telecommunication Union's (ITU) Radio Regulations (RR) – defined as "A radiodetermination service for the purpose of radionavigation, including obstruction warning." This service is a so-called safety-of-life service, must be protected from interference, and is an essential part of navigation.

This radiocommunication service is classified in accordance with the ITU Radio Regulations (article 1) as follows:
Radiodetermination service (article 1.40)
Radiodetermination-satellite service (article 1.41)
Radionavigation service (article 1.42)
Radionavigation-satellite service (article 1.43)
Maritime radionavigation service (article 1.44)
Maritime radionavigation-satellite service (article 1.45)
Aeronautical radionavigation service (article 1.46)
Aeronautical radionavigation-satellite service (article 1.47)

Aeronautical
Aeronautical radionavigation service (short: ARNS) is – according to Article 1.46 of the ITU Radio Regulations (RR) – defined as "A radionavigation service intended for the benefit and for the safe operation of aircraft." This service is a so-called safety-of-life service, must be protected from interference, and is an essential part of navigation.

Maritime
Maritime radionavigation service (short: MRNS) is – according to Article 1.44 of the ITU Radio Regulations (RR) – defined as "A radionavigation service intended for the benefit and for the safe operation of ships." This service is a so-called safety-of-life service, must be protected from interference, and is an essential part of navigation.

Stations
Land station
A radionavigation land station is – according to article 1.88 of the ITU Radio Regulations (RR) – defined as "A radio station in the radionavigation service not intended to be used while in motion." Each radio station shall be classified by the radiocommunication service in which it operates permanently or temporarily. This station operates in a safety-of-life service and must be protected from interference. In accordance with the ITU Radio Regulations (article 1) this type of radio station might be classified as follows:
Radiodetermination station (article 1.86) of the radiodetermination service (article 1.40)
Radionavigation mobile station (article 1.87) of the radionavigation service (article 1.42)
Radionavigation land station

Mobile station
A radionavigation mobile station is – according to article 1.87 of the ITU Radio Regulations (RR) – defined as "A radio station in the radionavigation service intended to be used while in motion or during halts at unspecified points." Each radio station shall be classified by the radiocommunication service in which it operates permanently or temporarily. This station operates in a safety-of-life service and must be protected from interference. In accordance with the ITU Radio Regulations (article 1) this type of radio station might be classified as follows:
Radiodetermination station (article 1.86) of the radiodetermination service (article 1.40)
Radionavigation mobile station

See also
Ambrose Channel pilot cable
American Practical Navigator
Differential GPS (DGPS)
Distance measuring equipment (DME)
EGNOS (European Geostationary Navigation Overlay Service)
Galileo positioning system (Galileo)
Global Positioning System (GPS)
Global Navigation Satellite System (GLONASS)
Inertial navigation system
Instrument landing system (ILS)
Local Area Augmentation System (LAAS)
Long-range navigation (LORAN)
Marker beacon (three-light marker beacon system)
Microwave landing system (MLS)
Multilateration
Non-directional beacon (NDB)
Radio altimeter
Radar navigation
Real-time locating
Receiver Autonomous Integrity Monitoring (RAIM)
Satellite geodesy#Radio techniques
Space Integrated GPS/INS (SIGI)
SCR-277
Tactical air navigation (TACAN)
Transponder Landing System (TLS)
Transit (satellite)
VHF omnidirectional range (VOR)
X-ray pulsar-based navigation
Wide Area Augmentation System (WAAS)
Wind triangle

References

External links
UK Navaids Gallery with detailed Technical Descriptions of their operation
U.S. Federal Radionavigation Plan

Air traffic control Angle Euclidean geometry Navigation Surveying Wireless locating
Radio navigation
[ "Physics", "Technology", "Engineering" ]
6,219
[ "Geometric measurement", "Scalar physical quantities", "Physical quantities", "Wireless locating", "Surveying", "Civil engineering", "Wikipedia categories named after physical quantities", "Angle" ]
153,099
https://en.wikipedia.org/wiki/Normal%20closure%20%28group%20theory%29
In group theory, the normal closure of a subset S of a group G is the smallest normal subgroup of G containing S.

Properties and description
Formally, if G is a group and S is a subset of G, the normal closure ncl_G(S) of S is the intersection of all normal subgroups of G containing S:

ncl_G(S) = ⋂_{N ⊴ G, S ⊆ N} N.

The normal closure is the smallest normal subgroup of G containing S, in the sense that ncl_G(S) is a subset of every normal subgroup of G that contains S.

The subgroup ncl_G(S) is generated by the set S^G = {g⁻¹sg : s ∈ S, g ∈ G} of all conjugates of elements of S in G. Therefore one can also write

ncl_G(S) = {g₁⁻¹s₁^{ε₁}g₁ ⋯ g_n⁻¹s_n^{ε_n}g_n : n ≥ 0, ε_i = ±1, s_i ∈ S, g_i ∈ G}.

Any normal subgroup is equal to its normal closure, and the conjugate closure of the empty set ∅ is the trivial subgroup. For example, the normal closure of a single transposition in the symmetric group S₃ is all of S₃, since the conjugates of a transposition are the three transpositions, which generate the group.

A variety of other notations are used for the normal closure in the literature, including ⟨S^G⟩, ⟨S⟩^G and ⟨⟨S⟩⟩_G.

Dual to the concept of normal closure is that of the normal interior or normal core, defined as the join of all normal subgroups contained in S.

Group presentations
For a group G given by a presentation G = ⟨S | R⟩ with generators S and defining relators R, the presentation notation means that G is the quotient group G = F(S)/ncl_{F(S)}(R), where F(S) is a free group on S.

Group theory Closure operators
Normal closure (group theory)
[ "Mathematics" ]
200
[ "Group theory", "Fields of abstract algebra", "Order theory", "Closure operators" ]