Dataset columns: id (int64, 39 to 79M) · url (string, length 31 to 227) · text (string, length 6 to 334k) · source (string, length 1 to 150) · categories (list, length 1 to 6) · token_count (int64, 3 to 71.8k) · subcategories (list, length 0 to 30)
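A minimal sketch of loading rows with this schema via the Hugging Face datasets library; the repository path below is a hypothetical placeholder, not the dataset's actual name.

```python
# Sketch: load and inspect a dataset with the schema summarized above.
# "org/wikipedia-categorized" is a hypothetical placeholder path.
from datasets import load_dataset

ds = load_dataset("org/wikipedia-categorized", split="train")
print(ds.features)  # id, url, text, source, categories, token_count, subcategories
row = ds[0]
print(row["url"], row["token_count"], row["categories"])
```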
2,219,658
https://en.wikipedia.org/wiki/Check%20Point
Check Point Software Technologies Ltd. is an American-Israeli multinational provider of software and combined hardware and software products for IT security, including network security, endpoint security, cloud security, mobile security, data security and security management. History Check Point was established in Ramat Gan, Israel in 1993, by Gil Shwed (CEO), Marius Nacht (Chairman) and Shlomo Kramer (who left Check Point in 2003). Shwed had the initial idea for the company's core technology known as stateful inspection, which became the foundation for the company's first product, FireWall-1; soon afterwards they also developed one of the world's first VPN products, VPN-1. Shwed developed the idea while serving in Unit 8200 of the Israel Defense Forces, where he worked on securing classified networks. Initial funding of US$250,000 was provided by the venture capital fund BRM Group. In 1994 Check Point signed an OEM agreement with Sun Microsystems, followed by a distribution agreement with HP in 1995. The same year, the U.S. head office was established in Redwood City, California. By February 1996, the company was named worldwide firewall market leader by IDC, with a market share of 40 percent. In June 1996 Check Point raised $67 million from its initial public offering on NASDAQ. In 1998, Check Point established a partnership with Nokia, which bundled Check Point's software with Nokia's network security appliances. In 2003, a class-action lawsuit was filed against Check Point over violation of the Securities Exchange Act by failing to disclose major financial information. On 14 August 2003 Check Point opened its branch in India's capital, Delhi (with the legal name Check Point Software Technologies India Pvt. Ltd.). Eyal Desheh was the first director appointed in India. During the first decade of the 21st century Check Point started acquiring other IT security companies, including Nokia's network security business unit in 2009. In 2019, researchers at Check Point found a security flaw in Xiaomi phone apps; the vulnerable app was reported to come preinstalled. Over the years many employees who worked at Check Point have left to start their own software companies. These include Shlomo Kramer, who started Imperva; Nir Zuk, who founded Palo Alto Networks; Ruvi Kitov and Reuven Harrison of Tufin; Yonadav Leitersdorf, who founded Indeni; and Avi Shua, who founded Orca Security. Criticism As of December 2023, Check Point Software continues to operate in Russia, selling its cybersecurity products in the country. Despite the ongoing conflict in Ukraine, the company has maintained its office in Moscow and has faced criticism for its decision to remain active in Russia. SofaWare legal battle SofaWare Technologies was founded in 1999, as a cooperation between Check Point and SofaWare's founders, Adi Ruppin and Etay Bogner, with the purpose of extending Check Point from the enterprise market to the small business, consumer and branch office market. SofaWare's co-founder Adi Ruppin said that his company wanted to make the technology simple to use and affordable, and to lift the burden of security management from end users while adding some features. In 2001 SofaWare began selling firewall appliances under the SofaWare S-Box brand; in 2002 the company started selling the Safe@Office and Safe@Home line of security appliances, under the Check Point brand.
By the fourth quarter of 2002 sales of SofaWare's Safe@Office firewall/VPN appliances had increased greatly, and SofaWare held the #1 revenue position in the worldwide firewall/VPN sub-$490 appliance market, with a 38% revenue market share. Relations between Check Point and the SofaWare founders went sour after the company's acquisition in 2002. In 2004 Etay Bogner, co-founder of SofaWare, sought court approval to file a shareholder derivative suit, claiming Check Point was not transferring funds to SofaWare as required for its use of SofaWare's products and technology. His derivative suit was ultimately successful, and Check Point was ordered to pay SofaWare 13 million shekels for breach of contract. In 2006 a Tel Aviv District Court judge ruled that Bogner could sue Check Point by proxy for the $5.1 million in alleged damage to SofaWare. Bogner claimed that Check Point, which owned 60% of SofaWare, had behaved belligerently, and withheld money due for use of SofaWare technology and products. Check Point appealed the ruling, but lost. In 2009 the Israeli Supreme Court ruled that a group of founders of SofaWare, which included Bogner, had veto power over any decision of SofaWare. The court ruled that the three founders could exercise their veto power only as a group and by majority rule. In 2011 Check Point settled all litigation relating to SofaWare. As part of the settlement it acquired the SofaWare shares held by Bogner and Ruppin, and began a process of acquiring the remaining shares, resulting in SofaWare becoming a wholly owned subsidiary. See also Economy of Israel Silicon Wadi References External links Corporate website Check Point Research Technology companies of Israel Computer hardware companies Computer security companies Computer security software companies Software companies established in 1993 Israeli brands Networking hardware companies Software companies of Israel Deep packet inspection Server appliance Companies based in San Carlos, California Software companies of the United States Companies based in Tel Aviv Companies listed on the Nasdaq 1993 establishments in Israel 1996 initial public offerings
Check Point
[ "Technology" ]
1,125
[ "Computer hardware companies", "Computers" ]
2,219,841
https://en.wikipedia.org/wiki/Selective%20catalytic%20reduction
Selective catalytic reduction (SCR) means converting nitrogen oxides, also referred to as NOx, with the aid of a catalyst into diatomic nitrogen (N2) and water (H2O). A reductant, typically anhydrous ammonia (NH3), aqueous ammonia (NH4OH), or a urea (CO(NH2)2) solution, is added to a stream of flue or exhaust gas and is reacted onto a catalyst. As the reaction drives toward completion, nitrogen (N2) and, in the case of urea use, carbon dioxide (CO2) are produced. Selective catalytic reduction of NOx using ammonia as the reducing agent was patented in the United States by the Engelhard Corporation in 1957. Development of SCR technology continued in Japan and the US in the early 1960s with research focusing on less expensive and more durable catalyst agents. The first large-scale SCR was installed by the IHI Corporation in 1978. Commercial selective catalytic reduction systems are typically found on large utility boilers, industrial boilers, and municipal solid waste boilers and have been shown to lower NOx emissions by 70-95%. Applications include diesel engines, such as those found on large ships, diesel locomotives, gas turbines, and automobiles. SCR systems are now the preferred method for meeting Tier 4 Final and EURO 6 diesel emissions standards for heavy trucks, cars and light commercial vehicles. As a result, emissions of NOx, particulates, and hydrocarbons have been lowered by as much as 95% when compared with pre-emissions engines. Chemistry The reduction reaction takes place as the gases pass through the catalyst chamber. Before entering the catalyst chamber, ammonia, or another reductant (such as urea), is injected and mixed with the gases. The intended equations for the reactions using ammonia for an SCR are: 4NO + 4NH3 + O2 → 4N2 + 6H2O, 2NO2 + 4NH3 + O2 → 3N2 + 6H2O, and NO + NO2 + 2NH3 → 2N2 + 3H2O. Several secondary reactions also occur, notably involving sulfur: 2SO2 + O2 → 2SO3, 2NH3 + SO3 + H2O → (NH4)2SO4, and NH3 + SO3 + H2O → NH4HSO4. With urea, the overall reaction is: 4NO + 2(NH2)2CO + O2 → 4N2 + 4H2O + 2CO2. As with ammonia, several secondary reactions also occur in the presence of sulfur. The ideal reaction has an optimal temperature range between 630 and 720 K (357 and 447 °C) but can operate as low as 500 K (227 °C) with longer residence times. The minimum effective temperature depends on the fuels, gas constituents, and catalyst. Other possible reductants include cyanuric acid and ammonium sulfate. Catalysts SCR catalysts are made from various porous ceramic materials used as a support, such as titanium oxide; the active catalytic components are usually either oxides of base metals (vanadium, molybdenum and tungsten), zeolites, or cerium. Base metal catalysts, such as vanadium and tungsten, lack high thermal durability, but are less expensive and operate very well at the temperature ranges most commonly applied in industrial and utility boiler applications. Thermal durability is particularly important for automotive SCR applications that incorporate the use of a diesel particulate filter with forced regeneration. They also have a high catalysing potential to oxidize SO2 into SO3, which can be extremely damaging due to its acidic properties. Zeolite catalysts have the potential to operate at substantially higher temperature than base metal catalysts; they can withstand prolonged operation at temperatures of 900 K (627 °C) and transient conditions of up to 1120 K (847 °C). Zeolites also have a lower potential for SO2 oxidation and thus decrease the related corrosion risks. Iron- and copper-exchanged zeolite urea SCRs have been developed with approximately equal performance to that of vanadium-urea SCRs if the fraction of NO2 is 20% to 50% of the total NOx. The two most common catalyst geometries used today are honeycomb catalysts and plate catalysts.
The honeycomb form usually consists of an extruded ceramic with the catalyst applied homogeneously throughout the carrier or coated on the substrate. Like the various types of catalysts, their configuration also has advantages and disadvantages. Plate-type catalysts have lower pressure drops and are less susceptible to plugging and fouling than the honeycomb types, but are much larger and more expensive. Honeycomb configurations are smaller than plate types, but have higher pressure drops and plug much more easily. A third type is corrugated, comprising only about 10% of the market in power plant applications. Reductants Several nitrogen-bearing reductants are used in SCR applications, including anhydrous ammonia, aqueous ammonia and dissolved urea. All three reductants are widely available in large quantities. Anhydrous ammonia can be stored as a liquid at approximately 10 bar in steel tanks. It is classified as an inhalation hazard, but it can be safely stored and handled if well-developed codes and standards are followed. Its advantage is that it needs no further conversion to operate within an SCR and is typically favoured by large industrial SCR operators. Aqueous ammonia must first be vaporized in order to be used, but it is substantially safer to store and transport than anhydrous ammonia. Urea is the safest to store, but requires conversion to ammonia through thermal decomposition. At the end of the process, the purified exhaust gases are sent to the boiler or condenser or other equipment, or discharged into the atmosphere. Limitations Most catalysts have a finite service life, mainly due to the formation of ammonium sulfate and ammonium bisulfate from sulfur compounds when high-sulfur fuels are used, as well as the undesirable catalyst-induced oxidation of SO2 to SO3 and H2SO4. In applications that use exhaust gas boilers, ammonium sulfate and ammonium bisulfate can accumulate on the boiler tubes, inhibiting steam output and increasing exhaust back-pressure. In marine applications, this can increase fresh water requirements as the boiler must be continuously washed to remove the deposits. Most catalysts on the market have porous structures and geometries optimized for increasing their specific surface area (a clay planting pot is a good example of what an SCR catalyst feels like). This porosity is what gives the catalyst the high surface area needed for reduction of NOx. However, soot, ammonium sulfate, ammonium bisulfate, silica compounds, and other fine particulates can easily clog the pores. Ultrasonic horns and soot blowers can remove most of these contaminants while the unit is online. The unit can also be cleaned by being washed with water or by raising the exhaust temperature. Of more concern to SCR performance are poisons, which will chemically degrade the catalyst itself or block the catalyst's active sites and render it ineffective at NOx reduction; in severe cases this can result in the ammonia or urea being oxidized and a subsequent increase in emissions. These poisons are alkali metals, alkaline earth metals, halogens, phosphorus, sulfur, arsenic, antimony, chromium, heavy metals (copper, cadmium, mercury, thallium, and lead), and many heavy metal compounds (e.g. oxides and halides). Most SCRs require tuning to perform properly. Part of tuning involves ensuring a proper distribution of ammonia in the gas stream and uniform gas velocity through the catalyst.
Without tuning, SCRs can exhibit inefficient NOx reduction along with excessive ammonia slip due to not utilizing the catalyst surface area effectively. Another facet of tuning involves determining the proper ammonia flow for all process conditions. Ammonia flow is generally controlled based on NOx measurements taken from the gas stream or on pre-existing performance curves from an engine manufacturer (in the case of gas turbines and reciprocating engines). Typically, all future operating conditions must be known beforehand to properly design and tune an SCR system. Ammonia slip is an industry term for ammonia passing through the SCR unreacted. This occurs when ammonia is injected in excess, temperatures are too low for ammonia to react, or the catalyst has been poisoned. In applications using both SCR and an alkaline scrubber, the use of high-sulfur fuels also tends to significantly increase ammonia slip, since alkaline compounds such as NaOH convert ammonium sulfate and ammonium bisulfate back into ammonia, for example: (NH4)2SO4 + 2NaOH → Na2SO4 + 2NH3 + 2H2O. Temperature is SCR's largest limitation. All engines have a period during start-up when exhaust temperatures are too low for the desired NOx reduction to occur, and the catalyst must be pre-heated when an engine is first started, especially in cold climates. Power plants In power stations, the same basic technology is employed for removal of NOx from the flue gas of boilers used in power generation and industry. In general, the SCR unit is located between the furnace economizer and the air heater, and the ammonia is injected into the catalyst chamber through an ammonia injection grid. As in other SCR applications, the temperature of operation is critical. Ammonia slip (unreacted ammonia) is also an issue with SCR technology used in power plants. A significant operational difficulty in coal-fired boilers is the binding of the catalyst by fly ash from the fuel combustion. This requires the use of sootblowers, ultrasonic horns, and careful design of the ductwork and catalyst materials to avoid plugging by the fly ash. SCR catalysts have a typical operational lifetime of about 16,000–40,000 hours (1.8–4.5 years) in coal-fired power plants, depending on the flue gas composition, and up to 80,000 hours (9 years) in cleaner gas-fired power plants. Poisons, sulfur compounds, and fly ash can all be removed by installing scrubbers before the SCR system to increase the life of the catalyst, though in most power plants and marine engines, scrubbers are installed after the system to maximize the SCR system's effectiveness. Automobiles History SCR was applied to trucks by Nissan Diesel Corporation; the first practical product, the Nissan Diesel Quon, was introduced in 2004 in Japan. In 2007, the United States Environmental Protection Agency (EPA) enacted requirements to significantly lower harmful exhaust emissions. To achieve this standard, Cummins and other diesel engine manufacturers developed an aftertreatment system that includes the use of a diesel particulate filter (DPF). As the DPF does not tolerate high-sulfur diesel fuel, diesel engines that conform to 2007 EPA emissions standards require ultra-low sulfur diesel fuel (ULSD) to prevent damage to the DPF. After a brief transition period, ULSD fuel became common at fuel pumps in the United States and Canada. The 2007 EPA regulations were meant to be an interim solution to allow manufacturers time to prepare for the more stringent 2010 EPA regulations, which lowered NOx levels even further.
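Since ammonia flow is controlled from measured NOx, a rough feedforward dose follows from the 1:1 molar NH3:NO stoichiometry of the standard reaction above, trimmed by feedback on outlet NOx. The sketch below is illustrative only; the function names, gains, and control structure are assumptions, not any vendor's actual control law.

```python
# Illustrative sketch of feedforward + feedback ammonia dosing for an SCR.
# Assumes ~1:1 molar NH3:NOx stoichiometry (4NO + 4NH3 + O2 -> 4N2 + 6H2O).
# All names and gains are hypothetical, not from any real controller.

NH3_PER_NOX = 1.0          # mol NH3 per mol NOx (ideal standard-SCR stoichiometry)
MOLAR_MASS_NH3 = 17.03     # g/mol
MOLAR_MASS_NO = 30.01      # g/mol (treating NOx as NO for the estimate)

def feedforward_nh3_g_per_h(nox_g_per_h: float) -> float:
    """Stoichiometric NH3 feed rate for a measured inlet NOx mass flow."""
    nox_mol_per_h = nox_g_per_h / MOLAR_MASS_NO
    return nox_mol_per_h * NH3_PER_NOX * MOLAR_MASS_NH3

def trimmed_dose(nox_in_g_per_h: float, nox_out_g_per_h: float,
                 target_out_g_per_h: float, kp: float = 0.5) -> float:
    """Feedforward dose plus a proportional trim on outlet NOx error."""
    base = feedforward_nh3_g_per_h(nox_in_g_per_h)
    error = nox_out_g_per_h - target_out_g_per_h   # positive -> under-dosing
    return max(0.0, base + kp * feedforward_nh3_g_per_h(error))

# Example: 2 kg/h of inlet NOx, 150 g/h measured at outlet, 100 g/h target.
print(round(trimmed_dose(2000.0, 150.0, 100.0), 1), "g/h NH3")
```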
2010 EPA regulations Diesel engines manufactured after January 1, 2010 are required to meet lowered NOx standards for the US market. All of the heavy-duty engine manufacturers (Class 7-8 trucks) continuing to manufacture engines after this date, except for Navistar International and Caterpillar, chose to use SCR. This includes Detroit Diesel (DD13, DD15, and DD16 models), Cummins (ISX, ISL9, and ISB6.7), Paccar, and Volvo/Mack. These engines require the periodic addition of diesel exhaust fluid (DEF, a urea solution) to enable the process. DEF is available in bottles and jugs from most truck stops, and a more recent development is bulk DEF dispensers near diesel fuel pumps. Caterpillar and Navistar had initially chosen to use enhanced exhaust gas recirculation (EEGR) to comply with the Environmental Protection Agency (EPA) standards, but in July 2012 Navistar announced it would be pursuing SCR technology for its engines, except on the MaxxForce 15, which was to be discontinued. Caterpillar ultimately withdrew from the on-highway engine market prior to implementation of these requirements. BMW, Daimler AG (as BlueTEC), and Volkswagen have used SCR technology in some of their passenger diesel cars. See also Acid rain Catalytic converter, which also catalyzes NOx conversion but does not use urea or ammonia Diesel exhaust fluid (DEF) or AdBlue Exhaust gas recirculation versus selective catalytic reduction Environmental engineering Selective non-catalytic reduction (SNCR) NOx adsorber (LNT) Vehicle emissions control References Pollution control technologies Chemical processes Air pollution control systems NOx control Catalysis
Selective catalytic reduction
[ "Chemistry", "Engineering" ]
2,523
[ "Catalysis", "Pollution control technologies", "Chemical processes", "nan", "Environmental engineering", "Chemical process engineering", "Chemical kinetics" ]
2,219,887
https://en.wikipedia.org/wiki/Negative%20refraction
In optics, negative refraction is the electromagnetic phenomenon where light rays become refracted at an interface in a way that is opposite to their more commonly observed positive refractive properties. Negative refraction can be obtained by using a metamaterial which has been designed to achieve a negative value for electric permittivity (ε) and magnetic permeability (μ); in such cases the material can be assigned a negative refractive index. Such materials are sometimes called "double negative" materials. Negative refraction occurs at interfaces between materials at which one has an ordinary positive phase velocity (i.e., a positive refractive index), and the other has the more exotic negative phase velocity (a negative refractive index). Negative phase velocity Negative phase velocity (NPV) is a property of light propagation in a medium. There are different definitions of NPV; the most common is Victor Veselago's original proposal of opposition of the wave vector and the (Abraham) Poynting vector. Other definitions include the opposition of wave vector to group velocity, and energy to velocity. "Phase velocity" is used conventionally, as phase velocity has the same sign as the wave vector. A typical criterion used to determine Veselago's NPV is that the dot product of the Poynting vector and wave vector is negative (i.e., that S · k < 0), but this definition is not covariant. While this restriction is not practically significant, the criterion has been generalized into a covariant form. Veselago NPV media are also called "left-handed (meta)materials", as the components of plane waves passing through (electric field, magnetic field, and wave vector) follow the left-hand rule instead of the right-hand rule. The terms "left-handed" and "right-handed" are generally avoided as they are also used to refer to chiral media. Negative refractive index One can choose to avoid directly considering the Poynting vector and wave vector of a propagating light field, and instead directly consider the response of the materials. Assuming the material is achiral, one can consider what values of permittivity (ε) and permeability (μ) result in negative phase velocity (NPV). Since both ε and μ are generally complex, their real parts do not both have to be negative for a passive (i.e. lossy) material to display negative refraction. In these materials, the criterion for negative phase velocity is derived by Depine and Lakhtakia to be ε_r |μ| + μ_r |ε| < 0, where ε_r and μ_r are the real valued parts of ε and μ, respectively. For active materials, the criterion is different. NPV occurrence does not necessarily imply negative refraction (negative refractive index). Typically, the refractive index n is determined using n = ±√(εμ) (in terms of the relative permittivity and permeability), where by convention the positive square root is chosen for n. However, in NPV materials, the negative square root is chosen to mimic the fact that the wave vector and phase velocity are also reversed. The refractive index is a derived quantity that describes how the wavevector is related to the optical frequency and propagation direction of the light; thus, the sign of n must be chosen to match the physical situation. In chiral materials The refractive index also depends on the chirality parameter κ, resulting in distinct values for left and right circularly polarized waves, given by n± = √(εμ) ± κ. A negative refractive index occurs for one polarization if κ > √(εμ); in this case, ε and/or μ do not need to be negative. A negative refractive index due to chirality was predicted by Pendry and Tretyakov et al., and first observed simultaneously and independently by Plum et al.
and Zhang et al. in 2009. Refraction The consequence of negative refraction is that light rays are refracted on the same side of the normal as the incident ray on entering the material, as described by a general form of Snell's law. See also Acoustic metamaterials Metamaterial Negative index metamaterials Metamaterial antennas Multiple-prism dispersion theory N-slit interferometric equation Perfect lens Photonic metamaterials Photonic crystal Seismic metamaterials Split-ring resonator Tunable metamaterials Electromagnetic interactions Bloch's theorem Casimir effect Dielectric Electromagnetism EM radiation Electron mobility Permeability (electromagnetism) Permittivity Wavenumber Photo-Dember Impedance References Photonics Physical phenomena Metamaterials Articles containing video clips
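As a minimal numerical sketch of the sign choice discussed above (assuming passive, achiral media; the function name is mine), the Depine–Lakhtakia criterion can be used to pick the branch of the square root:

```python
# Minimal sketch: deciding the sign of the refractive index for a passive
# medium from complex relative permittivity (eps) and permeability (mu),
# using the Depine-Lakhtakia criterion quoted above. Illustrative only.
import cmath

def refractive_index(eps: complex, mu: complex) -> complex:
    """Return n = +/- sqrt(eps*mu), negative branch when phase velocity is negative."""
    n = cmath.sqrt(eps * mu)                         # principal square root
    npv = eps.real * abs(mu) + mu.real * abs(eps) < 0
    return -n if npv else n

# Double-negative (Veselago) medium with small losses -> negative real index.
print(refractive_index(-1 + 0.01j, -1 + 0.01j))
# Ordinary dielectric -> positive index.
print(refractive_index(2.25 + 0j, 1 + 0j))
```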
Negative refraction
[ "Physics", "Materials_science", "Engineering" ]
915
[ "Physical phenomena", "Metamaterials", "Materials science" ]
2,220,039
https://en.wikipedia.org/wiki/Rotating-wave%20approximation
The rotating-wave approximation is an approximation used in atom optics and magnetic resonance. In this approximation, terms in a Hamiltonian that oscillate rapidly are neglected. This is a valid approximation when the applied electromagnetic radiation is near resonance with an atomic transition, and the intensity is low. Explicitly, terms in the Hamiltonians that oscillate with frequencies ω_L + ω_0 are neglected, while terms that oscillate with frequencies ω_L − ω_0 are kept, where ω_L is the light frequency and ω_0 is a transition frequency. The name of the approximation stems from the form of the Hamiltonian in the interaction picture, as shown below. By switching to this picture the evolution of an atom due to the corresponding atomic Hamiltonian is absorbed into the system ket, leaving only the evolution due to the interaction of the atom with the light field to consider. It is in this picture that the rapidly oscillating terms mentioned previously can be neglected. Since in some sense the interaction picture can be thought of as rotating with the system ket, only that part of the electromagnetic wave that approximately co-rotates is kept; the counter-rotating component is discarded. The rotating-wave approximation is closely related to, but different from, the secular approximation. Mathematical formulation For simplicity consider a two-level atomic system with ground and excited states |g⟩ and |e⟩, respectively (using the Dirac bracket notation). Let the energy difference between the states be ħω_0 so that ω_0 is the transition frequency of the system. Then the unperturbed Hamiltonian of the atom can be written as H_0 = ħω_0 |e⟩⟨e|. Suppose the atom experiences an external classical electric field of frequency ω_L, given by E(t) = E_0 e^(−iω_L t) + E_0* e^(iω_L t); e.g., a plane wave propagating in space. Then under the dipole approximation the interaction Hamiltonian between the atom and the electric field can be expressed as H_1 = −d·E, where d is the dipole moment operator of the atom. The total Hamiltonian for the atom-light system is therefore H = H_0 + H_1. The atom does not have a dipole moment when it is in an energy eigenstate, so ⟨e|d|e⟩ = ⟨g|d|g⟩ = 0. This means that defining d_eg = ⟨e|d|g⟩ allows the dipole operator to be written as d = d_eg|e⟩⟨g| + d_eg*|g⟩⟨e| (with * denoting the complex conjugate). The interaction Hamiltonian can then be shown to be H_1 = −ħ(Ω e^(−iω_L t) + Ω̃ e^(iω_L t))|e⟩⟨g| − ħ(Ω̃* e^(−iω_L t) + Ω* e^(iω_L t))|g⟩⟨e|, where Ω = d_eg·E_0/ħ is the Rabi frequency and Ω̃ = d_eg·E_0*/ħ is the counter-rotating frequency. To see why the Ω̃ terms are called counter-rotating consider a unitary transformation to the interaction or Dirac picture where the transformed Hamiltonian is given by H_1,I = −ħ(Ω e^(−iΔt) + Ω̃ e^(i(ω_L+ω_0)t))|e⟩⟨g| − ħ(Ω̃* e^(−i(ω_L+ω_0)t) + Ω* e^(iΔt))|g⟩⟨e|, where Δ = ω_L − ω_0 is the detuning between the light field and the atom. Making the approximation This is the point at which the rotating wave approximation is made. The dipole approximation has been assumed, and for this to remain valid the electric field must be near resonance with the atomic transition. This means that |Δ| ≪ ω_L + ω_0, and the complex exponentials multiplying Ω̃ and Ω̃* can be considered to be rapidly oscillating. Hence on any appreciable time scale, the oscillations will quickly average to 0. The rotating wave approximation is thus the claim that these terms may be neglected and thus the Hamiltonian can be written in the interaction picture as H̄_1,I = −ħΩ e^(−iΔt)|e⟩⟨g| − ħΩ* e^(iΔt)|g⟩⟨e|. Finally, transforming back into the Schrödinger picture, the Hamiltonian is given by H̄_1 = −ħΩ e^(−iω_L t)|e⟩⟨g| − ħΩ* e^(iω_L t)|g⟩⟨e|. Another criterion for the rotating wave approximation is the weak coupling condition, that is, the Rabi frequency should be much less than the transition frequency. At this point the rotating wave approximation is complete. A common first step beyond this is to remove the remaining time dependence in the Hamiltonian via another unitary transformation.
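As a minimal numerical sketch (assuming a real Rabi frequency, units where ħ = 1, and invented parameter values), one can compare time evolution under the full Hamiltonian above with its RWA counterpart; near resonance and at weak coupling the two should nearly coincide:

```python
# Minimal sketch: two-level atom driven near resonance, full vs RWA dynamics.
# Units hbar = 1; Omega real (so the counter-rotating frequency equals Omega);
# basis |g> = [1, 0], |e> = [0, 1]. Parameter values are invented.
import numpy as np
from scipy.linalg import expm

w0, wL, Om = 10.0, 10.2, 0.1                # transition, drive, Rabi frequencies
ee = np.array([[0, 0], [0, 1]], complex)    # |e><e|
eg = np.array([[0, 0], [1, 0]], complex)    # |e><g|
ge = eg.conj().T                            # |g><e|

def H_full(t):
    # H = w0|e><e| - 2*Om*cos(wL t)(|e><g| + |g><e|), counter-rotating term kept
    return w0 * ee - 2 * Om * np.cos(wL * t) * (eg + ge)

def H_rwa(t):
    # RWA form: only the co-rotating exponentials survive
    return w0 * ee - Om * np.exp(-1j * wL * t) * eg - Om * np.exp(1j * wL * t) * ge

def excited_population(H, T=50.0, steps=20000):
    """Integrate the Schroedinger equation from |g> and return |<e|psi>|^2."""
    psi = np.array([1, 0], complex)
    dt = T / steps
    for k in range(steps):
        psi = expm(-1j * H((k + 0.5) * dt) * dt) @ psi
    return abs(psi[1]) ** 2

print("full:", round(excited_population(H_full), 4),
      "RWA:", round(excited_population(H_rwa), 4))
```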
Derivation Given the above definitions the interaction Hamiltonian is H_1 = −ħ(Ω e^(−iω_L t) + Ω̃ e^(iω_L t))|e⟩⟨g| − ħ(Ω̃* e^(−iω_L t) + Ω* e^(iω_L t))|g⟩⟨e|, as stated. The next step is to find the Hamiltonian in the interaction picture, H_1,I. The required unitary transformation is U = e^(iH_0 t/ħ) = e^(iω_0 t|e⟩⟨e|) = |g⟩⟨g| + e^(iω_0 t)|e⟩⟨e|, where the 3rd step can be proved by using a Taylor series expansion, and using the orthogonality of the states |g⟩ and |e⟩. Note that a multiplication by an overall phase factor e^(iφ) on a unitary operator does not affect the underlying physics, so in the further usages of U we will neglect it. Applying U gives: H_1,I = U H_1 U† = −ħ(Ω e^(−iΔt) + Ω̃ e^(i(ω_L+ω_0)t))|e⟩⟨g| − ħ(Ω̃* e^(−i(ω_L+ω_0)t) + Ω* e^(iΔt))|g⟩⟨e|. Now we apply the RWA by eliminating the counter-rotating terms as explained in the previous section: H̄_1,I = −ħΩ e^(−iΔt)|e⟩⟨g| − ħΩ* e^(iΔt)|g⟩⟨e|. Finally, we transform the approximate Hamiltonian back to the Schrödinger picture: H̄_1 = U† H̄_1,I U = −ħΩ e^(−iω_L t)|e⟩⟨g| − ħΩ* e^(iω_L t)|g⟩⟨e|. The atomic Hamiltonian was unaffected by the approximation, so the total Hamiltonian in the Schrödinger picture under the rotating wave approximation is H̄ = H_0 + H̄_1 = ħω_0|e⟩⟨e| − ħΩ e^(−iω_L t)|e⟩⟨g| − ħΩ* e^(iω_L t)|g⟩⟨e|. References Atomic, molecular, and optical physics Chemical physics
Rotating-wave approximation
[ "Physics", "Chemistry" ]
852
[ "Applied and interdisciplinary physics", " molecular", "nan", "Atomic", "Chemical physics", " and optical physics" ]
2,220,218
https://en.wikipedia.org/wiki/Kt/V
In medicine, Kt/V is a number used to quantify hemodialysis and peritoneal dialysis treatment adequacy. K – dialyzer clearance of urea t – dialysis time V – volume of distribution of urea, approximately equal to patient's total body water In the context of hemodialysis, Kt/V is a pseudo-dimensionless number; it is dependent on the pre- and post-dialysis concentration (see below). It is not the product of K and t divided by V, as would be the case in a true dimensionless number. In peritoneal dialysis, it isn't dimensionless at all. It was developed by Frank Gotch and John Sargent as a way of measuring the dose of dialysis when they analyzed the data from the National Cooperative Dialysis Study. In hemodialysis the US National Kidney Foundation Kt/V target is ≥ 1.3, so that one can be sure that the delivered dose is at least 1.2. In peritoneal dialysis the target is ≥ 1.7/week. Despite the name, Kt/V is quite different from standardized Kt/V. Rationale for Kt/V as a marker of dialysis adequacy K (clearance) multiplied by t (time) is a volume (since mL/min × min = mL, or L/h × h = L), and (K × t) can be thought of as the mL or L of fluid (blood in this case) cleared of urea (or any other solute) during the course of a single treatment. V also is a volume, expressed in mL or L. So the ratio of K × t / V is a so-called "dimensionless ratio" and can be thought of as a multiple of the volume of plasma cleared of urea divided by the distribution volume of urea. When Kt/V = 1.0, a volume of blood equal to the distribution volume of urea has been completely cleared of urea. The relationship between Kt/V and the concentration of urea C at the end of dialysis can be derived from the first-order differential equation that describes exponential decay and models the clearance of any substance from the body where the concentration of that substance decreases in an exponential fashion: V dC/dt = −K C, where C is the concentration [mol/m3], t is the time [s], K is the clearance [m3/s], and V is the volume of distribution [m3]. From the above definitions it follows that dC/dt is the first derivative of concentration with respect to time, i.e. the change in concentration with time. This equation is separable and can be integrated (assuming K and V are constant) as follows: ∫ dC/C = −(K/V) ∫ dt. After integration, ln C = −(K/V)t + c, where c is the constant of integration. If one takes the antilog of the equation the result is: C = e^(−(K/V)t + c), where e is the base of the natural logarithm. This can be written as: C = C0 e^(−Kt/V), where C0 is the concentration at the beginning of dialysis [mmol/L] or [mol/m3]. The above equation can also be written as Kt/V = ln(C0/C). Normally we measure the postdialysis serum urea nitrogen concentration C and compare this with the initial or predialysis level C0. The session length or time is t and this is measured by the clock. The dialyzer clearance K is usually estimated, based on the urea transfer ability of the dialyzer (a function of its size and membrane permeability), the blood flow rate, and the dialysate flow rate. In some dialysis machines, the urea clearance during dialysis is estimated by testing the ability of the dialyzer to remove a small salt load that is added to the dialysate during dialysis. Relation to URR The URR or urea reduction ratio is simply the fractional reduction of urea during dialysis. So by definition, URR = 1 − C/C0, and hence 1 − URR = C/C0. So by algebra, substituting into the equation above, since Kt/V = −ln(C/C0), we get: Kt/V = −ln(1 − URR). Sample calculation Patient has a mass of 70 kg (154 lb) and gets a hemodialysis treatment that lasts 4 hours where the urea clearance is 215 mL/min.
K = 215 mL/min t = 4.0 hours = 240 min V = 70 kg × 0.6 L of water/kg of body mass = 42 L = 42,000 mL Therefore: Kt/V = (215 mL/min × 240 min) / 42,000 mL = 1.23 This means that if you dialyze a patient to a Kt/V of 1.23, and measure the postdialysis and predialysis urea nitrogen levels in the blood, then calculate the URR, then −ln(1−URR) should be about 1.23. The math does not quite work out, and more complicated relationships have been worked out to account for the fluid removal (ultrafiltration) during dialysis as well as urea generation (see urea reduction ratio). Nevertheless, the URR and Kt/V are so closely related mathematically that their predictive power has been shown to be no different in terms of prediction of patient outcomes in observational studies. Post-dialysis rebound The above analysis assumes that urea is removed from a single compartment during dialysis. In fact, this Kt/V is usually called the "single-pool" Kt/V. Due to the multiple compartments in the human body, a significant concentration rebound occurs following hemodialysis. Usually rebound lowers the Kt/V by about 15%. The amount of rebound depends on the rate of dialysis (K) in relation to the size of the patient (V). Equations have been devised to predict the amount of rebound based on the ratio of K/V, but usually this is not necessary in clinical practice. One can use such equations to calculate an "equilibrated Kt/V" or a "double-pool Kt/V", and some think that this should be used as a measure of dialysis adequacy, but this is not widely done in the United States, and the KDOQI guidelines (see below) recommend using the regular single-pool Kt/V for simplicity. Peritoneal dialysis Kt/V (in the context of peritoneal dialysis) was developed by Michael J. Lysaght in a series of articles on peritoneal dialysis. The steady-state solution of a simplified mass transfer equation that is used to describe the mass exchange over a semi-permeable membrane and models peritoneal dialysis is CB = ṁ/KD, where CB is the concentration in the blood [ mol/m3 ], KD is the clearance [ m3/s ], and ṁ is the urea mass generation [ mol/s ]. This can also be written as: KD = ṁ/CB. The mass generation (of urea), in steady state, can be expressed as the mass (of urea) in the effluent per time: ṁ = (CE · VE)/t, where CE is the concentration of urea in effluent [ mol/m3 ], VE is the volume of effluent [ m3 ], and t is the time [ s ]. Lysaght, motivated by the above equations, defined the value KD: KD = (CE · VE)/(CB · t). Lysaght uses "ml/min" for the clearance. In order to convert the above clearance (which is in m3/s) to ml/min one has to multiply by 60 × 1000 × 1000. Once KD is defined the following equation is used to calculate Kt/V: Kt/V = (KD · t · 7/3)/V, where V is the volume of distribution. It has to be in litres (L), as the equation is not really non-dimensional. The 7/3 is used to adjust the Kt/V value so it can be compared to the Kt/V for hemodialysis, which is typically done thrice weekly in the USA. Weekly Kt/V To calculate the weekly Kt/V (for peritoneal dialysis) KD has to be in litres/day. Weekly Kt/V is defined by the following equation: weekly Kt/V = (KD [L/day] × 7 days)/V. Sample calculation Assume measured values such that, by the equations above, KD works out to 8.00 mL/min, or 11.52 L/d. The Kt/V and the weekly Kt/V by the above equations are thus: 0.45978 and 1.9863. A simplified analysis of Kt/V in PD On a practical level, in peritoneal dialysis the calculation of Kt/V is often relatively easy because the fluid drained is usually close to 100% saturated with urea, i.e. the dialysate has equilibrated with the body.
Therefore, the daily amount of plasma cleared is simply the drain volume divided by an estimate of the patient's volume of distribution. As an example, if someone is infusing four 2-liter exchanges a day, and drains out a total of 9 liters per day, then they drain 9 × 7 = 63 liters per week. If the patient has an estimated total body water volume V of about 35 liters, then the weekly Kt/V would be 63/35, or about 1.8. The above calculation is limited by the fact that the serum concentration of urea changes during dialysis; because it does not take the urea level in the dialysate or serum into account, the result cannot strictly be labelled a urea clearance. In automated PD this change cannot be ignored; thus, blood samples are usually measured at some time point in the day and assumed to be representative of an average value. The clearance is then calculated using this measurement. Reason for adoption Kt/V has been widely adopted because it was correlated with survival. Before Kt/V nephrologists measured the serum urea concentration (specifically the time-averaged concentration of urea (TAC of urea)), which was found not to be correlated with survival (due to its strong dependence on protein intake) and thus deemed an unreliable marker of dialysis adequacy. Criticisms/disadvantages of Kt/V It is complex and tedious to calculate. Many nephrologists have difficulty understanding it. Urea is not associated with toxicity. Kt/V only measures a change in the concentration of urea and implicitly assumes the clearance of urea is comparable to other toxins. (It ignores molecules larger than urea having diffusion-limited transport – so-called middle molecules.) Kt/V does not take into account the role of ultrafiltration. It ignores the mass transfer between body compartments and across the plasma membrane (i.e. intracellular to extracellular transport), which has been shown to be important for the clearance of molecules such as phosphate. Practical use of Kt/V requires adjustment for rebound of the urea concentration due to the multi-compartmental nature of the body. Kt/V may disadvantage women and smaller patients in terms of the amount of dialysis received. Normal kidney function may be modeled as optimal glomerular filtration rate, or GFR. GFR is usually normalized in people to body surface area. A man and a woman of similar body surface areas will have markedly different levels of total body water (which corresponds to V). Also, smaller people of either sex will have markedly lower levels of V, but only slightly lower levels of body surface area. For this reason, any dialysis dosing system that is based on V may tend to underdose smaller patients and women. Some investigators have proposed dosing based on surface area (S) instead of V, but clinicians usually measure the URR and then calculate Kt/V. One can "adjust" the Kt/V to calculate a "surface-area-normalized" or "SAN"-Kt/V as well as a "SAN"-standard Kt/V. This puts a wrapper around Kt/V and normalizes it to body surface area. Importance of total weekly dialysis time and frequency Kt/V has been criticized because quite high levels can be achieved, particularly in smaller patients, during relatively short dialysis sessions. This is especially true for small people, where "adequate" levels of Kt/V often can be achieved over 2 to 2.5 hours. One important part of dialysis adequacy has to do with adequate removal of salt and water, and also of solutes other than urea, especially larger molecular weight substances and phosphorus.
Phosphorus and solutes of similar molecular weight remain difficult to remove by filtration of any degree. A number of studies suggest that a longer amount of time on dialysis, or more frequent dialysis sessions, lead to better results. There have been various alternative methods of measuring dialysis adequacy, most of which have proposed some number based on Kt/V and the number of dialysis sessions per week, e.g., the standardized Kt/V, or simply the number of dialysis sessions per week squared multiplied by the hours on dialysis per session, e.g. the hemodialysis product by Scribner and Oreopoulos. It is not practical to give long dialysis sessions (greater than 4.5 hours) thrice a week in a dialysis center during the day. Longer sessions can be practically delivered if dialysis is done at home. Most experience has been gained with such long dialysis sessions given at night. Some centers are offering every-other-night or thrice a week nocturnal dialysis. The benefits of giving more frequent dialysis sessions are also an area of active study, and new easy-to-use machines are permitting easier use of home dialysis, where 2–3+ hour sessions can be given 4–7 days per week. Kt/V minimums and targets for hemodialysis One question in terms of Kt/V is, how much is enough? The answer has been based on observational studies, the NIH-funded HEMO trial done in the United States, and also on kinetic analysis. For a US perspective, see the KDOQI clinical practice guidelines, and for a United Kingdom perspective see the U.K. Renal Association clinical practice guidelines. According to the US guidelines, for thrice a week dialysis a Kt/V (without rebound) should be 1.2 at a minimum, with a target value of 1.4 (15% above the minimum value). However, there is suggestive evidence that larger amounts may need to be given to women, smaller patients, malnourished patients, and patients with clinical problems. The recommended minimum Kt/V value changes depending on how many sessions per week are given, and is reduced for patients who have a substantial degree of residual renal function. Kt/V minimums and targets for peritoneal dialysis For the United States, the minimum weekly Kt/V target used to be 2.0. This was lowered to 1.7 in view of the results of a large randomized trial done in Mexico, the ADEMEX trial, and also from reanalysis of previous observational study results from the perspective of residual kidney function. The United Kingdom guidance is still in draft form. References External links Hemodialysis Hemodialysis Dose and Adequacy – a description of URR and Kt/V from the Kidney and Urologic Diseases Clearinghouse. Kt/V and the adequacy of hemodialysis – UpToDate.com Peritoneal dialysis Advisory on Peritoneal Dialysis – American Association of Kidney Patients Peritoneal Dialysis Dose and Adequacy – a description of URR and Kt/V from the Kidney and Urologic Diseases Clearinghouse. Calculators spKt/V, eKt/V, URR, nPCR, GNRI etc. dialysis calculation – hdtool.net. free Kt/V calculators, single pool and equilibrated HD, PD, no login needed, site used by dozens of dialysis centers around the world for over 10 years – kt-v.net Web/javascript program that does formal 2-pool urea kinetics in multiple patients – ureakinetics.org Kt/V calculator – medindia.com Kt/V – HDCN Diagnostic nephrology Laboratory medicine techniques Renal dialysis
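As a minimal sketch of the arithmetic described in this article (function names are mine, not from any clinical guideline or library), single-pool Kt/V can be computed directly from K, t, and V, or from the URR via Kt/V = −ln(1 − URR), and the simplified weekly PD value from the drain volume:

```python
# Minimal sketch of the Kt/V arithmetic described above. Function names are
# illustrative, not from any clinical guideline or software package.
import math

def ktv_direct(k_ml_min: float, t_min: float, v_ml: float) -> float:
    """Single-pool Kt/V from clearance K, session time t, and urea volume V."""
    return k_ml_min * t_min / v_ml

def ktv_from_urr(urr: float) -> float:
    """Single-pool Kt/V from the urea reduction ratio: Kt/V = -ln(1 - URR)."""
    return -math.log(1.0 - urr)

def weekly_ktv_pd(drain_l_per_day: float, v_l: float) -> float:
    """Simplified weekly PD Kt/V: weekly drain volume over urea volume V."""
    return drain_l_per_day * 7.0 / v_l

# The article's hemodialysis example: K = 215 mL/min, t = 240 min, V = 42 L.
print(round(ktv_direct(215, 240, 42_000), 2))   # -> 1.23
# The article's PD example: 9 L drained per day, V = 35 L.
print(round(weekly_ktv_pd(9, 35), 1))           # -> 1.8
```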
Kt/V
[ "Chemistry" ]
3,418
[ "Laboratory medicine techniques" ]
2,220,328
https://en.wikipedia.org/wiki/Calspan
Calspan Corporation is a science and technology company founded in 1943 as part of the Research Laboratory of the Curtiss-Wright Airplane Division at Buffalo, New York. Calspan consists of four primary operating units: Flight Research, Transportation Research, Aerospace Sciences Transonic Wind Tunnel, and Crash Investigations. The company's main facility is in Cheektowaga, New York, while it has other facilities such as the Flight Research Center in Niagara Falls, New York, and remote flight test operations at Edwards Air Force Base, California, and Patuxent River, Maryland. Calspan also has thirteen field offices throughout the Eastern United States which perform accident investigations on behalf of the United States Department of Transportation. Calspan was acquired by TransDigm Group in 2023. History The facility was started as a private defense contractor on the home front of World War II. As a part of its tax planning in the wake of the war effort, Curtiss-Wright donated the facility to Cornell University to operate "as a public trust." Seven other east coast aircraft companies also donated $675,000 to provide working capital for the lab. The lab operated under the name Cornell Aeronautical Laboratory from 1946 until 1972. During this same time, Cornell formed a new Graduate School of Aerospace Engineering on its Ithaca, New York campus. During the late 1960s and early 1970s, universities came under criticism for conducting war-related research, particularly as the Vietnam War became unpopular, and Cornell University tried to sever its ties. Similar laboratories at other colleges, such as the Lincoln Laboratory and Draper Laboratory at MIT, came under similar criticism, but some labs, such as Lincoln, retained their collegiate ties. Cornell accepted a $25 million offer from EDP Technology, Inc. to purchase the lab in 1968. However, a group of lab employees who had made a competing $15 million offer organized a lawsuit to block the sale. In May 1971, New York's highest court ruled that Cornell had the right to sell the lab. At the conclusion of the suit, EDP Technology could not raise the money, and in 1972, Cornell reorganized the lab as the for-profit "Calspan Corporation" and then sold its stock in Calspan to the public. This began a series of corporate owners that has included Arvin Industries, Space Industries International, Veridian Corporation and General Dynamics. In 2005, Calspan Corporation was returned to independent ownership when a local management group purchased the Aeronautics and Transportation Testing Groups of the Western New York operation from General Dynamics. Under the name of Cornell Aeronautical Laboratory, the lab's inventions included the first crash test dummy in 1948, the automotive seat belt in 1951, the first mobile field unit with Doppler weather radar for weather-tracking in 1956, the first accurate airborne simulation of another aircraft (the North American X-15) in 1960, the first successful demonstration of an automatic terrain-following radar system in 1964, the first use of a laser beam to successfully measure gas density in 1966, the first independent HYGE sled test facility to evaluate automotive restraint systems in 1967, the mytron, an instrument for research on neuromuscular behavior and disorders, in 1969, and the prototype for the Federal Bureau of Investigation's fingerprint reading system in 1972. CAL served as an "honest broker" making objective comparisons of competing plans to build military hardware.
It also conducted classified counter-insurgency research in Thailand for the Defense Department. By the time of its divestiture, CAL had 1,600 employees. Aerospace components manufacturer TransDigm Group acquired Calspan for $725 million in May 2023. Airplanes Calspan owns and operates, or has owned and operated, a fleet of advanced experimental aircraft, including the X-62, the Convair NC-131H TIFS, four Learjets, a Gulfstream G-III, a SAAB 340, and a Hawker-Beechcraft Bonanza aerobatic airplane. References External links Further reading Cornell Research Has Great Freedom. Aviation Week, June 3, 1957, v. 66, no. 22, pp. 290–303. 2023 mergers and acquisitions Aerospace engineering organizations Automotive engineering Safety engineering Aeronautical Laboratory Edwards Air Force Base Mojave Air and Space Port Laboratories in the United States University and college laboratories in the United States
Calspan
[ "Engineering" ]
873
[ "Systems engineering", "Aerospace engineering organizations", "Aeronautics organizations", "Safety engineering", "Automotive engineering", "Mechanical engineering by discipline", "Aerospace engineering" ]
2,220,565
https://en.wikipedia.org/wiki/Rudder%20ratio
Rudder ratio refers to a value that is monitored by the computerized flight control systems in modern aircraft. The ratio relates the aircraft's airspeed to the rudder deflection setting in effect at the time. As an aircraft accelerates, the rudder deflection available for a given rudder pedal input must be reduced proportionately. This automatic reduction is needed because a fully deflected rudder in high-speed flight would cause the aircraft to yaw sharply and violently, or swing from side to side, leading to loss of control and to damage to the rudder, tail and other structures, potentially even causing the aircraft to crash. See also American Airlines Flight 587 References Aerospace engineering Engineering ratios
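As a minimal illustrative sketch of the idea (not any actual avionics implementation; the schedule, speeds, and angles below are invented for illustration), the commanded deflection can be scaled by an airspeed-dependent limit:

```python
# Illustrative sketch of an airspeed-scheduled rudder deflection limit.
# The linear schedule and all numbers are invented, not taken from any
# real flight control system.

MAX_DEFLECTION_DEG = 30.0   # full rudder authority at low speed
V_FULL_AUTHORITY = 135.0    # knots: below this, full deflection is allowed
V_MIN_AUTHORITY = 350.0     # knots: at/above this, only the floor remains
FLOOR_DEG = 6.0             # minimum deflection retained at high speed

def rudder_limit_deg(airspeed_kt: float) -> float:
    """Maximum rudder deflection allowed at a given airspeed (linear schedule)."""
    if airspeed_kt <= V_FULL_AUTHORITY:
        return MAX_DEFLECTION_DEG
    if airspeed_kt >= V_MIN_AUTHORITY:
        return FLOOR_DEG
    frac = (airspeed_kt - V_FULL_AUTHORITY) / (V_MIN_AUTHORITY - V_FULL_AUTHORITY)
    return MAX_DEFLECTION_DEG + frac * (FLOOR_DEG - MAX_DEFLECTION_DEG)

def commanded_deflection_deg(pedal_fraction: float, airspeed_kt: float) -> float:
    """Scale pedal input (-1..1) by the current airspeed-dependent limit."""
    return max(-1.0, min(1.0, pedal_fraction)) * rudder_limit_deg(airspeed_kt)

print(commanded_deflection_deg(1.0, 120))  # full 30.0 deg at low speed
print(commanded_deflection_deg(1.0, 300))  # reduced authority at high speed
```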
Rudder ratio
[ "Mathematics", "Engineering" ]
151
[ "Aerospace engineering", "Quantity", "Metrics", "Engineering ratios" ]
2,220,582
https://en.wikipedia.org/wiki/Cooperative%20board%20game
Cooperative board games are board games in which players work together to achieve a common goal rather than competing against each other. Either the players win the game by reaching a predetermined objective, or all players lose the game, often by not reaching the objective before a certain event ends the game. Definition In cooperative board games, all players win or lose the game together. These games should not be confused with noncompetitive games, such as The Ungame, which simply do not have victory conditions or any set objective to complete. While adventure board games with role playing and dungeon crawl elements like Gloomhaven may be included, pure tabletop role-playing games like Descent: Journeys in the Dark are excluded as they have potentially infinite victory conditions with persistent player characters. Furthermore, games in which players compete together in two or more groups, teams or partnerships (such as Axis & Allies, and card games like Bridge and Spades) fall outside of this definition, even though there is temporary cooperation between some of the players. Multiplayer conflict games like Diplomacy may also feature temporary cooperation during the course of the game. These games are not considered cooperative though, because players are eliminated and ultimately only one individual can win. History and development 20th century Early cooperative games were used by parents and teachers in educational settings. In 1903 Elizabeth Magie patented "The Landlord's Game", inspired by the principles and philosophy of Henry George. Designed as a protest against the monopolists of the time, it is considered to be the game from which Monopoly was largely derived. Magie included two rule-sets: the Monopoly rules, in which players all vied to accrue the largest revenue and crush their opponents, and a co-operative set. Her dualistic approach was a teaching tool meant to demonstrate that the co-operative rules were morally superior. In 1954, a board game version of Beat the Clock, a game show, was released. In 1956, the Lowell Toy Manufacturing Corporation of New York City released a board game version of I've Got a Secret, a panel show, featuring host Garry Moore on the cover of the box. Teacher Jim Deacove published the cooperative game Together in 1971. He founded Family Pastimes in 1972 in Perth, Ontario, focusing exclusively on cooperative games. Family Pastimes has published numerous cooperative games, having released over 100 board games including the popular game of Max the Cat. The company also holds the trademark for the phrase, "A co-operative game". Ken Kolsbun and Jann Kolsbun founded Animal Town in 1976 in California. They invented cooperative games such as Save the Whales, Nectar Collector, and Dam Builders. Animal Town was renamed as Child and Nature in 2003. During the 1980s, several cooperative games like The Wreck of the B.S.M. Pandora, Time Tripper, and Arkham Horror were published. In the Sherlock Holmes: Consulting Detective series of games published in the 1980s, players are presented with a mystery to solve, and they trace the evidence together. Many cooperative adventure board games with role-playing elements, such as Citadel of Blood, HeroQuest, Wizards, Advanced HeroQuest, and Deathmaze, were released in this decade. Minion Hunter is a board game originally released in 1992 by Game Designers' Workshop in conjunction with their Dark Conspiracy role-playing game.
The game is designed to encourage the players to work together to stall and/or defeat the plans of four monster races as a primary goal, with the individual advancement of the players as a secondary objective. Star Trek: The Next Generation Interactive VCR Board Game is set in the Star Trek universe and was released in 1993. It utilizes a video tape that runs constantly while users play the board game portion. Events on the video tape combine with board game play to determine whether users win or lose the game. The video itself was directed by Les Landau and contains original footage filmed on the actual Star Trek: The Next Generation sets at Paramount Studios. Warhammer Quest is a fantasy dungeon, role-playing adventure board game released by Games Workshop in 1995 as the successor to HeroQuest and Advanced HeroQuest, set in its fictional Warhammer Fantasy world. 21st century In 2000, Reiner Knizia published Lord of the Rings, which influenced a number of subsequent titles, including Shadows over Camelot. Pandemic, designed by Matt Leacock, was first published by Z-Man Games in the United States in 2008. Space Alert is a cooperative survival designer board game created by Vlaada Chvátil in 2008. Players assume the roles of space explorers on a mission to survey the galaxy. The crew is evaluated on teamwork and how they deal with problems that arise on their journey. Other cooperative games of the last 10 years include Star Trek: Expeditions, Sentinels of the Multiverse, Freedom: The Underground Railroad, Mechs vs. Minions, Robinson Crusoe: Adventures on the Cursed Island, The 7th Continent, Zombicide, Spirit Island, and Hanabi, which won the Spiel des Jahres award in 2013. Gloomhaven is a cooperative board game for 1 to 4 players designed by Isaac Childres and published by Cephalofair Games in 2017. It is a campaign-based dungeon crawl game with a branching narrative campaign, 95 unique playable scenarios, 17 playable classes, and more than 1,500 cards in a box which weighs almost 10 kg (22 lb). Gloomhaven was selected by both a jury and fans as the Origins Game Fair Best Board Game of 2018. As of early August 2018, it had sold about 120,000 copies. Characteristics In José P. Zagal, Jochen Rick, and Idris Hsi's "Collaborative games: Lessons learned from board games", the lessons the researchers learned highlight what makes a good cooperative board game. First, the game needs to point out the folly of being competitive by allowing players to make decisions that benefit themselves rather than the whole group. Second, each player should not need the input of the rest of the group when making a decision. Third, players need to be able to identify what actions had benefits or consequences. Fourth, the game should reward selfless players by giving players unique roles or traits. These researchers also point out challenges in designing collaborative games, due to pitfalls that must be overcome: the game degenerating into a single player deciding the actions for everyone; keeping players invested in the end result, so that winning the game is satisfying; and, for repeat play, varying the game experience so that the challenge evolves. Game as the opponent Participants typically play against the game. Cooperative board games generally involve players joining forces against the game itself, and can be played without any player in the role of the opposition or gamemaster. In Pandemic, for example, players work together to stop and cure different strains of diseases.
Also, in some cooperative games, players actually cooperate with the opposing forces in the game. For example, in Max the Cat, players are mice who keep an aggressive cat at bay by offering him milk and other appeasements. In this way, all participants in the conflict scenario are fulfilled and the resolution is truly cooperative or "win-win". Randomness In many contemporary cooperative games, randomizing devices help in varying the game experience over multiple plays. Dice can be rolled, cards drawn each turn from a shuffled deck, or sections of a modular board revealed to generate random objectives, events and challenges. These provide the conflict or challenge in the game, and make it progressively more difficult for the players. For example, in Save the Whales, players work together to protect whales from the challenges inherent in the game setting—radioactive waste, commercial whaling, etc. Cooperation and its variations Most cooperative games bestow different abilities or responsibilities upon the players, incentivizing cooperation. Some cooperative games add a layer of intrigue by giving players personal win conditions. In Dead of Winter: A Cross Roads Game, a zombie apocalypse game, players must achieve both the communal victory condition and a personal objective in order to win. Gloomhaven, described above, likewise gives each player a personal objective. Opposing teams In some games, there are opposing teams whose members cooperate with one another, working together against the other teams. Such teams may have equal or unequal numbers of players, in some cases taking the format of "one versus all". For example, in Scotland Yard and The Fury of Dracula, one player controls the "enemy", while the other players cooperate to locate and defeat said enemy. Traitor A traitor game or semi-cooperative game can be seen as a cooperative game with a betrayal mechanism. While, as in a standard cooperative game, the majority of players work towards a common goal, one or more players are secretly assigned to be traitors who win if the other players fail. Determining the identity of traitors is often central to such games. For example, in Battlestar Galactica: The Board Game, players secretly designated as Cylons can usually be more effective if the human players are unaware who they are, and can profit from the human players suspecting each other. Other games, like Betrayal at House on the Hill, start out fully cooperative, but assign a player to be the villain mid-game. See also Cooperative game theory (in mathematical game theory) Cooperative gameplay (in video games) Eurogame Amerigame Board wargame Party game References Cooperative games
Cooperative board game
[ "Mathematics" ]
1,936
[ "Game theory", "Cooperative games" ]
2,220,692
https://en.wikipedia.org/wiki/Wick%20effect
The wick effect is the alleged partial or total destruction of a human body by fire, when the clothing of the victim soaks up melted human fat and acts like the wick of a candle. It is a phenomenon that has been found to occur under certain conditions. Details The wick effect theory says a person is kept aflame through their own fats after being ignited, accidentally or otherwise. The clothed human body acts like an "inside-out" candle, with the fuel source (human fat) inside and the wick (the clothing of the victim) outside. Hence there is a continuous supply of fuel in the form of melting fat seeping into the victim's clothing. Fat contains a large amount of energy due to the presence of long hydrocarbon chains. Examples Mary Reeser case Mary Reeser (1884–1951) of St. Petersburg, Florida, was most likely a victim of the wick effect. It was suspected that she had accidentally ignited herself with a cigarette. The fat which over time had been absorbed by her clothing likely acted as fuel for the fire. At the scene, investigators found melted fat in the rug near Mary's body. 1963 Leeds case An investigation of a 1963 case in Leeds included an experiment demonstrating the wick effect. A small portion of human fat was wrapped in cloth to simulate clothing. A Bunsen burner flame was then applied to the 'candle'. Due to the high water content of human fat, the flame had to be held on the 'candle' for over a minute before it would catch fire: "One end of the candle was ignited by a Bunsen flame, the fat catching fire after about a minute. Although the Bunsen was removed at this point, combustion of the fat proceeded slowly along the length of the roll, with a smoky yellow flame and much production of soot, the entire roll being consumed after about one hour." This gives some indication of the slow speed at which the wick effect proceeds. 1991 Oregon murder In February 1991, in woodland near Medford, Oregon, USA, two hikers came across the burning body of a female adult, lying face down in fallen leaves. They alerted the officials, and a local deputy sheriff soon arrived. She had been stabbed several times in the upper regions of the chest and back. Both arms were spread outwards from the torso. The lower legs and the surface of the neck showed signs of fire damage. The soft tissues of the right arm, torso and upper legs were consumed. The majority of bones of these parts retained their integrity, although friability was increased. Between the victim's mid-chest and knees the fleshy parts of the body were mostly destroyed. Crime scene personnel reported that the pelvis and spine were "not recoverable", having been reduced to a grey powder. Her killer had soaked the clothes and corpse in nearly a pint of barbecue starter fluid and set her on fire. In the well-oxygenated outdoor environment, this combination of circumstances—an immobile and clothed body with a high fat-to-muscle ratio, accelerant (lighter fluid), and artificial ignition—made conditions prime for the wick effect to occur. The murderer was arrested and made a full confession. He claimed to have set the body alight some 13 hours before it was discovered. 1998 experiment A larger scale experiment conducted for the BBC television programme Q.E.D. involved a dead pig's body being wrapped in a blanket and placed in a furnished room. The blanket was lit with the aid of a small amount of petrol. The body took some time to ignite and burned at a very high temperature with low flames. The heat collected at the top of the room and melted a television. 
However, the flames caused very little damage to the surroundings, and the body burned for a number of hours before it was extinguished and examined. On examination it was observed that the flesh and bones in the burnt portion had been destroyed. 2006 Geneva case In October 2006, the body of a man was discovered at home in Geneva, almost completely incinerated between the mid-chest and the knees, most probably due to a heart attack while smoking, followed by the wick effect. The chair containing the body was mostly consumed, but other objects in the room were almost undamaged, albeit covered with a brown oily or greasy coating. The source of the fire was most likely a cigarette or cigar. The man's dog also died, in another room of the man's apartment; this was attributed to carbon monoxide poisoning. 2010 Galway case In December 2010, the almost completely burnt body of a 76-year-old man was found alongside an open fireplace in his home in Clareview Park at Ballybane in the Irish city of Galway. The fire investigators concluded that no accelerants were used and that the open fireplace was not the cause of the fire. The coroner in the case could not identify the cause of the death due to extensive internal organ damage and concluded that "this [case] fits into the category of spontaneous human combustion, for which there is no adequate explanation". The body of the man, Michael Faherty, was found in the living room of his home on 22 December 2010. The scene was searched by forensic experts from the Gardaí and the fire service, and a post-mortem was carried out by pathologist Grace Callagy. Callagy noted that Faherty had suffered from Type 2 diabetes and hypertension, but had not died from heart failure. Callagy concluded that the "extensive nature of the burns sustained precludes determining the precise cause of death". In September 2011, the west Galway coroner informed an inquiry into the death that he had searched the medical literature and referred to Professor Bernard Knight's book on forensic pathology, which states that a high number of alleged incidents of spontaneous human combustion have taken place near an open fireplace or chimney. Benjamin Radford, deputy editor of the science magazine Skeptical Inquirer, questioned why the coroner had "conclusively ruled out" other possible explanations. References Fire Spontaneous human combustion
Wick effect
[ "Chemistry" ]
1,247
[ "Combustion", "Spontaneous human combustion", "Fire" ]
2,220,844
https://en.wikipedia.org/wiki/Hide%20%28unit%29
The hide was an English unit of land measurement originally intended to represent the amount of land sufficient to support a household. The Anglo-Saxon hide commonly appeared as 120 acres of arable land, but it probably represented a much smaller holding before 1066. It was a measure of value and tax assessment, including obligations for food-rent (feorm), maintenance and repair of bridges and fortifications, manpower for the army (fyrd), and (eventually) the land tax. The hide's method of calculation is now obscure: different properties with the same hidage could vary greatly in extent even in the same county. Following the Norman Conquest of England, the hidage assessments were recorded in the Domesday Book of 1086, and there was a tendency for land producing £1 of income per year to be assessed at 1 hide. The Norman kings continued to use the unit for their tax assessments until the end of the 12th century. The hide was divided into four yardlands or virgates. It was hence nominally equivalent in area to a carucate, a unit used in the Danelaw. Original meaning The Anglo-Saxon word for a hide was hid (or its synonym hiwisc). Both words are believed to be derived from the same root hiwan, which meant "family". Bede in his Ecclesiastical History (c. 731) describes the extent of a territory by the number of families which it supported, as (for instance), in Latin, terra x familiarum, meaning 'a territory of ten families'. In the Anglo-Saxon version of the same work hid or hiwan is used in place of terra ... familiarum. Other documents of the period show the same equivalence, and it is clear that the word hide originally signified land sufficient for the support of a peasant and his household or of a 'family', which may have had an extended meaning. It is uncertain whether it meant the immediate family or a more extensive group. Charles-Edwards suggests that in its early usage it referred to the land of one family, worked by one plough, and that ownership of a hide conferred the status of a freeman, to whom Stenton referred as "the independent master of a peasant household". Holy Roman usage Hides of land formed the basis for tax levies used to equip free warriors (miles) of the Holy Roman Empire. In 807 it was specified that in the region west of the Seine, for example, a vassal who held four or five hides was responsible for showing up to a muster in person, fully equipped for war. Three men who each possessed one hide, though, were merely grouped such that two of them were responsible for equipping the third, who would go to war in their name. Those holding half-hides were responsible for readying one man for every group of six. This came about as a way of ensuring that the liege took to the field with a fully equipped and provisioned force. In Anglo-Saxon England In early Anglo-Saxon England, the hide was used as the basis for assessing the amount of food rent (known as feorm) due from a village or estate, and it became the unit on which all public obligations were assessed, including in particular the maintenance and repair of bridges and fortifications and the provision of troops for manning the defences of a town or for the defence force known as the 'fyrd'. For instance, at one period, five hides were expected to provide one fully armed soldier in the king's service, and one man from every hide was to be liable to do garrison duty for the burhs and to help in their initial construction and upkeep. 
A land tax known as geld was first levied in 990 and this became known as the Danegeld, as it was used to buy off the Danes who were then raiding and invading the country. It was raised again for the same purpose on several occasions. The already existing system of assessment of land in hides was utilised to raise the geld, which was levied at a stated rate per hide (e.g. two shillings per hide). Subsequently the same system was used for general taxation and the geld was raised as required. The hide was a measure of value rather than a measurement of area, but the logic of its assessment is not easy to understand, especially as assessments were changed from time to time and not always consistently. By the end of the Anglo-Saxon period, it was a measure of 'the taxable worth of an area of land', but it had no fixed relationship to its area, the number of ploughteams working on it, or its population; nor was it limited to the arable land on an estate. According to Bailey, "It is a commonplace that the hide in 1086 had a very variable extent on the ground; the old concept of 120 acres cannot be sustained." Many details of the development of the system during the 350 years which elapsed between the time of Bede and the Domesday Book remain obscure. According to Sir Frank Stenton, "Despite the work of many great scholars the hide of early English texts remains a term of elusive meaning." The fact that assessments consistently tended to be made in units of 5 hides or multiples of 5 hides goes to show that we are not speaking of fixed or even approximate acreages and this applies not only to the 11th century but to charters of the 7th and 8th centuries. Nevertheless, the hide became the basis of an artificial system of assessment of land for purposes of taxation, which lasted for a long period. The most consistent aspect of the hide is described as follows by Sally Harvey (referring particularly to Domesday Book): "Both Maitland and Vinogradoff long ago noticed that there was a general tendency throughout Domesday for a hide of land to be worth £1, or, put another way, for land producing £1 of income to be assessed at one hide." A number of early documents referring to hides have survived, but these can only be seen as steps in the development of the concept of the hide and do not enable us to see the full story. The document known as the Tribal Hidage is a very early list thought to date possibly from the 7th century, but known only from a later and unreliable manuscript. It is a list of tribes and small kingdoms owing tribute to an overlord and of the proportionate liability or quota imposed on each of them. This is expressed in terms of hides, though we have no details as to how these were arrived at nor how they were converted into a cash liability. The Burghal Hidage (early 10th century) is a list of boroughs giving the hide assessments of neighbouring districts which were liable to contribute to the defence of the borough, each contributing to the maintenance and manning of the fortifications in proportion to the number of hides for which they answered. The County Hidage (early 11th century) lists the total number of hides to be assessed on each county and it seems that by this time at least the total number of hides in a given area was imposed from above. Each county was assigned a round number of hides, for which it would be required to answer. For instance, at an early date in the 11th century, Northamptonshire was assigned 3,200 hides, while Staffordshire was assigned only 500. 
This number was then divided up between the hundreds in the county. Theoretically there were 100 hides in each hundred, but this proportion was often not maintained, for example because of changes in the hundreds or in the estates comprising them, or because assessments were altered when the actual cash liability was perceived as being too high or too low, or for other reasons now unknown. The hides within each hundred were then divided between villages, estates or manors, usually in blocks or multiples of 5 hides, though this was not always maintained. Differences from the norm could result from estates being moved from one hundred to another, from adjustments to the size of an estate, or from alterations in the number of hides for which an estate should answer. Each local community had the task of deciding how its quota of hides should be divided between the lands held by that community, and different communities used different criteria, depending on the type of land held and on the way in which an individual's wealth was reckoned within that community; it follows that no single comprehensive definition is possible. After the Norman conquest The Norman kings, after the Norman Conquest, continued to use the system which they found in place. Geld was levied at intervals on the existing hidage assessments. In 1084, William I laid an exceptionally heavy geld of six shillings upon every hide. At the time the value of the hide was approximately twenty shillings a year, and the price of an ox was two shillings. Thus the holder of a hide had a tax burden equivalent to three of his oxen and close upon one-third of the annual value of his land. A more normal rate was 2 shillings on each hide. Domesday Book, recording the results of the survey made on the orders of William I in 1086, states in hides (or carucates or sulungs as the case might be) the assessed values of estates throughout the area covered by the survey. Usually it gives this information for 1086 and 1066, but some counties were different and only showed this information for one of those dates. By that time the assessments showed many anomalies. Many of the hide assessments on lands held by tenants-in-chief were reduced between 1066 and 1086 in order to effect an exemption from or reduction in tax; this again shows that the hide is a tax assessment, not an area of land. Sometimes, the assessment in hides is given both for the whole manor and for the demesne land (i.e. the lord's own land) included in it. Sally Harvey has suggested that the ploughland data in Domesday Book was intended to be used for a complete re-assessment but, if so, it was never actually made. The Pipe Rolls, where they are available, show that levies were based largely on the old assessments, though with some amendments and exemptions. The last recorded levy was for 1162–63 during the reign of Henry II, but the tax was not formally abolished, and Henry II thought of using it again between 1173 and 1175. The old assessments were used for a tax on land in 1193–94 to raise money for King Richard's ransom. Relationship to other terms A hide was usually made up of four virgates, although exceptionally Sussex had eight virgates to the hide. A similar measure was used in the northern Danelaw, known as a carucate, consisting of eight bovates, and Kent used a system based on a "sulung", consisting of four yokes, which was larger than the hide and on occasion treated as equivalent to two hides. 
These measures had a different origin, signifying the amount of land which could be cultivated by one plough team as opposed to a family holding, but all later became artificial fiscal assessments. In some counties in Domesday Book (e.g. Cambridgeshire), the hide is sometimes shown as consisting of 120 acres (30 acres to the virgate), but as Darby explains: "The acres are, of course, not units of area, but geld acres, i.e. units of assessment". In other words, this was a way of dividing the tax assessment on the hide between several owners of parts of the land assessed. The owner of land assessed at 40 notional (or 'fiscal') acres in a village assessed at 10 hides and paying geld of 2 shillings per hide would be responsible for one-third (40/120) of 2 shillings—that is, 8 pence—though his land might be considerably more or less than 40 modern statute acres in extent. The surname Huber (also anglicized as Hoover) is based on the equivalent German word Hube, a unit of land a farmer might own. Notes Citations General references Bailey, Keith, The Hidation of Buckinghamshire, in Records of Buckinghamshire, Vol. 32, 1990 (pp. 1–22) Charles-Edwards, T. M., Kinship, Status and the Origins of the Hide in Past & Present, Vol. 36, 1972 (pp. 3–33) Darby, Henry C., Domesday England, Cambridge University Press, 1977 Darby, Henry C., The Domesday Geography of Eastern England, Cambridge University Press, 1971 Delbrück, Hans, trans. Walter Renfroe Jr., History of the Art of War, Volume III: Medieval Warfare (Lincoln, NE: University of Nebraska Press, 1982) Faith, Rosamund J., The English Peasantry and the Growth of Lordship, London, 1997 Faith, Rosamund J., Hide, article in The Blackwell Encyclopaedia of Anglo-Saxon England, ed. Michael Lapidge et al., London, 2001 Green, J. A.: "The Last Century of Danegeld" in The English Historical Review, Vol. 96, No. 379 (April 1981), pp. 241–258 Harvey, Sally P. J.: "Domesday Book and Anglo-Norman Governance" in Transactions of the Royal Historical Society, 5th series, Vol. 25 (1975), pp. 175–193 Harvey, Sally P. J.: "Taxation and the Economy" in Domesday Studies, edited by J. C. Holt, Woodbridge, 1987 Lennard, Reginald: "The Origin of the Fiscal Carucate" in The Economic History Review, Vol. 14, No. 1 (1944), pp. 51–63 Lipson, E., The Economic History of England, Volume 1 (12th edition; London, 1959) Stenton, Frank M., Anglo-Saxon England (3rd ed.), Oxford University Press, 1971 Further reading Much work has been done investigating the hidation of various counties and also in attempts to discover more about the origin and development of the hide and the purposes for which it was used, but without producing many clear conclusions which would help the general reader. Those requiring more information may wish to consult the following works in addition to those quoted in the Citations: Bridbury, A. R. (1990) "Domesday Book: a Re-interpretation", in: English Historical Review, Vol. 105, No. 415. [Apr. 1990], pp. 284–309 Darby, Henry C. & Campbell, Eila M. J. (1961) The Domesday Geography of South Eastern England Darby, Henry C. & Maxwell, I. S. (1962) The Domesday Geography of Northern England Darby, Henry C. & Finn, R. Welldon (1967) The Domesday Geography of South West England Darby, Henry C. (1971) The Domesday Geography of Eastern England, 3rd ed. Darby, Henry C. & Terrett, I. B. (1971) The Domesday Geography of Midland England, 2nd ed. Hamshere, J. D. (1987) "Regressing Domesday Book: Tax Assessments of Domesday England", in: The Economic History Review, New series, Vol. 40, No. 2. 
[May 1987], pp. 247–251 Leaver, R. A. (1988) "Five Hides in Ten Counties: a Contribution to the Domesday Regression Debate", in: The Economic History Review, New series, Vol. 41, No. 4, [Nov. 1988], pp. 525–542 McDonald, John & Snooks, Graeme D. (1985) "Were the Tax Assessments of Domesday England Artificial?: the Case of Essex", in: The Economic History Review, New series, Vol. 38, No. 3, [Aug. 1985], pp. 352–372 Snooks, Graeme D. and McDonald, John. Domesday Economy: a New Approach to Anglo-Norman History. Oxford: Clarendon Press, 1986 Obsolete units of measurement Types of administrative division Units of area
Hide (unit)
[ "Mathematics" ]
3,248
[ "Obsolete units of measurement", "Quantity", "Units of area", "Units of measurement" ]
2,220,846
https://en.wikipedia.org/wiki/Teleogenesis
In the theory of cybernetics, teleogenesis (from the Greek teleos = 'purpose' and genesis = 'creation') is the creation of goal-creating processes. According to Peter Corning: "A cybernetic system is by definition a dynamic purposive system; it is 'designed' to pursue or maintain one or more goals or end-states". Teleogenesis arises from an extension of classical cybernetics, as proposed by Norbert Wiener, W. Ross Ashby and others in the late 1950s. See also Homeostasis Homeorhesis References Corning, Peter A. "Thermoeconomics: Beyond the second law" from: www.complexsystems.org Cybernetics
Teleogenesis
[ "Engineering" ]
147
[ "Software engineering", "Software engineering stubs" ]
2,220,957
https://en.wikipedia.org/wiki/Electron%20optics
Electron optics is a mathematical framework for the calculation of electron trajectories in the presence of electromagnetic fields. The term optics is used because magnetic and electrostatic lenses act upon a charged particle beam similarly to optical lenses upon a light beam. Electron optics calculations are crucial for the design of electron microscopes and particle accelerators. In the paraxial approximation, trajectory calculations can be carried out using ray transfer matrix analysis. Electron properties Electrons are charged particles (point charges with rest mass) with spin 1/2 (hence they are fermions). Electrons can be accelerated by suitable electric fields, thereby acquiring kinetic energy. Given sufficient voltage, the electron can be accelerated sufficiently fast to exhibit measurable relativistic effects. According to wave–particle duality, electrons can also be considered as matter waves with properties such as wavelength, phase and amplitude. Geometric electron optics Hamilton's optico-mechanical analogy shows that electron beams can be modeled using the concepts and mathematical formulas of light beams. The electron particle trajectory formula matches the formula for geometrical optics with a suitable electron-optical index of refraction. This index of refraction functions like the material properties of glass in altering the direction of ray propagation. In light optics, the refractive index changes abruptly at a surface between regions of constant index: the rays are controlled with the shape of the interface. In electron optics, the index varies throughout space and is controlled by electromagnetic fields created outside the electron trajectories. Magnetic fields Electrons interact with magnetic fields according to the second term of the Lorentz force: a cross product between the magnetic field and the electron velocity. In an infinite uniform field this results in a circular motion of the electron around the field direction with a radius given by r = mv⊥/(eB), where r is the orbit radius, m is the mass of an electron, v⊥ is the component of the electron velocity perpendicular to the field, e is the electron charge and B is the magnitude of the applied magnetic field. Electrons that have a velocity component parallel to the magnetic field will proceed along helical trajectories. Electric fields In the case of an applied electrostatic field, an electron will deflect towards the positive gradient of the field. Notably, this crossing of electrostatic field lines means that electrons, as they move through electrostatic fields, change the magnitude of their velocity, whereas in magnetic fields only the velocity direction is modified. Relativistic theory At relativistic electron velocities the geometrical electron optical equations rely on an index of refraction that includes both β, the ratio of electron velocity to that of light, and A∥, the component of the magnetic vector potential along the electron direction: n = βγ − (e/mc)·A∥, where m, e, and c are the electron mass, electron charge, and the speed of light. The first term is controlled by electrostatic lenses while the second one by magnetic lenses. Although not very common, it is also possible to derive the effects of magnetic structures on charged particles starting from the Dirac equation. Diffractive electron optics As electrons can exhibit non-particle (wave-like) effects such as interference and diffraction, a full analysis of electron paths must go beyond geometrical optics. 
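As a rough numerical illustration of the cyclotron-radius formula from the magnetic-fields section above, the following Python sketch computes the orbit radius of an electron in a uniform magnetic field. The 10 keV beam energy and 10 mT field are illustrative assumptions, not values from this article, and the relativistic momentum is used so the estimate also holds at higher voltages:

import math

# Physical constants (SI units)
M_E = 9.109e-31  # electron rest mass, kg
Q_E = 1.602e-19  # elementary charge, C
C = 2.998e8      # speed of light, m/s

def gyroradius(kinetic_energy_ev, b_field_t):
    """Orbit radius r = p_perp / (eB) for an electron whose velocity is
    entirely perpendicular to a uniform magnetic field B, using the
    relativistic momentum p = m*c*sqrt(gamma**2 - 1)."""
    gamma = 1.0 + kinetic_energy_ev * Q_E / (M_E * C**2)  # Lorentz factor
    p = M_E * C * math.sqrt(gamma**2 - 1.0)               # momentum, kg*m/s
    return p / (Q_E * b_field_t)

print(f"r = {gyroradius(10e3, 0.01) * 1000:.1f} mm")  # ~33.9 mm

In the non-relativistic limit quoted in the text, p reduces to mv⊥ and the expression becomes r = mv⊥/(eB).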
Free electron propagation (in vacuum) can be accurately described as a de Broglie matter wave with a wavelength inversely proportional to its longitudinal (possibly relativistic) momentum. Fortunately, as long as the electromagnetic field traversed by the electron changes only slowly compared with this wavelength (see typical values in the article on matter waves), Kirchhoff's diffraction formula applies. The essential character of this approach is to use geometrical ray tracing but to keep track of the wave phase along each path to compute the intensity in the diffraction pattern. As a result of the charge carried by the electron, electric fields, magnetic fields, or the electrostatic mean inner potential of thin, weakly interacting materials can impart a phase shift to the wavefront of an electron. Thickness-modulated silicon nitride membranes and programmable phase shift devices have exploited these properties to apply spatially varying phase shifts to control the far-field spatial intensity and phase of the electron wave. Devices like these have been applied to arbitrarily shape the electron wavefront, correct the aberrations inherent to electron microscopes, resolve the orbital angular momentum of a free electron, and to measure dichroism in the interaction between free electrons and magnetic materials or plasmonic nanostructures. Limitations of applying light optics techniques Electrons interact strongly with matter, as they are sensitive not only to the nucleus but also to the matter's electron charge cloud. Therefore, electrons require vacuum to propagate any reasonable distance, such as would be desirable in an electron optic system. Penetration is dictated by the mean free path, a measure of the probability of collision between electrons and matter, approximate values for which can be derived from Poisson statistics. See also Charged particle beam Strong focusing Electron beam technology Electron microscope Beam emittance Ernst Ruska Hemispherical electron energy analyzer Further reading P. Grivet, P. W. Hawkes, A. Septier (1972). Electron Optics, 2nd edition. Pergamon Press. A. Septier (ed.) (1980). Applied Charged Particle Optics. Part A. Academic Press. A. Septier (ed.) (1967). Focusing of Charged Particles. Volume 1. Academic Press. D. W. O. Heddle (2000). Electrostatic Lens Systems, 2nd edition. CRC Press. A. B. El-Kareh, J. C. J. El-Kareh (1970). Electron Beams, Lenses, and Optics Vol. 1. Academic Press. Hawkes, P. W. & Kasper, E. (1994). Principles of Electron Optics. Academic Press. Pozzi, G. (2016). Particles and Waves in Electron Optics and Microscopy. Academic Press. Jon Orloff et al. (2008). Handbook of Charged Particle Optics. Second Edition. CRC Press. Bohdan Paszkowski (1968). Electron Optics. Iliffe Books Ltd. Miklos Szilagyi (1988). Electron and Ion Optics. Springer New York, NY. Helmut Liebl (2008). Applied Charged Particle Optics. Springer Berlin. Erwin Kasper (2001). Advances in Imaging and Electron Physics, Vol. 116, Numerical Field Calculation for Charged Particle Optics. Academic Press. Harald Rose (2012). Geometrical Charged-Particle Optics. Springer Berlin, Heidelberg. Electron Optics Simulation Software Commercial programs SIMION (Ion and Electron Optics Simulator) EOD (Electron Optical Design) CPO (electronoptics.com) MEBS (Munro's Electron Beams Software) Field Precision LLC Free Software IBSIMU (by Taneli Kalvas) (ibsimu.SourceForge.net) References Electromagnetism Accelerator physics
Electron optics
[ "Physics" ]
1,415
[ "Electromagnetism", "Physical phenomena", "Applied and interdisciplinary physics", "Experimental physics", "Fundamental interactions", "Accelerator physics" ]
2,221,032
https://en.wikipedia.org/wiki/Cycles%20and%20fixed%20points
In mathematics, the cycles of a permutation σ of a finite set S correspond bijectively to the orbits of the subgroup generated by σ acting on S. These orbits are subsets of S that can be written as { c1, ..., cn }, such that σ(ci) = ci+1 for i = 1, ..., n − 1, and σ(cn) = c1. The corresponding cycle of σ is written as ( c1 c2 ... cn ); this expression is not unique since c1 can be chosen to be any element of the orbit. The size n of the orbit is called the length of the corresponding cycle; when n = 1, the single element in the orbit is called a fixed point of the permutation. A permutation is determined by giving an expression for each of its cycles, and one notation for permutations consists of writing such expressions one after another in some order. For example, let σ be a permutation that maps 1 to 2, 6 to 8, etc. Then one may write σ = ( 1 2 4 3 ) ( 5 ) ( 6 8 ) ( 7 ) = ( 7 ) ( 1 2 4 3 ) ( 6 8 ) ( 5 ) = ( 4 3 1 2 ) ( 8 6 ) ( 5 ) ( 7 ) = ... Here 5 and 7 are fixed points of σ, since σ(5) = 5 and σ(7) = 7. It is typical, but not necessary, to not write the cycles of length one in such an expression. Thus, σ = ( 1 2 4 3 ) ( 6 8 ) would be an appropriate way to express this permutation. There are different ways to write a permutation as a list of its cycles, but the number of cycles and their contents are given by the partition of S into orbits, and these are therefore the same for all such expressions. Counting permutations by number of cycles The unsigned Stirling number of the first kind, s(k, j), counts the number of permutations of k elements with exactly j disjoint cycles. Properties (1) For every k > 0: s(k, k) = 1. (2) For every k > 0: s(k, 1) = (k − 1)! (3) For every k > j > 1: s(k, j) = s(k − 1, j − 1) + s(k − 1, j) · (k − 1). Reasons for properties (1) There is only one way to construct a permutation of k elements with k cycles: every cycle must have length 1, so every element must be a fixed point. (2.a) Every cycle of length k may be written as a permutation of the numbers 1 to k; there are k! of these permutations. (2.b) There are k different ways to write a given cycle of length k, e.g. ( 1 2 4 3 ) = ( 2 4 3 1 ) = ( 4 3 1 2 ) = ( 3 1 2 4 ). (2.c) Finally: s(k, 1) = k!/k = (k − 1)! (3) There are two different ways to construct a permutation of k elements with j cycles: (3.a) If we want element k to be a fixed point we may choose one of the s(k − 1, j − 1) permutations with k − 1 elements and j − 1 cycles and add element k as a new cycle of length 1. (3.b) If we want element k not to be a fixed point we may choose one of the s(k − 1, j) permutations with k − 1 elements and j cycles and insert element k in an existing cycle in front of one of the k − 1 elements. Counting permutations by number of fixed points The value f(k, j) counts the number of permutations of k elements with exactly j fixed points. For the main article on this topic, see rencontres numbers. Properties (1) For every j < 0 or j > k: f(k, j) = 0. (2) f(0, 0) = 1. (3) For every k > 1 and k ≥ j ≥ 0: f(k, j) = f(k − 1, j − 1) + f(k − 1, j) · (k − 1 − j) + f(k − 1, j + 1) · (j + 1). Reasons for properties (3) There are three different methods to construct a permutation of k elements with j fixed points: (3.a) We may choose one of the f(k − 1, j − 1) permutations with k − 1 elements and j − 1 fixed points and add element k as a new fixed point. (3.b) We may choose one of the f(k − 1, j) permutations with k − 1 elements and j fixed points and insert element k in an existing cycle of length > 1 in front of one of the k − 1 − j elements in such cycles. (3.c) We may choose one of the f(k − 1, j + 1) permutations with k − 1 elements and j + 1 fixed points and join element k with one of the fixed points to a cycle of length 2. See also Cyclic permutation Cycle notation Notes References Permutations Fixed points (mathematics)
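The two recurrences above translate directly into code. The following Python sketch (the function names are illustrative, not standard) computes s(k, j) and f(k, j) and checks each row against the total number of permutations, k!:

from functools import lru_cache
from math import factorial

@lru_cache(maxsize=None)
def stirling_cycles(k, j):
    """Unsigned Stirling number of the first kind s(k, j): the number of
    permutations of k elements with exactly j disjoint cycles."""
    if k == 0 and j == 0:
        return 1
    if k <= 0 or j <= 0 or j > k:
        return 0
    # Element k is either a new cycle of length 1, or is inserted in an
    # existing cycle in front of one of the other k - 1 elements.
    return stirling_cycles(k - 1, j - 1) + (k - 1) * stirling_cycles(k - 1, j)

@lru_cache(maxsize=None)
def fixed_points(k, j):
    """f(k, j): the number of permutations of k elements with exactly
    j fixed points."""
    if j < 0 or j > k:
        return 0
    if k == 0:
        return 1
    # Element k is a new fixed point, or is inserted into a longer cycle,
    # or is joined with a former fixed point into a cycle of length 2.
    return (fixed_points(k - 1, j - 1)
            + (k - 1 - j) * fixed_points(k - 1, j)
            + (j + 1) * fixed_points(k - 1, j + 1))

for k in range(1, 8):
    assert sum(stirling_cycles(k, j) for j in range(1, k + 1)) == factorial(k)
    assert sum(fixed_points(k, j) for j in range(0, k + 1)) == factorial(k)

print([stirling_cycles(5, j) for j in range(1, 6)])  # [24, 50, 35, 10, 1]
print([fixed_points(5, j) for j in range(0, 6)])     # [44, 45, 20, 10, 0, 1]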
Cycles and fixed points
[ "Mathematics" ]
895
[ "Functions and mappings", "Mathematical analysis", "Permutations", "Fixed points (mathematics)", "Mathematical objects", "Combinatorics", "Topology", "Mathematical relations", "Dynamical systems" ]
2,221,141
https://en.wikipedia.org/wiki/Electroceramics
Electroceramics are a class of ceramic materials used primarily for their electrical properties. While ceramics have traditionally been admired and used for their mechanical, thermal and chemical stability, their unique electrical, optical and magnetic properties have become of increasing importance in many key technologies including communications, energy conversion and storage, electronics and automation. Such materials are now classified under electroceramics, as distinguished from other functional ceramics such as advanced structural ceramics. Historically, developments in the various subclasses of electroceramics have paralleled the growth of new technologies. Examples include: ferroelectrics - high dielectric capacitors, non-volatile memories; ferrites - data and information storage; solid electrolytes - energy storage and conversion; piezoelectrics - sonar; semiconducting oxides - environmental monitoring. Recent advances in these areas are described in the Journal of Electroceramics. Dielectric ceramics Dielectric materials used for the construction of ceramic capacitors include: lead zirconate titanate (PZT), barium titanate (BT), strontium titanate (ST), calcium titanate (CT), magnesium titanate (MT), calcium magnesium titanate (CMT), zinc titanate (ZT), lanthanum titanate (LT), neodymium titanate (NT), barium zirconate (BZ), calcium zirconate (CZ), lead magnesium niobate (PMN), lead zinc niobate (PZN), lithium niobate (LN), barium stannate (BS), calcium stannate (CS), magnesium aluminium silicate, magnesium silicate, barium tantalate, titanium dioxide, niobium oxide, zirconia, silica, sapphire, beryllium oxide, and zirconium tin titanate. Some piezoelectric materials can be used as well; the EIA Class 2 dielectrics are based on mixtures rich in barium titanate. In turn, EIA Class 1 dielectrics contain little or no barium titanate. Electronically conductive ceramics Indium tin oxide (ITO), lanthanum-doped strontium titanate (SLT), yttrium-doped strontium titanate (SYT). Fast ion conductor ceramics Yttria-stabilized zirconia (YSZ), gadolinium-doped ceria (GDC), lanthanum strontium gallate magnesite (LSGM), beta alumina, beta″ alumina. Piezoelectric and ferroelectric ceramics Commercially used piezoceramic is primarily lead zirconate titanate (PZT). Barium titanate (BT), strontium titanate (ST), quartz, and others are also used. Magnetic ceramics Ferrites, such as those made from iron(III) oxide and strontium carbonate, display magnetic properties. Lanthanum strontium manganite exhibits colossal magnetoresistance. See also Ceramic Genoa Joint Laboratories Strontium titanate Barium titanate Lead zirconate titanate References The Electroceramics and Crystal Physics Group at MIT Materials science Ceramic materials Condensed matter physics
Electroceramics
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
690
[ "Applied and interdisciplinary physics", "Phases of matter", "Materials science", "Ceramic materials", "Condensed matter physics", "nan", "Ceramic engineering", "Matter" ]
2,221,181
https://en.wikipedia.org/wiki/Ullmann%20condensation
The Ullmann condensation or Ullmann-type reaction is the copper-promoted conversion of aryl halides to aryl ethers, aryl thioethers, aryl nitriles, and aryl amines. These reactions are examples of cross-coupling reactions. Ullmann-type reactions are comparable to Buchwald–Hartwig reactions but usually require higher temperatures. Traditionally, these reactions require high-boiling, polar solvents such as N-methylpyrrolidone, nitrobenzene, or dimethylformamide and high temperatures (often in excess of 210 °C) with stoichiometric amounts of copper. The aryl halides generally need to be activated by electron-withdrawing groups. Traditional Ullmann-style reactions used "activated" copper powder, e.g. prepared in situ by the reduction of copper sulfate by zinc metal in hot water. The methodology improved with the introduction of soluble copper catalysts supported by diamine and acetylacetonate ligands. Ullmann ether synthesis: C-O coupling Illustrative of the traditional Ullmann ether synthesis is the preparation of p-nitrophenyl phenyl ether from 4-chloronitrobenzene and phenol. Copper is used as a catalyst, either in the form of the metal or copper salts. Modern arylations use soluble copper catalysts. Goldberg reaction: C-N coupling A traditional Goldberg reaction involves reaction of an aniline with an aryl halide. The coupling of 2-chlorobenzoic acid and aniline is illustrative. A typical catalyst is formed from copper(I) iodide and phenanthroline. The reaction is an alternative to the Buchwald–Hartwig amination reaction. Aryl iodides are more reactive arylating agents than are aryl chlorides, following the usual pattern. Electron-withdrawing groups on the aryl halide also accelerate the coupling. Hurtley reaction: C-C coupling The nucleophile can also be carbon-based, including carbanions as well as cyanide. In the traditional Hurtley reaction, the carbon nucleophiles were derived from malonic ester and other dicarbonyl compounds, coupled to an aryl halide bearing an ortho substituent (Z = CO2H). More modern Cu-catalyzed C-C cross-couplings utilize soluble copper complexes containing phenanthroline ligands. C–S coupling The arylation of alkylthiolates proceeds by the intermediacy of cuprous thiolates. Mechanism of Ullmann-type reactions In the case of Ullmann-type reactions (aminations, etherifications, etc. of aryl halides), the conversions involve copper(I) alkoxides, copper(I) amides, or copper(I) thiolates. The copper(I) reagent can be generated in situ from the aryl halide and copper metal. Even copper(II) sources are effective under some circumstances. A number of innovations have been developed with regards to copper reagents. These copper(I) compounds subsequently react with the aryl halide in a net metathesis reaction, exchanging the halide for the nucleophile. In the case of C-N coupling, kinetic studies implicate an oxidative addition reaction followed by reductive elimination from Cu(III) intermediates (L = one or more spectator ligands). History The Ullmann ether synthesis is named after its inventor, Fritz Ullmann. The corresponding Goldberg reaction is named after Irma Goldberg. The Hurtley reaction, which involves C-C bond formation, is similarly named after its inventor. References Carbon-heteroatom bond forming reactions Condensation reactions Name reactions
Ullmann condensation
[ "Chemistry" ]
772
[ "Coupling reactions", "Organic reactions", "Name reactions", "Carbon-heteroatom bond forming reactions", "Condensation reactions" ]
2,221,187
https://en.wikipedia.org/wiki/Solid%20solution
A solid solution, a term popularly used for metals, is a homogeneous mixture of two compounds in the solid state, having a single crystal structure. Many examples can be found in metallurgy, geology, and solid-state chemistry. The word "solution" is used to describe the intimate mixing of components at the atomic level and distinguishes these homogeneous materials from physical mixtures of components. Two terms are mainly associated with solid solutions – solvent and solute – depending on the relative abundance of the atomic species. In general, if two compounds are isostructural, then a solid solution will exist between the end members (also known as parents). For example, sodium chloride and potassium chloride have the same cubic crystal structure, so it is possible to make a pure compound with any ratio of sodium to potassium (Na1-xKx)Cl by dissolving that ratio of NaCl and KCl in water and then evaporating the solution. A member of this family is sold under the brand name Lo Salt, which is (Na0.33K0.66)Cl; hence it contains about 66% less sodium than normal table salt (NaCl). The pure minerals are called halite and sylvite; a physical mixture of the two is referred to as sylvinite. Because minerals are natural materials, they are prone to large variations in composition. In many cases specimens are members of a solid solution family, and geologists find it more helpful to discuss the composition of the family than that of an individual specimen. Olivine is described by the formula (Mg, Fe)2SiO4, which is equivalent to (Mg1−xFex)2SiO4. The ratio of magnesium to iron varies between the two endmembers of the solid solution series: forsterite (Mg-endmember: Mg2SiO4) and fayalite (Fe-endmember: Fe2SiO4), but the ratio in olivine is not normally defined. With increasingly complex compositions the geological notation becomes significantly easier to manage than the chemical notation. Nomenclature The IUPAC definition of a solid solution is a "solid in which components are compatible and form a unique phase". The narrower definition "crystal containing a second constituent which fits into and is distributed in the lattice of the host crystal" given in some references is not general and, thus, is not recommended. The expression is to be used to describe a solid phase containing more than one substance when, for convenience, one (or more) of the substances, called the solvent, is treated differently from the other substances, called solutes. One or several of the components can be macromolecules. Some of the other components can then act as plasticizers, i.e., as molecularly dispersed substances that decrease the glass-transition temperature at which the amorphous phase of a polymer is converted between glassy and rubbery states. In pharmaceutical preparations, the concept of solid solution is often applied to the case of mixtures of drug and polymer. The number of drugs that behave as solvents (plasticizers) of polymers is small. Phase diagrams On a phase diagram a solid solution is represented by an area, often labeled with the structure type, which covers the compositional and temperature/pressure ranges. Where the end members are not isostructural, there are likely to be two solid solution ranges with different structures dictated by the parents. In this case the ranges may overlap, and the materials in this region can have either structure, or there may be a miscibility gap in the solid state, indicating that attempts to generate materials with this composition will result in mixtures. 
In areas on a phase diagram which are not covered by a solid solution there may be line phases; these are compounds with a known crystal structure and set stoichiometry. Where the crystalline phase consists of two (non-charged) organic molecules, the line phase is commonly known as a cocrystal. In metallurgy, alloys with a set composition are referred to as intermetallic compounds. A solid solution is likely to exist when the two elements (generally metals) involved are close together on the periodic table; an intermetallic compound generally results when the two metals involved are not near each other on the periodic table. Details The solute may incorporate into the solvent crystal lattice substitutionally, by replacing a solvent particle in the lattice, or interstitially, by fitting into the space between solvent particles. Both of these types of solid solution affect the properties of the material by distorting the crystal lattice and disrupting the physical and electrical homogeneity of the solvent material. Where the atomic radius of the solute atom is larger than that of the solvent atom it replaces, the crystal structure (unit cell) often expands to accommodate it. This means that the composition of a material in a solid solution can be estimated from the unit cell volume, a relationship known as Vegard's law (see the sketch below). Some mixtures will readily form solid solutions over a range of concentrations, while other mixtures will not form solid solutions at all. The propensity for any two substances to form a solid solution is a complicated matter involving the chemical, crystallographic, and quantum properties of the substances in question. Substitutional solid solutions, in accordance with the Hume-Rothery rules, may form if the solute and solvent have: similar atomic radii (15% or less difference), the same crystal structure, similar electronegativities, and similar valency. A phase diagram of a system with complete solid solubility displays an alloy of two metals which forms a solid solution at all relative concentrations of the two species. In this case, the pure phase of each element is of the same crystal structure, and the similar properties of the two elements allow for unbiased substitution through the full range of relative concentrations. Solid solution of pseudo-binary systems in complex systems with three or more components may require a more involved representation of the phase diagram, with more than one solvus curve drawn corresponding to different equilibrium chemical conditions. Solid solutions have important commercial and industrial applications, as such mixtures often have superior properties to pure materials. Many metal alloys are solid solutions. Even small amounts of solute can affect the electrical and physical properties of the solvent. A binary phase diagram of a system with limited solid solubility shows the phases of a mixture of two substances, A and B, in varying concentrations. The region labeled "α" is a solid solution, with B acting as the solute in a matrix of A. On the other end of the concentration scale, the region labeled "β" is also a solid solution, with A acting as the solute in a matrix of B. The large solid region in between the α and β solid solutions, labeled "α + β", is not a solid solution. Instead, an examination of the microstructure of a mixture in this range would reveal two phases—solid solution B-in-A (α) and solid solution A-in-B (β)—which form separate phases, perhaps lamellae or grains. 
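As a minimal numerical sketch of how Vegard's law (mentioned above) can be applied in practice (the linear relation is the idealized form of the law, and the lattice parameters below are approximate literature values for NaCl and KCl, used only for illustration):

def vegard_composition(a_measured, a_parent_a, a_parent_b):
    """Estimate the mole fraction x of end member B in a solid solution
    A(1-x)B(x) from a measured lattice parameter, assuming Vegard's law:
    a(x) = (1 - x) * a_A + x * a_B."""
    return (a_measured - a_parent_a) / (a_parent_b - a_parent_a)

# Approximate cubic lattice parameters (angstroms): NaCl ~5.64, KCl ~6.29.
x = vegard_composition(a_measured=6.07, a_parent_a=5.64, a_parent_b=6.29)
print(f"Estimated KCl fraction: x = {x:.2f}")  # ~0.66, cf. the Lo Salt example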
Application In such a phase diagram, at three different concentrations the material will be solid until heated to its melting point, and then (after adding the heat of fusion) become liquid at that same temperature: the unalloyed extreme left, the unalloyed extreme right, and the dip in the center (the eutectic composition). At other proportions, the material will enter a mushy or pasty phase until it warms up to being completely melted. The mixture at the dip point of the diagram is called a eutectic alloy. Lead-tin mixtures formulated at that point (a 37/63 mixture) are useful when soldering electronic components, particularly if done manually, since the solid phase is quickly entered as the solder cools. In contrast, when lead-tin mixtures were used to solder seams in automobile bodies, a pasty state enabled a shape to be formed with a wooden paddle or tool, so a 70–30 lead to tin ratio was used. (Lead is being removed from such applications owing to its toxicity and consequent difficulty in recycling devices and components that include lead.) Exsolution When a solid solution becomes unstable—due to a lower temperature, for example—exsolution occurs and the two phases separate into distinct microscopic to megascopic lamellae. This is mainly caused by differences in cation size. Cations which have a large difference in radii are not likely to readily substitute. Alkali feldspar minerals, for example, have the end members albite, NaAlSi3O8, and microcline, KAlSi3O8. At high temperatures Na+ and K+ readily substitute for each other and so the minerals will form a solid solution, yet at low temperatures albite can only substitute a small amount of K+ and the same applies for Na+ in the microcline. This leads to exsolution, where they will separate into two distinct phases. In the case of the alkali feldspar minerals, thin white albite layers will alternate with typically pink microcline, resulting in a perthite texture. See also Solid solution strengthening Notes References External links DoITPoMS Teaching and Learning Package—"Solid Solutions" Materials science Mineralogy
Solid solution
[ "Physics", "Materials_science", "Engineering" ]
1,870
[ "Applied and interdisciplinary physics", "Materials science", "nan" ]
2,221,337
https://en.wikipedia.org/wiki/Stanis%C5%82aw%20Radziszowski
Stanisław P. Radziszowski (born June 7, 1953) is a Polish-American mathematician and computer scientist, best known for his work in Ramsey theory. Radziszowski was born in Gdańsk, Poland, and received his PhD from the Institute of Informatics of the University of Warsaw in 1980. His thesis topic was "Logic and Complexity of Synchronous Parallel Computations". From 1976 to 1980 he worked as a visiting professor at various universities in Mexico City. In 1984, he moved to the United States, where he took up a position in the Department of Computer Science at the Rochester Institute of Technology. Radziszowski has published many papers in graph theory, Ramsey theory, block designs, number theory and computational complexity. In a 1995 paper with Brendan McKay he determined the Ramsey number R(4,5) = 25. His survey of Ramsey numbers, last updated in March 2017, is a standard reference on the subject and is published in the Electronic Journal of Combinatorics. References External links Radziszowski's survey of small Ramsey numbers Home Page Sound file of Radziszowski speaking his own name (au format) 1953 births Living people Polish academics Polish mathematicians Polish computer scientists Rochester Institute of Technology faculty Combinatorialists University of Warsaw alumni
Stanisław Radziszowski
[ "Mathematics" ]
263
[ "Combinatorialists", "Combinatorics" ]
2,221,365
https://en.wikipedia.org/wiki/Queen%20Saovabha%20Memorial%20Institute
The Queen Saovabha Memorial Institute (QSMI) in Bangkok, Thailand, is an institute that specialises in the husbandry of venomous snakes, the extraction and research of snake venom, and vaccines, especially rabies vaccine. It houses a snake farm, which is a popular tourist attraction. The origins of the institute can be traced back to 1912, when King Rama VI granted permission for a government institute to manufacture and distribute rabies vaccine at the suggestion of Prince Damrong, whose daughter had died from a rabies infection. It was officially opened on 26 October 1913 in the Luang Building on Bamrung Muang Road as the Pastura Institute, after Louis Pasteur, who discovered the first vaccine against rabies. In 1917 it was renamed the Pasteur Institute and placed under the supervision of the Thai Red Cross Society. The institute also produced vaccine against smallpox. The Travel and Immunization Clinic is now located here; it offers vaccines and pre-travel consultation. In the early 1920s the king offered his private property for the construction of a new home for the institute on Rama IV Road. The new buildings were officially opened on 7 December 1922, now named for the king's mother, Queen Saovabha Phongsri. At the same time, the institute's first director, Dr. Leopold Robert, requested contributions from foreigners living in Thailand for the establishment of a snake farm, which would enable the institute to manufacture antivenom for snake bites. Reportedly the second snake farm in the world after Instituto Butantan in São Paulo, Brazil, it was opened on 22 November 1923 by Queen Savang Vadhana, then President of the Thai Red Cross, on the institute's premises. Research into snake venom is highly important, since many people fall victim to venomous snake bites; normally only an antivenom based on the same snake's venom can save the victim's life. The snake farm houses thousands of some of the most venomous snakes in the world, such as the king cobra and all sorts of vipers. Visitors can watch handlers interact with pythons and observe venom extractions. There is also a museum, and lectures are given. The QSMI and the snake farm are near Chulalongkorn Hospital, on the corner of Henri Dunant Road and Rama IV Road. References External links FactZoo.com | Queen Saovabha Memorial Institute's Visitors Brochure Thai Red Cross | Queen Saovabha Memorial Institute Bangkok Metropolitan Administration | Queen Saovabha Memorial Institute Out About Bangkok | Queen Saovapha Memorial Institute (includes details of institute's work) Blurrytravel.com | Queen Saovabha Memorial Institute (includes photos) Thailandguidebook.com | Queen Saovabha Memorial Institute (includes photos) Virtualtourist.com | Bangkok Travel Guide (includes reviews and photos) Herpetology organizations Toxicology organizations Research institutes in Thailand Tourist attractions in Bangkok Biological research institutes Museums in Bangkok Thai Red Cross Society Unregistered ancient monuments in Bangkok Pathum Wan district
Queen Saovabha Memorial Institute
[ "Environmental_science" ]
623
[ "Toxicology organizations", "Toxicology" ]
2,221,377
https://en.wikipedia.org/wiki/Irma%20Goldberg
Irma Goldberg (born 1871) was a Russian-born chemist. She was one of the first female organic chemists to have and sustain a successful career, her work even being cited under her own name in standard textbooks. Life Education Born in Moscow to a Russian-Jewish family, she traveled to Geneva in the 1890s to study chemistry at Geneva University. Early research, Ullmann reaction Her early research included the development of a process to remove sulfur and phosphorus from acetylene. Her first article, on the derivatives of benzophenone, coauthored by German chemist Fritz Ullmann, was published in 1897. She also researched and wrote a paper (published in 1904) on using copper as a catalyst for the preparation of a phenyl derivative of thiosalicylic acid, a process known as the Ullmann reaction; Goldberg is the only woman scientist unambiguously recognized for her own named reaction: the amidation (Goldberg) reaction. This modification of previous forms of the method was a great improvement, and was extremely helpful for laboratory-scale preparations. She collaborated on other chemistry research with Fritz Ullmann, in what they called the Ullmann-Goldberg collaborative. Move to Berlin, synthetic dye research In 1905, both Goldberg and Ullmann moved to the Technische Hochschule in Berlin. Goldberg's research, along with that of the Ullmann-Goldberg collaborative, was also a part of Germany's synthetic dye industry. Their research helped with the creation of the synthetic alizarin industry, that is, the process of replacing natural dye obtained from madder. In 1909, Goldberg also collaborated with Hermann Friedman to review German patents under BASF (Badische Anilin und Soda Fabrik) and Bayer & Co. Farbenfabriken, providing notes on preparation for 114 dyes. Marriage and later life In 1910, Goldberg married Ullmann. In 1923, they moved back to Geneva when Ullmann accepted a faculty position at Geneva University. Her exact death date is not known, but her name does appear at the top of a list of people signing a memorial notice in a Geneva newspaper for her deceased husband, Fritz Ullmann, in 1939. See also Timeline of women in science References External links 19th-century scientists from the Russian Empire 19th-century women scientists from the Russian Empire 20th-century Russian women scientists German women chemists Organic chemists 1871 births Year of death missing Emigrants from the Russian Empire to the German Empire 19th-century German women scientists 19th-century Swiss women scientists Chemists from the Russian Empire
Irma Goldberg
[ "Chemistry" ]
525
[ "Organic chemists" ]
2,221,532
https://en.wikipedia.org/wiki/American%20Coalition%20for%20Clean%20Coal%20Electricity
The American Coalition for Clean Coal Electricity (ACCCE, formerly ABEC or Americans for Balanced Energy Choices) is a U.S. non-profit advocacy group representing major American coal producers, utility companies and railroads. The organization seeks to influence public opinion and legislation in favor of coal-generated electricity in the United States, placing emphasis on the development and deployment of clean coal technologies. Since carbon capture and sequestration—which ACCCE and its member companies advocate to reduce greenhouse gas emissions from coal burning—has yet to be tested on a large scale, some have questioned whether this approach is feasible or realistic. In 2009, ACCCE faced a Congressional investigation when it was discovered that a lobbying firm hired by ACCCE had sent lawmakers forged letters purporting to come from a variety of minority-focused non-profit groups. History The ACCCE began operations in 2008, the result of a combination of two organizations: the Center for Energy and Economic Development (CEED) and Americans for Balanced Energy Choices (ABEC). CEED had been founded in 1992 and since then had been involved in a wide range of climate and energy policies related to coal-based electricity. ABEC, formed in 2000, had focused on consumer-based advocacy programs concerning the use of coal-based electricity. In 2008 these two groups were combined to form ACCCE, with the goal of focusing on both legislative and public advocacy efforts. The main programs include the America's Power campaign, launched in 2007 by ABEC, which had a significant presence during the 2008 and 2012 elections, as well as legislative efforts during the United States House of Representatives debate over the Waxman-Markey cap and trade legislation. Mike Duncan became President and CEO of ACCCE in 2012. By 2017, Duncan had been succeeded in that position by Paul Bailey, who had previously been named one of the top lobbyists by The Hill, where he was described as ACCCE's "point man for policy... essential in crafting the ACCCE's response" to the positions taken by the Obama administration. Another notable ACCCE lobbyist, Jaime Harrison, was a Democratic political operative who worked on behalf of ACCCE from 2009 to 2012. Harrison thereafter chaired the South Carolina Democratic Party, and in January 2017 made a bid for DNC chair, which he ended on February 23 with his endorsement of eventual winner Tom Perez. Harrison later accepted an appointment from Perez as Associate Chairman and Counselor of the Democratic National Committee. In June 2017, Paul Bailey joined Republican leaders including Paul Ryan and Mitch McConnell in welcoming President Donald Trump's announcement of the United States' withdrawal from the Paris Agreement. Bailey stated that "[t]he previous administration volunteered to meet one of the most stringent goals of any country in the world, while many other countries do far less to reduce their emissions", and contended that "[m]eeting President Obama's goal would have led to more regulations, higher energy prices, and dependence on less reliable energy sources". The organization maintains headquarters in Washington, D.C. 
Working methods Legislative In addressing comprehensive climate change legislation that would place a cap on greenhouse gas emissions and allow for trading of emission allowances, ACCCE has primarily advocated for the development and use of clean coal technologies, along with provisions concerning the allocation of carbon emission allowances. ACCCE has also expressed support for a ceiling on emission allowance prices. In 2008, when the U.S. Senate was considering the Lieberman-Warner bill (S. 2191), which would have created a cap and trade system, ACCCE changed its prior stance towards climate-change legislation, noting that it "would support mandatory limits on carbon dioxide as long as legislation met a set of principles that encouraged 'robust utilization of coal.'" The group also mounted legislative efforts surrounding the 2009 debate over the Waxman-Markey cap and trade legislation (H.R. 2454), arguing that the proposed legislation's carbon emission regulations would have led to increased energy costs and reduced employment, potentially placing additional strain on the economy during the late 2000s recession. ACCCE provided proposals to Members of Congress for changes in this legislation, and approved of some changes that were adopted, though the group did not support the final version of the bill that passed the U.S. House of Representatives, on account of concerns that not enough measures were taken to control energy rates. Advocacy-based In addition to its legislative methods, the organization has engaged in consumer-focused advocacy efforts in response to perceived environmental effects surrounding clean coal, consisting of direct-to-consumer advertising as well as a group of approximately 225,000 volunteers (referred to as "America's Power Army," according to their website) involved in "visiting town hall meetings, fairs and other functions attended by members of Congress (to) ask questions about energy policy." Initiatives of this form became the subject of news coverage surrounding the 2008 United States presidential election, as the organization's presence at the Democratic National Convention, Republican National Convention, presidential debates and other events has been described as having influenced both Senators John McCain's and Barack Obama's positions regarding investment in clean coal. In the last debate held prior to the 2008 election, Senator Obama noted his support for clean coal technology when prompted by Senator McCain to name a time in which he had backed a position not favored by the leaders of the Democratic Party. The organization actively countered President Obama's climate change agenda, arguing in 2013 that the industry had "made strides toward making coal more environmentally friendly", with ten new clean-coal technology plants having been built between 2011 and mid-2013, and five more in development or scheduled to begin operations at that time. Duncan asserted that regulations propounded by the Environmental Protection Agency had contributed to nearly 290 coal plant closures that year, with more likely to come if additional regulations were enacted, and that absent the additional burdens imposed on the industry by such interference, the coal industry would continue developing cleaner technologies. ACCCE supported the FutureGen carbon capture and sequestration project, first announced by President George W. Bush in 2003.
The project was funded in the American Recovery and Reinvestment Act of 2009, but the Department of Energy suspended the project in February 2015. ACCCE's legislative positions and advocacy-based actions have been met with opposing viewpoints from advocacy groups such as the Sierra Club and Greenpeace, which have questioned the viability of developing environmentally sustainable clean coal within an adequate time frame and budget, arguing that funding of such projects should be sourced exclusively from within the coal industry. Climate change denial Since 2009 the Coalition has, according to The Atlantic, "pushed outright denial of climate science". For example, in a 2014 report it called human-caused climate change a "hypothesis" and a "debate", claimed that carbon pollution would be beneficial rather than harmful, with benefits up to 400 times as high as its costs, and asserted that higher atmospheric carbon dioxide levels would be a benefit and that more carbon dioxide had no "discernable influence" on how much sea level would rise. A 2009 article by Josh Harkinson of Mother Jones magazine said ACCCE was among the most prominent organizations in promoting climate disinformation, grouping it with entities including ExxonMobil, the American Petroleum Institute, The Heartland Institute, and the Institute for Energy Research, as "members of the chorus claiming that global warming is a joke and that CO2 emissions are actually good for you". Forgery controversy During the 2009 debate over the Waxman/Markey bill, Bonner & Associates, a Washington, D.C. lobbying firm subcontracted by ACCCE through the Hawthorne Group to drum up "grassroots support" for this effort, sent a number of fraudulent letters to lawmakers on behalf of ACCCE. The letters were forged to appear to come from various minority-focused non-profit groups, including the National Association for the Advancement of Colored People and the American Association of University Women. When the forgery was exposed, and faced with a proposed Congressional investigation, ACCCE apologized to the community groups and to the members of Congress involved. ACCCE disavowed the tactic and blamed the forgeries on its subcontractor, who in turn blamed a temporary worker acting alone. The Washington Post described the situation as a "saga of modern Washington, in which an 'American coalition' [the ACCCE] claiming 200,000 supporters still relies on a subcontractor to gin up favorable letters." An investigation of ACCCE by U.S. Representative Edward Markey, launched in response to the forgeries, disclosed an additional set of fraudulent letters sent to lawmakers to lobby against the environmental legislation. In response to the investigation, the ACCCE pledged to take "all possible steps" to verify the authenticity of letters sent by Bonner & Associates on its behalf, and stated that it was cooperating with Markey's investigation. The investigation concluded in October 2009 with Jack Bonner, chairman of Bonner & Associates, taking "full responsibility" for the forged letters. Bonner & Associates was never paid by ACCCE for its work on the legislation. Members ACCCE is supported by 31 member organizations: Alliance Coal, LLC American Electric Power Associated Electric Cooperative Inc. Berwind Natural Resource Corp Big Rivers Electric Corporation BNSF Railway Buckeye Power Inc. Carbon Utilization Research Council (CURC) Caterpillar Incorporated Charah Crounse Corporation CSX Corporation Drummond Company, Inc.
Jackson Walker LLP John T. Boyd Company Kentucky River Coal Corporation Kentucky Coal Association Komatsu Mining Corporation Murray Energy Corporation Natural Resource Partners Norfolk Southern Corporation Oglethorpe Power Cooperative Ohio CAT Peabody Energy Corporation PowerSouth Energy Cooperative Prairie State Generating Company, LLC Southern Company Trapper Mining Union Pacific Railroad Western Fuels Association White Stallion Energy Center, LLC See also Coal power in the United States Clean coal technology References External links American Coalition for Clean Coal Electricity website American Coalition for Clean Coal Electricity at SourceWatch Climate change in the United States Coal in the United States Coal technology Political advocacy groups in the United States Energy organizations
American Coalition for Clean Coal Electricity
[ "Engineering" ]
2,077
[ "Energy organizations" ]
2,221,642
https://en.wikipedia.org/wiki/Assay%20sensitivity
Assay sensitivity is a property of a clinical trial defined as the ability of a trial to distinguish an effective treatment from a less effective or ineffective intervention. Without assay sensitivity, a trial is not internally valid and is not capable of comparing the efficacy of two interventions. Importance Lack of assay sensitivity has different implications for trials intended to show a difference greater than zero between interventions (superiority trials) and trials intended to show non-inferiority. Non-inferiority trials attempt to rule out some margin of inferiority between a test and control intervention, i.e., to rule out that the test intervention is worse than the control intervention by more than a chosen amount. If a trial intended to demonstrate efficacy by showing superiority of a test intervention to control lacks assay sensitivity, it will fail to show that the test intervention is superior and will fail to lead to a conclusion of efficacy. In contrast, if a trial intended to demonstrate efficacy by showing a test intervention is non-inferior to an active control lacks assay sensitivity, the trial may find an ineffective intervention to be non-inferior and could lead to an erroneous conclusion of efficacy. When two interventions within a trial are shown to have different efficacy (i.e., when one intervention is superior), that finding itself directly demonstrates that the trial had assay sensitivity (assuming the finding is not related to random or systematic error). In contrast, a trial that demonstrates non-inferiority between two interventions, or an unsuccessful superiority trial, generally does not contain such direct evidence of assay sensitivity. However, the idea that non-inferiority trials lack assay sensitivity has been disputed. Differences in sensitivity Assay sensitivity for a non-inferiority trial may depend upon the chosen margin of inferiority ruled out by the trial, and on the design of the planned non-inferiority trial. The chosen margin of inferiority in a non-inferiority trial cannot be larger than the largest effect size which the control intervention reliably and reproducibly demonstrates compared to placebo or no treatment in past superiority trials. For instance, if there is reliable and reproducible evidence from previous superiority trials of an effect size of 10% for a control intervention compared to placebo, an appropriately designed non-inferiority trial designed to rule out that the test intervention may be as much as 5% less effective than the control would have assay sensitivity. On the other hand, with this same data, a non-inferiority trial designed to rule out that the test intervention may be as much as 15% less effective than the control may not have assay sensitivity, since this trial would not ensure that the test intervention is any more effective than a placebo, given that the effect ruled out is larger than the effect of the control compared to placebo. The choice of the margin is sometimes problematic in non-inferiority trials. Because investigators desire larger margins to decrease the sample size needed to perform a trial, the chosen margin is sometimes larger than the effect size of the control compared to placebo. In addition, a valid non-inferiority trial is not possible in situations in which there is a lack of data demonstrating a reliable and reproducible effect of the control compared to placebo.
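The margin logic above can be made concrete with a small numerical sketch. The code below is purely illustrative and not part of the article: it assumes a two-arm trial with binary outcomes, a normal approximation for the difference in response rates, and hypothetical counts; a real analysis would use prespecified statistical methods.

```python
# Illustrative non-inferiority margin check (all numbers hypothetical).
from math import sqrt

def noninferiority_check(x_test, n_test, x_ctrl, n_ctrl, margin, z=1.96):
    """Conclude non-inferiority if the lower 95% confidence bound for
    (test rate - control rate) lies above -margin (normal approximation)."""
    p_t, p_c = x_test / n_test, x_ctrl / n_ctrl
    se = sqrt(p_t * (1 - p_t) / n_test + p_c * (1 - p_c) / n_ctrl)
    lower = (p_t - p_c) - z * se
    return lower, lower > -margin

# Suppose past superiority trials reliably show the control beating placebo by
# about 10 percentage points: a 5-point margin can preserve assay sensitivity,
# while a 15-point margin could declare "non-inferior" an intervention that is
# no better than placebo.
lower, ok = noninferiority_check(x_test=835, n_test=1000,
                                 x_ctrl=840, n_ctrl=1000, margin=0.05)
print(f"lower 95% bound = {lower:.3f}; non-inferior at a 5-point margin: {ok}")
# -> lower 95% bound = -0.037; non-inferior at a 5-point margin: True
```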
In addition to choosing a margin based upon credible past evidence, to have assay sensitivity, the planned non-inferiority trial must be designed in a way similar to the past trials which demonstrated the effectiveness of the control compared to placebo, the so-called "constancy assumption". In this way, non-inferiority trials have a feature in common with external (historically) controlled trials. This also means that non-inferiority trials are subject to some of the same biases as historically controlled trials; that is, the effect of a drug in a past trial may not be the same in a current trial given changes in medical practice, differences in disease definitions or changes in the natural history of a disease, differences in outcome timing and definitions, usage of concomitant medications, etc. The finding of "difference" or "no difference" between two interventions is not a direct demonstration of the internal validity of the trial unless another internal control confirms that the study methods have the ability to show a difference, if one exists, over the range of interest (i.e. the trial contains a third group receiving placebo). Since most clinical trials do not contain an internal "negative" control (i.e. a placebo group) to internally validate the trial, the data to evaluate the validity of the trial comes from past trials external to the current trial. See also Specificity (tests) Spectrum bias References External links ClinicalTrials.gov from US National Library of Medicine FDA Website Clinical research Drug discovery Clinical trials
Assay sensitivity
[ "Chemistry", "Biology" ]
963
[ "Life sciences industry", "Medicinal chemistry", "Drug discovery" ]
8,783,558
https://en.wikipedia.org/wiki/Environmental%20Science%20Services%20Administration
The Environmental Science Services Administration (ESSA) was a United States Federal executive agency created in 1965 as part of a reorganization of the United States Department of Commerce. Its mission was to unify and oversee the meteorological, climatological, hydrographic, and geodetic operations of the United States. It operated until 1970, when it was replaced by the new National Oceanic and Atmospheric Administration (NOAA). The first U.S. Government organization with the word "environment" in its title, ESSA was the first such organization chartered to study the global natural environment as a whole, bringing together the study of the oceans with that of both the lower atmosphere and the ionosphere. This allowed the U.S. Government for the first time to take a comprehensive approach to studying the oceans and the atmosphere, also bringing together various technologies (ships, aircraft, satellites, radar, and communications systems) that could operate in concert in gathering data for scientific study. Establishment and mission In May 1964, the U.S. Assistant Secretary of Commerce for Science and Technology, Dr. J. Herbert Hollomon, established a special committee to review the environmental science service activities and responsibilities of the United States Department of Commerce. Committee members included the Director of the United States Weather Bureau, Dr. Robert M. White (1923–2015); the Director of the United States Coast and Geodetic Survey, Rear Admiral Henry Arnold Karo (1903–1986) of the United States Coast and Geodetic Survey Corps; the Director of the National Bureau of Standards, Allen V. Astin (1904–1984); and a panel of scientists from industry and academia. The committee's goal was to consider ways of improving the Department of Commerce's environmental science efforts by improving management efficiency and making the provision of environmental science services to the public more effective. The committee's work resulted in its recommendation that the Department of Commerce consolidate various scientific efforts scattered within and between the Weather Bureau, Coast and Geodetic Survey, and National Bureau of Standards by establishing a new parent agency – the Environmental Science Services Administration (ESSA) – which would coordinate the activities of the Weather Bureau and Coast and Geodetic Survey and bring at least some of their efforts, along with some of the work done in the National Bureau of Standards, together into new organizations that focused scientific and engineering mission support on shared areas of inquiry. In a message to the United States Congress dated 13 May 1965 in which he formally proposed the creation of ESSA, U.S. President Lyndon Johnson described ESSA's mission in this way: The new Administration will then provide a single national focus for our efforts to describe, understand, and predict the state of the oceans, the state of the lower and upper atmosphere, and the size and shape of the earth. The Director of the Weather Bureau, Dr. Robert M. White, explained that the creation of ESSA: responded to an increasing national need for adequate warnings of severe natural hazards (e.g., tornadoes, hurricanes, floods); responded to technological advances in capabilities to observe the physical environment and communicate and process environmental data; and would enable scientists to investigate the physical environment as a "scientific whole" rather than a "collection of separate and distinct fields of scientific interest."
ESSA was established on 13 July 1965 under the Department of Commerce's Reorganization Plan No. 2 of 1965. Its creation brought the Weather Bureau and the Coast and Geodetic Survey, as well as the Central Radio Propagation Laboratory that had been part of the National Bureau of Standards, together under a single parent scientific agency for the first time. Although the Weather Bureau and Coast and Geodetic Survey retained their independent identities under ESSA, the offices of Director of the Weather Bureau and Director and Deputy Director of the Coast and Geodetic Survey were abolished. These offices were replaced by a new Administrator and Deputy Administrator of ESSA. Components and activities Headquarters ESSA was headquartered in Rockville, Maryland, with the ESSA Administrator as its senior executive. It consisted of five principal service and research elements, each of which reported directly to the ESSA Administrator: the Institutes for Environmental Research, reorganized in 1967 as the ESSA Research Laboratories; the Environmental Data Service; the United States Weather Bureau; the National Environmental Satellite Center; and the United States Coast and Geodetic Survey. Various other headquarters staff elements also reported directly to the Administrator, including the U.S. ESSA Commissioned Officer Corps (or "ESSA Corps"). Institutes for Environmental Research/ESSA Research Laboratories Institutes for Environmental Research (1965–1967) To tackle scientific and technological problems related to understanding the global environment, ESSA created the Institutes for Environmental Research, based in Boulder, Colorado. The four institutes were: The Institute for Telecommunications Sciences and Aeronomy, made up mostly of personnel from the National Bureau of Standards' old Central Radio Propagation Laboratory and the Geoacoustics Group of the National Bureau of Standards. The Institute for Earth Sciences, made up of staff from the Research Division of the United States Coast and Geodetic Survey. The Institute for Oceanography, made up of Coast and Geodetic Survey personnel. The Institute for Atmospheric Sciences, mostly staffed by personnel from the U.S. Weather Bureau's Office of Meteorological Research. ESSA Research Laboratories (1967–1970) To more precisely reflect the scope and mission of the individual elements of the Institutes for Environmental Research, ESSA reorganized them into the ESSA Research Laboratories in 1967. The ESSA Research Laboratories were made up of: The Earth Sciences Laboratory at Boulder, Colorado, which studied geomagnetism, seismology, geodesy, and related earth sciences; earthquake processes; the internal structure and accurate figure of the Earth; and the distribution of the Earth's mass. The Atlantic Oceanographic Laboratory at Miami, Florida, which studied oceanography, with an emphasis on the geology and geophysics of ocean basins, oceanic processes, sea-air interactions, hurricane research, and weather modification.
The Pacific Oceanographic Laboratory at Seattle, Washington, which studied oceanography; the geology and geophysics of the Pacific Ocean Basin and its margins; oceanic processes and dynamics; and tsunami generation, propagation, modification, detection, and monitoring. The Atmospheric Physics and Chemistry Laboratory at Boulder, Colorado, which studied the physics of clouds, precipitation, and the chemical composition of and nucleating substances in the lower atmosphere, and conducted laboratory and field experiments examining ways of developing feasible methods of weather modification. The Air Resources Laboratory at Silver Spring, Maryland, which studied the diffusion, transport, and dissipation of atmospheric contaminants and the development of methods for the prediction and control of air pollution. The Geophysical Fluid Dynamics Laboratory at Princeton, New Jersey, which studied the dynamics and physics of geophysical fluid systems and the development of a theoretical basis for the behavior and properties of the atmosphere and the oceans through mathematical modeling and computer simulation. The National Hurricane Research Laboratory at Miami, Florida, which examined tropical cyclones scientifically in order to improve predictions. The National Severe Storms Laboratory at Norman, Oklahoma, which studied tornadoes, squall lines, thunderstorms, and other severe local convective phenomena with a goal of improving methods of forecasting, detecting, and providing advance warnings of such storms. The Space Disturbances Laboratory at Boulder, Colorado, which studied the nature, behavior, and mechanisms of space disturbances and the development and use of techniques for continuous monitoring and early detection and reporting of important space disturbances. The Aeronomy Laboratory at Boulder, Colorado, which conducted theoretical, laboratory, rocket, and satellite studies of the physical and chemical processes controlling the mesosphere, thermosphere, exosphere and ionosphere of the Earth and equivalent regions of the atmospheres of other planets. The Wave Propagation Laboratory at Boulder, Colorado, which sought to develop new methods for remote sensing of the geophysical environment, with a special emphasis on the propagation of sound waves and of electromagnetic waves at millimeter, infrared, and optical frequencies. The Institute for Telecommunications Science in Boulder, Colorado, which served as the central U.S. Government agency for research and services in the propagation of radio waves, the radio properties of the Earth and its atmosphere, the nature of radio noise and electromagnetic interference, information transmission and antennas, and methods for the more effective use of the radio spectrum for telecommunications. The Research Flight Facility in Miami, Florida, which outfitted and operated aircraft specially instrumented for research and made aerial environmental measurements for ESSA and other groups. Environmental Data Service Under ESSA, the National Data Center was renamed the Environmental Data Service (EDS). In 1966, ESSA transferred the U.S. Coast and Geodetic Survey's Seismology Data Centers to Asheville, North Carolina, where they merged with the U.S. Weather Bureau's National Weather Records Center to create ESSA's Environmental Data Center. United States Weather Bureau Under the 1965 reorganization, the United States Weather Bureau became subordinate to ESSA. It retained its identity as the U.S. Weather Bureau while under ESSA.
It was renamed the National Weather Service (NWS) in 1970. National Environmental Satellite Center The National Aeronautics and Space Administration (NASA) began weather satellite programs in 1958, and ESSA inherited these upon its creation in 1965. ESSA's National Environmental Satellite Center worked jointly with NASA to develop weather satellite capabilities. It managed the first operational U.S. polar orbiting weather satellite system, known as the Television Infrared Observation Satellite (TIROS) Program. These satellites, launched between 1960 and 1965 and known as TIROS 1 through 10, were the first generation of American weather satellites. These early satellites carried low-resolution television and infrared cameras. Designed mainly to test the feasibility of weather satellites, TIROS proved to be extremely successful. Four were still operating when ESSA was established in 1965. TIROS paved the way for the more advanced weather satellites of the TIROS Operational System (TOS). The ESSA National Environmental Satellite Center worked jointly with NASA to deploy the new TOS satellites, which constituted an operational experiment with early imaging and weather broadcast systems. Nine of ESSA's TOS satellites were launched between 1966 and 1969, each named "ESSA" followed by a number from 1 to 9, beginning with the launch of ESSA-1 on 3 February 1966. The last of these satellites was decommissioned in 1977, but ESSA's work with NASA laid the foundation for the deployment of the first geostationary weather satellites, the Synchronous Meteorological Satellites of 1974 and 1975. United States Coast and Geodetic Survey Under the 1965 reorganization, the United States Coast and Geodetic Survey, whose history dated to 1807, was subordinated to ESSA. While under ESSA, it retained its distinct identity and continued to carry out its responsibilities for coastal and oceanic hydrographic surveys, geodetic work in the interior of the United States and at sea, and other scientific work, such as in seismology. The Coast and Geodetic Survey also continued to operate its fleet of survey ships and research ships while subordinate to ESSA. U.S. ESSA Commissioned Officer Corps (ESSA Corps) In the 1965 reorganization, the commissioned officers of the United States Coast and Geodetic Survey Corps, a component of the U.S. Coast and Geodetic Survey with a history dating back to 1917, were transferred to the control of the United States Secretary of Commerce. This created the United States Environmental Science Services Commissioned Officer Corps, known informally as the "ESSA Corps," whose director reported directly to the ESSA Administrator. Like the Coast and Geodetic Survey Corps before it, the ESSA Corps was responsible for providing commissioned officers to operate the Coast and Geodetic Survey's ships, fly aircraft, support peacetime defense requirements and purely civilian scientific projects, and provide a ready source of technically skilled officers which could be incorporated into the United States armed forces in time of war, and was one of the uniformed services of the United States. Senior leadership Robert M. White (1923–2015) served as the Administrator of ESSA throughout its existence. On the day ESSA and the ESSA Corps were created, Coast and Geodetic Survey Corps Rear Admiral Henry Arnold Karo (1903–1986) simultaneously became an ESSA Corps officer and was promoted to vice admiral to serve as ESSA's first deputy administrator. 
At the time the highest-ranking officer in the combined history of the Coast and Geodetic Survey Corps and ESSA Corps, Vice Admiral Karo served as Deputy Administrator of ESSA from 1965 to 1967. He was the only officer in the combined history of the Coast and Geodetic Survey Corps, ESSA Corps, and the ESSA Corps′ successor, the National Oceanic and Atmospheric Administration Commissioned Corps (NOAA Corps), to reach that rank until NOAA Corps Rear Admiral Michael S. Devany was promoted to vice admiral on 2 January 2014. The first Director of the ESSA Corps was Rear Admiral James C. Tison, Jr. (1908–1991), who served in this capacity from 1965 to 1968. He was succeeded by the second and last Director of the ESSA Corps, Rear Admiral Don A. Jones (1912–2000), who served from 1968 to 1970. Flag The flag of the Environmental Science Services Administration was in essence the flag of the United States Coast and Geodetic Survey, modified by the addition of a blue circle to the center of the red triangle, within which was a stylized, diamond-shaped map of the world. Because the Coast and Geodetic Survey retained its identity after it was placed under ESSA in 1965, ships of the Survey's fleet continued to fly the Coast and Geodetic Survey flag as a distinctive mark while the Survey was subordinate to ESSA. Disestablishment and replacement by NOAA In June 1966, the U.S. Congress passed the Marine Resources and Engineering Development Act, which declared that it was U.S. Government policy to: ...develop, encourage, and maintain a coordinated, comprehensive, and long-range national program in marine science for the benefit of mankind, to assist in protection of health and property, enhancement of commerce, transportation, and national security, rehabilitation of our commercial fisheries, and increased utilization of these and other resources. The act created a Commission on Marine Science, Engineering, and Resources – which came to be known informally as the "Stratton Commission" – and gave it the responsibility to review ongoing and planned U.S. Government marine science activities and recommend a national oceanographic program and a reorganization of the U.S. Government to carry out the program. President Lyndon Johnson appointed 15 members to the commission; Ford Foundation chairman Julius A. Stratton chaired it, and its members included attorney Leon Jaworski, Dean of the Graduate School of Oceanography at the University of Rhode Island John Knauss, ESSA Administrator Robert M. White, and other representatives of U.S. Government agencies, U.S. state governments, industry, academia, and other institutions with programs or interest in marine science and technology; it also included four U.S. Congressional advisors, including former U.S. Senator Warren G. Magnuson of Washington. The commission began its work in early 1967, and on 9 January 1969 it issued its final report, entitled Our Nation and the Sea: A Plan For National Action. The Commission determined that "because of the importance of the seas to this Nation and the world, our Federal organization of marine affairs must be put in order," and that fulfilling the U.S. ocean policy declared in the 1966 act and making "full and wise use of the marine environment" required the study of both the ocean and the atmosphere and their interactions with one another. 
Accordingly, it recommended the creation of an independent "National Oceanic and Atmospheric Agency" to administer the principal civil marine and atmospheric programs of the United States, and that the new agency be composed of the United States Coast Guard from the United States Department of Transportation; ESSA and its subordinates, the National Weather Service and U.S. Coast and Geodetic Survey, from the U.S. Department of Commerce; the Bureau of Commercial Fisheries and the functions of the Bureau of Sport Fisheries and Wildlife dealing with marine and migratory fishes from the United States Department of the Interior's United States Fish and Wildlife Service; the National Sea Grant Program from the National Science Foundation; elements of the United States Lake Survey from the United States Department of the Army; and the National Oceanographic Data Center from the United States Department of the Navy. Soon after the Commission published the report, the U.S. Congress began to deliberate action on it, as did the Advisory Council on Executive Organization created by President Richard Nixon in 1969. Among the Advisory Council's proposals for reorganization of the executive branch of the United States Government was one that proposed the replacement of the U.S. Department of the Interior with a new U.S. Department of Natural Resources, and that this new department include a "National Oceanic and Atmospheric Administration" which combined ESSA with some elements of the Department of the Interior; the Nixon administration considered placing the new Administration within the Department of the Interior as an interim measure pending the creation of a new Department of Natural Resources. Noting that two-thirds of the new Administration would be made up of ESSA personnel and funding, United States Secretary of Commerce Maurice Stans (1908–1998) proposed instead that the new Administration become part of the Department of Commerce, where ESSA already was in place. Nixon decided to side with Stans, as well as to incorporate some of the Stratton Commission's and Advisory Council's recommendations, and in early July 1970 submitted Department of Commerce Reorganization Plan No. 4. It proposed creating within 90 days, within the Department of Commerce, the new National Oceanic and Atmospheric Administration (NOAA), consisting of ESSA; the Bureau of Commercial Fisheries and the marine sport fishing program of the Bureau of Sport Fisheries and Wildlife; the Office of Sea Grant Programs from the National Science Foundation; the mapping, charting, and research functions of the U.S. Army's U.S. Lake Survey; the U.S. Navy's National Oceanographic Data Center; the Marine Minerals Technology Center from the Department of the Interior's United States Bureau of Mines; the U.S. Navy's National Oceanographic Instrumentation Center; and the Department of Transportation's National Data Buoy Project, although it did not follow the Stratton Commission's recommendation to include the U.S. Coast Guard in NOAA. Accordingly, on 3 October 1970, ESSA was abolished as part of Reorganization Plan No. 4 of 1970, and it was replaced by NOAA. Under NOAA, the National Weather Service continued to operate as such, while the Coast and Geodetic Survey was disestablished and its functions were divided among various new NOAA offices, all of which fell under NOAA's new National Ocean Survey (later renamed the National Ocean Service).
The Bureau of Commercial Fisheries of the United States Department of the Interior's United States Fish and Wildlife Service was transferred to NOAA, and its fisheries science and oceanographic research ships joined the hydrographic survey ships of the former Coast and Geodetic Survey fleet to form the new NOAA fleet. In the 1970 reorganization that created NOAA, the ESSA Corps was resubordinated to NOAA, becoming the National Oceanic and Atmospheric Administration Commissioned Officer Corps, known informally as the "NOAA Corps." Like its predecessors, the Coast and Geodetic Survey Corps and ESSA Corps, the NOAA Corps became one of the then-seven (now eight) uniformed services of the United States, and carries out responsibilities similar to those of the ESSA Corps. Legacy The first U.S. Government organization to address environmental science and earth sciences holistically, ESSA pioneered the organizational concept of uniting scientific and engineering activities that had been scattered among its subordinate agencies, so as to establish unified mission support for environmental science and technology objectives. ESSA's successor, NOAA, continued and broadened the application of this organizational concept by adding marine life sciences to its portfolio of holistic study of the oceans and atmosphere, alongside the earth sciences it inherited from ESSA. ESSA served as the prototype not only for NOAA but also for the United States Environmental Protection Agency, which was established two months after NOAA, on 2 December 1970. ESSA's work in designing weather satellites and managing their missions was a major step forward, both technologically and in terms of weather monitoring and prediction, and it played a major role in the development of modern weather satellites. See also National Oceanic and Atmospheric Administration National Weather Service Television Infrared Observation Satellite TIROS-1 TIROS-2 TIROS-3 TIROS-4 TIROS-5 TIROS-6 TIROS-7 TIROS-8 TIROS-9 TIROS-10 ESSA-1 ESSA-2 ESSA-3 ESSA-4 ESSA-5 ESSA-6 ESSA-7 ESSA-8 ESSA-9 United States Coast and Geodetic Survey References External links NOAA Central Library Our Nation and the Sea: A Plan For National Action Historic technical reports from the Environmental Science Services Administration (and other Federal agencies) are available in the Technical Report Archive and Image Library (TRAIL) Government agencies established in 1965 Agencies of the United States government 1965 establishments in the United States 1970 disestablishments in the United States United States Department of Commerce Meteorological instrumentation and equipment Satellite meteorology Space agencies Meteorology research and field projects Government agencies disestablished in 1970
Environmental Science Services Administration
[ "Technology", "Engineering" ]
4,424
[ "Meteorological instrumentation and equipment", "Measuring instruments" ]
8,783,636
https://en.wikipedia.org/wiki/Marianne%20Grunberg-Manago
Marianne Grunberg-Manago (January 6, 1921 – January 3, 2013) was a Soviet-born French biochemist. Her work helped make possible key discoveries about the nature of the genetic code. Grunberg-Manago was the first woman to lead the International Union of Biochemistry and the centuries-old French Academy of Sciences. Early life Grunberg-Manago was born into a family of artists who adhered to the teachings of the Swiss educational reformer Johann Pestalozzi. When she was 9 months old, her parents emigrated from the Soviet Union to France. Education and Research Grunberg-Manago studied biochemistry and, in 1955, while working in the lab of Spanish-American biochemist Severo Ochoa, she discovered the first nucleic-acid-synthesizing enzyme. Initially, everyone thought the new enzyme was an RNA polymerase used by E. coli cells to make long chains of RNA from separate nucleotides. Although the new enzyme could link nucleotides together, the reaction was highly reversible, and it later became clear that the enzyme, polynucleotide phosphorylase, usually catalyzes the breakdown of RNA, not its synthesis. Nonetheless, the enzyme was extraordinarily useful and important. Almost immediately, Marshall Nirenberg and J. Heinrich Matthaei put it to use to make synthetic RNA messages such as poly-uracil, whose repeating three-nucleotide codon UUU coded for the amino acid phenylalanine. This first step in cracking the genetic code depended entirely on the availability of Grunberg-Manago's enzyme. Ochoa and Arthur Kornberg won the 1959 Nobel Prize in Physiology or Medicine for work on the synthesis of the nucleic acids RNA and DNA. She was elected a Foreign Honorary Member of the American Academy of Arts and Sciences in 1978, a Foreign Associate Member of the National Academy of Sciences in 1982, and an International member of the American Philosophical Society in 1992. Grunberg-Manago was the first woman president of the International Union of Biochemistry (1985–1988), and she was also the first woman to preside over the French Academy of Sciences (1995–1996). Later life and death Late in her career, Grunberg-Manago was named emeritus director of research at CNRS, France's National Center for Scientific Research. Grunberg-Manago died in January 2013, three days before her 92nd birthday.
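As a toy illustration of the logic of those early coding experiments (a sketch written for this text, not a reconstruction of the historical work; the codon table here is deliberately truncated to the one assignment mentioned above):

```python
# Reading a synthetic RNA message three nucleotides at a time.
CODON_TABLE = {"UUU": "Phe"}  # the full genetic code assigns all 64 codons

def translate(rna):
    """Return the amino acids encoded by successive codons of an RNA string."""
    return [CODON_TABLE.get(rna[i:i + 3], "?") for i in range(0, len(rna) - 2, 3)]

poly_u = "U" * 9           # a poly-uracil message, like those made with
print(translate(poly_u))   # polynucleotide phosphorylase -> ['Phe', 'Phe', 'Phe']
```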
Awards and nominations Member of the EMBO (1964) Charles-Léopold-Mayer Prize from the French Academy of Sciences (1966) Foreign member of the American Society of Biological Chemists (1972) Member of the Federation of American Societies for Experimental Biology Member of the French Society for biochemistry and molecular biology Foreign member of the Franklin Society (1995) Member of the Spanish Society for molecular biology Member of the Greek Society for molecular biology Member of the Executive Board of the ICSU Foreign member of the New York Academy of Sciences (1977) Foreign member of the American Academy of Arts and Sciences (1978) Foreign member of the National Academy of Sciences in the United States (1982) Honorary foreign member of the USSR Academy of Sciences (1988) Member of Academia Europaea (1988) Honorary foreign member of the Russian Academy of Sciences (1991) Foreign member of the Ukrainian Academy of Sciences (1991) Grand Officer of the National Order of the Legion of Honor (2008) References 1921 births 2013 deaths French biochemists Officers of the French Academy of Sciences Fellows of the American Academy of Arts and Sciences Grand Officers of the Legion of Honour Foreign associates of the National Academy of Sciences Foreign members of the USSR Academy of Sciences Foreign members of the Russian Academy of Sciences French women scientists Women biochemists 20th-century American women scientists 20th-century American scientists 20th-century French women Soviet emigrants to France 21st-century American women Members of the American Philosophical Society Presidents of the International Union of Biochemistry and Molecular Biology
Marianne Grunberg-Manago
[ "Chemistry" ]
802
[ "Biochemists", "Women biochemists" ]
8,784,464
https://en.wikipedia.org/wiki/Object-capability%20model
The object-capability model is a computer security model. A capability describes a transferable right to perform one (or more) operations on a given object. It can be obtained by the following combination: An unforgeable reference (in the sense of object references or protected pointers) that can be sent in messages. A message that specifies the operation to be performed. The security model relies on not being able to forge references. Objects can interact only by sending messages on references. A reference can be obtained by: Initial conditions: In the initial state of the computational world being described, object A may already have a reference to object B. Parenthood: If A creates B, at that moment A obtains the only reference to the newly created B. Endowment: If A creates B, B is born with that subset of A's references with which A chose to endow it. Introduction: If A has references to both B and C, A can send to B a message containing a reference to C. B can retain that reference for subsequent use. In the object-capability model, all computation is performed following the above rules. Advantages that motivate object-oriented programming, such as encapsulation or information hiding, modularity, and separation of concerns, correspond to security goals such as least privilege and privilege separation in capability-based programming. The object-capability model was first proposed by Jack Dennis and Earl C. Van Horn in 1966. Loopholes in object-oriented programming languages Some object-based programming languages (e.g. JavaScript, Java, and C#) provide ways to access resources other than according to the rules above, including the following: Direct assignment to the instance variables of an object in Java and C#. Direct reflective inspection of the meta-data of an object in Java and C#. The pervasive ability to import primitive modules, e.g. java.io.File, that enable external effects. Such use of undeniable authority violates the conditions of the object-capability model. Caja and Joe-E are variants of JavaScript and Java, respectively, that impose restrictions to eliminate these loopholes. Advantages of object capabilities Computer scientist E. Dean Tribble stated that in smart contracts, identity-based access control did not support dynamically changing permissions well, compared to the object-capability model. He analogized the ocap model with giving a valet the key to one's car, without handing over the right to car ownership. The structural properties of object capability systems favor modularity in code design and ensure reliable encapsulation in code implementation. These structural properties facilitate the analysis of some security properties of an object-capability program or operating system. Some of these, in particular information flow properties, can be analyzed at the level of object references and connectivity, independent of any knowledge or analysis of the code that determines the behavior of the objects. As a consequence, these security properties can be established and maintained in the presence of new objects that contain unknown and possibly malicious code. These structural properties stem from the two rules governing access to existing objects: 1) An object A can send a message to B only if object A holds a reference to B. 2) An object A can obtain a reference to C only if object A receives a message containing a reference to C.
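As a rough illustration of these rules, consider the following sketch in Python (a language chosen here only for readability; Python does not itself enforce capability discipline and has loopholes of the kind described above, so this models the discipline rather than enforcing it, and all names are invented for the example):

```python
class Logger:
    """An object whose only authority is writing lines to standard output."""
    def write(self, line):
        print("log:", line)

class Worker:
    def __init__(self, logger):
        # Endowment: the creator decides which references the new object starts with.
        self._logger = logger

    def introduce(self, other_logger):
        # Introduction: a reference arrives inside a message and may be retained.
        self._logger = other_logger

    def run(self):
        # An object can send a message only on a reference it already holds.
        self._logger.write("working")

logger = Logger()        # Parenthood: the creator holds the only reference.
worker = Worker(logger)  # Endowment at creation time.
worker.run()             # Worker reaches Logger only because it was given it.
```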
As a consequence of these two rules, an object can obtain a reference to another object only through a preexisting chain of references. In short, "Only connectivity begets connectivity." Glossary of related terms object-capability system A computational system that implements principles described in this article. object An object has local state and behavior. An object in this sense is both a subject and an object in the sense used in the access control literature. reference An unforgeable communications channel (protected pointer, opaque address) that unambiguously designates a single object, and provides permission to send messages to that object. message What is sent on a reference. Depending on the system, messages may or may not themselves be first-class objects. request An operation in which a message is sent on a reference. When the message is received, the receiver will have access to any references included in the message. attenuation A common design pattern in object-capability systems: given a reference to an object, create another reference, to a proxy object, that carries certain security restrictions, such as only permitting read-only access or allowing revocation (a sketch of this pattern follows at the end of this article). The proxy object performs security checks on messages that it receives and passes on any that are allowed. Deep attenuation refers to the case where the same attenuation is applied transitively to any objects obtained via the original attenuated object, typically by use of a "membrane". Implementations Almost all historical systems that have been described as "capability systems" can be modeled as object-capability systems. (Note, however, that some uses of the term "capability" are not consistent with the model, such as POSIX "capabilities".) KeyKOS, EROS, Integrity (operating system), CapROS, Coyotos, seL4, OKL4 and Fiasco.OC are secure operating systems that implement the object-capability model. Languages that implement object capabilities Act 1 (1981), Eden (1985), Emerald (1987), Trusty Scheme (1992), W7 (1995), Joule (1996), Original-E (1997), Oz-E (2005), Joe-E (2005), CaPerl (2006), Emily (2006), Caja (2007–2021), Monte (2008–present), Pony (2014–present), Wyvern (2012–present), Newspeak (2007–present), Hacklang (2021–present), Rholang (2018–present) See also Capability-based security Capability-based addressing Actor model References Computer security models
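The attenuation pattern described in the glossary above lends itself to a short sketch. The following Python fragment is invented for illustration (it shows no particular ocap system's API): a proxy that forwards only permitted messages and can be revoked:

```python
class ReadOnlyFacet:
    """Attenuating proxy: forwards reads, exposes no write, supports revocation."""
    def __init__(self, target):
        self._target = target  # The full-authority reference stays encapsulated.

    def read(self):
        if self._target is None:
            raise PermissionError("capability revoked")
        return self._target.read()

    def revoke(self):
        # Severing the reference removes the authority. A fuller design would
        # hand revoke() only to the proxy's creator, as a separate facet.
        self._target = None

# Usage: hand untrusted code the facet, never the underlying object.
# facet = ReadOnlyFacet(open("data.txt"))
# facet.read() is allowed; facet.write(...) fails, since no such method exists.
```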
Object-capability model
[ "Engineering" ]
1,196
[ "Cybersecurity engineering", "Computer security models" ]
8,784,618
https://en.wikipedia.org/wiki/Ass%20to%20mouth
Ass to mouth (abbreviated as ATM or A2M in pornography) is a slang term associated with the porn industry describing anal sex immediately followed by oral sex. The term is primarily used to describe a sexual practice whereby an erect penis is removed from a receptive partner's anus and then directly put into their mouth, or possibly the mouth of another. Health concerns If the recipient of ass-to-mouth is performing fellatio on a penis or object that was removed from their own rectum, the health risks are generally limited to disturbances of the gastrointestinal tract, which may result from introducing normal intestinal flora from the rectum into the mouth and upper digestive tract. If the recipient's ano-rectal area is infected with a sexually transmitted disease like gonorrhea, however, there is an added risk of transmitting the infection to that person's mouth or throat. Intestinal parasites and other organisms can also be carried in feces. Risk of sexually transmitted infection (STI) or parasitic transmission exists only if fecal particles from an infected person are transmitted to the mouth of an uninfected person. In porn Porn industry performers often use enemas before filming anal sex sequences; however, this is primarily to eliminate the possibility of any fecal matter appearing on video rather than for disease prevention. Ass to mouth, along with a variant called ATOGM (ass to other girl's mouth), began to appear more frequently in hardcore pornography in the early 2000s, seeing an increase in popularity over the next decade. See also Anilingus Coprophilia Dirty Sanchez (sexual act) Pegging References Anal eroticism Oral sex Sexual acts Pornography terminology
Ass to mouth
[ "Biology" ]
347
[ "Sexual acts", "Behavior", "Sexuality", "Mating" ]
8,784,833
https://en.wikipedia.org/wiki/Diaphragm%20compressor
A diaphragm compressor is a variant of the classic reciprocating compressor, with backup and piston rings and a rod seal. The compression of gas occurs by means of a flexible membrane rather than by direct contact with a piston. The back-and-forth movement of the membrane is driven by a rod and a crankshaft mechanism. Only the membrane and the compressor box come into contact with the pumped gas, so this construction is best suited for pumping toxic and explosive gases. The membrane has to be reliable enough to withstand the strain of the pumped gas; it must also have adequate chemical properties and sufficient temperature resistance. A diaphragm compressor is the same as a membrane compressor. Invention In the late 19th century William Burton started a workshop building pumps and air compressors at Nogent-sur-Oise, 60 km north of Paris, France. Henri Corblin, generally recognised as the inventor of the metallic diaphragm compressor, was based nearby in Paris itself, and in 1923 he received a US patent for his invention and design work. Compression of hydrogen gas The photograph included in this section depicts a three-stage diaphragm compressor used to compress hydrogen gas to 6,000 psi (41 MPa) for use in a prototype hydrogen and compressed natural gas (CNG) fueling station built in downtown Phoenix, Arizona by the Arizona Public Service company (an electric utility); a rough per-stage pressure calculation for such a three-stage unit is sketched below. Reciprocating compressors were used to compress the natural gas. The prototype alternative fueling station was built in compliance with all of the prevailing safety, environmental and building codes in Phoenix, to demonstrate that such fueling stations could be built in urban areas. Hydrogen compression can also be achieved without the use of a compressor in high-pressure electrolysis, or with an ionic liquid piston compressor. See also Axial compressor References External links Kotech Compressor Compressor CFM Calculator Gas compressors
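As a rough worked example of the staging arithmetic for a multi-stage unit like the one described above: if each of the three stages runs at the same pressure ratio, that common ratio is the cube root of the overall ratio. The suction pressure used below is a hypothetical assumption (the article does not state it), so the numbers are purely illustrative.

```python
# Hypothetical staging arithmetic for a three-stage compressor (illustrative only).
p_in = 0.1    # MPa, assumed near-atmospheric suction pressure (not from the article)
p_out = 41.0  # MPa, the ~6,000 psi delivery pressure mentioned above
stages = 3

# Equal per-stage ratio r satisfies r**stages == p_out / p_in.
r = (p_out / p_in) ** (1.0 / stages)
print(f"per-stage pressure ratio ~ {r:.2f}")      # ~ 7.43

p = p_in
for i in range(1, stages + 1):
    p *= r
    print(f"stage {i} discharge: {p:.2f} MPa")    # ~ 0.74, 5.52, 41.00
```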
Diaphragm compressor
[ "Chemistry" ]
377
[ "Gas compressors", "Turbomachinery" ]
8,785,883
https://en.wikipedia.org/wiki/Tuber%20magnatum
Tuber magnatum, the white truffle (Italian: tartufo bianco), is a species of truffle in the order Pezizales and family Tuberaceae. It is found in southern Europe, the Balkans and Thailand. Description Fruiting in autumn, the fruitbodies can reach considerable diameter and weight, though they are usually much smaller. The flesh is pale cream or brown with white marbling. Distribution It is found mainly in the Langhe and Montferrat areas of the Piedmont region in northern Italy and, most famously, in the countryside around the cities of Alba and Asti. Acqualagna, in the northern part of the Marche near Urbino, is another center for the production and commercialization of white truffles, and its annual festival is one of the most important in Italy. They can also be found in Molise, Abruzzo and in the hills around San Miniato, in Tuscany. White truffles have also been found in Croatia (Istria, the Motovun forest along the Mirna river), in the Ticino and Geneva cantons of Switzerland, in south-east France, in Sicily, Hungary, Serbia, Slovenia (along the Dragonja and Rizana rivers), Greece, and in Thailand. In recent years, the search for truffles has become very popular in Bosnia and Herzegovina. An especially abundant occurrence is recorded in the regions of Vlašić, Lisina and Kozara, and lately, following the discovery of its presence there, in the western part of the Herzegovina region, around the village of Služanj and the town of Čitluk. Habitat Host plants They grow symbiotically with oak, hazel, poplar and beech. The most common host plants cited in the literature are oaks, including associations with Mediterranean species (Q. pubescens, Q. cerris and Q. ilex) and temperate species (Q. robur and Q. petraea). The second most common host plants cited are poplars, mainly Populus alba (about 13%) but also P. nigra, P. tremula, P. canadensis and P. deltoides. Among willows, four species are listed: Salix caprea, S. alba, S. purpurea and S. apennina. Less commonly, they are associated with five other species of host plants, each from a different genus: Abies alba (a conifer), Alnus cordata, Fagus sylvatica, Pyrus pyraster and Ulmus minor. Soils Its soils have an average pH of ~ 7.7, ranging from neutral to alkaline (in comparison, Tuber melanosporum (the Périgord truffle) is restricted to alkaline environments). In the Balkans and Pannonia regions, its soils contain 20% clay or more (in contrast to Tuber melanosporum, which needs well-drained soils with higher sand/silt content); but in the Apennines, and maybe also in Istria, the silt content dominates (45%) at the expense of clay (< 20%). Much depends on the vertical distribution of mineral and organic matter, determined during initial soil formation due to flooding. The sediments are typically high in carbonates (15%) in Italy and Istria, but only around 10% at Hungarian and Balkan sites. Similarly, organic matter content in Italy is three times higher (about 14%) than at white truffle sites in the Balkans (4.5%). Nitrogen content is relatively low (0.19–0.26%). This gives a C/N ratio of around 7 at Italian sites, which corresponds to relatively slow decomposition rates, and a higher C/N ratio in the Hungarian and Balkan lowlands, which are exposed to very regular flooding, inducing faster decomposition rates and elevated microbial activity in the uppermost soil layer. Associated microbial and fungal communities are poorly known at this stage (2018) and further studies in that direction are recommended.
Temperatures Fruitbodies (ascocarps) need at least 0.4 °C (1st percentile) during their formation, which occurs in winter; therefore their distribution range is roughly limited to the north by the mean winter isotherm of 0 °C. But this limit may be modified by localised microclimatic pockets, such as may occur in rugged terrain. Seasonality (the amplitude between summer and winter) also seems to play an important role. The species thrives best at sites with a mean annual temperature of ~ 13 °C, with seasonal mean temperatures of ~ 12 °C in Mar–May, 22 °C in Jun–Aug, 14 °C in Sep–Nov, and 5 °C in Dec–Feb. The warmest mean air temperature for white truffle growth in Jun–Aug is 24.3 °C (99th percentile), about four degrees above the physiological optimum for mycelial development in soil; temperatures in excess of this limit reduce the amount of mycelium in the topsoil (roughly the top 10 cm), which may explain why T. magnatum develops extra-radical mycelium in soil horizons below 30 cm. Water Drought-induced stress reduces the amount of mycelium in general. But T. magnatum is less tolerant than T. melanosporum and T. uncinatum (the Burgundy truffle) of short-term precipitation deficits in summer, because its peridium is not as well developed, subjecting the ascocarp to more water transpiration than in those two other species. On the other hand, T. magnatum is more tolerant of excess summer precipitation, up to 180% of normal, which is a bonus for sites located north of the Mediterranean, in particular Geneva (Switzerland). The ongoing climate change, with expected precipitation increases and projected warming, is likely to push the present northernmost limit of its range further north and expand it into central and western Europe. On the other hand, the increase in temperatures in humid continental climates (such as central Europe and the interior of the Balkan Peninsula) is likely to bring more precipitation and subsequent flooding. The alluvial/riparian habitats of T. magnatum would then be subjected to excessive waterlogging and overall inundation, which would interfere with the development of mycorrhizae and the formation of fruitbodies, as demonstrated by the Burgundy truffle elsewhere.
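The thresholds quoted in the two sections above lend themselves to a toy screening sketch. The code below is illustrative only: the function, the tolerance on the annual mean, and the sample values are invented for this example, and real site suitability also depends on soils, host trees and hydrology, as the surrounding sections make clear.

```python
# Toy climate screen using the white-truffle thresholds quoted above.
def screen_site(winter_mean, summer_mean, annual_mean, summer_precip_pct):
    """Temperatures in deg C; summer_precip_pct is Jun-Aug rain as % of normal."""
    return {
        "winter mean above fruitbody minimum (>= 0.4)": winter_mean >= 0.4,
        "summer mean below 99th-percentile limit (<= 24.3)": summer_mean <= 24.3,
        "annual mean near ~13 (tolerance +/- 3, assumed)": abs(annual_mean - 13.0) <= 3.0,
        "summer rain within tolerance (<= 180% of normal)": summer_precip_pct <= 180,
    }

# A hypothetical site matching the seasonal means given in the text.
for check, ok in screen_site(winter_mean=5.0, summer_mean=22.0,
                             annual_mean=13.0, summer_precip_pct=120).items():
    print(f"{check}: {'pass' if ok else 'fail'}")
```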
While some had expected it to sell for $1 million, it was sold for $61,000 to a Taiwanese buyer. In 2021, a white truffle from Piedmont weighing 830 g was sold for €103,000 at auction. Frauds Because T. magnatum commands a high price and is not the only white-coloured truffle, fraud involving look-alike species (such as T. borchii or T. asa) is frequent. Cheaper Tuber borchii are sold as T. magnatum. A 2012 test showed that 15% of high-priced truffles sold as French were a cheaper type of truffle from China. Isotopic analysis is the most reliable method for detecting fraud or mislabelling; as of 2021, the Jožef Stefan Institute in Slovenia is leading the establishment of a corresponding database. On the Asti market in 2012, more than 90% of the truffles did not come from Alba, and about 75% of the white truffles supposedly from Piedmont came from other Italian regions. Tuber oligospermum, which grows well in Tunisia's dry sand and is not deemed of any culinary value in Italy, is also sold as T. magnatum. In some cases, the scent is enhanced with petroleum-based essences such as bis(methylthio)methane, which is harmful to human health. In 2017, Italy's financial police, the Guardia di Finanza, uncovered a €66 million tax fraud among truffle producers. Zinc content is an important differentiating trait: it was found to be twice as high in T. magnatum as in all the other truffle species so far tested. T. magnatum also assimilates/accumulates Cu, K, Na, P, and Zn more efficiently than these other species; on the other hand, T. brumale was more successful in assimilating/accumulating S. But carbon isotope signatures of the various truffle species cannot discriminate their geographical origins, because mycorrhizal fungi are enriched in 13C compared to their host trees (fungi receive up to 20% of the total carbon fixed by their host trees), and forest ecosystems are characterized by settings that are too complex to allow for such discrimination. For example, highly heterogeneous Italian forest ecosystems with high fungal biodiversity showed both the lowest and the highest δ34S values in the truffle samples. In 2017, a new Italian tax law required truffle hunters earning more than €7,000 a year from truffle-hunting to provide receipts indicating the origin of their truffles upon the initial sale to a middleman. References Bibliography Beatrice Belfiori, Valentina D'Angelo, Claudia Riccioni, Marco Leonardi, Francesco Paolocci, Giovanni Pacioni and Andrea Rubin. "Genetic Structure and Phylogeography of Tuber magnatum Populations", Diversity, vol. 12, n° 2, p. 44, January 2020. Luana Bontempo, Federica Camin & Roberto Larcher. "Isotopic and elemental characterisation of Italian white truffle: A first exploratory study", Food and Chemical Toxicology, vol. 145, November 2020. Ulf Büntgen, Maya Jäggi, Simon Egli, Martin Heule, Martina Peter, Imre Zagyva, Paul J. Krusic, Stephan Zimermann & Istvan Bagi. "No radioactive contamination from the Chernobyl disaster in Hungarian white truffles (Tuber magnatum)", Environmental Pollution, vol. 252, Part B, September 2019, p. 1643-1647. Tomáš Čejka, Miroslav Trnka & Ulf Büntgen. "Sustainable cultivation of the white truffle (Tuber magnatum) requires ecological understanding", Mycorrhiza, vol. 33, p. 291–302, 2023. Vasilios Christopoulos, Polyxeni Psoma & Stephanos Diamandis. "Site characteristics of Tuber magnatum in Greece", Acta Mycologica, vol. 48, n° 1, 2013. Simone Graziosi, Ian Robert Hall & Alessandra Zambonelli. 
" The Mysteries of the White Truffle: Its Biology, Ecology and Cultivation", Encyclopedia, collection Encyclopedia of Fungi, vol. 2, n° 4, 2022 (detailed description of its morphology, differences with other white-coloured truffles, volatile components producing the aromas, etc.) "Controlled production of white truffles Made in France: a global first", press release, INRAE(fr), 16 February 2021 External links magnatum Truffles (fungi) Fungi described in 1788 Fungus species
Tuber magnatum
[ "Biology" ]
2,599
[ "Fungi", "Fungus species" ]
8,785,892
https://en.wikipedia.org/wiki/Tuber%20brumale
Tuber brumale, also known as Muscat truffle or winter truffle, is a species of truffle native to Southern Europe. It is naturally present in the soils of many truffle orchards. References Truffles (fungi) brumale Fungi described in 1831 Fungus species
Tuber brumale
[ "Biology" ]
65
[ "Fungi", "Fungus species" ]
8,786,058
https://en.wikipedia.org/wiki/Dynamic%20pricing
Dynamic pricing, also referred to as surge pricing, demand pricing, time-based pricing, or variable pricing, is a revenue management pricing strategy in which businesses set flexible prices for products or services based on current market demands. It usually entails raising prices during periods of peak demand and lowering prices during periods of low demand. As a pricing strategy, it encourages consumers to make purchases during periods of low demand (such as buying tickets well in advance of an event or buying meals outside of lunch and dinner rushes) and disincentivizes them during periods of high demand (such as using less electricity during peak electricity hours). In some sectors, economists have characterized dynamic pricing as having welfare improvements over uniform pricing and contributing to more optimal allocation of limited resources. Its usage often stirs public controversy, as people frequently think of it as price gouging. Businesses are able to change prices based on algorithms that take into account competitor pricing, supply and demand, and other external factors in the market. Dynamic pricing is a common practice in several industries such as hospitality, tourism, entertainment, retail, electricity, and public transport. Each industry takes a slightly different approach to dynamic pricing based on its individual needs and the demand for the product. Methods Cost-plus pricing Cost-plus pricing is the most basic method of pricing. A store will simply charge consumers the cost required to produce a product plus a predetermined amount of profit. Cost-plus pricing is simple to execute, but it only considers internal information when setting the price and does not factor in external influencers like market reactions, the weather, or changes in consumer value. A dynamic pricing tool can make it easier to update prices, but will not make the updates often if the user doesn't account for external information like competitor market prices. Due to its simplicity, this is the most widely used method of pricing, with around 74% of companies in the United States employing it. Usage is skewed, however: companies facing a high degree of competition use this strategy the most, while companies that deal with manufacturing tend to use it the least. Pricing based on competitors Businesses that want to price competitively will monitor their competitors' prices and adjust accordingly. This is called competitor-based pricing. In retail, the competitor that many companies watch is Amazon, which changes prices frequently throughout the day. Amazon is a market leader in retail that changes prices often, which encourages other retailers to alter their prices to stay competitive. Such online retailers use price-matching mechanisms like price trackers: the retailer gives the end user a price-match option, and when it is selected, an online bot searches for the lowest price across various websites and offers a price below the lowest one found. Such pricing behavior depends on market conditions, as well as a firm's planning. Although a firm in a highly competitive market is often compelled to cut prices, that is not always the case: under high competition but in a stable market, and with a long-term view, firms are predicted to cooperate on a price basis rather than undercut each other. 
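To make the two methods above concrete, the following minimal sketch (in Java) computes a cost-plus price and a competitor-matched price. The markup, undercut margin, and price floor are illustrative assumptions only, not figures from any named retailer.

// Minimal sketch of the cost-plus and competitor-based rules described
// above. All parameters (markup, undercut margin, floor) are illustrative
// assumptions, not figures from any named retailer.
public class PricingSketch {

    // Cost-plus: unit cost plus a predetermined profit margin.
    static double costPlusPrice(double unitCost, double markup) {
        return unitCost * (1.0 + markup);
    }

    // Competitor-based: undercut the lowest observed competitor price,
    // but never go below a cost-derived floor.
    static double competitorMatchedPrice(double floor, double[] competitorPrices) {
        double lowest = Double.MAX_VALUE;
        for (double p : competitorPrices) {
            lowest = Math.min(lowest, p);
        }
        double undercut = lowest * 0.99;   // offer slightly below the lowest price found
        return Math.max(undercut, floor);  // respect the profitability floor
    }

    public static void main(String[] args) {
        double cost = 8.00;
        System.out.printf("cost-plus: %.2f%n", costPlusPrice(cost, 0.25));
        System.out.printf("competitor-matched: %.2f%n",
                competitorMatchedPrice(cost * 1.05, new double[] {10.49, 9.79, 11.20}));
    }
}

In a real deployment the floor, the undercut margin and the refresh frequency would themselves be tuned, which is where the dynamic element enters.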
Pricing based on value or elasticity Ideally, a company should charge the price for a product that is equal to the value the consumer attaches to it. This is called value-based pricing. As this value can differ from person to person, it is difficult to uncover the perfect value and have a differentiated price for every person. However, consumers' willingness to pay can be used as a proxy for the perceived value. With the price elasticity of products, companies can calculate how many consumers are willing to pay for the product at each price point. Products with high elasticities are highly sensitive to changes in price, while products with low elasticities are less sensitive to price changes (ceteris paribus). Consequently, products with low elasticity are typically valued more by consumers, everything else being equal. The dynamic aspect of this pricing method is that elasticities change with respect to the product, category, time, location, and retailer. With the price elasticity of products and the margin of the product, retailers can use this method in their pricing strategy to aim for volume, revenue, or profit maximization. Bundle pricing There are two types of bundle pricing strategies: one from the consumer's point of view, and one from the seller's point of view. From the seller's point of view, an end product's price depends on whether it is bundled with something else; which bundle it belongs to; and sometimes on which customers it is offered to. This strategy is adopted by print-media houses and other subscription-based services. The Wall Street Journal, for example, offers a standalone price if an electronic mode of delivery is purchased, and a discount when it is bundled with print delivery. Time-based Many industries, especially online retailers, change prices depending on the time of day. Most retail customers shop during weekday office hours (between 9 AM and 5 PM), so many retailers will raise prices during the morning and afternoon, then lower prices during the evening. Time-based pricing of services such as provision of electric power includes: Time-of-use pricing (TOU pricing), whereby electricity prices are set for a specific time period on an advance or forward basis, typically not changing more often than twice a year. Prices paid for energy consumed during these periods are pre-established and known to consumers in advance, allowing them to vary their usage in response to such prices and manage their energy costs by shifting usage to a lower-cost period, or reducing their consumption overall (demand response). Critical peak pricing, whereby time-of-use prices are in effect except for certain peak days, when prices may reflect the costs of generating and/or purchasing electricity at the wholesale level. Real-time pricing, whereby electricity prices may change as often as hourly (exceptionally more often). Prices may be signaled to a user on an advance or forward basis, reflecting the utility's cost of generating and/or purchasing electricity at the wholesale level; and Peak-load reduction credits, for consumers with large loads who enter into pre-established peak-load-reduction agreements that reduce a utility's planned capacity obligations. Peak fit pricing is best used for products that are inelastic in supply, where suppliers can fully anticipate demand growth and can thus charge different prices for service during systematic periods of time. 
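The time-of-use scheme described above is essentially a pre-published price lookup keyed on the period of consumption. A minimal sketch follows; the period boundaries and tariffs are invented for illustration and do not correspond to any real utility's schedule.

import java.time.LocalTime;

// Minimal sketch of a time-of-use (TOU) tariff: prices are fixed in advance
// per period, so consumers can shift usage to cheaper hours. The period
// boundaries and rates below are illustrative assumptions only.
public class TimeOfUseTariff {

    static double pricePerKwh(LocalTime t) {
        int h = t.getHour();
        if (h >= 17 && h < 21) return 0.32; // assumed peak period
        if (h >= 7 && h < 17)  return 0.18; // assumed shoulder period
        return 0.09;                        // assumed off-peak period
    }

    public static void main(String[] args) {
        System.out.println(pricePerKwh(LocalTime.of(18, 30))); // 0.32
        System.out.println(pricePerKwh(LocalTime.of(2, 0)));   // 0.09
    }
}

Critical peak and real-time pricing differ mainly in how often, and how far in advance, such a table is updated.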
A utility with regulated prices may develop a time-based pricing schedule on analysis of its long-run costs, such as operation and investment costs. The output of a utility such as an electricity provider (or another service) operating in a market environment may be auctioned on a competitive market; time-based pricing will then typically reflect price variations on the market. Such variations include regular oscillations due to the demand patterns of users; supply issues (such as availability of intermittent natural resources like water flow or wind); and exceptional price peaks. Price peaks reflect strained conditions in the market (possibly augmented by market manipulation, as during the California electricity crisis), and convey a possible lack of investment. Extreme events include the default by Griddy after the 2021 Texas power crisis. By industry Hospitality Time-based pricing is the standard method of pricing in the tourism industry. Higher prices are charged during the peak season, or during special event periods. In the off-season, hotels may charge only the operating costs of the establishment, whereas investments and any profit are gained during the high season (this is the basic principle of long-run marginal cost pricing: see also long run and short run). Hotels and other players in the hospitality industry use dynamic pricing to adjust the cost of rooms and packages based on the supply and demand needs at a particular moment. The goal of dynamic pricing in this industry is to find the highest price that consumers are willing to pay. Another name for dynamic pricing in the industry is demand pricing. This form of price discrimination is used to try to maximize revenue based on the willingness to pay of different market segments. It features price increases when demand is high and decreases to stimulate demand when it is low. Having a variety of prices based on the demand at each point in the day makes it possible for hotels to generate more revenue by bringing in customers at the different price points they are willing to pay. Transportation Airlines change prices often depending on the day of the week, time of day, and the number of days before the flight. For airlines, dynamic pricing factors in different components such as: how many seats a flight has, departure time, and average cancellations on similar flights. A 2022 study in Econometrica estimated that dynamic pricing was beneficial for "early-arriving, leisure consumers at the expense of late-arriving, business travelers. Although dynamic pricing ensures seat availability for business travelers, these consumers are then charged higher prices. When aggregated over markets, welfare is higher under dynamic pricing than under uniform pricing." Congestion pricing is often used in public transportation and road pricing, where a higher price at peak periods is used to encourage more efficient use of the service or time-shifting to cheaper or free off-peak travel. For example, the San Francisco Bay Bridge charges a higher toll during rush hour and on the weekend, when drivers are more likely to be traveling. This is an effective way to boost revenue when demand is high, while also managing demand, since drivers unwilling to pay the premium will avoid those times. The London congestion charge discourages automobile travel to Central London during peak periods. The Washington Metro and Long Island Rail Road charge higher fares at peak times. 
The tolls on the Custis Memorial Parkway vary automatically according to the actual number of cars on the roadway, and at times of severe congestion can reach almost $50. Dynamic pricing is also used by Uber and Lyft. Uber's system for "dynamically adjusting prices for service" measures supply (Uber drivers) and demand (passengers hailing rides by use of smartphones), and prices fares accordingly. Ride-sharing companies such as Uber and Lyft have increasingly incorporated dynamic pricing into their operations. This strategy enables these businesses to offer the best prices for both drivers and passengers by adjusting prices in real time in response to supply and demand. When there is a strong demand for rides, rates go up to encourage more drivers to offer their services, and when there is low demand, prices go down to draw in more passengers. Professional sports Some professional sports teams use dynamic pricing structures to boost revenue. Dynamic pricing is particularly important in baseball because MLB teams play around twice as many games as some other sports and in much larger venues. Sports that are outdoors have to factor weather into pricing strategy, in addition to the date of the game, date of purchase, and opponent. Tickets for a game during inclement weather will sell better at a lower price; conversely, when a team is on a winning streak, fans will be willing to pay more. Dynamic pricing was first introduced to sports by Qcue, a start-up software company from Austin, Texas, and Major League Baseball club the San Francisco Giants. The San Francisco Giants implemented a pilot of 2,000 seats in the View Reserved and Bleachers and moved on to dynamically pricing the entire venue for the 2010 season. Qcue currently works with two-thirds of Major League Baseball franchises, not all of which have implemented a full dynamic pricing structure, and for the 2012 postseason, the San Francisco Giants, Oakland Athletics, and St. Louis Cardinals became the first teams to dynamically price postseason tickets. While behind baseball in terms of adoption, the National Basketball Association, National Hockey League, and NCAA have also seen teams implement dynamic pricing. Outside of the U.S., it has since been adopted on a trial basis by some clubs in the Football League. Scottish Premier League club Heart of Midlothian introduced dynamic pricing for the sale of their season tickets in 2012, but supporters complained that they were being charged significantly more than the advertised price. Retail Retailers, and online retailers in particular, adjust the price of their products according to competitors, time, traffic, conversion rates, and sales goals. Supermarkets often use dynamic pricing strategies to manage perishable inventory, such as fresh produce and meat products, that has a limited shelf life. By adjusting prices based on factors like expiration dates and current inventory levels, retailers can minimize waste and maximize revenue. Additionally, the widespread adoption of electronic shelf labels in grocery stores has made it easier to implement dynamic pricing strategies in real time, enabling retailers to respond quickly to changing market conditions and consumer preferences. These labels also make it easier for grocery stores to mark up high-demand items (e.g. making it more expensive to purchase ice in warmer weather). Theme parks Theme parks have also recently adopted this pricing model. Disneyland and Disney World adopted this practice in 2016, and Universal Studios followed suit. 
Since the supply of parks is limited and new rides cannot be added based on a surge of demand, the model followed by theme parks with regard to dynamic pricing resembles that followed by the hotel industry. During summertime, when demand is rather inelastic, the parks charge higher prices, whereas ticket prices in winter are lower. Criticism Dynamic pricing is often criticized as price gouging. Dynamic pricing is widely unpopular among consumers, as some feel it tends to favour particular buyers. While the intent of surge pricing is generally driven by demand-supply dynamics, some instances have proven otherwise. Some businesses utilise modern technologies (big data and IoT) to adopt dynamic pricing strategies, where collection and analysis of real-time private data occur almost instantaneously. As data-analysis technology develops rapidly, enabling detection of a person's browsing history, age, gender, location and preferences, some consumers fear "unwanted privacy invasions and data fraud", as the extent to which their information is used is often undisclosed or ambiguous. Even with firms' disclaimers stating that private information will only be used strictly for data collection and promising that no third-party distribution will occur, a few cases of company misconduct can disrupt consumers' perceptions. Some consumers are simply skeptical of general information collection outright due to the potential for "data leakages and misuses", possibly impacting suppliers' long-term profitability through reduced customer loyalty. Consumers can also develop price fairness or unfairness perceptions, whereby different prices being offered to individuals for the same products affect customers' perceptions of price fairness. Studies found that the ease of learning other individuals' purchase prices induced consumers to sense price unfairness and lower satisfaction when others paid less than they did. However, when consumers were price-advantaged, development of trust and increased repurchase intentions were observed. Other research indicated that price fairness perceptions vary depending on consumers' privacy sensitivity and on the nature of the dynamic pricing used, such as individual pricing, segment pricing, location-data pricing and purchase-history pricing. Amazon Amazon engaged in price discrimination for some customers in the year 2000, showing different prices at the same time for the same item to different customers, potentially violating the Robinson–Patman Act. When this incident was criticised, Amazon issued a public apology with refunds to almost 7000 customers but did not cease the practice. During the COVID-19 pandemic, prices of certain items in high demand were reported to quadruple, garnering negative attention. Although Amazon denied claims of any such manipulation and blamed a few sellers for driving up prices of essentials such as sanitizers and masks, prices of essential products 'sold by Amazon' had also risen heftily. Amazon claimed this was a result of software malfunction. Uber Uber's surge pricing has also been criticized. In 2013, when New York was in the midst of a storm, Uber users saw fares go up to eight times the usual rate. This incident attracted backlash from public figures, with Salman Rushdie amongst others publicly criticizing the move. After this incident, the company started placing caps on how high surge pricing can go during times of emergency, starting in 2015. 
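Uber's actual pricing algorithm is proprietary; purely to illustrate the mechanism criticized above, the sketch below derives a surge multiplier from a demand/supply ratio and clamps it at an emergency cap such as the one introduced in 2015. The formula and every number in it are assumptions.

// Illustrative-only sketch of a capped surge multiplier. The real Uber
// algorithm is proprietary; the ratio rule and the numbers here are assumptions.
public class SurgeSketch {

    static double surgeMultiplier(int rideRequests, int availableDrivers, double cap) {
        if (availableDrivers <= 0) {
            return cap; // no supply at all: charge the capped maximum
        }
        double ratio = (double) rideRequests / availableDrivers;
        double multiplier = Math.max(1.0, ratio); // never discount below the base fare here
        return Math.min(multiplier, cap);         // emergency cap limits the surge
    }

    public static void main(String[] args) {
        System.out.println(surgeMultiplier(240, 60, 2.5)); // demand 4x supply -> capped at 2.5
        System.out.println(surgeMultiplier(50, 80, 2.5));  // slack supply -> 1.0
    }
}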
Drivers have been known to hold off on accepting rides in an area until surge pricing forces fares up to a level satisfactory to them. Wendy's In 2024, Wendy's announced plans to test dynamic pricing in certain American locations during 2025. This pricing method was included with plans to redesign menu boards and these changes were announced to stakeholders. The company received significant online backlash for this decision. In response, Wendy's stated that the intended implementation was limited to reducing prices during low traffic periods. See also Hedonic regression Pay what you want Demand shaping References Pricing Economics of regulation Economics and time
Dynamic pricing
[ "Physics" ]
3,382
[ "Spacetime", "Economics and time", "Physical quantities", "Time" ]
8,786,357
https://en.wikipedia.org/wiki/Java%20performance
In software development, the programming language Java was historically considered slower than the fastest third-generation typed languages such as C and C++. In contrast to those languages, Java compiles by default to a Java Virtual Machine (JVM) with operations distinct from those of the actual computer hardware. Early JVM implementations were interpreters; they simulated the virtual operations one-by-one rather than translating them into machine code for direct hardware execution. Since the late 1990s, the execution speed of Java programs has improved significantly via the introduction of just-in-time compilation (JIT) (in 1997 for Java 1.1), the addition of language features supporting better code analysis, and optimizations in the JVM (such as HotSpot becoming the default for Sun's JVM in 2000). Sophisticated garbage collection strategies were also an area of improvement. Hardware execution of Java bytecode, such as that offered by ARM's Jazelle, was explored but not deployed. The performance of a Java program compiled to Java bytecode depends on how optimally its given tasks are managed by the host Java virtual machine (JVM), and how well the JVM exploits the features of the computer hardware and operating system (OS) in doing so. Thus, any Java performance test or comparison must always report the version, vendor, OS and hardware architecture of the JVM used. In a similar manner, the performance of the equivalent natively compiled program will depend on the quality of its generated machine code, so the test or comparison must also report the name, version and vendor of the compiler used, and its activated compiler optimization directives. Virtual machine optimization methods Many optimizations have improved the performance of the JVM over time. However, although Java was often the first virtual machine to implement them successfully, they have often been used in other similar platforms as well. Just-in-time compiling Early JVMs always interpreted Java bytecodes. This carried a large performance penalty, of between a factor of 10 and 20, for Java versus C in average applications. To combat this, a just-in-time (JIT) compiler was introduced into Java 1.1. Due to the high cost of compiling, an added system called HotSpot was introduced in Java 1.2 and was made the default in Java 1.3. Using this framework, the Java virtual machine continually analyses program performance for hot spots which are executed frequently or repeatedly. These are then targeted for optimization, leading to high-performance execution with a minimum of overhead for less performance-critical code. Some benchmarks show a 10-fold speed gain by this means. However, due to time constraints, the compiler cannot fully optimize the program, and thus the resulting program is slower than native code alternatives. Adaptive optimizing Adaptive optimizing is a method in computer science that performs dynamic recompilation of parts of a program based on the current execution profile. With a simple implementation, an adaptive optimizer may simply make a trade-off between just-in-time compiling and interpreting instructions. At another level, adaptive optimizing may exploit local data conditions to optimize away branches and use inline expansion. A Java virtual machine like HotSpot can also deoptimize code formerly JITed. This allows performing aggressive (and potentially unsafe) optimizations, while still being able to later deoptimize the code and fall back to a safe path. 
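The hot-spot behaviour described above can be observed directly: the same method usually gets faster once the JVM has identified it as hot and compiled it. The hand-rolled timing loop below is only a rough sketch to demonstrate the effect; the absolute numbers depend entirely on the JVM and hardware, and serious measurements belong in a benchmark harness.

// Rough demonstration of JIT warm-up: early rounds typically run
// interpreted or partially compiled, later rounds usually report lower
// times once HotSpot has compiled the hot method. Sketch only; use a
// proper harness for real measurements.
public class WarmupDemo {

    static long sumOfSquares(int n) {
        long s = 0;
        for (int i = 0; i < n; i++) {
            s += (long) i * i;
        }
        return s;
    }

    public static void main(String[] args) {
        for (int round = 0; round < 5; round++) {
            long t0 = System.nanoTime();
            long sink = 0; // consume results so the loop is not optimized away
            for (int i = 0; i < 10_000; i++) {
                sink += sumOfSquares(1_000);
            }
            long t1 = System.nanoTime();
            System.out.printf("round %d: %,d ns (sink=%d)%n", round, t1 - t0, sink);
        }
    }
}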
Garbage collection The 1.0 and 1.1 Java virtual machines (JVMs) used a mark-sweep collector, which could fragment the heap after a garbage collection. Starting with Java 1.2, the JVMs changed to a generational collector, which has a much better defragmentation behaviour. Modern JVMs use a variety of methods that have further improved garbage collection performance. Other optimizing methods Compressed Oops Compressed Oops allow Java 5.0+ to address up to 32 GB of heap with 32-bit references. Java does not support access to individual bytes, only objects, which are 8-byte aligned by default. Because of this, the lowest 3 bits of a heap reference will always be 0. By lowering the resolution of 32-bit references to 8-byte blocks, the addressable space can be increased to 32 GB. This significantly reduces memory use compared to using 64-bit references, as Java uses references much more than some languages like C++. Java 8 supports larger alignments such as 16-byte alignment to support up to 64 GB with 32-bit references. Split bytecode verification Before executing a class, the Sun JVM verifies its Java bytecodes (see bytecode verifier). This verification is performed lazily: classes' bytecodes are only loaded and verified when the specific class is loaded and prepared for use, and not at the beginning of the program. However, as the Java class libraries are also regular Java classes, they must also be loaded when they are used, which means that the start-up time of a Java program is often longer than for C++ programs, for example. A method named split-time verification, first introduced in the Java Platform, Micro Edition (J2ME), is used in the JVM since Java version 6. It splits the verification of Java bytecode into two phases: Design-time – when compiling a class from source to bytecode Runtime – when loading a class. In practice this method works by capturing knowledge that the Java compiler has of class flow and annotating the compiled method bytecodes with a synopsis of the class flow information. This does not make runtime verification appreciably less complex, but does allow some shortcuts. Escape analysis and lock coarsening Java is able to manage multithreading at the language level. Multithreading allows programs to perform multiple processes concurrently, thus improving the performance for programs running on computer systems with multiple processors or cores. Also, a multithreaded application can remain responsive to input, even while performing long running tasks. However, programs that use multithreading need to take extra care of objects shared between threads, locking access to shared methods or blocks when they are used by one of the threads. Locking a block or an object is a time-consuming operation due to the nature of the underlying operating system-level operation involved (see concurrency control and lock granularity). As the Java library does not know which methods will be used by more than one thread, the standard library always locks blocks when needed in a multithreaded environment. Before Java 6, the virtual machine always locked objects and blocks when asked to by the program, even if there was no risk of an object being modified by two different threads at once. 
For example, in the following code, a local Vector was locked before each of the add operations to ensure that it would not be modified by other threads (Vector is synchronized), but because it is strictly local to the method this is needless:

public String getNames() {
    final Vector<String> v = new Vector<>();
    v.add("Me");
    v.add("You");
    v.add("Her");
    return v.toString();
}

Starting with Java 6, code blocks and objects are locked only when needed, so in the above case, the virtual machine would not lock the Vector object at all. Since version 6u23, Java includes support for escape analysis. Register allocation improvements Before Java 6, allocation of registers was very primitive in the client virtual machine (registers did not live across blocks), which was a problem in CPU designs with fewer processor registers available, such as x86. If there are no more registers available for an operation, the compiler must copy from register to memory (or memory to register), which takes time (registers are significantly faster to access). However, the server virtual machine used a graph-coloring allocator and did not have this problem. An optimization of register allocation was introduced in Sun's JDK 6; it was then possible to use the same registers across blocks (when applicable), reducing accesses to memory. This led to a reported performance gain of about 60% in some benchmarks. Class data sharing Class data sharing (called CDS by Sun) is a mechanism which reduces the startup time for Java applications, and also reduces memory footprint. When the JRE is installed, the installer loads a set of classes from the system JAR file (the JAR file holding all the Java class library, called rt.jar) into a private internal representation, and dumps that representation to a file, called a "shared archive". During subsequent JVM invocations, this shared archive is memory-mapped in, saving the cost of loading those classes and allowing much of the JVM's metadata for these classes to be shared among multiple JVM processes. The corresponding improvement in start-up time is more obvious for small programs. History of performance improvements Apart from the improvements listed here, each release of Java introduced many performance improvements in the JVM and Java application programming interface (API). JDK 1.1.6: First just-in-time compilation (Symantec's JIT-compiler) J2SE 1.2: Use of a generational collector. J2SE 1.3: Just-in-time compiling by HotSpot. J2SE 1.4: See Sun's overview of performance improvements between the 1.3 and 1.4 versions. Java SE 5.0: Class data sharing Java SE 6: Split bytecode verification Escape analysis and lock coarsening Register allocation improvements Other improvements: Java OpenGL Java 2D pipeline speed improvements Java 2D performance also improved significantly in Java 6 See also 'Sun overview of performance improvements between Java 5 and Java 6'. Java SE 6 Update 10 Java Quick Starter reduces application start-up time by preloading part of the JRE data at OS startup into the disk cache. Parts of the platform needed to execute an application accessed from the web when the JRE is not installed are now downloaded first. The full JRE is 12 MB; a typical Swing application only needs to download 4 MB to start. The remaining parts are then downloaded in the background. Graphics performance on Windows improved by extensively using Direct3D by default, and using shaders on the graphics processing unit (GPU) to accelerate complex Java 2D operations. 
Java 7 Several performance improvements were released for Java 7. Further performance improvements were planned for an update of Java 6 or for Java 7: Provide JVM support for dynamic programming languages, following the prototyping work currently done on the Da Vinci Machine (Multi Language Virtual Machine), Enhance the existing concurrency library by managing parallel computing on multi-core processors, Allow the JVM to use both the client and server JIT compilers in the same session with a method called tiered compiling: The client would be used at startup (because it is good at startup and for small applications), The server would be used for long-term running of the application (because it outperforms the client compiler for this). Replace the existing concurrent low-pause garbage collector (also called the concurrent mark-sweep (CMS) collector) by a new collector called Garbage First (G1) to ensure consistent pauses over time. Comparison to other languages Objectively comparing the performance of a Java program and an equivalent one written in another language such as C++ needs a carefully and thoughtfully constructed benchmark which compares programs completing identical tasks. The target platform of Java's bytecode compiler is the Java platform, and the bytecode is either interpreted or compiled into machine code by the JVM. Other compilers almost always target a specific hardware and software platform, producing machine code that will stay virtually unchanged during execution. Very different and hard-to-compare scenarios arise from these two different approaches: static vs. dynamic compilations and recompilations, the availability of precise information about the runtime environment and others. Java is often compiled just-in-time at runtime by the Java virtual machine, but may also be compiled ahead-of-time, as is C++. When compiled just-in-time, the micro-benchmarks of The Computer Language Benchmarks Game indicate the following about its performance: slower than compiled languages such as C or C++, similar to other just-in-time compiled languages such as C#, much faster than languages without an effective native-code compiler (JIT or AOT), such as Perl, Ruby, PHP and Python. Program speed Benchmarks often measure performance for small numerically intensive programs. In some rare real-life programs, Java out-performs C. One example is the benchmark of Jake2 (a clone of Quake II written in Java by translating the original GPL C code). The Java 5.0 version performs better in some hardware configurations than its C counterpart. While it is not specified how the data was measured (for example if the original Quake II executable compiled in 1997 was used, which may be considered bad as current C compilers may achieve better optimizations for Quake), it notes how the same Java source code can have a huge speed boost just by updating the VM, something impossible to achieve with a 100% static approach. For other programs, the C++ counterpart can, and usually does, run significantly faster than the Java equivalent. A benchmark performed by Google in 2011 showed a factor of 10 difference between C++ and Java. At the other extreme, an academic benchmark performed in 2012 with a 3D modelling algorithm showed the Java 6 JVM being from 1.09 to 1.91 times slower than C++ under Windows. 
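The "carefully and thoughtfully constructed benchmark" called for above is usually built with a harness such as JMH, which takes care of warm-up iterations, JVM forking and dead-code elimination. The sketch below assumes the standard org.openjdk.jmh artifacts on the classpath; the benchmarked method is an arbitrary placeholder.

import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

// Minimal JMH sketch (assumes the org.openjdk.jmh dependency). JMH runs
// warm-up iterations before measuring, which addresses the JIT effects
// discussed in this article.
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
@State(Scope.Thread)
public class ConcatBenchmark {

    int n = 100;

    @Benchmark
    public String concatWithBuilder() {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < n; i++) {
            sb.append(i);
        }
        // Returned values are consumed by JMH, preventing dead-code elimination.
        return sb.toString();
    }
}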
Some optimizations that are possible in Java and similar languages may not be possible in certain circumstances in C++: C-style pointer use can hinder optimizing in languages that support pointers, The use of escape analysis methods is limited in C++, for example, because a C++ compiler does not always know if an object will be modified in a given block of code due to pointers, Java can access derived instance methods faster than C++ can access derived virtual methods due to C++'s extra virtual-table look-up. However, non-virtual methods in C++ do not suffer from v-table performance bottlenecks, and thus exhibit performance similar to Java. The JVM is also able to perform processor-specific optimizations or inline expansion. In addition, the ability to deoptimize code already compiled or inlined sometimes allows it to perform more aggressive optimizations than those performed by statically typed languages when external library functions are involved. Results for microbenchmarks between Java and C++ highly depend on which operations are compared. For example, when comparing with Java 5.0: 32- and 64-bit arithmetic operations, file input/output, and exception handling have a similar performance to comparable C++ programs Operations on arrays have better performance in C. The performance of trigonometric functions is much better in C. Multi-core performance The scalability and performance of Java applications on multi-core systems is limited by the object allocation rate. This effect is sometimes called an "allocation wall". However, in practice, modern garbage collector algorithms use multiple cores to perform garbage collection, which to some degree alleviates this problem. Some garbage collectors are reported to sustain allocation rates of over a gigabyte per second, and there exist Java-based systems that have no problems scaling to several hundreds of CPU cores and heaps sized several hundreds of GB. Automatic memory management in Java allows for efficient use of lockless and immutable data structures that are extremely hard or sometimes impossible to implement without some kind of garbage collection. Java offers a number of such high-level structures in its standard library in the java.util.concurrent package, while many languages historically used for high performance systems like C or C++ are still lacking them. Startup time Java startup time is often much longer than that of many languages, including C, C++, Perl or Python, because many classes (and above all classes from the platform class libraries) must be loaded before being used. When compared against similar popular runtimes, for small programs running on a Windows machine, the startup time appears to be similar to Mono's and a little slower than .NET's. It seems that much of the startup time is due to input-output (IO) bound operations rather than JVM initialization or class loading (the rt.jar class data file alone is 40 MB and the JVM must seek much data in this big file). Some tests showed that although the new split bytecode verification method improved class loading by roughly 40%, it only realized about 5% startup improvement for large programs. Albeit a small improvement, it is more visible in small programs that perform a simple operation and then exit, because the Java platform data loading can represent many times the load of the actual program's operation. 
Starting with Java SE 6 Update 10, the Sun JRE comes with a Quick Starter that preloads class data at OS startup to get data from the disk cache rather than from the disk. Excelsior JET approaches the problem from the other side. Its Startup Optimizer reduces the amount of data that must be read from the disk on application startup, and makes the reads more sequential. In November 2004, Nailgun, a "client, protocol, and server for running Java programs from the command line without incurring the JVM startup overhead", was publicly released, introducing for the first time an option for scripts to use a JVM as a daemon, for running one or more Java applications with no JVM startup overhead. The Nailgun daemon is insecure: "all programs are run with the same permissions as the server". Where multi-user security is needed, Nailgun is inappropriate without special precautions. Scripts in which per-application JVM startup dominates resource use see one-to-two order of magnitude runtime performance improvements. Memory use Java memory use is much higher than C++'s memory use because: There is an overhead of 8 bytes for each object and 12 bytes for each array in Java. If the size of an object is not a multiple of 8 bytes, it is rounded up to the next multiple of 8. This means an object holding one byte field occupies 16 bytes and needs a 4-byte reference. C++ also allocates a pointer (usually 4 or 8 bytes) for every object whose class directly or indirectly declares virtual functions. Lack of address arithmetic makes creating memory-efficient containers, such as tightly spaced structures and XOR linked lists, currently impossible (the OpenJDK Valhalla project aims to mitigate these issues, though it does not aim to introduce pointer arithmetic; this cannot be done in a garbage collected environment). In contrast to malloc and new, the average performance overhead of garbage collection asymptotically nears zero (more accurately, one CPU cycle) as the heap size increases. Parts of the Java Class Library must load before program execution (at least the classes used within a program). This leads to a significant memory overhead for small applications. Both the Java binary and native recompilations will typically be in memory. The virtual machine uses substantial memory. In Java, a composite object (class A which uses instances of B and C) is created using references to allocated instances of B and C. In C++ the memory and performance cost of these types of references can be avoided when the instance of B and/or C exists within A. In most cases a C++ application will consume less memory than an equivalent Java application due to the large overhead of Java's virtual machine, class loading and automatic memory resizing. For programs in which memory is a critical factor for choosing between languages and runtime environments, a cost/benefit analysis is needed. Trigonometric functions Performance of trigonometric functions is bad compared to C, because Java has strict specifications for the results of mathematical operations, which may not correspond to the underlying hardware implementation. On the x87 floating point subset, Java since 1.4 does argument reduction for sin and cos in software, causing a big performance hit for values outside the range. Java Native Interface The Java Native Interface incurs a high overhead, making it costly to cross the boundary between code running on the JVM and native code. 
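To make the JVM/native boundary concrete, the sketch below declares a native method. The library name demo and the method sumNative are hypothetical, and the matching C implementation is not shown; the program only runs once such a library has been built. Every call to the native method pays the JNI transition overhead discussed above.

// Minimal JNI declaration sketch. "demo" and sumNative are hypothetical;
// the matching C implementation (compiled into libdemo.so / demo.dll)
// is not shown, so this only runs once that library exists.
public class NativeSum {

    static {
        System.loadLibrary("demo"); // resolves libdemo.so on Linux, demo.dll on Windows
    }

    // Implemented in native code; each call crosses the JVM/native boundary.
    public static native long sumNative(int[] data);

    public static void main(String[] args) {
        System.out.println(sumNative(new int[] {1, 2, 3, 4}));
    }
}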
Java Native Access (JNA) provides Java programs easy access to native shared libraries (dynamic-link libraries (DLLs) on Windows) via Java code only, with no JNI or native code. This functionality is comparable to Windows' Platform/Invoke and Python's ctypes. Access is dynamic at runtime without code generation. But this convenience has a cost; JNA is usually slower than JNI. User interface Swing has been perceived as slower than native widget toolkits, because it delegates the rendering of widgets to the pure Java 2D API. However, benchmarks comparing the performance of Swing versus the Standard Widget Toolkit, which delegates the rendering to the native GUI libraries of the operating system, show no clear winner, and the results greatly depend on the context and the environments. Additionally, the newer JavaFX framework, intended to replace Swing, addresses many of Swing's inherent issues. Use for high performance computing Some people believe that Java performance for high performance computing (HPC) is similar to Fortran on compute-intensive benchmarks, but that JVMs still have scalability issues for performing intensive communication on a grid computing network. However, high performance computing applications written in Java have won benchmark competitions. In 2008 and 2009, an Apache Hadoop-based cluster (Hadoop is an open-source high-performance computing project written in Java) was able to sort a terabyte and a petabyte of integers the fastest. The hardware setup of the competing systems was not fixed, however. In programming contests Programs in Java start slower than those in other compiled languages. Thus, some online judge systems, notably those hosted by Chinese universities, use longer time limits for Java programs to be fair to contestants using Java. See also Common Language Runtime Performance analysis Java processor, an embedded processor running Java bytecode natively (such as JStik) Comparison of Java and C++ Java ConcurrentMap Citations References External links Site dedicated to Java performance information Debugging Java performance problems Sun's Java performance portal The Mind-map based on presentations of engineers in the SPb Oracle branch (as big PNG image) Java platform Computing platforms Software optimization
Java performance
[ "Technology" ]
4,656
[ "Computing platforms", "Java platform" ]
8,786,371
https://en.wikipedia.org/wiki/Xbloc
An Xbloc is a wave-dissipating concrete block (or "armour unit") designed to protect shores, harbour walls, seawalls, breakwaters and other coastal structures from the direct impact of incoming waves. The Xbloc model was designed and developed in 2001 by the Dutch firm Delta Marine Consultants, now called BAM Infraconsult, a subsidiary of the Royal BAM Group. Xbloc has been subjected to extensive research by several universities. Benefits vs other systems Concrete armour units are generally applied in breakwaters and shore protections. The units are placed in a single layer as the outer layer of the coastal structure. This layer is called the armour layer. Its function is twofold: (1) to protect the finer material below it against severe wave action; (2) to dissipate the wave energy to reduce the wave run-up, overtopping and reflection. These functions require a heavy, but porous armour. Common reasons to apply single-layer concrete armour units are: natural rock is unavailable in the required size or quality to withstand design wave or current loads quarry production is insufficient to match the material demand existing quarries are at an uneconomic distance from the project location road connections have load restrictions (bridges) and other bottlenecks, are in poor condition or congested Also, compared to older concrete armour units such as the tetrapod, which are normally placed in a double layer as for rock protection, modern single-layer armour units (like the Xbloc and Accropode) involve significantly less concrete. Therefore, less construction material (cement, gravel) is required, reducing costs and also the carbon footprint of coastal protection works. Like Xbloc, most of these blocks are commercial developments and patented as such. Xblocs are not produced by the patent holder, but are fabricated and installed by a contractor who in return pays a license fee. Such an agreement involves certain technical support activities to ensure the correct application of the protection system. The patent expires in 2023; after that date anyone can make a block with this shape, but one is not allowed to call it Xbloc, because the name is a protected trademark. Hydraulic stability and interlocking mechanism The Xbloc armour unit derives its hydraulic stability from its self-weight and by interlocking with surrounding units. Due to the highly porous armour layer (layer porosity of almost 60%) constructed with Xbloc units, the energy of the incoming waves will be largely absorbed. The Xbloc armour layer is therefore able to protect the rock in the under layer from erosion due to waves. Besides empirical formulae derived from physical model testing, the interaction between breakwater elements (submerged or emerged) and waves, as well as the filtration of the fluid into the porous breakwater, has been investigated amongst others by MEDUS, based on RANS equations coupled with a RNG turbulence model. Xblocs are typically applied on an armour slope steepness between 3V:4H and 2V:3H. Unlike natural rock, the hydraulic stability does not increase at shallower slope inclinations, because, in that situation, the interlocking effect is reduced. Standard Xbloc sizes vary between 0.75m3 (significant wave height up to Hs = 3.35m) and 20m3 (Hs = 10.0m). It is noted that the given relation between design wave height and volume size is valid for the concept stage only. Further parameters such as foreshore slope, crest configuration, construction equipment, etc. can have an important effect on the recommended unit size. 
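The quoted relation between design wave height and standard unit volume can be reproduced at concept level with a simple stability-number calculation. The sketch below assumes a stability-number form Hs/(Δ·Dn), a design stability number of about 2.77, and concrete and sea-water densities of 2400 and 1025 kg/m³; all of these figures are assumptions for illustration rather than values taken from the Xbloc design guidelines, although the outputs land close to the standard sizes quoted above (about 0.73 m³ for Hs = 3.35 m and about 19.5 m³ for Hs = 10.0 m).

// Concept-stage armour sizing sketch using an assumed stability number
// Ns = Hs / (delta * Dn). Ns, the densities and the applicability of this
// form are illustrative assumptions; detailed design relies on the Xbloc
// guidelines and physical model tests, as noted in the text.
public class ArmourSizing {

    static final double RHO_CONCRETE = 2400.0; // kg/m3, assumed
    static final double RHO_SEAWATER = 1025.0; // kg/m3, assumed
    static final double NS_DESIGN    = 2.77;   // assumed design stability number

    // Unit volume V = Dn^3 (m3) for a given significant wave height Hs (m).
    static double unitVolume(double hs) {
        double delta = RHO_CONCRETE / RHO_SEAWATER - 1.0; // relative buoyant density, ~1.34
        double dn = hs / (delta * NS_DESIGN);             // nominal diameter in metres
        return dn * dn * dn;
    }

    public static void main(String[] args) {
        System.out.printf("Hs = 3.35 m -> V = %.2f m3%n", unitVolume(3.35)); // ~0.73
        System.out.printf("Hs = 10.0 m -> V = %.1f m3%n", unitVolume(10.0)); // ~19.5
    }
}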
For detailed design, in particular for non-standard situations, physical model tests are essential and normally carried out to confirm the overall stability and functional performance of a breakwater (wave overtopping and/or wave penetration). The effect of interlocking is apparent when comparing a rock revetment with a modern single-layer unit for average boundary conditions, while taking into account the lower specific density of concrete compared to most natural rock commonly used in breakwater construction. Assuming that natural rock would be placed at identical slope steepness, the individual rock weight would need to be three times as high as for Xbloc units. Rock is generally placed as a double layer, so the volume of armour material which needs to be quarried, stored, handled, transported and installed can be enormous for a larger breakwater exposed to significant wave action. Due to the interlocking effect, the weight, and thus the volume, of single-layer armour units is considerably less than that of an armour consisting entirely of rock. In addition, units are normally fabricated near or at the project site, so that transport issues are less critical. Production of armour units The Xbloc consists of non-reinforced concrete, similar to other single-layer armour units. Ordinary concrete C25/30 is normally appropriate for the production of Xbloc armour units. However, concrete of higher strength is often applied for other reasons, e.g. early strength for faster de-moulding, ice loads, etc. By omitting reinforcement, time and costs are cut and the armour units are less vulnerable to long-term corrosion damage. The optimal shape of a single-layer armour unit combines the robustness of a compact concrete body with the slenderness required for interlocking. The structural integrity is normally confirmed by finite element calculations (FEM) and prototype drop tests. Although both wooden and steel moulds can be used to construct the Xbloc formwork, steel moulds are preferred as they can be used repeatedly to produce large numbers of armour units. Various mould designs, consisting of 2 sections, are used. The moulds are either vertically or horizontally assembled. Pouring and compaction of concrete are done simultaneously. An appropriate formwork design facilitates the stripping of the moulds at an early stage and largely prevents honeycombing, surface bubbles and striking damage. Due to the shape of the Xbloc unit, relatively simple formwork can be used, made of a limited number of different steel plates. Since a single Xbloc unit can weigh up to 45 tons, the construction is done as close as possible to the area of application. Placement In contrast to the placement of other interlocking concrete blocks, the Xbloc unit does not require stringent specifications about the orientation of each unit on a breakwater slope. Because of the shape of the Xbloc, each of the six sides of the unit interlocks efficiently. Hence, the blocks easily find a position that fully utilizes the interlocking mechanism. This increases the efficiency of placing armour units on a slope. Due to the random structure and high porosity of an Xbloc breakwater, an artificial reef habitat is created for marine fauna and flora. XblocPlus DMC came to the market in 2018 with the XblocPlus. This is not merely an improved version of the Xbloc; rather, it is a block that functions differently and has its own advantages and disadvantages. 
The XblocPlus needs to be placed in a regular pattern and has characteristics found in placed blocks such as natural basaltic columns or placed concrete blocks like Basalton. DMC saw opportunities for this block in the Afsluitdijk improvement that began in 2018. Here this block is used in the wave impact zone. The block in this usage is called the 'Levvel-block', after the joint venture that is improving the Afsluitdijk. The Basalton Quattroblok is placed in the wave run-up zone on the Afsluitdijk. The XblocPlus is also used in the Vistula Spit canal in Poland. See also References British Standards, BS 6349 Code of Practice for Maritime Structures, Part 7, Guide to design & construction of Breakwaters, 1991. CIRIA/CUR, Rock Manual, 2007. Research Articles on the Development and Design of Xbloc Breakwater Armour Units. H.J. Verhagen, Classical, Innovative and Unconventional Coastline Protection Methods, Coastal Engineering section, Delft University of Technology, the Netherlands, 2004. ASCE Specialty Conference, Washington D.C., Seabees in Service, March 1983. External links Delta Marine Consultants Xbloc design guidelines Rock Manual 2007 MEDUS (Maritime Engineering Division University Salerno) Coastal engineering Wave-dissipating concrete blocks Dutch inventions Royal BAM Group
Xbloc
[ "Engineering" ]
1,724
[ "Coastal engineering", "Civil engineering" ]
8,786,437
https://en.wikipedia.org/wiki/Derepression
In genetics and cell biology, repression is a mechanism often used to decrease or inhibit the expression of a gene. Removal of repression is called derepression. This mechanism may occur at different stages in the expression of a gene, with the result of increasing the overall RNA or protein products. Dysregulation of derepression mechanisms can result in altered gene expression patterns, which may lead to negative phenotypic consequences such as disease. Derepression of transcription Transcription can be repressed in a variety of ways, and therefore can be derepressed in different ways as well. A common mechanism is allosteric regulation. This is when a substrate binds a repressor protein and causes it to undergo a conformational change. If the repressor is bound upstream of a gene, such as in an operator sequence, then it would be repressing the gene's expression. This conformational change would take away the repressor's ability to bind DNA, thus removing its repressive effect on transcription. Another form of transcriptional derepression uses chromatin remodeling complexes. For transcription to occur, RNA polymerase needs to have access to the promoter sequence of the gene or it cannot bind the DNA. Sometimes these sequences are wrapped around nucleosomes or are in condensed heterochromatin regions, and are therefore inaccessible. Through different chromatin remodeling mechanisms these promoter sequences can become accessible to the RNA polymerase, and transcription becomes derepressed. Transcriptional derepression may also occur at the level of transcription factor activation. Certain families of transcription factors are non-functional on their own because their active domains are blocked by another part of the protein. Substrate binding to this second, regulatory domain causes a conformational change in the protein that allows access to the active domain. This lets the transcription factor bind to DNA and serve its function, thus derepressing the transcription factor. Derepression of translation Derepression of translation increases protein production without altering the levels of mRNA in the cell. miRNAs are a common mechanism of translation repression, binding to mRNAs through complementary base pairing to silence them. Certain RNA binding proteins have been shown to target untranslated regions of the mRNAs and upregulate the translation initiation rates by alleviating the repressive miRNA effects. Example of derepression Auxin signalling An example is the auxin-mediated derepression of the auxin response factor family of transcription factors in plants. These auxin response factors are repressed by Aux/IAA repressors. In the presence of auxin, these Aux/IAA proteins undergo ubiquitination and are then degraded. This derepresses the auxin response factors so they may carry out their functions in the cell. Altered derepression causing diseases Familial Alzheimer's disease Alzheimer's is a neurodegenerative disease involving progressive memory loss and other declines in brain function. One common cause of familial Alzheimer's is mutation in the PSEN1 gene. This gene encodes a protein that cleaves certain intracellular peptides which, once free in the cytoplasm, promote CBP degradation. Mutations in PSEN1 decrease its production or its ability to cleave proteins. This derepresses the CBP proteins and allows them to perform their function of upregulating transcription of their target genes. 
Rett syndrome

Rett syndrome is a neurodevelopmental disorder involving deterioration of learned language and motor skills, autism, and seizures starting in infancy. Many cases of Rett syndrome are associated with mutations in MECP2, a gene encoding a transcriptional repressor. Mutations in this gene decrease the levels of MeCP2 binding to different promoter sequences, resulting in their overall derepression. The increased expression of these MeCP2-regulated genes in neurons contributes to the Rett syndrome phenotype.

Beckwith-Wiedemann syndrome

This syndrome is associated with increased susceptibility to tumors and growth abnormalities in children. A common cause of this syndrome is a mutation in an imprint control region near the Igf2 gene. This imprint control region is normally bound by an insulator on the maternal allele, which prevents an enhancer from acting on the Igf2 gene. The insulator is absent on the paternal allele, which allows the enhancer access to the gene. Mutations in this imprint control region prevent the insulator from binding, which derepresses enhancer activity on the maternal Igf2 gene. This abnormal derepression and increase in gene expression can result in Beckwith-Wiedemann syndrome.
Derepression
[ "Chemistry", "Engineering", "Biology" ]
976
[ "Genetics techniques", "Gene expression", "Genetic engineering", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry" ]
8,786,647
https://en.wikipedia.org/wiki/Cash%20%28unit%29
Cash or li is a traditional Chinese unit of weight. The terms "cash" or "le" were documented as having been used by British explorers in the 1830s when trading in Qing territories of China. Under Hong Kong's Weights and Measures Ordinance, 1 cash is about . Currently, it is candareen or catty, namely .

See also
Chinese units of measurement

External links
Chinese/Metric/Imperial Measurement Converter
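The numeric values above were lost in extraction. As a hedged illustration only, the sketch below assumes the commonly cited ladder of 1 catty = 0.60478982 kg, 16 taels per catty, and successive tenths down to the cash; treat these constants as assumptions rather than a quotation of the Ordinance itself.

```python
# Assumed traditional weight ladder (believed to match Hong Kong usage,
# but illustrative, not statutory text):
#   1 catty = 0.60478982 kg, 1 tael = 1/16 catty,
#   1 mace = 1/10 tael, 1 candareen = 1/10 mace, 1 cash = 1/10 candareen.

CATTY_KG = 0.60478982          # assumed catty in kilograms
TAEL_KG = CATTY_KG / 16        # 16 taels per catty
CASH_KG = TAEL_KG / 1000       # 10 x 10 x 10 subdivisions below the tael

def cash_to_grams(cash_units: float) -> float:
    """Convert a weight in cash (li) to grams."""
    return cash_units * CASH_KG * 1000

print(f"1 cash ≈ {cash_to_grams(1):.5f} g")  # ≈ 0.03780 g, about 37.8 mg
print(f"1 cash = 1/{16 * 1000} catty")        # 1/16000 catty
```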
Cash (unit)
[ "Physics", "Mathematics" ]
99
[ "Matter", "Quantity", "Units of mass", "Mass", "Units of measurement" ]
8,787,159
https://en.wikipedia.org/wiki/Objections%20to%20evolution
Objections to evolution have been raised since evolutionary ideas came to prominence in the 19th century. When Charles Darwin published his 1859 book On the Origin of Species, his theory of evolution (the idea that species arose through descent with modification from a single common ancestor in a process driven by natural selection) initially met opposition from scientists with different theories, but eventually came to receive near-universal acceptance in the scientific community. The observation of evolutionary processes occurring (as well as the modern evolutionary synthesis explaining that evidence) has been uncontroversial among mainstream biologists since the 1940s. Since then, criticisms and denials of evolution have come from religious groups, rather than from the scientific community. Although many religious groups have found reconciliation of their beliefs with evolution, such as through theistic evolution, other religious groups continue to reject evolutionary explanations in favor of creationism, the belief that the universe and life were created by supernatural forces. The U.S.-centered creation–evolution controversy has become a focal point of perceived conflict between religion and science. Several branches of creationism, including creation science, neo-creationism, and intelligent design, argue that the idea of life being directly designed by a god or intelligence is at least as scientific as evolutionary theory, and should therefore be taught in public education. Such arguments against evolution have become widespread and include objections to evolution's evidence, methodology, plausibility, morality, and scientific acceptance. The scientific community does not recognize such objections as valid, pointing to detractors' misinterpretations of such things as the scientific method, evidence, and basic physical laws.

History

Evolutionary ideas came to prominence in the early 19th century with the theory (developed between 1800 and 1822) of the transmutation of species put forward by Jean-Baptiste Lamarck (1744–1829). At first the scientific community – and notably Georges Cuvier (1769–1832) – opposed the idea of evolution. The idea that laws control nature and society gained vast popular audiences with George Combe's The Constitution of Man of 1828 and with the anonymous Vestiges of the Natural History of Creation of 1844. When Charles Darwin published his 1859 book On the Origin of Species, he convinced most of the scientific community that new species arise through descent with modification in a branching pattern of divergence from common ancestors, but while most scientists accepted natural selection as a valid and empirically testable hypothesis, Darwin's view of it as the primary mechanism of evolution was rejected by some. Darwin's contemporaries eventually came to accept the transmutation of species based upon fossil evidence, and the X Club (operative from 1864 to 1893) formed to defend the concept of evolution against opposition from the church and wealthy amateurs. At that time the specific evolutionary mechanism which Darwin provided – natural selection – was actively disputed by scientists in favour of alternative theories such as Lamarckism and orthogenesis. Darwin's gradualistic account was also opposed by the ideas of saltationism and catastrophism.
Lord Kelvin led scientific opposition to gradualism on the basis of his thermodynamic calculations, which put the age of the Earth at between 24 and 400 million years; his views favoured a version of theistic evolution accelerated by divine guidance. Geological estimates disputed Kelvin's age of the Earth, and the geological approach gained strength in 1907 when radioactive dating of rocks revealed the Earth to be billions of years old. The specific hereditary mechanism which Darwin hypothesized, pangenesis, which supported gradualism, also lacked any supporting evidence and was disputed by the empirical tests (1869 onwards) of Francis Galton. Although evolution itself was scientifically unchallenged, uncertainties about the mechanism in the era of "the eclipse of Darwinism" persisted from the 1880s until the 1930s' inclusion of Mendelian inheritance and the rise of the modern evolutionary synthesis. The modern synthesis rose to universal acceptance among biologists with the help of new evidence, such as that from genetics, which confirmed Darwin's predictions and refuted the competing hypotheses. Protestantism, especially in America, broke out in "acrid polemics" and argument about evolution from 1860 to the 1870s—with the turning point possibly marked by the death of Louis Agassiz in 1873—and by 1880 a form of "Christian evolution" was becoming the consensus. In Britain, while publication of The Descent of Man by Darwin in 1871 reinvigorated debate from the previous decade, Sir Henry Chadwick (1920–2008) notes a steady acceptance of evolution "among more educated Christians" between 1860 and 1885. As a result, evolutionary theory was "both permissible and respectable" by 1876. Frederick Temple's lectures on The Relations between Religion and Science (1884), which argued that evolution was not "antagonistic" to religion, highlighted this trend. Temple's appointment as Archbishop of Canterbury in 1896 demonstrated the broad acceptance of evolution within the church hierarchy. For decades the Roman Catholic Church avoided officially rejecting evolution. However, the Church would rein in Catholics who proposed that evolution could be reconciled with the Bible, as this conflicted with the First Vatican Council's (1869–70) finding that everything was created out of nothing by God, and to deny that finding could lead to excommunication. In 1950 the encyclical Humani generis of Pope Pius XII mentioned evolution directly and officially for the first time. It allowed inquiry into the concept of humans coming from pre-existing living matter, but not questioning of Adam and Eve or of the creation of the soul. In 1996 Pope John Paul II labelled evolution "more than a hypothesis" and acknowledged the large body of work accumulated in its support, but reiterated that any attempt to give a material explanation of the human soul is "incompatible with the truth about man". Pope Benedict XVI in 2005 reiterated the conviction that human beings "are not some casual and meaningless product of evolution. Each of us is the result of a thought of God. Each of us is willed, each of us is loved, each of us is necessary." At the same time, Pope Benedict promoted the study of the relationship between the concepts of creation and evolution, based on the conviction that there cannot be a contradiction between faith and reason. Along these lines, the research project "Thomistic Evolution", run by a team of Dominican scholars, endeavours to reconcile the scientific evidence on evolution with the teaching of Thomas Aquinas (1225–1274).
Islamic views on evolution range from belief in literal creation (as implied in the Quran) to the position of many educated Muslims who subscribe to a version of theistic or guided evolution, in which the Quran reinforces rather than contradicts mainstream science. Acceptance of such ideas came relatively early, as medieval madrasas taught the ideas of Al-Jahiz, a Muslim scholar from the 9th century who proposed concepts similar to natural selection. However, acceptance of evolution remains low in the Muslim world, as prominent figures reject the materialist philosophy they see as underpinning evolution, regarding it as unsound on the question of human origins and as a denial of Allah. Further objections by Muslim authors and writers largely reflect those put forward in the Western world. Regardless of acceptance from major religious hierarchies, early religious objections to Darwin's theory remain in use in opposition to evolution. The idea that species change over time through natural processes and that different species share common ancestors seemed to contradict the Genesis account of Creation. Believers in Biblical infallibility attacked Darwinism as heretical. The natural theology of the early 19th century was typified by William Paley's 1802 version of the watchmaker analogy, an argument from design still deployed by the creationist movement. Natural theology included a range of ideas and arguments from the outset, and when Darwin's theory was published, ideas of theistic evolution were presented in which evolution is accepted as a secondary cause open to scientific investigation, while still holding belief in God as a first cause with a non-specified role in guiding evolution and creating humans. This position has been adopted by denominations of Christianity and Judaism in line with modernist theology which views the Bible and Torah as allegorical, thus removing the conflict between evolution and religion. However, in the 1920s Christian fundamentalists in the United States developed their literalist arguments against modernist theology into opposition to the teaching of evolution, with fears that Darwinism had led to German militarism and posed a threat to religion and morality. This opposition developed into the creation–evolution controversy, involving Christian literalists in the United States objecting to the teaching of evolution in public schools. Although early objectors dismissed evolution as contradicting their interpretation of the Bible, this argument was legally invalidated when the United States Supreme Court ruled in Epperson v. Arkansas in 1968 that forbidding the teaching of evolution on religious grounds violated the Establishment Clause. Since then creationists have developed more nuanced objections to evolution, alleging variously that it is unscientific, infringes on creationists' religious freedoms, or that the acceptance of evolution is a religious stance. Creationists have appealed to democratic principles of fairness, arguing that evolution is controversial and that science classrooms should therefore "Teach the Controversy". These objections to evolution culminated in the intelligent-design movement of the 1990s and early 2000s, which unsuccessfully attempted to present itself as a scientific alternative to evolution.

Defining evolution

A major source of confusion and ambiguity in any creation–evolution debate arises from the definition of evolution itself. In the context of biology, evolution is genetic change in populations of organisms over successive generations.
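To make that biological definition concrete: a population "evolves" whenever allele frequencies shift between generations, whether by selection or by drift. The toy simulation below is a sketch only; the population size, fitness values, and the haploid Wright-Fisher-style resampling are all illustrative assumptions, not something this article specifies.

```python
import random

def simulate_generations(pop_size=500, p0=0.5, fitness_A=1.05, fitness_a=1.0,
                         generations=50, seed=42):
    """Toy Wright-Fisher model: track the frequency of allele A when it
    confers a slight (assumed) reproductive advantage over allele a."""
    random.seed(seed)
    p = p0  # frequency of allele A
    history = [p]
    for _ in range(generations):
        # Selection re-weights the chance each offspring inherits A ...
        w_bar = p * fitness_A + (1 - p) * fitness_a
        p_sel = p * fitness_A / w_bar
        # ... and drift resamples a finite population binomially.
        copies = sum(random.random() < p_sel for _ in range(pop_size))
        p = copies / pop_size
        history.append(p)
    return history

freqs = simulate_generations()
print(f"allele A frequency: {freqs[0]:.2f} -> {freqs[-1]:.2f}")
# The population's genetic composition changes across generations --
# evolution under the biological definition given above.
```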
The word also has a number of different meanings in different fields, from evolutionary computation to molecular evolution to sociocultural evolution to stellar and galactic evolution. In colloquial contexts, evolution can refer to any sort of "progressive" development or gradual improvement, or to a process that results in greater quality or complexity. When misapplied to biological evolution this common meaning leads to frequent misunderstandings. For example, the idea of devolution ("backwards" evolution) is a result of erroneously assuming that evolution is directional or has a specific goal in mind (cf. orthogenesis). In reality, the evolution of a biological organism has no "objective"; it reflects only the increasing ability of successive generations to survive and reproduce in their environment, and increased suitability is defined only in relation to that environment. Biologists do not regard any one species (such as humans) as more highly evolved or advanced than another. Certain sources have been criticized for indicating otherwise, due to a tendency to evaluate nonhuman organisms according to anthropocentric standards rather than more objective ones. Evolution also does not require that organisms become more complex. Although the biological development of different forms of life shows an apparent trend towards the evolution of biological complexity, there is a question as to whether this appearance of increased complexity is real, or whether it comes from neglecting the fact that the majority of life on Earth has always consisted of prokaryotes. In this view, complexity is not a necessary consequence of evolution; rather, specific circumstances of evolution on Earth frequently made greater complexity advantageous and thus naturally selected for. Depending on the situation, organisms' complexity can increase, decrease, or stay the same, and all three of these trends have been observed in studies of evolution. Creationist sources frequently define evolution according to its colloquial rather than its scientific meaning. As a result, many attempts to rebut evolution do not address the findings of evolutionary biology (see straw-man argument). This also means that advocates of creationism and evolutionary biologists often simply speak past each other.

Scientific acceptance

Status as a theory

Critics of evolution assert that evolution is "just a theory", which emphasizes that scientific theories are never absolute, or misleadingly presents it as a matter of opinion rather than of fact or evidence. This reflects a difference in the meaning of theory in a scientific context: whereas in colloquial speech a theory is a conjecture or guess, in science a theory is an explanation whose predictions have been verified by experiments or other evidence. Evolutionary theory refers to an explanation for the diversity of species and their ancestry which has met extremely high standards of scientific evidence. An example of evolution as theory is the modern synthesis of Darwinian natural selection and Mendelian inheritance. As with any scientific theory, the modern synthesis is constantly debated, tested, and refined by scientists, but there is an overwhelming consensus in the scientific community that it remains the only robust model that accounts for the known facts concerning evolution. Critics also state that evolution is not a fact.
In science a fact is a verified empirical observation, while in colloquial contexts a fact can simply refer to anything for which there is overwhelming evidence. For example, in common usage theories such as "the Earth revolves around the Sun" and "objects fall due to gravity" may be referred to as "facts", even though they are purely theoretical. From a scientific standpoint, therefore, evolution may be called a "fact" for the same reason that gravity can: under the scientific definition, evolution is an observable process that occurs whenever a population of organisms genetically changes over time. Under the colloquial definition, the theory of evolution can also be called a fact, referring to this theory's well-established nature. Thus, evolution is widely considered both a theory and a fact by scientists. Similar confusion is involved in objections that evolution is "unproven", since no theory in science is known to be absolutely true, only verified by empirical evidence. This distinction is an important one in philosophy of science, as it relates to the lack of absolute certainty in all empirical claims, not just evolution. Strict proof is possible only in formal sciences such as logic and mathematics, not natural sciences (where terms such as "validated" or "corroborated" are more appropriate). Thus, to say that evolution is not proven is trivially true, but no more an indictment of evolution than calling it a "theory". The confusion arises in that the colloquial meaning of proof is simply "compelling evidence", in which case scientists would indeed consider evolution "proven".

Degree of acceptance

An objection often made regarding the teaching of evolution is that evolution is controversial or contentious. Unlike past creationist arguments which sought to abolish the teaching of evolution altogether, this argument claims that evolution should be presented alongside alternative views since it is controversial, and that students should be allowed to evaluate and choose between the options on their own. This objection forms the basis of the "Teach the Controversy" campaign by the Discovery Institute, a think tank based in Seattle, Washington, to promote the teaching of intelligent design in U.S. public schools. This goal followed the Institute's "wedge strategy", an attempt to gradually undermine evolution and ultimately to "reverse the stifling dominance of the materialist worldview, and to replace it with a science consonant with Christian and theistic convictions." Several other attempts were made to insert intelligent design or creationism into the U.S. public school curriculum, including the failed Santorum Amendment in 2001. Scientists and U.S. courts have rejected this objection on the grounds that science is not based on appeals to popularity, but on evidence. The scientific consensus of biologists determines what is considered acceptable science, not popular opinion or fairness, and although evolution is controversial in the public arena, it is entirely uncontroversial among experts in the field. In response, creationists have disputed the level of scientific support for evolution. As of August 2008, the Discovery Institute had gathered the signatures of 761 scientists for A Scientific Dissent From Darwinism, in order to show that there are a number of scientists who dispute what they refer to as "Darwinian evolution".
This statement did not profess outright disbelief in evolution, but expressed skepticism as to the ability of "random mutation and natural selection to account for the complexity of life." Several counter-petitions have been launched in turn, including A Scientific Support for Darwinism, which gathered over 7,000 signatures in four days, and Project Steve, a tongue-in-cheek petition that had gathered the signatures of 1,497 evolution-supporting scientists named "Steve" (or any similar variation thereof—Stephen, Stephanie, Esteban, etc.) as of May 22, 2024. Creationists have argued for over a century that evolution is a "theory in crisis" that will soon be overturned, based on objections that it lacks reliable evidence or violates natural laws. These objections have been rejected by most scientists, as have claims that intelligent design, or any other creationist explanation, meets the basic scientific standards that would be required to make them scientific alternatives to evolution. It is also argued that even if evidence against evolution exists, it is a false dilemma to characterize this as evidence for intelligent design. A similar objection to evolution is that certain scientific authorities—mainly pre-modern ones—have doubted or rejected evolution. Most commonly, it is argued that Darwin "recanted" on his deathbed, a false anecdote originating from Lady Hope's story. These objections are generally rejected as appeals to authority.

Scientific status

A common neo-creationist objection to evolution is that evolution does not adhere to normal scientific standards—that it is not genuinely scientific. It is argued that evolutionary biology does not follow the scientific method and therefore should not be taught in science classes, or at least should be taught alongside other views (i.e., creationism). These objections often concern the very nature of evolutionary theory, the scientific method, and the philosophy of science.

Religious nature

Creationists commonly argue that "evolution is a religion; it is not a science." The purpose of this criticism is to reframe the debate from one between science (evolution) and religion (creationism) to one between two religious beliefs—or even to argue that evolution is religious while intelligent design is not. Those who oppose evolution frequently refer to supporters of evolution as "evolutionists" or "Darwinists". The arguments for evolution being a religion generally amount to arguments by analogy: it is argued that evolution and religion have one or more things in common, and that therefore evolution is a religion. Examples of claims made in such arguments are statements that evolution is based on faith, and that supporters of evolution dogmatically reject alternative suggestions out of hand. These claims have become more popular in recent years as the neo-creationist movement has sought to distance itself from religion, giving it more reason to make use of a seemingly anti-religious analogy. Supporters of evolution have argued in response that no scientist's claims are treated as sacrosanct, as shown by the aspects of Darwin's theory that have been rejected or revised by scientists over the years to form first neo-Darwinism and later the modern evolutionary synthesis. The claim that evolution relies on faith is likewise rejected on the grounds that evolution has strong supporting evidence, and therefore does not require faith.
The argument that evolution is religious has been rejected in general on the grounds that religion is not defined by how dogmatic or zealous its adherents are, but by its spiritual or supernatural beliefs. Evolution is neither dogmatic nor based on faith, and supporters accuse creationists of equivocating between the strict definition of religion and its colloquial usage to refer to anything that is enthusiastically or dogmatically engaged in. United States courts have also rejected this objection: Assuming for the purposes of argument, however, that evolution is a religion or religious tenet, the remedy is to stop the teaching of evolution, not establish another religion in opposition to it. Yet it is clearly established in the case law, and perhaps also in common sense, that evolution is not a religion and that teaching evolution does not violate the Establishment Clause, Epperson v. Arkansas, supra, Willoughby v. Stever, No. 15574-75 (D.D.C. May 18, 1973); aff'd, 504 F.2d 271 (D.C. Cir. 1974), cert. denied, 420 U.S. 924 (1975); Wright v. Houston Indep. School Dist., 366 F. Supp. 1208 (S.D. Tex. 1972), aff'd, 486 F.2d 137 (5th Cir. 1973), cert. denied, 417 U.S. 969 (1974). A related claim is that evolution is atheistic (see the Atheism section below); creationists sometimes merge the two claims and describe evolution as an "atheistic religion" (cf. humanism). This argument against evolution is also frequently generalized into a criticism of all science: it is argued that "science is an atheistic religion", on the grounds that its methodological naturalism is as unproven, and thus as "faith-based", as the supernatural and theistic beliefs of creationism.

Unfalsifiability

A statement is considered falsifiable if there is an observation or a test that could be made that would demonstrate that the statement is false. Statements that are not falsifiable cannot be examined by scientific investigation, since they permit no tests that evaluate their accuracy. Creationists such as Henry M. Morris have claimed that any observation can be fitted into the evolutionary framework, so it is impossible to demonstrate that evolution is wrong, and that evolution is therefore non-scientific. Evolution could in fact be falsified by many conceivable lines of evidence, such as the fossil record showing no change over time, confirmation that mutations are prevented from accumulating in a population, or observations of organisms being created supernaturally or spontaneously. J. B. S. Haldane, when asked what hypothetical evidence could disprove evolution, replied "fossil rabbits in the Precambrian era." Numerous other potential ways to falsify evolution have also been proposed. For example, the fact that humans have one fewer pair of chromosomes than the great apes offered a testable hypothesis involving the fusion or splitting of chromosomes from a common ancestor. The fusion hypothesis was confirmed in 2005 by the discovery that human chromosome 2 is homologous with a fusion of two chromosomes that remain separate in other primates. Extra, inactive telomeres and centromeres remain on human chromosome 2 as a result of the fusion. The assertion of common descent could also have been disproven with the invention of DNA sequencing methods: if common descent is true, human DNA should be far more similar to that of chimpanzees and other great apes than to that of other mammals; if not, then common descent is falsified. DNA analysis has shown that humans and chimpanzees share a large percentage of their DNA (between 95% and 99.4%, depending on the measure).
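The spread in those percent-identity figures comes from how similarity is measured, for instance whether insertions and deletions are counted. As a simplified illustration only, the sketch below computes identity for two pre-aligned fragments; the fragment sequences are hypothetical, invented for the example.

```python
def percent_identity(seq1: str, seq2: str) -> float:
    """Percent identity of two pre-aligned DNA sequences of equal length.

    Real genome comparisons first require alignment and must decide how
    to score insertions/deletions, which is one reason published
    human-chimp figures range from about 95% to 99.4%.
    """
    if len(seq1) != len(seq2):
        raise ValueError("sequences must be aligned to equal length")
    matches = sum(a == b for a, b in zip(seq1, seq2))
    return 100.0 * matches / len(seq1)

# Hypothetical aligned fragments, for illustration only:
human_fragment = "ATGGCCCTGTGGATGCGCCTC"
chimp_fragment = "ATGGCCCTGTGGATGCGCATC"
print(f"{percent_identity(human_fragment, chimp_fragment):.1f}% identical")
```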
Also, the evolution of chimpanzees and humans from a common ancestor predicts a (geologically) recent common ancestor, and numerous transitional fossils have since been found. Hence, human evolution has passed several falsifiable tests. Many of Darwin's ideas and assertions of fact have been falsified as evolutionary science has developed, but these amendments and falsifications have uniformly confirmed his central concepts. In contrast, creationist explanations involving the direct intervention of the supernatural in the physical world are not falsifiable, because any result of an experiment or investigation could be the unpredictable action of an omnipotent deity. In 1976, the philosopher Karl Popper said that "Darwinism is not a testable scientific theory but a metaphysical research programme." He later changed his mind, though he maintained that Darwin's "theory of natural selection is difficult to test" compared with theories in other areas of science. In his 1982 book, Abusing Science: The Case Against Creationism, philosopher of science Philip Kitcher specifically addresses the "falsifiability" question by taking into account notable philosophical critiques of Popper by Carl Gustav Hempel and Willard Van Orman Quine, and provides a definition of theory other than as a set of falsifiable statements. As Kitcher points out, if one took a strictly Popperian view of "theory", observations of Uranus when it was first discovered in 1781 would have "falsified" Isaac Newton's celestial mechanics. Rather, people suggested that another planet influenced Uranus' orbit—and this prediction was indeed eventually confirmed. Kitcher agrees with Popper that "there is surely something right in the idea that a science can succeed only if it can fail." But he insists that we view scientific theories as consisting of an "elaborate collection of statements", some of which are not falsifiable, while others—what he calls "auxiliary hypotheses"—are.

Tautological nature

A claim related to the supposed unfalsifiability of evolution is that natural selection is tautological. Specifically, it is often argued that the phrase "survival of the fittest" is a tautology, in that fitness is defined as the ability to survive and reproduce. This phrase was first used by Herbert Spencer in 1864 but is rarely used by biologists. Additionally, fitness is more accurately defined as the state of possessing traits that make survival more likely; this definition, unlike simple "survivability", avoids being trivially true. Similarly, it is argued that evolutionary theory is circular reasoning, in that evidence is interpreted as supporting evolution, but evolution is required to interpret the evidence. An example of this is the claim that geological strata are dated through the fossils they hold, but that fossils are in turn dated by the strata they are in. However, in most cases strata are not dated by their fossils, but by their position relative to other strata and by radiometric dating, and most strata were dated before the theory of evolution was formulated.

Evidence

Objections to the fact that evolution occurs tend to focus on specific interpretations of the evidence.

Lack of observation

A common claim of creationists is that evolution has never been observed. Challenges to such objections often come down to debates over how evolution is defined (see the Defining evolution section above). Under the conventional biological definition of evolution, it is a simple matter to observe evolution occurring.
Evolutionary processes, in the form of populations changing their genetic composition from generation to generation, have been observed in different scientific contexts, including the evolution of fruit flies, mice, and bacteria in the laboratory, and of tilapia in the field. Such studies of experimental evolution, particularly those using microorganisms, are now providing important insights into how evolution occurs, especially in the case of antibiotic resistance. In response to such examples, creationists say there are two major subdivisions of evolution to be considered, microevolution and macroevolution, and question whether macroevolution has been physically observed to occur. Most creationist organizations do not dispute the occurrence of short-term, relatively minor evolutionary changes, such as those observed even in dog breeding. Rather, they dispute the occurrence of major evolutionary changes over long periods of time, which by definition cannot be directly observed, only inferred from microevolutionary processes and the traces of macroevolutionary ones. As biologists define macroevolution, however, both microevolution and macroevolution have been observed. Speciations, for example, have been directly observed many times. Additionally, the modern evolutionary synthesis draws no distinction in the processes described by the theory of evolution when considering macroevolution and microevolution: the former is simply evolution at or above the species level, and the latter below it. An example of this is ring species. Additionally, past macroevolution can be inferred from historical traces. Transitional fossils, for example, provide plausible links between several different groups of organisms, such as Archaeopteryx linking birds and non-avian dinosaurs, or Tiktaalik linking fish and limbed amphibians. Creationists dispute such examples, variously asserting that such fossils are hoaxes, that they belong exclusively to one group or the other, or that there should be far more evidence of obvious transitional species. Darwin himself found the paucity of transitional species to be one of the greatest weaknesses of his theory: Why then is not every geological formation and every stratum full of such intermediate links? Geology assuredly does not reveal any such finely graduated organic chain; and this, perhaps, is the most obvious and gravest objection which can be urged against my theory. The explanation lies, as I believe, in the extreme imperfection of the geological record. Darwin appealed to the limited collections then available, the extreme lengths of time involved, and different rates of change, with some living species differing very little from fossils of the Silurian period. In later editions he added "that the periods during which species have been undergoing modification, though very long as measured by years, have probably been short in comparison with the periods during which these same species remained without undergoing any change." The number of clear transitional fossils has increased enormously since Darwin's day, and this problem has been largely resolved with the advent of the theory of punctuated equilibrium, which predicts a primarily stable fossil record broken up by occasional major speciations.
As more and more compelling direct evidence for inter-species and species-to-species evolution has been gathered, creationists have redefined their understanding of what amounts to "created kinds", and have continued to insist that more dramatic demonstrations of evolution be experimentally produced. One version of this objection is "Were you there?", popularized by young Earth creationist Ken Ham. It argues that because no one except God could directly observe events in the distant past, scientific claims about them are just speculation or "story-telling". DNA sequences of the genomes of organisms allow an independent test of their predicted relationships, since species which diverged more recently will be more closely related genetically than species which are more distantly related; such phylogenetic trees show a hierarchical organization within the tree of life, as predicted by common descent. In fields such as astrophysics or meteorology, where direct observation or laboratory experiments are difficult or impossible, the scientific method instead relies on observation and logical inference. In such fields, the test of falsifiability is satisfied when a theory is used to predict the results of new observations. When such observations contradict a theory's predictions, the theory may be revised or discarded if an alternative better explains the observed facts. For example, Newton's theory of gravitation was replaced by Albert Einstein's theory of general relativity when the latter was observed to more precisely predict the orbit of Mercury.

Unreliable evidence

A related objection is that evolution is based on unreliable evidence, the claim being that evolution is not even well-evidenced. Typically, this is based on the argument that evolution's evidence is full of frauds and hoaxes, that current evidence for evolution is likely to be overturned as some past evidence has been, or that certain types of evidence are inconsistent and dubious. Arguments against the reliability of evolution's evidence are thus often based on analyzing the history of evolutionary thought or the history of science in general. Creationists point out that in the past, major scientific revolutions have overturned theories that were at the time considered near-certain. They thus claim that current evolutionary theory is likely to undergo such a revolution in the future, on the basis that it is a "theory in crisis" for one reason or another. Critics of evolution commonly appeal to past scientific hoaxes such as the Piltdown Man forgery. It is argued that because scientists have been mistaken and deceived in the past about evidence for various aspects of evolution, the current evidence for evolution is likely to be based on fraud and error as well. Much of the evidence for evolution has been accused of being fraudulent at various times, including Archaeopteryx, peppered moth melanism, and Darwin's finches; these claims have been subsequently refuted. It has also been claimed that certain former pieces of evidence for evolution which are now considered out-of-date and erroneous, such as Ernst Haeckel's 19th-century comparative drawings of embryos, used to illustrate his recapitulation theory ("ontogeny recapitulates phylogeny"), were not merely errors but frauds. Molecular biologist Jonathan Wells criticizes biology textbooks by alleging that they continue to reproduce such evidence after it has been debunked.
In response, the National Center for Science Education notes that none of the textbooks reviewed by Wells makes the claimed error: Haeckel's drawings are shown in a historical context with discussion of why they are wrong, and the accurate modern drawings and photos used in the textbooks are misrepresented by Wells.

Unreliable chronology

Creationists claim that evolution relies on certain types of evidence that do not give reliable information about the past. For example, it is argued that radiometric dating, the technique of evaluating a material's age based on the radioactive decay rates of certain isotopes, generates inconsistent and thus unreliable results. Radiocarbon dating, based on the carbon-14 isotope, has been particularly criticized. It is argued that radiometric dating relies on a number of unwarranted assumptions, such as the principle of uniformitarianism, consistent decay rates, or rocks acting as closed systems. Such arguments have been dismissed by scientists on the grounds that independent methods have confirmed the reliability of radiometric dating as a whole; additionally, different radiometric dating methods and techniques have independently confirmed each other's results. Another form of this objection is that fossil evidence is not reliable. This is based on a much wider range of claims, including that there are too many "gaps" in the fossil record, that fossil-dating is circular (see the Unfalsifiability section above), or that certain fossils, such as polystrate fossils, are seemingly "out of place". Examinations by geologists have found polystrate fossils to be consistent with in situ formation. It is argued that certain features of the fossil record support creationism's catastrophism (cf. the Great Flood) rather than gradualism or punctuated equilibrium, which some assert is an ad hoc theory invented to explain the fossil gaps.

Plausibility

Improbability

A common objection to evolution is that it is simply too unlikely for life, in its complexity and apparent "design", to have arisen "by chance". It is argued that the odds of life having arisen without a deliberate intelligence guiding it are so incredibly low that it is unreasonable not to infer an intelligent designer from the natural world, and specifically from the diversity of life (see the sketch below for why mainstream biologists reject this probability reasoning). A more extreme version of this argument is that evolution cannot create complex structures (see the Creation of complex structures section below). The idea that it is simply too implausible for life to have evolved is often wrongly encapsulated with a quotation that the "probability of life originating on Earth is no greater than the chance that a hurricane, sweeping through a scrapyard, would have the luck to assemble a Boeing 747"—a claim attributed to astrophysicist Fred Hoyle and known as Hoyle's fallacy. Hoyle was a Darwinist, atheist and anti-theist, but advocated the theory of panspermia, in which abiogenesis begins in outer space and primitive life on Earth is held to have arrived via natural dispersion. Views superficially similar to, but unrelated to, Hoyle's are thus invariably justified with arguments from analogy. The basic idea of this argument for a designer is the teleological argument, an argument for the existence of God based on the perceived order or purposefulness of the universe.
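As the forward reference above indicates, the standard reply to "too unlikely" arguments is that evolution is cumulative selection, not single-step chance. The sketch below is in the spirit of Richard Dawkins' well-known "weasel" illustration; the target phrase, mutation rate, and brood size are conventional illustrative assumptions, and the fixed target is itself a simplification that real evolution does not have.

```python
import random
import string

# Cumulative selection: keep the best mutant each generation. Blind
# single-step assembly of the same 28-character phrase would face odds
# of 1 in 27**28.

ALPHABET = string.ascii_uppercase + " "
TARGET = "METHINKS IT IS LIKE A WEASEL"

def score(candidate: str) -> int:
    """Number of positions matching the target."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def weasel(offspring_per_gen=100, mutation_rate=0.05, seed=1):
    random.seed(seed)
    parent = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
    generation = 0
    while parent != TARGET:
        generation += 1
        brood = [parent]  # retaining the parent keeps progress monotonic
        for _ in range(offspring_per_gen):
            child = "".join(
                random.choice(ALPHABET) if random.random() < mutation_rate else c
                for c in parent
            )
            brood.append(child)
        parent = max(brood, key=score)
    return generation

print("generations needed:", weasel())        # typically a few hundred
print("single-step search space:", 27**len(TARGET))
```

Cumulative selection reaches the target in a few hundred generations, while single-step chance would need on the order of 27^28 trials; this contrast, not an appeal to raw luck, is what evolutionary explanations rely on.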
A common way of using this as an objection to evolution is by appealing to the 18th-century philosopher William Paley's watchmaker analogy, which argues that certain natural phenomena are analogous to a watch (in that they are ordered, or complex, or purposeful), which means that, like a watch, they must have been designed by a "watchmaker"—an intelligent agent. This argument forms the core of intelligent design, a neo-creationist movement seeking to establish certain variants of the design argument as legitimate science, rather than as philosophy or theology, and to have them taught alongside evolution. Supporters of evolution generally respond by arguing that this objection is simply an argument from lack of imagination, or argument from incredulity: a certain explanation is seen as being counterintuitive, and therefore an alternate, more intuitive explanation is appealed to instead. In actuality, evolution is not based on "chance", but on predictable chemical interactions: natural processes, rather than supernatural beings, are the "designer". Although the process involves some random elements, it is the non-random selection of survival-enhancing genes that drives the evolution of complex and ordered patterns. The fact that the results are ordered and seem "designed" is no more evidence for a supernatural intelligence than the appearance of complex non-living phenomena (e.g. snowflakes). It is also argued that there is insufficient evidence to make statements about the plausibility or implausibility of abiogenesis, that certain structures demonstrate poor design, and that the implausibility of life evolving exactly as it did is no more evidence for an intelligence than the implausibility of a deck of cards being shuffled and dealt in a certain random order. It has also been noted that arguments against some form of life arising "by chance" are really objections to nontheistic abiogenesis, not to evolution. Indeed, arguments against "evolution" are often based on the misconception that abiogenesis is a component of, or necessary precursor to, evolution. Similar objections sometimes conflate the Big Bang with evolution. Christian apologist and philosopher Alvin Plantinga, who believes evolution must have been guided if it occurred, has formalized and revised the improbability argument as the evolutionary argument against naturalism, which asserts that it is irrational to reject a supernatural, intelligent creator because the apparent probability of certain faculties evolving is so low. Specifically, Plantinga claims that evolution cannot account for the rise of reliable reasoning faculties. Plantinga argues that whereas a God would be expected to create beings with reliable reasoning faculties, evolution would be just as likely to lead to unreliable ones, meaning that if evolution is true, it is irrational to trust whatever reasoning one relies on to conclude that it is true. This novel epistemological argument has been criticized similarly to other probabilistic design arguments. It has also been argued that rationality, if conducive to survival, is more likely to be selected for than irrationality, making the natural development of reliable cognitive faculties more likely than unreliable ones. A related argument against evolution is that most mutations are harmful. However, the vast majority of mutations are neutral, and the minority of mutations which are beneficial or harmful are often situational; a mutation that is harmful in one environment may be helpful in another, as the sketch below illustrates.
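A tiny illustration of that last point, with made-up numbers loosely modeled on antibiotic-resistance mutations (which often carry a metabolic cost): whether a given mutation counts as "harmful" flips with the environment. All fitness values here are assumptions chosen for the example.

```python
# Relative fitness of each genotype in each environment (invented values).
FITNESS = {
    ("wild_type", "no_antibiotic"): 1.00,
    ("resistant", "no_antibiotic"): 0.97,  # slight cost: mildly harmful
    ("wild_type", "antibiotic"):    0.10,  # susceptible: strongly harmful
    ("resistant", "antibiotic"):    0.90,  # resistance pays off
}

def favored(environment: str) -> str:
    """Return the genotype selection favors in a given environment."""
    return max(("wild_type", "resistant"),
               key=lambda g: FITNESS[(g, environment)])

for env in ("no_antibiotic", "antibiotic"):
    print(f"{env:14s} -> selection favors {favored(env)}")
# The same resistance mutation is slightly deleterious without the
# antibiotic and strongly beneficial with it.
```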
Unexplained aspects of the natural world

In addition to complex structures and systems, among the phenomena that critics variously claim evolution cannot explain are consciousness, hominid intelligence, instincts, emotions, metamorphosis, photosynthesis, homosexuality, music, language, religion, morality, and altruism (see altruism in animals). Most of these, such as hominid intelligence, instinct, emotion, photosynthesis, language, and altruism, have been well explained by evolution, while others remain mysterious or have only preliminary explanations. No alternative explanation has been able to adequately explain the biological origin of these phenomena either. Creationists also argue against evolution on the grounds that it cannot explain certain non-evolutionary processes, such as abiogenesis, the Big Bang, or the meaning of life. In such instances, evolution is being redefined to refer to the entire history of the universe, and it is argued that if one aspect of the universe is seemingly inexplicable, the entire body of scientific theories must be baseless. At this point, objections leave the arena of evolutionary biology and become general scientific or philosophical disputes. Astronomers Fred Hoyle and Chandra Wickramasinghe have argued in favor of cosmic ancestry, and against abiogenesis and evolution.

Impossibility

This class of objections is more radical than the above, claiming that a major aspect of evolution is not merely unscientific or implausible, but impossible, because it contradicts some other law of nature or is constrained in such a way that it cannot produce the biological diversity of the world.

Creation of complex structures

Modern evolutionary theory posits that all biological systems must have evolved incrementally, through a combination of natural selection and genetic drift. Both Darwin and his early detractors recognized the potential problems that would arise for his theory of natural selection if the lineage of organs and other biological features could not be accounted for by gradual, step-by-step changes over successive generations; if the intermediary stages between an initial organ and the organ it will become are not all improvements upon the original, it will be impossible for the later organ to develop by the process of natural selection alone. Complex organs such as the eye had been presented by William Paley as exemplifying the need for design by God, and anticipating early criticisms that the evolution of the eye and other complex organs seemed impossible, Darwin noted that: [R]eason tells me, that if numerous gradations from a perfect and complex eye to one very imperfect and simple, each grade being useful to its possessor, can be shown to exist; if further, the eye does vary ever so slightly, and the variations be inherited, which is certainly the case; and if any variation or modification in the organ be ever useful to an animal under changing conditions of life, then the difficulty of believing that a perfect and complex eye could be formed by natural selection, though insuperable by our imagination, can hardly be considered real. Similarly, ethologist and evolutionary biologist Richard Dawkins said on the topic of the evolution of the feather, in an interview for the television program The Atheism Tapes: There's got to be a series of advantages all the way in the feather. If you can't think of one, then that's your problem, not natural selection's problem...
It's perfectly possible feathers began as fluffy extensions of reptilian scales to act as insulators... The earliest feathers might have been a different approach to hairiness among reptiles keeping warm. Creationist arguments have been made such as "What use is half an eye?" and "What use is half a wing?". Research has confirmed that the natural evolution of the eye and other intricate organs is entirely feasible. Creationist claims have persisted that such complexity evolving without a designer is inconceivable and this objection to evolution has been refined in recent years as the more sophisticated irreducible complexity argument of the intelligent design movement, formulated by Michael Behe. Biochemist Michael Behe has argued that current evolutionary theory cannot account for certain complex structures, particularly in microbiology. On this basis, Behe argues that such structures were "purposely arranged by an intelligent agent". Irreducible complexity is the idea that certain biological systems cannot be broken down into their constituent parts and remain functional, and therefore that they could not have evolved naturally from less complex or complete systems. Whereas past arguments of this nature generally relied on macroscopic organs, Behe's primary examples of irreducible complexity have been cellular and biochemical in nature. He has argued that the components of systems such as the blood clotting cascade, the immune system, and the bacterial flagellum are so complex and interdependent that they could not have evolved from simpler systems. In the years since Behe proposed irreducible complexity, new developments and advances in biology such as an improved understanding of the evolution of flagella, have already undermined these arguments. The idea that seemingly irreducibly complex systems cannot evolve has been refuted through evolutionary mechanisms, such as exaptation (the adaptation of organs for entirely new functions) and the use of "scaffolding", which are initially necessary features of a system that later degenerate when they are no longer required. Potential evolutionary pathways have been provided for all of the systems Behe used as examples of irreducible complexity. Cambrian explosion complexity argument The Cambrian explosion was the relatively rapid appearance around of most major animal phyla as demonstrated in the fossil record, and many more phyla now extinct. This was accompanied by major diversification of other organisms. Prior to the Cambrian explosion most organisms were simple, composed of individual cells occasionally organized into colonies. Over the following 70 or 80 million years the rate of diversification accelerated by an order of magnitude and the diversity of life began to resemble that of today, although they did not resemble the species of today. The basic problem with this is that natural selection calls for the slow accumulation of changes, where a new phylum would take longer than a new class which would take longer than a new order, which would take longer than a new family, which would take longer than a new genus would take longer than emergence of a new species but the apparent occurrence of high-level taxa without precedents is perhaps implying unusual evolutionary mechanisms. 
There is general consensus that many factors helped trigger the rise of new phyla, but no generally accepted consensus about their combination, and the Cambrian explosion continues to be an area of controversy and research: why the diversification was so rapid, why it occurred at the phylum level, why so many phyla appeared then and none since, and even whether the apparent fossil record is accurate. Some recent advances suggest that there is no clearly definable "Cambrian explosion" event in the fossil record, but rather a progression of transitional radiations starting in the Ediacaran period and continuing at a similar rate into the Cambrian. Regarding the commonly cited rise in oxygen (the Great Oxidation Event), biologist PZ Myers summarizes one view: "What it was was environmental changes, in particular the bioturbation revolution caused by the evolution of worms that released buried nutrients, and the steadily increasing oxygen content of the atmosphere that allowed those nutrients to fuel growth; ecological competition, or a kind of arms race, that gave a distinct selective advantage to novelties that allowed species to occupy new niches; and the evolution of developmental mechanisms that enabled multicellular organisms to generate new morphotypes readily." The increase in molecular oxygen (O2) also may have allowed the formation of the protective ozone layer (O3) that helps shield Earth from lethal UV radiation from the Sun.

Creation of information

A more recent creationist objection to evolution is that evolutionary mechanisms such as mutation cannot generate new information. Creationists such as William A. Dembski, Werner Gitt, and Lee Spetner have attempted to use information theory to dispute evolution. Dembski has argued that life demonstrates specified complexity, and proposed a law of conservation of information under which extremely improbable "complex specified information" could be conveyed by natural means but never originate without an intelligent agent. Gitt asserted that information is an intrinsic characteristic of life and that its analysis demonstrates the mind and will of a Creator. These claims have been widely rejected by the scientific community, which notes that new information is regularly generated in evolution whenever a novel mutation or gene duplication arises. Dramatic examples of entirely new and unique traits arising through mutation have been observed in recent years, such as the evolution of nylon-eating bacteria, which developed new enzymes to efficiently digest a material that never existed before the modern era. There is no need to account for the creation of information when an organism is considered together with the environment it evolved in: the information in the genome forms a record of how it was possible to survive in a particular environment, gathered from the environment through trial and error as mutating organisms either reproduce or fail. The concept of specified complexity is widely regarded as mathematically unsound and has not been the basis for further independent work in information theory, in the theory of complex systems, or in biology.

Violation of the second law of thermodynamics

Another objection is that evolution violates the second law of thermodynamics. The law states that "the entropy of an isolated system not in equilibrium will tend to increase over time, approaching a maximum value at equilibrium".
In other words, an isolated system's entropy (a measure of the dispersal of energy in a physical system such that it is not available to do mechanical work) will tend to increase or stay the same, not decrease. Creationists argue that evolution violates this physical law by requiring an increase in order (i.e., a decrease in entropy). The claim has been criticized for ignoring that the second law only applies to isolated systems. Organisms are open systems, as they constantly exchange energy and matter with their environment: for example, animals eat food and excrete waste, and radiate and absorb heat. It is argued that the Sun-Earth-space system does not violate the second law because the enormous increase in entropy due to the Sun and Earth radiating into space dwarfs the local decrease in entropy caused by the existence and evolution of self-organizing life. Since the second law of thermodynamics has a precise mathematical definition, this argument can be analyzed quantitatively. This was done by physicist Daniel F. Styer, who concluded: "Quantitative estimates of the entropy involved in biological evolution demonstrate that there is no conflict between evolution and the second law of thermodynamics." In a published letter to the editor of The Mathematical Intelligencer titled "How anti-evolutionists abuse mathematics", mathematician Jason Rosenhouse stated: The fact is that natural forces routinely lead to local decreases in entropy. Water freezes into ice and fertilised eggs turn into babies. Plants use sunlight to convert carbon dioxide and water into sugar and oxygen, but [we do] not invoke divine intervention to explain the process ... thermodynamics offers nothing to dampen our confidence in Darwinism.

Moral implications

Other common objections to evolution allege that evolution leads to objectionable results, such as eugenics and Nazi racial theory. It is argued that the teaching of evolution degrades values, undermines morals, and fosters irreligion or atheism. These may be considered appeals to consequences (a form of logical fallacy), as the potential ramifications of belief in evolutionary theory have nothing to do with its truth.

Humans as animals

In biological classification, humans are animals, a basic point which has been known for more than 2,000 years: Aristotle described man as a political animal, and Porphyry defined man as a rational animal, a definition accepted by the Scholastic philosophers in the Middle Ages. The creationist J. Rendle-Short asserted in Creation magazine that if people are taught evolution they can be expected to behave like animals; since animals behave in all sorts of different ways, this is meaningless. In evolutionary terms, humans are able to acquire knowledge and change their behaviour to meet social standards, so humans behave in the manner of other humans.

Social effects

In 1917, Vernon Kellogg published Headquarters Nights: A Record of Conversations and Experiences at the Headquarters of the German Army in France and Belgium, which asserted that German intellectuals were totally committed to might-makes-right due to "whole-hearted acceptance of the worst of Neo-Darwinism, the Allmacht of natural selection applied rigorously to human life and society and Kultur."
This strongly influenced the politician William Jennings Bryan, who saw Darwinism as a moral threat to America and campaigned against evolutionary theory; his campaign culminated in the Scopes Trial, which effectively prevented the teaching of evolution in most public schools until the 1960s. R. Albert Mohler, Jr., president of the Southern Baptist Theological Seminary in Louisville, Kentucky, wrote on August 8, 2005, in NPR's Taking Issue essay series, that "Debates over education, abortion, environmentalism, homosexuality and a host of other issues are really debates about the origin — and thus the meaning — of human life. ...evolutionary theory stands at the base of moral relativism and the rejection of traditional morality." Henry M. Morris, engineering professor and founder of the Creation Research Society and the Institute for Creation Research, claims that evolution was part of a pagan religion that emerged after the Tower of Babel, was part of Plato's and Aristotle's philosophies, and was responsible for everything from war to pornography to the breakup of the nuclear family. He has also claimed that perceived social ills like crime, teenage pregnancies, homosexuality, abortion, immorality, wars, and genocide are caused by a belief in evolution. Pastor D. James Kennedy of The Center for Reclaiming America for Christ and Coral Ridge Ministries claims that Darwin was responsible for Adolf Hitler's atrocities. In his documentary Darwin's Deadly Legacy and the accompanying pamphlet of the same title, Kennedy states that "To put it simply, no Darwin, no Hitler." In his efforts to expose the "harmful effects that evolution is still having on our nation, our children, and our world," Kennedy also states: "We have had 150 years of the theory of Darwinian evolution, and what has it brought us? Whether Darwin intended it or not, millions of deaths, the destruction of those deemed inferior, the devaluing of human life, increasing hopelessness." The Discovery Institute's Center for Science and Culture fellow Richard Weikart has made similar claims, as have other creationists. The claim was central to the documentary film Expelled: No Intelligence Allowed (2008) promoting intelligent design creationism. The Anti-Defamation League describes such claims as an outrageous misuse of the Holocaust and its imagery, and as trivializing the "...many complex factors that led to the mass extermination of European Jewry. Hitler did not need Darwin or evolution to devise his heinous plan to exterminate the Jewish people, and Darwin and evolutionary theory cannot explain Hitler's genocidal madness. Moreover, anti-Semitism existed long before Darwin ever wrote a word." Young Earth creationist Kent Hovind blames a long list of social ills on evolution, including communism, socialism, World War I, World War II, racism, the Holocaust, Stalin's war crimes, the Vietnam War, Pol Pot's Killing Fields, and increases in crime and unwed mothers. Hovind's son Eric Hovind claims that evolution is responsible for tattoos, body piercing, premarital sex, unwed births, sexually transmitted diseases (STDs), divorce, and child abuse. Such accusations are counterfactual, and there is evidence that the opposite seems to be the case. A study published by the author and illustrator Gregory S. Paul found that religious beliefs, including belief in creationism and disbelief in evolution, are positively correlated with social ills like crime. The Barna Group surveys find that Christians and non-Christians in the U.S. 
have similar divorce rates, and the highest divorce rates in the U.S. are among Baptists and Pentecostals, both sects which reject evolution and embrace creationism. Michael Shermer argued in Scientific American in October 2006 that evolution supports concepts like family values, avoiding lies, fidelity, moral codes and the rule of law. He goes on to suggest that evolution gives more support to the notion of an omnipotent creator than to a tinkerer with limitations based on a human model, the more common image subscribed to by creationists. Careful analyses of the creationist charges that evolution has led to moral relativism and the Holocaust conclude that these charges are highly suspect, and that the origins of the Holocaust are more likely to be found in historical Christian antisemitism than in evolution. Evolution has been used to justify Social Darwinism, the exploitation of so-called "lesser breeds without the law" by "superior races", particularly in the nineteenth century. Typically, strong European nations that had successfully expanded their empires could be said to have "survived" in the struggle for dominance. With this attitude, Europeans other than Christian missionaries rarely adopted the customs or languages of the local peoples under their empires. Creationists have frequently maintained that Social Darwinism—leading to policies designed to reward the most competitive—is a logical consequence of "Darwinism" (the theory of natural selection in biology). Biologists and historians have stated that this is a fallacy of appeal to nature, since the theory of natural selection is merely intended as a description of a biological phenomenon and should not be taken to imply that this phenomenon is good or that it ought to be used as a moral guide in human society. Atheism Another charge leveled at evolutionary theory by creationists is that belief in evolution is either tantamount to atheism, or conducive to atheism. It is commonly claimed that all proponents of evolutionary theory are "materialistic atheists". On the other hand, Davis A. Young argues that creation science itself is harmful to Christianity because its bad science will turn more away than it recruits. Young asks, "Can we seriously expect non-Christians to develop a respect for Christianity if we insist on teaching the brand of science that creationism brings with it?" However, evolution neither requires nor rules out the existence of a supernatural being. Philosopher Robert T. Pennock makes the comparison that evolution is no more atheistic than plumbing. H. Allen Orr, professor of biology at the University of Rochester, has made a similar point. In addition, a wide range of religions have reconciled a belief in a supernatural being with evolution. Molleen Matsumura of the National Center for Science Education found that "of Americans in the twelve largest Christian denominations, 89.6% belong to churches that support evolution education." These churches include the "United Methodist Church, National Baptist Convention USA, Evangelical Lutheran Church in America, Presbyterian Church (USA), National Baptist Convention of America, African Methodist Episcopal Church, the Roman Catholic Church, the Episcopal Church, and others." A poll in 2000 done for People for the American Way found that 70% of the American public felt that evolution was compatible with a belief in God. Only 48% of the people polled could choose the correct definition of evolution from a list, however. 
One poll reported in the journal Nature showed that among American scientists (across various disciplines), about 40 percent believe in both evolution and an active deity (theistic evolution). This is similar to the results reported for surveys of the general American public. Also, about 40 percent of the scientists polled believe in a God that answers prayers, and believe in immortality. While about 55% of scientists surveyed were atheists, agnostics, or nonreligious theists, atheism is far from universal among scientists who support evolution, or among the general public that supports evolution. Very similar results were reported from a 1997 Gallup Poll of the American public and scientists. Traditionalists still object to the idea that diversity in life, including human beings, arose through natural processes without a need for supernatural intervention, and they argue against evolution on the basis that it contradicts their literal interpretation of creation myths about separate "created kinds". However, many religions, such as Catholicism, which neither endorses nor denies evolution, have allowed their adherents to reconcile their own personal beliefs with evolution through the idea of theistic evolution. See also Alternatives to Darwinian evolution Rejection of evolution by religious groups Faith and rationality Notes References Bibliography The book is available from The Complete Work of Charles Darwin Online. Retrieved 2015-03-30. The book is available from the Internet Archive. Retrieved 2015-04-07. Meyer, Stephen C., and Mark Terry. "Darwin's Doubt: The Explosive Origin of Animal Life and the Case for Intelligent Design." New York (2013). Further reading External links (NYT / Retro Report; November 2017) Creationism Biological evolution Criticism of science Pseudoscience Denialism
Objections to evolution
[ "Biology" ]
12,266
[ "Creationism", "Biology theories", "Obsolete biology theories" ]
8,787,169
https://en.wikipedia.org/wiki/Walter%20Reppe
Walter Julius Reppe (29 July 1892 in Göringen – 26 July 1969 in Heidelberg) was a German chemist. He is notable for his contributions to the chemistry of acetylene. Education and career Walter Reppe began his study of the natural sciences at the University of Jena in 1911. After an interruption by the First World War, he obtained his doctorate in Munich in 1920. In 1921, Reppe joined BASF's main laboratory. From 1923, he worked in the indigo laboratory on the catalytic dehydration of formamide to prussic acid, developing this procedure for industrial use. In 1924, he left research for 10 years, only resuming it in 1934. Acetylene chemistry Reppe began his interest in acetylene in 1928. Acetylene is a gas which can take part in many chemical reactions. However, it is explosive, and accidents occurred often. Because of this danger, small quantities of acetylene were used at a time, and always without high pressures. In fact, it was forbidden to compress acetylene over 1.5 bar at BASF. To work with acetylene safely, Reppe designed special pressure vessels, the so-called "Reppe glasses": stainless steel spheres with screw-type caps, which permitted high-pressure experiments. These efforts finally yielded a large number of interrelated reactions, known as Reppe chemistry. "Reppe Chemie" The high-pressure reactions catalysed by heavy metal acetylides, especially copper acetylide, or by metal carbonyls are called Reppe chemistry. The reactions can be classified into four large classes: Vinylation, the addition of compounds with acidic hydrogen to acetylene, for example HC≡CH + ROH → CH2=CHOR, giving vinyl ethers. Catalytic ethynylation of aldehydes (although the milder Favorskii reaction was reported earlier), for example HCHO + HC≡CH → HC≡CCH2OH (propargyl alcohol), which can react with a second formaldehyde to give 2-butyne-1,4-diol. Reactions with carbon monoxide (carbonylation), for example HC≡CH + CO + H2O → CH2=CHCOOH (acrylic acid). This simple synthesis was used to prepare acrylic acid derivatives for the production of acrylic glass. The cyclic polymerization or cyclo-oligomerization of acetylene to cyclooctatetraene, which is one of the most important applications of template reactions. The reaction occurs at a nickel(II) centre, where it is supposed that four acetylene molecules occupy four sites around the metal and react simultaneously to give the product. If a competing ligand such as triphenylphosphine is present in sufficient proportion to occupy one coordination site, then room is left for only three acetylene molecules, and these come together to form benzene. This reaction provided an unusual route to benzene and especially to cyclooctatetraene, which was difficult to prepare otherwise. Products from these four reaction types proved to be versatile intermediates in the syntheses of lacquers, adhesives, foam materials, textile fibers, and pharmaceuticals. Post-war After the Second World War, Reppe led the research of BASF from 1949 up to his retirement in 1957. From 1952 to 1966, he also sat on the supervisory board. He was also a professor at the University of Mainz and TH Darmstadt from 1951 and 1952 respectively. Together with Otto Bayer and Karl Ziegler he received the Werner von Siemens Ring in 1960 for expanding the scientific knowledge on, and for the technical development of, new synthetic high-molecular materials. Legacy Most of the industrial processes that were developed by Reppe and coworkers have been superseded, largely because the chemical industry has shifted from coal as feedstock to oil. Alkenes from thermal cracking are readily available, but acetylene is not. 
Together with his contemporaries Otto Roelen, Karl Ziegler, Hans Tropsch, and Franz Fischer, Reppe was a leader in demonstrating the utility of metal-catalyzed reactions in the large-scale synthesis of organic compounds. The economic benefits demonstrated by this research motivated the eventual flowering of organometallic chemistry and its close connection to industry. Further reading Neue Entwicklungen auf dem Gebiet der Chemie des Acetylen und Kohlenoxyds (New developments in the field of the chemistry of acetylene and carbon monoxide). Springer, Berlin, Göttingen, Heidelberg, 1949. 184 pages. References 20th-century German chemists University of Jena alumni Ludwig Maximilian University of Munich alumni Academic staff of Johannes Gutenberg University Mainz Werner von Siemens Ring laureates Members of the Royal Swedish Academy of Sciences 1892 births 1969 deaths Knights Commander of the Order of Merit of the Federal Republic of Germany German organic chemists People from Eisenach
Walter Reppe
[ "Chemistry" ]
910
[ "Organic chemists", "German organic chemists" ]
8,787,364
https://en.wikipedia.org/wiki/Ready%2C%20Set%2C%20Go%21%20%28software%29
Ready, Set, Go! is a software package for desktop publishing. Originally developed for Apple Computer's Macintosh by Manhattan Graphics, it became one of the earliest desktop-publishing packages available for that platform. It was often compared with QuarkXPress and Aldus PageMaker in comparative magazine reviews. It was later acquired by Diwan and is still available today for the Microsoft Windows platform. See also Adobe InDesign Adobe PageMaker QuarkXPress References External links Ready, Set, Go! page at Diwan Desktop publishing software Desktop publishing software for macOS Desktop publishing software for Windows
Ready, Set, Go! (software)
[ "Technology" ]
120
[ "Computing stubs", "Digital typography stubs" ]
8,788,180
https://en.wikipedia.org/wiki/List%20of%20professional%20architecture%20organizations
This is a list of professional architecture organizations listed by country. Many of them are members of the International Union of Architects. Africa Ghana Ghana Institute of Architects Kenya Architectural Association of Kenya Nigeria Nigerian Institute of Architects South Africa South African Institute of Architects Asia Bangladesh Institute of Architects Bangladesh Hong Kong Hong Kong Institute of Architects (HKIA) India Indian Institute of Architects (IIA) Japan Architectural Institute of Japan (AIJ) Japan Institute of Architects (JIA) Pakistan Institute of Architects Pakistan Philippines United Architects of the Philippines Europe Armenia Armenian Union of Architects Latvia The Latvian Association of Architects (LAS) Denmark Danish Association of Architects (Akademisk Arkitektforening) (AA) Finland Suomen Arkkitehtiliitto SAFA Germany Bund Deutscher Architekten Greece Technical Chamber of Greece Ireland The Royal Institute of the Architects of Ireland Netherlands Netherlands Architecture Institute Poland Association of Polish Architects Spain Consejo Superior de los Colegios de Arquitectos de España United Kingdom Royal Institute of British Architects (RIBA) Chartered Institute of Architectural Technologists (CIAT) Royal Incorporation of Architects in Scotland (RIAS) Royal Society of Architects in Wales (RSAW) Royal Society of Ulster Architects (RSUA) North Wales Society of Architects (NWSA) North America Canada Royal Architectural Institute of Canada Architectural Institute of British Columbia Ontario Association of Architects Ordre des architectes du Québec United States The American Institute of Architects The Society of American Registered Architects Oceania Australia Australian Institute of Architects New Zealand New Zealand Institute of Architects References Architecture Professional organizations
List of professional architecture organizations
[ "Engineering" ]
312
[ "Architecture lists", "Architecture" ]
8,788,282
https://en.wikipedia.org/wiki/Bughole
A bughole (or pinhole) is a small hole in the surface of a concrete structure caused by the expansion and eventual outgassing of trapped pockets of air in setting concrete. Bugholes are undesirable, as they may compromise the structural integrity of concrete emplacements. Bughole-induced outgassing is a phenomenon occurring when applying a protective coating (or lining) to concrete (predominantly vertically cast-in-place) where air becomes trapped within bughole cavities and releases into or through the protective coating, thereby causing pinholes and holidays in the coating film. References Concrete
Bughole
[ "Engineering" ]
124
[ "Structural engineering", "Concrete", "Civil engineering", "Civil engineering stubs" ]
8,788,716
https://en.wikipedia.org/wiki/Aspire%20Tower
Aspire Tower, also known as The Torch Doha, is a skyscraper hotel located in the Aspire Zone complex in Doha, Qatar. Designed by architect Hadi Simaan and AREP and engineer Ove Arup and Partners, the tower served as the focal point for the 15th Asian Games hosted by Qatar in December 2006. The tower is currently the second tallest structure and building in Doha and Qatar. In 2023, it was surpassed by the Lusail Plaza Towers. The tower has also been known as Khalifa Sports Tower or Doha Olympic Tower. Construction and use The tower was a landmark of the 2006 Asian Games due to its size and proximity to the main venue, the Khalifa International Stadium. The final form consists of a 1-to-1.8-metre-thick, reinforced-concrete cylinder (the core), varying from 12 to 18 metres in diameter, encircled with radiating networks of cantilevered steel beams on each floor of its building modules. The modules themselves are composed of steel columns, metal decking, concrete slabs and outer tension and compression ring beams, which support glass-paneled outer walls. The bottom of each module is covered with glass fiber reinforced concrete. Beams, as well as steel struts tying all the structural components together, are bolted through the concrete core and hence are anchored into place, transferring vertical loads from perimeter columns and ring beams to the core. The building was constructed by companies Midmac and BESIX subsidiary Six Construct and was completed in November 2007 at a final cost of . See also Hyperboloid structure List of towers Aspire Park References External links Hadi Simaan website The Torch Doha The Aspire Tower: a case study on Constructalia Haver & Boecker - Information about the tower Skyscrapers in Doha Buildings and structures in Doha Buildings and structures completed in 2007 Hotel buildings completed in 2007 Hotels in Qatar Hotels established in 2020 Skyscraper hotels 2007 establishments in Qatar Hyperboloid structures High-tech architecture
Aspire Tower
[ "Technology" ]
395
[ "Structural system", "Hyperboloid structures" ]
8,788,855
https://en.wikipedia.org/wiki/Kharitonov%20region
A Kharitonov region is a concept in mathematics. It arises in the study of the stability of polynomials. Let D be a simply-connected set in the complex plane and let P be a family of interval polynomials. D is said to be a Kharitonov region if the D-stability of an associated subset of vertex polynomials suffices to guarantee the D-stability of the entire family P. The relevant vertex sets are the set of all vertex polynomials of complex interval polynomials and the set of all vertex polynomials of real interval polynomials. See also Kharitonov's theorem References Y C Soh and Y K Foo (1991), "Kharitonov Regions: It Suffices to Check a Subset of Vertex Polynomials", IEEE Trans. on Aut. Cont., 36, 1102–1105. Polynomials Stability theory
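For the classical special case where D is the open left half-plane, Kharitonov's theorem states that Hurwitz stability of an entire real interval polynomial family follows from that of just four vertex polynomials. The sketch below is a minimal illustration of that vertex check in Python; the coefficient bounds are made-up illustrative numbers, and the coefficient-selection patterns follow the standard statement of the theorem.

import numpy as np

def kharitonov_polynomials(lo, hi):
    """Return the four Kharitonov vertex polynomials for a real interval
    polynomial a0 + a1*s + ... + an*s**n with bounds lo[i] <= a_i <= hi[i].
    Coefficient choices repeat with period 4 starting at a0:
    (lo, lo, hi, hi), (hi, hi, lo, lo), (lo, hi, hi, lo), (hi, lo, lo, hi)."""
    patterns = [(0, 0, 1, 1), (1, 1, 0, 0), (0, 1, 1, 0), (1, 0, 0, 1)]
    bounds = (lo, hi)
    return [[bounds[p[i % 4]][i] for i in range(len(lo))] for p in patterns]

def is_hurwitz(coeffs_ascending):
    """True if every root of the polynomial lies in the open left half-plane."""
    roots = np.roots(coeffs_ascending[::-1])  # np.roots expects descending order
    return bool(np.all(roots.real < 0))

# Illustrative interval polynomial a0 + a1*s + a2*s**2 + a3*s**3:
lo = [1.0, 2.0, 3.0, 1.0]
hi = [2.0, 3.0, 4.0, 1.5]
print(all(is_hurwitz(k) for k in kharitonov_polynomials(lo, hi)))  # True here

For a general Kharitonov region D, the same idea applies with the left-half-plane root test replaced by a test that all roots lie in D.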
Kharitonov region
[ "Mathematics" ]
147
[ "Polynomials", "Stability theory", "Algebra", "Dynamical systems" ]
8,789,505
https://en.wikipedia.org/wiki/Hydroskimming
Hydroskimming is one of the simplest types of refinery used in the petroleum industry and still represents a large proportion of refining facilities, particularly in developing countries. A hydroskimming refinery is defined as a refinery equipped with atmospheric distillation, naphtha reforming and necessary treating processes. A hydroskimming refinery is therefore more complex than a topping refinery (which just separates the crude into its constituent petroleum products by distillation, known as atmospheric distillation, and produces naphtha but no gasoline), and it produces gasoline. The addition of a catalytic reformer enables a hydroskimming refinery to generate higher-octane reformate; benzene, toluene, and xylene; and hydrogen for hydrotreating units. However, a hydroskimming refinery produces a surplus of fuel oil, which has a relatively unattractive price and demand. Most refineries therefore add vacuum distillation and catalytic cracking, which adds one more level of complexity by reducing fuel oil through conversion to light and middle distillates. A coking refinery adds further complexity to the cracking refinery by high conversion of fuel oil into distillates and petroleum coke. Catalytic cracking, coking and other such conversion units are referred to as secondary processing units. The Nelson Complexity Index captures the proportion of the secondary conversion unit capacities relative to the primary distillation or topping capacity. The Nelson Complexity Index typically varies from about 2 for hydroskimming refineries, to about 5 for cracking refineries and over 9 for coking refineries. Notes and references Oil refineries
Hydroskimming
[ "Chemistry" ]
326
[ "Petroleum", "Oil refineries", "Oil refining" ]
8,790,052
https://en.wikipedia.org/wiki/Nelson%20complexity%20index
The Nelson complexity index (NCI) is a measure to compare the secondary conversion capacity of a petroleum refinery with the primary distillation capacity. The index provides an easy metric for quantifying and ranking the complexity of various refineries and units. To calculate the index, it is necessary to use complexity factors, which compare the cost of upgrading units to the cost of the crude distillation unit. History It was developed by Wilbur L. Nelson in a series of articles that appeared in the Oil & Gas Journal from 1960 to 1961 (Mar. 14, p. 189; Sept. 26, p. 216; and June 19, p. 109). In 1976, he elaborated on the concept in another series of articles, again in the Oil & Gas Journal (Sept. 13, p. 81; Sept. 20, p. 202; and Sept. 27, p. 83). Formula NCI = Σ Fi × (Ci / C_CDU), summed over all n units, where: Fi is a complexity factor, Ci is a unit capacity, C_CDU is the capacity of the crude distillation unit, and n is the number of all units. The NCI assigns a complexity factor to each major piece of refinery equipment based on its complexity and cost in comparison to crude distillation, which is assigned a complexity factor of 1.0. The complexity of each piece of refinery equipment is then calculated by multiplying its complexity factor by its throughput ratio as a percentage of crude distillation capacity. Adding up the complexity values assigned to each piece of equipment, including crude distillation, determines a refinery's complexity on the NCI. The NCI indicates not only the investment intensity or cost index of the refinery but also its potential value addition. Thus, the higher the index number, the greater the cost of the refinery and the higher the value of its products. In the second edition of the book Petroleum Refinery Process Economics (2000), author Robert Maples notes that U.S. refineries rank highest in complexity index, averaging 9.5, compared with Europe's at 6.5. The Jamnagar Refinery belonging to India-based Reliance Industries Limited is now one of the most complex refineries in the world, with a Nelson complexity index of 21.1. The Oil and Gas Journal annually calculates and publishes a list of refineries with their associated Nelson complexity index scores. Complexity factors Some factors for various processing units, as used in the example below: crude distillation 1.0, vacuum distillation 2.0, catalytic reforming 5.0. Example If an oil refinery has a crude distillation unit (100 kbd), vacuum distillation unit (60 kbd), and catalytic reforming unit (30 kbd), then the NCI will be 1*(100/100) + 2*(60/100) + 5*(30/100) = 1.0 + 1.2 + 1.5 = 3.7. References External links Oil and Gas Journal, Nelson Complexity index Oil refining Dimensionless numbers of chemistry
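The worked example above translates directly into a few lines of code; this minimal sketch simply re-implements the NCI sum using the factors and capacities quoted in the text.

def nelson_complexity_index(units, cdu_capacity):
    """NCI = sum of F_i * (C_i / C_CDU) over all units, where each unit is
    given as a (complexity_factor, capacity) pair; the crude distillation
    unit itself is included with factor 1.0."""
    return sum(factor * capacity / cdu_capacity for factor, capacity in units)

# Worked example from the text (capacities in kbd):
units = [
    (1.0, 100.0),  # crude distillation unit
    (2.0, 60.0),   # vacuum distillation unit
    (5.0, 30.0),   # catalytic reforming unit
]
print(nelson_complexity_index(units, cdu_capacity=100.0))  # -> 3.7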
Nelson complexity index
[ "Chemistry" ]
576
[ "Dimensionless numbers of chemistry", "Petroleum technology", "Oil refining" ]
8,790,120
https://en.wikipedia.org/wiki/Ultra%205/10
The Ultra 5 (code-named Otter) and Ultra 10 (code-named Sea Lion) are 64-bit Sun Microsystems workstations based on the UltraSPARC IIi microprocessor available since January 1998 and last shipped in November 2002. They were introduced as the Darwin line of workstations. Specifications These systems are notable for being the first in the Sun workstation line to introduce various commodity PC compatible hardware components such as ATA hard disks with CMD640 PCI EIDE controller and an ATI Rage PRO video chip. The Ultra 5 came in a "pizzabox" style case with a 270, 333, 360, or 400 MHz UltraSPARC IIi CPU and supported a maximum of 512 MB Buffered EDO ECC RAM in four 50ns 168-pin DIMM slots. It included a single EIDE Hard Disk Drive of between 4 and 20 GB, a CD-ROM drive, three 32-bit 33 MHz PCI slots (two full-size, one short), PGX24 graphics (HD15), a parallel printer port (DB25), two serial ports (DB25 and DE9), an Ethernet port (10BASE-T/100BASE-TX) and headphone, line-in, line-out and microphone 3.5-mm jacks. The Ultra 10 came in a mid-tower case with a 300, 333, 360, or 440-MHz 64-bit UltraSPARC CPU. It doubled the supported RAM to a maximum of 1024 MB in four DIMM slots and added room for a second ATA hard disk, a fourth PCI card, and an UPA graphics card such as the Creator, Creator3D or Elite3D. Keyboard The Sun Type 6 keyboard came in two variants, one with Mini-DIN, one with USB connectors. The top edge of the keyboard is rounded. The keyboard has a special "diamond" key (called Meta key) placed next to the space key. This key comes from Lisp machines and is meant to be used with the Emacs editor. It also has 3 keys for regulating volume control or screen brightness and a power key in the upper top corner. It came with a purple plastic wrist rest. The keyboard has 4 LEDs on the top (rather than incorporated into the key cap): Num Lock, Caps Lock, Scroll Lock, Compose. See also Sun Ultra 1 Sun Ultra series References External links Ultra 5 Service Manual Ultra 10 Service Manual Sun workstations SPARC microprocessor products
Ultra 5/10
[ "Technology" ]
514
[ "Computing stubs", "Computer hardware stubs" ]
8,790,285
https://en.wikipedia.org/wiki/WinPT
WinPT or Windows Privacy Tray is a front-end to the GNU Privacy Guard (GnuPG) for the Windows platform. Released under the GPL, it is compatible with OpenPGP-compliant software. WinPT is a collection of user interface tools designed to ease the use of asymmetric encryption software. Based on GnuPG, and OpenPGP-compatible, WinPT is intended for Windows users to use for everyday message signing, verification, encryption and general key management. If installation defaults are used, WinPT will reside in the task bar tray, and on the right-click menu within Windows Explorer. A Start menu item includes launchers for a GPG command line (console), the WinPT tray, and documentation. The latest version (1.5.3 Beta) is only compatible with GnuPG 1.4.x and not with the more recent version 2.0.x. WinPT is included in the GnuPT installer (which includes the latest version of GnuPG 1.4.x, WinPT 1.4.3 stable and the latest WinPT beta). History On April 4, 2007, the project's author, Timo Schulz, announced that development on WinPT had been suspended for an indefinite period. However, on October 27, 2008, Schulz announced a new version 1.30, described as a bug fix release. On December 14, 2009, Timo Schulz announced that WinPT was discontinued due to lack of resources. On January 19, 2012, Timo Schulz announced work on a new release and asked members of the community interested in further development past the future revision 1.5 to contact him. On October 21, 2012, Timo Schulz announced that the project had a new dedicated website. See also GNU Privacy Guard Gpg4win PGP Public-key cryptography Cryptography References External links WinPT website GnuPT website GNU Privacy Guard OpenPGP Cryptographic software Software using the GNU General Public License
WinPT
[ "Mathematics" ]
409
[ "Cryptographic software", "Mathematical software" ]
8,790,877
https://en.wikipedia.org/wiki/Pressure%20regulator
A pressure regulator is a valve that controls the pressure of a fluid to a desired value, using negative feedback from the controlled pressure. Regulators are used for gases and liquids, and can be an integral device with a pressure setting, a restrictor and a sensor all in the one body, or consist of a separate pressure sensor, controller and flow valve. Two types are found: The pressure reduction regulator and the back-pressure regulator. A pressure reducing regulator is a control valve that reduces the input pressure of a fluid to a desired value at its output. It is a normally-open valve and is installed upstream of pressure sensitive equipment. A back-pressure regulator, back-pressure valve, pressure sustaining valve or pressure sustaining regulator is a control valve that maintains the set pressure at its inlet side by opening to allow flow when the inlet pressure exceeds the set value. It differs from an over-pressure relief valve in that the over-pressure valve is only intended to open when the contained pressure is excessive, and it is not required to keep upstream pressure constant. They differ from pressure reducing regulators in that the pressure reducing regulator controls downstream pressure and is insensitive to upstream pressure. It is a normally-closed valve which may be installed in parallel with sensitive equipment or after the sensitive equipment to provide an obstruction to flow and thereby maintain upstream pressure. Both types of regulator use feedback of the regulated pressure as input to the control mechanism, and are commonly actuated by a spring loaded diaphragm or piston reacting to changes in the feedback pressure to control the valve opening, and in both cases the valve should be opened only enough to maintain the set regulated pressure. The actual mechanism may be very similar in all respects except the placing of the feedback pressure tap. As in other feedback control mechanisms, the level of damping is important to achieve a balance between fast response to a change in the measured pressure, and stability of output. Insufficient damping may lead to hunting oscillation of the controlled pressure, while excessive friction of moving parts may cause hysteresis. Pressure reducing regulator Operation A pressure reducing regulator's primary function is to match the flow of gas through the regulator to the demand for fluid placed upon it, whilst maintaining a sufficiently constant output pressure. If the load flow decreases, then the regulator flow must decrease as well. If the load flow increases, then the regulator flow must increase in order to keep the controlled pressure from decreasing because of a shortage of fluid in the pressure system. It is desirable that the controlled pressure does not vary greatly from the set point for a wide range of flow rates, but it is also desirable that flow through the regulator is stable and the regulated pressure is not subject to excessive oscillation. A pressure regulator includes a restricting element, a loading element, and a measuring element: The restricting element is a valve that can provide a variable restriction to the flow, such as a globe valve, butterfly valve, poppet valve, etc. The loading element is a part that can apply the needed force to the restricting element. This loading can be provided by a weight, a spring, a piston actuator, or the diaphragm actuator in combination with a spring. The measuring element functions to determine when the inlet flow is equal to the outlet flow. 
The diaphragm itself is often used as a measuring element; it can serve as a combined element. In a typical single-stage regulator, a force balance is used on the diaphragm to control a poppet valve in order to regulate pressure. With no inlet pressure, the spring above the diaphragm pushes it down on the poppet valve, holding it open. Once inlet pressure is introduced, the open poppet allows flow to the diaphragm and pressure in the upper chamber increases, until the diaphragm is pushed upward against the spring, causing the poppet to reduce flow, finally stopping further increase of pressure. By adjusting the top screw, the downward pressure on the diaphragm can be increased, requiring more pressure in the upper chamber to maintain equilibrium. In this way, the outlet pressure of the regulator is controlled. Single stage regulator High pressure gas from the supply enters the regulator through the inlet port. The inlet pressure gauge will indicate this pressure. The gas then passes through the normally open pressure control valve orifice and the downstream pressure rises until the valve actuating diaphragm is deflected sufficiently to close the valve, preventing any more gas from entering the low pressure side until the pressure drops again. The outlet pressure gauge will indicate this pressure. The outlet pressure on the diaphragm and the inlet pressure and poppet spring force on the upstream part of the valve hold the diaphragm/poppet assembly in the closed position against the force of the diaphragm loading spring. If the supply pressure falls, the closing force due to supply pressure is reduced, and downstream pressure will rise slightly to compensate. Thus, if the supply pressure falls, the outlet pressure will increase, provided the outlet pressure remains below the falling supply pressure. This is the cause of end-of-tank dump where the supply is provided by a pressurized gas tank. The operator can compensate for this effect by adjusting the spring load by turning the knob to restore outlet pressure to the desired level. With a single stage regulator, when the supply pressure gets low, the lower inlet pressure causes the outlet pressure to climb. If the diaphragm loading spring compression is not adjusted to compensate, the poppet can remain open and allow the tank to rapidly dump its remaining contents. Double stage regulator Two stage regulators are two regulators in series in the same housing that operate to reduce the pressure progressively in two steps instead of one. The first stage, which is preset, reduces the pressure of the supply gas to an intermediate stage; gas at that pressure passes into the second stage. The gas emerges from the second stage at a pressure (working pressure) set by the user by adjusting the pressure control knob at the diaphragm loading spring. Two stage regulators may have two safety valves, so that if there is any excess pressure between stages due to a leak at the first stage valve seat, the rising pressure will not overload the structure and cause an explosion. An unbalanced single stage regulator may need frequent adjustment. As the supply pressure falls, the outlet pressure may change, necessitating adjustment. In the two stage regulator, there is improved compensation for any drop in the supply pressure. 
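The force balance described above can be made concrete with a toy calculation. This is a deliberately idealized sketch with made-up numbers: it keeps only the spring preload and the outlet pressure acting on the diaphragm area, and ignores poppet-seat forces, friction and the supply-pressure term that causes the end-of-tank behaviour discussed above.

import math

def regulated_outlet_pressure(spring_preload_n, diaphragm_diameter_m):
    """Idealized equilibrium of a spring-loaded diaphragm regulator:
    outlet gauge pressure * diaphragm area balances the spring preload,
    so P_out = F_spring / A. Returns gauge pressure in pascals."""
    area = math.pi * (diaphragm_diameter_m / 2.0) ** 2
    return spring_preload_n / area

# Example: a 200 N preload acting on a 50 mm diaphragm.
p_out = regulated_outlet_pressure(200.0, 0.050)
print(round(p_out / 1e5, 2), "bar gauge")  # ~1.02 bar; turning the adjustment
# screw increases the preload and therefore the regulated outlet pressure.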
Applications Pressure reducing regulators Air compressors Air compressors are used in industrial, commercial, and home workshop environments to perform an assortment of jobs including blowing things clean; running air powered tools; and inflating things like tires, balls, etc. Regulators are often used to adjust the pressure coming out of an air receiver (tank) to match what is needed for the task. Often, when one large compressor is used to supply compressed air for multiple uses (often referred to as "shop air" if built as a permanent installation of pipes throughout a building), additional regulators will be used to ensure that each separate tool or function receives the pressure it needs. This is important because some air tools, or uses for compressed air, require pressures that may cause damage to other tools or materials. Aircraft Pressure regulators are found in aircraft cabin pressurization, canopy seal pressure control, potable water systems, and waveguide pressurization. Aerospace Aerospace pressure regulators have applications in propulsion pressurant control for reaction control systems (RCS) and attitude control systems (ACS), where high vibration, large temperature extremes and corrosive fluids are present. Cooking Pressurized vessels can be used to cook food much more rapidly than at atmospheric pressure, as the higher pressure raises the boiling point of the contents. All modern pressure cookers will have a pressure regulator valve and a pressure relief valve as a safety mechanism to prevent explosion in the event that the pressure regulator valve fails to adequately release pressure. Some older models lack a safety release valve. Most home cooking models are built to maintain a low and high pressure setting. These settings are usually . Almost all home cooking units will employ a very simple single-stage pressure regulator. Older models will simply use a small weight on top of an opening that will be lifted by excessive pressure to allow excess steam to escape. Newer models usually incorporate a spring-loaded valve that lifts and allows pressure to escape as pressure in the vessel rises. Some pressure cookers will have a quick release setting on the pressure regulator valve that will, essentially, lower the spring tension to allow the pressure to escape at a quick, but still safe rate. Commercial kitchens also use pressure cookers, in some cases using oil based pressure cookers to quickly deep fry fast food. Pressure vessels of this sort can also be used as autoclaves to sterilize small batches of equipment and in home canning operations. Water pressure reduction A water pressure regulating valve limits inflow by dynamically changing the valve opening: when the downstream pressure is low the valve opens fully, and excessive downstream pressure causes the valve to close. If the pressure is removed and water could flow backwards, the flow is not impeded; a water pressure regulating valve does not function as a check valve. Such valves are used in applications where the water pressure at the end of the line would otherwise be high enough to damage appliances or pipes. Welding and cutting Oxy-fuel welding and cutting processes require gases at specific pressures, and regulators will generally be used to reduce the high pressures of storage cylinders to those usable for cutting and welding. 
Oxygen and fuel gas regulators usually have two stages: The first stage of the regulator releases the gas at a constant pressure from the cylinder despite the pressure in the cylinder becoming less as the gas is released. The second stage of the regulator controls the pressure reduction from the intermediate pressure to low pressure. The final flow rate may be adjusted at the torch. The regulator assembly usually has two pressure gauges, one indicating cylinder pressure, the other indicating delivery pressure. Inert gas shielded arc welding also uses gas stored at high pressure provided through a regulator. There may be a flow gauge calibrated to the specific gas. Propane/LP gas All propane and LP gas applications require the use of a regulator. Because pressures in propane tanks can fluctuate significantly with temperature, regulators must be present to deliver a steady pressure to downstream appliances. These regulators normally compensate for tank pressures between and commonly deliver 11 inches water column for residential applications and 35 inches of water column for industrial applications. Propane regulators differ in size and shape, delivery pressure and adjustability, but are uniform in their purpose to deliver a constant outlet pressure for downstream requirements. Common international settings for domestic LP gas regulators are 28 mbar for butane and 37 mbar for propane. Gas powered vehicles All vehicular motors that run on compressed gas as a fuel (internal combustion engine or fuel cell electric power train) require a pressure regulator to reduce the stored gas (CNG or hydrogen) pressure from 700, 500, 350 or 200 bar (or 70, 50, 35 and 20 MPa) to operating pressure. Recreational vehicles For recreational vehicles with plumbing, a pressure regulator is required to reduce the pressure of an external water supply connected to the vehicle plumbing, as the supply may be at a much higher elevation than the campground, and water pressure depends on the height of the water column. Without a pressure regulator, the intense pressure encountered at some campgrounds in mountainous areas may be enough to burst the camper's water pipes or unseat the plumbing joints, causing flooding. Pressure regulators for this purpose are typically sold as small screw-on accessories that fit inline with the hoses used to connect an RV to the water supply, which are almost always screw-thread-compatible with the common garden hose. Breathing gas supply Pressure regulators are used with diving cylinders for scuba diving. The tank may contain pressures in excess of , which could cause a fatal barotrauma injury to a person breathing it directly. A demand controlled regulator provides a flow of breathing gas at the ambient pressure (which varies by depth in the water). Pressure reducing regulators are also used to supply breathing gas to surface-supplied divers, and to people who use self-contained breathing apparatus (SCBA) for rescue and hazmat work on land. The interstage pressure for SCBA at normal atmospheric pressure can generally be left constant at a factory setting, but for surface supplied divers it is controlled by the gas panel operator, depending on the diver depth and flow rate requirements. Supplementary oxygen for high altitude flight in unpressurised aircraft and medical gases are also commonly dispensed through pressure reducing regulators from high-pressure storage. 
Supplementary oxygen may also be dispensed through a regulator which both reduces the pressure and supplies the gas at a metered flow rate, to be mixed with ambient air. One way of producing a constant mass flow at variable ambient pressure is to use a choked flow, where the flow through the metering orifice is sonic. For a given gas in choked flow, the mass flow rate may be controlled by setting the orifice size or the upstream pressure. To produce a choked flow in oxygen, the absolute pressure ratio of upstream and downstream gas must exceed 1.893 at 20 °C. At normal atmospheric pressure this requires an upstream pressure of more than 1.013 × 1.893 = 1.918 bar. A typical nominal regulated gauge pressure from a medical oxygen regulator is , for an absolute pressure of approximately 4.4 bar and a pressure ratio of about 4.4 without back pressure, so they will have choked flow in the metering orifices for a downstream (outlet) pressure of up to about 2.3 bar absolute. This type of regulator commonly uses a rotor plate with calibrated orifices and detents to hold it in place when the orifice corresponding to the desired flow rate is selected. This type of regulator may also have one or two uncalibrated takeoff connections from the intermediate pressure chamber with diameter index safety system (DISS) or similar connectors to supply gas to other equipment, and the high pressure connection is commonly a pin index safety system (PISS) yoke clamp. Similar mechanisms can be used for flow rate control in aviation and mountaineering regulators. Mining industry As the pressure in water pipes builds rapidly with depth, underground mining operations require a fairly complex water system with pressure reducing valves. These devices must be installed at a certain vertical interval, usually . Without such valves, pipes could burst and pressure would be too great for equipment operation. Natural gas industry Pressure regulators are used extensively within the natural gas industry. Natural gas is compressed to high pressures in order to be distributed throughout the country through large transmission pipelines. The transmission pressure can be over and must be reduced through various stages to a usable pressure for industrial, commercial, and residential applications. There are three main pressure reduction locations in this distribution system. The first reduction is located at the city gate, where the transmission pressure is dropped to a distribution pressure to feed throughout the city. This is also the location where the odorless natural gas is odorized with mercaptan. The distribution pressure is further reduced at a district regulator station, located at various points in the city, to below 60 psig. The final cut occurs at the end user's location. Generally, the end user reduction is taken to low pressures ranging from 0.25 psig to 5 psig. Some industrial applications can require a higher pressure. Back-pressure regulators Maintain upstream pressure control in analytical or process systems Protect sensitive equipment from overpressure damage Reduce the pressure difference over a component which is not tolerant of large pressure differences. 
Gas sales lines Production vessels (e.g., separators, heater treaters or free water knockouts) Vent or flare lines Hyperbaric chambers Where the pressure drop on a built-in breathing system exhaust system is too great, typically in saturation systems, a back-pressure regulator may be used to reduce the exhaust pressure drop to a safer and more manageable pressure. Reclaim diving helmets Most heliox breathing mixtures in surface-supplied diving are used at depths where the ambient pressure is at least 5 bar above surface atmospheric pressure, and the exhaust gas from the diver must pass through a reclaim valve, which is a back-pressure valve activated by the increase in pressure in the diver's helmet above ambient pressure caused by diver exhalation. The reclaim gas hose which carries the exhaled gas back to the surface for recycling must not be at too great a pressure difference from the ambient pressure at the diver. An additional back-pressure regulator in this line allows finer setting of the reclaim valve for lower work of breathing at variable depths. See also References External links Plumbing valves Hydraulics Pneumatics
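The choked-flow figures quoted in the medical oxygen discussion above follow from the standard isentropic critical pressure ratio. Below is a minimal sketch that reproduces those numbers; it assumes ideal-gas behaviour and a heat capacity ratio of about 1.4 for oxygen near 20 °C.

def critical_pressure_ratio(gamma):
    """Upstream/downstream absolute pressure ratio above which flow through
    an orifice becomes choked (isentropic ideal-gas model):
    ((gamma + 1) / 2) ** (gamma / (gamma - 1))."""
    return ((gamma + 1.0) / 2.0) ** (gamma / (gamma - 1.0))

def is_choked(p_upstream_abs, p_downstream_abs, gamma=1.4):
    """True if the pressure ratio is high enough for sonic (choked) flow."""
    return p_upstream_abs / p_downstream_abs >= critical_pressure_ratio(gamma)

ratio = critical_pressure_ratio(1.4)
print(round(ratio, 3))          # 1.893, as quoted in the text
print(round(1.013 * ratio, 3))  # 1.918 bar: minimum upstream pressure for
                                # choked flow discharging to 1 atm
print(is_choked(4.4, 2.3))      # True: a ~4.4 bar absolute regulator stays
                                # choked down to about 2.3 bar outlet pressure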
Pressure regulator
[ "Physics", "Chemistry" ]
3,412
[ "Physical systems", "Hydraulics", "Fluid dynamics" ]
8,791,730
https://en.wikipedia.org/wiki/Forming%20gas
Forming gas is a mixture of hydrogen (mole fraction varies) and nitrogen. It is sometimes called a "dissociated ammonia atmosphere" due to the reaction which generates it: 2 NH3 → 3 H2 + N2 It can also be manufactured by thermal cracking of ammonia, in an ammonia cracker or forming gas generator. Forming gas is used as an atmosphere for processes that need the properties of hydrogen gas. Typical forming gas formulations (5% H2 in N2) are not explosive. It is used in chambers for gas hypersensitization, a process in which photographic film is heated in forming gas to drive out moisture and oxygen and to increase the base fog of the film. Hypersensitization is used particularly in deep-sky astrophotography, which deals with low-intensity incoming light, requires long exposure times, and is thus particularly sensitive to contaminants in the film. Forming gas is also used to regenerate catalysts in glove boxes and as an atmosphere for annealing processes. It can be purchased at welding supply stores. It is sometimes used as a reducing agent for high-temperature soldering and brazing, to remove oxidation of the joint without the use of flux. It also finds application in microchip production, where a high-temperature anneal in forming gas assists in silicon-silicon dioxide interface passivation. Quite often forming gas is used in furnaces during annealing or sintering for the thermal treatment of metals, because it reduces oxides on the metal surface. See also Endothermic gas References Gases Welding Brazing and soldering Metal heat treatments Industrial gases
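The dissociation stoichiometry above fixes the composition of fully cracked ammonia at 75% H2 and 25% N2 by mole. The minimal sketch below works through that arithmetic, together with an illustrative (hypothetical) helper computing how much extra nitrogen would dilute cracked gas down to a 5% H2 formulation of the kind mentioned above; it is a stoichiometric exercise, not a description of how commercial blends are necessarily produced.

def cracked_ammonia_composition():
    """Complete dissociation 2 NH3 -> 3 H2 + N2: each 2 mol of ammonia
    gives 3 mol H2 and 1 mol N2, i.e. 75% / 25% by mole."""
    h2, n2 = 3.0, 1.0
    total = h2 + n2
    return h2 / total, n2 / total

def n2_needed_for_target(h2_target=0.05):
    """Illustrative helper: moles of additional N2 per mole of cracked gas
    required to reach a target H2 mole fraction; solves
    0.75 / (1 + x) = h2_target for x."""
    return 0.75 / h2_target - 1.0

print(cracked_ammonia_composition())  # (0.75, 0.25)
print(n2_needed_for_target(0.05))     # 14.0 mol N2 per mole of cracked gas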
Forming gas
[ "Physics", "Chemistry", "Engineering" ]
331
[ "Matter", "Welding", "Metallurgical processes", "Phases of matter", "Industrial gases", "Metal heat treatments", "Mechanical engineering", "Chemical process engineering", "Statistical mechanics", "Gases" ]
8,792,209
https://en.wikipedia.org/wiki/Chronology%20of%20the%20Bible
The chronology of the Bible is an elaborate system of lifespans, 'generations', and other means by which the Masoretic Hebrew Bible (the text of the Bible most commonly in use today) measures the passage of events from the creation to around 164 BCE (the year of the re-dedication of the Second Temple). It was theological in intent, not historical in the modern sense, and functions as an implied prophecy whose key lies in the identification of the final event. The passage of time is measured initially by adding the ages of the Patriarchs at the birth of their firstborn sons, later through express statements, and later still by the synchronised reigns of the kings of Israel and Judah. The chronology is highly schematic, marking out a world cycle of 4,000 years. The Exodus takes place in the year A.M. 2666 (A.M. = Anno Mundi, years of the world from creation), exactly two thirds of the way through the four thousand years; the construction of Solomon's Temple is commenced 480 years, or 12 generations of 40 years each, after that; and 430 years pass between the building of Solomon's Temple and its destruction during the siege of Jerusalem. The 50 years between the destruction of the Temple and the "Decree of Cyrus" and end of the Babylonian Exile, added to the 430 years for which the Temple stood, produces another symmetrical period of 480 years. The 374 years between the Edict of Cyrus and the re-dedication of the Second Temple by the Maccabees complete the 4,000 year cycle. As recently as the 17th–18th century, the Archbishop of Armagh James Ussher (term 1625–1656), and scholars of the stature of Isaac Newton (1642–1727) believed that dating creation was knowable from the Bible. Today, the Genesis creation narrative has long since vanished from serious cosmology, the Patriarchs and the Exodus are no longer included in most histories of ancient Israel, and it is very widely accepted that the Book of Joshua has little historical value. Even the United Monarchy is questioned, and although scholars continue to advance proposals for reconciling the chronology of the Books of Kings, there is "little consensus on acceptable methods of dealing with conflicting data." Pre-Masoretic chronologies During the centuries that Hebrew Bible canon developed, theological chronologies emerged at different composition stages, although scholars have advanced various theories to identify these stages and their schematizations of time. These chronologies include: A "Progenitor" chronology that placed Abraham's birth at Anno Mundi (AM) 1600 and the foundation of the Temple at AM 2800. Alfred Jepsen proposed this chronology on the basis of melding time periods in the Samaritan and Masoretic recensions. Distinct chronologies can be inferred from the Priestly source (of the Torah), along with priestly authors of later biblical books, and the Deuteronomistic history, which purports to chronicle the reigns of the kings of Judah and Israel (with some significant historical corroboration, see below and History of ancient Israel and Judah). The Nehemiah chronology, devised to show 3,500 years from creation to Nehemiah's mission. Northcote says that this chronology was "probably composed by Levites in Jerusalem not long after Nehemiah's mission, perhaps sometime late in the fifth century BCE (i.e. nearing 400 BCE)." Bousset (1900) apparently sees this schematization, too, but calls it Proto-MT. 
A proto-Masoretic chronology, shaped by jubilees, with an overall literary showing of 3,480 years from creation to the completion of the Second Temple, per B.W. Bousset (1900), and which had the first Temple at 3,000 years. The Saros chronology that reflected 3,600 years leading up to the first Temple and 4,080 years from creation to the completion of the Second Temple. This scheme served as "the basis for the later Septuagint chronology and pre-SP Samaritan Pentateuch chronologies". Masoretic Text The Masoretic Text is the basis of modern Jewish and Christian bibles. While difficulties with biblical texts make it impossible to reach sure conclusions, perhaps the most widely held hypothesis is that it embodies an overall scheme of 4,000 years (a "great year") taking the re-dedication of the Temple by the Maccabees in 164 BCE as its end-point. Two motives may have led to this: first, there was a common idea at the time of the Maccabees that human history followed the plan of a divine "week" of seven "days" each lasting a thousand years; and second, a 4,000 year history (even longer in the Septuagint version) would establish the antiquity of the Jews against their pagan neighbours. However, Ronald Hendel argues that it is unlikely that 2nd century BCE Jews would have known that 374 years had passed from the Edict of Cyrus to the re-dedication of the Temple, and disputes the idea that the Masoretic chronology actually reflects a 4,000 year scheme. The following table summarises the Masoretic chronology from the creation of the world in Anno Mundi (Year of the World) 1 to its endpoint in AM 4000: Other chronologies: Septuagint, Samaritan, Jubilees, Seder Olam The canonical text of the Hebrew Bible is called the Masoretic Text, a text preserved by Jewish rabbis from early in the 7th and 10th centuries CE. There are, however, two other major texts, the Septuagint and the Samaritan Pentateuch. The Septuagint is a Koine Greek translation of the original Biblical Hebrew holy books. It is estimated that the first five books of the Septuagint, known as the Torah or Pentateuch, were translated in the mid-3rd century BCE and the remaining texts were translated in the 2nd century BCE. It mostly agrees with the Masoretic Text, but not in its chronology. The Samaritan text is preserved by the Samaritan community. This community dates from some time in the last few centuries BCE—just when is disputed—and, like the Septuagint, their Bible differs markedly from the Masoretic Text in its chronology. Modern scholars do not regard the Masoretic Text as superior to the other two—the Masoretic is sometimes clearly wrong, as when it says that Saul began to reign at one year of age and reigned for two years. More relevantly, all three texts have a clear purpose, which is not to record history so much as to bring the narrative to a point which represents the culmination of history. In the Samaritan Pentateuch, the genealogies and narratives were shaped to ensure a chronology of 3000 years from creation to the Israelite settlement of Canaan. Northcote reports this as the "Proto-SP chronology," as designated by John Skinner (1910), and he speculates that this chronology may have been extended to put the rebuilding of the Second Temple at an even AM 3900, after three 1,300-year phases. In the Septuagint version of the Pentateuch the Israelite chronology extends 4,777 years from creation to the finishing of the Second Temple, as witnessed in the Codex Alexandrinus manuscript. 
This calculation only emerges by supplementing Septuagint with the MT's chronology of kings. There were at least 3 variations of Septuagint chronology; Eusebius used one variation, now favored by Hughes and others. Northcote asserts that the Septuagint calendrical pattern was meant to demonstrate that there were 5,000 years from creation to a contemporaneous Ptolemaic Egypt, . The 2nd century BCE Book of Jubilees begins with the Creation and measures time in years, "weeks" of years (groups of seven years), and jubilees (sevens of sevens), so that the interval from Creation to the settlement of Canaan, for example, is exactly fifty jubilees (2450 years). Dating from the 2nd century CE, and still in common use among Jews, was the Seder Olam Rabbah ("Great Order of the World"), a work tracing the history of the world and the Jews from Creation to the 2nd century CE. It allows 410 years for the duration of the First Temple, 70 years from its destruction to the Second Temple, and 420 years for the duration of the Second Temple, making a total of 900 years for the two temples. This schematic approach to numbers accounts for its most remarkable feature, the fact that it shortens the entire Persian Empire from over two centuries to just 52 years, mirroring the 52 years it gives to the Babylonian exile. Christian use and development of biblical chronology The early church father Eusebius (), attempting to place Christ in the chronology, put his birth in AM 5199, and this became the accepted date for the Western Church. As the year AM 6000 (800 CE) approached there was increasing fear that the end of the world was nigh, until the Venerable Bede made his own calculations and found that Christ's birth took place in AM 3952, allowing several more centuries to the end of time. Martin Luther (1483–1546) switched the point of focus from Christ's birth to the Apostolic Council of Acts 15, which he placed in the year AM 4000, believing this marked the moment when the Mosaic Law was abolished and the new age of grace began. This was widely accepted among European Protestants, but in the English-speaking world, Archbishop James Ussher (1581–1656) calculated a date of 4004 BCE for creation; he was not the first to reach this result, but his chronology was so detailed that his dates were incorporated into the margins of English Bibles for the next two hundred years. This popular 4,000 year theological timespan, which ends with the birth of Jesus, differs from the 4,000 timespan later proposed interpretations of the Masoretic text, which ends with the Temple rededication in 164 BCE. The Israelite kings The chronology of the monarchy, unlike that of earlier periods, can be checked against non-biblical sources and seems to be correct in general terms. This raises the prospect that the Books of Kings, linking the Hebrew kings by accession and length of reign ("king X of Judah came to the throne in the nth year of king Y of Israel and ruled n years"), can be used to reconstruct a chronology for the monarchy, but the task has in fact proven intractably difficult. The problem is that the books contain numerous contradictions: to take just one example, since Rehoboam of Judah and Jeroboam of Israel began to rule at the same time (1 Kings 12), and since Ahaziah of Judah and Joram of Israel were killed at the same time (2 Kings 9:24, 27), the same amount of time should have elapsed in both kingdoms, but the count shows 95 years passing in Judah and 98 in Israel. 
In short, "[t]he data concerning the synchronisms appeared in hopeless contradiction with the data as to the lengths of reigns." Possibly the most widely followed attempt to reconcile the contradictions has been that proposed by Edwin R. Thiele in his The Mysterious Numbers of the Hebrew Kings (three editions between 1951 and 1983), but his work has been widely criticised for, among other things, introducing "innumerable" co-regencies, constructing a "complex system of calendars", and using "unique" patterns of calculation; as a result his following is largely among scholars "committed ... to a doctrine of scripture's absolute harmony" (the criticism is to be found in Brevard Childs' Introduction to the Old Testament as Scripture). The weaknesses in Thiele's work have led subsequent scholars to continue to propose chronologies, but, in the words of a recent commentary on Kings, there is "little consensus on acceptable methods of dealing with conflicting data." See also Biblical cosmology Biblical literalist chronology Chronology of the ancient Near East Chronology of Babylonia and Assyria Dating creation Development of the Hebrew Bible canon Development of the Old Testament canon Development of the New Testament canon History of ancient Israel and Judah Intertestamental period Jewish chronology Kings of Judah Missing years (Jewish calendar) Universal history Ussher chronology References Sources Bible Biblical studies Chronology Bible Timelines of Christianity
Chronology of the Bible
[ "Physics" ]
2,613
[ "Spacetime", "Chronology", "Physical quantities", "Time" ]
8,792,325
https://en.wikipedia.org/wiki/ISO/IEC%2027000
ISO/IEC 27000 is one of the standards in the ISO/IEC 27000 series of information security management systems (ISMS)-related standards. The formal title for ISO/IEC 27000 is Information technology — Security techniques — Information security management systems — Overview and vocabulary. The standard was developed by subcommittee 27 (SC27) of the first Joint Technical Committee (JTC1) of the International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC). ISO/IEC 27000 provides: An overview of, and introduction to, the entire ISO/IEC 27000 series. A formally defined glossary or vocabulary of the specialist terms used throughout the ISO/IEC 27000 series. ISO/IEC 27000 is available for free via the ITTF website. Overview and introduction The standard describes the purpose of an ISMS, a management system similar in concept to those recommended by other ISO standards such as ISO 9000 and ISO 14000, used to manage information security risks and controls within an organization. Bringing information security deliberately under overt management control is a central principle throughout the ISO/IEC 27000 series of standards. The target audience is users of the remaining ISO/IEC 27000-series information security management standards. Glossary Information security, like many technical subjects, has evolved a complex web of terminology. Relatively few authors take the trouble to define precisely what they mean, an approach which is unacceptable in the standards arena, as it potentially leads to confusion and devalues formal assessment and certification. As with ISO 9000 and ISO 14000, the base '000' standard is intended to address this. See also ISO/IEC 27001 ISO/IEC 27002 (formerly ISO/IEC 17799) ISO/IEC JTC 1/SC 27 - IT Security techniques References 27000
ISO/IEC 27000
[ "Technology" ]
366
[ "Computer security stubs", "Computing stubs" ]
8,792,426
https://en.wikipedia.org/wiki/TWA%20Flight%20843
TWA Flight 843 (TW843, TWA843) was a scheduled Trans World Airlines passenger flight that crashed after an aborted takeoff from John F. Kennedy International Airport (New York) to San Francisco International Airport (California) on July 30, 1992. Despite an intense fire after the crash, the crew was able to evacuate all 280 passengers from the aircraft. There was no loss of life, although the aircraft was destroyed by the fire. Background Aircraft The aircraft involved was a 20-year-old Lockheed L-1011 TriStar 1 that had first flown in 1972. It had been leased to Eastern Air Lines, Five Star Airlines, and American Trans Air. The aircraft was powered by three Rolls-Royce RB211-22B turbofan engines. The aircraft had previously been involved in a near mid-air collision, as TWA Flight 37, with American Airlines Flight 182 in 1975. Crew In command was 54-year-old Captain William Kinkead, a veteran TWA pilot who had been with the airline since 1965 and had 20,149 flight hours, including 2,397 hours on the L-1011 TriStar. He had also previously served with the United States Air Force. The first officer was 53-year-old Dennis Hergert, another veteran TWA pilot who had joined the airline in 1967 and had 15,242 flight hours with 5,183 of them on the L-1011 TriStar; 2,953 of which were as a first officer and 2,230 as a flight engineer. The flight engineer was 34-year-old Charles Long, another former U.S. Air Force pilot, who joined TWA in 1988. He was the least experienced member of the flight crew but still had sufficient flight experience, having clocked up a total of 3,922 flight hours, 2,266 of which were on the L-1011 TriStar. He was the head L-1011 flight engineer instructor in the New York domicile. Accident At 17:16:12 EDT, Flight 843 pushed back from the gate at JFK and taxied to runway 13R. The flight was cleared for takeoff at 17:40:10 with first officer Hergert as the pilot flying. The aircraft reached V1 (the speed at which takeoff can no longer be safely aborted) and VR (rotation speed) at 17:40:58 and 17:41:03, respectively. According to the flight data recorder (FDR) the aircraft began to rotate at . The first abnormality was indicated at 17:41:11 when the stick shaker activated. First officer Hergert said, "Gettin' a stall. You got it.", transferring control of the aircraft back to Captain Kinkead. The Captain said "OK" and, believing he had sufficient runway available, retarded the thrust levers and aborted the takeoff just six seconds after rotation. At the time the takeoff was rejected, the aircraft was above the ground and traveling at a speed of . The aircraft then slammed back onto the runway, having reached a maximum speed of during the attempted takeoff. Air traffic control (ATC) warned Flight 843 of "numerous flames" coming from the engines. In the NTSB report, "The Captain stated that he closed the thrust levers and put the airplane back on the runway. He applied full reverse thrust and maximum braking and the airplane began to decelerate, but not as fast as he had expected. He said that the brakes seemed to be losing their effectiveness and concluded that with approximately of runway remaining and the air speed still about 100 kts [], he would not be able to stop before reaching the blast fence at the end of the runway." Captain Kinkead turned the now-burning aircraft to the left and it went off the runway, finally stopping on an area of grass from runway 13R.
In addition to the nine flight attendants on board, there were five additional off-duty flight attendants who assisted in the evacuation. Although only three of eight exit doors were available for use, the evacuation was completed within two minutes, and the airport rescue and fire fighting teams' response was timely and adequate. Oakland rapper Saafir was a passenger on the plane and injured his back while jumping to the ground. Only 10 people (all of whom were passengers) were injured. Investigation The Captain told the National Transportation Safety Board (NTSB) that the aircraft's engines stalled after takeoff and felt that the aircraft was unsafe to fly as a result. In addition, two L-1011s had experienced engine fires in the past two years, the first at Boston in 1990, and the second occurring three months earlier at JFK. The Port Authority of New York and New Jersey stated that the fire aboard Flight 843 was the result of a fuel line rupturing, and that the aircraft leaked fuel on the runway. However, the NTSB did not find any evidence of a ruptured fuel line, nor was there any spilled fuel on the runway. The NTSB attributed the crash to human factors (crew resource management) and TWA training, procedural, maintenance and quality assurance failures. The angle-of-attack sensor that had caused the erroneous stall warning had been found unserviceable on nine previous occasions; it had received some checks, was put back into the parts pool, and was then fitted to the accident aircraft. It was found to have an intermittent fault that could not be detected by the crew during pre-start procedures. According to the report, the First Officer handed over control to the Captain shortly after take-off at an altitude of without a clear transfer of command, due to the erroneous activation of the stick shaker stall warning device. The First Officer made an exclamation of surprise and then said "Gettin' a stall" and then, two seconds later, "You got it". The Captain said "OK" and simultaneously experienced a feeling of the aircraft 'sinking' that may have been due to the relaxation of the controls by the First Officer, who believed they were stalling. (A normal procedure for a stall recovery is to lower the nose.) This may have had the effect of confirming that the aircraft was not climbing normally. In fact, the aircraft was performing normally and could have climbed safely away. The NTSB found that contributing factors included TWA not requiring crew pre-briefings denoting the responsibility each crew member had at each stage of the takeoff (these are standard procedures currently); nor were handover techniques clearly defined. It also noted that "The TWA procedure that allows the flight crews to initiate takeoffs without a predeparture briefing, does not adequately prepare the flight crews for coordination of potential abnormal circumstances during takeoff." It goes on to say: "The Captain made a split second decision to abort the takeoff believing that there was sufficient runway remaining." The extremely hard landing caused damage to the right wing, spilling fuel that was then ingested into the engines and started the fire. The NTSB praised Captain Kinkead for bringing the aircraft to a safe stop, the rest of the crew (including the off-duty flight attendants) for safely evacuating the aircraft, and the airport rescue and fire fighting services for responding in a timely and adequate manner.
At the same time, however, the NTSB criticized the flight crew for the decision to abort the takeoff after VR and for their response to the stick-shaker activation, both of which it judged inappropriate. See also Aviation accidents and incidents 2008 South Carolina Learjet 60 crash, another high-speed aborted take-off Kenya Airways Flight 431, another accident caused by a false stall warning Tower Air Flight 41, another accident that occurred during take-off from JFK References External links First-person account by flight attendant Kaye Chandler Aviation accidents and incidents in the United States in 1992 1992 in New York City 843 Accidents and incidents involving the Lockheed L-1011 Airliner accidents and incidents in New York City 1990s in Queens July 1992 events in the United States John F. Kennedy International Airport Airliner accidents and incidents caused by maintenance errors Airliner accidents and incidents caused by instrument failure Airliner accidents and incidents caused by mechanical failure Airliner accidents and incidents caused by pilot error
TWA Flight 843
[ "Materials_science" ]
1,683
[ "Airliner accidents and incidents caused by mechanical failure", "Mechanical failure" ]
8,792,751
https://en.wikipedia.org/wiki/Delta%20G
The Delta G, or Thor-Delta G, was an American expendable launch system used to launch two biological research satellites in 1966 and 1967. It was a member of the Delta family of rockets. The Delta G was a two-stage derivative of the Delta E. The first stage was a Thor missile in the DSV-2C configuration and the second stage was that of the Delta E. Three Castor-1 solid rocket boosters were clustered around the first stage. The solid-fuel upper stage used on the Delta E was not used on the Delta G. Both launches occurred from Cape Canaveral Air Force Station Launch Complex 17. The first was from pad 17A on 14 December 1966 at 19:20 GMT, with Biosatellite 1. At 22:04 on 7 September 1967, Biosatellite 2 was launched from pad B on the second Delta G. References Delta (rocket family)
Delta G
[ "Astronomy" ]
186
[ "Rocketry stubs", "Astronomy stubs" ]
8,793,206
https://en.wikipedia.org/wiki/External%20flow
In fluid mechanics, external flow is a flow in which boundary layers develop freely, without constraints imposed by adjacent surfaces. It can be defined as the flow of a fluid around a body that is completely submerged in it. Examples include fluid motion over a flat plate (inclined or parallel to the free stream velocity) and flow over curved surfaces such as a sphere, cylinder, airfoil, or turbine blade, water flowing around submarines, and air flowing around a truck; a 2000 paper analyzing the latter used computational fluid dynamics to model the three-dimensional flow structure and pressure distribution on the external surface of the truck. In a 2008 paper, external flow was described as "arguably the most common and best studied case in soft matter systems". The term can also be used simply to describe flow in any body of fluid external to the system under consideration. In external co-flow, flow in the external region occurs in the same direction as flow within the system of interest; this contrasts with external counterflow. References Aerodynamics Flow regimes
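To make the flat-plate example concrete, the sketch below (Python; the velocity, length, and viscosity values are illustrative assumptions, not figures from this article) computes the local Reynolds number and estimates the laminar boundary-layer thickness from the classical Blasius result δ ≈ 5.0x/√Re_x:

    import math

    # Illustrative assumptions: air at room temperature flowing
    # over a flat plate parallel to the free-stream velocity.
    U = 10.0       # free-stream velocity, m/s
    x = 0.5        # distance from the leading edge, m
    nu = 1.5e-5    # kinematic viscosity of air, m^2/s

    Re_x = U * x / nu                  # local Reynolds number
    delta = 5.0 * x / math.sqrt(Re_x)  # Blasius boundary-layer thickness

    print(f"Re_x = {Re_x:.3g}")              # ~3.3e5, below transition (~5e5)
    print(f"delta = {delta * 1000:.2f} mm")  # ~4.3 mm

Because nothing bounds the flow from above, the boundary layer grows freely with distance x, which is precisely what distinguishes external from internal flow.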
External flow
[ "Chemistry", "Engineering" ]
206
[ "Aerodynamics", "Flow regimes", "Aerospace engineering", "Fluid dynamics stubs", "Fluid dynamics" ]
8,793,238
https://en.wikipedia.org/wiki/IBM%20Office/36
Office/36 was a suite of applications marketed by IBM from 1983 to 2000 for the IBM System/36 family of midrange computers. IBM announced its System/36 Office Automation (OA) strategy in 1985. Office/36 could be purchased in its entirety or piecemeal. Components of Office/36 include: IDDU/36, the Interactive Data Definition Utility. Query/36, the Query utility. DisplayWrite/36, a word processing program. Personal Services/36, a calendaring system and an office messaging utility. Query/36 was not quite the same as SQL, but it had some similarities, especially the ability to very rapidly create a displayed recordset from a disk file (a loose modern analogy is sketched below). Note that SQL, also an IBM development, had not been standardized prior to 1986. DisplayWrite/36, in the same category as Microsoft Word, had online dictionaries, definition capabilities, and spell-check; unlike the standard S/36 products, it would straighten spillover text and scroll in real time. Considerable changes were required to the S/36 design to support Office/36 functionality, not the least of which was the capability to manage new container objects called "folders" and produce multiple extents to them on demand. Q/36 and DW/36 typically exceeded the 64K program limit of the S/36, both in editing and printing, so using Office products could heavily impact other applications. DW/36 allowed use of bold, underline, and other display formatting characteristics in real time. References Business software Office 36 Email systems Discontinued software
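Query/36 was menu-driven rather than SQL-based, so the following is only a loose analogy in modern terms (Python with sqlite3; the table and column names are hypothetical), showing the kind of operation described above: loading records from a file and immediately producing a filtered, ordered recordset for display:

    import sqlite3

    # Hypothetical data; Query/36 did not use SQL syntax. This merely
    # illustrates "create a displayed recordset from a disk file".
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE orders (id INTEGER, customer TEXT, total REAL)")
    con.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                    [(1, "ACME", 125.50), (2, "Globex", 88.00),
                     (3, "Initech", 310.25)])

    # The "recordset": a filtered, ordered view of the stored records.
    for row in con.execute("SELECT id, customer, total FROM orders "
                           "WHERE total > 100 ORDER BY total DESC"):
        print(row)

Standardized SQL arrived only in 1986, after Query/36 shipped, which is why the article is careful to call the resemblance a similarity rather than an implementation.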
IBM Office/36
[ "Technology" ]
322
[ "Email systems", "Telecommunications systems", "Computer systems" ]
8,793,453
https://en.wikipedia.org/wiki/Internal%20flow
In fluid mechanics, internal flow is a flow in which the fluid is completely confined by the inner surfaces of an item (e.g. a tube). Hence the boundary layer is unable to develop without eventually being constrained. The internal flow configuration represents a convenient geometry for heating and cooling fluids used in chemical processing, environmental control, and energy conversion technologies. Internal flow is fully dominated by viscosity throughout the flow field. An example is flow in a pipe. References Fluid mechanics
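A minimal sketch of the standard first steps in analysing such a flow (Python; the fluid properties and pipe dimensions are illustrative assumptions): compute the pipe Reynolds number and, if the flow is laminar (Re below roughly 2,300), the Hagen-Poiseuille pressure drop for fully developed flow:

    import math

    # Illustrative assumptions: water in a small circular tube.
    rho = 1000.0   # density, kg/m^3
    mu = 1.0e-3    # dynamic viscosity, Pa*s
    D = 0.01       # pipe diameter, m
    L = 2.0        # pipe length, m
    V = 0.1        # mean velocity, m/s

    Re = rho * V * D / mu
    print(f"Re = {Re:.0f}")  # 1000 -> laminar (below ~2300)

    if Re < 2300:
        # Hagen-Poiseuille: fully developed laminar pipe flow.
        Q = V * math.pi * D**2 / 4  # volumetric flow rate, m^3/s
        dp = 128 * mu * L * Q / (math.pi * D**4)
        print(f"pressure drop = {dp:.0f} Pa")  # 64 Pa

The confinement is what makes such closed-form results possible: unlike the external case, the boundary layers merge and the velocity profile stops evolving downstream.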
Internal flow
[ "Chemistry", "Engineering" ]
96
[ "Civil engineering", "Fluid mechanics", "Fluid dynamics stubs", "Fluid dynamics" ]
11,926,033
https://en.wikipedia.org/wiki/Ephrin
Ephrins (also known as ephrin ligands or Eph family receptor interacting proteins) are a family of proteins that serve as the ligands of the Eph receptor. Eph receptors in turn compose the largest known subfamily of receptor protein-tyrosine kinases (RTKs). Since ephrin ligands (ephrins) and Eph receptors (Ephs) are both membrane-bound proteins, binding and activation of Eph/ephrin intracellular signaling pathways can only occur via direct cell–cell interaction. Eph/ephrin signaling regulates a variety of biological processes during embryonic development including the guidance of axon growth cones, formation of tissue boundaries, cell migration, and segmentation. Additionally, Eph/ephrin signaling has been identified to play a critical role in the maintenance of several processes during adulthood including long-term potentiation, angiogenesis, and stem cell differentiation. Classification Ephrin ligands are divided into two subclasses of ephrin-A and ephrin-B based on their structure and linkage to the cell membrane. Ephrin-As are anchored to the membrane by a glycosylphosphatidylinositol (GPI) linkage and lack a cytoplasmic domain, while ephrin-Bs are attached to the membrane by a single transmembrane domain that contains a short cytoplasmic PDZ-binding motif. The genes that encode the ephrin-A and ephrin-B proteins are designated as EFNA and EFNB respectively. Eph receptors in turn are classified as either EphAs or EphBs based on their binding affinity for either the ephrin-A or ephrin-B ligands. Of the eight ephrins that have been identified in humans there are five known ephrin-A ligands (ephrin-A1-5) that interact with nine EphAs (EphA1-8 and EphA10) and three ephrin-B ligands (ephrin-B1-3) that interact with five EphBs (EphB1-4 and EphB6). Ephs of a particular subclass demonstrate an ability to bind with high affinity to all ephrins of the corresponding subclass, but in general have little to no cross-binding to ephrins of the opposing subclass. However, there are a few exceptions to this intrasubclass binding specificity, as it has recently been shown that ephrin-B3 is able to bind to and activate EphA4 and that ephrin-A5 can bind to and activate EphB2. EphAs/ephrin-As typically bind with high affinity, which can partially be attributed to the fact that ephrin-As interact with EphAs by a "lock-and-key" mechanism that requires little conformational change of the EphAs upon ligand binding. In contrast, EphBs typically bind with lower affinity than EphAs/ephrin-As, since they utilize an "induced fit" mechanism that requires a greater conformational change of EphBs to bind ephrin-Bs. Function Axon guidance During the development of the central nervous system Eph/ephrin signaling plays a critical role in the cell–cell mediated migration of several types of neuronal axons to their target destinations. Eph/ephrin signaling controls the guidance of neuronal axons through its ability to inhibit the survival of axonal growth cones, which repels the migrating axon away from the site of Eph/ephrin activation. The growth cones of migrating axons do not simply respond to absolute levels of Ephs or ephrins in cells that they contact, but rather respond to relative levels of Eph and ephrin expression, which allows migrating axons that express either Ephs or ephrins to be directed along gradients of Eph or ephrin expressing cells towards a destination where axonal growth cone survival is no longer completely inhibited.
Although Eph-ephrin activation is usually associated with decreased growth cone survival and the repulsion of migrating axons, it has recently been demonstrated that growth cone survival does not depend just on Eph-ephrin activation, but rather on the differential effects of "forward" signaling by the Eph receptor or "reverse" signaling by the ephrin ligand on growth cone survival. Retinotopic mapping The formation of an organized retinotopic map in the superior colliculus (SC) (referred to as the optic tectum in lower vertebrates) requires the proper migration of the axons of retinal ganglion cells (RGCs) from the retina to specific regions in the SC, a process mediated by gradients of Eph and ephrin expression in both the SC and in migrating RGCs leaving the retina. The decreased survival of axonal growth cones discussed above allows a gradient of high posterior to low anterior ephrin-A ligand expression in the SC to direct migrating RGC axons from the temporal region of the retina, which express a high level of EphA receptors, toward targets in the anterior SC, and RGCs from the nasal retina, which have low EphA expression, toward their final destination in the posterior SC. Similarly, a gradient of ephrin-B1 expression along the medial-ventral axis of the SC directs the migration of dorsal and ventral EphB-expressing RGCs to the lateral and medial SC respectively. Angiogenesis Ephrins promote angiogenesis in physiological and pathological conditions (e.g. cancer angiogenesis, neovascularisation in cerebral arteriovenous malformation). In particular, ephrin-B2 and EphB4 determine the arterial and venous fate of endothelial cells, respectively, through regulation of angiogenesis by modulating expression in the VEGF signalling pathway. Ephrin-B2 affects VEGF receptors (e.g. VEGFR3) through forward and reverse signalling pathways. The ephrin-B2 path extends to lymphangiogenesis, leading to internalization of VEGFR3 in cultured lymphatic endothelial cells. Though the role of ephrins in developmental angiogenesis has been elucidated, their role in tumor angiogenesis remains nebulous. Based on observations in ephrin-A2 deficient mice, ephrin-A2 may function in forward signalling in tumor angiogenesis; however, this ephrin does not contribute to vascular deformities during development. Moreover, ephrin-B2 and EphB4 may also contribute to tumor angiogenesis in addition to their roles in development, though the exact mechanism remains unclear. The ephrin-B2/EphB4 and ephrin-B3/EphB1 receptor pairs contribute more to vasculogenesis in addition to angiogenesis, whilst ephrin-A1/EphA2 appear to contribute exclusively to angiogenesis. Several types of ephrins and Eph receptors have been found to be upregulated in human cancers including breast, colon and liver cancers. Surprisingly, the downregulation of other types of ephrins and their receptors may also contribute to tumorigenesis; namely, EphA1 in colorectal cancers and EphB6 in melanoma. Displaying similar utility, different ephrins incorporate similar mechanistic pathways to supplement the growth of different structures. Migration factor in intestinal epithelial cell migration The class A and class B ephrin ligands work with the EphB family of cell-surface receptors to provide a steady, ordered, and specific migration of intestinal epithelial cells from the crypt to the villus.
The Wnt protein triggers expression of the EphB receptors deep within the crypt; Eph expression then decreases, and ephrin ligand expression increases, the more superficial a progenitor cell's placement. Migration is caused by a bi-directional signaling mechanism in which the engagement of the ephrin ligand with the EphB receptor regulates actin cytoskeleton dynamics to cause a "repulsion". Cells remain in place once the interaction comes to a stop. While the mucus-secreting goblet cells and the absorptive cells move towards the lumen, mature Paneth cells move in the opposite direction, to the bottom of the crypt, where they reside. With the exception of the ephrin ligand binding to EphA5, all other proteins from class A and B have been found in the intestine. However, the ephrin proteins A4, A8, B2, and B4 are at their highest levels in the fetal stage and decline with age. Experiments performed with Eph receptor knockout mice revealed disorder in the distribution of different cell types: absorptive cells of various differentiation were mixed with the stem cells within the villi. Without the receptor, the ephrin ligand proved insufficient for correct cell placement. Recent studies with knockout mice have also shown evidence of an indirect role for the ephrin-Eph interaction in the suppression of colorectal cancer. The development of adenomatous polyps created by uncontrolled outgrowth of epithelial cells is controlled by the ephrin-Eph interaction. Mice with an APC mutation but without ephrin-B protein lack the means to prevent the spread of EphB-positive tumor cells across the crypt-villus junction. Reverse signaling One unique property of the ephrin ligands is that many have the capacity to initiate a "reverse" signal that is separate and distinct from the intracellular signal activated in Eph receptor-expressing cells. Although the mechanisms by which "reverse" signaling occurs are not completely understood, both ephrin-As and ephrin-Bs have been shown to mediate cellular responses that are distinct from those associated with activation of their corresponding receptors. Specifically, ephrin-A5 was shown to stimulate growth cone spreading in spinal motor neurons and ephrin-B1 was shown to promote dendritic spine maturation. References Protein families Ligands (biochemistry) Single-pass transmembrane proteins Neurotrophic factors
Ephrin
[ "Chemistry", "Biology" ]
2,131
[ "Protein classification", "Signal transduction", "Ligands (biochemistry)", "Protein families", "Neurotrophic factors", "Neurochemistry" ]
11,926,825
https://en.wikipedia.org/wiki/Trident%20laser
The Trident Laser was a high power, sub-petawatt class, solid-state laser facility located at Los Alamos National Laboratory in Los Alamos, New Mexico. Originally built in the late 1980s for inertial confinement fusion (ICF) research by KMS Fusion, founded by Kip Siegel in Ann Arbor, Michigan, it was moved to Los Alamos in the early 1990s to be used in ICF and materials research. The Trident Laser has been decommissioned, with final experiments in 2017, and is now in storage at the University of Texas at Austin. The Trident Laser consisted of three main laser chains (A, B, and C) of neodymium glass (Nd:glass) amplifiers: two identical longpulse beamlines, A and B, and a third beamline, C, that could be operated either in longpulse or in chirped pulse amplification (CPA) shortpulse mode. Longpulse beams A and B were laser chains capable of delivering up to ~500 J at 1054 nm, which were frequency doubled to 527 nm and ~200 J depending on pulse duration; the pulse duration could be varied from 100 ps to 1 μs, a capability unique among large lasers in the US (and possibly the world). The third laser chain, beamline C, could produce up to ~200 J at 1054 nm, or could be frequency doubled to 527 nm at ~100 J in the longpulse mode with the same pulse duration variability as beams A and B; or it could be used in the Trident enhancement configuration, allowing the ~200 J beam to be compressed via CPA to ~600 fs and ~100 J, producing powers on the scale of a quarter petawatt (~200 TW), with a host of laser and plasma diagnostics. A 100 mJ, 500 fs probe beamline was also available. The 200 TW shortpulse ultra-high-intensity laser system set a world record in ion acceleration energy with the Target Normal Sheath Acceleration mechanism, producing protons at 58.5 MeV from a flat foil, beating the record set by the NOVA Petawatt laser in 1999, and 67.5 MeV protons from micro-cone targets. Trident delivered petawatt-class performance at a fifth of the power. The 200 TW or C beam was capable of focusing down to less than 10 micrometers in diameter to reach laser field intensities (irradiance) of ~2×10^20 W/cm², producing protons over 50 MeV as well as high quality, high energy x-rays. The interaction could be diagnosed with a Backscatter Focal Diagnostic similar to the Full Aperture Backscatter (FABS) diagnostic at the National Ignition Facility. A new front-end for the laser employed a 2nd-order cleaning technique, dubbed SPOPA (for Short-Pulse Optical Parametric Amplification) cleaning, which reduced the contrast to better than a 10^-9 ASE intensity ratio, making it one of the cleanest ultra-high-intensity, high-power lasers in the world. The laser was used for Fast Ignition ICF research, warm dense matter experiments, materials dynamics studies, and laser-matter interaction research, including particle acceleration, x-ray backlighting and laser-plasma instabilities (LPI). For more information, see the Trident User Facility website, Los Alamos National Laboratory, and the references below. See also List of laser articles References External links Trident Homepage (Archived) Los Alamos National Laboratory Solid-state lasers Research lasers Inertial confinement fusion research lasers
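The quarter-petawatt figure follows directly from the quoted pulse energy and duration; a quick check in Python using the approximate values given above:

    # Peak power = pulse energy / pulse duration, with the article's
    # approximate figures for the compressed shortpulse C beamline.
    E = 100.0      # pulse energy, J (~100 J after compression)
    tau = 600e-15  # pulse duration, s (~600 fs)

    P = E / tau
    print(f"peak power ~ {P / 1e12:.0f} TW")  # ~167 TW, i.e. of order 200 TW

    # Note: the focused irradiance (~2e20 W/cm^2 quoted above) depends
    # additionally on the focal-spot area and on the fraction of the pulse
    # energy actually delivered into the spot, so it is not simply P divided
    # by the area of a 10-micrometer spot.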
Trident laser
[ "Chemistry" ]
769
[ "Solid state engineering", "Solid-state lasers" ]
11,930,108
https://en.wikipedia.org/wiki/Volatility%20%28finance%29
In finance, volatility (usually denoted by "σ") is the degree of variation of a trading price series over time, usually measured by the standard deviation of logarithmic returns. Historic volatility measures a time series of past market prices. Implied volatility looks forward in time, being derived from the market price of a market-traded derivative (in particular, an option). Volatility terminology Volatility as described here refers to the actual volatility, more specifically: actual current volatility of a financial instrument for a specified period (for example 30 days or 90 days), based on historical prices over the specified period with the last observation the most recent price. actual historical volatility which refers to the volatility of a financial instrument over a specified period but with the last observation on a date in the past; nearly synonymous is realized volatility, the square root of the realized variance, in turn calculated using the sum of squared returns divided by the number of observations. actual future volatility which refers to the volatility of a financial instrument over a specified period starting at the current time and ending at a future date (normally the expiry date of an option) Now turning to implied volatility, we have: historical implied volatility which refers to the implied volatility observed from historical prices of the financial instrument (normally options) current implied volatility which refers to the implied volatility observed from current prices of the financial instrument future implied volatility which refers to the implied volatility observed from future prices of the financial instrument For a financial instrument whose price follows a Gaussian random walk, or Wiener process, the width of the distribution increases as time increases. This is because there is an increasing probability that the instrument's price will be farther away from the initial price as time increases. However, rather than increase linearly, the volatility increases with the square-root of time as time increases, because some fluctuations are expected to cancel each other out, so the most likely deviation after twice the time will not be twice the distance from zero. Since observed price changes do not follow Gaussian distributions, others such as the Lévy distribution are often used. These can capture attributes such as "fat tails". Volatility is a statistical measure of dispersion around the average of any random variable such as market parameters etc. Mathematical definition For any fund that evolves randomly with time, volatility is defined as the standard deviation of a sequence of random variables, each of which is the return of the fund over some corresponding sequence of (equally sized) times. Thus, "annualized" volatility σ_annually is the standard deviation of an instrument's yearly logarithmic returns. The generalized volatility σ_T for time horizon T in years is expressed as σ_T = σ_annually √T. Therefore, if the daily logarithmic returns of a stock have a standard deviation of σ_daily and the time period of returns is P in trading days, the annualized volatility is σ_annually = σ_daily √P. A common assumption is that P = 252 trading days in any given year. Then, if σ_daily = 0.01, the annualized volatility is σ_annually = 0.01 √252 ≈ 0.1587, or about 15.9%. The monthly volatility (i.e. for T = 1/12 of a year) is σ_monthly = σ_annually √(1/12) ≈ 0.0458. The formulas used above to convert returns or volatility measures from one time period to another assume a particular underlying model or process. These formulas are accurate extrapolations of a random walk, or Wiener process, whose steps have finite variance.
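A minimal numerical sketch of this annualization (Python with NumPy; the price series is synthetic and purely illustrative):

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic daily prices: a geometric random walk with ~1% daily vol.
    prices = 100 * np.exp(np.cumsum(rng.normal(0.0, 0.01, 1000)))

    log_returns = np.diff(np.log(prices))
    sigma_daily = log_returns.std(ddof=1)

    P = 252  # assumed trading days per year
    sigma_annual = sigma_daily * np.sqrt(P)
    sigma_monthly = sigma_annual * np.sqrt(1 / 12)

    print(f"daily   {sigma_daily:.4f}")    # ~0.01
    print(f"annual  {sigma_annual:.4f}")   # ~0.16
    print(f"monthly {sigma_monthly:.4f}")  # ~0.046

The √252 ≈ 15.9 scaling is also why practitioners multiply a daily volatility by roughly 16 to quote an annual figure, a shortcut discussed later in the article.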
However, more generally, for natural stochastic processes, the precise relationship between volatility measures for different time periods is more complicated. Some use the Lévy stability exponent α to extrapolate natural processes: σ_T = T^(1/α) σ. If α = 2 the Wiener process scaling relation is obtained, but some people believe α < 2 for financial activities such as stocks, indexes and so on. This was discovered by Benoît Mandelbrot, who looked at cotton prices and found that they followed a Lévy alpha-stable distribution with α = 1.7. (See New Scientist, 19 April 1997.) Volatility origin Much research has been devoted to modeling and forecasting the volatility of financial returns, and yet few theoretical models explain how volatility comes to exist in the first place. Roll (1984) shows that volatility is affected by market microstructure. Glosten and Milgrom (1985) show that at least one source of volatility can be explained by the liquidity provision process. When market makers infer the possibility of adverse selection, they adjust their trading ranges, which in turn increases the band of price oscillation. In September 2019, JPMorgan Chase created the Volfefe index, combining "volatility" and the "covfefe" meme, to quantify the effect of US President Donald Trump's tweets on market volatility. Volatility for investors Volatility matters to investors for at least seven reasons, several of which are alternative statements of the same feature or are directly consequent on each other: The wider the swings in an investment's price, the harder emotionally it is to not worry; Price volatility of a trading instrument can help to determine position sizing in a portfolio; When cash flows from selling a security are needed at a specific future date to meet a known fixed liability, higher volatility means a greater chance of a shortfall; Higher volatility of returns while saving for retirement results in a wider distribution of possible final portfolio values; Higher volatility of returns after retirement may result in withdrawals having a larger permanent impact on the portfolio's value; Price volatility presents opportunities to anyone with inside information to buy assets cheaply and sell when overpriced; Volatility affects pricing of options, being a parameter of the Black–Scholes model. Volatility versus direction Volatility does not measure the direction of price changes, merely their dispersion. This is because when calculating standard deviation (or variance), all differences are squared, so that negative and positive differences are combined into one quantity. Two instruments with different volatilities may have the same expected return, but the instrument with higher volatility will have larger swings in values over a given period of time. For example, a lower volatility stock may have an expected (average) return of 7%, with annual volatility of 5%. Ignoring compounding effects, this would indicate returns from approximately negative 3% to positive 17% most of the time (19 times out of 20, or 95% via a two standard deviation rule). A higher volatility stock, with the same expected return of 7% but with annual volatility of 20%, would indicate returns from approximately negative 33% to positive 47% most of the time (19 times out of 20, or 95%). These estimates assume a normal distribution; in reality stock price movements are found to be leptokurtotic (fat-tailed). Volatility over time Although the Black-Scholes equation assumes predictable constant volatility, this is not observed in real markets.
Amongst more realistic models are Emanuel Derman and Iraj Kani's and Bruno Dupire's local volatility, Poisson processes where volatility jumps to new levels with a predictable frequency, and the increasingly popular Heston model of stochastic volatility. It is common knowledge that many types of assets experience periods of high and low volatility. That is, during some periods, prices go up and down quickly, while during other times they barely move at all. In the foreign exchange market, price changes are seasonally heteroskedastic with periods of one day and one week. Periods when prices fall quickly (a crash) are often followed by prices going down even more, or going up by an unusual amount. Also, a time when prices rise quickly (a possible bubble) may often be followed by prices going up even more, or going down by an unusual amount. Most typically, extreme movements do not appear 'out of nowhere'; they are presaged by larger movements than usual or by known uncertainty in specific future events. This is termed autoregressive conditional heteroskedasticity. Whether such large movements have the same direction, or the opposite, is more difficult to say. And an increase in volatility does not always presage a further increase—the volatility may simply go back down again. Measures of volatility depend not only on the period over which it is measured, but also on the selected time resolution, as the information flow between short-term and long-term traders is asymmetric. As a result, volatility measured with high resolution contains information that is not covered by low resolution volatility and vice versa. The risk-parity weighted volatility of the three assets gold, Treasury bonds and Nasdaq, acting as proxy for the market portfolio, seems to have had a low point at 4% after turning upwards for the 8th time since 1974 in the summer of 2014. Alternative measures of volatility Some authors point out that realized volatility and implied volatility are backward and forward looking measures, and do not reflect current volatility. To address that issue, alternative, ensemble measures of volatility were suggested. One of the measures is defined as the standard deviation of ensemble returns instead of time series of returns. Another considers the regular sequence of directional-changes as the proxy for the instantaneous volatility. Volatility as it relates to options trading One method of measuring volatility, often used by quant option trading firms, divides volatility into two components: clean volatility, the amount of volatility caused by standard events like daily transactions and general noise; and dirty (or event) volatility, the amount caused by specific events like earnings or policy announcements. For instance, a company like Microsoft would have clean volatility caused by people buying and selling on a daily basis, but dirty volatility from events like quarterly earnings or a possible anti-trust announcement. Breaking down volatility into two components is useful in order to accurately price how much an option is worth, especially when identifying what events may contribute to a swing. The job of fundamental analysts at market makers and option trading boutique firms typically entails trying to assign numeric values to these components. Crude volatility estimation Using a simplification of the above formula it is possible to estimate annualized volatility based solely on approximate observations.
Suppose you notice that a market price index, which has a current value near 10,000, has moved about 100 points a day, on average, for many days. This would constitute a 1% daily movement, up or down. To annualize this, you can use the "rule of 16", that is, multiply by 16 to get 16% as the annual volatility. The rationale for this is that 16 is the square root of 256, which is approximately the number of trading days in a year (252). This also uses the fact that the standard deviation of the sum of n independent variables (with equal standard deviations) is √n times the standard deviation of the individual variables. However, importantly, this does not capture (or in some cases may give excessive weight to) occasional large movements in market price which occur less frequently than once a year. The average magnitude of the observations is merely an approximation of the standard deviation of the market index. Assuming that the market index daily changes are normally distributed with mean zero and standard deviation σ, the expected value of the magnitude of the observations is √(2/π)σ ≈ 0.798σ. The net effect is that this crude approach underestimates the true volatility by about 20%. Estimate of compound annual growth rate (CAGR) Consider the Taylor series log(1 + y) = y − y²/2 + y³/3 − ... Taking only the first two terms one has CAGR ≈ AR − σ²/2, where AR is the arithmetic average return. Volatility thus mathematically represents a drag on the CAGR (formalized as the "volatility tax"). Realistically, most financial assets have negative skewness and leptokurtosis, so this formula tends to be over-optimistic. Some people use a variant of the formula with an empirical correction factor k (typically five to ten) for a rough estimate. Criticisms of volatility forecasting models Despite the sophisticated composition of most volatility forecasting models, critics claim that their predictive power is similar to that of plain-vanilla measures, such as simple past volatility, especially out-of-sample, where different data are used to estimate the models and to test them. Other works have agreed, but claim critics failed to correctly implement the more complicated models. Some practitioners and portfolio managers seem to completely ignore or dismiss volatility forecasting models. For example, Nassim Taleb famously titled one of his Journal of Portfolio Management papers "We Don't Quite Know What We are Talking About When We Talk About Volatility". In a similar note, Emanuel Derman expressed his disillusion with the enormous supply of empirical models unsupported by theory. He argues that, while "theories are attempts to uncover the hidden principles underpinning the world around us, as Albert Einstein did with his theory of relativity", we should remember that "models are metaphors – analogies that describe one thing relative to another". See also Volatility risk Volatility beta References External links Graphical Comparison of Implied and Historical Volatility, video Diebold, Francis X.; Hickman, Andrew; Inoue, Atsushi & Schuermann, Til (1996) "Converting 1-Day Volatility to h-Day Volatility: Scaling by sqrt(h) is Worse than You Think" A short introduction to alternative mathematical concepts of volatility Volatility estimation from predicted return density Example based on Google daily return distribution using standard density function Research paper including excerpt from report entitled Identifying Rich and Cheap Volatility Excerpt from Enhanced Call Overwriting, a report by Ryan Renicker and Devapriya Mallick at Lehman Brothers (2005). Further reading Mathematical finance Technical analysis Quantity
Volatility (finance)
[ "Mathematics" ]
2,898
[ "Applied mathematics", "Quantity", "Mathematical finance" ]
11,930,474
https://en.wikipedia.org/wiki/French%20Gothic%20architecture
French Gothic architecture is an architectural style which emerged in France in 1140 and was dominant until the mid-16th century. The most notable examples are the great Gothic cathedrals of France, including Notre-Dame Cathedral, Reims Cathedral, Chartres Cathedral, and Amiens Cathedral. Its main characteristics are verticality, or height, and the use of the rib vault, flying buttresses and other architectural innovations to distribute the weight of the stone structures to supports on the outside, allowing unprecedented height and volume. The new techniques also permitted the addition of larger windows, including enormous stained glass windows, which fill the cathedrals with light. French scholars divide the Gothic of their country into four phases; British and American historians use similar periods. Gothique primitif (Primary or First Gothic), from shortly before 1140 until shortly after 1180, marked by tribunes above the aisles of basilicas; the British and American term for the period is Early Gothic. Gothique classique (Classic Gothic), from the 1180s to the first third of the 13th century, marked by basilicas without lateral tribunes and with triforia without windows; the British and American term for the period is High Gothic. Some buildings of this phase, like Chartres Cathedral, are included in Early Gothic; others, like Reims Cathedral and the western parts of Amiens Cathedral, are included in High Gothic. Gothique rayonnant (Shining or Radiant Gothic), from the second third of the 13th century to the first half of the 14th century, marked by triforia with windows and a general preference for stained glass instead of stone walls; it forms the greater portion of High Gothic, and American and British historians also use the term Rayonnant. Gothique flamboyant (Flaming Gothic), from the mid-14th century, marked by the swinging, flame-like forms of tracery that give the phase its name; British and American historians use the same term. The French style was widely copied in other parts of northern Europe, particularly Germany and England. It was gradually supplanted as the dominant French style in the mid-16th century by French Renaissance architecture. Origins French Gothic architecture was the result of the emergence in the 12th century of a powerful French state centered in the Île-de-France. During the reign of Louis VI of France (1081–1137), Paris was the principal residence of the Kings of France, Reims the place of coronation, and the Abbey of Saint-Denis their ceremonial burial place. The Abbot of Saint-Denis, Suger, was a counselor of Louis VI and Louis VII, as well as a historian. He oversaw the reconstruction of the ambulatory of Saint-Denis, making it the first and most influential example of Gothic architecture in France. The first complete Gothic cathedral, Sens Cathedral, was finished shortly afterwards. Over the later course of the Capetian dynasty (1180 to 1328), three kings, Philip Augustus (1180–1223), Louis IX of France (1226–1270), and Philip le Bel (1285–1314), established France as the major economic and political power on the Continent. The period also saw the founding of the University of Paris, or Sorbonne. It produced the High Gothic and the Flamboyant Gothic styles, and the construction of some of the most famous cathedrals, including Chartres Cathedral, Reims Cathedral, and Amiens Cathedral.
Primary or Early Gothic Style - Saint-Denis, Sens, Senlis, and Notre Dame The birthplace of the new style was the Basilica of Saint-Denis in the Île-de-France, not far north of Paris, where, in 1137, the Abbé Suger began the reconstruction of the Carolingian-era abbey church. Just to the west of the original church, he began building a new structure with two towers, and then, from 1140 to 1144, he reconstructed the old church. Most of his modifications were traditional, but he made one remarkable innovation; he decided to create a new choir at the east end of the building, using the pointed arch and the rib vault in the construction of the choir and the ambulatory with radiating chapels. The use of rib vaults, and buttresses outside supporting the walls, allowed the elimination of the traditional walls between the chapels, and the installation of large stained glass windows. This gave the ambulatory a striking openness, light, and greater height. The builders then constructed the nave of the church, also using rib vaults. It was constructed in four levels: the arcades on the ground floor, whose two rows of columns received the ribs of the ceiling vaults; the tribune above it, a gallery which concealed the massive contreforts or buttresses which pressed against the walls; the triforium, another, narrower gallery; and, just below the ceiling, the clerestorey, where the windows were located. The resulting greater height and light differed dramatically from the heaviness of Romanesque architecture. On the facade of the church, Suger introduced another innovation; he used columns in the form of statues of saints to decorate the portal of the church, adding a new element of verticality to the facade. This idea too was soon copied in new cathedrals. Ninety years later, the upper parts of the choir and the whole nave had to be renewed because of signs of decay; the new upper choir (on the arcades of the Primary Gothic) was built with a triforium with windows. This was the onset of the Rayonnant style (see below). The first cathedral constructed in the new style was Sens Cathedral, begun between 1135 and 1140 and consecrated in 1160. It featured a Gothic choir, six-part rib vaults over the nave and collateral aisles, alternating pillars and doubled columns to support the vaults, and flying buttresses. Note, however, that much of the ambulatory is still Romanesque, and all the adjacent chapels are later additions. One of the builders believed to have worked on that cathedral, William of Sens, later traveled to England and became the architect who reconstructed the choir of Canterbury Cathedral in the Gothic style. Sens Cathedral was soon followed by Senlis Cathedral (begun 1160), and the most prominent of all, Notre-Dame Cathedral in Paris (begun 1160). Their builders abandoned the traditional plans and introduced the new Gothic elements. The builders of Notre Dame went further by introducing the flying buttress, heavy columns of support outside the walls connected by arches to the walls, which received and counterbalanced the thrust from the rib vaults of the roof. This allowed the builders to construct higher walls and larger windows. Classic Gothic or High Gothic Cathedrals – Chartres, Bourges, Reims, western parts of Amiens The second phase of Gothic in France is called Gothique classique or Classic Gothic. The similar phase in English is called High Gothic.
From the end of the 12th century until the middle of the 13th century, the Gothic style spread from the cathedrals in Île-de-France to appear in other cities of northern France, notably Chartres Cathedral (begun 1200); Bourges Cathedral (1195 to 1230), Reims Cathedral (1211–1275), and Amiens Cathedral (begun 1220). The characteristic Gothic elements were refined to make the new cathedrals taller, wider, and more full of light. At Chartres, the use of the flying buttresses allowed the elimination of the tribune level, which allowed much higher arcades and nave, and larger windows. The pillars were made of a central column surrounded by four more slender columns, which reached up to support the arches of the vaulted ceiling. The rib vault changed from six to four ribs, simpler and stronger. The flying buttresses at Amiens and Chartres were strengthened by an additional arch and a supporting arcade, allowing even higher walls and more windows. At Reims, the buttresses were given greater weight and strength by the addition of heavy stone pinnacles on top. These were often decorated with statues of angels, and became an important decorative element of the High Gothic style. Another practical and decorative element, the gargoyle, appeared; it was an ornamental rain spout that channeled the water from the roof away from the building. At Amiens, the windows of the nave were made larger, and an additional row of clear glass windows flooded the interior with light. The new structural technologies allowed the enlargement of the transepts and the choirs at the east end of the cathedrals, creating the space for a ring of well-lit chapels. Rayonnant Gothic – Sainte-Chapelle and the rose windows of Notre-Dame The third period of French Gothic architecture, from the second half of the 13th century until the 1370s, is termed Rayonnant ("Radiant") in both French and English, describing the radiating pattern of the tracery in the stained glass windows, and also describing the tendency toward the use of more and more stained glass and less masonry in the design of the structure, until the walls seemed entirely made of glass. The most celebrated example was the chapel of Sainte-Chapelle, attached to the royal residence on the Palais de la Cité. An elaborate system of exterior columns and arches reduced the walls of the upper chapel to a thin framework for the enormous windows. The weight of each of the masonry gables above the archivolt of the windows also helped the walls to resist the thrust and to distribute the weight. Other landmarks of the Rayonnant Gothic are the two rose windows on the north and south of the transept of Notre-Dame Cathedral. Whereas earlier rose windows, like those of Amiens Cathedral, were framed by stone and occupied only a portion of the wall, these two windows, with a delicate lacelike framework, occupied the entire space between the pillars. Flamboyant Gothic - Rouen Cathedral, Sainte-Chapelle de Vincennes The Flamboyant Gothic style appeared beginning about 1350 and lasted until about 1500. Its characteristic features were more exuberant decoration, as the nobles and wealthy citizens of mostly northern French cities competed to build more and more elaborate churches and cathedrals. It took its name from the sinuous, flame-like designs which ornamented windows. Other new features included the arc en accolade, a window decorated with an arch, stone pinnacles and floral sculpture.
It also featured an increase in the number of nervures, or ribs, that supported and decorated each vault of the ceiling, both for greater support and decorative effect. Notable examples of Flamboyant Gothic include the western facade of Rouen Cathedral and Sainte-Chapelle de Vincennes in Paris, both built in the 1370s; and the choir of Mont Saint Michel Abbey (about 1448). Gothic architecture in the French regions The most famous examples of Gothic architecture are found in the Île-de-France and Champagne, but other French regions created their own original versions of the style. Norman Gothic Normandy at the end of the 12th century saw the construction of several notable Gothic cathedrals and churches. The characteristic features of Norman Gothic were sharply pointed arches, lavish use of decorative molding, and walls pierced with numerous passages. Norman architects and builders were active not only in Normandy, but also across the Channel in England. The high-quality Norman stone was cut and transported to England for use in English cathedrals. Notable examples of Norman Gothic include Lisieux Cathedral; Fécamp Abbey; the chevet of the Abbey of Saint-Étienne, Caen; Rouen Cathedral; Coutances Cathedral; the chevet of Le Mans Cathedral; Bayeux Cathedral; and the celebrated monastery at Mont-Saint-Michel. Angevin Gothic The Angevin Gothic style, or Plantagenet style, in the province of Anjou features vaults with elegant decorative ribs, as well as ornate columns. The style is found in the interior of Angers Cathedral (1032–1523), though many of the Gothic elements of the facade were replaced with Renaissance elements and towers. A fine example of Angevin Gothic is found in the medieval Saint Jean Hospital in Angers, which now contains the Musée Jean-Lurçat, a museum of contemporary tapestries. Maine Gothic Poitiers Cathedral in the historic province of Maine also features a distinctive regional Gothic style. It was begun in 1162 under King Henry II of England and Eleanor of Aquitaine. Its distinctive features, like those of Angevin Gothic, include convex vaults with ribs in decorative designs. Burgundian Gothic Burgundy also had its own version of Gothic, found in Nevers Cathedral (1211–1331), Dijon Cathedral (1280–1325), Chalon Cathedral (1220–1522), and Auxerre Cathedral (13th-16th century). The Burgundian Gothic tended to be more sober and monumental than the more ornate northern style, and often included elements of earlier Romanesque churches on the same site, such as the Romanesque crypt beneath the Gothic choir at Auxerre Cathedral. Other Burgundian features included colourful tile roofs in geometric patterns (Langres Cathedral). Meridional Gothic The south of France had its own distinct variation of the Gothic style: the Meridional or Southern French Gothic. A prominent example is Albi Cathedral in the Tarn Department, built between 1282 and 1480. It was originally constructed as a fortress, then transformed into a church. Due to a lack of suitable stone, it was constructed almost entirely of brick, and is one of the largest brick buildings in the world. In the Jacobins church of Toulouse, the grafting of a single apse of polygonal plan onto a church with two vessels gave birth to a starry vault whose complex organization anticipated the Flamboyant Gothic by more than a century. Tradition refers to this masterpiece as the "palm tree", because the ribs spring from the smooth shaft of the column like the fronds of a palm.
Gothic civil architecture
The largest civic building built in the Gothic style in France was the Palais des Papes (Palace of the Popes) in Avignon, constructed between 1252 and 1364, when the Popes fled the political chaos and wars enveloping Rome. Given the complicated political situation, it combined the functions of a church, a seat of government and a fortress.
In the 15th century, following the Late Gothic or Flamboyant period, some elements of Gothic decoration borrowed from cathedrals began to appear in civil architecture, particularly in the region of Flanders in northern France, and in Paris. The Hôtel de Ville of Compiègne has an imposing Gothic bell tower, featuring a spire surrounded by smaller towers, and its windows are decorated with ornate accolades, or ornamental arches. Similarly flamboyant town halls were found in Arras; Douai; Saint-Quentin, Aisne; and across the border in Belgium in Brussels and Bruges. Many of the finest buildings, however, were destroyed during World War I because of their proximity to the front lines.
Gothic features also appeared in the elaborate residences built by the nobility and wealthy bourgeoisie in Paris and other large cities. Examples include the Hôtel de Cluny (now the Musée de Cluny – Musée national du Moyen Âge) in Paris, and particularly the palatial house built by the merchant Jacques Cœur in Bourges (1440–1450). Another good example in Paris is the Tour Jean-sans-Peur, a nobleman's townhouse, which features a Gothic watch tower and a Flamboyant Gothic ceiling.
Transition between Gothic and Renaissance
During the Middle Ages, prosperous French cities competed to build the largest cathedral or the highest tower. One drawback of French Gothic architecture was its cost; it required many skilled craftsmen working for decades. Due to downturns in the economy, a number of French cathedrals were begun but never finished. They also sometimes suffered when the ambitions of the architects exceeded their technical skills. One example was Beauvais Cathedral. Its patrons and architects sought to build the tallest church in the world, with a vaulted choir 48 meters high, taller than its nearby competitor, Amiens Cathedral, at 42 meters. Work began in 1225, but the roof of the vault was too heavy for the walls and partially collapsed in 1272. The builders thickened the walls and rebuilt the vault, and in 1569 completed a tower 153 meters high, which from 1569 to 1573 made Beauvais Cathedral the tallest structure in the world. In 1573, however, the new tower collapsed, fortunately without any casualties. The church remains today as it was then, with the choir, part of the ambulatory, the apse and some chapels, but no nave or tower.
Beginning in the 1530s, the Flamboyant Gothic style of French religious and civil architecture also began to show the influence of the Italian Renaissance. Charles VIII of France and Louis XII of France had both participated in military campaigns in Italy and had seen the new architecture there. Large numbers of Italian stonemasons came to Paris to work on the new Pont Notre-Dame (1507–1512) and other construction sites. The Fontaine des Innocents, built by the sculptor Jean Goujon to celebrate the entrance of Henry II into Paris in 1549, was the first Renaissance monument in the city. It was soon followed by the new facade of the Cour Carrée of the Louvre, also decorated by Jean Goujon. The new Paris Hôtel de Ville (1533–1568) was also constructed in an Italianate rather than Gothic style.
Most important of all, the new Tuileries Palace by Philibert Delorme, built for Catherine de' Medici and begun in 1564, was inspired by Italian palaces. Religious buildings were slower to change. The Church of the Carmes-Déchaussés (1613–1620) on the rue de Vaugirard in Paris, and especially the church of Saint-Gervais-et-Saint-Protais by Salomon de Brosse (1615–1621), with a facade based on the superposition of the three orders of classical architecture, represented the new model. However, the Gothic style remained prominent in new churches. The Church of Saint-Eustache in Paris (1532–1640), which rivaled Notre-Dame in size, combined a Gothic plan with Renaissance decoration. In the course of the 17th century, the French classical style of François Mansart began to dominate; then, under Louis XIV, the grand French classical style, practiced by Jules Hardouin-Mansart, Louis Le Vau, and Claude Perrault, took center stage. Landmarks of the Gothic style, such as Notre-Dame, were modified with new interiors designed in the new style. Following the new fashion of his patron, Louis XIV, the poet Molière ridiculed the Gothic style in a 1669 poem: "...the insipid taste of Gothic ornamentation, these odious monstrosities of an ignorant age, produced by the torrents of barbarism...".
During the French Revolution, Gothic churches were symbols of the old regime and became targets for the Revolutionaries; the cathedrals were nationalized and stripped of ornament and valuables. The statues of the Biblical figures on the facade of Notre-Dame were beheaded, under the false belief that they were statues of the French kings. Under Napoleon Bonaparte, the cathedrals were returned to the church, but were left in a lamentable state of repair.
Military architecture
In the 13th century, the design of the château fort, or castle, was modified, based on the Byzantine and Muslim castles the French knights had seen during the Crusades. The new kind of fortification was called philippienne, after Philippe Auguste, who had taken part in the Crusades. The new fortifications were more geometric, usually square, with a high main tower, or donjon, in the center, which could be defended even if the walls of the castle were captured. The donjon of the Château de Vincennes, begun by Philip VI of France, is a good example. It is 52 meters high, the tallest military tower in Europe.
In the philippienne castle, other towers, usually round, were placed at the corners and along the walls, close enough together to support each other. The walls had two levels of walkways on the inside: an upper parapet with openings (créneaux) from which soldiers could watch or fire arrows on besiegers below; narrower openings through which they could shoot while sheltered; and floor openings (mâchicoulis, or machicolations), from which they could drop rocks, burning oil or other objects on the besiegers. The upper walls also had protected protruding balconies, échauguettes and bretèches, from which soldiers could see what was happening at the corners or on the ground below. In addition, the towers and walls were pierced with narrow vertical slits, called meurtrières, through which archers could fire arrows. In later castles, the slits took the form of crosses, so that archers could fire arbalètes, or crossbows, in different directions. Castles were surrounded by deep moats, spanned by a single drawbridge. The entrance was also protected by a portcullis, which could be opened and closed. The walls at the bottom were often sloping, and protected with earthen barriers.
A surviving example is the Château de Dourdan, southwest of Paris in the present-day Essonne department. After the end of the Hundred Years' War (1337–1453), with improvements in artillery, castles lost most of their military importance. They remained as symbols of the rank of their noble occupants; the narrow openings in the walls were often widened into the windows of bedchambers and ceremonial halls. The tower of the Château de Vincennes became a royal residence.
In the 19th century, portions of the Gothic walls and towers of the Cité de Carcassonne were restored, with some modification, by Eugène Viollet-le-Duc. He also rebuilt the Château de Pierrefonds (1393–1407), an unfinished medieval castle, making it into a neo-Gothic residence for Napoleon III. The project was still incomplete when Napoleon III was overthrown in 1870, but the château can be visited today.
Restoration and Gothic Revival
A large part of the Gothic architectural heritage of France, particularly the churches and monasteries, had been damaged or destroyed during the Revolution. Of the 300 churches in Paris in the 16th century, only 97 were still standing in 1800. The Basilica of St Denis had been stripped of its stained glass and monumental tombs, while the statues on the façade of the cathedral of Notre-Dame de Paris had been beheaded and taken down. Throughout the country, churches and monasteries had been demolished or turned into barns, cafes, schools, or prisons. The first effort to catalogue the remaining monuments was made in 1816 by Alexandre de Laborde, who wrote the first list of "Monuments of France". In 1831, interest in Gothic architecture grew even greater following the popular success of the romantic novel Notre-Dame de Paris by Victor Hugo. In 1832, Hugo wrote an article for the Revue des deux Mondes which declared war against the "massacre of ancient stones" and the "demolishers" of France's past. Louis Philippe declared that the restoration of churches and other monuments would be a priority of his regime. In October 1830, the position of Inspector of Historical Monuments had been created by the Interior Minister, François Guizot, a professor of history at the Sorbonne. In 1833, Prosper Mérimée became its second Inspector, and by far the most energetic and long-lasting; he held the position for twenty-seven years. Under Louis Philippe, French Gothic architecture was officially recognized as a treasure of French culture. Under Mérimée's direction, the first efforts to restore major Gothic monuments began. In 1835, the church of Saint-Séverin in Paris was among the first to undergo restoration, followed in 1836 by Sainte-Chapelle, which had been turned into a storage house for government archives after the Revolution. The restoration of Sainte-Chapelle was led by Félix Duban, with Jean-Baptiste Antoine Lassus and a young Eugène Viollet-le-Duc. In 1843, Lassus and Viollet-le-Duc won the competition for the restoration of Notre-Dame de Paris. Over the rest of the 19th century, all of the major Gothic cathedrals of France underwent extensive restoration.
French Gothic architecture also experienced a modest revival, largely confined to new churches. Neo-Gothic churches built in Paris included Sainte-Clotilde by Théodore Ballu (1841–1857) and Saint-Laurent by Simon-Claude-Constant Dufeux (1862–1865).
Jean-Baptiste Lassus became the most prolific neo-Gothic architect in France, constructing Saint-Nicolas de Nantes (1840), Sacré-Cœur de Moulins (1849), Saint-Pierre de Dijon (1850), Saint-Jean-Baptiste de Belleville (1853) and the Église de Cusset (1855). The church of Saint-Eugène-Sainte-Cécile in Paris, by Louis-Auguste Boileau and Adrien-Louis Lasson (1854–1855), was the most innovative example of the neo-Gothic; it combined a traditional Gothic design with a modern iron framework. Jules Verne was married in the church in 1857.
Characteristics
The rib vault
The Gothic style emerged from the innovative use of existing technologies, particularly the pointed arch and the rib vault. The rib vault was known in the earlier Romanesque period, but it was not widely or effectively used until the Gothic period. The crossed ribs of the vault carried the weight outwards and downwards, to clusters of supporting pillars and columns. The earlier rib vaults, used at Sens Cathedral and Notre-Dame Cathedral, had six compartments bordered by ribs and the crossing arch, which transferred the weight to alternating columns and pillars. A new innovation appeared during the High Gothic: the four-part rib vault, which was used in Chartres Cathedral, Amiens Cathedral and Reims Cathedral. The ribs of this vault distributed the weight more equally to the four supporting pillars below and established a closer connection between the nave and the lower portions of the church walls, and between the arcades below and the windows above. This allowed for greater height and thinner walls, and contributed to the strong impression of verticality given by the newer cathedrals.
The flying buttress
The second major innovation of the Gothic style was the flying buttress, which was first used at Notre-Dame Cathedral. It transferred the thrust of the weight of the roof outside the walls, where it was countered by the weight of the buttress. Heavy stone pinnacles were added to the top of the buttresses to precisely counterbalance the thrust from inside the walls. The buttress allowed a significant reduction in the thickness of the cathedral walls, and permitted the use of larger windows in the interior of the church. In buildings such as Sainte-Chapelle, thanks to such supports, the walls could be made almost entirely of stained glass.
The development of rib vaults and buttresses brought gradual changes to the interior structure of cathedrals. Early Gothic cathedrals had the walls of the nave built in four levels: an arcade with columns on the ground level; then the tribune, a gallery with windows; then the triforium, a row of smaller windows; and finally the high windows, just below the vaults. During the High Gothic period, with the development of the four-part rib vault and the flying buttress, the tribune was eliminated at Chartres and other new cathedrals, allowing taller windows and arcades. By the 15th century, at Rouen Cathedral, the triforium also disappeared, and the walls between the bays were filled with high windows.
The portal and tympanum
Another innovative feature of the French Gothic cathedral was the design of the portal, or entry, which by long Christian tradition faced west. The Basilica of St Denis had a triple portal, decorated with columns in the form of statues of apostles and saints around the doorways, and biblical scenes crowded with statuary over the doorways. This triple portal was adopted by all the major cathedrals.
A tympanum over the portal, crowded with sculptural figures illustrating a biblical story, became a standard feature of Gothic cathedrals. Following the example of Amiens, the tympanum over the central portal traditionally depicted the Last Judgement, the right portal showed the coronation of the Virgin Mary, and the left portal showed the lives of saints who were important in the diocese.
Stained glass and the rose window
Large stained glass windows and rose windows were another defining feature of the Gothic style. Some Gothic windows, like those at Chartres, were cut into the stone walls. Others, such as those in the chapels of Notre-Dame and Reims, were set in stone frames installed into the walls; later Gothic windows were typically in a stone frame separate from the wall, rather than cut into it. The most common form combined an oculus, a small round window, with two lancets, or windows with pointed arches, just below it. The rose window was the most famous window type of the Gothic style. Rose windows were placed in the transepts and over the portals to provide light to the nave. The largest were ten meters in diameter, and had a framework of stone armatures, often in an ornate floral pattern, to help them resist the wind. The early windows were made of pieces of tinted glass, touched up with grisaille painting, and held in place by pieces of lead that outlined the figures. As the windows grew larger, more intense colors were used. After 1260, the colors became lighter, and the combination of grisaille and pale shades of yellow became more common. Chartres Cathedral and Le Mans Cathedral have some of the finest surviving original windows.
Sculpture and symbolism – the "Book for the Poor"
The Gothic cathedral was a biblia pauperum, literally a "book for the poor", covered with sculpture illustrating biblical stories for the vast majority of parishioners, who were illiterate. The sculptures largely illustrated stories from the Bible, but also included stories and figures from mythology, and more complicated symbols taken from medieval philosophical and scientific teachings such as alchemy.
The exteriors of cathedrals and other Gothic churches were decorated with sculptures of a variety of fabulous and frightening grotesques or monsters. These included the gargoyle, the chimera, the dragon, the tarasque, and others, taken largely from legend and mythology. They were part of the visual message for the illiterate worshippers, symbols of the evil and danger that threatened those who did not follow the teachings of the church. The gargoyle also had a more practical purpose: gargoyles were the rain spouts of the cathedral. Rainwater ran from the roof into lead gutters, then down channels on the flying buttresses to the mouths of the gargoyles. The longer the gargoyle, the farther the water was projected from the walls, protecting the walls and windows from water damage. Gargoyles were used in large numbers to distribute the water as widely as possible.
Amid all the religious figures, some of the sculptural decoration was devoted to illustrating medieval science and philosophy. The porches of Notre-Dame Cathedral in Paris and of Amiens Cathedral are decorated with similar small carved figures holding circular plaques with symbols of transformation taken from alchemy.
The central pillar of the central door of Notre-Dame features a statue of a woman on a throne holding a sceptre in her left hand and, in her right hand, two books, one open (a symbol of public knowledge) and the other closed (esoteric knowledge), along with a ladder with seven steps, symbolizing the seven steps alchemists followed in their quest to transform ordinary metal into gold.
Another common feature of Gothic cathedrals was a labyrinth design, usually laid in stone on the floor in a central part of the cathedral. Inspired by the labyrinth of Greek legend constructed by King Minos as the home of the Minotaur, in cathedrals they were known as the "Path of Jerusalem" and symbolized the difficult and often roundabout path that a Christian sometimes had to follow in life to reach the gates of Paradise and salvation. Large labyrinths were originally found in Auxerre Cathedral, Sens Cathedral, Reims Cathedral, and Arras Cathedral, but these were removed during various renovations in the 18th century. The best surviving examples are in Chartres Cathedral, in its original form, and in Amiens Cathedral, where it was reconstructed in 1894.
The portal sculpture of Burgundy integrates classical literary elements with its 13th-century Gothic style. Two such examples are on the cathedral of Saint-Étienne in Auxerre, depicting Hercules, a satyr, and a sleeping faun; the Chartres–Reims cathedral's north transept illustrates the biblical tale of David and Bathsheba. Sens Cathedral's "Coronation of the Virgin" reflects a similar relief on Notre-Dame in Paris, and was created in a workshop that also made minor contributions to Spanish Gothic architecture.
Timeline of notable buildings
Because of the lengthy period of construction of Gothic cathedrals, few were built in a single style. Most, like Notre-Dame, have a combination of features constructed in several different periods, as well as features constructed after the Gothic age. Also, different sources give varying dates for time periods. This list primarily uses the time periods given in the Larousse encyclopedia online and the online Pedagogical Dossier of Gothic Architecture of the Cité de l'Architecture et du Patrimoine, Paris.
Early Gothic, Transition, or Primitive Gothic (1130–1180)
1130: Sens Cathedral, the first French Gothic cathedral, begun (consecrated 1171).
1135: Basilica of Saint-Denis reconstruction in the new style begun by Abbot Suger; the Gothic ambulatory was finished in 1144.
1145: Rouen Cathedral begun (consecrated 1237).
1150: Noyon Cathedral begun (completed 1231).
1153: Senlis Cathedral begun (consecrated 1191).
1155: Laon Cathedral begun; reconstructed with three additional bays and completed in 1220.
c. 1150: Angers Cathedral rebuilding from Romanesque to Angevin Gothic begun in the mid-12th century, completed 1250.
1162: Poitiers Cathedral begun (consecrated 1379).
1163: Notre-Dame de Paris begun; choir completed in 1172, cathedral consecrated in 1182.
1170: Lyon Cathedral begun (completed 14th century).
1170: Lisieux Cathedral in Normandy: reconstruction from Romanesque to Gothic begins; work continued into the 13th century.
High Gothic or Classic Gothic (1180–1230)
1183: Bourges Cathedral begun; the nave was finished by 1255, and the cathedral consecrated in 1324.
1194: Chartres Cathedral begun, to replace an earlier church destroyed by fire (consecrated 1260). The Flamboyant north spire was added after the earlier spire was destroyed by lightning.
1210: Coutances Cathedral, Normandy, begun (completed 1274).
1210: Toul Cathedral reconstruction from Romanesque begun; a Flamboyant facade was added in the 15th century.
1211: Reims Cathedral begun (completed 1345).
1217: Le Mans Cathedral begun (consecrated 1254).
1220: Amiens Cathedral begun (completed 1288). The rose window was added beginning in 1366.
1220 to 1270: Notre-Dame de Paris: addition of transepts and rose windows, modified buttresses.
1225: Beauvais Cathedral begun, but left unfinished after the vault partially collapses in 1272.
Rayonnant (1230–1420)
1231: Basilica of Saint-Denis enlarged with a new nave, transept, and rose windows (completed 1264).
1238: Sainte-Chapelle on the Île de la Cité in Paris begun (completed 1248).
1252: Palais des Papes in Avignon begun (major enlargement and modification between 1334 and 1364).
1284: Conciergerie and Palais de la Cité begun on the Île de la Cité in Paris.
1340–1410: Château de Vincennes keep and tower.
Flamboyant Gothic (1400–1520)
1405–1527: Notre-Dame de l'Épine (begun 1405–1406, completed 1527).
1435–1521: Church of Saint-Maclou, Rouen. The west facade and towers of Rouen Cathedral were rebuilt after a fire in the 16th century.
1493–1510: The north façade, south façade, and south porch of the Church of Notre-Dame de Louviers.
1500–1508: Beauvais Cathedral south transept constructed.
1507–13: Chartres Cathedral north spire, destroyed by lightning, rebuilt in the Flamboyant style.
See also
Building a Gothic cathedral
Early Gothic architecture
Rayonnant
Flamboyant
French Gothic stained glass windows
High Gothic
Southern French Gothic
Gothic cathedrals and churches
Gothic architecture
Romanesque architecture
Architecture of cathedrals and great churches
References
Bibliography
Martindale, Andrew. Gothic Art. Thames and Hudson, 1967 (in English and French).
Rivière, Rémi; Lavoye, Agnès. La Tour Jean sans Peur. Association des Amis de la tour Jean sans Peur, 2007.
External links
Mapping Gothic France, a project by Columbia University and Vassar College with a database of images, 360° panoramas, texts, charts and historical maps
Architectural history Architectural styles European architecture Catholic architecture 12th-century architecture 13th-century architecture 14th-century architecture 15th-century architecture 16th-century architecture Gothic
French Gothic architecture
[ "Engineering" ]
7,696
[ "Architectural history", "Architecture" ]
11,930,843
https://en.wikipedia.org/wiki/Infinity%20%28philosophy%29
In philosophy and theology, infinity is explored in articles under headings such as the Absolute, God, and Zeno's paradoxes.
In Greek philosophy, for example in Anaximander, "the Boundless" is the origin of all that is. He took the beginning or first principle to be an endless, unlimited primordial mass (ἄπειρον, apeiron). Jain metaphysics and mathematics were the first to define and delineate different "types" of infinities. The work of the mathematician Georg Cantor first placed infinity into a coherent mathematical framework. Keenly aware of his departure from traditional wisdom, Cantor also presented a comprehensive historical and philosophical discussion of infinity. In Christian theology, for example in the work of Duns Scotus, the infinite nature of God invokes a sense of being without constraint, rather than a sense of being unlimited in quantity.
Early thinking
Greek
Anaximander
An early engagement with the idea of infinity was made by Anaximander, who considered infinity to be a foundational and primitive basis of reality. Anaximander was the first in the Greek philosophical tradition to propose that the universe was infinite.
Anaxagoras
Anaxagoras (500–428 BCE) was of the opinion that the matter of the universe had an innate capacity for infinite division.
The Atomists
A group of thinkers of ancient Greece (later identified as the Atomists) similarly considered matter to be made of an infinite number of structures, conceived by imagining matter being divided or separated from itself an infinite number of times.
Aristotle and after
Aristotle (384–322 BCE) is credited with shaping more than a millennium of subsequent thought through his rejection of the idea of actual infinity. In Book 3 of his work entitled Physics, Aristotle deals with the concept of infinity in terms of his notion of actuality and of potentiality. This is often called potential infinity; however, there are two ideas mixed up with this. One is that it is always possible to find a number of things that surpasses any given number, even if there are not actually such things. The other is that we may quantify over infinite sets without restriction. For example, ∀n ∃m (m > n ∧ P(m)), which reads, "for any integer n, there exists an integer m > n such that P(m)". The second view is found in a clearer form in medieval writers such as William of Ockham: the parts are actually there, in some sense. However, in this view, no infinite magnitude can have a number, for whatever number we can imagine, there is always a larger one: "There are not so many (in number) that there are no more."
Aristotle's views on the continuum foreshadow some topological aspects of modern mathematical theories of the continuum. Aristotle's emphasis on the connectedness of the continuum may have inspired, in different ways, modern philosophers and mathematicians such as Charles Sanders Peirce, Cantor, and L. E. J. Brouwer. Among the scholastics, Aquinas also argued against the idea that infinity could be in any sense complete or a totality. Aristotle also deals with infinity in the context of the prime mover, in Book 7 of the same work; his reasoning was later studied and commented on by Simplicius.
Roman
Plotinus
Plotinus considered infinity during the 3rd century AD.
Simplicius
Simplicius (c. 490–560 AD) thought the concept "Mind" was infinite.
Augustine
Augustine thought infinity to be "incomprehensible for the human mind".
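Aristotle's two readings above can be restated in modern quantifier notation; the following is an editorial illustration, not notation found in the sources discussed here. The potential reading asserts only that for every number there is a larger witness, ∀n ∃m (m > n ∧ P(m)), with no completed infinite totality invoked. The unrestricted reading instead quantifies over the infinite set taken as a finished whole, as in ∀n ∈ ℤ, P(n), where the bound variable ranges over all the integers at once. The difference lies in where the infinity resides: in the first, in the inexhaustibility of the procedure of finding witnesses; in the second, in the domain of quantification itself.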
Early Indian thinking
The Jain upanga āgama Surya Prajnapti (c. 400 BC) classifies all numbers into three sets: enumerable, innumerable, and infinite. Each of these was further subdivided into three orders:
Enumerable: lowest, intermediate and highest
Innumerable: nearly innumerable, truly innumerable and innumerably innumerable
Infinite: nearly infinite, truly infinite, infinitely infinite
The Jains were the first to discard the idea that all infinities were the same or equal. They recognized different types of infinities: infinite in length (one dimension), infinite in area (two dimensions), infinite in volume (three dimensions), and infinite perpetually (an infinite number of dimensions).
According to Singh (1987), Joseph (2000) and Agrawal (2000), the highest enumerable number N of the Jains corresponds to the modern concept of aleph-null (the cardinal number of the infinite set of integers 1, 2, ...), the smallest transfinite cardinal number. The Jains also defined a whole system of infinite cardinal numbers, of which the highest enumerable number N is the smallest.
In the Jaina work on the theory of sets, two basic types of infinite numbers are distinguished. On both physical and ontological grounds, a distinction was made between asaṃkhyāta ("countless, innumerable") and ananta ("endless, unlimited"), between rigidly bounded and loosely bounded infinities.
Views from the Renaissance to modern times
Galileo
Galileo Galilei (February 15, 1564 – January 8, 1642) discussed the example of comparing the square numbers {1, 4, 9, 16, ...} with the natural numbers {1, 2, 3, 4, ...} as follows:
1 → 1
2 → 4
3 → 9
4 → 16
…
It appeared by this reasoning as though a "set" (Galileo did not use the terminology) which is naturally smaller than the "set" of which it is a part (since it does not contain all the members) is in some sense the same "size". Galileo found no way around this problem. The idea that size can be measured by one-to-one correspondence is today known as Hume's principle, although Hume, like Galileo, believed the principle could not be applied to the infinite. The same concept, applied by Georg Cantor, is used in relation to infinite sets.
Thomas Hobbes
Famously, the ultra-empiricist Hobbes (April 5, 1588 – December 4, 1679) tried to defend the idea of a potential infinity in light of the discovery, by Evangelista Torricelli, of a figure (Gabriel's Horn) whose surface area is infinite, but whose volume is finite. This motivation, however, came too late, as curves having infinite length yet bounding finite areas had been known much earlier.
John Locke
Locke (August 29, 1632 – October 28, 1704), in common with most of the empiricist philosophers, believed that we can have no proper idea of the infinite. The empiricists held that all our ideas were derived from sense data or "impressions", and since all sensory impressions are inherently finite, so too are our thoughts and ideas. Our idea of infinity is merely negative or privative. Locke considered that in reflections on the subject of eternity, which he classified as an infinity, humans are likely to make mistakes.
Modern philosophical views
Modern discussion of the infinite is now regarded as part of set theory and mathematics. Contemporary philosophers of mathematics engage with the topic of infinity and generally acknowledge its role in mathematical practice. Although set theory is now widely accepted, this was not always so.
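As a minimal illustration of the one-to-one correspondence in Galileo's example above, the following Python sketch pairs the first few natural numbers with their squares; the cutoff of 10 is arbitrary, since the correspondence itself never runs out.

    # Galileo's pairing: every natural number n is matched with exactly one
    # square n*n, even though the squares form a proper subset of the naturals.
    def galileo_pairs(limit):
        return [(n, n * n) for n in range(1, limit + 1)]

    for n, square in galileo_pairs(10):
        print(f"{n} -> {square}")

Because the pairing is exact in both directions, counting by correspondence assigns the two collections the same "size", which is precisely the tension Galileo identified.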
Influenced in part by L. E. J. Brouwer and verificationism, Wittgenstein (April 26, 1889 – April 29, 1951) made an impassioned attack upon axiomatic set theory, and upon the idea of the actual infinite, during his "middle period". Unlike the traditional empiricists, he thought that the infinite was in some way given to sense experience.
Emmanuel Levinas
The philosopher Emmanuel Levinas (January 12, 1906 – December 25, 1995) uses infinity to designate that which cannot be defined or reduced to knowledge or power. Levinas develops this conception in his magnum opus Totality and Infinity. He also wrote a work entitled Philosophy and the Idea of Infinity, published in 1957.
See also
Infinite monkey theorem
Measure problem (cosmology)
Philosophy of space and time
Notes
References
D. P. Agrawal (2000). Ancient Jaina Mathematics: an Introduction, Infinity Foundation.
L. C. Jain (1973). "Set theory in the Jaina school of mathematics", Indian Journal of History of Science.
A. Newstead (2001). "Aristotle and Modern Mathematical Theories of the Continuum", in Aristotle and Contemporary Science II, D. Sfendoni-Mentzou, J. Hattiangadi, and D. M. Johnson, eds. Frankfurt: Peter Lang, 113–129.
A. Newstead (2009). "Cantor on Infinity in Nature, Number, and the Divine Mind", American Catholic Philosophical Quarterly, 83 (4), 533–553.
Ian Pearce (2002). "Jainism", MacTutor History of Mathematics archive.
N. Singh (1988). "Jaina Theory of Actual Infinity and Transfinite Numbers", Journal of the Asiatic Society, Vol. 30.
External links
Thomas Taylor – A Dissertation on the Philosophy of Aristotle, in Four Books. In which his principle physical and metaphysical dogmas are unfolded, and it is shown, from undubitable evidence, that his philosophy has not been accurately known since the destruction of the Greeks. The insufficiency also of the philosophy that has been substituted by the moderns for that of Aristotle, is demonstrated. Published by Robert Wilks, London, 1812.
Metaphysical properties Physical cosmology
Infinity (philosophy)
[ "Physics", "Astronomy", "Mathematics" ]
1,991
[ "Theoretical physics", "Mathematical objects", "Infinity", "Astrophysics", "Physical cosmology", "Astronomical sub-disciplines" ]
11,931,424
https://en.wikipedia.org/wiki/InterSwitch%20Trunk
InterSwitch Trunk (IST) is one or more parallel point-to-point links (link aggregation) that connect two switches together to create a single logical switch. The IST allows the two switches to share addressing information, forwarding tables, and state information, permitting rapid (less than one second) fault detection and forwarding path modification. The link may have different names depending on the vendor: for example, Brocade calls it an Inter-Chassis Link (ICL), while Cisco calls it a Virtual Switch Link (VSL). Edge switches, servers or PCs see the two aggregate switches as one large switch. This allows any vendor's equipment configured to use the IEEE 802.3ad static link aggregation protocol to connect to both switches and take advantage of load balancing and redundant connections.
The IST protocol was developed by Nortel (later acquired by Avaya, whose networking business was subsequently acquired by Extreme Networks) to enhance the capabilities of link aggregation, and it must be configured before the SMLT, DSMLT or R-SMLT functions are configured on the two aggregate (core, distribution, or access) switches. The edge equipment can be configured with any of the following: Multi-Link Trunking (MLT), DMLT, IEEE 802.3ad static link aggregation, IEEE 802.3ad Static Gigabit EtherChannel (GEC), IEEE 802.3ad Static Fast EtherChannel (FEC), SMLT, DSMLT, and other static link aggregation protocols.
Patent
United States Patent 7173934
Product support
IST is supported on Nortel's Routing Switch 1600, 5000, 8300, ERS 8600 and MERS 8600 products, and also on Avaya's Virtual Services Platform VSP 7000 and VSP 9000.
See also
Avaya
Avaya Government Solutions
References
External links
Designing a Resilient Network Passport 8600 Split Multi-Link Trunking Always on Networking Avaya Ethernet Link protocols Network architecture Network topology Nortel protocols
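The load-balancing behavior described above is typically achieved by hashing a flow's header fields to pick one member link of the aggregation group, so that all packets of a given flow stay in order on a single link. The following Python sketch is an editorial illustration of that general idea under assumed header fields; it is not Nortel's or any other vendor's actual algorithm.

    import zlib

    # Hypothetical flow descriptor: (src MAC, dst MAC, src IP, dst IP).
    # Real switches hash a vendor-chosen subset of L2/L3/L4 header fields.
    def pick_member_link(flow, num_links):
        """Deterministically map a flow to one of the aggregated links."""
        key = "|".join(flow).encode()
        return zlib.crc32(key) % num_links

    flow = ("00:11:22:33:44:55", "66:77:88:99:aa:bb", "10.0.0.1", "10.0.0.2")
    print("Flow assigned to member link", pick_member_link(flow, num_links=4))

Because the mapping is deterministic, a flow keeps its link until the link set changes, which preserves packet ordering while spreading different flows across the trunk.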
InterSwitch Trunk
[ "Mathematics", "Engineering" ]
409
[ "Network topology", "Topology", "Computer networks engineering", "Network architecture" ]
11,931,453
https://en.wikipedia.org/wiki/Southern%20Cross%20Astronomical%20Society
The Southern Cross Astronomical Society, founded in 1922, is one of the oldest amateur astronomy societies in the Western Hemisphere. It is located in the Physics Department of Florida International University in Miami, Florida. As of February 2007, the society had over 600 members. See also List of astronomical societies References External links Southern Cross Astronomical Society official website Scientific societies based in the United States 1922 establishments in Florida Florida International University
Southern Cross Astronomical Society
[ "Astronomy" ]
82
[ "Astronomy stubs", "Astronomy organizations", "Astronomy organization stubs" ]
11,932,146
https://en.wikipedia.org/wiki/Geopolymer
Geopolymer is a vague pseudo-chemical term used to describe a class of inorganic, typically bulk ceramic-like materials that form covalently bonded, non-crystalline (amorphous) networks, often intermingled with other phases. Many geopolymers may also be classified as alkali-activated cements or acid-activated binders. They are mainly produced by a chemical reaction between a chemically reactive aluminosilicate powder (e.g. metakaolin or other clay-derived powders, natural pozzolan, or suitable glasses) and an aqueous solution (alkaline or acidic) that causes this powder to react and re-form into a solid monolith. The most common pathway to produce geopolymers is the reaction of metakaolin with sodium silicate, which is an alkaline solution, but other processes are also possible.
Commercially produced geopolymers may be used for fire- and heat-resistant coatings and adhesives, medicinal applications, high-temperature ceramics, new binders for fire-resistant fiber composites, toxic and radioactive waste encapsulation, and as cementing components in making or repairing concretes. The properties and uses of geopolymers are being explored in many scientific and industrial disciplines such as modern inorganic chemistry, physical chemistry, colloid chemistry, mineralogy, geology, and other types of engineering process technologies.
The term geopolymer was coined by Joseph Davidovits in 1978, due to the rock-forming minerals of geological origin used in the synthesis process. These materials and the associated terminology were popularized over the following decades via his work with the Institut Géopolymère (Geopolymer Institute).
Geopolymers are synthesized in one of two conditions:
in alkaline medium (Na+, K+, Li+, Cs+, Ca2+…)
in acidic medium (phosphoric acid, H3PO4)
The alkaline route is the most important in terms of research and development and commercial applications. Details on the acidic route have also been published.
Composition
In the 1950s, Viktor Glukhovsky developed concrete materials originally known as "soil silicate concretes" and "soil cements", but since the introduction of the geopolymer concept by Joseph Davidovits, the terminology and definitions of the word geopolymer have become more diverse and often conflicting. The word geopolymer is sometimes used to refer to naturally occurring organic macromolecules; that sense of the word differs from the now more common use of the terminology to describe inorganic materials, which can have either cement-like or ceramic-like character.
A geopolymer is essentially a mineral chemical compound or mixture of compounds consisting of repeating units, for example silico-oxide (-Si-O-Si-O-), silico-aluminate (-Si-O-Al-O-), ferro-silico-aluminate (-Fe-O-Si-O-Al-O-) or alumino-phosphate (-Al-O-P-O-), created through a process of geopolymerization. This method of describing mineral synthesis (geosynthesis) was first presented by Davidovits at an IUPAC symposium in 1976.
Even within the context of inorganic materials, there exist various definitions of the word geopolymer, which can include a relatively wide variety of low-temperature synthesized solid materials. The most typical geopolymer is generally described as resulting from the reaction between metakaolin (calcined kaolinitic clay) and a solution of sodium or potassium silicate (waterglass). Geopolymerization tends to result in a highly connected, disordered network of negatively charged tetrahedral oxide units balanced by the sodium or potassium ions.
In the simplest form, an example chemical formula for a geopolymer can be written as Na2O·Al2O3·nSiO2·wH2O, where n is usually between 2 and 4, and w is around 11–15. Geopolymers can be formulated with a wide variety of substituents in both the framework (silicon, aluminium) and non-framework (sodium) sites; most commonly potassium or calcium takes on the non-framework sites, but iron or phosphorus can in principle replace some of the aluminium or silicon. Geopolymerization usually occurs at ambient or slightly elevated temperature: the solid aluminosilicate raw materials (e.g. metakaolin) dissolve into the alkaline solution, then cross-link and polymerize into a growing gel phase, which then continues to set, harden, and gain strength.
Geopolymer synthesis
Covalent bonding
The fundamental unit within a geopolymer structure is a tetrahedral complex consisting of silicon or aluminium coordinated through covalent bonds to four oxygens. The geopolymer framework results from the cross-linking between these tetrahedra, which leads to a three-dimensional aluminosilicate network, where the negative charge associated with tetrahedral aluminium is balanced by a small cationic species, most commonly an alkali metal cation (Na+, K+ etc.). These alkali metal cations are often ion-exchangeable, as they are associated with, but only loosely bonded to, the main covalent network, similarly to the non-framework cations present in zeolites.
Oligomer formation
Geopolymerization is the process of combining many small molecules, known as oligomers, into a covalently bonded network. The reaction proceeds via the formation of oligomers (dimer, trimer, tetramer, pentamer), which are believed to contribute to the formation of the actual three-dimensional macromolecular framework, either through direct incorporation or through rearrangement via monomeric species. These oligomers are named by some geopolymer chemists as sialates, following the scheme developed by Davidovits, although this terminology is not universally accepted within the research community, due in part to confusion with the earlier (1952) use of the same word to refer to the salts of the important biomolecule sialic acid. Small oligomeric potassium aluminosilicate species, named according to the poly(sialate)/poly(sialate-siloxo) nomenclature, are key intermediates in potassium-based aluminosilicate geopolymerization. The aqueous chemistry of aluminosilicate oligomers is complex, and plays an important role in the discussion of zeolite synthesis, a process which has many details in common with geopolymerization.
Example of geopolymerization of a metakaolin precursor, in an alkaline medium
The reaction process broadly involves four main stages:
Alkaline hydrolysis of the layered structure of the calcined kaolinite
Formation of monomeric and oligomeric species
In the presence of waterglass (soluble potassium or sodium silicate), formation of cyclic Al-Si structures, whereby hydroxide is liberated by condensation reactions and can react again
Geopolymerization (polycondensation) into polymeric 3D networks
The reaction processes involving other aluminosilicate precursors (e.g. low-calcium fly ash, crushed or synthetic glasses, natural pozzolans) are broadly similar to the steps described above.
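The oxide ratios that recur throughout this article, such as the Si:Al ratio of the binder and the SiO2:M2O molar ratio (MR) of the activating solution, follow from simple molar arithmetic. The Python sketch below illustrates that arithmetic for a hypothetical metakaolin and sodium silicate mix; the batch quantities and solution composition are invented for the example, and only the molar masses are fixed chemistry.

    # Approximate molar masses in g/mol.
    M_SIO2, M_NA2O, M_NAOH = 60.08, 61.98, 40.00
    M_METAKAOLIN = 222.12  # Al2O3·2SiO2 per formula unit

    def activator_mr(waterglass_g, wt_sio2, wt_na2o, naoh_g=0.0):
        """SiO2:Na2O molar ratio of a blended activating solution.
        Dissolved NaOH counts as Na2O via 2 NaOH -> Na2O + H2O."""
        mol_sio2 = waterglass_g * wt_sio2 / M_SIO2
        mol_na2o = waterglass_g * wt_na2o / M_NA2O + naoh_g / (2 * M_NAOH)
        return mol_sio2 / mol_na2o

    # Hypothetical mix: 100 g waterglass (27 wt% SiO2, 8 wt% Na2O) plus 5 g
    # NaOH, added to lower the ratio toward the ranges discussed below.
    print("Activator SiO2:Na2O =", round(activator_mr(100.0, 0.27, 0.08, naoh_g=5.0), 2))

    # Overall Si:Al of a paste with 80 g metakaolin: each formula unit
    # contributes 2 Al and 2 Si; the waterglass adds extra Si.
    units = 80.0 / M_METAKAOLIN
    mol_al = 2 * units
    mol_si = 2 * units + 100.0 * 0.27 / M_SIO2
    print("Mix Si:Al =", round(mol_si / mol_al, 2))

This prints an activator ratio of about 2.35 and a mix Si:Al of about 1.6, values in the general range this article cites for metakaolin-based systems.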
Geopolymer 3D-frameworks and water
Geopolymerization forms aluminosilicate frameworks that are similar to those of some rock-forming minerals, but lacking long-range crystalline order and generally containing water, both in chemically bound sites (hydroxyl groups) and in molecular form as pore water. This water can be removed at temperatures above 100–200 °C. A geopolymer thus contains both bound water (Si-OH groups) and free water; some of the water is associated with the framework similarly to zeolitic water, and some is in larger pores and can be readily released and removed. Cation hydration and the locations and mobility of water molecules in pores are important for lower-temperature applications, such as the use of geopolymers as cements. After dehydroxylation (and dehydration), generally above 250 °C, geopolymers can then crystallise above 800–1000 °C (depending on the nature of the alkali cation present).
Commercial applications
There exists a wide variety of potential and existing applications. Some geopolymer applications are still in development, whereas others are already industrialized and commercialized. They are listed in three major categories:
Geopolymer cements and concretes
Building materials (for example, clay bricks)
Low-CO2 cements and concretes
Radioactive and toxic waste containment
Geopolymer resins and binders
Fire-resistant materials, thermal insulation, foams
Low-energy ceramic tiles, refractory items, thermal shock refractories
High-tech resin systems, paints, binders and grouts
Bio-technologies (materials for medicinal applications)
Foundry industry (resins), tooling for the manufacture of organic fiber composites
Composites for infrastructure repair and strengthening
Fire-resistant and heat-resistant high-tech carbon-fiber composites for aircraft interiors and automobiles
Arts and archaeology
Decorative stone artifacts, arts and decoration
Cultural heritage, archaeology and history of sciences
Geopolymer cements
From a terminological point of view, a geopolymer cement is a binding system that hardens at room temperature, like regular Portland cement. Geopolymer cement is being developed and utilised as an alternative to conventional Portland cement for use in transportation, infrastructure, construction and offshore applications. Production of geopolymer cement requires an aluminosilicate precursor material such as metakaolin or fly ash, a user-friendly alkaline reagent (for example, sodium or potassium soluble silicates with a molar ratio (MR) SiO2:M2O ≥ 1.65, M being sodium or potassium) and water (see the definition of a "user-friendly" reagent below). Room-temperature hardening is more readily achieved with the addition of a source of calcium cations, often blast furnace slag.
Geopolymer cements can be formulated to cure more rapidly than Portland-based cements; some mixes gain most of their ultimate strength within 24 hours. However, they must also set slowly enough that they can be mixed at a batch plant, either for pre-casting or for delivery in a concrete mixer. Geopolymer cement also has the ability to form a strong chemical bond with silicate rock-based aggregates.
There is often confusion between the meanings of the terms "geopolymer cement" and "geopolymer concrete". A cement is a binder, whereas concrete is the composite material resulting from the mixing and hardening of cement with water (or an alkaline solution in the case of geopolymer cement) and stone aggregates.
Materials of both types (geopolymer cements and geopolymer concretes) are commercially available in various markets internationally.
Alkali-activated materials vs. geopolymer cements
There exists some confusion in the terminology applied to geopolymers, alkali-activated cements and concretes, and related materials, which have been described by a variety of names, including "soil silicate concretes" and "soil cements". Terminology related to alkali-activated materials or alkali-activated geopolymers is also in wide (but debated) use. These materials, often abbreviated AAM, encompass the specific fields of alkali-activated slags, alkali-activated coal fly ashes, and various blended cementing systems.
User-friendly alkaline reagents
Geopolymerization uses chemical ingredients that may be dangerous and therefore requires some safety procedures. Material safety rules classify alkaline products in two categories: corrosive products (named here: hostile) and irritant products (named here: friendly). Alkaline reagents belonging to the second (less elevated pH) class may be termed user-friendly, although the irritant nature of the alkaline component and the potential inhalation risk of powders still require the selection and use of appropriate personal protective equipment, as in any situation where chemicals or powders are handled.
The development of some alkali-activated cements, as shown in numerous published recipes (especially those based on fly ashes), uses alkali silicates with molar ratios SiO2:M2O below 1.20, or is based on concentrated NaOH. These conditions are not considered as user-friendly as those with more moderate pH values, and require careful consideration of chemical safety handling laws, regulations, and state directives. Conversely, geopolymer cement recipes employed in the field generally involve alkaline soluble silicates with starting molar ratios ranging from 1.45 to 1.95, particularly 1.60 to 1.85, i.e. user-friendly conditions. It may happen that for research, some laboratory recipes have molar ratios in the 1.20 to 1.45 range.
Examples of materials that are sometimes called geopolymer cements
Commercial geopolymer cements were developed in the 1980s, of the type (K,Na,Ca)-aluminosilicate (or "slag-based geopolymer cement"), and resulted from the research carried out by Joseph Davidovits and J. L. Sawyer at Lone Star Industries, USA, marketed as Pyrament® cement. The US patent 4,509,985 was granted on April 9, 1985, with the title "Early high-strength mineral polymer". In the 1990s, building on knowledge of the synthesis of zeolites from fly ashes, Wastiels et al., Silverstrim et al., and van Jaarsveld and van Deventer developed geopolymeric fly-ash-based cements. Materials based on siliceous fly ashes (EN 197), also called class F fly ashes (ASTM C618), are known:
alkali-activated fly ash geopolymer: in many (but not all) cases requires heat curing at 60–80 °C; not manufactured separately as a cement, but rather produced directly as a fly-ash-based concrete. NaOH + fly ash: partially reacted fly ash particles embedded in an alumino-silicate gel with Si:Al = 1 to 2, with zeolitic type (chabazite-Na and sodalite) structures.
slag/fly ash-based geopolymer cement: room-temperature cement hardening. Alkali metal silicate solution + blast furnace slag + fly ash: fly ash particles embedded in a geopolymeric matrix with Si:Al ~ 2.
Such cements can be produced with "user-friendly" (not extremely high pH) activating solutions. The properties of iron-containing "ferri-sialate"-based geopolymer cements are similar to those of rock-based geopolymer cements, but involve geological elements, or metallurgical slags, with high iron oxide content. The hypothesised binder chemistry is (Ca,K)-(Fe-O)-(Si-O-Al-O). Rock-based geopolymer cements can be formed by the reaction of natural pozzolanic materials under alkaline conditions, and geopolymers derived from calcined clays (e.g. metakaolin) can also be produced in the form of cements.
CO2 emissions during manufacturing
Geopolymer cements may be designed to have a lower attributed emission of carbon dioxide than some other widely used materials such as Portland cement. Geopolymers can use industrial byproducts and wastes containing aluminosilicate phases in manufacturing, which minimizes CO2 emissions and lowers environmental impact.
The need for standards
In June 2012, the institution ASTM International organized a symposium on Geopolymer Binder Systems. The introduction to the symposium states: "When performance specifications for Portland cement were written, non-portland binders were uncommon... New binders such as geopolymers are being increasingly researched, marketed as specialty products, and explored for use in structural concrete. This symposium is intended to provide an opportunity for ASTM to consider whether the existing cement standards provide, on the one hand, an effective framework for further exploration of geopolymer binders and, on the other hand, reliable protection for users of these materials."
The existing Portland cement standards are not adapted to geopolymer cements; new standards must be elaborated by an ad hoc committee. Yet to do so requires the existence of standard geopolymer cements. Presently, every expert presents their own recipe based on local raw materials (wastes, by-products or extracted materials). There is a need to select the right geopolymer cement categories. The 2012 State of the Geopolymer R&D suggested selecting two categories, namely:
type 2 slag/fly ash-based geopolymer cement: fly ashes are available in the major emerging countries;
ferro-sialate-based geopolymer cement: this geological iron-rich raw material is present in all countries throughout the globe;
along with the appropriate user-friendly geopolymeric reagent.
Geopolymers as ceramics
Geopolymers can be used as a low-cost and/or chemically flexible route to ceramic production, both to produce monolithic specimens and as the continuous (binder) phase in composites with particulate or fibrous dispersed phases.
Room-temperature processed materials
Geopolymers produced at room temperature are typically hard, brittle, castable, and mechanically strong. This combination of characteristics offers the opportunity for their usage in a variety of applications in which other ceramics (e.g. porcelain) are conventionally used. Some of the first patented applications of geopolymer-type materials (actually predating the coining of the term geopolymer by multiple decades) relate to use in automobile spark plugs.
Thermal processing of geopolymers to produce ceramics
It is also possible to use geopolymers as a versatile pathway to produce crystalline ceramics or glass-ceramics, by forming a geopolymer through room-temperature setting and then heating (calcining) it to the temperature needed to convert the crystallographically disordered geopolymer into the desired crystalline phases (e.g. leucite, pollucite and others).
Geopolymer applications in arts and archaeology
Because geopolymer artifacts can look like natural stone, several artists started to cast replicas of their sculptures in silicone rubber molds. For example, in the 1980s, the French artist Georges Grimal worked on several geopolymer castable stone formulations.
Egyptian pyramid stones
In the mid-1980s, Joseph Davidovits presented his first analytical results carried out on samples sourced from Egyptian pyramids. He claimed that the ancient Egyptians used a geopolymeric reaction to make re-agglomerated limestone blocks. Later, several materials scientists and physicists took up these archaeological studies and published results on pyramid stones claiming synthetic origins. However, the theories of a synthetic origin of the pyramid stones have been stridently disputed by other geologists, materials scientists, and archaeologists.
Roman cements
It has also been claimed that the Roman lime-pozzolan cements used in the building of some important structures, especially works related to water storage (cisterns, aqueducts), have chemical parallels to geopolymeric materials.
See also
Zeolite
References
External links
Geopolymer science. Science Direct. Elsevier. 2024
Inorganic chemistry Geochemistry Polymers Inorganic polymers Silicates Aluminosilicates Ceramic materials Cement Resins Geopolymers Building materials
Geopolymer
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
4,215
[ "Ceramic engineering", "Geopolymers", "Resins", "Inorganic compounds", "Building engineering", "Inorganic polymers", "Unsolved problems in physics", "Architecture", "Construction", "Materials", "Ceramic materials", "nan", "Polymer chemistry", "Polymers", "Amorphous solids", "Matter", ...
11,932,245
https://en.wikipedia.org/wiki/Digital%20Rights%20Ireland
Digital Rights Ireland is a digital rights advocacy and lobbying group based in Ireland. The group works for civil liberties in a digital age.
Telecommunications data retention
The group brought an action before the Irish High Court challenging the telecommunications data retention provided for by the Criminal Justice (Terrorist Offences) Act of 2005; in 2012 the High Court made a reference in the case to the Court of Justice of the European Union. Digital Rights Ireland argues that the act led to Gardaí accessing retained data without having a specific crime to investigate, citing remarks by the Data Protection Commissioner. On 8 April 2014, the Court of Justice of the European Union declared the EU Data Retention Directive invalid in response to the case brought by Digital Rights Ireland against the Irish authorities and others.
File sharing
The Irish Recorded Music Association has sent letters to people it accuses of sharing its music through file-sharing networks, demanding damages for financial losses. One issue is how the files belonging to the alleged file-sharers were searched: MediaSentry software was used to search their machines, but because it does not limit itself to searching only folders used for file sharing, its use raised questions about violation of privacy. MediaSentry itself is based in the United States, which has less data protection legislation than the European Union; this has been an issue in cases in the Netherlands and France. Another issue is Internet service providers being compelled to identify users; such actions remain a concern for DRI.
Former TD Dr. Jerry Cowley requested that the complaints referee investigate whether his telephone was being tapped. DRI expressed concern, noting that there is no equivalent of the Wilson Doctrine in Irish law. Fine Gael has also shown concern at the number of telephone taps authorised by former Minister for Justice Michael McDowell, and DRI said that the reasons given for withholding that information were unacceptable.
Other areas of work
Other issues addressed by the group include:
ID cards
Electronic passports
Online defamation
Leaking of confidential information by civil servants
See also
Internet censorship in the Republic of Ireland
Digital rights
References
External links
Digital Rights Ireland official website
Dáil debate on Criminal Justice (Terrorist Offences) Act, 2005 (Also available in Acrobat format.)
Internet privacy organizations Politics and technology Internet-related activism Computer law organizations Intellectual property activism Privacy organizations Radio-frequency identification Civil liberties advocacy groups Intellectual property organizations Political organisations based in the Republic of Ireland Copyright law organizations Digital rights organizations
Digital Rights Ireland
[ "Engineering" ]
471
[ "Radio-frequency identification", "Radio electronics" ]
11,932,515
https://en.wikipedia.org/wiki/Proprotor
A proprotor is a spinning airfoil that functions as both an airplane-style propeller and a helicopter-style rotor. Several proprotor-equipped convertiplanes, such as the Bell Boeing V-22 Osprey tiltrotor, are capable of switching back and forth between helicopter-like and fixed-wing flight. Accordingly, this type of airfoil has been predominantly applied to vertical takeoff and landing (VTOL) aircraft. The dual role of the airfoil is accomplished by one of several design approaches: changing the angle of attack of the wing that the proprotor is attached to, from approximately zero degrees to around ninety degrees, as on a tiltwing aircraft; changing the angle of attack of only the rotor hub, and possibly the engine that drives it, as on a tiltrotor; or changing the angle of attack of the entire aircraft, as on a tailsitter, which launches and lands on its tail. Application details On several aerial vehicles, such as the AgustaWestland AW609 and V-22 Osprey, a pair of three-bladed proprotors has been used. Both the proprotors and engines are mounted on load-bearing rotatable pylons at the wingtips, allowing the proprotors to be positioned at various angles. In the case of the AW609, while flown in helicopter mode, the proprotors can be positioned between a 75- and 95-degree angle from the horizontal, with 87 degrees being the typical selection for hovering vertically; in aeroplane mode, the proprotors are rotated forward and locked in position at a zero-degree angle, spinning at 84% RPM. STOL rolling-takeoff and landing capability is achieved by having the nacelles tilted forward up to 45°. Typically, flight control software performs much of the complex transition between the distinct helicopter and aeroplane modes, while automated systems inform crews of the optimal tilt angle and airspeed to pursue. Furthermore, it is typical for flight controls, such as blade pitch, to resemble and function like their counterparts on conventional rotorcraft, easing the transition of conventional helicopter pilots to such vehicles. Proprotors can be designed to fold for storage purposes. However, in the case of the V-22, in order to facilitate proprotor folding, the proprotor's diameter had to be constrained to 38 feet (11.6 m), five feet (1.5 m) less than optimal for vertical takeoff; this compromise has been blamed for the aircraft's relatively high disk loading. In a typical implementation, both proprotors must be rotating in order to maintain flight in helicopter mode. To guard against single engine failure, on both the V-22 and AW609, both engines are connected by drive shafts to a common central gearbox so that one engine can power both proprotors if such a failure occurs. Despite this provision, the V-22 is generally not capable of hovering on a single engine. If a proprotor gearbox fails, that proprotor cannot be feathered, and both engines must be stopped prior to an emergency landing. The autorotation characteristics are poor partly due to the rotors' low inertia. Aircraft Bell XV-15 Bell Boeing V-22 Osprey Bell Eagle Eye Bell V-280 Valor AgustaWestland AW609 NASA Puffin Leonardo Next-Generation Civil Tiltrotor References Citations Bibliography Norton, Bill. Bell Boeing V-22 Osprey, Tiltrotor Tactical Transport. Earl Shilton, Leicester, UK: Midland Publishing, 2004. Whittle, Richard. The Dream Machine: The Untold History of the Notorious V-22 Osprey. New York: Simon & Schuster, 2010. Aircraft configurations
Proprotor
[ "Engineering" ]
777
[ "Aircraft configurations", "Aerospace engineering" ]
11,933,545
https://en.wikipedia.org/wiki/Corepressor
In genetics and molecular biology, a corepressor is a molecule that represses the expression of genes. In prokaryotes, corepressors are small molecules, whereas in eukaryotes, corepressors are proteins. A corepressor does not directly bind to DNA, but instead indirectly regulates gene expression by binding to repressors. A corepressor downregulates (or represses) the expression of genes by binding to and activating a repressor transcription factor. The repressor in turn binds to a gene's operator sequence (the segment of DNA to which a transcription factor binds to regulate gene expression), thereby blocking transcription of that gene. Function Prokaryotes In prokaryotes, the term corepressor is used to denote the activating ligand of a repressor protein. For example, the E. coli tryptophan repressor (TrpR) is only able to bind to DNA and repress transcription of the trp operon when its corepressor tryptophan is bound to it. TrpR in the absence of tryptophan is known as an aporepressor and is inactive in repressing gene transcription. The trp operon encodes enzymes responsible for the synthesis of tryptophan. Hence TrpR provides a negative feedback mechanism that regulates the biosynthesis of tryptophan. In short, tryptophan acts as a corepressor for its own biosynthesis. Eukaryotes In eukaryotes, a corepressor is a protein that binds to transcription factors. In the absence of corepressors and in the presence of coactivators, transcription factors upregulate gene expression. Coactivators and corepressors compete for the same binding sites on transcription factors. A second mechanism by which corepressors may repress transcriptional initiation when bound to transcription factor/DNA complexes is by recruiting histone deacetylases, which catalyze the removal of acetyl groups from lysine residues. This increases the positive charge on histones, which strengthens the electrostatic attraction between the positively charged histones and negatively charged DNA, making the DNA less accessible for transcription. In humans, several dozen to several hundred corepressors are known, depending on the level of confidence with which the characterisation of a protein as a corepressor can be made. Examples of corepressors NCoR NCoR (nuclear receptor co-repressor) directly binds to the D and E domains of nuclear receptors and represses their transcriptional activity. Class I histone deacetylases are recruited by NCoR through SIN3, and NCoR directly binds to class II histone deacetylases. Silencing mediator for retinoid and thyroid-hormone receptor SMRT (silencing mediator of retinoic acid and thyroid hormone receptor), also known as NCoR2, is an alternatively spliced SRC-1 (steroid receptor coactivator-1). It is negatively and positively affected by MAPKKK (mitogen-activated protein kinase kinase kinase) and casein kinase 2 phosphorylation, respectively. SMRT has two major mechanisms: first, similar to NCoR, SMRT recruits class I histone deacetylases through SIN3 and directly binds to class II histone deacetylases. Second, it binds and sequesters components of the general transcriptional machinery, such as transcription factor II B. Role in biological processes Corepressors are known to regulate transcription through different activation and inactivation states. NCoR and SMRT act as a corepressor complex to regulate transcription by becoming activated once the ligand is bound. Knockouts of NCoR resulted in embryonic death, indicating its importance in erythrocytic, thymic, and neural system development.
Mutations in certain corepressors can result in deregulation of signals. SMRT contributes to cardiac muscle development, with knockouts of the complex resulting in less developed muscle and improper development. NCoR has also been found to be an important checkpoint in processes such as inflammation and macrophage activation. Recent evidence also suggests a role for the corepressor RIP140 in the metabolic regulation of energy homeostasis. Clinical significance Diseases Since corepressors participate in and regulate a vast range of gene expression, it is not surprising that aberrant corepressor activities can cause diseases. Acute myeloid leukemia (AML) is a highly lethal blood cancer characterized by uncontrolled myeloid cell growth. Two homologous corepressor genes, BCOR (BCL6 corepressor) and BCORL1, are recurrently mutated in AML patients. BCOR works with multiple transcription factors and is known to play vital regulatory roles in embryonic development. Clinical results detected BCOR somatic mutations in ~4% of an unselected group of AML patients, and in ~17% of a subset of patients who lack known AML-causing mutations. Similarly, BCORL1 is a corepressor that regulates cellular processes, and was found to be mutated in ~6% of tested AML patients. These studies point to a strong association between corepressor mutations and AML. Further corepressor research may reveal potential therapeutic targets for AML and other diseases. Therapeutic potential Corepressors present many potential avenues for drugs to target a vast range of diseases. BCL6 upregulation is observed in cancers such as diffuse large B-cell lymphomas (DLBCLs), colorectal cancer, and lung cancer. The BCL-6 corepressor (BCOR), SMRT, NCoR, and other corepressors are able to interact with and transcriptionally repress BCL6. Small-molecule compounds, such as synthetic peptides that target the interactions between BCL6 and corepressors, as well as other protein-protein interaction inhibitors, have been shown to effectively kill cancer cells. Activated liver X receptor (LXR) forms a complex with corepressors to suppress the inflammatory response in rheumatoid arthritis, making LXR agonists like GW3965 a potential therapeutic strategy. Ursodeoxycholic acid (UDCA), by upregulating the corepressor small heterodimer partner interacting leucine zipper protein (SMILE), inhibits the expression of IL-17, an inflammatory cytokine, and suppresses Th17 cells, both implicated in rheumatoid arthritis. This effect is dose-dependent in humans, and UDCA is thought to be another prospective agent for rheumatoid arthritis therapy. See also Transcription coregulator TcoF-DB References External links Gene expression Transcription coregulators
Corepressor
[ "Chemistry", "Biology" ]
1,394
[ "Gene expression", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry" ]
11,934,455
https://en.wikipedia.org/wiki/Primitive%20part%20and%20content
In algebra, the content of a nonzero polynomial with integer coefficients (or, more generally, with coefficients in a unique factorization domain) is the greatest common divisor of its coefficients. The primitive part of such a polynomial is the quotient of the polynomial by its content. Thus a polynomial is the product of its primitive part and its content, and this factorization is unique up to the multiplication of the content by a unit of the ring of the coefficients (and the multiplication of the primitive part by the inverse of the unit). A polynomial is primitive if its content equals 1. Thus the primitive part of a polynomial is a primitive polynomial. Gauss's lemma for polynomials states that the product of primitive polynomials (with coefficients in the same unique factorization domain) also is primitive. This implies that the content and the primitive part of the product of two polynomials are, respectively, the product of the contents and the product of the primitive parts. As the computation of greatest common divisors is generally much easier than polynomial factorization, the first step of a polynomial factorization algorithm is generally the computation of its primitive part–content factorization (see below). Then the factorization problem is reduced to factoring the content and the primitive part separately. Content and primitive part may be generalized to polynomials over the rational numbers, and, more generally, to polynomials over the field of fractions of a unique factorization domain. This makes essentially equivalent the problems of computing greatest common divisors and factorization of polynomials over the integers and of polynomials over the rational numbers. Over the integers For a polynomial with integer coefficients, the content may be either the greatest common divisor of the coefficients or its additive inverse. The choice is arbitrary, and may depend on a further convention, which is commonly that the leading coefficient of the primitive part be positive. For example, the content of −12x^2 + 30x − 20 may be either 2 or −2, since 2 is the greatest common divisor of −12, 30, and −20. If one chooses 2 as the content, the primitive part of this polynomial is −6x^2 + 15x − 10, and thus the primitive-part-content factorization is −12x^2 + 30x − 20 = 2(−6x^2 + 15x − 10). For aesthetic reasons, one often prefers choosing a negative content, here −2, giving the primitive-part-content factorization −12x^2 + 30x − 20 = −2(6x^2 − 15x + 10). Properties In the remainder of this article, we consider polynomials over a unique factorization domain R, which can typically be the ring of integers, or a polynomial ring over a field. In R, greatest common divisors are well defined, and are unique up to multiplication by a unit of R. The content c(p) of a polynomial p with coefficients in R is the greatest common divisor of its coefficients, and, as such, is defined up to multiplication by a unit. The primitive part pp(p) of p is the quotient p/c(p) of p by its content; it is a polynomial with coefficients in R, which is unique up to multiplication by a unit. If the content is changed by multiplication by a unit u, then the primitive part must be changed by dividing it by the same unit, in order to keep the equality p = c(p) pp(p), which is called the primitive-part-content factorization of p. The main properties of the content and the primitive part are results of Gauss's lemma, which asserts that the product of two primitive polynomials is primitive, where a polynomial is primitive if 1 is the greatest common divisor of its coefficients.
Gauss's lemma implies: The content of a product of polynomials is the product of their contents: c(pq) = c(p) c(q). The primitive part of a product of polynomials is the product of their primitive parts: pp(pq) = pp(p) pp(q). The content of a greatest common divisor of polynomials is the greatest common divisor (in R) of their contents: c(gcd(p, q)) = gcd(c(p), c(q)). The primitive part of a greatest common divisor of polynomials is the greatest common divisor (in the polynomial ring R[x]) of their primitive parts: pp(gcd(p, q)) = gcd(pp(p), pp(q)). The complete factorization of a polynomial over R is the product of the factorization (in R) of the content and of the factorization (in the polynomial ring) of the primitive part. The last property implies that the computation of the primitive-part-content factorization of a polynomial reduces the computation of its complete factorization to the separate factorization of the content and the primitive part. This is generally interesting, because the computation of the primitive-part-content factorization involves only greatest common divisor computation in R, which is usually much easier than factorization. Over the rationals The primitive-part-content factorization may be extended to polynomials with rational coefficients as follows. Given a polynomial q with rational coefficients, by rewriting its coefficients with the same common denominator d, one may rewrite q as p/d, where p is a polynomial with integer coefficients. The content of q is the quotient by d of the content of p, that is c(q) = c(p)/d, and the primitive part of q is the primitive part of p: pp(q) = pp(p). It is easy to show that this definition does not depend on the choice of the common denominator, and that the primitive-part-content factorization remains valid: q = c(q) pp(q). This shows that every polynomial over the rationals is associated with a unique primitive polynomial over the integers, and that the Euclidean algorithm allows the computation of this primitive polynomial. A consequence is that factoring polynomials over the rationals is equivalent to factoring primitive polynomials over the integers. As polynomials with coefficients in a field are more common than polynomials with integer coefficients, it may seem that this equivalence may be used for factoring polynomials with integer coefficients. In fact, the truth is exactly the opposite: every known efficient algorithm for factoring polynomials with rational coefficients uses this equivalence for reducing the problem modulo some prime number (see Factorization of polynomials). This equivalence is also used for computing greatest common divisors of polynomials, although the Euclidean algorithm is defined for polynomials with rational coefficients. In fact, in this case, the Euclidean algorithm requires one to compute the reduced form of many fractions, and this makes the Euclidean algorithm less efficient than algorithms which work only with polynomials over the integers (see Polynomial greatest common divisor). Over a field of fractions The results of the preceding section remain valid if the ring of integers and the field of rationals are respectively replaced by any unique factorization domain R and its field of fractions F. This is typically used for factoring multivariate polynomials, and for proving that a polynomial ring over a unique factorization domain is also a unique factorization domain. Unique factorization property of polynomial rings A polynomial ring over a field is a unique factorization domain. The same is true for a polynomial ring over a unique factorization domain. To prove this, it suffices to consider the univariate case, as the general case may be deduced by induction on the number of indeterminates.
The unique factorization property is a direct consequence of Euclid's lemma: if an irreducible element divides a product, then it divides one of the factors. For univariate polynomials over a field, this results from Bézout's identity, which itself results from the Euclidean algorithm. So, let R be a unique factorization domain, which is not a field, and R[x] the univariate polynomial ring over R. An irreducible element q in R[x] is either an irreducible element in R or an irreducible primitive polynomial. If q is in R and divides a product p1 p2 of two polynomials, then it divides the content c(p1 p2) = c(p1) c(p2). Thus, by Euclid's lemma in R, it divides one of the contents, and therefore one of the polynomials. If q is not in R, it is a primitive polynomial (because it is irreducible). Then Euclid's lemma in R[x] results immediately from Euclid's lemma in F[x], where F is the field of fractions of R. Factorization of multivariate polynomials For factoring a multivariate polynomial over a field or over the integers, one may consider it as a univariate polynomial with coefficients in a polynomial ring with one less indeterminate. Then the factorization is reduced to factorizing separately the primitive part and the content. As the content has one less indeterminate, it may be factorized by applying the method recursively. For factorizing the primitive part, the standard method consists of substituting integers for the indeterminates of the coefficients in a way that does not change the degree in the remaining variable, factorizing the resulting univariate polynomial, and lifting the result to a factorization of the primitive part. See also Rational root theorem References Algebra Polynomials
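The computations described above (content, primitive part, and the reduction from rational to integer coefficients) are short enough to sketch in code. The following is a minimal illustration, not a production implementation: polynomials are represented as plain coefficient lists with the leading coefficient first (a convention chosen here), and the sign convention makes the leading coefficient of the primitive part positive.

```python
from fractions import Fraction
from math import gcd

def content(coeffs):
    """Content of an integer polynomial given as a coefficient list
    (leading coefficient first). The sign is chosen so that the
    primitive part has a positive leading coefficient."""
    c = 0
    for a in coeffs:
        c = gcd(c, a)          # math.gcd ignores signs
    return -c if coeffs[0] < 0 else c

def primitive_part(coeffs):
    """Primitive part: the polynomial divided by its content."""
    c = content(coeffs)
    return [a // c for a in coeffs]

# The example from the text, with coefficients -12, 30, -20:
p = [-12, 30, -20]
print(content(p), primitive_part(p))   # -2 [6, -15, 10]

def rational_content(coeffs):
    """Content of a polynomial with rational coefficients: write
    q = p/d with p integer and d a common denominator, then
    c(q) = c(p)/d, as described above."""
    d = 1
    for a in coeffs:
        d = d * a.denominator // gcd(d, a.denominator)  # lcm of denominators
    int_coeffs = [int(a * d) for a in coeffs]
    return Fraction(content(int_coeffs), d)

q = [Fraction(-12, 7), Fraction(30, 7), Fraction(-20, 7)]
print(rational_content(q))             # -2/7
```

As the text notes, the primitive part of a rational polynomial is the primitive part of its associated integer polynomial, so the same `primitive_part` helper applies to `int_coeffs` unchanged.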
Primitive part and content
[ "Mathematics" ]
1,700
[ "Polynomials", "Algebra" ]
11,934,828
https://en.wikipedia.org/wiki/Moongel
Moongel is a translucent blue, sticky, gel-like substance produced by the drum practice products company RTOM. It has been incorporated into several products and comes in packs of four or six pads, which can be applied to a drumhead or cymbal to diminish the higher overtones. Its damping properties have also made it a popular studio tool, allowing drummers to get a "punchier" sound out of their toms. RTOM also makes Moongel Workout Pads, which are practice pads that are available in 7" and 14" diameter sizes. The manufacturer claims that unlike most practice pads they allow no "free rebounds", which is said to accelerate muscle development. Moongels are made from 53% PVC copolymer resin, 27% dioctyl terephthalate, 2.5% epoxidized soybean oil, 3% calcium-zinc stabilizers, 7% PVC-based thixotrope and 7.5% adipate plasticizer-based thixotrope. Gel of this composition is commercially available from WRS SportsMed, a division of WRS Group, Inc. under the UltraSoft™ trademark. External links Moongel - RTOM Drumming Musical instrument parts and accessories
Moongel
[ "Technology" ]
264
[ "Components", "Musical instrument parts and accessories" ]
11,934,923
https://en.wikipedia.org/wiki/Natural%20topology
In any domain of mathematics, a space has a natural topology if there is a topology on the space which is "best adapted" to its study within the domain in question. In many cases this imprecise definition means little more than the assertion that the topology in question arises naturally or canonically (see mathematical jargon) in the given context. Note that in some cases multiple topologies seem "natural". For example, if Y is a subset of a totally ordered set X, then the induced order topology, i.e. the order topology of the totally ordered Y, where this order is inherited from X, is coarser than the subspace topology of the order topology of X. "Natural topology" does quite often have a more specific meaning, at least given some prior contextual information: the natural topology is a topology which makes a natural map or collection of maps continuous. This is still imprecise, even once one has specified what the natural maps are, because there may be many topologies with the required property. However, there is often a finest or coarsest topology which makes the given maps continuous, in which case these are obvious candidates for the natural topology. The simplest cases (which nevertheless cover many examples) are the initial topology and the final topology (Willard (1970)). The initial topology is the coarsest topology on a space X which makes a given collection of maps from X to topological spaces Xi continuous. The final topology is the finest topology on a space X which makes a given collection of maps from topological spaces Xi to X continuous. Two of the simplest examples are the natural topologies of subspaces and quotient spaces. The natural topology on a subset of a topological space is the subspace topology. This is the coarsest topology which makes the inclusion map continuous. The natural topology on a quotient of a topological space is the quotient topology. This is the finest topology which makes the quotient map continuous. Another example is that any metric space has a natural topology induced by its metric. See also Induced topology References Willard, Stephen (1970). General Topology. Reading, MA: Addison-Wesley. (Recent edition published by Dover, 2004.) Mathematical structures Topology
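The two constructions just named can be written out symbolically. The following LaTeX fragment is a sketch restating them; the index set I and the map names f_i and g_i are notational assumptions, not taken from the article.

```latex
% Initial topology: for maps f_i : X -> X_i, take the coarsest topology
% making every f_i continuous, i.e. the one generated by preimages of
% open sets:
\[
  \tau_{\mathrm{initial}}
    = \text{topology generated by }
      \{\, f_i^{-1}(U) : i \in I,\ U \text{ open in } X_i \,\}.
\]
% Final topology: for maps g_i : X_i -> X, take the finest topology
% making every g_i continuous, i.e. declare open exactly those sets
% whose preimages are all open:
\[
  \tau_{\mathrm{final}}
    = \{\, V \subseteq X : g_i^{-1}(V) \text{ open in } X_i
       \text{ for all } i \in I \,\}.
\]
```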
Natural topology
[ "Physics", "Mathematics" ]
436
[ "Mathematical structures", "Mathematical objects", "Topology", "Space", "Geometry", "Spacetime" ]
5,584,334
https://en.wikipedia.org/wiki/Vacuum%20flange
A vacuum flange is a flange at the end of a tube used to connect vacuum chambers, tubing and vacuum pumps to each other. Vacuum flanges are used for scientific and industrial applications to allow various pieces of equipment to interact via physical connections and for vacuum maintenance, monitoring, and manipulation from outside the vacuum chamber. Several flange standards exist, with differences in ultimate attainable pressure, size, and ease of attachment. Vacuum flange types Several vacuum flange standards exist, and the same flange types are called by different names by different manufacturers and standards organizations. KF/QF The ISO standard quick-release flange is known by the names Quick Flange (QF) or Kleinflansch (KF, German for "small flange"). The KF designation has been adopted by ISO, DIN, and Pneurop. KF flanges are made with a chamfered back surface that is attached with a circular clamp and an elastomeric o-ring (AS568 specification) that is mounted in a metal centering ring. Standard sizes are indicated by the nominal inner diameter in millimeters, for flanges 10 through 50 mm in diameter. Sizes 10, 20 and 32 are less common (see Renard numbers). Some sizes share their flange dimensions with their respective larger neighbor and use the same clamp size. This means a DN10KF can mate to a DN16KF by using an adaptive centering ring. The same applies for DN20KF to DN25KF and DN32KF to DN40KF. ISO The ISO large flange standard is known as LF, LFB, MF or sometimes just ISO flange. As in KF flanges, the flanges are joined by a centering ring and an elastomeric o-ring. An extra spring-loaded circular clamp is often used around the large-diameter o-rings to prevent them from rolling off the centering ring during mounting. ISO large flanges come in two varieties. ISO-K (or ISO LF) flanges are joined with double-claw clamps, which clamp to a circular groove on the tubing side of the flange. ISO-F (or ISO LFB) flanges have holes for attaching the two flanges with bolts. Two tubes with ISO-K and ISO-F flanges can be joined together by clamping the ISO-K side with single-claw clamps, which are then bolted to the holes on the ISO-F side. ISO large flanges are available in sizes from 63 to 500 mm nominal tube diameter. CF (Conflat) CF (ConFlat) flanges use an oxygen-free high thermal conductivity copper gasket and knife-edge flange to achieve an ultrahigh vacuum seal. The term "ConFlat" is a registered trademark of Varian, Inc., so "CF" is commonly used by other flange manufacturers. Each face of the two mating CF flanges has a knife edge, which cuts into the softer metal gasket, providing an extremely leak-tight, metal-to-metal seal. Deformation of the metal gasket fills small defects in the flange, allowing ConFlat flanges to operate down to 10^−13 Torr (10^−11 Pa) pressure. The knife edge is recessed in a groove in each flange. In addition to protecting the knife edge, the groove helps hold the gasket in place, which aligns the two flanges and also reduces gasket expansion during bake-out. For stainless-steel ConFlat flanges, baking temperatures of 450 °C can be achieved; the temperature is limited by the choice of gasket material. CF flanges are sexless and interchangeable. In North America, flange sizes are given by flange outer diameter in inches, while in Europe and Asia, sizes are given by tube inner diameter in millimeters. Despite the different naming conventions, the actual flanges are the same.
ConFlat gaskets were originally invented by William Wheeler and other engineers at Varian in an attempt to build a flange that would not leak after baking. Wheeler A Wheeler flange is a large wire-seal flange often used on large vacuum chambers. American Standards Association (ASA) A flange standard popularized in the United States is codified by the American National Standards Institute (ANSI), and is also sometimes named after the organization's previous name, the American Standards Association (ASA). These flanges have elastomeric o-ring seals and can be used for both vacuum and pressure applications. Flange sizes are indicated by tube nominal inner diameter (ANSI naming convention) or by flange outer diameter in inches (ASA naming convention). Vacuum gaskets To achieve a vacuum seal, a gasket is required. An elastomeric o-ring gasket can be made of Buna rubber, viton fluoropolymer, silicone rubber or teflon. O-rings can be placed in a groove or may be used in combination with a centering ring or as a "captured" o-ring that is held in place by separate metal rings. Metal gaskets are used in ultra-high-vacuum systems where outgassing of the elastomer could be a significant gas load. A copper ring gasket is used with ConFlat flanges. Metal wire gaskets made of copper, gold or indium can be used. Vacuum feedthrough A vacuum feedthrough is a flange that contains a vacuum-tight electrical, physical or mechanical connection to the vacuum chamber. An electrical feedthrough allows voltages to be applied to components under vacuum, for example a filament or heater. An example of a physical feedthrough is a vacuum-tight connection for cooling water. A mechanical feedthrough is used for rotation and translation of components under vacuum. A wobble stick is a mechanical feedthrough device that can be used to pick up, move and otherwise manipulate objects in a vacuum chamber. See also Vacuum engineering Vacuum grease References External links ISO 1609:1986 Vacuum technology - Flange dimensions all vacuum flange manufacturers in vacuum-guide.com Flange Plumbing
Vacuum flange
[ "Physics", "Engineering" ]
1,338
[ "Plumbing", "Vacuum", "Construction", "Vacuum systems", "Matter" ]
5,584,696
https://en.wikipedia.org/wiki/Higher-order%20volition
Higher-order volitions (or higher-order desires), as opposed to action-determining volitions, are volitions about volitions. Higher-order volitions are potentially more often guided by long-term beliefs and reasoning. A higher-order volition can go unfulfilled due to uncontrolled lower-order volitions. History The concept of higher-order volitions was introduced by Harry Frankfurt, who used it to explain free will independently of determinism, the thesis that what happens in the world is determined by predictable natural laws (a thesis made implausible by Heisenberg's uncertainty principle and the resulting quantum noise). But even if the world were governed by such laws, one could be free in the sense that higher-order volitions determined the primacy of first-order desires. This view is called compatibilism. An example of a failure to follow higher-order volitions is the drug addict who takes drugs even though they would like to quit taking drugs. According to Frankfurt, the drug addict has established free will when their higher-order volition to stop wanting drugs determines the precedence of their changing, action-determining desires either to take drugs or not to take drugs. However, a higher-order desire as described by Mark Alfano in his book Moral Psychology: An Introduction is "a desire about another(s) desire". In his example, Mark Alfano imagines a friend whose birthday is coming up; you love her and hence wish to please or surprise her. To be motivated to give your friend a special birthday present, you need to want to do something she wants. That want of yours is, in philosophical jargon, called a higher-order desire. The philosopher John Locke already claimed that free will was the ability to stop before making a decision, to consider what would be best to do, and the ability to decide and act based on the outcome of that thinking, which could be seen as equivalent to forming a higher-order volition. Locke concludes that when it comes to "chusing a remote [i.e., future] Good as an end to be pursued", agents are "at Liberty in respect of willing" and that "in [the power to suspend the prosecution of one's desires] lies the liberty Man has", that the power to suspend is "the source of all liberty". Locke argues that if the will were determined by the perceived greater good, every agent would be consistently focused on the attainment of "the infinite eternal Joys of Heaven", which consequently would be the topmost higher-order volition to win Pascal's wager, corresponding to the drug addict's desire to survive his drug addiction. See also Akrasia Meta-emotion References Free will Motivation
Higher-order volition
[ "Biology" ]
587
[ "Ethology", "Behavior", "Motivation", "Human behavior" ]
5,584,703
https://en.wikipedia.org/wiki/Surface%20feet%20per%20minute
Surface feet per minute (SFPM or SFM) is the combination of a physical quantity (surface speed) and an imperial and American customary unit (feet per minute or FPM). It is defined as the number of linear feet that a location on a rotating component travels in one minute. Its most common use is in the measurement of cutting speed (surface speed) in machining. It is a unit of velocity that describes how fast the cutting edge of the cutting tool travels. It correlates directly to the machinability of the workpiece material and the hardness of the cutting tool material. It relates to spindle speed via variables such as cutter diameter (for rotating cutters) or workpiece diameter (for lathe work). SFM combines the rotational speed (RPM) of the spindle of a milling machine or lathe with the diameter of the cutter or workpiece to give a surface speed measured in feet per minute. 1 SFM equals 0.00508 surface meters per second (the meter per second, or m/s, is the SI unit of speed). The faster the spindle turns, and/or the larger the diameter, the higher the SFM. The goal is to tool a job to run the SFM as high as possible to increase hourly part production. However, some materials will run better at specific SFMs. When the SFM is known for a specific material (e.g. 303 annealed stainless steel = 120 SFM for high-speed steel tooling), a formula can be used to determine spindle speed for live tools or spindle speeds for turning materials. In a milling machine, the tool diameter is used instead of the stock diameter in the following formulas when the tool is revolving and the stock is stationary. Spindle speed can be calculated using the following equation: RPM = (12 × SFM) / (π × D), where D is the tool or stock diameter in inches. SFM can be calculated using the following equation: SFM = (π × D × RPM) / 12. See also Speeds and feeds References External links Surface Feet per Minute Machining Units of velocity Metalworking terminology Velocity
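The two formulas translate directly into code. A minimal sketch follows; the 120 SFM figure for 303 annealed stainless steel with high-speed steel tooling is taken from the text above, while the half-inch cutter diameter is an arbitrary illustrative assumption.

```python
import math

def rpm_from_sfm(sfm: float, diameter_in: float) -> float:
    """Spindle speed (RPM) for a target surface speed (SFM) and a
    tool/stock diameter in inches: RPM = 12 * SFM / (pi * D)."""
    return 12.0 * sfm / (math.pi * diameter_in)

def sfm_from_rpm(rpm: float, diameter_in: float) -> float:
    """Surface speed (SFM) from spindle speed and diameter in inches:
    SFM = pi * D * RPM / 12."""
    return math.pi * diameter_in * rpm / 12.0

# 303 annealed stainless at 120 SFM (figure from the text), milled with
# a hypothetical 0.5-inch high-speed-steel end mill:
print(round(rpm_from_sfm(120, 0.5)))       # ~917 RPM
print(round(sfm_from_rpm(917, 0.5), 1))    # ~120.0 SFM (round trip)
```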
Surface feet per minute
[ "Physics", "Mathematics" ]
395
[ "Physical phenomena", "Physical quantities", "Quantity", "Motion (physics)", "Vector physical quantities", "Velocity", "Wikipedia categories named after physical quantities", "Units of velocity", "Units of measurement" ]
5,584,743
https://en.wikipedia.org/wiki/Riesz%27s%20lemma
In mathematics, Riesz's lemma (after Frigyes Riesz) is a lemma in functional analysis. It specifies (often easy to check) conditions that guarantee that a subspace in a normed vector space is dense. The lemma may also be called the Riesz lemma or Riesz inequality. It can be seen as a substitute for orthogonality when the normed space is not an inner product space. Statement Let Y be a closed proper vector subspace of a normed space X and let α be a real number with 0 < α < 1. Then there exists a vector u in X of unit norm such that ||u − y|| ≥ α for all y in Y. If X is a reflexive Banach space then this conclusion is also true when α = 1. Metric reformulation As usual, let d(x, y) := ||x − y|| denote the canonical metric induced by the norm, call S := {x in X : ||x|| = 1} the unit sphere, the set of all vectors that are a distance of 1 from the origin, and denote the distance from a point u to the set Y by d(u, Y) := inf {||u − y|| : y in Y}. The inequality d(u, Y) ≥ α holds if and only if ||u − y|| ≥ α for all y in Y, and it formally expresses the notion that the distance between u and Y is at least α. Because every vector subspace (such as Y) contains the origin, substituting y = 0 in this infimum shows that d(u, Y) ≤ ||u|| for every vector u. In particular, d(u, Y) ≤ 1 when u is a unit vector. Using this new notation, the conclusion of Riesz's lemma may be restated more succinctly as: α ≤ d(u, Y) holds for some unit vector u. Using this new terminology, Riesz's lemma may also be restated in plain English as: Given any closed proper vector subspace of a normed space X, for any desired minimum distance α less than 1, there exists some vector in the unit sphere of X that is at least this desired distance away from the subspace. The proof can be found in functional analysis texts such as Kreyszig. An online proof from Prof. Paul Garrett is available. Minimum distances not satisfying the hypotheses When X is trivial, it has no proper vector subspace, and so Riesz's lemma holds vacuously for all real numbers α. The remainder of this section will assume that X is not trivial, which guarantees that a unit vector exists. The inclusion of the hypothesis 0 < α < 1 can be explained by considering the three cases: α ≤ 0, 0 < α < 1, and 1 ≤ α. The lemma holds when α ≤ 0, since every unit vector u satisfies the conclusion d(u, Y) ≥ 0 ≥ α. The hypothesis 0 < α is included solely to exclude this trivial case and is sometimes omitted from the lemma's statement. Riesz's lemma is always false when α > 1, because for every unit vector u the required inequality ||u − y|| ≥ α fails to hold for y = 0 (since ||u − 0|| = 1 < α). Another consequence of d(u, Y) > 1 being impossible is that the inequality d(u, Y) ≥ 1 holds if and only if equality d(u, Y) = 1 holds. Reflexivity This leaves only the case α = 1 for consideration, in which case the statement of Riesz's lemma becomes: for every closed proper vector subspace Y of X there exists some vector u of unit norm that satisfies d(u, Y) ≥ 1. When X is a Banach space, then this statement is true if and only if X is a reflexive space. Explicitly, a Banach space X is reflexive if and only if for every closed proper vector subspace Y there is some vector u on the unit sphere of X that is always at least a distance of 1 away from the subspace. For example, if the reflexive Banach space R^3 is endowed with the usual Euclidean norm and if Y is the xy-plane, then the points u = (0, 0, ±1) satisfy the conclusion d(u, Y) = 1. If Y is the z-axis, then every point belonging to the unit circle in the xy-plane satisfies the conclusion. But if R^3 was endowed with the taxicab norm (instead of the Euclidean norm), then the conclusion would be satisfied by every point belonging to the "diamond" in the xy-plane (a square with vertices at (±1, 0, 0) and (0, ±1, 0)). In a non-reflexive Banach space, such as the Lebesgue space ℓ∞ of all bounded sequences, Riesz's lemma does not hold for α = 1. However, every finite-dimensional normed space is a reflexive Banach space, so Riesz's lemma does hold for α = 1 when the normed space is finite-dimensional, as will now be shown. When the dimension of X is finite, the closed unit ball B of X is compact.
Since the distance function u ↦ d(u, Y) is continuous, its image on the closed unit ball B must be a compact subset of the real line, so it attains its supremum, proving the claim. Some consequences Riesz's lemma guarantees that for any given α with 0 < α < 1, every infinite-dimensional normed space contains a sequence x1, x2, x3, ... of (distinct) unit vectors satisfying ||xn − xm|| > α for n ≠ m; or stated in plain English, these vectors are all separated from each other by a distance of more than α while simultaneously also all lying on the unit sphere. Such an infinite sequence of vectors cannot be found in the unit sphere of any finite-dimensional normed space (just consider, for example, the unit circle in R^2). This sequence can be constructed by induction for any constant 0 < α < 1. Start by picking any element x1 from the unit sphere. Let Yn be the linear span of x1, ..., xn and (using Riesz's lemma) pick xn+1 from the unit sphere such that d(xn+1, Yn) > α, where d(xn+1, Yn) denotes the distance from xn+1 to the subspace Yn. This sequence contains no convergent subsequence, which implies that the closed unit ball is not compact. Characterization of finite dimension Riesz's lemma can be applied directly to show that the unit ball of an infinite-dimensional normed space is never compact. This can be used to characterize finite-dimensional normed spaces: if X is a normed vector space, then X is finite-dimensional if and only if the closed unit ball in X is compact. More generally, if a topological vector space is locally compact, then it is finite-dimensional. The converse of this is also true. Namely, if a topological vector space is finite-dimensional, it is locally compact. Therefore local compactness characterizes finite-dimensionality. This classical result is also attributed to Riesz. A short proof can be sketched as follows: let C be a compact neighborhood of the origin in X. By compactness, there are finitely many points c1, ..., cn such that C is covered by the sets ci + (1/2)C. We claim that the finite-dimensional subspace Y spanned by c1, ..., cn is dense in X, or equivalently, its closure is X. Since X is the union of scalar multiples of C, it is sufficient to show that C lies in the closure of Y. By induction, C is contained in Y + (1/2)^m C for every m. But compact sets are bounded, so C lies in the closure of Y. This proves the result. For a different proof based on the Hahn–Banach theorem see the references. Spectral theory The spectral properties of compact operators acting on a Banach space are similar to those of matrices. Riesz's lemma is essential in establishing this fact. Other applications As detailed in the article on infinite-dimensional Lebesgue measure, this is useful in showing the non-existence of certain measures on infinite-dimensional Banach spaces. Riesz's lemma also shows that the identity operator on a Banach space X is compact if and only if X is finite-dimensional. See also James's theorem, a characterization of reflexivity given by a condition on the unit ball References Further reading https://mathoverflow.net/questions/470438/a-variation-of-the-riesz-lemma Functional analysis Lemmas in analysis Normed spaces
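For reference, the lemma and the separated-sequence consequence above can be stated compactly in LaTeX. This is only a restatement of material already given, not a new result; the lemma environment assumes an amsthm setup.

```latex
% Riesz's lemma (assumes \usepackage{amsthm} with a lemma environment).
\begin{lemma}[Riesz]
  Let $Y \subsetneq X$ be a closed vector subspace of a normed space
  $(X, \|\cdot\|)$ and let $0 < \alpha < 1$. Then there exists $u \in X$
  with $\|u\| = 1$ such that
  \[
    d(u, Y) \;=\; \inf_{y \in Y} \|u - y\| \;\geq\; \alpha .
  \]
  If $X$ is a reflexive Banach space, the conclusion also holds for
  $\alpha = 1$.
\end{lemma}

% Consequence: in any infinite-dimensional normed space there exist unit
% vectors $x_1, x_2, \dots$ with $\|x_n - x_m\| > \alpha$ for $n \neq m$,
% so the closed unit ball is not compact.
```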
Riesz's lemma
[ "Mathematics" ]
1,355
[ "Theorems in mathematical analysis", "Functions and mappings", "Functional analysis", "Mathematical objects", "Mathematical relations", "Lemmas in mathematical analysis", "Lemmas" ]
5,584,806
https://en.wikipedia.org/wiki/Space%20%282001%20TV%20series%29
Space (Hyperspace in the United States) is a 2001 BBC documentary which ran for six episodes covering a number of topics in relation to outer space. The series is hosted and narrated by actor Sam Neill. Episodes DVD releases The series was released on region 2 DVD in 2001 by BBC Video. In 2002, the series was released in the United States on region 1 DVD (under the alternate title Hyperspace), also by BBC Video. External links (DVD) BBC television documentaries Documentary films about outer space 2001 British television series debuts 2001 British television series endings BBC television documentaries about science Astronomy education television series
Space (2001 TV series)
[ "Astronomy" ]
121
[ "Space art", "Documentary films about outer space" ]
5,584,994
https://en.wikipedia.org/wiki/Armoured%20vehicle-launched%20bridge
An armoured vehicle-launched bridge (AVLB) is a combat support vehicle, sometimes regarded as a subtype of military engineering vehicle, designed to assist militaries in rapidly deploying tanks and other armoured fighting vehicles across gap-type obstacles, such as rivers. The AVLB is usually a tracked vehicle converted from a tank chassis to carry a folding metal bridge instead of weapons. The AVLB's job is to allow armoured or infantry units to cross craters, anti-tank ditches, blown bridges, railroad cuts, canals, rivers and ravines. When a river too deep for vehicles to wade through is reached, and no bridge is conveniently located or sufficiently sturdy (a substantial concern when moving 60-ton tanks), the bridge layer unfolds and launches its cargo, providing a ready-made bridge across the obstacle in only minutes. Once the span has been put in place, the AVLB vehicle detaches from the bridge, and moves aside to allow traffic to pass. Once all of the vehicles have crossed, it crosses the bridge itself and reattaches to the bridge on the other side. It then retracts the span, ready to move off again. A similar procedure can be employed to allow crossings of small chasms or similar obstructions. AVLBs can carry bridges of 19 metres (60 feet) or greater in length. By using a tank chassis, the bridge layer is able to cover the same terrain as main battle tanks. The provision of armour allows them to operate even in the face of enemy fire. However, this is not a universal attribute: some exceptionally sturdy 6×6 or 8×8 truck chassis have lent themselves to bridge-layer applications. Origins The roots of the modern AVLB can be found in World War I, at the dawn of tank warfare. Having developed tanks, the United Kingdom and France were confronted with the problem of mounting tank advances in the face of the trenches that dominated the battlefields. Early engagements, such as at Cambrai, demonstrated the tank's utility, but also highlighted its vulnerability to battlefield geography: many early tanks found themselves ignominiously stuck in the trenches, having insufficiently long tracks to cross them. To counter this disadvantage, tanks, especially the common British heavy tanks, began to go into battle with fascines, sometimes as simple as a bundle of heavy sticks, carried on top. By dropping these into the trenches, they were able to create a wedge over which the tank could drive. Later, some tanks began to carry rails on their decks: the first AVLBs. By 1919, the British Army had, at its training centre in Christchurch, a Mark V** tank with lifting gear able to carry and place a bridge or carry out mine clearing and demolition. World War II and subsequent use It was in the World War II era that the importance of armoured bridge layers, as well as combat engineering vehicles and armoured recovery vehicles, became fully clear. With the advent of Blitzkrieg warfare, whole divisions had to advance along with tanks, which were suddenly far out-pacing the speed of infantry soldiers. Besides leading to the advent of self-propelled artillery and assault guns, mobile anti-aircraft vehicles and armoured personnel carriers, this meant that functions like vehicle repair, mine-clearing, and the like would have to be carried out by armoured vehicles advancing along with tanks. These forces would have to be able to cross all forms of terrain without losing speed, and without having to concentrate their thrusts over certain bridges.
The rising weight of armoured vehicles meant that fewer bridges could support these massed crossings. The only feasible solution to the dilemma posed by the mobility of all-mechanised armed forces was a dedicated platform that could improvise river and obstacle crossings at short notice and in inconvenient locations. Tracked and armoured, it was capable of operating alongside combat units, crossing rough terrain and advancing in the face of light fire. To maximize parts commonality and ease maintenance, such vehicles were usually based on existing tank chassis. One of the earliest series-produced examples is the Brückenleger IV, a German AVLB based on the Panzer IV, which entered service with the Wehrmacht in 1940. Twenty were built, but problems of excessive weight limited the vehicle's effectiveness, and eventually all 20 were converted back to tanks. A new scissors bridge design was brought out by the British in response to the war, sufficient to support a 24-ton load. This was developed for the Covenanter tank. It developed into a 30-ton capacity design carried by a turretless Valentine tank. It was used in Italy, North West Europe and Burma. The Allies developed similar equipment, mostly based on the ubiquitous Churchill infantry tank carrying the Small Box Girder, and the Sherman medium tank, of the British and U.S. armies respectively. In some early designs, bridge-layers could emplace bridges, but not retract them. Other vehicles were integral to the bridge themselves, such as the Churchill Ark, wading to the middle of a river or driving up against an obstacle and extending simple ramps in both directions. Following vehicles would drive directly over the bridge layer.
Poland / East Germany: BLG-67 based on T-55 tank Russia: MTU-72 AVLB, based on the T-72 MBT Russia: MT-55, based on the T-55 medium tank United Kingdom: Titan Armoured Vehicle Launcher Bridge, based on the Challenger 2 MBT. Replaces ChAVLB, the Chieftain tank-based AVLB. Canada: Beaver armoured bridgelayer vehicle based on the Leopard 1 United States: M60A1 AVLB, based on the M60 MBT; now supplanted by the M104 Wolverine and M1074 Joint Assault Bridge, based on the M1 Abrams MBT Saudi Arabia: AMX-30 Bridge, based on the French AMX-30. Turkey: SYHK, an amphibious bridging vehicle based on the FNSS Pars. See also Kartik BLT AM 50 automatically launched assault bridge Bailey bridge Callender-Hamilton bridge Mabey Logistic Support Bridge Medium Girder Bridge Military engineer Pontoon bridge References External links Titan Armoured Vehicle Launcher Bridge (AVLB) at Armedforces.co.uk Military bridging equipment English inventions
Armoured vehicle-launched bridge
[ "Engineering" ]
1,686
[ "Military bridging equipment", "Military engineering" ]
5,585,066
https://en.wikipedia.org/wiki/Western%20Australian%20Regional%20Computing%20Centre
Western Australian Regional Computing Centre (WARCC) was part of the University of Western Australia, formed to provide computing services to the university, other universities in Western Australia, government departments, and to some private companies. It specialised in technical and scientific computing. It was formed on 1 January 1972, and ceased in 1991, when parts of it were spun off to become Winthrop Technology. Among the services it provided were time-shared computer processing, facilities management, software development, microcomputer rental and sales. It was Digital Equipment Corporation's first customer for the PDP-6. Its first Director was Dennis Moore (1972–1979), followed by Alex Reid (1979–1991). WARCC's Data Communications group, headed by Terry Gent, developed computer networking hardware and software. Using a combination of equipment from Digital Equipment Corporation and other vendors, and hardware and software that the group developed, it built a campus-wide network and then extended that to link the networks of the universities in Western Australia in the first heterogeneous packet switching network in Australia. External links WARCC History Page UWA Computing History Page "Cyberhistory": MSc thesis by Keith Falloon, 2001 "Computing", Historical Encyclopedia of Western Australia, UWA Press, 2009, Gregory, J. & Gothard, J., editors, p223-224 University of Western Australia
Western Australian Regional Computing Centre
[ "Technology" ]
281
[ "Computing stubs" ]
5,585,532
https://en.wikipedia.org/wiki/VDX%20%28library%20software%29
VDX (standing for Virtual Document eXchange) is a software product for interlibrary loan (ILL) and document request management. VDX was developed by UK company Fretwell-Downing Informatics, a company which in 2005 was taken over by OCLC PICA, itself wholly acquired by OCLC Online Computer Library Center in 2007. VDX allows library staff to create and manage document borrowing and lending requests between participating libraries. VDX manages all the stages of the ILL process. It is also an efficient way to collect copyright fees for copyright holders such as authors and publishers. Description ILL requests are sent to VDX through a process called automediation. VDX validates the request for the necessary information — author, title, date of publication — and searches for the item. It then creates a routing list (or "rota") of libraries that own the item. The request is sent to the first library on the rota, which indicates whether it can supply the item. If the library cannot lend or process the request, it will be automatically directed to the next library on the routing list, and so on, until a library is found that can supply the document. The document is then sent to the requesting library. Throughout the document exchange process, the requesting library can check the status of the request at any time. VDX is based on ISO 10161, which is the international standard for ILL. ISO 10161 defines communication protocols and guarantees that ILL information can be communicated between different ILL programs (such as VDX and similar products). Another standard, ISO 10160, determines the terminology that is used for ILL transactions across various document exchange systems. Development of VDX ceased years ago (as of 2018), and it will not incorporate the new ISO 18626 standard. References External links VDX page at OCLC VDX Interlibrary Loan Manual at Access Pennsylvania Library automation OCLC
VDX (library software)
[ "Engineering" ]
394
[ "Library automation", "Automation" ]
5,585,683
https://en.wikipedia.org/wiki/ISO%2010160
ISO 10160 is the ISO standard, first published in 1993, that defines the terminology that is used for interlibrary loan transactions between various document exchange systems such as VDX. It is closely related to ISO 10161, the Interlibrary Loan Application Protocol. References 10160 Library automation
ISO 10160
[ "Technology", "Engineering" ]
63
[ "Library automation", "Computing stubs", "Automation" ]
5,585,715
https://en.wikipedia.org/wiki/ISO%2010161
ISO 10161 is the ISO standard, first published in 1993, that defines the interlibrary loan (ILL) application protocol for communication between various document exchange systems. It allows ILL systems at different libraries and residing on different hardware platforms and using different software packages such as VDX to communicate with each other to request and receive electronic documents. It is closely related to ISO 10160, the Interlibrary Loan Application Service Definition. References 10161 Library automation
ISO 10161
[ "Technology", "Engineering" ]
95
[ "Library automation", "Computing stubs", "Automation" ]
5,586,128
https://en.wikipedia.org/wiki/Meteotsunami
A meteotsunami or meteorological tsunami is a tsunami-like sea wave of meteorological origin. Meteotsunamis are generated when rapid changes in barometric pressure cause the displacement of a body of water. In contrast to impulse-type tsunami sources, a traveling atmospheric disturbance normally interacts with the ocean over a limited period of time (from several minutes to several hours). Tsunamis and meteotsunamis are otherwise similar enough that it can be difficult to distinguish one from the other, as in cases where there is a tsunami wave but there are no records of an earthquake, landslide, or volcanic eruption. Meteotsunamis, rather, are triggered by extreme weather events including severe thunderstorms, squalls and storm fronts, all of which can quickly change atmospheric pressure. Meteotsunamis typically occur when severe weather is moving at the same speed and direction as the local wave action towards the coastline. The size of the wave is enhanced by coastal features such as shallow continental shelves, bays and inlets. Only about 3% of historical tsunami events (from 2000 BC through 2014) are known to have meteorological origins, although their true prevalence may be considerably higher than this, because 10% of historical tsunamis have unknown origins, tsunami events in the past are often difficult to validate, and meteotsunamis may have previously been misclassified as seiche waves. Seiches are classified as long-standing waves with longer periods and slower changes in water levels; they are also restricted to enclosed or partially enclosed basins. Characteristics Meteotsunamis are restricted to local effects because they lack the energy available to significant seismic tsunamis. However, when they are amplified by resonance they can be hazardous. Meteotsunami events can last anywhere from a few minutes to a couple of hours. Their size, length and period are heavily dependent on the speed and severity of the storm front. They are progressive waves which can affect enclosed basins and also large areas of coastline. These events have produced waves of several meters in height and can resemble storm surge flooding. Frequency of events In April 2019, NOAA determined that 25 meteotsunamis, on average, strike the East Coast of the United States every year. In the Great Lakes, even more of these events occur; on average, 126 times a year. In some parts of the world, they are common enough to have local names: rissaga or rissague (Catalan), ressaca or resarca (Portuguese), milgħuba (Maltese), marrobbio or marrubio (Italian), Seebär (German), sjösprång (Swedish), Sea Bar (Scots), abiki or yota (Japanese), šćiga (Croatian). Some bodies of water are more susceptible than others, including anywhere that the natural resonance frequency matches that of the waves, such as in long and narrow bays, particularly when the inlet is aligned with the oncoming wave. Examples of particularly susceptible areas include Nagasaki Bay, the eastern Adriatic Sea, and the Western Mediterranean. Notable events In 1929, a wave 6 meters in height swept ten people from the shore to their deaths in Grand Haven, Michigan. A three-meter wave that hit the Chicago waterfront in 1954 swept people off of piers, drowning seven. A meteotsunami that struck Nagasaki Bay on 31 March 1979 achieved a maximum wave height of 5 meters; there were three fatalities.
In June 2013, a derecho off the New Jersey coast triggered a widespread meteotsunami event, in which tide gauges along the East Coast and in Puerto Rico and Bermuda reported "tsunami-like" conditions. The peak wave amplitude was 1 foot above normal sea level in Newport, RI. In New Jersey, divers were pulled over a breakwater and three people were swept off a jetty, two of them seriously injured, when a six-foot wave struck the Barnegat Inlet. See also Deep-ocean Assessment and Reporting of Tsunamis (DART) List of tsunamis Rogue wave Sneaker wave Storm surge Tsunami warning system (TWS) Undular bore References External links Photos of the Rissaga in Spain (Ciutadella) 06-15-2006 Video of a meteotsunami at Sanibel Island, Florida Weather hazards Flood Tsunami Water waves
Meteotsunami
[ "Physics", "Chemistry", "Environmental_science" ]
880
[ "Physical phenomena", "Hydrology", "Weather hazards", "Water waves", "Weather", "Flood", "Waves", "Fluid dynamics" ]
5,586,309
https://en.wikipedia.org/wiki/Propanolamines
Propanolamines are a class of chemical compounds, many of which are pharmaceutical drugs. They are amino alcohols that are derivatives of 1-amino-2-propanol. Propanolamines include: Acebutolol Atenolol Betaxolol Bisoprolol Metoprolol Nadolol Penbutolol Phenylpropanolamine Pindolol Practolol Propranolol Ritodrine Timolol See also Propanolamine External links References Amino alcohols
Propanolamines
[ "Chemistry" ]
112
[ "Organic compounds", "Amino alcohols" ]
5,586,326
https://en.wikipedia.org/wiki/Stellar%20birthline
The stellar birthline is a predicted line on the Hertzsprung–Russell diagram that relates the effective temperature and luminosity of pre-main-sequence stars at the start of their contraction. Prior to this point, the objects are accreting protostars, and are so deeply embedded in the cloud of dust and gas from which they are forming that they radiate only at far-infrared and millimeter wavelengths. Once stellar winds disperse this cloud, the star becomes visible as a pre-main-sequence object. The set of locations on the Hertzsprung–Russell diagram where these newly visible stars reside is called the birthline, and it is found above the main sequence. The location of the stellar birthline depends in detail on the accretion rate onto the star and on the geometry of this accretion, i.e. whether or not it is occurring through an accretion disk. This means that the birthline is not an infinitely thin curve, but has a finite thickness in the Hertzsprung–Russell diagram. See also Hayashi track Henyey track Pre-main-sequence star Protostar Stellar isochrone T Tauri star References External links http://jila.colorado.edu/~pja/stars02/lecture29.ps – several low-quality plots with the stellar birthline Stellar evolution Hertzsprung–Russell classifications
Stellar birthline
[ "Physics", "Astronomy" ]
289
[ "Astronomy stubs", "Astrophysics", "Stellar evolution", "Stellar astronomy stubs", "Astrophysics stubs" ]
5,587,151
https://en.wikipedia.org/wiki/Germanium%20dioxide
Germanium dioxide, also called germanium(IV) oxide, germania, and salt of germanium, is an inorganic compound with the chemical formula GeO2. It is the main commercial source of germanium. It also forms as a passivation layer on pure germanium in contact with atmospheric oxygen. Structure The two predominant polymorphs of GeO2 are hexagonal and tetragonal. Hexagonal GeO2 has the same structure as α-quartz, with germanium having coordination number 4. Tetragonal GeO2 (the mineral argutite) has the rutile-like structure seen in stishovite. In this motif, germanium has the coordination number 6. An amorphous (glassy) form of GeO2 is similar to fused silica. Germanium dioxide can be prepared in both crystalline and amorphous forms. At ambient pressure the amorphous structure is formed by a network of GeO4 tetrahedra. At elevated pressure up to approximately 9 GPa the germanium average coordination number steadily increases from 4 to around 5, with a corresponding increase in the Ge–O bond distance. At higher pressures, up to approximately 15 GPa, the germanium coordination number increases to 6, and the dense network structure is composed of GeO6 octahedra. When the pressure is subsequently reduced, the structure reverts to the tetrahedral form. At high pressure, the rutile form converts to an orthorhombic CaCl2 form. Reactions Heating germanium dioxide with powdered germanium at 1000 °C forms germanium monoxide (GeO). The hexagonal (d = 4.29 g/cm3) form of germanium dioxide is more soluble than the rutile (d = 6.27 g/cm3) form and dissolves to form germanic acid, H4GeO4, or Ge(OH)4. GeO2 is only slightly soluble in acid but dissolves more readily in alkali to give germanates. Germanic acid forms stable complexes with di- and polyfunctional carboxylic acids, poly-alcohols, and o-diphenols. In contact with hydrochloric acid, it releases the volatile and corrosive germanium tetrachloride. Uses The refractive index (1.7) and optical dispersion properties of germanium dioxide make it useful as an optical material for wide-angle lenses, in optical microscope objective lenses, and for the core of fiber-optic lines. See Optical fiber for specifics on the manufacturing process. Both germanium and its glassy oxide, GeO2, are transparent to the infrared (IR) spectrum. The glass can be manufactured into IR windows and lenses, used for night-vision technology in the military, in luxury vehicles, and in thermographic cameras. GeO2 is preferred over other IR-transparent glasses because it is mechanically strong and therefore suited to rugged military use. A mixture of silicon dioxide and germanium dioxide ("silica-germania") is used as an optical material for optical fibers and optical waveguides. Controlling the ratio of the elements allows precise control of refractive index. Silica-germania glasses have lower viscosity and higher refractive index than pure silica. Germania replaced titania as the silica dopant for silica fiber, eliminating the need for subsequent heat treatment, which made the fibers brittle. Germanium dioxide is used as a colorant in borosilicate glass, used in lampworking. When combined with copper oxide, it provides a more stable red. When combined with silver oxide, it gives the glass a very reactive, changeable color, “a wonderful rainbow effect”, that can shift from light amber to a somewhat reddish and even deep purple appearance.
The color can vary with the chemistry of the flame used to melt the glass (whether it is oxygen-rich or fuel-rich), and it can also change depending on the temperature of the kiln used to anneal the glass. Germanium dioxide is also used as a catalyst in the production of polyethylene terephthalate resin, and for the production of other germanium compounds. It is used as a feedstock for the production of some phosphors and semiconductor materials. Germanium dioxide is used in algaculture as an inhibitor of unwanted diatom growth in algal cultures, since contamination with the comparatively fast-growing diatoms often inhibits the growth of, or outcompetes, the original algae strains. GeO2 is readily taken up by diatoms and leads to silicon being substituted by germanium in biochemical processes within the diatoms, causing a significant reduction of the diatoms' growth rate or even their complete elimination, with little effect on non-diatom algal species. For this application, the concentration of germanium dioxide typically used in the culture medium is between 1 and 10 mg/L, depending on the stage of the contamination and the species. Toxicity and medical Germanium dioxide has low toxicity, but it is nephrotoxic in higher doses. Germanium dioxide is used as a germanium supplement in some questionable dietary supplements and "miracle cures". High doses of these have resulted in several cases of germanium poisoning. References Germanium(IV) compounds Oxides Optical materials Ceramic materials Glass compositions Transparent materials
Germanium dioxide
[ "Physics", "Chemistry", "Engineering" ]
1,090
[ "Physical phenomena", "Glass chemistry", "Glass compositions", "Oxides", "Salts", "Optical phenomena", "Materials", "Optical materials", "Ceramic materials", "Transparent materials", "Ceramic engineering", "Matter" ]
5,587,875
https://en.wikipedia.org/wiki/Read%E2%80%93modify%E2%80%93write
In computer science, read–modify–write is a class of atomic operations (such as test-and-set, fetch-and-add, and compare-and-swap) that both read a memory location and write a new value into it in a single atomic step, the new value being either entirely independent of or some function of the previous value. These operations prevent race conditions in multi-threaded applications. Typically they are used to implement mutexes or semaphores. These atomic operations are also heavily used in non-blocking synchronization. Maurice Herlihy (1991) ranks atomic operations by their consensus numbers, as follows: ∞ (infinite): memory-to-memory move and swap, augmented queue, compare-and-swap, fetch-and-cons, sticky byte, load-link/store-conditional (LL/SC); 2n − 2: n-register assignment; 2: test-and-set, swap, fetch-and-add, queue, stack; 1: atomic read and atomic write. It is impossible to implement an operation that requires a given consensus number using only operations with a lower consensus number, no matter how many such operations one uses. Read–modify–write instructions often produce unexpected results when used on I/O devices, as a write operation may not affect the same internal register that would be accessed in a read operation. This term is also associated with RAID levels that perform actual write operations as atomic read–modify–write sequences. Such RAID levels include RAID 4, RAID 5 and RAID 6. See also Linearizability Read–erase–modify–write References Concurrency control Computer memory
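As an illustration of how these primitives are used in practice, here is a minimal sketch of a spinlock built on the test-and-set operation using C11's <stdatomic.h>; the spinlock_* names and the busy-wait strategy are illustrative choices, not part of the source article:

#include <stdatomic.h>

/* A minimal spinlock built on test-and-set, a read-modify-write
   primitive with consensus number 2 in Herlihy's hierarchy. */
typedef struct {
    atomic_flag flag;  /* clear = unlocked, set = locked */
} spinlock_t;

static void spinlock_init(spinlock_t *l) {
    atomic_flag_clear(&l->flag);
}

static void spinlock_lock(spinlock_t *l) {
    /* Atomically read the old value and write 'set' in one step;
       keep spinning until the old value observed was 'clear'. */
    while (atomic_flag_test_and_set_explicit(&l->flag, memory_order_acquire)) {
        /* busy-wait */
    }
}

static void spinlock_unlock(spinlock_t *l) {
    atomic_flag_clear_explicit(&l->flag, memory_order_release);
}

A fetch-and-add on an atomic counter (atomic_fetch_add in the same header) is another everyday read–modify–write operation; both avoid the race that separate load and store instructions would create.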
Read–modify–write
[ "Technology" ]
318
[ "Computing stubs", "Computer science", "Computer science stubs" ]
5,587,882
https://en.wikipedia.org/wiki/Grimm%E2%80%93Sommerfeld%20rule
In chemistry, the Grimm–Sommerfeld rule predicts that binary compounds with covalent character that have an average of 4 valence electrons per atom will have structures where both atoms are tetrahedrally coordinated (e.g. have the wurtzite structure). Examples are silicon carbide, the III-V semiconductors indium phosphide and gallium arsenide, and the II-VI semiconductors cadmium sulfide and cadmium selenide. Gorynova expanded the scope of the rule to include ternary compounds where the average number of valence electrons per atom is four. An example of this is the I-IV2-V3 compound CuGe2P3, which has a zincblende structure. Compounds or phases that obey the Grimm–Sommerfeld rule are termed Grimm–Sommerfeld compounds or phases. The rule has also been extended to predict bond lengths in Grimm–Sommerfeld compounds: when the sum of the atomic numbers is the same, the bond lengths are the same. An example is the series of bond lengths, ranging from 244.7 pm to 246 pm, for the Ge–Ge bond in elemental germanium, the Ga–As bond in gallium arsenide, the Zn–Se bond in zinc selenide and the Cu–Br bond in copper(I) bromide. References Quantum chemistry
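As a quick check of the electron count for the ternary example above, here is a worked illustration added for clarity, using the standard group valences Cu = 1, Ge = 4, P = 5:

\bar{n}_e(\mathrm{CuGe_2P_3}) = \frac{1 \cdot 1 + 2 \cdot 4 + 3 \cdot 5}{1 + 2 + 3} = \frac{24}{6} = 4

So the compound meets the four-valence-electrons-per-atom condition and, as the rule predicts, adopts a tetrahedrally coordinated (zincblende) structure.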
Grimm–Sommerfeld rule
[ "Physics", "Chemistry" ]
280
[ "Quantum chemistry stubs", "Quantum chemistry", "Theoretical chemistry stubs", "Quantum mechanics", "Theoretical chemistry", " molecular", "Atomic", "Physical chemistry stubs", " and optical physics" ]
5,588,193
https://en.wikipedia.org/wiki/Verilog-AMS
Verilog-AMS is a derivative of the Verilog hardware description language that includes Analog and Mixed-Signal extensions (AMS) in order to define the behavior of analog and mixed-signal systems. It extends the event-based simulator loop of Verilog/SystemVerilog/VHDL with a continuous-time simulator, which solves the differential equations in the analog domain. Both domains are coupled: analog events can trigger digital actions and vice versa. Overview The Verilog-AMS standard was created with the intent of enabling designers of analog and mixed-signal systems and integrated circuits to create and use modules that encapsulate high-level behavioral descriptions as well as structural descriptions of systems and components. Verilog-AMS is an industry-standard modeling language for mixed-signal circuits. It provides both continuous-time and event-driven modeling semantics, and so is suitable for analog, digital, and mixed analog/digital circuits. It is particularly well suited for verification of very complex analog, mixed-signal and RF integrated circuits. Verilog and Verilog-AMS are not procedural programming languages, but event-based hardware description languages (HDLs). As such, they provide sophisticated and powerful language features for the definition and synchronization of parallel actions and events. Many actions defined in HDL program statements can run in parallel (somewhat similar to threads and tasklets in procedural languages, but much more fine-grained). However, Verilog-AMS can be coupled with procedural languages like ANSI C using the Verilog Procedural Interface of the simulator, which eases test-suite implementation and allows interaction with legacy code or testbench equipment. The original intention of the Verilog-AMS committee was a single language for both analog and digital design; however, due to delays in the merger process, Verilog-AMS remains at Accellera while Verilog evolved into SystemVerilog and went to the IEEE. Code example Verilog-AMS is a superset of the Verilog digital HDL, so all statements in the digital domain work as in Verilog (see there for examples). All analog parts work as in Verilog-A.
The following code example in Verilog-AMS shows a DAC, an example of analog processing that is triggered by a digital signal:

`include "constants.vams"
`include "disciplines.vams"

// Simple DAC model
module dac_simple(aout, clk, din, vref);

	// Parameters
	parameter integer bits = 4 from [1:24];
	parameter real td = 1n from [0:inf);	// Processing delay of the DAC

	// Define input/output
	input clk, vref;
	input [bits-1:0] din;
	output aout;

	// Define port types
	logic clk;
	logic [bits-1:0] din;
	electrical aout, vref;

	// Internal variables
	real aout_new, ref;
	integer i;

	// Change signal in the analog part
	analog begin
		@(posedge clk) begin	// Change output only for rising clock edge
			aout_new = 0;
			ref = V(vref);
			for (i = 0; i < bits; i = i + 1) begin
				ref = ref/2;
				aout_new = aout_new + ref * din[i];
			end
		end
		V(aout) <+ transition(aout_new, td, 5n);	// Get a smoother transition when output level changes
	end

endmodule

The ADC model reads analog signals in the digital blocks:

`include "constants.vams"
`include "disciplines.vams"

// Simple ADC model
module adc_simple(clk, dout, vref, vin);

	// Parameters
	parameter integer bits = 4 from [1:24];	// Number of bits
	parameter integer td = 1 from [0:inf);	// Processing delay of the ADC

	// Define input/output
	input clk, vin, vref;
	output [bits-1:0] dout;

	// Define port types
	electrical vref, vin;
	logic clk;
	reg [bits-1:0] dout;

	// Internal variables
	real ref, sample;
	integer i;

	initial begin
		dout = 0;
	end

	// Perform sampling in the digital blocks for rising clock edge
	always @(posedge clk) begin
		sample = V(vin);
		ref = V(vref);
		for (i = 0; i < bits; i = i + 1) begin
			ref = ref/2;
			if (sample > ref) begin
				dout[i] <= #(td) 1;
				sample = sample - ref;
			end
			else
				dout[i] <= #(td) 0;
		end
	end

endmodule

Implementations While the language was initially supported only by commercial companies, parts of the behavioural modeling subset, "Verilog-A", were adopted by the transistor-modeling community. The ADMS translator supports it for open-source simulators like Xyce and ngSPICE. A more complete implementation is now available through OpenVAF. The post-SPICE simulator Gnucap was designed in accordance with the standard document, and its support for Verilog-AMS, at both the simulator level and the behavioral-modeling level, is growing. See also VHDL-AMS References External links I. Miller and T. Cassagnes, "Verilog-AMS Eases Mixed Mode Signal Simulation," Technical Proceedings of the 2000 International Conference on Modeling and Simulation of Microsystems, pp. 305–308, Available: https://web.archive.org/web/20070927051749/http://www.nsti.org/publ/MSM2000/T31.01.pdf General Accellera Verilog Analog Mixed-Signal Group verilogams.com — User's manual for Verilog-AMS and Verilog-A The Designer's Guide Community, Verilog-A/MS — Examples of models written in Verilog-AMS EDA.ORG AMS Wiki - Issues, future development, SystemVerilog integration Open Source Implementations OpenVAMS, an Open-Source VerilogAMS-1.3 Parser with internal VPI-like representation V2000 project - Verilog-AMS parser & elaborator OpenVAF Verilog-A compiler Xyce Gnucap Hardware description languages
Verilog-AMS
[ "Engineering" ]
1,387
[ "Electronic engineering", "Hardware description languages" ]
5,588,700
https://en.wikipedia.org/wiki/Torque%20motor
A torque motor is a specialized form of DC electric motor which can operate indefinitely while stalled, without incurring damage. In this mode of operation, the motor applies a steady torque to the load (hence the name). A torque motor that cannot perform a complete rotation is known as a limited-angle torque motor. Brushless torque motors are available; the elimination of commutators and brushes allows higher-speed operation. Construction Torque motors normally use toroidal construction, allowing them to have a wider diameter, more torque, and better dissipation of heat. They differ from other motors in their higher torque, thermal performance, and ability to operate while drawing high current in a stalled state. Linear versions An analogous device, moving linearly rather than rotating, is described as a force motor. These are widely used for refrigeration compressors and ultra-quiet air compressors, where the force motor produces simple harmonic motion in conjunction with a restoring spring. Applications Tape recorders A common application of a torque motor would be the supply- and take-up reel motors in a tape drive. In this application, driven from a low voltage, the characteristics of these motors allow a relatively constant light tension to be applied to the tape whether or not the capstan is feeding tape past the tape heads. Driven from a higher voltage (and so delivering a higher torque), the torque motors can also achieve fast-forward and rewind operation without requiring any additional mechanics such as gears or clutches. Computer games In the computer gaming world, torque motors are used in force-feedback steering wheels. Throttle control Another common application is the control of the throttle of an internal combustion engine in conjunction with an electronic governor. In this usage, the motor works against a return spring to move the throttle in accordance with the output of the governor. The latter monitors engine speed by counting electrical pulses from the ignition system or from a magnetic pickup and, depending on the speed, makes small adjustments to the amount of current applied to the motor. If the engine starts to slow down relative to the desired speed, the current will be increased, the motor will develop more torque, pulling against the return spring and opening the throttle. Should the engine run too fast, the governor will reduce the current applied to the motor, causing the return spring to pull back and close the throttle (a minimal code sketch of this loop is given below). Actuators Torque motors can be used as actuators for direct-drive mechanisms in some situations where otherwise geared electric motors would be used: for example, in motion control systems or servomechanisms. Actuators are hardware devices that convert a controller command signal into a change in a physical parameter. References External links https://www.machinedesign.com/motors-drives/article/21832523/torque-motors-do-the-trick overview article in trade journal https://www.kollmorgen.com/sites/default/files/public_downloads/Kollmorgen%20Inland%20Motor%20Direct%20Drive%20DC%20Motors%20Catalog%20EN.pdf Electric motors Actuators
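As a rough illustration of the governor behavior described above, here is a minimal sketch in C, not taken from the source article; the hardware hooks (read_engine_rpm, set_motor_current), the gain, and the 5 A current limit are all illustrative assumptions:

#include <stdint.h>

/* Hypothetical hardware hooks, assumed for illustration only. */
extern uint32_t read_engine_rpm(void);       /* pulses from ignition or magnetic pickup */
extern void set_motor_current(double amps);  /* current into the torque motor */

/* One iteration of a simple proportional governor: more current
   (hence more torque against the return spring, opening the throttle)
   when the engine runs slow, less when it runs fast. */
void governor_step(double target_rpm, double base_amps, double gain)
{
    double rpm   = (double)read_engine_rpm();
    double error = target_rpm - rpm;          /* positive if engine is slow */
    double amps  = base_amps + gain * error;  /* open throttle when slow */

    if (amps < 0.0) amps = 0.0;               /* clamp to the valid range */
    if (amps > 5.0) amps = 5.0;               /* assumed 5 A maximum */

    set_motor_current(amps);
}

In a real controller this loop would run at a fixed rate and would usually include integral action to remove steady-state speed error; the proportional form above is the simplest version of the behavior the text describes.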
Torque motor
[ "Technology", "Engineering" ]
638
[ "Electrical engineering", "Engines", "Electric motors" ]
5,588,982
https://en.wikipedia.org/wiki/Nitrosourea
Nitrosourea is both the name of a molecule and the name of a class of compounds that include a nitroso (R-NO) group and a urea. Examples Examples include: Arabinopyranosyl-N-methyl-N-nitrosourea (Aranose) Carmustine (BCNU, BiCNU) Chlorozotocin Ethylnitrosourea (ENU) Fotemustine Lomustine (CCNU) Nimustine N-Nitroso-N-methylurea (NMU) Ranimustine (MCNU) Semustine Streptozocin (Streptozotocin) Nitrosourea compounds are DNA alkylating agents and are often used in chemotherapy. They are lipophilic and thus can cross the blood–brain barrier, making them useful in the treatment of brain tumors such as glioblastoma multiforme. Side effects Some nitrosoureas (e.g. lomustine) have been associated with the development of interstitial lung disease. References External links Nitrosamines Ureas
Nitrosourea
[ "Chemistry" ]
239
[ "Inorganic compounds", "Inorganic compound stubs", "Organic compounds", "Organic compound stubs", "Organic chemistry stubs", "Ureas" ]
5,589,115
https://en.wikipedia.org/wiki/Camino%20Real%20de%20Tierra%20Adentro
El Camino Real de Tierra Adentro, also known as the Silver Route, was a Spanish road between Mexico City and San Juan Pueblo (Ohkay Owingeh), New Mexico (in the modern U.S.), that was used from 1598 to 1882. It was the northernmost of the four major "royal roads" that linked Mexico City to its major tributaries during and after the Spanish colonial era. In 2010, 55 sites and five existing UNESCO World Heritage Sites along the Mexican section of the route were collectively added to the World Heritage List, including historic cities, towns, bridges, haciendas and other monuments along the route between the Historic Center of Mexico City (also a World Heritage Site on its own) and the town of Valle de Allende, Chihuahua. The section of the route within the United States was proclaimed the El Camino Real de Tierra Adentro National Historic Trail, a part of the National Historic Trail system, on October 13, 2000. The historic route is overseen by both the National Park Service and the U.S. Bureau of Land Management with aid from the El Camino Real de Tierra Adentro Trail Association (CARTA). A portion of the trail near San Acacia, New Mexico, was listed on the U.S. National Register of Historic Places in 2014. Route The road is identified as beginning at the Plaza Santo Domingo, very close to the present Zócalo and Mexico City Metropolitan Cathedral in Mexico City. From there it travels north through San Miguel de Allende, Guanajuato, to its northern terminus at Ohkay Owingeh, New Mexico. History Pre-Columbian history Long before Europeans arrived, the various indigenous tribes and kingdoms that had arisen throughout the northern central steppe of Mexico had established the route that would later become the Camino Real de Tierra Adentro as a major thoroughfare for hunting and trading. The route connected the peoples of the Valley of Mexico with those of the north through the exchange of products such as turquoise, obsidian, salt and feathers. By the year AD 1000, a flourishing trade network existed from Mesoamerica to the Rocky Mountains. European incursion After Tenochtitlan was subdued in 1521, Spanish conquistadors and colonists began a series of expeditions with the purpose of expanding their domains and obtaining greater wealth for the Spanish Crown. Their initial efforts led them to follow the trails established by the natives who exchanged goods between the north and the south. In April 1598, a group of military scouts led by Juan de Oñate, the newly appointed colonial governor of the province of Santa Fe de Nuevo México, became lost in the desert south of Paso del Norte while seeking the best route to the Río del Norte. A captured local Indian named Mompil drew in the sand a map of the only safe passage to the river. The group arrived at the Río del Norte just south of present-day El Paso and Ciudad Juárez in late April, where they celebrated the Catholic Feast of the Ascension on April 30, before crossing the river. They then mapped and extended the route to what is now Española, where Oñate would establish the capital of the new province. This trail became the Camino Real de Tierra Adentro, the northernmost of the four main "royal roads" – the Caminos Reales – that linked Mexico City to its major tributaries in Acapulco, Veracruz, Audiencia (Guatemala) and Santa Fe.
After the Pueblo Revolt of 1680, which violently forced the Spanish out of Nuevo México, the Spanish Crown decided not to abandon the province altogether and maintained a channel to it so as not to cut off their subjects remaining there. The Viceroyalty organized a system, the so-called conducta, to supply the missions, presidios, and northern ranchos. The conducta consisted of wagon caravans that departed every three years from Mexico City to Santa Fe along the Camino Real de Tierra Adentro. The trip required a long and difficult journey of six months, including 2–3 weeks of rest along the way. The conducta and other travelers faced many uncertainties. River floods could force weeks of waiting on the banks until the caravan could wade across. At other times, prolonged droughts in the area could make water scarce and difficult to find. The most feared section of the journey was the crossing of the Jornada del Muerto beyond El Paso del Norte: a long stretch of expansive, barren desert without any water sources for the men and beasts. Beyond the sustenance needs, the greatest danger to the caravan was that of local assaults. Groups of bandits roamed throughout the territory and threatened the caravan from the current state of Mexico to the state of Querétaro, seeking articles of value. From the southern part of Zacatecas northward, the greatest threat was the native Chichimecas, who became more likely to attack as the caravan progressed further north. The main objective of the Chichimecas was horses, but they would also often take women and children. A series of presidios along the way allowed for relays of troops to provide additional protection to the caravans. At night in the most dangerous areas, the caravans would form a circle with their wagons, with the people and animals inside. The Camino Real was actively used as a commercial route for more than 300 years, from the middle of the 16th century to the 19th century, mainly for the transport of silver extracted from northern mines. During this time, the road was continuously improved, and over time the risks became smaller as haciendas and population centers emerged. 18th century During the 18th century, the sites along the Camino Real de Tierra Adentro increased significantly. The area between the villas of Durango and Santa Fe came to be known as "the Chihuahua Trail". The villa of San Felipe el Real (today the city of Chihuahua), established in 1709 to support the surrounding mines, became the most important commercial center and financial area along this segment. The villa of San Felipe Neri de Alburquerque (present-day Albuquerque, New Mexico) was founded in 1706 and also became an important terminal. Because of its defensive position on the Camino Real, the Villa de Alburquerque became the center of commercial exchange between Nuevo México and the rest of New Spain during the 18th century, trading cattle, wool, textiles, animal skins, salt, and nuts. This exchange occurred mainly with the mining cities of Chihuahua, Santa Bárbara, and Parral. El Paso del Norte (present-day Ciudad Juárez) became another major terminal on the route. In 1765, the population of El Paso del Norte was estimated to be 2,635 inhabitants, which made it what was then the largest urban center on the northern border of New Spain. El Paso del Norte became an important center of agriculture and rancheria, known for its wines, brandy, vinegar, and raisins.
In the 18th century, the Spanish Crown authorized the establishment of fairs along the Camino Real to promote commerce (although some form of these had already existed for some time prior). Some of the most important fairs along the Camino Real included the Fair de San Juan de los Lagos in Jalisco, the Fair de Saltillo, and the Fair de Chihuahua, which was of great importance to Nuevo México merchants. The Fair de Taos was also an important annual event where the Comanches and the Utes traded weapons, ammunition, horses, agricultural products, furs, and meats with the Spanish. Spain at the same time maintained a monopoly on the products of its northern provinces; thus no trade occurred with the French colony of Louisiana. During the second half of the 18th century, the northern frontier of New Spain represented a fundamental interest for the Spanish Empire and its reformist policy, with the aim of ensuring Spanish sovereignty over its northern provinces, highly coveted geopolitically by other European powers – especially the English and the French. The Spanish Crown labored to incorporate the natives into the social and economic welfare of its provinces and give them reasons to participate in the defense of the Spanish border. Thus, Captain Nicolás de Lafora (assigned by the then Marqués de Rubí) gave a description of the frontier of New Spain in his "Viaje a los presidios internos de la América septentrional", the product of an expedition that took place between 1766 and 1768. This expedition was part of a larger commission on defensive issues and military capabilities entrusted by the Spanish Crown to the Marquis of Rubí, to assess the tactical placement of the presidios, inspect troop readiness, review military regulations and propose what might be done to strengthen the government and the defense of the State. From this review, the Marquis proposed a line of presidios along the northern frontier of New Spain, to be established from the Gulf of Mexico to the Gulf of California, to protect against the Utes, Apaches, Comanches, and Navajos. Don José de Gálvez, special commissioner to New Spain for Charles III, promoted a "Comandancia General de las Provincias Internas" ("General Commander of the Internal Provinces") for the northern provinces of New Spain. However, he also recognized that a long war with the natives would be impossible to win or sustain due to the lack of military resources in the area. With that view, he himself promoted the establishment of a strong peace in the provinces and a greater commercial presence in 1779. In 1786, the nephew of José de Gálvez, Bernardo de Gálvez, viceroy of New Spain, published his "Instructions", which included three strategies for dealing with the natives: continuing military pressure on hostile and unaligned tribes; pursuing the formation of alliances with friendly tribes; and promoting economic dependency among those natives who had entered into peace treaties with the Spanish Crown. In the last decade of the 18th century, a tenuous peace was achieved between the Spaniards and the Apache tribes as a result of the aforementioned administrative and strategic changes. As a consequence, commerce along the Camino Real greatly expanded with products from all over the world, including products from the other provinces of New Spain, brought in over land; European products brought in by the Spanish fleet; and even those that came from the Manila galleon that arrived annually at Acapulco from the western Pacific.
As an example, the most typical products sold at this time by the merchants in the city of Parral along the "Chihuahua Trail" included platoncillos from Michoacán, jarrillos from Cuautitlán in the State of Mexico, majolica from the State of Puebla, porcelain from China, and clay products from Guadalajara. 19th century The 19th century brought many changes for both Mexico and its northern border. From the Napoleonic Wars to the start of the Mexican War of Independence, the colonial government was unstable and struggled to continue sending resources to the northern provinces. This void led to the establishment of alternate suppliers and supply routes into those provinces. In 1807, American merchant and military agent Zebulon Pike was sent to explore the southwestern borders between the US and New Spain with the intention of finding a trail to bring US commerce into Nuevo México and Nueva Vizcaya (Chihuahua). Pike was captured on 26 February 1807 by the Spanish authorities in northern Nuevo México, who sent him on the Camino Real to the city of Chihuahua for interrogation. While Pike was in this city, he gained access to several maps of México and learned of the discontent with Spanish domination. In 1821, after 11 years of struggle, Mexico gained its independence from Spain. The Camino Real maintained an important role in this period, since travelers brought news of the events that were taking place in the center of the country to the towns and villages of the internal provinces. During the Mexican War of Independence, the Camino Real was used by both sides, rebels and royalists. For example, after the liberator Miguel Hidalgo y Costilla launched the war of independence, he used the road to retreat northward from the Battle of the Bridge of Calderón, fought on the banks of the Calderón River 60 km (37 mi) east of Guadalajara in present-day Zapotlanejo, Jalisco, eventually arriving at the Wells of Baján in Coahuila, where he was captured and executed by royalist forces. Between 1821 and 1822, after the end of the war for the Independence of Mexico, the Santa Fe Trail was established to connect the US territory of Missouri with Santa Fe. At first, US merchants were arrested and imprisoned for bringing contraband into Mexican territory; however, the growing economic crisis in northern Mexico gave rise to an increased tolerance of this type of trade. In fact, the Santa Fe Trail (Sendero de Santa Fe) provided needed markets for local products (such as cotton) and manufactured products from New Mexico, so New Mexicans looked favorably on this new trade route. By 1827, a lucrative commercial connection had been forged between Missouri, New Mexico, and Chihuahua. In 1846, the dispute over the Texas-Mexico border with the United States gave rise to an invasion by US military forces, and the Mexican–American War began. One of these forces was commanded by General Stephen Kearny, who traveled by the Santa Fe Trail to seize the capital of New Mexico. Another force, commanded by Colonel Alexander William Doniphan, defeated a small group of Mexican contingents on the Camino Real in the Los Brazitos area south of what is now Las Cruces, New Mexico. Doniphan's forces went on to capture El Paso del Norte and, later, the city of Chihuahua. During 1846–1847, the Camino Real de Tierra Adentro became a path of continuous use, with American forces using it to travel into the interior of Mexico.
On their journey, many American travelers kept journals and wrote home about what they saw as they traveled. One of the soldiers provided an estimate of the population of several cities along the Camino, including: Algodones, New Mexico, with 1,000 inhabitants; Bernalillo with 500; Sandía Pueblo with 300 to 400; Albuquerque without an estimated number but extending for seven or eight miles along the Rio Grande; Rancho de los Placeres with 200 or 300; Tomé with 2,000; Socorro, described as a "considerable city"; Paso del Norte with 5,000 to 6,000; and Carrizal, Chihuahua, with 400 inhabitants. The soldiers even kept notes of the products, prices, and animals that they found on their journeys. With the Treaty of Guadalupe Hidalgo signed in February 1848, the war officially ended, with Mexico ceding most of its northern territories to the US, including parts of what are now the US states of New Mexico, Colorado, and Arizona, and all of California, Nevada and Utah. Uses of the name The name is sometimes a source of confusion, since during the Viceroyalty of New Spain all roads passable by horse and cart were called "Camino Real", and a significant number of roads throughout the viceroyalty bore this designation. Similarly, all of the interior territories outside of Mexico City were once called "Tierra Adentro", particularly the northern parts of the Kingdom. This is why the portion of the road between Santiago de Querétaro and Saltillo was alternatively called "La Puerta de Tierra Adentro" ("The Door of Tierra Adentro"). There have historically been several designated "Caminos Reales de Tierra Adentro" throughout New Spain, perhaps the second most important one after the road to Santa Fe being the one that led out of Saltillo, Coahuila, to the Province of Texas. World Heritage Site The section of the road that runs through Mexico was nominated to the UNESCO World Heritage List in November 2001, under the cultural criteria (i) and (ii), which referred to i) "Representing a masterpiece of the creative genius of man"; and ii) "Being the manifestation of a considerable exchange of influences, during a specific period or in a specific cultural area, in the development of architecture or technology, monumental arts, urban planning or landscape design". Criterion (iv), "Offering an eminent example of a type of building, architectural, technological or landscape, that illustrates a significant stage of human history", was added in 2010. On August 1, 2010, UNESCO designated this road as a World Heritage Site. The designation identified a core zone of 3,102 hectares with a buffer zone of 268,057 hectares distributed across 60 historical sites. UNESCO recognized 60 sites along the road in its declaration of the road as a World Heritage Site. Five of them (Mexico City, Querétaro, Guanajuato, San Miguel de Allende and Zacatecas) had been separately recognized in the past. The original historical route does not exactly match the route identified by UNESCO, since UNESCO's declaration omitted several sections, such as the portion that ran north of Valle de Allende in Chihuahua and the portion that ran through the Hacienda de San Diego del Jaral de Berrio in Guanajuato, as well as the portion in the United States. For this reason, a possible expansion of the declaration has been proposed for the future.
The Instituto Nacional de Antropología e Historia is conducting research to find and gather evidence for additional portions and sites of the original stretches of the historical road, such as bridges, pavements, haciendas, etc. that might be added to the original UNESCO designation. Declared sites Mexico City and State of Mexico 1351-000: Historic center of Mexico City. 1351-001: Old College of Templo de San Francisco Javier (Tepotzotlán) in Tepotzotlán. 1351-002: Aculco de Espinoza. 1351-003: Bridge of Atongo. 1351-004: Section of the Camino Real between Aculco de Espinoza and San Juan del Río. State of Hidalgo 1351-005: Templo and exconvento de San Francisco in Tepeji del Río de Ocampo and bridge. 1351-006: Section of the Camino Real between the bridge of La Colmena and the Hacienda de La Cañada. State of Querétaro 1351-007: Historic center of San Juan del Río. 1351-008: Hacienda de Chichimequillas. 1351-009: Chapel of the hacienda de Buenavista. 1351-010: Historic center of Santiago de Querétaro. State of Guanajuato 1351-011: Bridge of El Fraile. 1351-012: Antiguo Real Hospital de San Juan de Dios in San Miguel de Allende. 1351-013: Bridge of San Rafael in Guanajuato. 1351-014: Bridge La Quemada. 1351-015: Sanctuario de Jesús Nazareno de Atotonilco in the Municipality of San Miguel de Allende. 1351-016: Historic center of Guanajuato and its adjacent mines. State of Jalisco 1351-017: Historic center of Lagos de Moreno and bridge. 1351-018: Historic center of Ojuelos de Jalisco. 1351-019: Bridge of Ojuelos de Jalisco. 1351-020: Hacienda de Ciénega de Mata. 1351-021: Old Cemetery of Encarnación de Díaz. State of Aguascalientes 1351-022: Hacienda de Peñuelas. 1351-023: Hacienda de Cieneguilla. 1351-024: Historic center of Aguascalientes. 1351-025: Hacienda de Pabellón de Hidalgo. State of Zacatecas 1351-026: Chapel of San Nicolás Tolentino of the Hacienda de San Nicolás de Quijas. 1351-027: Town of Pinos. 1351-028: Templo de Nuestra Señora de los Ángeles of the town of Noria de Ángeles. 1351-029: Templo de Nuestra Señora de los Dolores in Villa González Ortega. 1351-030: Colegio de Nuestra Señora de Guadalupe de Propaganda Fide. 1351-031: Historic center of Sombrerete. 1351-032: Templo de San Pantaleón Mártir in the town of Noria de San Pantaleón. 1351-033: Sierra de Órganos. 1351-034: Architectural set of the town of Chalchihuites. 1351-035: Section of the Camino Real between Ojocaliente and Zacatecas. 1351-036: Cave of Ávalos. 1351-037: Historic center of Zacatecas. 1351-038: Sanctuary of Plateros. State of San Luis Potosí 1351-039: Historic center of San Luis Potosí. State of Durango 1351-040: Chapel of San Antonio of the Hacienda de Juana Guerra. 1351-041: Churches in the town of Nombre de Dios. 1351-042: Hacienda de San Diego de Navacoyán and Bridge del Diablo. 1351-043: Historic center of Durango. 1351-044: Churches in the town of Cuencamé and Cristo de Mapimí. 1351-045: Templo de Nuestra Señora del Refugio in the Hacienda La Pedriceña in Los Cuatillos, Cuencamé Municipality. 1351-046: Iglesia Principal of the town of San José de Avino. 1351-047: Chapel of the Hacienda de la Inmaculada Concepción of Palmitos de Arriba. 1351-048: Chapel of the Hacienda de la Limpia Concepción of Palmitos de Abajo. 1351-049: Architectural set of Nazas. 1351-050: Town of San Pedro del Gallo. 1351-051: Architectural set of the town of Mapimí. 1351-052: Town of Indé. 1351-053: Chapel of San Mateo of the Hacienda de San Mateo de la Zarca. 1351-054: Hacienda de la Limpia Concepción of Canutillo. 
1351-055: Templo de San Miguel in the town of Villa Ocampo. 1351-056: Section of the Camino Real between Nazas and San Pedro del Gallo. 1351-057: Ojuela Mine. 1351-058: Cave of Las Mulas de Molino. State of Chihuahua 1351-059: Town of Valle de Allende. Undeclared historic locations of the Camino Real in State of Chihuahua Santa Bárbara Parral Chihuahua Carrizal Laguna de Patos Ojo el Lucero Puerto Ancho Ciudad Juárez Senucú San Lorenzo Misión de Nuestra Señora de Guadalupe Presidio del Nuestra Senora del Pilar del Paso del Rio Norte National Historic Trail In the United States, from the Texas–New Mexico border to San Juan Pueblo north of Española, the original route (at one point designated U.S. Route 85 but later superseded by US Interstate Highways 10 and 25) has been designated a National Scenic Byway called El Camino Real. Pedestrian, bicycle, and equestrian trails have been added to portions of the trade route corridor over the past few decades. These include the existing Paseo del Bosque Trail in Albuquerque and portions of the proposed Rio Grande Trail. Its northern terminus, Santa Fe, is also a terminus of the Old Spanish Trail and the Santa Fe Trail. Along the trail, parajes (stopovers) that have been preserved today include El Rancho de las Golondrinas. Fort Craig and Fort Selden are also located along the trail. CARTA The El Camino Real de Tierra Adentro Trail Association (CARTA) is a non-profit trail organization that aims to promote, educate about, and preserve the cultural and historic trail in collaboration with the U.S. National Park Service, the Bureau of Land Management, the New Mexico Department of Cultural Affairs, and various Mexican organizations. CARTA publishes an informative quarterly journal, Chronicles of the Trail, which provides readers with further history and current affairs of the trail and describes what CARTA, as an organization, is doing to help preserve it. Chihuahua Trail The Chihuahua Trail is an alternate name used to describe the route as it passes from New Mexico through the state of Chihuahua to central Mexico. By the late 16th century, Spanish exploration and colonization had advanced from Mexico City northward along the great central plateau to its ultimate goal in Santa Fe. Until Mexican independence in 1821, all communications between New Mexico and the rest of the world were restricted to this trail. Over it came ox carts and mule trains, missionaries and governors, soldiers and colonists. When the Santa Fe Trail was established as an overland route between Santa Fe and Missouri, traders from the United States extended their operations southward down the Chihuahua Trail and beyond to Durango and Zacatecas. Ultimately superseded by railroads in the 19th century, the ancient Mexico City–Santa Fe road was revived in the mid-20th century as one of the great automobile highways of Mexico. The part that runs from Santa Fe, New Mexico to El Paso, Texas, US State Highway 85, was pioneered by Franciscan missionaries in 1581 and may be the oldest highway in the United States.
See also Camino Real in New Mexico - El Camino Real de Tierra Adentro El Camino Real (California) – the California Mission Trail El Camino Real de Los Tejas – El Camino Real from Texas east to Louisiana National Register of Historic Places listings in Socorro County, New Mexico Old San Antonio Road – a section of El Camino Real de Los Tejas Scenic byways in the United States Supply of Franciscan missions in New Mexico References Further reading Dictionary of American History by James Truslow Adams, New York: Charles Scribner's Sons, 1940 Boyle, Susan Calafate. Los Capitalistas: Hispano Merchants and the Santa Fe Trade. Albuquerque: University of New Mexico Press, 1997. Moorhead, Max L. New Mexico's Royal Road. Norman: University of Oklahoma Press, 1958. Palmer, Gabrielle G., et al. El Camino Real de Tierra Adentro. Santa Fe: Bureau of Land Management, 1993. Palmer, Gabrielle G. and Stephen L. Fosberg. El Camino Real de Tierra Adentro. Santa Fe: Bureau of Land Management, 1999. Preston, Douglas and José Antonio Esquibel. The Royal Road. Albuquerque: University of New Mexico Press, 1998. External links National Park Service: official El Camino Real de Tierra Adentro National Historic Trail website El Camino Real International Heritage Center El Camino Real de Tierra Adentro – Integrated education curriculum CARTA – El Camino Real de Tierra Adentro Trail Association: website N.M.-Monuments.org – "A Road Over Time" Historic trails and roads in Mexico Historic trails and roads in New Mexico Historic trails and roads in Texas Colonial Mexico Colonial New Mexico New Spain Spanish Texas National Historic Trails of the United States National Scenic Byways Bureau of Land Management areas in New Mexico Historic Civil Engineering Landmarks Protected areas established in 2000 Units of the National Landscape Conservation System Roads on the National Register of Historic Places in New Mexico New Mexico Scenic and Historic Byways World Heritage Sites in Mexico National Register of Historic Places in Socorro County, New Mexico 2000 establishments in Texas 2000 establishments in New Mexico 2000 establishments in Mexico
Camino Real de Tierra Adentro
[ "Engineering" ]
5,739
[ "Civil engineering", "Historic Civil Engineering Landmarks" ]
5,589,335
https://en.wikipedia.org/wiki/Penetration%20depth
Penetration depth is a measure of how deep light or any electromagnetic radiation can penetrate into a material. It is defined as the depth at which the intensity of the radiation inside the material falls to 1/e (about 37%) of its original value at (or more properly, just beneath) the surface. When electromagnetic radiation is incident on the surface of a material, it may be (partly) reflected from that surface and there will be a field containing energy transmitted into the material. This electromagnetic field interacts with the atoms and electrons inside the material. Depending on the nature of the material, the electromagnetic field might travel very far into the material, or may die out very quickly. For a given material, penetration depth will generally be a function of wavelength. Beer–Lambert law According to the Beer–Lambert law, the intensity of an electromagnetic wave inside a material falls off exponentially from the surface as I(z) = I_0 e^{-\alpha z}. If \delta_p denotes the penetration depth, we have \delta_p = 1/\alpha. Penetration depth is one term that describes the decay of electromagnetic waves inside a material. The above definition refers to the depth at which the intensity or power of the field decays to 1/e of its surface value. In many contexts one concentrates on the field quantities themselves: the electric and magnetic fields in the case of electromagnetic waves. Since the power of a wave in a particular medium is proportional to the square of a field quantity, one may speak of a penetration depth \delta_e = 2/\alpha = 2\delta_p at which the magnitude of the electric (or magnetic) field has decayed to 1/e of its surface value, at which point the power of the wave has thereby decreased to 1/e^2, or about 13%, of its surface value. Note that \delta_e is identical to the skin depth, the latter term usually applying to metals in reference to the decay of electrical currents (which follow the decay in the electric or magnetic field due to a plane wave incident on a bulk conductor). The field attenuation constant \alpha/2 is also identical to the (negative) real part of the propagation constant, which may itself be referred to as \alpha, a notation inconsistent with the above use. When referencing a source one must always be careful to note whether a number such as \alpha or \delta refers to the decay of the field itself, or of the intensity (power) associated with that field. It can also be ambiguous as to whether a positive number describes attenuation (reduction of the field) or gain; this is usually obvious from the context. Attenuation constant The attenuation constant for an electromagnetic wave at normal incidence on a material is also proportional to the imaginary part \kappa of the material's refractive index n. Using the above definition of \alpha (based on intensity) the following relationship holds: \alpha = 2\omega\kappa/c = 4\pi\kappa/\lambda_0, where n + i\kappa denotes the complex index of refraction, \omega is the radian frequency of the radiation, c is the speed of light in vacuum and \lambda_0 is the vacuum wavelength. Note that n is very much a function of frequency, as is its imaginary part \kappa, which is often not mentioned (it is essentially zero for transparent dielectrics). The complex refractive index of metals is also infrequently mentioned but has the same significance, leading to a penetration depth (or skin depth) accurately given, up to microwave frequencies, by the standard skin-depth formula \delta = \sqrt{2\rho/(\omega\mu)}, where \rho is the resistivity and \mu the permeability of the metal. Relationships between these and other ways of specifying the decay of an electromagnetic field can be expressed by mathematical descriptions of opacity.
This describes only the decay of the field, which may be due to absorption of the electromagnetic energy in a lossy medium or may simply describe the penetration of the field into a medium where no loss occurs (or a combination of the two). For instance, a hypothetical substance may have a complex index of refraction of 1 + 0.01i. A wave will enter that medium without significant reflection and will be totally absorbed in the medium, with a penetration depth (in field strength) of \lambda_0/(2\pi \times 0.01) \approx 16\lambda_0, where \lambda_0 is the vacuum wavelength. A different hypothetical material with a complex index of refraction of 0.01i will also have a penetration depth of 16 wavelengths; however, in this case the wave will be perfectly reflected from the material! No actual absorption of the radiation takes place, but the electric and magnetic fields nevertheless extend well into the substance. In either case the penetration depth is found directly from the imaginary part of the material's refractive index, as detailed above. See also Skin effect Absorbance Attenuation coefficient Transmittance References Electromagnetic radiation Scattering, absorption and radiative transfer (optics) Spectroscopy
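To make the 16-wavelength figure concrete, here is a short worked check, added for illustration and using the reconstructed formulas above:

\delta_e = \frac{2}{\alpha} = \frac{\lambda_0}{2\pi\kappa} = \frac{\lambda_0}{2\pi \times 0.01} \approx 15.9\,\lambda_0 \approx 16\,\lambda_0, \qquad \delta_p = \frac{\delta_e}{2} \approx 8\,\lambda_0

The field penetration depth depends only on the imaginary part \kappa = 0.01, which is why the absorbing material (index 1 + 0.01i) and the reflecting material (index 0.01i) share the same 16-wavelength value despite behaving very differently at the surface.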
Penetration depth
[ "Physics", "Chemistry" ]
876
[ "Physical phenomena", " absorption and radiative transfer (optics)", "Molecular physics", "Spectrum (physical sciences)", "Electromagnetic radiation", "Instrumental analysis", "Scattering", "Radiation", "Spectroscopy" ]
5,589,433
https://en.wikipedia.org/wiki/Methylhydrazines
Methylhydrazines are hydrazines that have additional methyl groups. Heavily methylated versions exist as hydrazinium salts. Members of this class include: Monomethylhydrazine Monomethylhydrazinium (cationic and exists as a variety of salts) Dimethylhydrazines Symmetrical dimethylhydrazine (1,2-dimethylhydrazine) Unsymmetrical dimethylhydrazine (1,1-dimethylhydrazine) Trimethylhydrazine 1,1,2-trimethylhydrazine 1,1,1-trimethylhydrazinium (cationic and exists as a variety of salts e.g. 1,1,1-trimethylhydrazinium iodide) Tetramethylhydrazine 1,1,2,2-tetramethylhydrazine 1,1,1,2-tetramethylhydrazinium (cationic and exists as a variety of salts) Pentamethylhydrazinium (cationic and exists as a variety of salts) Hexamethylhydrazinediium (dication, exists as a variety of salts) External links Hydrazines
Methylhydrazines
[ "Chemistry" ]
266
[ "Functional groups", "Hydrazines" ]
5,589,680
https://en.wikipedia.org/wiki/Picotee
Picotee describes flowers whose edge is a different colour from the flower's base colour. The word originates from the French picoté, meaning 'marked with points'. Examples References Flowers Plant morphology
Picotee
[ "Biology" ]
42
[ "Plant morphology", "Plants" ]
5,589,855
https://en.wikipedia.org/wiki/Sugar%20acid
In organic chemistry, a sugar acid or acidic sugar is a monosaccharide with a carboxyl group at one end or both ends of its chain. Main classes of sugar acids include: Aldonic acids, in which the aldehyde group (−CHO) located at the initial end (position 1) of an aldose is oxidized. Ulosonic acids, in which the hydroxymethyl group (−CH2OH) at the initial end of a 2-ketose is oxidized, creating an α-ketoacid. Uronic acids, in which the hydroxymethyl group (−CH2OH) at the terminal end of an aldose or ketose is oxidized. Aldaric acids, in which both ends (−CHO and −CH2OH) of an aldose are oxidized. Examples Examples of sugar acids include: Aldonic acids Glyceric acid (3C) Xylonic acid (5C) Gluconic acid (6C) Ascorbic acid (6C, unsaturated lactone) Ulosonic acids Neuraminic acid (5-amino-3,5-dideoxy-D-glycero-D-galacto-non-2-ulosonic acid) Ketodeoxyoctulosonic acid (KDO or 3-deoxy-D-manno-oct-2-ulosonic acid) Uronic acids Glucuronic acid (6C) Galacturonic acid (6C) Iduronic acid (6C) Aldaric acids Tartaric acid (4C) meso-Galactaric acid (Mucic acid) (6C) D-Glucaric acid (Saccharic acid) (6C) References External links
Sugar acid
[ "Chemistry" ]
362
[ "Sugar acids", "Carbohydrates" ]
5,589,857
https://en.wikipedia.org/wiki/Trust%20%28company%29
Trust International B.V. is a Dutch company producing value digital lifestyle accessories including PC peripherals and accessories for video gaming. Based in Dordrecht, it was originally founded in 1983 as Aashima Technology B.V. before gaining its current name in 2003. Products The company's product lines are divided into Home & Office, Gaming, Smart Home and Business to Business (B2B). Products that the company has covered for many years include mice, keyboards, webcams and headsets. Trust's products are sold in specialist stores, large retailers, electronics chains and online stores in over 50 countries. In the past, Trust also produced peripherals such as scanners and modems. Sports sponsorship Dutch F1 driver Jos Verstappen used his strong Dutch links to gain sponsorship for the Minardi F1 Team in 2003 when Trust became one of the team sponsors. That sponsorship was moved to Jordan Grand Prix in 2004 when Verstappen was on the verge of a race seat with the team. Trust had a sponsorship agreement with Spyker F1 as the team started to bring in Dutch sponsorship. Trust was the head sponsor of the Arden International team, which competed in the GP2 and GP2 Asia series, and previously in Formula 3000. Because of the sponsorship, the team has been dubbed Trust Team Arden. Trust also sponsored Minardi Team USA in the 2007 Champ Car World Series for much of the season but ended their sponsorship at the end of the season after the team stopped competing at the end of the year due to the unification of Champ Car and Indycar. Trust sponsored Red Bull Racing in 2009, both Sebastian Vettel and Mark Webber had the Trust name visible on the chin bar of their helmets. See also List of Dutch companies References External links Computer companies of the Netherlands Computer hardware companies Electronics companies of the Netherlands Computer peripheral companies Videotelephony Electronics companies established in 1983 Dutch brands
Trust (company)
[ "Technology" ]
384
[ "Computer hardware companies", "Computers" ]
5,589,929
https://en.wikipedia.org/wiki/ANSI/ISA-95
ANSI/ISA-95, or ISA-95 as it is more commonly known, is an international standard from the International Society of Automation for developing an automated interface between enterprise and control systems. This standard has been developed for global manufacturers. It was developed to be applied in all industries and in all sorts of processes, such as batch, continuous and repetitive processes. Objectives The objectives of ISA-95 are to provide consistent terminology that is a foundation for supplier and manufacturer communications, to provide consistent information models, and to provide consistent operations models, which are a foundation for clarifying application functionality and how information is to be used. Standard parts There are 5 parts to the ISA-95 standard. Part 1: Models and Terminology ANSI/ISA-95.00.01-2000, Enterprise-Control System Integration Part 1: Models and Terminology consists of standard terminology and object models, which can be used to decide which information should be exchanged. The models help define boundaries between the enterprise systems and the control systems. They help address questions like which tasks can be executed by which function and what information must be exchanged between applications. The ISA-95 models are organized as follows: Context; Hierarchy Models (the scheduling and control (Purdue) hierarchy and the equipment hierarchy); Functional Data Flow Model (manufacturing functions and data flows); Object Models (objects, object relationships and object attributes); Operations Activity Models (the operations elements PO, MO, QO and IO); and Operations Data Flow Model (operations functions and operations flows). Part 2: Object Model Attributes ANSI/ISA-95.00.02-2001, Enterprise-Control System Integration Part 2: Object Model Attributes consists of attributes for every object that is defined in Part 1. The objects and attributes of Part 2 can be used for the exchange of information between different systems, but these objects and attributes can also be used as the basis for relational databases. Part 3: Models of Manufacturing Operations Management ANSI/ISA-95.00.03-2005, Enterprise-Control System Integration, Part 3: Models of Manufacturing Operations Management focuses on the functions and activities at level 3 (the production/MES layer). It provides guidelines for describing and comparing the production levels of different sites in a standardized way. Part 4: Object models and attributes for Manufacturing Operations Management ISA-95.00.04 Object Models & Attributes Part 4 of ISA-95: "Object models and attributes for Manufacturing Operations Management". The SP95 committee is still developing Part 4 of ISA-95, entitled "Object Models and Attributes of Manufacturing Operations Management". This technical specification defines object models that determine which information is exchanged between MES activities (which are defined in Part 3 of ISA-95). The models and attributes of Part 4 are the basis for the design and implementation of interface standards and ensure flexible cooperation and information exchange between the different MES activities. Part 5: Business to manufacturing transactions ISA-95.00.05 B2M Transactions Part 5 of ISA-95: "Business to manufacturing transactions". Part 5 of ISA-95 is also still in development. This technical specification defines transactions between office and production automation systems, which can be used together with the object models of Parts 1 and 2.
These transactions connect and organise the production objects and activities defined in the earlier parts of the standard. Such transactions take place at all levels within a business, but the focus of this technical specification is the interface between enterprise and control systems. The transactions are described on the basis of models, and the transaction-processing logic is explained. Within production areas, activities are executed and information is passed back and forth; the standard provides reference models for production activities, quality activities, maintenance activities and inventory activities. See also International Society of Automation IEC 62264 Manufacturing execution system Manufacturing operations management External links ISA-95 Explained Enterprise-Control System Integration (ISA95) ANSI/ISA-95.00.01-2010 (IEC 62264-1 Mod), Enterprise-Control System Integration—Part 1: Models and Terminology ANSI/ISA-95.00.02-2018, Enterprise-Control System Integration—Part 2: Object Model Attributes ANSI/ISA-95.00.03-2013 (IEC 62264-3 Modified), Enterprise-Control System Integration—Part 3: Activity Models of Manufacturing Operations Management ANSI/ISA-95.00.04-2018, Enterprise-Control System Integration—Part 4: Objects and attributes for manufacturing operations management integration ANSI/ISA-95.00.05-2018, Enterprise-Control System Integration—Part 5: Business-to-Manufacturing Transactions ANSI/ISA-95.00.06-2014, Enterprise-Control System Integration—Part 6: Messaging Service Model ANSI/ISA-95.00.07-2017, Enterprise-Control System Integration—Part 7: Alias Service Model ANSI/ISA-95.00.08-2020, Enterprise-Control System Integration—Part 8: Information Exchange Profiles ISA-TR95.01-2018, Enterprise-Control System Integration—TR01: Master Data Profile Template Control engineering American National Standards Institute standards
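To make the Part 1 equipment hierarchy concrete, the following is a minimal illustrative sketch, not taken from the standard's normative text: it models a simplified role-based hierarchy (enterprise, site, area, work center, work unit) as a containment tree, and all names in it are hypothetical.

# Illustrative sketch only; a simplified view of the ISA-95 Part 1
# role-based equipment hierarchy as a containment tree.
from dataclasses import dataclass, field
from typing import List

LEVELS = ["Enterprise", "Site", "Area", "WorkCenter", "WorkUnit"]

@dataclass
class EquipmentNode:
    name: str
    level: str
    children: List["EquipmentNode"] = field(default_factory=list)

    def add(self, child: "EquipmentNode") -> "EquipmentNode":
        # Enforce that each child sits exactly one level below its parent.
        if LEVELS.index(child.level) != LEVELS.index(self.level) + 1:
            raise ValueError(f"{child.level} cannot sit directly under {self.level}")
        self.children.append(child)
        return child

# Hypothetical example hierarchy.
enterprise = EquipmentNode("AcmeCorp", "Enterprise")
site = enterprise.add(EquipmentNode("Plant 1", "Site"))
area = site.add(EquipmentNode("Packaging", "Area"))
line = area.add(EquipmentNode("Line 3", "WorkCenter"))
line.add(EquipmentNode("Filler", "WorkUnit"))

In practice such structures are exchanged between enterprise and control systems in standard-conformant formats, for example the B2MML XML schemas that implement the ISA-95 object models; the sketch only shows the containment constraint that each level nests one step below its parent.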
ANSI/ISA-95
[ "Technology", "Engineering" ]
1,031
[ "American National Standards Institute standards", "Computer standards", "Control engineering" ]
5,590,160
https://en.wikipedia.org/wiki/Wayne%20Wesolowski
Wayne Wesolowski is a builder of miniature models. Wesolowski's models have been exhibited at the Chicago Museum of Science and Industry, the Springfield, Illinois, Lincoln Home Site, the West Chicago City Museum, the Batavia Depot Museum, and the National Railroad Museum. One of his best-known works is a model of Abraham Lincoln's funeral train, which took 4½ years to build and is 15 feet (4½ meters) long. Wesolowski appeared on an episode of Tracks Ahead featuring this train and his model of Lincoln's home. Wesolowski has written scores of articles and four books on model building, and he has been featured in videos shown on PBS television. Good Morning America selected and showed part of one tape as an example of video education. Bob Hundman of Mainline Modeler Magazine noted that "He's always leading those of us who like scratchbuilding down new roads. He's a very inventive modeler." Wesolowski holds a Ph.D. in chemistry from the University of Arizona and lectures there. Publications References External links Building Model Railroad Wood Structures with Wayne Wesolowski Rail transport modellers 21st-century American chemists Model makers Scale modeling University of Arizona faculty University of Arizona alumni Living people Year of birth missing (living people)
Wayne Wesolowski
[ "Physics" ]
264
[ "Model makers", "Scale modeling" ]
5,590,352
https://en.wikipedia.org/wiki/Clofibric%20acid
Clofibric acid is a biologically active metabolite of lipid-lowering drugs such as clofibrate and etofibrate, with the molecular formula C10H11ClO3. It has been found in the environment following the use of these drugs, for example in Swiss lakes and the North Sea. Some derivatives of clofibric acid belong to a drug class called fibrates. See also Phenoxy herbicides, to which the compound is chemically related References 4-Chlorophenyl compounds 2-Methyl-2-phenoxypropanoic acid derivatives
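As a quick illustration (an addition, not from the source), the molecular formula can be checked from a SMILES string with RDKit, assuming the package is installed; the SMILES below encodes 2-(4-chlorophenoxy)-2-methylpropanoic acid, the systematic name of clofibric acid.

# Hedged sketch: derive clofibric acid's molecular formula from its SMILES.
# Requires RDKit (e.g. pip install rdkit).
from rdkit import Chem
from rdkit.Chem.rdMolDescriptors import CalcMolFormula

mol = Chem.MolFromSmiles("CC(C)(Oc1ccc(Cl)cc1)C(=O)O")
print(CalcMolFormula(mol))  # expected output: C10H11ClO3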
Clofibric acid
[ "Chemistry" ]
122
[]
5,590,879
https://en.wikipedia.org/wiki/NGC%202500
NGC 2500 is a barred spiral galaxy in the constellation Lynx, discovered by William Herschel in 1788. Much like the Local Group in which our own Milky Way galaxy is situated, NGC 2500 is part of the NGC 2841 group of galaxies, which also includes NGC 2541, NGC 2537 and NGC 2552. It has an H II nucleus and exhibits a weak inner ring structure. References External links Barred spiral galaxies 2500 04165 22525 Lynx (constellation)
NGC 2500
[ "Astronomy" ]
95
[ "Lynx (constellation)", "Constellations" ]
5,591,060
https://en.wikipedia.org/wiki/Estrogen%20receptor%20alpha
Estrogen receptor alpha (ERα), also known as NR3A1 (nuclear receptor subfamily 3, group A, member 1), is one of two main types of estrogen receptor, a nuclear receptor (mainly found as a chromatin-binding protein) that is activated by the sex hormone estrogen. In humans, ERα is encoded by the gene ESR1 (EStrogen Receptor 1). Structure The estrogen receptor (ER) is a ligand-activated transcription factor composed of several domains important for hormone binding, DNA binding, and activation of transcription. Alternative splicing results in several ESR1 mRNA transcripts, which differ primarily in their 5′ untranslated regions. The translated receptors show less variability. Ligands Agonists Non-selective Endogenous estrogens (e.g., estradiol, estrone, estriol, estetrol) Natural estrogens (e.g., conjugated equine estrogens) Synthetic estrogens (e.g., ethinylestradiol, diethylstilbestrol) Selective Agonists of ERα selective over ERβ include: Propylpyrazoletriol (PPT) 16α-LE2 (Cpd1471) 16α-IE2 ERA-63 (ORG-37663) SKF-82,958 – also a D1-like receptor full agonist (R,R)-Tetrahydrochrysene ((R,R)-THC) – not actually selective for ERα over ERβ in binding, but an antagonist rather than an agonist of ERβ Mixed Phytoestrogens (e.g., coumestrol, daidzein, genistein, miroestrol) Selective estrogen receptor modulators (e.g., tamoxifen, clomifene, raloxifene) Antagonists Non-selective Antiestrogens (e.g., fulvestrant, ICI-164384, ethamoxytriphetol) Selective Antagonists of ERα selective over ERβ include: Methylpiperidinopyrazole (MPP) Tissue distribution and function ERα plays a role in the physiological development and function of a variety of organ systems to varying degrees, including the reproductive, central nervous, skeletal, and cardiovascular systems. Accordingly, ERα is widely expressed throughout the body, including the uterus and ovary, male reproductive organs, mammary gland, bone, heart, hypothalamus, pituitary gland, liver, lung, kidney, spleen, and adipose tissue. The development and function of these tissues are disrupted in animal models lacking active ERα genes, such as the ERα knockout mouse (ERKO), providing a preliminary understanding of ERα function at specific target organs. Uterus and ovary ERα is essential in the maturation of the female reproductive phenotype. In the absence of ERα, the ERKO mouse still develops an adult uterus, indicating that ERα may not mediate the initial growth of the uterus. However, ERα plays a role in the completion of this development and in the subsequent function of the tissue. Activation of ERα is known to trigger cell proliferation in the uterus. The uterus of female ERKO mice is hypoplastic, suggesting that ERα mediates mitosis and differentiation in the uterus in response to estrogen stimulation. Similarly, prepubertal female ERKO mice develop ovaries that are nearly indistinguishable from those of their wildtype counterparts. However, as the ERKO mice mature they progressively present an abnormal ovarian phenotype in both physiology and function. Specifically, female ERKO mice develop enlarged ovaries containing hemorrhagic follicular cysts, which also lack the corpus luteum, and therefore do not ovulate. This adult ovarian phenotype suggests that in the absence of ERα, estrogen is no longer able to perform negative feedback on the hypothalamus, resulting in chronically elevated LH levels and constant ovarian stimulation.
These results identify a pivotal role for ERα in the hypothalamus, in addition to its role in the estrogen-driven maturation of the theca and interstitial cells of the ovary. Male reproductive organs ERα is similarly essential in the maturation and maintenance of the male reproductive phenotype, as male ERKO mice are infertile and have undersized testes. The integrity of testicular structures of ERKO mice, such as the seminiferous tubules and the seminiferous epithelium, declines over time. Furthermore, the reproductive performance of male ERKO mice is hindered by abnormalities in sexual physiology and behavior, such as impaired spermatogenesis and loss of intromission and ejaculatory responses. Mammary gland Estrogen stimulation of ERα is known to stimulate cell proliferation in breast tissue. ERα is thought to be responsible for pubertal development of the adult phenotype, through mediation of the mammary gland's response to estrogens. This role is consistent with the abnormalities of female ERKO mice: the epithelial ducts of female ERKO mice fail to grow beyond their pre-pubertal length, and lactational structures do not develop. As a result, the functions of the mammary gland, including both lactation and release of prolactin, are greatly impaired in ERKO mice. Bone Though its expression in bone is moderate, ERα is known to be responsible for maintenance of bone integrity. It is hypothesized that estrogen stimulation of ERα may trigger the release of growth factors, such as epidermal growth factor or insulin-like growth factor-1, which in turn regulate bone development and maintenance. Accordingly, male and female ERKO mice exhibit decreased bone length and size. Brain Estrogen signaling through ERα appears to be responsible for various aspects of central nervous system development, such as synaptogenesis and synaptic remodeling. In the brain, ERα is found in the hypothalamus, the preoptic area, and the arcuate nucleus, all three of which have been linked to reproductive behavior, and the masculinization of the mouse brain appears to take place through ERα function. Furthermore, studies in models of psychopathology and neurodegenerative disease states suggest that estrogen receptors mediate the neuroprotective role of estrogen in the brain. Finally, ERα appears to mediate positive feedback effects of estrogen on the brain's secretion of GnRH and LH, by way of increased expression of kisspeptin in neurons of the arcuate nucleus and anteroventral periventricular nucleus. Although classical studies have suggested that negative feedback effects of estrogen also operate through ERα, female mice lacking ERα in kisspeptin-expressing neurons continue to demonstrate a degree of negative feedback response. Clinical significance Estrogen insensitivity syndrome is a very rare condition characterized by a defective ERα that is insensitive to estrogens. The clinical presentation in a female was observed to include absence of breast development and other female secondary sexual characteristics at puberty, a hypoplastic uterus, primary amenorrhea, enlarged multicystic ovaries and associated lower abdominal pain, mild hyperandrogenism (manifested as cystic acne), and delayed bone maturation as well as an increased rate of bone turnover. The clinical presentation in a male was reported to include lack of epiphyseal closure, tall stature, osteoporosis, and poor sperm viability. Both individuals were completely insensitive to exogenous estrogen treatment, even at high doses.
Genetic polymorphisms in the gene encoding ERα have been associated with breast cancer in women, gynecomastia in men, and dysmenorrhea. In patients with breast cancer, mutations in the gene encoding ERα (ESR1) have been associated with resistance to endocrine therapy, especially aromatase inhibitors. Coactivators Coactivators of ERα include: SRC-1 AIB1 – amplified in breast 1 PELP-1 – proline-, glutamic acid-, leucine-rich protein 1 Interactions Estrogen receptor alpha has been shown to interact with: AKAP13 AHR BRCA1 CAV1 CCNC CDC25B CEBPB COBRA1 COUP-TFI CREBBP CRSP3 Cyclin D1 DNTTIP2 EP300 ESR2 FOXO1 GREB1 GTF2H1 HSPA1A HSPA8 HSP90AA1 ISL1 JARID1A MVP MED1 MED12 MED14 MED16 MED24 MED6 MGMT MNAT1 MTA1 NCOA6 NCOA1 NCOA2 NCOA3 NRIP1 PDLIM1 POU4F1 POU4F2 PRDM2 PRMT2 RBM39 RNF12 SAFB SAFB2 SHC1 SHP SMARCA4 SMARCE1 Src TR2 TR4 TDG TRIM24 and XBP1. References Further reading External links Intracellular receptors Transcription factors
Estrogen receptor alpha
[ "Chemistry", "Biology" ]
1,949
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
5,591,890
https://en.wikipedia.org/wiki/Puma%20armored%20engineering%20vehicle
The Puma (Hebrew: פומ"ה, from פורץ מכשולים הנדסי, "engineering obstacle breacher") is a heavily armored combat engineering vehicle and armored personnel carrier that the Combat Engineering Corps of the Israel Defense Forces has used since the early 1990s. The vehicle can carry a crew of up to eight. The 50-ton vehicle has a top speed of 45 kilometers per hour. The Puma uses the hull of the Sho't, which is itself a modified British Centurion tank. Some Pumas are equipped with the Carpet mine-clearing system, which consists of 20 rockets that the crew can fire singly or all together. The rockets carry a fuel-air explosive warhead that spreads a cloud of fuel vapor and then detonates it; the overpressure from the explosion destroys most mines. The Puma then advances behind a set of rollers that trigger any mines the fuel-air explosion did not destroy. There is also electronic equipment for detonating roadside bombs or jamming detonation signals. The Puma can tow a mobile bridge for deployment over trenches and other obstacles during battle. Unlike the M60 AVLB, which launches its bridge, the Puma tows the bridge during combat and pushes it over the obstacle, allowing tanks and infantry personnel carriers to maneuver quickly on the battlefield. Armament consists of three 7.62 mm FN MAG general-purpose machine guns, including one in a remote turret that the crew can control from within the cabin via a Rafael Overhead Weapon Station (OWS). The vehicle also has a 60 mm mortar and two smoke-grenade launchers. Current developments Israel is forming a fourth combat engineer battalion that will specialize in dealing with roadside bombs, mines and booby traps. As part of this effort, Israel will also upgrade its Pumas. The army is adding new equipment for dealing with roadside bombs and is training crews to cope with the growing number of explosive devices encountered in regions such as Gaza. See also VIU-55 Munja – Serbian APC/combat engineering vehicle based on the T-55 References External links Puma (Israeli-Weapons) Armoured personnel carriers of Israel Military engineering vehicles Tracked armoured personnel carriers Israeli Combat Engineering Corps Military vehicles introduced in the 1990s Mine warfare countermeasures
Puma armored engineering vehicle
[ "Engineering" ]
463
[ "Engineering vehicles", "Military engineering", "Military engineering vehicles" ]
5,591,986
https://en.wikipedia.org/wiki/Saprophagy
Saprophages are organisms that obtain nutrients by consuming decomposing dead plant or animal biomass. They are distinguished from detritivores in that saprophages are sessile consumers while detritivores are mobile. Typical saprophagous animals include sedentary polychaetes such as amphitrites (Amphitritinae, worms of the family Terebellidae) and other terebellids. The eating of wood, whether live or dead, is known as xylophagy; the activity of animals feeding only on dead wood is called sapro-xylophagy, and such animals are sapro-xylophagous. Ecology In food webs, saprophages generally play the role of decomposers. Saprophages are divided into two main branches by nutrient source: necrophages, which consume dead animal biomass, and thanatophages, which consume dead plant biomass. See also Detritivore Decomposer Saprotrophic nutrition Consumer-resource systems References Eating behaviors Mycology Soil biology
Saprophagy
[ "Biology" ]
228
[ "Behavior", "Biological interactions", "Mycology", "Soil biology", "Eating behaviors" ]
14,617,395
https://en.wikipedia.org/wiki/Beurling%20algebra
In mathematics, the term Beurling algebra is used for several different algebras introduced by Arne Beurling; usually it is an algebra of periodic functions with Fourier series $f(x) = \sum a_n e^{inx}$. Example We may consider the algebra of those functions $f$ for which the majorants $c_k = \sup_{|n| \ge k} |a_n|$ of the Fourier coefficients $a_n$ are summable, in other words $\sum_{k \ge 0} c_k < \infty$. Example We may consider a weight function $w$ on $\mathbb{Z}$ such that $w(m+n) \le w(m)\, w(n)$ and $w(0) = 1$, in which case $A_w(\mathbb{T}) = \{ f : f(x) = \sum a_n e^{inx},\ \|f\|_w = \sum_n |a_n|\, w(n) < \infty \}$ is a unitary commutative Banach algebra. These algebras are closely related to the Wiener algebra. References Fourier series Algebras
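As a brief worked example (an addition, not from the original article; the power weight below is a standard choice in the literature): the weights $w_\alpha(n) = (1+|n|)^\alpha$ with $\alpha \ge 0$ satisfy the submultiplicativity condition above, since $1 + |m+n| \le 1 + |m| + |n| \le (1+|m|)(1+|n|)$, so raising to the power $\alpha$ gives $w_\alpha(m+n) \le w_\alpha(m)\, w_\alpha(n)$, and clearly $w_\alpha(0) = 1$. Each such $\alpha$ therefore yields a Beurling algebra $A_{w_\alpha}(\mathbb{T})$; the case $\alpha = 0$ recovers the classical Wiener algebra of absolutely convergent Fourier series.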
Beurling algebra
[ "Mathematics" ]
97
[ "Algebras", "Mathematical structures", "Algebraic structures" ]