Dataset columns (name: type, range):
id: int64 (39 to 79M)
url: string (lengths 31 to 227)
text: string (lengths 6 to 334k)
source: string (lengths 1 to 150)
categories: list (lengths 1 to 6)
token_count: int64 (3 to 71.8k)
subcategories: list (lengths 0 to 30)
319,484
https://en.wikipedia.org/wiki/Law%20of%20tangents
In trigonometry, the law of tangents or tangent rule is a statement about the relationship between the tangents of two angles of a triangle and the lengths of the opposing sides. In Figure 1, a, b, and c are the lengths of the three sides of the triangle, and α, β, and γ are the angles opposite those three respective sides. The law of tangents states that (a − b)/(a + b) = tan(½(α − β))/tan(½(α + β)). The law of tangents, although not as commonly known as the law of sines or the law of cosines, is equivalent to the law of sines, and can be used in any case where two sides and the included angle, or two angles and a side, are known. Proof To prove the law of tangents one can start with the law of sines: a/sin α = b/sin β = d, where d is the diameter of the circumcircle, so that a = d sin α and b = d sin β. It follows that (a − b)/(a + b) = (sin α − sin β)/(sin α + sin β). Using the trigonometric identity, the factor formula for sines specifically, sin α ± sin β = 2 sin(½(α ± β)) cos(½(α ∓ β)), we get (a − b)/(a + b) = [sin(½(α − β)) cos(½(α + β))]/[sin(½(α + β)) cos(½(α − β))] = tan(½(α − β))/tan(½(α + β)). As an alternative to using the identity for the sum or difference of two sines, one may cite the trigonometric identity tan(½(α ± β)) = (sin α ± sin β)/(cos α + cos β) (see tangent half-angle formula). Application The law of tangents can be used to compute the angles of a triangle in which two sides a and b and the enclosed angle γ are given. From tan(½(α − β)) = [(a − b)/(a + b)] tan(½(α + β)) = [(a − b)/(a + b)] cot(½γ), compute the angle difference α − β; use that together with α + β = 180° − γ to calculate α and then β. Once an angle opposite a known side is computed, the remaining side c can be computed using the law of sines. In the time before electronic calculators were available, this method was preferable to an application of the law of cosines c = √(a² + b² − 2ab cos γ), as this latter law necessitated an additional lookup in a logarithm table in order to compute the square root. In modern times the law of tangents may have better numerical properties than the law of cosines: if γ is small, and a and b are almost equal, then an application of the law of cosines leads to a subtraction of almost equal values, incurring catastrophic cancellation. Spherical version On a sphere of unit radius, the sides of the triangle are arcs of great circles. Accordingly, their lengths can be expressed in radians or any other units of angular measure. Let A, B, C be the angles at the three vertices of the triangle and let a, b, c be the respective lengths of the opposite sides. The spherical law of tangents says tan(½(A − B))/tan(½(A + B)) = tan(½(a − b))/tan(½(a + b)). History The law of tangents was discovered by the Arab mathematician Abu al-Wafa in the 10th century. Ibn Muʿādh al-Jayyānī also described the law of tangents for planar triangles in the 11th century. The law of tangents for spherical triangles was described in the 13th century by the Persian mathematician Nasir al-Din al-Tusi (1201–1274), who also presented the law of sines for plane triangles in his five-volume work Treatise on the Quadrilateral. Cyclic quadrilateral A generalization of the law of tangents holds for a cyclic quadrilateral. Denote the lengths of its sides and its angle measures; then an analogous identity relates them. That formula reduces to the law of tangents for a triangle when one side of the quadrilateral shrinks to zero, so that the quadrilateral degenerates into a triangle. See also Law of sines Law of cosines Law of cotangents Mollweide's formula Half-side formula Tangent half-angle formula Notes Trigonometry Articles containing proofs Theorems about triangles Angle
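As a quick illustration of the solution procedure just described, here is a minimal numerical sketch in Python. The function name and the sample values are chosen for this example and are not taken from the article; it simply applies the law of tangents followed by the law of sines.

```python
import math

def solve_triangle_tangent_rule(a: float, b: float, gamma: float):
    """Solve a triangle given two sides a, b and the included angle gamma (radians),
    using the law of tangents as described above."""
    half_sum = (math.pi - gamma) / 2.0          # (alpha + beta) / 2
    # Law of tangents: tan((alpha - beta)/2) = (a - b)/(a + b) * tan((alpha + beta)/2)
    half_diff = math.atan((a - b) / (a + b) * math.tan(half_sum))
    alpha = half_sum + half_diff                # angle opposite side a
    beta = half_sum - half_diff                 # angle opposite side b
    c = a * math.sin(gamma) / math.sin(alpha)   # remaining side, from the law of sines
    return alpha, beta, c

# Example: a = 8, b = 5, gamma = 60 degrees; yields alpha ≈ 81.8°, beta ≈ 38.2°, c = 7.
alpha, beta, c = solve_triangle_tangent_rule(8.0, 5.0, math.radians(60.0))
print(math.degrees(alpha), math.degrees(beta), c)
```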
Law of tangents
[ "Physics", "Mathematics" ]
670
[ "Geometric measurement", "Scalar physical quantities", "Physical quantities", "Articles containing proofs", "Wikipedia categories named after physical quantities", "Angle" ]
319,491
https://en.wikipedia.org/wiki/Centrifugal%20compressor
Centrifugal compressors, sometimes called impeller compressors or radial compressors, are a sub-class of dynamic axisymmetric work-absorbing turbomachinery. They achieve a pressure rise by adding energy to the continuous flow of fluid through the rotor/impeller. The equation in the next section shows this specific energy input. A substantial portion of this energy is kinetic, and it is converted to increased potential energy/static pressure by slowing the flow through a diffuser. The static pressure rise in the impeller may roughly equal the rise in the diffuser. Components of a simple centrifugal compressor A simple centrifugal compressor stage has four components (listed in order of throughflow): inlet, impeller/rotor, diffuser, and collector. Figure 1.1 shows each of the components of the flow path, with the flow (working gas) entering the centrifugal impeller axially from left to right. This turboshaft (or turboprop) impeller is rotating counter-clockwise when looking downstream into the compressor. The flow passes through the compressor from left to right. Inlet The simplest inlet to a centrifugal compressor is typically a simple pipe. Depending upon its use/application, inlets can be very complex. They may include other components such as an inlet throttle valve, a shrouded port, an annular duct (see Figure 1.1), a bifurcated duct, stationary guide vanes/airfoils used to straighten or swirl the flow (see Figure 1.1), and movable guide vanes (used to vary the pre-swirl adjustably). Compressor inlets often include instrumentation to measure pressure and temperature in order to control compressor performance. Bernoulli's fluid dynamic principle plays an important role in understanding vaneless stationary components like an inlet. In engineering situations assuming adiabatic flow, this equation can be written in the form: Equation-1.1, where station 0 is the inlet of the compressor and station 1 is the inlet of the impeller, and the remaining quantities are the pressure, the density (a function of pressure), the flow speed, and the ratio of the specific heats of the fluid. Centrifugal impeller The identifying component of a centrifugal compressor stage is the centrifugal impeller rotor. Impellers are designed in many configurations including "open" (visible blades), "covered or shrouded", "with splitters" (every other inducer removed), and "w/o splitters" (all full blades). Figures 0.1, 1.2.1, and 1.3 show three different open full-inducer rotors with alternating full blades/vanes and shorter-length splitter blades/vanes. Generally, the accepted mathematical nomenclature refers to the leading edge of the impeller with subscript 1. Correspondingly, the trailing edge of the impeller is referred to with subscript 2. As the working gas/flow passes through the impeller from station 1 to station 2, its kinetic and potential energy increase. This is identical to an axial compressor, with the exception that the gases can reach higher energy levels through the impeller's increasing radius. In many modern high-efficiency centrifugal compressors the gas exiting the impeller is traveling near the speed of sound. Most modern high-efficiency impellers use "backsweep" in the blade shape. A derivation of the general Euler equations (fluid dynamics) is Euler's pump and turbine equation, which plays an important role in understanding impeller performance. This equation can be written in the form: Equation-1.2 (see Figures 1.2.2 and 1.2.3, which illustrate the impeller velocity triangles), where subscript 1 denotes the impeller leading edge (inlet, station 1) and subscript 2 denotes the impeller trailing edge (discharge, station 2), and the quantities involved are the energy added to the fluid, the acceleration due to gravity, the impeller's circumferential velocity, the velocity of the flow relative to the impeller, and the absolute velocity of the flow relative to the stationary frame (each of the last three in velocity units).
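To make Euler's pump and turbine equation (Equation 1.2) concrete, the sketch below evaluates the ideal specific work and head added by an impeller from its velocity triangles, using the standard Euler work relation (blade speed times tangential flow velocity, evaluated at discharge minus inlet). The function name and the illustrative numbers are assumptions chosen for this example, not values from the article.

```python
def euler_work(u1, c_theta1, u2, c_theta2, g=9.81):
    """Ideal energy added to the fluid by the impeller (Euler's pump equation).

    u1, u2             -- impeller circumferential (blade) speeds at the leading
                          and trailing edges (stations 1 and 2), m/s
    c_theta1, c_theta2 -- tangential components of the absolute flow velocity
                          at the same stations, m/s
    Returns (specific work in J/kg, the same energy expressed as a head in m).
    """
    specific_work = u2 * c_theta2 - u1 * c_theta1  # J/kg added to the flow
    head = specific_work / g                       # equivalent head, m
    return specific_work, head

# Illustrative (assumed) numbers: no inlet pre-swirl, 350 m/s tip speed,
# 300 m/s tangential velocity at the impeller discharge.
work, head = euler_work(u1=150.0, c_theta1=0.0, u2=350.0, c_theta2=300.0)
print(f"specific work ≈ {work/1e3:.0f} kJ/kg, ideal head ≈ {head:.0f} m")
```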
Diffuser The next component, downstream of the impeller within a simple centrifugal compressor, may be the diffuser. The diffuser converts the flow's kinetic energy (high velocity) into increased potential energy (static pressure) by gradually slowing (diffusing) the gas velocity. Diffusers can be vaneless, vaned, or an alternating combination. High-efficiency vaned diffusers are also designed over a wide range of solidities, from less than 1 to over 4. Hybrid versions of vaned diffusers include wedge (see Figure 1.3), channel, and pipe diffusers. Some turbochargers have no diffuser. Generally accepted nomenclature might refer to the diffuser's leading edge as station 3 and its trailing edge as station 4. Bernoulli's fluid dynamic principle plays an important role in understanding diffuser performance. In engineering situations assuming adiabatic flow, this equation can be written in the form: Equation-1.3, where station 2 is the inlet of the diffuser and station 4 is the discharge of the diffuser (see the inlet section above). Collector The collector of a centrifugal compressor can take many shapes and forms. When the diffuser discharges into a large, empty, circumferentially constant-area chamber, the collector may be termed a plenum. When the diffuser discharges into a device that looks somewhat like a snail shell, bull's horn, or French horn, the collector is likely to be termed a volute or scroll. When the diffuser discharges into an annular bend, the collector may be referred to as a combustor inlet (as used in jet engines or gas turbines) or a return channel (as used in an in-line multi-stage compressor). As the name implies, a collector's purpose is to gather the flow from the diffuser discharge annulus and deliver this flow downstream into whatever component the application requires. The collector or discharge pipe may also contain valves and instrumentation to control the compressor. In some applications, collectors will diffuse flow (converting kinetic energy to static pressure) far less efficiently than a diffuser. Bernoulli's fluid dynamic principle plays an important role in understanding collector performance. In engineering situations assuming adiabatic flow, this equation can be written in the form: Equation-1.4, where station 4 is the inlet of the collector and station 5 is the discharge of the collector (see the inlet section above). Historical contributions, the pioneers Over the past 100 years, applied scientists including Stodola (1903, 1927–1945), Pfleiderer (1952), Hawthorne (1964), Shepherd (1956), Lakshminarayana (1996), and Japikse (many texts including citations) have educated young engineers in the fundamentals of turbomachinery. These understandings apply to all dynamic, continuous-flow, axisymmetric pumps, fans, blowers, and compressors in axial, mixed-flow and radial/centrifugal configurations. This relationship is the reason advances in turbines and axial compressors often find their way into other turbomachinery, including centrifugal compressors.
Figures 2.1 and 2.2 illustrate the domain of turbomachinery with labels showing centrifugal compressors. Improvements in centrifugal compressors have not been achieved through large discoveries. Rather, improvements have been achieved through understanding and applying incremental pieces of knowledge discovered by many individuals. Aerodynamic-thermodynamic domain Figure 2.1 (shown right) represents the aero-thermo domain of turbomachinery. The horizontal axis represents the energy equation derivable from The first law of thermodynamics. The vertical axis, which can be characterized by Mach Number, represents the range of fluid compressibility (or elasticity). The Z-axis, which can be characterized by Reynolds number, represents the range of fluid viscosities (or stickiness). Mathematicians and physicists who established the foundations of this aero-thermo domain include: Isaac Newton, Daniel Bernoulli, Leonhard Euler, Claude-Louis Navier, George Stokes, Ernst Mach, Nikolay Yegorovich Zhukovsky, Martin Kutta, Ludwig Prandtl, Theodore von Kármán, Paul Richard Heinrich Blasius, and Henri Coandă. Physical-mechanical domain Figure 2.2 (shown right) represents the physical or mechanical domain of turbomachinery. Again, the horizontal axis represents the energy equation with turbines generating power to the left and compressors absorbing power to the right. Within the physical domain the vertical axis differentiates between high speeds and low speeds depending upon the turbomachinery application. The Z-axis differentiates between axial-flow geometry and radial-flow geometry within the physical domain of turbomachinery. It is implied that mixed-flow turbomachinery lie between axial and radial. Key contributors of technical achievements that pushed the practical application of turbomachinery forward include: Denis Papin, Kernelien Le Demour, Daniel Gabriel Fahrenheit, John Smeaton, Dr. A. C. E. Rateau, John Barber, Alexander Sablukov, Sir Charles Algernon Parsons, Ægidius Elling, Sanford Alexander Moss, Willis Carrier, Adolf Busemann, Hermann Schlichting, Frank Whittle and Hans von Ohain. Partial timeline of historical contributions Turbomachinery similarities Centrifugal compressors are similar in many ways to other turbomachinery and are compared and contrasted as follows: Similarities to axial compressor Centrifugal compressors are similar to axial compressors in that they are rotating airfoil-based compressors. Both are shown in the adjacent photograph of an engine with 5 stages of axial compressors and one stage of a centrifugal compressor. The first part of the centrifugal impeller looks very similar to an axial compressor. This first part of the centrifugal impeller is also termed an inducer. Centrifugal compressors differ from axials as they use a significant change in radius from inlet to exit of the impeller to produce a much greater pressure rise in a single stage (e.g. 8 in the Pratt & Whitney Canada PW200 series of helicopter engines) than does an axial stage. The 1940s-era German Heinkel HeS 011 experimental engine was the first aviation turbojet to have a compressor stage with radial flow-turning part-way between none for an axial and 90 degrees for a centrifugal. It is known as a mixed/diagonal-flow compressor. A diagonal stage is used in the Pratt & Whitney Canada PW600 series of small turbofans. 
Centrifugal fan Centrifugal compressors are also similar to centrifugal fans of the style shown in the neighboring figure as they both increase the energy of the flow through the increasing radius. In contrast to centrifugal fans, compressors operate at higher speeds to generate greater pressure rises. In many cases, the engineering methods used to design a centrifugal fan are the same as those to design a centrifugal compressor, so they can look very similar. For purposes of generalization and definition, it can be said that centrifugal compressors often have density increases greater than 5 percent. Also, they often experience relative fluid velocities above Mach number 0.3 when the working fluid is air or nitrogen. In contrast, fans or blowers are often considered to have density increases of less than five percent and peak relative fluid velocities below Mach 0.3. Squirrel-cage fan Squirrel-cage fans are primarily used for ventilation. The flow field within this type of fan has internal recirculations. In comparison, a centrifugal fan is uniform circumferentially. Centrifugal pump Centrifugal compressors are also similar to centrifugal pumps of the style shown in the adjacent figures. The key difference between such compressors and pumps is that the compressor working fluid is a gas (compressible) and the pump working fluid is liquid (incompressible). Again, the engineering methods used to design a centrifugal pump are the same as those to design a centrifugal compressor. Yet, there is one important difference: the need to deal with cavitation in pumps. Radial turbine Centrifugal compressors also look very similar to their turbomachinery counterpart the radial turbine as shown in the figure. While a compressor transfers energy into a flow to raise its pressure, a turbine operates in reverse, by extracting energy from a flow, thus reducing its pressure. In other words, power is input to compressors and output from turbines. Turbomachinery using centrifugal compressors Standards As turbomachinery became more common, standards have been created to guide manufacturers to assure end-users that their products meet minimum safety and performance requirements. Associations formed to codify these standards rely on manufacturers, end-users, and related technical specialists. A partial list of these associations and their standards are listed below: American Society of Mechanical Engineers:BPVC, PTC. American Petroleum Institute: API STD 617 8TH ED (E1), API STD 672 5TH ED (2019). American Society of Heating, Refrigeration, and Airconditioning Engineers: Handbook Fundamentals. Society of Automotive Engineers Compressed Air and Gas Institute International Organization for StandardizationISO 10439, ISO 10442, ISO 18740, ISO 6368, ISO 5389 Applications Below, is a partial list of centrifugal compressor applications each with a brief description of some of the general characteristics possessed by those compressors. To start this list two of the most well-known centrifugal compressor applications are listed; gas turbines and turbochargers. In gas turbines and auxiliary power units. Ref. Figures 4.1–4.2 In their simple form, modern gas turbines operate on the Brayton cycle. (ref Figure 5.1) Either or both axial and centrifugal compressors are used to provide compression. The types of gas turbines that most often include centrifugal compressors include small aircraft engines (i.e. turboshafts, turboprops, and turbofans), auxiliary power units, and micro-turbines. 
The industry standards applied to all centrifugal compressors used in aircraft applications are set by the relevant civilian and military certification authorities to achieve the safety and durability required in service. Centrifugal impellers used in gas turbines are commonly made from titanium alloy forgings. Their flow-path blades are commonly flank milled or point milled on 5-axis milling machines. When running clearances have to be as small as possible without the impeller rubbing its shroud the impeller is first drawn with its high-temperature, high-speed deflected shape and then drawn in its equivalent cold static shape for manufacturing. This is necessary because the impeller deflections at the most severe running condition can be 100 times larger than the required hot running clearance between the impeller and its shroud. In automotive engine and diesel engine turbochargers and superchargers. Ref. Figure 1.1 Centrifugal compressors used in conjunction with reciprocating internal combustion engines are known as turbochargers if driven by the engine's exhaust gas and turbo-superchargers if mechanically driven by the engine. Standards set by the industry for turbochargers may have been established by SAE. Ideal gas properties often work well for the design, test and analysis of turbocharger centrifugal compressor performance. In pipeline compressors of natural gas to move the gas from the production site to the consumer. Centrifugal compressors for such uses may be one- or multi-stage and driven by large gas turbines. Standards set by the industry (ANSI/API, ASME) result in thick casings to achieve a required level of safety. The impellers are often if not always of the covered style which makes them look much like pump impellers. This type of compressor is also often termed an API-style. The power needed to drive these compressors is most often in the thousands of horsepower (HP). The use of real gas properties is needed to properly design, test, and analyze the performance of natural gas pipeline centrifugal compressors. In oil refineries, natural-gas processing, petrochemical and chemical plants. Centrifugal compressors for such uses are often one-shaft multi-stage and driven by large steam or gas turbines. Their casings are termed horizontally split if the rotor is lowered into the bottom half during assembly or barrel if it has no lengthwise split-line with the rotor being slid in. Standards set by the industry (ANSI/API, ASME) for these compressors result in thick casings to achieve a required level of safety. The impellers are often of the covered style which makes them look much like pump impellers. This type of compressor is also often termed API-style. The power needed to drive these compressors is usually in the thousands of HP. Use of real gas properties is needed to properly design, test and analyze their performance. Air-conditioning and refrigeration and HVAC: Centrifugal compressors quite often supply the compression in water chillers cycles. Because of the wide variety of vapor compression cycles (thermodynamic cycle, thermodynamics) and the wide variety of working fluids (refrigerants), centrifugal compressors are used in a variety of sizes and configurations. Use of real gas properties is needed to properly design, test and analyze the performance of these machines. Standards set by the industry for these compressors include ASHRAE, ASME & API. In industry and manufacturing to supply compressed air for all types of pneumatic tools. 
Centrifugal compressors for such uses are often multistage and driven by electric motors. Inter-cooling is often needed between stages to control air temperature. Road-repair crews and automobile repair garages find screw compressors better adapt to their needs. Standards set by the industry for these compressors include ASME and government regulations that emphasize safety. Ideal gas relationships are often used to properly design, test, and analyze the performance of these machines. Carrier's equation is often used to deal with humidity. In air separation plants to manufacture purified end product gases. Centrifugal compressors for such uses are often multistage using inter-cooling to control air temperature. Standards set by the industry for these compressors include ASME and government regulations that emphasize safety. Ideal gas relationships are often used to properly design, test, and analyze the performance of these machines when the working gas is air or nitrogen. Other gases require real gas properties. In oil field re-injection of high-pressure natural gas to improve oil recovery. Centrifugal compressors for such uses are often one-shaft multi-stage and driven by gas turbines. With discharge pressures approaching 700 bar, casings are of the barrel style. Standards set by the industry (API, ASME) for these compressors result in large thick casings to maximize safety. The impellers are often if not always of the covered style which makes them look much like pump impellers. This type of compressor is also often termed API-style. The use of real gas properties is needed to properly design, test, and analyze their performance. Theory of operation In the case where flow passes through a straight pipe to enter a centrifugal compressor, the flow is axial, uniform, and has no vorticity, i.e. swirling motion. As the flow passes through the centrifugal impeller, the impeller forces the flow to spin faster as it gets further from the rotational axis. According to a form of Euler's fluid dynamics equation, known as the pump and turbine equation, the energy input to the fluid is proportional to the flow's local spinning velocity multiplied by the local impeller tangential velocity. In many cases, the flow leaving the centrifugal impeller is traveling near the speed of sound. It then flows through a stationary compressor causing it to decelerate. The stationary compressor is ducting with increasing flow-area where energy transformation takes place. If the flow has to be turned in a rearward direction to enter the next part of the machine, e.g. another impeller or a combustor, flow losses can be reduced by directing the flow with stationary turning vanes or individual turning pipes (pipe diffusers). As described in Bernoulli's principle, the reduction in velocity causes the pressure to rise. Performance While illustrating a gas turbine's Brayton cycle, Figure 5.1 includes example plots of pressure-specific volume and temperature-entropy. These types of plots are fundamental to understanding centrifugal compressor performance at one operating point. The two plots show that the pressure rises between the compressor inlet (station 1) and compressor exit (station 2). At the same time, the specific volume decreases while the density increases. The temperature-entropy plot shows that the temperature increases with increasing entropy (loss). 
Assuming dry air, the ideal gas equation of state, and an isentropic process, there is enough information to define the pressure ratio and efficiency for this one point. The compressor map is required to understand the compressor performance over its complete operating range. Figure 5.2, a centrifugal compressor performance map (either test or estimated), shows the flow and pressure ratio for each of 4 speed-lines (a total of 23 data points). Also included are constant efficiency contours. Centrifugal compressor performance presented in this form provides enough information to match the hardware represented by the map to a simple set of end-user requirements. Compared to estimating performance, which is very cost effective (thus useful in design), testing, while costly, is still the most precise method. Further, testing centrifugal compressor performance is very complex. Professional societies such as ASME (i.e. PTC–10, Fluid Meters Handbook, PTC-19.x), ASHRAE (ASHRAE Handbook) and API (ANSI/API 617–2002, 672–2007) have established standards for detailed experimental methods and analysis of test results. Despite this complexity, a few basic concepts in performance can be presented by examining an example test performance map. Performance maps Pressure ratio and flow are the main parameters needed to match the Figure 5.2 performance map to a simple compressor application. In this case, it can be assumed that the inlet temperature is sea-level standard. This assumption is not acceptable in practice, as inlet temperature variations cause significant variations in compressor performance. Figure 5.2 shows: Corrected mass flow: 0.04 – 0.34 kg/s; Total pressure ratio, inlet to discharge (PR = P_discharge/P_inlet): 1.0 – 2.6. As is standard practice, Figure 5.2 has a horizontal axis labeled with a flow parameter. While flow measurements use a variety of units, all fit one of two categories: Mass flow per unit time Mass flow units, such as kg/s, are the easiest to use in practice as there is little room for confusion. Questions remaining would involve inlet or outlet (which might involve leakage from the compressor or moisture condensation). For atmospheric air, the mass flow may be wet or dry (including or excluding humidity). Often, the mass flow specification will be presented on an equivalent Mach number basis. It is standard in these cases that the equivalent temperature, equivalent pressure, and gas are specified explicitly or implied at a standard condition. Volume flow per unit time In contrast, all volume flow specifications require the additional specification of density. Bernoulli's fluid dynamic principle is of great value in understanding this problem. Confusion arises through either inaccuracies or misuse of pressure, temperature, and gas constants. Also as is standard practice, Figure 5.2 has a vertical axis labeled with a pressure parameter. There is a variety of pressure measurement units. They all fit one of two categories: a Δ-pressure, i.e. the rise from inlet to exit (measured with a manometer), or a discharge pressure. The pressure rise may alternatively be specified as a ratio that has no units: a pressure ratio (exit/inlet).
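Returning to the single operating point mentioned at the start of this performance discussion: with dry air treated as an ideal gas, the total-to-total isentropic efficiency of a measured point follows directly from the pressure ratio and the inlet and exit total temperatures. The sketch below shows the arithmetic; the temperatures and pressure ratio are assumed, illustrative values rather than data read from Figure 5.2.

```python
def isentropic_efficiency(T1, T2, pressure_ratio, gamma=1.4):
    """Total-to-total isentropic efficiency of one compressor operating point,
    assuming dry air behaving as an ideal gas (gamma = 1.4).

    T1, T2         -- measured inlet and exit total temperatures, K
    pressure_ratio -- exit total pressure / inlet total pressure
    """
    T2_ideal = T1 * pressure_ratio ** ((gamma - 1.0) / gamma)  # isentropic exit temperature
    return (T2_ideal - T1) / (T2 - T1)

# Illustrative map point (assumed numbers): PR = 2.0, 288 K inlet, 373 K exit -> about 0.74.
print(isentropic_efficiency(T1=288.0, T2=373.0, pressure_ratio=2.0))
```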
Other features common to performance maps are: Constant speed-lines The two most common methods for producing a map for a centrifugal compressor are at constant shaft speed or with a constant throttle setting. If the speed is held constant, test points are taken along a constant speed line by changing throttle positions. In contrast, if a throttle valve is held constant, test points are established by changing speed, and the sequence is repeated at different throttle positions (common gas turbine practice). The map shown in Figure 5.2 illustrates the most common method: lines of constant speed. In this case, we see data points connected via straight lines at speeds of 50%, 71%, 87%, and 100% RPM. The first three speed-lines have 6 points each, while the highest speed line has five. Constant efficiency islands The next feature to be discussed is the oval-shaped curves representing islands of constant efficiency. In this figure we see 11 contours ranging from 56% efficiency (decimal 0.56) to 76% efficiency (decimal 0.76). General standard practice is to interpret these efficiencies as isentropic rather than polytropic. The inclusion of efficiency islands effectively generates a 3-dimensional topology on this 2-dimensional map. With the inlet density specified, it provides a further ability to calculate aerodynamic power. Lines of constant power could just as easily be substituted. Design or guarantee point(s) Regarding gas turbine operation and performance, there may be a series of guaranteed points established for the gas turbine's centrifugal compressor. These requirements are of secondary importance to the overall gas turbine performance as a whole. For this reason, it is only necessary to summarize that in the ideal case, the lowest specific fuel consumption would occur when the centrifugal compressor's peak efficiency curve coincides with the gas turbine's required operation line. In contrast to gas turbines, most other applications (including industrial) need to meet a less stringent set of performance requirements. Historically, centrifugal compressors applied to industrial applications were required to achieve performance at a specific flow and pressure. Modern industrial compressors are often required to achieve specific performance goals across a range of flows and pressures, thus taking a significant step toward the sophistication seen in gas turbine applications. If the compressor represented in Figure 5.2 is used in a simple application, any point (pressure and flow) within the 76% efficiency island would provide very acceptable performance. An "end user" would be very happy with the performance requirements of a 2.0 pressure ratio at 0.21 kg/s. Surge Surge is a low-flow phenomenon in which the impeller cannot add enough energy to overcome the system resistance or backpressure. At low-flow-rate operation, the pressure ratio over the impeller is high, as is the system backpressure. Under critical conditions, the flow reverses back over the tips of the rotor blades towards the impeller eye (inlet). This stalling flow reversal may go unnoticed as long as the fraction of mass flow or energy involved is small. When it becomes large enough, rapid flow reversal occurs (i.e., surge). The reversed flow exiting the impeller inlet exhibits a strong rotational component, which affects the lower-radius flow angles (closer to the impeller hub) at the leading edge of the blades. The deterioration of the flow angles causes the impeller to be inefficient. A full flow reversal can occur. (Therefore, surge is sometimes referred to as axisymmetric stall.) When the reversed flow falls to a low enough level, the impeller recovers and regains stability for a short moment, at which point the stage may surge again. These cyclic events cause large vibrations, increase temperature, and rapidly change the axial thrust.
These occurrences can damage the rotor seals, rotor bearings, the compressor driver, and cycle operation. Most turbomachines are designed to easily withstand occasional surging. However, if the machine is forced to surge repeatedly for a long period of time, or if it is poorly designed, repeated surges can result in a catastrophic failure. Of particular interest is that, while turbomachines may be very durable, the systems they operate within can be far less robust. Surge line The surge line shown in Figure 5.2 is the curve that passes through the lowest flow points of each of the four speed-lines. As a test map, these points would be the lowest flow points at which a stable reading can be recorded within the test facility/rig. In many industrial applications, it may be necessary to raise the stall line (shift it to higher flows) due to the system backpressure. For example, at 100% RPM the stalling flow might increase from approximately 0.170 kg/s to 0.215 kg/s because of the positive slope of the pressure ratio curve. As stated earlier, the reason for this is that the high-speed line in Figure 5.2 exhibits a stalling characteristic or positive slope within that range of flows. When placed in a different system, those lower flows might not be achievable because of interaction with that system. System resistance or adverse pressure is proven mathematically to be the critical contributor to compressor surge. Maximum flow line versus choke Choke occurs under one of two conditions. Typically, for high-speed equipment, as flow increases the velocity of the flow can approach sonic speed somewhere within the compressor stage. This location may occur at the impeller inlet "throat" or at the vaned diffuser inlet "throat". In contrast, for lower-speed equipment, as flows increase, losses increase such that the pressure ratio eventually drops to 1:1. In this case, the occurrence of choke is unlikely. The speed-lines of gas turbine centrifugal compressors typically exhibit choke. This is a situation where the pressure ratio of a speed line drops rapidly (vertically) with little or no change in flow. In most cases the reason for this is that velocities close to Mach 1 have been reached somewhere within the impeller and/or diffuser, generating a rapid increase in losses. Higher-pressure-ratio turbocharger centrifugal compressors exhibit the same phenomenon. The real choke phenomenon is a function of compressibility as measured by the local Mach number within an area restriction in the centrifugal compressor stage. The maximum flow line, shown in Figure 5.2, is the curve that passes through the highest flow points of each speed line. Upon inspection it may be noticed that each of these points has been taken near 56% efficiency. Selecting a low efficiency (<60%) is the most common practice used to terminate compressor performance maps at high flows. Another factor that is used to establish the maximum flow line is a pressure ratio near or equal to 1. The 50% speed line may be considered an example of this. The shape of Figure 5.2's speed-lines provides a good example of why it is inappropriate to use the term choke in association with the maximum flow of all centrifugal compressor speed-lines. In summary, most industrial and commercial centrifugal compressors are selected or designed to operate at or near their highest efficiencies and to avoid operation at low efficiencies. For this reason there is seldom a reason to illustrate centrifugal compressor performance below 60% efficiency.
Many industrial and commercial multistage compressor performance maps exhibit this same vertical characteristic for a different reason, related to what is known as stage stacking. Other operating limits Minimum operating speed The minimum speed for acceptable operation; below this value the compressor may be controlled to stop or go into an "idle" condition. Maximum allowable speed The maximum operating speed for the compressor; beyond this value stresses may rise above prescribed limits and rotor vibrations may increase rapidly. At speeds above this level the equipment will likely become very dangerous and be controlled to lower speeds. Dimensional analysis To weigh the advantages between centrifugal compressors, it is important to compare the eight parameters classic to turbomachinery: pressure rise (p), flow (Q), angular speed (N), power (P), density (ρ), diameter (D), viscosity (μ) and elasticity (e). This creates a practical problem when trying to experimentally determine the effect of any one parameter, because it is nearly impossible to change one of these parameters independently. The procedure known as the Buckingham π theorem can help solve this problem by generating five dimensionless forms of these parameters. These Pi parameters provide the foundation for "similitude" and the "affinity laws" in turbomachinery. They provide for the creation of additional relationships (being dimensionless) found valuable in the characterization of performance. For the example below, head will be substituted for pressure and sonic velocity will be substituted for elasticity. Buckingham Π theorem The three independent dimensions used in this procedure for turbomachinery are: mass (force is an alternative), length, and time. According to the theorem, each of the eight main parameters is expressed in terms of these independent dimensions as follows: Classic turbomachinery similitude Completing the formal procedure results in this classic set of five dimensionless parameters for turbomachinery. Full similitude is achieved when each one of the five Pi-parameters is equivalent when comparing two different cases. This, of course, would mean the two turbomachines being compared are similar, both geometrically and in terms of performance. Turbomachinery analysts gain tremendous insight into performance by comparisons of the five parameters shown in the above table, particularly performance parameters such as efficiencies and loss coefficients, which are also dimensionless. In general application, the flow coefficient and head coefficient are considered of primary importance. Generally, for centrifugal compressors, the speed coefficient is of secondary importance while the Reynolds coefficient is of tertiary importance. In contrast, as expected for pumps, the Reynolds coefficient becomes of secondary importance and the speed coefficient of tertiary importance. It may be found interesting that the speed coefficient may be chosen to define the y-axis of Figure 1.1, while at the same time the Reynolds coefficient may be chosen to define the z-axis.
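As a concrete illustration of the similitude parameters above, the sketch below computes one common form of the five dimensionless groups, with head substituted for pressure rise and sonic velocity for elasticity as in the text. The exact grouping conventions vary by author, so treat these definitions as assumptions for illustration rather than the article's own table.

```python
def turbomachine_pi_groups(Q, H, P, N, D, rho, mu, a, g=9.81):
    """One common form of the classic five dimensionless (Pi) parameters.

    Q   -- volume flow, m^3/s          H  -- head, m
    P   -- shaft power, W              N  -- rotational speed, rad/s
    D   -- impeller diameter, m        rho-- density, kg/m^3
    mu  -- dynamic viscosity, Pa*s     a  -- sonic velocity, m/s
    """
    flow_coeff  = Q / (N * D**3)            # flow coefficient
    head_coeff  = g * H / (N**2 * D**2)     # head coefficient
    power_coeff = P / (rho * N**3 * D**5)   # power coefficient
    reynolds    = rho * N * D**2 / mu       # machine Reynolds number
    mach        = N * D / a                 # machine Mach (speed) coefficient
    return flow_coeff, head_coeff, power_coeff, reynolds, mach

# Illustrative (assumed) values only, to show the groups are dimensionless numbers.
print(turbomachine_pi_groups(Q=0.5, H=1020.0, P=600e3, N=3000.0, D=0.3,
                             rho=1.2, mu=1.8e-5, a=340.0))
```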
Other dimensionless combinations Demonstrated in the table below is another value of dimensional analysis. Any number of new dimensionless parameters can be calculated through exponents and multiplication. For example, a variation of the first parameter shown below is popularly used in aircraft engine system analysis. The third parameter is a simplified dimensional variation of the first and second. This third definition is applicable with strict limitations. The fourth parameter, specific speed, is very well known and useful in that it removes diameter. The fifth parameter, specific diameter, is a less often discussed dimensionless parameter found useful by Balje. It may be found interesting that the specific speed coefficient may be used in place of speed to define the y-axis of Figure 1.2, while at the same time the specific diameter coefficient may be used in place of diameter to define the z-axis. Affinity laws The following affinity laws are derived from the five Π-parameters shown above. They provide a simple basis for scaling turbomachinery from one application to the next. Aero-thermodynamic fundamentals The following equations outline a fully three-dimensional mathematical problem that is very difficult to solve even with simplifying assumptions. Until recently, limitations in computational power forced these equations to be simplified to an inviscid two-dimensional problem with pseudo losses. Before the advent of computers, these equations were almost always simplified to a one-dimensional problem. Solving this one-dimensional problem is still valuable today and is often termed mean-line analysis. Even with all of this simplification it still requires large textbooks to outline and large computer programs to solve practically. Conservation of mass Also termed continuity, this fundamental equation written in general form is as follows: Conservation of momentum Also termed the Navier–Stokes equations, this fundamental equation is derivable from Newton's second law when applied to fluid motion. Written in compressible form for a Newtonian fluid, this equation may be written as follows: Conservation of energy The first law of thermodynamics is the statement of the conservation of energy. Under specific conditions, the operation of a centrifugal compressor is considered a reversible process. For a reversible process, the total amount of heat added to a system can be expressed as δQ = T dS, where T is temperature and S is entropy. Therefore, for a reversible process: dU = T dS − p dV. Since U, S and V are thermodynamic functions of state, the above relation holds also for non-reversible changes. The above equation is known as the fundamental thermodynamic relation. Equation of state The classical ideal gas law may be written: pV = nRT. The ideal gas law may also be expressed as p = ρ(γ − 1)e, where ρ is the density, γ is the adiabatic index (ratio of specific heats), e is the internal energy per unit mass (the "specific internal energy"), cv is the specific heat at constant volume, and cp is the specific heat at constant pressure. With regard to the equation of state, it is important to remember that, while air and nitrogen properties (near standard atmospheric conditions) are easily and accurately estimated by this simple relationship, there are many centrifugal compressor applications where the ideal relationship is inadequate. For example, centrifugal compressors used for large air conditioning systems (water chillers) use a refrigerant as a working gas that cannot be modeled as an ideal gas. Another example is centrifugal compressors designed and built for the petroleum industry. Most of the hydrocarbon gases, such as methane and ethylene, are best modeled with a real-gas equation of state rather than as ideal gases. The Wikipedia entry for equations of state is very thorough.
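The two forms of the ideal gas law quoted above are consistent, since R = cp − cv and γ = cp/cv imply (γ − 1)·cv = R. A minimal numerical check follows; the dry-air constants used are nominal, assumed values for illustration.

```python
# Nominal dry-air constants (assumed illustrative values).
cp = 1005.0          # specific heat at constant pressure, J/(kg*K)
cv = 718.0           # specific heat at constant volume, J/(kg*K)
R = cp - cv          # specific gas constant, ~287 J/(kg*K)
gamma = cp / cv      # adiabatic index, ~1.4

T = 288.15           # temperature, K
rho = 1.225          # density, kg/m^3

p_from_rt = rho * R * T                  # p = rho * R * T
e = cv * T                               # specific internal energy, J/kg
p_from_energy = (gamma - 1.0) * rho * e  # p = (gamma - 1) * rho * e

print(p_from_rt, p_from_energy)          # both come out near 101 kPa
```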
Pros and cons Pros Centrifugal compressors offer the advantages of simplicity of manufacturing and relatively low cost. This is due to their requiring fewer stages to achieve the same pressure rise. Centrifugal compressors are used throughout industry because they have fewer rubbing parts, are relatively energy efficient, and give a higher, non-oscillating, constant airflow than a similarly sized reciprocating compressor or any other positive displacement pump. Centrifugal compressors are mostly used as turbochargers and in small gas turbine engines, such as in an APU (auxiliary power unit) and as the main engine for smaller aircraft like helicopters. A significant reason for this is that, with current technology, the equivalent-airflow axial compressor will be less efficient, due primarily to a combination of rotor and variable-stator tip-clearance losses. Cons Their main drawback is that they cannot achieve the high compression ratio of reciprocating compressors without multiple stages. There are few single-stage centrifugal compressors capable of pressure ratios over 10:1, due to stress considerations which severely limit the compressor's safety, durability and life expectancy. Centrifugal compressors are impractical, compared to axial compressors, for use in large gas turbines and turbojet engines propelling large aircraft, due to the resulting weight and stress, and to the frontal area presented by the large diameter of the radial diffuser. Structural mechanics, manufacture and design compromise Ideally, centrifugal compressor impellers have thin airfoil blades that are strong, each mounted on a light rotor. This material would be easy to machine or cast and inexpensive. Additionally, it would generate no operating noise, and have a long life while operating in any environment. From the very start of the aero-thermodynamic design process, the aerodynamic considerations and optimizations are critical to a successful design. During the design, the centrifugal impeller's material and manufacturing method must be accounted for, whether it be plastic for a vacuum cleaner blower, aluminum alloy for a turbocharger, steel alloy for an air compressor or titanium alloy for a gas turbine. It is a combination of the centrifugal compressor impeller shape, its operating environment, its material and its manufacturing method that determines the impeller's structural integrity. See also Angular momentum Axial compressor Centrifugal force Centripetal force Coandă effect Computational fluid dynamics Compressibility Compressor map Coriolis force Darcy–Weisbach equation Enthalpy Entropy Euler equations (fluid dynamics) Finite element method Fluid dynamics Gas laws Gustaf de Laval Ideal gas law Kinematics Mach number Multiphase flow Navier–Stokes equations Real gas Reynolds-averaged Navier–Stokes equations Reynolds transport theorem Reynolds number Rossby number Three-dimensional losses and correlation in turbomachinery Turbulence Viscosity von Karman Institute for Fluid Dynamics References External links MIT Gas Turbine Laboratory (1948), First Marine Gas Turbine in Service. Journal of the American Society for Naval Engineers, 60: 66–86. A history of Chrysler turbine cars To find API codes, standards & publications To find ASME codes, standards & publications To find ASHRAE codes, standards & publications Glenn Research Center at NASA Hydrodynamics of Pumps, by Christopher Earls Brennen Ctrend website to calculate the head of centrifugal compressor online Gas compressors
Centrifugal compressor
[ "Chemistry" ]
8,630
[ "Gas compressors", "Turbomachinery" ]
319,506
https://en.wikipedia.org/wiki/Transient-voltage-suppression%20diode
A transient-voltage-suppression (TVS) diode, also transil, transorb or thyrector, is an electronic component used to protect electronics from voltage spikes induced on connected wires. Description The device operates by shunting excess current when the induced voltage exceeds the avalanche breakdown potential. It is a clamping device, suppressing all overvoltages above its breakdown voltage. It automatically resets when the overvoltage goes away, but absorbs much more of the transient energy internally than a similarly rated crowbar device. A transient-voltage-suppression diode may be either unidirectional or bidirectional. A unidirectional device operates as a rectifier in the forward direction like any other avalanche diode, but is made and tested to handle very large peak currents. A bidirectional transient-voltage-suppression diode can be represented by two mutually opposing avalanche diodes in series with one another and connected in parallel with the circuit to be protected. While this representation is schematically accurate, physically the devices are now manufactured as a single component. A transient-voltage-suppression diode can respond to over-voltages faster than other common over-voltage protection components such as varistors or gas discharge tubes. The actual clamping occurs in roughly one picosecond, but in a practical circuit the inductance of the wires leading to the device imposes a higher limit. This makes transient-voltage-suppression diodes useful for protection against very fast and often damaging voltage transients. These fast over-voltage transients are present on all distribution networks and can be caused by either internal or external events, such as lightning or motor arcing. Transient voltage suppressors will fail if they are subjected to voltages or conditions beyond those that the particular product was designed to accommodate. There are three key modes in which the TVS will fail: short, open, and degraded device. TVS diodes are sometimes referred to as transorbs, from the Vishay trademark TransZorb. Characterization A TVS diode is characterized by: Leakage current: the amount of current conducted when voltage applied is below the maximum reverse standoff voltage. Maximum reverse standoff voltage: the voltage below which no significant conduction occurs. Breakdown voltage: the voltage at which some specified and significant conduction occurs. Clamping voltage: the voltage at which the device will conduct its fully rated current (hundreds to thousands of amperes). Parasitic capacitance: The nonconducting diode behaves like a capacitor, which can distort and corrupt high-speed signals. Lower capacitance is generally preferred. Parasitic inductance: Because the actual over voltage switching is so fast, the package inductance is the limiting factor for response speed. Amount of energy it can absorb: Because the transients are so brief, all of the energy is initially stored internally as heat; a heat sink only affects the time to cool down afterwards. Thus, a high-energy TVS must be physically large. If this capacity is too small, the over voltage will possibly destroy the device and leave the circuit unprotected. See also Surge protector Trisil Zener diode References Further reading TVS/Zener Theory and Design Considerations; ON Semiconductor; 127 pages; 2005; HBD854/D. 
(Free PDF download) External links What are TVS diodes, Semtech Application Note SI96-01 Transient Suppression Devices and Principles, Littelfuse Application Note AN9768 Transil™ / Trisil™ Comparison, ST application note AN574 Transient Protection Solutions: Transil™ diode versus Varistor, ST application note AN1826 Diodes Electric power systems components Voltage stability
Transient-voltage-suppression diode
[ "Physics" ]
779
[ "Voltage", "Voltage stability", "Physical quantities" ]
319,515
https://en.wikipedia.org/wiki/Silicon%20controlled%20rectifier
A silicon controlled rectifier or semiconductor controlled rectifier is a four-layer solid-state current-controlling device. The name "silicon controlled rectifier" is General Electric's trade name for a type of thyristor. The principle of four-layer p–n–p–n switching was developed by Moll, Tanenbaum, Goldey, and Holonyak of Bell Laboratories in 1956. The practical demonstration of silicon controlled switching and detailed theoretical behavior of a device in agreement with the experimental results was presented by Dr Ian M. Mackintosh of Bell Laboratories in January 1958. The SCR was developed by a team of power engineers led by Gordon Hall and commercialized by Frank W. "Bill" Gutzwiller in 1957. Some sources define silicon-controlled rectifiers and thyristors as synonymous, while other sources define silicon-controlled rectifiers as a proper subset of the set of thyristors, the latter being devices with at least four layers of alternating n- and p-type material. According to Bill Gutzwiller, the terms "SCR" and "controlled rectifier" were earlier, and "thyristor" was applied later, as usage of the device spread internationally. SCRs are unidirectional devices (i.e. they can conduct current only in one direction), as opposed to TRIACs, which are bidirectional (i.e. charge carriers can flow through them in either direction). SCRs can normally be triggered only by a positive current going into the gate, as opposed to TRIACs, which can normally be triggered by either a positive or a negative current applied to the gate electrode. Modes of operation There are three modes of operation for an SCR depending upon the biasing given to it: Forward blocking mode (off state) Forward conduction mode (on state) Reverse blocking mode (off state) Forward blocking mode In this mode of operation, the anode (+, p-doped side) is given a positive voltage while the cathode (−, n-doped side) is given a negative voltage, keeping the gate at zero potential, i.e. disconnected. In this case junctions J1 and J3 are forward-biased, while J2 is reverse-biased, allowing only a small leakage current from the anode to the cathode. When the applied voltage reaches the breakover value for J2, J2 undergoes avalanche breakdown. At this breakover voltage J2 starts conducting, but below the breakover voltage J2 offers very high resistance to the current and the SCR is said to be in the off state. Forward conduction mode An SCR can be brought from blocking mode to conduction mode in two ways: either by increasing the voltage between anode and cathode beyond the breakover voltage, or by applying a positive pulse at the gate. Once the SCR starts conducting, no more gate voltage is required to maintain it in the ON state. The minimum current necessary to maintain the SCR in the ON state on removal of the gate voltage is called the latching current. There are two ways to turn it off: reduce the current through it below a minimum value called the holding current, or, with the gate turned off, short-circuit the anode and cathode momentarily with a push-button switch or transistor across the junction. Reverse blocking mode When a negative voltage is applied to the anode and a positive voltage to the cathode, the SCR is in reverse blocking mode, making J1 and J3 reverse-biased and J2 forward-biased. The device behaves as two diodes connected in series. A small leakage current flows. This is the reverse blocking mode.
If the reverse voltage is increased, then at a critical breakdown level, called the reverse breakdown voltage (VBR), an avalanche occurs at J1 and J3 and the reverse current increases rapidly. SCRs are available with reverse blocking capability, which adds to the forward voltage drop because of the need to have a long, low-doped P1 region. Usually, the reverse blocking voltage rating and forward blocking voltage rating are the same. The typical application for a reverse blocking SCR is in current-source inverters. An SCR incapable of blocking reverse voltage is known as an asymmetrical SCR, abbreviated ASCR. It typically has a reverse breakdown rating in the tens of volts. ASCRs are used where either a reverse conducting diode is applied in parallel (for example, in voltage-source inverters) or where reverse voltage would never occur (for example, in switching power supplies or DC traction choppers). Asymmetrical SCRs can be fabricated with a reverse conducting diode in the same package. These are known as RCTs, for reverse conducting thyristors. Thyristor turn-on methods forward-voltage triggering gate triggering dv/dt triggering thermal triggering light triggering Forward-voltage triggering occurs when the anode–cathode forward voltage is increased with the gate circuit open. This is known as avalanche breakdown, during which junction J2 breaks down. At sufficient voltages, the thyristor changes to its on state with low voltage drop and large forward current. In this case, J1 and J3 are already forward-biased. In order for gate triggering to occur, the thyristor should be in the forward blocking state, where the applied voltage is less than the breakdown voltage (otherwise forward-voltage triggering may occur). A single small positive voltage pulse can then be applied between the gate and the cathode. This supplies a single gate current pulse that switches the thyristor into its on state. In practice, this is the most common method used to trigger a thyristor. Thermal (temperature) triggering occurs because the width of the depletion region decreases as the temperature increases. When the SCR is near its breakover voltage, a very small increase in temperature causes the depletion region at J2 to collapse, which triggers the device. Simple SCR circuit A simple SCR circuit can be illustrated using an AC voltage source connected to an SCR with a resistive load. Without an applied current pulse to the gate of the SCR, the SCR remains in its forward blocking state. This makes the start of conduction of the SCR controllable. The delay angle α, which is the instant the gate current pulse is applied with respect to the instant of natural conduction (ωt = 0), controls the start of conduction. Once the SCR conducts, it does not turn off until the current through it, iS, becomes negative. iS stays zero until another gate current pulse is applied and the SCR once again begins conducting. Applications SCRs are mainly used in devices where the control of high power, possibly coupled with high voltage, is demanded. Their operation makes them suitable for use in medium- to high-voltage AC power control applications, such as lamp dimming, power regulators and motor control. SCRs and similar devices are used for rectification of high-power AC in high-voltage direct current power transmission. They are also used in the control of welding machines, mainly gas tungsten arc welding and similar processes. It is used as an electronic switch in various devices.
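As a back-of-the-envelope illustration of the delay-angle control just described, the sketch below computes the average DC output voltage of a single SCR feeding a resistive load from an AC source (half-wave phase control). The closed-form result Vavg = Vm(1 + cos α)/(2π) is standard for this idealized circuit; the supply voltage used is an assumed example value, not one from the article.

```python
import math

def half_wave_scr_average_voltage(v_peak: float, alpha_deg: float) -> float:
    """Average output voltage of an ideal SCR half-wave controlled rectifier
    with a purely resistive load.

    v_peak    -- peak of the AC supply voltage, volts
    alpha_deg -- delay (firing) angle alpha in degrees, measured from the
                 instant of natural conduction (omega*t = 0)
    """
    alpha = math.radians(alpha_deg)
    # Conduction from alpha to pi each cycle: Vavg = Vm * (1 + cos(alpha)) / (2*pi)
    return v_peak * (1.0 + math.cos(alpha)) / (2.0 * math.pi)

# Example: 230 V RMS supply (peak about 325 V), firing angle 90 degrees -> about 52 V average.
print(half_wave_scr_average_voltage(325.0, 90.0))
```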
Early solid-state pinball machines made use of SCRs to control lights, solenoids, and other functions electronically instead of mechanically, hence the name solid-state. Other applications include power switching circuits, controlled rectifiers, speed control of DC shunt motors, SCR crowbars, computer logic circuits, timing circuits, and inverters. Comparison with SCS A silicon-controlled switch (SCS) behaves nearly the same way as an SCR, but there are a few differences. Unlike an SCR, an SCS switches off when a positive voltage/input current is applied to an additional lead, the anode gate lead. Unlike an SCR, an SCS can also be triggered into conduction when a negative voltage/output current is applied to that same lead. SCSs are useful in practically all circuits that need a switch that turns on and off through two distinct control pulses. This includes power-switching circuits, logic circuits, lamp drivers, and counters. Compared to TRIACs A TRIAC resembles an SCR in that both act as electrically controlled switches. Unlike an SCR, a TRIAC can pass current in either direction. Thus, TRIACs are particularly useful for AC applications. TRIACs have three leads: a gate lead and two conducting leads, referred to as MT1 and MT2. If no current/voltage is applied to the gate lead, the TRIAC switches off. On the other hand, if the trigger voltage is applied to the gate lead, the TRIAC switches on. TRIACs are suitable for light-dimming circuits, phase-control circuits, AC power-switching circuits, AC motor control circuits, etc. See also High-voltage direct current Gate turn-off thyristor Insulated-gate bipolar transistor Integrated gate-commutated thyristor Voltage regulator Snubber Crowbar (circuit) DIAC BJT References Further reading External links SCR at AllAboutCircuits SCR Circuit Design Solid state switches Power electronics Rectifiers General Electric inventions 1957 introductions 1957 in technology 20th-century inventions
Silicon controlled rectifier
[ "Engineering" ]
1,934
[ "Electronic engineering", "Power electronics" ]
319,536
https://en.wikipedia.org/wiki/7400-series%20integrated%20circuits
The 7400 series is a popular logic family of transistor–transistor logic (TTL) integrated circuits (ICs). In 1964, Texas Instruments introduced the SN5400 series of logic chips, in a ceramic semiconductor package. A low-cost plastic package SN7400 series was introduced in 1966 which quickly gained over 50% of the logic chip market, and eventually becoming de facto standardized electronic components. Since the introduction of the original bipolar-transistor TTL parts, pin-compatible parts were introduced with such features as low power CMOS technology and lower supply voltages. Surface mount packages exist for several popular logic family functions. Overview The 7400 series contains hundreds of devices that provide everything from basic logic gates, flip-flops, and counters, to special purpose bus transceivers and arithmetic logic units (ALU). Specific functions are described in a list of 7400 series integrated circuits. Some TTL logic parts were made with an extended military-specification temperature range. These parts are prefixed with 54 instead of 74 in the part number. The less-common 64 and 84 prefixes on Texas Instruments parts indicated an industrial temperature range. Since the 1970s, new product families have been released to replace the original 7400 series. More recent TTL-compatible logic families were manufactured using CMOS or BiCMOS technology rather than TTL. Today, surface-mounted CMOS versions of the 7400 series are used in various applications in electronics and for glue logic in computers and industrial electronics. The original through-hole devices in dual in-line packages (DIP/DIL) were the mainstay of the industry for many decades. They are useful for rapid breadboard-prototyping and for education and remain available from most manufacturers. The fastest types and very low voltage versions are typically surface-mount only, however. The first part number in the series, the 7400, is a 14-pin IC containing four two-input NAND gates. Each gate uses two input pins and one output pin, with the remaining two pins being power (+5 V) and ground. This part was made in various through-hole and surface-mount packages, including flat pack and plastic/ceramic dual in-line. Additional characters in a part number identify the package and other variations. Unlike the older resistor-transistor logic integrated circuits, bipolar TTL gates were unsuitable to be used as analog devices, providing low gain, poor stability, and low input impedance. Special-purpose TTL devices were used to provide interface functions such as Schmitt triggers or monostable multivibrator timing circuits. Inverting gates could be cascaded as a ring oscillator, useful for purposes where high stability was not required. History Although the 7400 series was the first de facto industry standard TTL logic family (i.e. second-sourced by several semiconductor companies), there were earlier TTL logic families such as: Sylvania Universal High-level Logic in 1963 Motorola MC4000 MTTL National Semiconductor DM8000 Fairchild 9300 series Signetics 8200 and 8T00 The 7400 quad 2-input NAND gate was the first product in the series, introduced by Texas Instruments in a military grade metal flat package (5400W) in October 1964. The pin assignment of this early series differed from the de facto standard set by the later series in DIP packages (in particular, ground was connected to pin 11 and the power supply to pin 4, compared to pins 7 and 14 for DIP packages). 
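Because the NAND function of the 7400 is logically complete, the other elementary gates can be assembled from NAND gates alone. The short sketch below uses plain Python purely as notation for the standard constructions; it is not tied to any particular part or pinout.

```python
def nand(a, b):          # one gate of a 7400-style quad two-input NAND
    return not (a and b)

def not_(a):             # inverter: tie both inputs of one NAND together
    return nand(a, a)

def and_(a, b):          # AND = NAND followed by an inverter (two gates)
    return not_(nand(a, b))

def or_(a, b):           # OR = NAND of the two inverted inputs (three gates)
    return nand(not_(a), not_(b))

# Truth-table check of the constructions.
for a in (False, True):
    for b in (False, True):
        assert and_(a, b) == (a and b)
        assert or_(a, b) == (a or b)
```

A single 7400 package holds four NAND gates, so one package is enough to realise, for example, a single OR gate with one gate left over.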
The extremely popular commercial grade plastic DIP (7400N) followed in the third quarter of 1966. The 5400 and 7400 series were used in many popular minicomputers in the 1970s and early 1980s. Some models of the DEC PDP-series 'minis' used the 74181 ALU as the main computing element in the CPU. Other examples were the Data General Nova series and Hewlett-Packard 21MX, 1000, and 3000 series. In 1965, typical quantity-one pricing for the SN5400 (military grade, in ceramic welded flat-pack) was around 22 USD. As of 2007, individual commercial-grade chips in molded epoxy (plastic) packages can be purchased for approximately US$0.25 each, depending on the particular chip. Families 7400 series parts were constructed using bipolar junction transistors (BJT), forming what is referred to as transistor–transistor logic or TTL. Newer series, more or less compatible in function and logic level with the original parts, use CMOS technology or a combination of the two (BiCMOS). Originally the bipolar circuits provided higher speed but consumed more power than the competing 4000 series of CMOS devices. Bipolar devices are also limited to a fixed power-supply voltage, typically 5 V, while CMOS parts often support a range of supply voltages. Milspec-rated devices for use in extended temperature conditions are available as the 5400 series. Texas Instruments also manufactured radiation-hardened devices with the prefix RSN, and the company offered beam-lead bare dies for integration into hybrid circuits with a BL prefix designation. Regular-speed TTL parts were also available for a time in the 6400 series these had an extended industrial temperature range of −40 °C to +85 °C. While companies such as Mullard listed 6400-series compatible parts in 1970 data sheets, by 1973 there was no mention of the 6400 family in the Texas Instruments TTL Data Book. Texas Instruments brought back the 6400 series in 1989 for the SN64BCT540. The SN64BCTxxx series is still in production as of 2023. Some companies have also offered industrial extended temperature range variants using the regular 7400-series part numbers with a prefix or suffix to indicate the temperature grade. As integrated circuits in the 7400 series were made in different technologies, usually compatibility was retained with the original TTL logic levels and power-supply voltages. An integrated circuit made in CMOS is not a TTL chip, since it uses field-effect transistors (FETs) and not bipolar junction transistors (BJT), but similar part numbers are retained to identify similar logic functions and electrical (power and I/O voltage) compatibility in the different subfamilies. Over 40 different logic subfamilies use this standardized part number scheme. The headings in the following table are: Vcc power-supply voltage; tpd maximum gate delay; IOL maximum output current at low level; IOH maximum output current at high level; tpd, IOL, and IOH apply to most gates in a given family. Driver or buffer gates have higher output currents. Many parts in the CMOS HC, AC, AHC, and VHC families are also offered in "T" versions (HCT, ACT, AHCT and VHCT) which have input thresholds that are compatible with both TTL and 3.3 V CMOS signals. The non-T parts have conventional CMOS input thresholds, which are more restrictive than TTL thresholds. Typically, CMOS input thresholds require high-level signals to be at least 70% of Vcc and low-level signals to be at most 30% of Vcc. 
(TTL has the input high level above 2.0 V and the input low level below 0.8 V, so a TTL high-level signal could be in the forbidden middle range for 5 V CMOS.) The 74H family is the same basic design as the 7400 family with resistor values reduced. This reduced the typical propagation delay from 9 ns to 6 ns but increased the power consumption. The 74H family provided a number of unique devices for CPU designs in the 1970s. Many designers of military and aerospace equipment used this family over a long period and as they need exact replacements, this family is still produced by Lansdale Semiconductor. The 74S family, using Schottky circuitry, uses more power than the 74, but is faster. The 74LS family of ICs is a lower-power version of the 74S family, with slightly higher speed but lower power dissipation than the original 74 family; it became the most popular variant once it was widely available. Many 74LS ICs can be found in microcomputers and digital consumer electronics manufactured in the 1980s and early 1990s. The 74F family was introduced by Fairchild Semiconductor and adopted by other manufacturers; it is faster than the 74, 74LS and 74S families. Through the late 1980s and 1990s newer versions of this family were introduced to support the lower operating voltages used in newer CPU devices. Part numbering Part number schemes varied by manufacturer. The part numbers for 7400-series logic devices often use the following designators: Often first, a two or three letter prefix, denoting the manufacturer and flow class of the device. These codes are no longer closely associated with a single manufacturer, for example, Fairchild Semiconductor manufactures parts with MM and DM prefixes, and no prefixes. Examples: SN: Texas Instruments using a commercial processing SNV: Texas Instruments using military processing M: ST Microelectronics DM: National Semiconductor UT: Cobham PLC SG: Sylvania Two digits for temperature range. Examples: 54: military temperature range 64: short-lived historical series with intermediate "industrial" temperature range 74: commercial temperature range device Zero to four letters denoting the logic subfamily. Examples: zero letters: basic bipolar TTL LS: low power Schottky HCT: High-speed CMOS compatible with TTL Two or more arbitrarily assigned digits that identify the function of the device. There are hundreds of different devices in each family. Additional suffix letters and numbers may be appended to denote the package type, quality grade, or other information, but this varies widely by manufacturer. For example, "SN5400N" signifies that the part is a 7400-series IC probably manufactured by Texas Instruments ("SN" originally meaning "Semiconductor Network") using commercial processing, is of the military temperature rating ("54"), and is of the TTL family (absence of a family designator), its function being the quad 2-input NAND gate ("00") implemented in a plastic through-hole DIP package ("N"). Many logic families maintain a consistent use of the device numbers as an aid to designers. Often a part from a different 74x00 subfamily could be substituted ("drop-in replacement") in a circuit, with the same function and pin-out yet more appropriate characteristics for an application (perhaps speed or power consumption), which was a large part of the appeal of the 74C00 series over the competing CD4000B series, for example. But there are a few exceptions where incompatibilities (mainly in pin-out) across the subfamilies occurred, such as: some flat-pack devices (e.g. 
7400W) and surface-mount devices, some of the faster CMOS series (for example 74AC), a few low-power TTL devices (e.g. 74L86, 74L9 and 74L95) have a different pin-out than the regular (or even 74LS) series part. five versions of the 74x54 (4-wide AND-OR-INVERT gates IC), namely 7454(N), 7454W, 74H54, 74L54W and 74L54N/74LS54, are different from each other in pin-out and/or function, Second sources from Europe and Eastern Bloc Some manufacturers, such as Mullard and Siemens, had pin-compatible TTL parts, but with a completely different numbering scheme; however, data sheets identified the 7400-compatible number as an aid to recognition. At the time the 7400 series was being made, some European manufacturers (that traditionally followed the Pro Electron naming convention), such as Philips/Mullard, produced a series of TTL integrated circuits with part names beginning with FJ. Some examples of FJ series are: FJH101 (=7430) single 8-input NAND gate, FJH131 (=7400) quadruple 2-input NAND gate, FJH181 (=7454N or J) 2+2+2+2 input AND-OR-NOT gate. The Soviet Union started manufacturing TTL ICs with 7400-series pinout in the late 1960s and early 1970s, such as the K155ЛA3, which was pin-compatible with the 7400 part available in the United States, except for using a metric spacing of 2.5 mm between pins instead of the pin-to-pin spacing used in the west. Another peculiarity of the Soviet-made 7400 series was the packaging material used in the 1970s–1980s. Instead of the ubiquitous black resin, they had a brownish-green body colour with subtle swirl marks created during the moulding process. It was jokingly referred to in the Eastern Bloc electronics industry as the "elephant-dung packaging", due to its appearance. The Soviet integrated circuit designation is different from the Western series: the technology modifications were considered different series and were identified by different numbered prefixes – К155 series is equivalent to plain 74, К555 series is 74LS, К1533 is 74ALS, etc.; the function of the unit is described with a two-letter code followed by a number: the first letter represents the functional group – logical, triggers, counters, multiplexers, etc.; the second letter shows the functional subgroup, making the distinction between logical NAND and NOR, D- and JK-triggers, decimal and binary counters, etc.; the number distinguishes variants with different number of inputs or different number of elements within a die – ЛА1/ЛА2/ЛА3 (LA1/LA2/LA3) are 2 four-input / 1 eight-input / 4 two-input NAND elements respectively (equivalent to 7420/7430/7400). Before July 1974 the two letters from the functional description were inserted after the first digit of the series. Examples: К1ЛБ551 and К155ЛА1 (7420), К1ТМ552 and К155ТМ2 (7474) are the same ICs made at different times. Clones of the 7400 series were also made in other Eastern Bloc countries: Bulgaria (Mikroelektronika Botevgrad) used a designation somewhat similar to that of the Soviet Union, e.g. 1ЛБ00ШМ (1LB00ShM) for a 74LS00. Some of the two-letter functional groups were borrowed from the Soviet designation, while others differed. Unlike the Soviet scheme, the two or three digit number after the functional group matched the western counterpart. The series followed at the end (i.e. ШМ for LS). Only the LS series is known to have been manufactured in Bulgaria. Czechoslovakia (TESLA) used the 7400 numbering scheme with manufacturer prefix MH. Example: MH7400. 
Tesla also produced industrial grade (8400, −25 ° to 85 °C) and military grade (5400, −55 ° to 125 °C) ones. Poland (Unitra CEMI) used the 7400 numbering scheme with manufacturer prefixes UCA for the 5400 and 6400 series, as well as UCY for the 7400 series. Examples: UCA6400, UCY7400. Note that ICs with the prefix MCY74 correspond to the 4000 series (e.g. MCY74002 corresponds to 4002 and not to 7402). Hungary (Tungsram, later Mikroelektronikai Vállalat / MEV) also used the 7400 numbering scheme, but with manufacturer suffix – 7400 is marked as 7400APC. Romania (I.P.R.S.) used a trimmed 7400 numbering with the manufacturer prefix CDB (example: CDB4123E corresponds to 74123) for the 74 and 74H series, where the suffix H indicated the 74H series. For the later 74LS series, the standard numbering was used. East Germany (HFO) also used trimmed 7400 numbering without manufacturer prefix or suffix. The prefix D (or E) designates digital IC, and not the manufacturer. Example: D174 is 7474. 74LS clones were designated by the prefix DL; e.g. DL000 = 74LS00. In later years East German made clones were also available with standard 74* numbers, usually for export. A number of different technologies were available from the Soviet Union, Czechoslovakia, Poland, and East Germany. The 8400 series in the table below indicates an industrial temperature range from −25 °C to +85 °C (as opposed to −40 °C to +85 °C for the 6400 series). Around 1990 the production of standard logic ceased in all Eastern European countries except the Soviet Union and later Russia and Belarus. As of 2016, the series 133, К155, 1533, КР1533, 1554, 1594, and 5584 were in production at "Integral" in Belarus, as well as the series 130 and 530 at "NZPP-KBR", 134 and 5574 at "VZPP", 533 at "Svetlana", 1564, К1564, КР1564 at "NZPP", 1564, К1564 at "Voshod", 1564 at "Exiton", and 133, 530, 533, 1533 at "Mikron" in Russia. The Russian company Angstrem manufactures 54HC circuits as the 5514БЦ1 series, 54AC as the 5514БЦ2 series, and 54LVC as the 5524БЦ2 series. See also Electronic component Logic gate, Logic family List of 7400-series integrated circuits 4000-series integrated circuits List of 4000-series integrated circuits Linear integrated circuit List of linear integrated circuits List of LM-series integrated circuits Push–pull output Open-collector/drain output Three-state output Schmitt trigger input Programmable logic device Pin compatibility References Further reading Books 50 Circuits Using 7400 Series IC's; 1st Ed; R.N. Soar; Bernard Babani Publishing; 76 pages; 1979; . (archive) TTL Cookbook; 1st Ed; Don Lancaster; Sams Publishing; 412 pages; 1974; . (archive) Designing with TTL Integrated Circuits; 1st Ed; Robert Morris, John Miller; Texas Instruments and McGraw-Hill; 322 pages; 1971; . (archive) App Notes Understanding and Interpreting Standard-Logic Data Sheets; Stephen Nolan, Jose Soltero, Shreyas Rao; Texas Instruments; 60 pages; 2016. Comparison of 74HC / 74S / 74LS / 74ALS Logic; Fairchild; 6 pages, 1983. Interfacing to 74HC Logic; Fairchild; 10 pages; 1998. 74AHC / 74AHCT Designer's Guide; TI; 53pages; 1998. Compares 74HC / 74AHC / 74AC (CMOS I/O) and 74HCT / 74AHCT / 74ACT (TTL I/O). 
Fairchild Semiconductor / ON Semiconductor Historical Data Books: TTL (1978, 752 pages), FAST (1981, 349 pages) Logic Selection Guide (2008, 12 pages) Nexperia / NXP Semiconductor Logic Selection Guide (2020, 234 pages) Logic Application Handbook Design Engineer's Guide' (2021, 157 pages) ''Logic Translators''' (2021, 62 pages) Texas Instruments / National Semiconductor Historical Catalog: (1967, 375 pages) Historical Databooks: TTL Vol1 (1984, 339 pages), TTL Vol2 (1985, 1402 pages), TTL Vol3 (1984, 793 pages), TTL Vol4 (1986, 445 pages) Digital Logic Pocket Data Book (2007, 794 pages), Logic Reference Guide (2004, 8 pages), Logic Selection Guide (1998, 215 pages) Little Logic Guide (2018, 25 pages), Little Logic Selection Guide (2004, 24 pages) Toshiba General-Purpose Logic ICs (2012, 55 pages) External links Understanding 7400-series digital logic ICs - Nuts and Volts magazine Thorough list of 7400-series ICs - Electronics Club Integrated circuits Digital electronics 1964 introductions
7400-series integrated circuits
[ "Technology", "Engineering" ]
4,276
[ "Electronic engineering", "Computer engineering", "Integrated circuits", "Digital electronics" ]
319,545
https://en.wikipedia.org/wiki/Messier%2014
Messier 14 (also known as M14 or NGC 6402) is a globular cluster of stars in the constellation Ophiuchus. It was discovered by Charles Messier in 1764. At a distance of about 30,000 light-years, M14 contains several hundred thousand stars. At an apparent magnitude of +7.6 it can be easily observed with binoculars. Medium-sized telescopes will show some hint of the individual stars, of which the brightest is of magnitude +14. The total luminosity of M14 is on the order of 400,000 times that of the Sun, corresponding to an absolute magnitude of -9.12. The shape of the cluster is decidedly elongated. M14 is about 100 light-years across. A total of 70 variable stars are known in M14, many of the W Virginis variety common in globular clusters. In 1938, a nova appeared, although this was not discovered until photographic plates from that time were studied in 1964. It is estimated that the nova reached a maximum brightness of magnitude +9.2, over five times brighter than the brightest 'normal' star in the cluster. Slightly over 3° southwest of M14 lies the faint globular cluster NGC 6366. Gallery See also List of Messier objects References External links SEDS Messier pages on M14 M14, Galactic Globular Clusters Database page Discoveries by Charles Messier
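The quoted absolute magnitude can be cross-checked against the luminosity figure above using M = Msun - 2.5·log10(L/Lsun), taking the Sun's absolute visual magnitude as roughly +4.83. The snippet below is a back-of-the-envelope check with rounded constants, not a value from the article's sources.

```python
import math

M_SUN = 4.83                 # approximate absolute visual magnitude of the Sun
luminosity_ratio = 400_000   # L / L_sun quoted above for M14

M = M_SUN - 2.5 * math.log10(luminosity_ratio)
print(round(M, 2))           # about -9.2, consistent with the -9.12 quoted above
```

The small difference comes entirely from the rounded inputs.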
Messier 14
[ "Astronomy" ]
312
[ "Ophiuchus", "Constellations" ]
319,560
https://en.wikipedia.org/wiki/Instrument%20amplifier
An instrument amplifier is an electronic amplifier that converts the often barely audible or purely electronic signal of a musical instrument into a larger electronic signal to feed to a loudspeaker. An instrument amplifier is used with musical instruments such as an electric guitar, an electric bass, electric organ, electric piano, synthesizers and drum machine to convert the signal from the pickup (with guitars and other string instruments and some keyboards) or other sound source (e.g, a synthesizer's signal) into an electronic signal that has enough power, produced by a power amplifier, to drive one or more loudspeaker that can be heard by the performers and audience. Combination (combo) amplifiers include a preamplifier, a power amplifier, tone controls, and one or more speakers in a cabinet, a housing or box usually made of wood. Instrument amplifiers for some instruments are also available without an internal speaker; these amplifiers, called heads, must plug into one or more separate speaker cabinets. Instrument amplifiers also have features that let the performer modify the signal's tone, such as changing the equalization (adjusting bass and treble tone) or adding electronic effects such as intentional distortion or overdrive, reverb or chorus effect. Instrument amplifiers are available for specific instruments, including the electric guitar, electric bass, electric and electronic keyboards, and acoustic instruments such as the mandolin and banjo. Some amplifiers are designed for specific styles of music, such as the Fender tweed guitar amplifiers, such as the Fender Bassman used by blues and country music musicians, and the Marshall amplifiers used by hard rock and heavy metal bands. Unlike home hi-fi amplifiers or public Address systems, which are designed to accurately reproduce the source sound signals with as little distortion as possible, instrument amplifiers are often designed to add additional tonal coloration to the original signal, and in many cases intentionally add some degree of distortion. Types Guitar amplifiers A guitar amplifier amplifies the electrical signal of an electric guitar so that it can drive a loudspeaker at sufficient volume for the performer and audience to hear. Most guitar amplifiers can also modify the instrument's sound with controls that emphasize or de-emphasize certain frequencies and add electronic effects. String vibrations are sensed by a pickup. For electric guitars, strings are made of metal, and the pickup works by electromagnetic induction. Standard amps Standard amplifiers, such as the Fender tweed-style amps (e.g., the Fender Bassman) and Gibson amps, are often used by traditional rock, blues, and country musicians who wish to create a vintage 1950s-style sound. They are used by electric guitarists, pedal steel guitar players, and blues harmonica ("harp") players. Combo amplifiers such as the Fender Super Reverb have powerful, loud tube amplifiers, four 10" speakers, and they often have built-in reverb and vibrato effects units. Smaller guitar amps are also available, which have fewer speakers (some have only one speaker) and lighter, less powerful amplifier units. Smaller guitar amps are easier to transport to gigs and sound recording sessions. Smaller amps are widely used in small venue shows (nightclubs) and in recordings, because players can obtain the tone they want without having to have an excessively loud volume. 
One of the challenges with the large, powerful 4x10 Fender Bassman-type amps is that to get the tone a player wants, they have to turn up the amp to a loud volume. These amps are designed to produce a variety of sounds ranging from a clean, warm sound (when used in country and soft rock) to a growling, natural overdrive when the volume is set near its maximum (when used for blues, rockabilly, psychobilly, and roots rock). These amplifiers usually have a sharp treble roll-off at 5 kHz to reduce the extreme high frequencies, and a bass roll-off at 60–100 Hz to reduce unwanted boominess. The nickname tweed refers to the lacquered beige-light brown fabric covering used on these amplifiers. The smallest combo amplifiers, which are mainly used for individual practice and warm-up purposes, may have only a single 8" or 10" speaker. Some harmonica players use these small combo amplifiers for concert performances, though, because it is easier to create natural overdrive with these lower-powered amplifiers. Larger combo amplifiers, with one 12 inch speaker or two or four 10 or 12 inch speakers, are used for club performances and larger venues. For large concert venues such as stadiums, performers may also use an amplifier head with several separate speaker cabinets (which usually contain two or four 12" speakers). Hard rock and heavy metal Electric guitar amplifiers designed for heavy metal are used to add an aggressive drive, intensity, and edge to the guitar sound with distortion effects, preamplification boost controls (sometimes with multiple stages of preamps), and tone filters. While many of the most expensive, high-end models use 1950s-style tube amplifiers (even in the 2000s), there are also many models that use transistor amplifiers, or a mixture of the two technologies (i.e., a tube preamplifier with a transistor power amplifier). Amplifiers of this type, such as Marshall amplifiers, are used in a range of the louder, heavier genres of rock, including hard rock, heavy metal, and hardcore punk. This type of amplifier is available in a range of formats, ranging from small, self-contained combo amplifiers for rehearsal and warm-ups to heavy heads that are used with separate speaker cabinets—colloquially referred to as a stack. In the late 1960s and early 1970s, public address systems at rock concerts were used mainly for the vocals. As a result, to get a loud electric guitar sound, early heavy metal and rock-blues bands often used stacks of 4x12" Marshall speaker cabinets on the stage. In 1969, Jimi Hendrix used four stacks to create a powerful lead sound, and in the early 1970s the band Blue Öyster Cult used an entire wall of Marshall amplifiers to create a roaring wall of sound that projected massive volume and sonic power. In the 1980s, metal bands such as Slayer and Yngwie Malmsteen also used walls of over 20 Marshall cabinets. However, by the 1980s and 1990s, most of the sound at live concerts was produced by the sound reinforcement system rather than the onstage guitar amplifiers, so most of these cabinets were not connected to an amplifier. Instead, walls of speaker cabinets were used for aesthetic reasons. Amplifiers for harder, heavier genres also often use valve amplifiers (known as tube amplifiers in North America). Valve amplifiers are perceived by musicians and fans to have a warmer tone than those of transistor amps, particularly when overdriven (turned up to the level that the amplifier starts to clip or shear off the waveforms).
Instead of abruptly clipping off the signal at cut-off and saturation levels, the signal is rounded off more smoothly. Vacuum tubes also exhibit different harmonic effects than transistors. In contrast to the tweed-style amplifiers, which use speakers in an open-backed cabinet, companies such as Marshall tend to use 12" speakers in a closed-back cabinet. These amplifiers usually allow users to switch between clean and distorted tones (or a rhythm guitar-style crunch tone and a sustained "lead" tone) with a foot-operated switch. Bass Bass amplifiers are designed for bass guitars or more rarely, for upright bass. They differ from amplifiers for the electric guitar in several respects, with extended low-frequency response, and tone controls optimized for the needs of bass players. Higher-cost bass amplifiers may include built-in bass effects units, such as audio compressor or limiter features, to avoid unwanted distorting at high volume levels and potential damage to speakers; equalizers; and bass overdrive. Bass amps may provide an XLR DI output for plugging the bass amp signal directly into a mixing board or PA system. Larger, more powerful bass amplifiers (300 or more watts) are often provided with internal or external metal heat sinks and/or fans to help keep the components cool. Speaker cabinets designed for bass usually use larger loudspeakers (or more loudspeakers, such as four ten-inch speakers) than the cabinets used for other instruments, so that they can move the larger amounts of air needed to reproduce low frequencies. Bass players have to use more powerful amplifiers than the electric guitarists, because deep bass frequencies take more power to amplify. While the largest speakers commonly used for regular electric guitar have twelve-inch cones, electric bass speaker cabinets often use 15" speakers. Bass players who play styles of music that require an extended low-range response, such as death metal, sometimes use speaker cabinets with 18" speakers or add a large subwoofer cabinet to their rig. Speakers for bass instrument amplification tend to be heavier-duty than those for regular electric guitar, and the speaker cabinets are typically more rigidly constructed and heavily braced, to prevent unwanted buzzes and rattles. Bass cabinets often include bass reflex ports, vents or openings in the cabinet, which improve the bass response and low-end, especially at high volumes. Keyboard A keyboard amplifier, used for the stage piano, synthesizer, clonewheel organs and similar instruments, is distinct from other types of amplification systems due to the particular challenges associated with keyboards; namely, to provide solid low-frequency sound reproduction and crisp high-frequency sound reproduction. It is typically a combination amplifier that contains a two, three, or four-channel mixer, a pre-amplifier for each channel, equalization controls, a power amplifier, a speaker, and a horn, all in a single cabinet. Notable exceptions include keyboard amplifiers for specific keyboard types. The vintage Leslie speaker cabinet and modern recreations, which are generally used for Hammond organs, use a tube amplifier that is often turned up to add a warm, growling overdrive. Some electric pianos have built-in amplifiers and speakers, in addition to outputs for external amplification. 
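Returning to the overdrive behaviour described above, the contrast between abrupt transistor-style clipping and the smoother, valve-like rounding of the waveform can be sketched numerically. The model below is deliberately crude, a hard clip versus a tanh soft clip with invented levels, and is not a description of any real amplifier circuit.

```python
import math

def hard_clip(x, limit=0.5):
    """Abrupt clipping: the waveform is simply cut off at the supply rails."""
    return max(-limit, min(limit, x))

def soft_clip(x, limit=0.5):
    """Smoother, valve-like limiting: peaks are rounded off gradually."""
    return limit * math.tanh(x / limit)

# Drive a sine wave well past the clipping level and compare a few samples.
for i in range(9):
    x = 1.5 * math.sin(2 * math.pi * i / 16)   # overdriven input sample
    print(f"{x:6.2f}  hard={hard_clip(x):5.2f}  soft={soft_clip(x):5.2f}")
```

The hard-clipped waveform has flat tops and correspondingly strong high-order harmonics, while the tanh curve rounds the same peaks off, one simplified way of picturing why overdriven valve and transistor stages are described as sounding different.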
Acoustic amplifiers These amplifiers are intended for acoustic instruments such as violin ("fiddle"), mandolin, harp, and acoustic guitar—especially for the way musicians play these instruments in quieter genres such as folk and bluegrass. They are similar to keyboard amplifiers, in that they have a relatively flat frequency response and avoid tonal coloration. To produce this relatively clean sound, these amplifiers often have very powerful amplifiers (up to 800 watts RMS), to provide additional headroom and prevent unwanted distortion. Since an 800-watt amplifier built with standard Class AB technology would be heavy, some acoustic amplifier manufacturers use lightweight Class D "switching" amplifiers. Acoustic amplifier designs strive to produce a clean, transparent, acoustic sound that does not—except for reverb and other effects—alter the natural instrument sound, other than to make it louder. Amplifiers often come with a simple mixer to blend signals from a pickup and microphone. Since the early 2000s, it is increasingly common for acoustic amplifiers to provide digital effects, such as reverb and compression. Some also contain feedback-suppressing devices, such as notch filters or parametric equalizers. Acoustic guitars do not usually have a built-in pickup or microphone, at least with entry-level and beginner instruments. Some acoustic guitars have a small condenser microphone mounted inside the body, which is designed to convert acoustic vibrations into an electrical signal, but pickups usually work by direct contact with the strings (replacing the guitar's bridge) or with the guitar's body, rather than sensing sound with a membrane like a general-purpose microphone. Acoustic guitars may also use a piezoelectric pickup, which converts the vibrations of the instrument into an electronic signal. More rarely, a magnetic pickup may be mounted in the sound hole of an acoustic guitar; while magnetic pickups do not have the same acoustic tone that microphones and piezo pickups can produce, magnetic pickups are more resistant to acoustic feedback. Roles Instrument amplifiers have a different purpose than 'Hi-Fi' (high fidelity) stereo amplifiers in radios and home stereo systems. Hi-fi home stereo amplifiers strive to accurately reproduce signals from pre-recorded music, with as little harmonic distortion as possible. In contrast, instrument amplifiers are designed to add tonal coloration to the original signal or to emphasize certain frequencies. For electric instruments such as electric guitar, the amplifier helps to create the instrument's tone by boosting the input signal gain and distorting the signal, and by emphasizing frequencies deemed desirable (e.g., low frequencies) and de-emphasizing frequencies deemed undesirable (e.g., very high frequencies). Size and power rating In the 1960s and 1970s, large, heavy, high-output power amplifiers were preferred for instrument amplifiers, especially for large concerts, because public address systems were generally only used to amplify the vocals. Moreover, in the 1960s, PA systems typically did not use monitor speaker systems to amplify the music for the onstage musicians. Instead, the musicians were expected to have instrument amplifiers that were powerful enough to provide amplification for the stage and audience. In late 1960s and early 1970s rock concerts, bands often used large stacks of speaker cabinets powered by heavy tube amplifiers such as the Super Valve Technology (SVT) amplifier, which was often used with eight 10" speakers.
However, over subsequent decades, PA systems substantially improved, and used different approaches, such as horn-loaded bass bins (in the 1980s) and subwoofers (1990s and 2000s) to amplify bass frequencies. As well, in the 1980s and 1990s, monitor systems substantially improved, which helped sound engineers provide onstage musicians with a better reproduction of their instruments' sound. As a result of improvements to PA and monitor systems, musicians in the 2000s no longer need huge, powerful amplifier systems. A small combo amplifier patched into the PA suffices. In the 2000s, virtually all sound reaching the audience in large venues comes from the PA system. Onstage instrument amplifiers are more likely to be at a low volume, because high volume levels onstage make it harder for the sound engineer to control the sound mix. As a result, in many large venues much of the onstage sound reaching the musicians now comes from in-ear monitors, not from the instrument amplifiers. While stacks of huge speaker cabinets and amplifiers are still used in concerts (especially in heavy metal), this is often mainly for aesthetics or to create a more authentic tone. The switch to smaller instrument amplifiers makes it easier for musicians to transport their equipment to performances. As well, it makes concert stage management easier at large clubs and festivals where several bands are performing in sequence, because the bands can be moved on and off the stage more quickly. Amplifier technology Instrument amplifiers may be based on thermionic (tube or valve) or solid state (transistor) technology. Tube amplifiers Vacuum tubes were the dominant active electronic components in amplifiers from the 1930s through the early 1970s, and tube amplifiers remain preferred by many musicians and producers. Some musicians feel that tube amplifiers produce a warmer or more natural sound than solid state units, and a more pleasing overdrive sound when overdriven. However, these subjective assessments of the attributes of tube amplifiers' sound qualities are the subject of ongoing debate. Tube amps are more fragile, require more maintenance, and are usually more expensive than solid-state amps. Tube amplifiers produce more heat than solid-state amplifiers, but few manufacturers of these units include cooling fans in the chassis. While tube amplifiers do need to attain a proper operating temperature, if the temperature goes above this operating temperature, it may shorten the tubes' lifespan and lead to tonal inconsistencies. Solid-state amplifiers By the 1960s and 1970s, semiconductor transistor-based amplifiers began to become more popular because they are less expensive, more resistant to bumps during transportation, lighter-weight, and require less maintenance. In some cases, tube and solid-state technologies are used together in amplifiers. A common setup is the use of a tube preamplifier with a solid-state power amplifier. There are also an increasing range of products that use digital signal processing and digital modeling technology to simulate many different combinations of amp and cabinets. The output transistors of solid-state amplifiers can be passively cooled by using metal fins called heatsinks to radiate away the heat. For high-wattage amplifiers (over 800 watts), a fan is often used to move air across internal heatsinks. Hybrid The most common hybrid amp design is to use a tube preamp with a solid-state power amplifier. 
This gives users the pleasing preamp and overdrive tone of a tube amp with the lowered cost, maintenance and weight of a solid-state power amp. See also Amplifier Electronic amplifier Guitar amplifier Guitar speaker Guitar speaker cabinet Isolation cabinet (guitar) Valve sound Bass instrument amplification Effects unit Distortion (guitar) Power attenuator (guitar) Sound reinforcement system Tone stack References External links Duncan's amp pages: information about valve (tube) guitar amplifiers List of books about guitar amplifiers and guitar amplifier tone Tons of Tones, a website for guitar amplifier modelling on digital multi-effect units Rock music instruments Blues instruments Sound reinforcement system Consumer electronics Loudspeakers
Instrument amplifier
[ "Engineering" ]
3,510
[ "Sound reinforcement system", "Audio engineering" ]
319,610
https://en.wikipedia.org/wiki/Biological%20life%20cycle
In biology, a biological life cycle (or just life cycle when the biological context is clear) is a series of stages of the life of an organism, that begins as a zygote, often in an egg, and concludes as an adult that reproduces, producing an offspring in the form of a new zygote which then itself goes through the same series of stages, the process repeating in a cyclic fashion. "The concept is closely related to those of the life history, development and ontogeny, but differs from them in stressing renewal." Transitions of form may involve growth, asexual reproduction, or sexual reproduction. In some organisms, different "generations" of the species succeed each other during the life cycle. For plants and many algae, there are two multicellular stages, and the life cycle is referred to as alternation of generations. The term life history is often used, particularly for organisms such as the red algae which have three multicellular stages (or more), rather than two. Life cycles that include sexual reproduction involve alternating haploid (n) and diploid (2n) stages, i.e., a change of ploidy is involved. To return from a diploid stage to a haploid stage, meiosis must occur. In regard to changes of ploidy, there are three types of cycles: haplontic life cycle — the haploid stage is multicellular and the diploid stage is a single cell, meiosis is "zygotic". diplontic life cycle — the diploid stage is multicellular and haploid gametes are formed, meiosis is "gametic". haplodiplontic life cycle (also referred to as diplohaplontic, diplobiontic, or dibiontic life cycle) — multicellular diploid and haploid stages occur, meiosis is "sporic". The cycles differ in when mitosis (growth) occurs. Zygotic meiosis and gametic meiosis have one mitotic stage: mitosis occurs during the n phase in zygotic meiosis and during the 2n phase in gametic meiosis. Therefore, zygotic and gametic meiosis are collectively termed "haplobiontic" (single mitotic phase, not to be confused with haplontic). Sporic meiosis, on the other hand, has mitosis in two stages, both the diploid and haploid stages, termed "diplobiontic" (not to be confused with diplontic). Discovery The study of reproduction and development in organisms was carried out by many botanists and zoologists. Wilhelm Hofmeister demonstrated that alternation of generations is a feature that unites plants, and published this result in 1851 (see plant sexuality). Some terms (haplobiont and diplobiont) used for the description of life cycles were proposed initially for algae by Nils Svedelius, and then became used for other organisms. Other terms (autogamy and gamontogamy) used in protist life cycles were introduced by Karl Gottlieb Grell. The description of the complex life cycles of various organisms contributed to the disproof of the ideas of spontaneous generation in the 1840s and 1850s. Haplontic life cycle A zygotic meiosis is a meiosis of a zygote immediately after karyogamy, which is the fusion of two cell nuclei. This way, the organism ends its diploid phase and produces several haploid cells. These cells divide mitotically to form either larger, multicellular individuals, or more haploid cells. Two opposite types of gametes (e.g., male and female) from these individuals or cells fuse to become a zygote. In the whole cycle, zygotes are the only diploid cell; mitosis occurs only in the haploid phase. The individuals or cells as a result of mitosis are haplonts, hence this life cycle is also called haplontic life cycle. 
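The three ploidy patterns introduced above differ in which phase grows by mitosis and in what the meiosis is called; the small sketch below simply restates that classification in compact form (the data structure is an illustrative summary of the article text, not an external source).

```python
# Where mitosis (and hence multicellular growth) occurs in each cycle type.
# n = haploid phase, 2n = diploid phase; meiosis labels follow the article
# (zygotic / gametic / sporic).
LIFE_CYCLES = {
    "haplontic":      {"meiosis": "zygotic", "mitosis_in": ["n"]},
    "diplontic":      {"meiosis": "gametic", "mitosis_in": ["2n"]},
    "haplodiplontic": {"meiosis": "sporic",  "mitosis_in": ["n", "2n"]},
}

for name, cycle in LIFE_CYCLES.items():
    phases = " and ".join(cycle["mitosis_in"])
    print(f"{name:15s} meiosis: {cycle['meiosis']:8s} mitosis in: {phases}")
```

Examples of organisms following each pattern are listed in the sections that follow.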
Haplonts are: In archaeplastidans: some green algae (e.g., Chlamydomonas, Zygnema, Chara) In stramenopiles: some golden algae In alveolates: many dinoflagellates, e.g., Ceratium, Gymnodinium, some apicomplexans (e.g., Plasmodium) In rhizarians: some euglyphids, ascetosporeans In excavates: some parabasalids In amoebozoans: Dictyostelium In opisthokonts: most fungi (some chytrids, zygomycetes, some ascomycetes, basidiomycetes) Diplontic life cycle In gametic meiosis, instead of immediately dividing meiotically to produce haploid cells, the zygote divides mitotically to produce a multicellular diploid individual or a group of more unicellular diploid cells. Cells from the diploid individuals then undergo meiosis to produce haploid cells or gametes. Haploid cells may divide again (by mitosis) to form more haploid cells, as in many yeasts, but the haploid phase is not the predominant life cycle phase. In most diplonts, mitosis occurs only in the diploid phase, i.e. gametes usually form quickly and fuse to produce diploid zygotes. In the whole cycle, gametes are usually the only haploid cells, and mitosis usually occurs only in the diploid phase. The diploid multicellular individual is a diplont, hence a gametic meiosis is also called a diplontic life cycle. Diplonts are: In archaeplastidans: some green algae (e.g., Cladophora glomerata, Acetabularia) In stramenopiles: some brown algae (the Fucales, however, their life cycle can also be interpreted as strongly heteromorphic-diplohaplontic, with a highly reduced gametophyte phase, as in the flowering plants), some xanthophytes (e.g., Vaucheria), most diatoms, some oomycetes (e.g., Saprolegnia, Plasmopara viticola), opalines, some "heliozoans" (e.g., Actinophrys, Actinosphaerium) In alveolates: ciliates In excavates: some parabasalids In opisthokonts: animals, some fungi (e.g., some ascomycetes) Haplodiplontic life cycle In sporic meiosis (also commonly known as intermediary meiosis), the zygote divides mitotically to produce a multicellular diploid sporophyte. The sporophyte creates spores via meiosis which also then divide mitotically producing haploid individuals called gametophytes. The gametophytes produce gametes via mitosis. In some plants the gametophyte is not only small-sized but also short-lived; in other plants and many algae, the gametophyte is the "dominant" stage of the life cycle. Haplodiplonts are: In archaeplastidans: red algae (which have two sporophyte generations), some green algae (e.g., Ulva), land plants In stramenopiles: most brown algae In rhizarians: many foraminiferans, plasmodiophoromycetes In amoebozoa: myxogastrids In opisthokonts: some fungi (some chytrids, some ascomycetes like the brewer's yeast) Other eukaryotes: haptophytes Some animals have a sex-determination system called haplodiploid, but this is not related to the haplodiplontic life cycle. Vegetative meiosis Some red algae (such as Bonnemaisonia and Lemanea) and green algae (such as Prasiola) have vegetative meiosis, also called somatic meiosis, which is a rare phenomenon. Vegetative meiosis can occur in haplodiplontic and also in diplontic life cycles. The gametophytes remain attached to and part of the sporophyte. Vegetative (non-reproductive) diploid cells undergo meiosis, generating vegetative haploid cells. These undergo many mitosis, and produces gametes. A different phenomenon, called vegetative diploidization, a type of apomixis, occurs in some brown algae (e.g., Elachista stellaris). 
Cells in a haploid part of the plant spontaneously duplicate their chromosomes to produce diploid tissue. Parasitic life cycle Parasites depend on the exploitation of one or more hosts. Those that must infect more than one host species to complete their life cycles are said to have complex or indirect life cycles. Dirofilaria immitis, or the heartworm, has an indirect life cycle, for example. The microfilariae must first be ingested by a female mosquito, where it develops into the infective larval stage. The mosquito then bites an animal and transmits the infective larvae into the animal, where they migrate to the pulmonary artery and mature into adults. Those parasites that infect a single species have direct life cycles. An example of a parasite with a direct life cycle is Ancylostoma caninum, or the canine hookworm. They develop to the infective larval stage in the environment, then penetrate the skin of the dog directly and mature to adults in the small intestine. If a parasite has to infect a given host in order to complete its life cycle, then it is said to be an obligate parasite of that host; sometimes, infection is facultative—the parasite can survive and complete its life cycle without infecting that particular host species. Parasites sometimes infect hosts in which they cannot complete their life cycles; these are accidental hosts. A host in which parasites reproduce sexually is known as the definitive, final or primary host. In intermediate hosts, parasites either do not reproduce or do so asexually, but the parasite always develops to a new stage in this type of host. In some cases a parasite will infect a host, but not undergo any development, these hosts are known as paratenic or transport hosts. The paratenic host can be useful in raising the chance that the parasite will be transmitted to the definitive host. For example, the cat lungworm (Aelurostrongylus abstrusus) uses a slug or snail as an intermediate host; the first stage larva enters the mollusk and develops to the third stage larva, which is infectious to the definitive host—the cat. If a mouse eats the slug, the third stage larva will enter the mouse's tissues, but will not undergo any development. Evolution The primitive type of life cycle probably had haploid individuals with asexual reproduction. Bacteria and archaea exhibit a life cycle like this, and some eukaryotes apparently do too (e.g., Cryptophyta, Choanoflagellata, many Euglenozoa, many Amoebozoa, some red algae, some green algae, the imperfect fungi, some rotifers and many other groups, not necessarily haploid). However, these eukaryotes probably are not primitively asexual, but have lost their sexual reproduction, or it just was not observed yet. Many eukaryotes (including animals and plants) exhibit asexual reproduction, which may be facultative or obligate in the life cycle, with sexual reproduction occurring more or less frequently. Individual organisms participating in a biological life cycle ordinarily age and die, while cells from these organisms that connect successive life cycle generations (germ line cells and their descendants) are potentially immortal. The basis for this difference is a fundamental problem in biology. The Russian biologist and historian Zhores A. Medvedev considered that the accuracy of genome replicative and other synthetic systems alone cannot explain the immortality of germlines. 
Rather Medvedev thought that known features of the biochemistry and genetics of sexual reproduction indicate the presence of unique information maintenance and restoration processes at the gametogenesis stage of the biological life cycle. In particular, Medvedev considered that the most important opportunities for information maintenance of germ cells are created by recombination during meiosis and DNA repair; he saw these as processes within the germ line cells that were capable of restoring the integrity of DNA and chromosomes from the types of damage that cause irreversible ageing in non-germ line cells, e.g. somatic cells. The ancestry of each present day cell presumably traces back, in an unbroken lineage for over 3 billion years to the origin of life. It is not actually cells that are immortal but multi-generational cell lineages. The immortality of a cell lineage depends on the maintenance of cell division potential. This potential may be lost in any particular lineage because of cell damage, terminal differentiation as occurs in nerve cells, or programmed cell death (apoptosis) during development. Maintenance of cell division potential of the biological life cycle over successive generations depends on the avoidance and the accurate repair of cellular damage, particularly DNA damage. In sexual organisms, continuity of the germline over successive cell cycle generations depends on the effectiveness of processes for avoiding DNA damage and repairing those DNA damages that do occur. Sexual processes in eukaryotes provide an opportunity for effective repair of DNA damages in the germ line by homologous recombination. See also Metamorphosis – Profound change in body structure during the postembryonic development of an organism References Sources Further reading Reproduction
Biological life cycle
[ "Biology" ]
2,948
[ "Biological interactions", "Behavior", "Reproduction" ]
319,611
https://en.wikipedia.org/wiki/Life%20history%20%28sociology%29
Life history is an interviewing method used to record autobiographical history from an ordinary person's perspective, often gathered from traditionally marginalized groups. It was begun by anthropologists studying Native American groups around the 1900s, and was taken up by sociologists and other scholars, though its popularity has waxed and waned since. One of the major strengths of the life history method is that it provides a kind of voice from a social milieu that is often overlooked or indeed invisible in intellectual discourse. Life history method The method was first used when interviewing indigenous peoples of the Americas and specifically Native American leaders who were asked by an interviewer to describe their lives with an insight as to what it was like to be that particular person. The purpose of the interview was to capture a living picture of a disappearing (as such) people/way of life. Later the method was used to interview criminals and prostitutes in Chicago. Interviewers looked at social and police-records, as well as the society in general, and asked subjects to talk about their lives. The resulting report discussed (i) Chicago at that particular time; (ii) how the subject viewed their own life (i.e. 'how it was like to be this particular person') and (iii) how society viewed the subject and whether they would be incarcerated, receive help, perform social work, etc. The landmark of the life history method was developed in the 1920s and most significantly embodied in The Polish Peasant in Europe and America by W. I. Thomas and Florian Znaniecki. The authors employed a Polish immigrant to write his own life story which they then interpreted and analyzed. According to Martin Bulmer, it was "the first systematically collected sociological life history". The approach later lost momentum as quantitative methods became more prevalent in American sociology. The method was revived in the 1970s, mainly through the efforts of French sociologist Daniel Bertaux and Paul Thompson whose life history research focused on such professions as bakers and fishermen. Major initiatives of the life history method were undertaken also in Germany, Italy, and Finland. In the German context, the life history method is closely associated with the development of biographical research and biographical-narrative interviews. The narrative interview as a method for conducting open narrative interviews in empirical social research was developed in Germany around 1975. It borrowed concepts from phenomenology (Alfred Schütz), symbolic interactionism (George Herbert Mead), ethnomethodology (Harold Garfinkel), and sociology of knowledge (Karl Mannheim). The development and improvement of the method are closely connected to German sociologist Fritz Schütze, part of the Bielefeld Sociologist's Working Group, which maintained close academic cooperation with American sociolinguists and social scientists such as Erving Goffman, Harvey Sacks, John Gumpertz, and Anselm Strauss. The analysis of life histories was further developed by the biographical case reconstruction method of German sociologist Gabriele Rosenthal for the analysis of life history and life story. Rosenthal differentiates between the level of analysis of the narrated life story (erzählte Lebensgeshichte) and the experienced life history (erlebte Lebensgeschichte). Technique In this method, the interviewer allows the subject to tell the story of their life on their own terms, as opposed to those of the researcher. 
It is common practice to begin the interview with the subject's early childhood and to proceed chronologically to the present. Another approach, dating from the Polish Peasant, is to ask participants to write their own life stories. This can be done either through competitions (as in Poland, Finland or Italy) or by collecting written life stories written spontaneously. In these countries, there are already large collections of life stories, which can be used by researchers. References Footnotes Bibliography Further reading Human development
Life history (sociology)
[ "Biology" ]
777
[ "Behavioural sciences", "Behavior", "Human development" ]
319,613
https://en.wikipedia.org/wiki/Voltage%20spike
In electrical engineering, spikes are fast, short duration electrical transients in voltage (voltage spikes), current (current spikes), or transferred energy (energy spikes) in an electrical circuit. Fast, short duration electrical transients (overvoltages) in the electric potential of a circuit are typically caused by Lightning strikes Power outages Tripped circuit breakers Short circuits Power transitions in other large equipment on the same power line Malfunctions caused by the power company Electromagnetic pulses (EMP) with electromagnetic energy distributed typically up to the 100 kHz and 1 MHz frequency range. Inductive spikes In the design of critical infrastructure and military hardware, one concern is pulses produced by nuclear explosions, whose nuclear electromagnetic pulses distribute large energies in frequencies from 1 kHz into the gigahertz range through the atmosphere. The effect of a voltage spike is to produce a corresponding increase in current (current spike). However, some voltage spikes may be created by current sources, in which case the voltage rises as necessary so that a constant current continues to flow; current from a discharging inductor is one example. For sensitive electronics, excessive current can flow if this voltage spike exceeds a material's breakdown voltage, or if it causes avalanche breakdown. In semiconductor junctions, excessive electric current may destroy or severely weaken the device. An avalanche diode, transient voltage suppression diode, varistor, overvoltage crowbar, or a range of other overvoltage protective devices can divert (shunt) this transient current, thereby minimizing the voltage. Voltage spikes, also known as surges, may be created by a rapid buildup or decay of a magnetic field, which may induce energy into the associated circuit. However, voltage spikes can also have more mundane causes, such as a fault in a transformer or higher-voltage (primary circuit) power wires falling onto lower-voltage (secondary circuit) power wires as a result of accident or storm damage. Voltage spikes may be longitudinal (common) mode or metallic (normal or differential) mode. Some equipment damage from surges and spikes can be prevented by use of surge protection equipment. Each type of spike requires selective use of protective equipment. For example, a common mode voltage spike may not even be detected by a protector installed for normal mode transients. Power increases or decreases which last multiple cycles are called swells or sags, respectively. An uninterrupted voltage increase that lasts more than a minute is called an overvoltage. These are usually caused by malfunctions of the electric power distribution system. See also Flyback diode - a device to channel inductive spikes back through the coil producing them References Power electronics Spike
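The inductive-spike mechanism mentioned above follows directly from v = L·di/dt: if the current through an inductor is interrupted quickly, the voltage across it rises to whatever value keeps the current flowing for an instant. The numbers below are invented, order-of-magnitude values for illustration only.

```python
# v = L * di/dt for an abruptly interrupted inductive load (illustrative values).
L = 10e-3     # 10 mH coil, e.g. a relay winding
I = 0.5       # 0.5 A flowing before the switch opens
dt = 1e-6     # current forced to zero in roughly one microsecond

v_spike = L * I / dt
print(f"{v_spike:.0f} V")   # about 5000 V appears across the opening contacts
```

This is why a flyback diode or one of the overvoltage protective devices listed above is normally connected across such loads.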
Voltage spike
[ "Physics", "Engineering" ]
542
[ "Physical quantities", "Electronic engineering", "Voltage", "Voltage stability", "Power electronics" ]
319,632
https://en.wikipedia.org/wiki/Verisign
Verisign, Inc. is an American company based in Reston, Virginia, that operates a diverse array of network infrastructure, including two of the Internet's thirteen root nameservers, the authoritative registry for the .com, .net, and .name generic top-level domains and the .cc country-code top-level domain, and the back-end systems for the .jobs and .edu sponsored top-level domains. In 2010, Verisign sold its authentication business unit – which included Secure Sockets Layer (SSL) certificate, public key infrastructure (PKI), Verisign Trust Seal, and Verisign Identity Protection (VIP) services – to Symantec for $1.28 billion. The deal capped a multi-year effort by Verisign to narrow its focus to its core infrastructure and security business units. Symantec later sold this unit to DigiCert in 2017. On October 25, 2018, NeuStar, Inc. acquired VeriSign's Security Service Customer Contracts. The acquisition effectively transferred Verisign Inc.'s Distributed Denial of Service (DDoS) protection, Managed DNS, DNS Firewall and fee-based Recursive DNS services customer contracts. Verisign's former chief financial officer (CFO) Brian Robins announced in August 2010 that the company would move from its original location of Mountain View, California, to Dulles in Northern Virginia by 2011 due to 95% of the company's business being on the East Coast. The company is incorporated in Delaware. History Verisign was founded in 1995 as a spin-off of the RSA Security certification services business. The new company received licenses to key cryptographic patents held by RSA (set to expire in 2000) and a time-limited non-compete agreement. The new company served as a certificate authority (CA) and its initial mission was "providing trust for the Internet and Electronic Commerce through our Digital Authentication services and products". Prior to selling its certificate business to Symantec in 2010, Verisign had more than 3 million certificates in operation for everything from military to financial services and retail applications, making it the largest CA in the world. In 2000, Verisign acquired Network Solutions, which operated the .com, .net and .org TLDs under agreements with the Internet Corporation for Assigned Names and Numbers (ICANN) and the United States Department of Commerce, for $21 billion. Those core registry functions formed the basis for Verisign's naming division, which by then had become the company's largest and most significant business unit. In 2002, Verisign was charged with violation of the Securities Exchange Act. Verisign divested the Network Solutions retail (domain name registrar) business in 2003 for $100 million, retaining the domain name registry (wholesale) function as its core Internet addressing business. For the year ended December 31, 2010, Verisign reported revenue of $681 million, up 10% from $616 million in 2009. Verisign operates two businesses, Naming Services, which encompasses the operation of top-level domains and critical Internet infrastructure, and Network Intelligence and Availability (NIA) Services, which encompasses DDoS mitigation, managed DNS and threat intelligence. On August 9, 2010, Symantec completed its approximately $1.28 billion acquisition of Verisign's authentication business, including the Secure Sockets Layer (SSL) Certificate Services, the Public Key Infrastructure (PKI) Services, the Verisign Trust Services, the Verisign Identity Protection (VIP) Authentication Service, and the majority stake in Verisign Japan. 
The deal capped a multi-year effort by Verisign to narrow its focus to its core infrastructure and security business units. Following ongoing controversies regarding Symantec's handling of certificate validation, which culminated in Google untrusting Symantec-issued certificates in its Chrome web browser, Symantec sold this unit to DigiCert in 2017 for $950 million. On 14 December 2021, the Ministry of Justice, Communication and Foreign Affairs of the Tuvalu Government announced on Facebook that it had selected GoDaddy Registry as the new registry service provider for the .tv domain after Verisign did not participate in the renewal process. In 2011, Verisign was selected by the General Services Administration (GSA) to operate the registry services for the .gov top-level domain. It continued to operate the service until 2023, when the Cybersecurity and Infrastructure Security Agency (CISA) chose Cloudflare to replace Verisign as the .gov operator. Verisign's share price tumbled in early 2014, hastened by the U.S. government's announcement that it would "relinquish oversight of the Internet's domain-naming system to a non-government entity". Ultimately ICANN chose to continue VeriSign's role as the root zone maintainer and the two entered into a new contract in 2016. Naming services Verisign's core business is its naming services division. The division operates the authoritative domain name registries for two of the Internet's most important top-level domains, .com and .net, as well as for .name. It is the primary technical subcontractor for the .edu and .jobs top-level domains for their respective registry operators, which are non-profit organizations; in this role Verisign maintains the zone files for these particular domains and hosts the domains from their domain servers. In addition, Verisign is also the contracted registry operator for the .cc country code top-level domain (Cocos Islands). Registry operators are the "wholesalers" of Internet domain names, while domain name registrars act as the "retailers", working directly with consumers to register a domain name address. It formerly was the contracted registry for the .gov top-level domain as well as for the country code top-level domain .tv (Tuvalu). Verisign also operates two of the Internet's thirteen "root servers", which are identified by the letters A–M (Verisign operates the "A" and "J" root servers). The root servers form the top of the hierarchical Domain Name System that supports most modern Internet communication. Verisign also generates the globally recognized root zone file and is responsible for processing changes to that file once they are ordered by ICANN via IANA and approved by the U.S. Department of Commerce. Changes to the root zone were originally distributed via the A root server, but now they are distributed to all thirteen servers via a separate distribution system which Verisign maintains. Verisign is the only one of the 12 root server operators to operate more than one of the thirteen root nameservers. The A and J root servers are "anycasted" and are no longer operated from any of the company's own datacenters, as a means to increase redundancy and availability and mitigate the threat of a single point of failure. In 2016, the Department of Commerce ended its role in managing the Internet's DNS and transferred full control to ICANN. While this initially negatively impacted VeriSign's stock, ICANN eventually chose to contract with Verisign to continue its role as the root zone maintainer. 
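As an illustration of the root-server role described above, the sketch below sends a single DNS query to a.root-servers.net, one of the two root servers Verisign operates, and prints the referral it returns for the .com zone. It assumes the third-party dnspython package is installed and that the well-known anycast address 198.41.0.4 is reachable; it is a minimal illustration, not part of Verisign's own tooling.

```python
# Minimal illustration of querying a.root-servers.net (one of the two root
# servers Verisign operates) for the delegation of the .com zone. Assumes the
# third-party dnspython package; server behaviour and addressing are outside
# this article, so treat this purely as a sketch.
import dns.message
import dns.query
import dns.rdatatype

A_ROOT = "198.41.0.4"  # well-known anycast address of a.root-servers.net

query = dns.message.make_query("com.", dns.rdatatype.NS)
response = dns.query.udp(query, A_ROOT, timeout=5)

# A root server typically answers with a referral: the .com nameservers appear
# in the authority section rather than the answer section.
for section_name, section in (("answer", response.answer),
                              ("authority", response.authority)):
    for rrset in section:
        print(section_name, rrset)
```

The referral returned here is exactly the delegation information that Verisign publishes in the root zone file on ICANN's behalf.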
VeriSign's naming services division dates back to 1993 when Network Solutions was awarded a contract by the National Science Foundation to manage and operate the civilian side of the Internet's domain name registrations. Network Solutions was the sole registrar for all of the Internet's non-governmental generic top-level domains until 1998 when ICANN was established and the new system of competitive registrars was implemented. As a result of these new policies, Network Solutions divided itself into two divisions. The NSI Registry division was established to manage the authoritative registries that the company would still operate, and was separated from the customer-facing registrar business that would have to compete with other registrars. The divisions were even geographically split with the NSI Registry moving from the corporate headquarters in Herndon, Virginia, to nearby Dulles, Virginia. In 2000, VeriSign purchased Network Solutions taking over its role in the Internet's DNS. The NSI Registry division eventually became VeriSign's naming services division while the remainder of Network Solutions was later sold by Verisign in 2003 to Pivotal Equity Group. Company properties Following the sale of its authentication services division in 2010, Verisign relocated from its former headquarters in Mountain View, California, to the headquarters of the naming division in Sterling, Virginia (originally NSI Registry's headquarters). Verisign began shopping that year for a new permanent home shortly after moving. They signed a lease for 12061 Bluemont Way in Reston, the former Sallie Mae headquarters, in 2010 and decided to purchase the building in September 2011. They have since terminated their lease of their current space in two buildings at Lakeside@Loudoun Technology Center. The company completed its move at the end of November 2011. The new headquarters is located in the Reston Town Center development which has become a major commercial and business hub for the region. In addition to its Reston headquarters, Verisign owns three data center properties. One at 22340 Dresden Street in Dulles, Virginia, not far from its corporate headquarters (within the large Broad Run Technology Park), one at 21 Boulden Circle in New Castle, Delaware, and a third in Fribourg, Switzerland. Their three data centers are mirrored so that a disaster at one data center has a minimal impact on operations. Verisign also leases an office suite in downtown Washington, D.C., on K street where its government relations office is located. It also has leased server space in numerous internet data centers around the world where the DNS constellation resolution sites are located, mostly at major internet peering facilities. One such facility is at the Equinix Ashburn Datacenter in Ashburn, Virginia, one of the world's largest datacenters and internet transit hubs. Controversies 2001: Code signing certificate mistake In January 2001, Verisign mistakenly issued two Class 3 code signing certificates to an individual claiming to be an employee of Microsoft. The mistake was not discovered and the certificates were not revoked until two weeks later during a routine audit. Because Verisign code-signing certificates do not specify a Certificate Revocation List Distribution Point, there was no way for them to be automatically detected as having been revoked, placing Microsoft's customers at risk. Microsoft had to later release a special security patch in order to revoke the certificates and mark them as being fraudulent. 
2002: Domain transfer lawsuit In 2002, Verisign was sued for domain slamming – transferring domains from other registrars to themselves by making the registrants believe they were merely renewing their domain name. Although they were found not to have broken the law, they were barred from suggesting that a domain was about to expire or claiming that a transfer was actually a renewal. 2003: Site Finder legal case In September 2003, Verisign introduced a service called Site Finder, which redirected Web browsers to a search service when users attempted to go to non-existent .com or .net domain names. ICANN asserted that Verisign had overstepped the terms of its contract with the U.S. Department of Commerce, which in essence grants Verisign the right to operate the DNS for .com and .net, and Verisign shut down the service. Subsequently, Verisign filed a lawsuit against ICANN in February 2004, seeking to gain clarity over what services it could offer in the context of its contract with ICANN. The claim was moved from federal to California state court in August 2004. In late 2005, Verisign and ICANN announced a proposed settlement which defined a process for the introduction of new registry services in the .com registry. The documents concerning these settlements are available at ICANN.org. The ICANN comments mailing list archive documents some of the criticisms that have been raised regarding the settlement. Additionally, Verisign was involved in the matter decided by the Ninth Circuit. 2003: Gives up .org domain In keeping with ICANN's charter to introduce competition to the domain name marketplace, Verisign agreed to give up its operation of the .org top-level domain in 2003 in exchange for a continuation of its contract to operate .com, which at the time had more than 34 million registered addresses. 2005: Retains .net domain In mid-2005, the existing contract for the operation of .net expired and five companies, including Verisign, bid for management of it. Verisign enlisted numerous IT and telecom heavyweights including Microsoft, IBM, Sun Microsystems, MCI, and others, to assert that Verisign had a perfect record operating .net. They proposed Verisign continue to manage .net due to its critical importance as the domain underlying numerous "backbone" network services. Verisign was also aided by the fact that several of the other bidders were based outside the United States, which raised concerns in national security circles. On June 8, 2005, ICANN announced that Verisign had been approved to operate .net until 2011. More information on the bidding process is available at ICANN. On July 1, 2011, ICANN announced that VeriSign's approval to operate .net was extended another six years, until 2017. 2010: Data breach and disclosure controversy In February 2012, Verisign revealed that their network security had been repeatedly breached in 2010. Verisign stated that the breach did not impact the Domain Name System (DNS) that they maintain, but would not provide details about the loss of data. Verisign was widely criticized for not disclosing the breach earlier and apparently attempting to hide the news in an October 2011 SEC filing. Because of the lack of details provided by Verisign, it was not clear whether the breach impacted the certificate signing business, acquired by Symantec in late 2010. Some, such as Oliver Lavery, the Director of Security and Research for nCircle, doubted whether sites using Verisign SSL certificates could be trusted. 2010: Web site domain seizures On November 29, 2010, the U.S. Immigration and Customs Enforcement (U.S. 
ICE) issued seizure orders against 82 web sites with Internet addresses that were reported to be involved in the illegal sale and distribution of counterfeit goods. As registry operator for .com, Verisign performed the required takedowns of the 82 sites under order from law enforcement. InformationWeek reported that "Verisign will say only that it received sealed court orders directing certain actions to be taken with respect to specific domain names". The removal of the 82 websites was cited as an impetus for the launch of "the Dot-P2P Project" in order to create a decentralized DNS service without centralized registry operators. Following the disappearance of WikiLeaks during the following week and its forced move to wikileaks.ch, a Swiss domain, the Electronic Frontier Foundation warned of the dangers of having key pieces of Internet infrastructure such as DNS name translation under corporate control. 2012: Web site domain seizure In March 2012, the U.S. government declared that it had the right to seize domains ending in .com, .net, .cc, .tv, .name, and .org if the companies administering the domains are based in the U.S. The U.S. government can seize the domains ending in .com, .net, .cc, .tv, and .name by serving a court order on Verisign, which manages those domains. The .org domain is managed by the Virginia-based non-profit Public Interest Registry. In March 2012, Verisign shut down the sports-betting site Bodog.com after receiving a court order, even though the domain name was registered to a Canadian company. References External links Digicert SSL Certificates - formerly from Verisign Oral history interview with James Bidzos, Charles Babbage Institute University of Minnesota, Minneapolis. Bidzos discusses his leadership of software security firm RSA Data Security as it sought to commercialize encryption technology as well as his role in creating the RSA Conference and founding Verisign. Oral history interview 2004, Mill Valley, California. Internet technology companies of the United States American companies established in 1995 Domain Name System Computer companies established in 1995 Companies based in Reston, Virginia Companies listed on the Nasdaq Former certificate authorities Radio-frequency identification Domain name registries 1995 establishments in Virginia DDoS mitigation companies 1998 initial public offerings Corporate spin-offs Domain name seizures by United States
Verisign
[ "Engineering" ]
3,340
[ "Radio-frequency identification", "Radio electronics" ]
319,641
https://en.wikipedia.org/wiki/Marlex
Marlex is a trademarked name for a crystalline polypropylene and high-density polyethylene (HDPE). These plastics were invented by J. Paul Hogan and Robert Banks, two research chemists at the Phillips Petroleum Company, in 1951. Interest in the material in the 1950s arose from its high melting point and tensile strength, making it more desirable than the more common form of polyethylene. For example, the medical community in 1958 was eager to use Marlex 50 crystalline polyethylene, which softens only at high temperature. Objects made of Marlex could be sterilized in high-temperature autoclaves without affecting their form. Marlex was used by Wham-O for their Hula Hoops in the 1950s, which helped create a market for this form of plastic. It is now an integral part of a wide variety of products and markets around the world. Additionally, it can be used surgically as a reinforcing mesh in inguinal hernia repair. Notes References Polymers
Marlex
[ "Chemistry", "Materials_science" ]
211
[ "Polymer stubs", "Polymers", "Polymer chemistry", "Organic chemistry stubs" ]
319,653
https://en.wikipedia.org/wiki/Bidet
A bidet is a bowl or receptacle designed to be sat upon in order to wash a person's genitalia, perineum, inner buttocks, and anus. The modern variety has a plumbed-in water supply and a drainage opening, and is thus a plumbing fixture subject to local hygiene regulations. The bidet is designed to promote personal hygiene and is used after defecation, and before and after sexual intercourse. It can also be used to wash feet, with or without filling it up with water. Some people even use bidets to bathe babies or pets. In several European countries, a bidet is now required by law to be present in every bathroom containing a toilet bowl. It was originally located in the bedroom, near the chamber-pot and the marital bed, but in modern times is located near the toilet bowl in the bathroom. Fixtures that combine a toilet seat with a washing facility include the electronic bidet. Opinions as to the necessity of the bidet vary widely across nationalities and cultures. In cultures that use it habitually, such as parts of Western, Central and Southeastern Europe (especially Italy and Portugal), Eastern Asia and some Latin American countries such as Argentina or Paraguay, it is considered an indispensable tool in maintaining good personal hygiene. It is commonly used in North African countries, such as Egypt. It is rarely used in sub-Saharan Africa, Australia, and North America. "Bidet" is a French loanword meaning "pony" due to the straddling position adopted in its usage. Applications Bidets are primarily used to wash and clean the genitalia, perineum, inner buttocks, and anus. Some bidets have a vertical jet intended to give easy access for washing and rinsing the perineum and anal area. The traditional separate bidet is like a wash-basin which is used with running warm water and specific soaps, and may then be used for many other purposes such as washing feet. Types Bidet shower A bidet shower (also known as "bidet spray", "bidet sprayer", or "health faucet") is a hand-held triggered nozzle, similar to that on a kitchen sink sprayer, that delivers a spray of water to assist in anal cleansing and cleaning the genitals after defecation and urination. In contrast to a bidet that is integrated with the toilet, a bidet shower has to be held by the hands, and cleaning does not take place automatically. Bidet showers are common in countries where water is considered essential for anal cleansing. Drawbacks include the possibility of wetting a user's clothing if used carelessly. In addition, a user must be reasonably mobile and flexible to use a hand-held bidet shower. Conventional or standalone bidet A bidet is a plumbing fixture that is installed as a separate unit in the bathroom alongside the toilet, shower and sink, which users have to straddle. Some bidets resemble a large hand basin, with taps and a stopper so they can be filled up; other designs have a nozzle that squirts a jet of water to aid in cleansing. Add-on bidets There are bidets that are attachable to toilet bowls, saving space and obviating additional plumbing. A bidet may be a movable or fixed nozzle, either attached to an existing toilet on the back or side toilet rim, or replacing the toilet seat. In these cases, its use is restricted to cleaning the anus and genitals. Some bidets of this type produce a vertical water jet and others a more-or-less oblique one. Other bidets have one nozzle on the side rim aimed at both anal and genital areas, and other designs have two nozzles on the back rim. 
The shorter one, called the "family nozzle", is used for washing the area around the anus, and the longer one ("bidet nozzle") is designed for washing the vulva. Such attachable bidets (also called "combined toilets", "bidet attachments", or "add-on bidets") are controlled either mechanically, by turning a valve, or electronically. Electronic bidets are controlled with waterproof electrical switches rather than a manual valve. There are models that have a heating element which blows warm air to dry the user after washing, that offer heated seats, wireless remote controls, illumination through built in night lights, or built in deodorizers and activated carbon filters to remove odours. Further refinements include adjustable water pressure, temperature compensation, and directional spray control. Where bathroom appearance is of concern, under-the-seat mounting types have become more popular. An add-on bidet typically connects to the existing water supply of a toilet via the addition of a threaded tee pipe adapter, and requires no soldering or other plumbing work. Electronic add-on bidets also require a GFCI protected grounded electrical outlet. Usage and health Personal hygiene is improved and maintained more accurately and easily with the use of both toilet paper and a bidet as compared to the use of toilet paper alone. In some add-on bidets with vertical jets, little water is used and toilet paper may not be necessary. Addressing hemorrhoids and genital health issues might also be facilitated by the use of bidet fixtures. Because of the large surface of the basin, after-use and routine disinfection of stand-alone bidets require thoroughness, or microbial contamination from one user to the next could take place. Bidet attachments are sometimes included on hospital toilets because of their utility in maintaining hygiene. Hospitals must consider the use of bidet properly and consider the clinical background of patients to prevent cross-infection. Warm-water bidets may harbor dangerous microbes if not properly disinfected. Environmental aspects From an environmental standpoint, bidets can reduce the need for toilet paper. Considering that an average person uses only 0.5 litre (1/8 US gallon) of water for cleansing by using a bidet, much less water is used than for manufacturing toilet paper. An article in Scientific American concluded that using a bidet is "much less stressful on the environment than using paper". Scientific American has also reported that if the US switched to using bidets, 15 million trees could be saved every year. In the US, UK, and some other countries, wet wipes are heavily marketed as an upgrade from dry toilet paper. However, this product has been criticized for its adverse environmental impact, due to the non-biodegradable plastic fibers composing most versions. Although the wipes are promoted as "flushable", they absorb waste fats and agglomerate into massive "fatbergs" which can clog sewer systems and must be cleared at great expense. Bidets are being marketed as cleaning better than toilet paper or wet wipes, with fewer negative environmental effects. Society and culture The bidet is common in Catholic countries and required by law in some. It is also found in some traditionally Eastern Orthodox and Protestant countries such as Greece and Finland respectively, where bidet showers are common. In Islam, there are many strict rules concerning excretion; in particular, anal washing with water is required. 
Consequently, in Middle Eastern regions where Islam is the predominant religion, water for anal washing is provided in most toilets, usually in the form of a hand-held "bidet shower" or shattaf. Prevalence Bidets are becoming increasingly popular with the elderly and disabled. Combined toilet/bidet installations make self-care toileting possible for many people, affording greater independence. There are often special units with higher toilet seats allowing easier wheelchair transfer, and with some form of electronic remote control that benefits an individual with limited mobility or otherwise requiring assistance. Bidets are common bathroom fixtures in the Arab world and in Catholic countries, such as Italy (the installation of a bidet in a bathroom has been mandatory since 1975), Spain (but in recent times new or renewed houses tend to have bathrooms without bidets, except the luxurious ones), and Portugal (installation is mandatory since 1975). They are also found in Southeastern European countries such as Albania, Bosnia and Herzegovina, Romania, Greece and Turkey. They are very popular in some South American countries, particularly Argentina, Paraguay and Uruguay. Electronic bidet-integrated toilets, often with functions such as toilet seat warming, are commonly found in Japan, and are becoming more popular in other Asian countries. In Northern Europe, bidets are rare, although in Finland, bidet showers are common. Bidet showers are most commonly found in Southeast Asia, South Asia, and the Middle East. In 1980, the first "paperless toilet" was launched in Japan by manufacturer Toto, a combination of toilet and bidet which also dries the user after washing. These combination toilet-bidets (washlet) with seat warmers, or attachable bidets are particularly popular in Japan and South Korea, and are found in approximately 76% of Japanese households . They are commonly found in hotels and some public facilities. These bidet-toilets, along with toilet seat and bidet units (to convert an existing toilet) are sold in many countries, including the United States. Bidet seat conversions are much easier and lower cost to install than traditional bidets, and have disrupted the market for the older fixtures. After a slow start in the 1990s, electronic bidets are starting to become more available in the United States. American distributors were directly influenced by their Japanese predecessors, as the founders of Brondell (established in 2003) have indicated. The popularity of add-on bidet units is steadily increasing in the United States, Canada and the United Kingdom, in part because of their ability to treat hemorrhoids or urogenital infections. In addition, shortages of toilet paper due to the coronavirus pandemic have led to an increased interest in bidets. Etymology Bidet is a French word for "pony", and in Old French, meant "to trot". This etymology comes from the notion that one "rides" or straddles a bidet much like a pony is ridden. The word "bidet" was used in 15th-century France to refer to the pet ponies that French royalty kept. History The bidet appears to have been an invention of French furniture makers in the late 17th century, although no exact date or inventor is known. The earliest written reference to the bidet is in 1726 in Italy. 
Even though there are records of Maria Carolina of Austria, Queen of Naples and Sicily, requesting a bidet for her personal bathroom in the Royal Palace of Caserta in the second half of the 18th century, the bidet did not become widespread in Italy until after the Second World War. The bidet is possibly associated with the chamber pot and the bourdaloue, the latter being a small, hand-held chamber pot. Historical antecedents and early functions of the bidet are believed to include devices used for contraception. Bidets are considered ineffective by today's standards of contraception, and their use for that function was quickly abandoned and forgotten following the advent of modern contraceptives such as the pill. By 1900, due to plumbing improvements, the bidet (and chamber pot) moved from the bedroom to the bathroom and became more convenient to fill and drain. In 1928, in the United States, John Harvey Kellogg applied for a patent on an "anal douche". In his application, he used the term to describe a system comparable to what today might be called a bidet nozzle, which can be attached to a toilet to perform anal cleansing with water. In 1965, the American Bidet Company featured an adjustable spray nozzle and warm water option, seeking to make the bidet a household item. The fixture was expensive, and required floor space to install; it was eventually discontinued without a replacement model. The early 1980s saw the introduction of the electronic bidet from Japan, with names such as Clean Sense, Galaxy, Infinity, Novita, and of non-electric attachments such as Gobidet. These devices have attachments that connect to existing toilet water supplies, and can be used in bathrooms lacking the space for a separate bidet and toilet. Many models have additional features, such as instant-heating warm water, night lights, or a heated seat. See also Anal hygiene Cleanliness Ecological sanitation Feminine hygiene Improved sanitation Infection prevention and control Istinja Public health Sustainable sanitation Tabo (hygiene) Toilet seat Toilets in Japan Washlet References External links Bathroom equipment French inventions Hygiene Plumbing
Bidet
[ "Engineering" ]
2,594
[ "Construction", "Plumbing" ]
319,657
https://en.wikipedia.org/wiki/Claude%20All%C3%A8gre
Claude Allègre (31 March 1937 – 4 January 2025) was a French politician and scientist. His work in the field of isotope geochemistry was recognised with the award of many senior medals, including the Crafoord Prize for geosciences in 1986 and the William Bowie Medal of the American Geophysical Union in 1995. His political service included a three-year term as Minister of Education in France, from 1997 to 2000. Early life Allègre was born in Paris on 31 March 1937, and was the eldest of four children. His father was a professor of natural sciences, and his mother was a school headteacher. Allègre's family was from the Hérault region of France. Background and scientific work Allègre's main area of research was in geochemistry. He started work in this field for his doctoral research, where he focussed on ways of dating rocks using isotope geochemistry; specifically radiometric dating. After realising that there was no laboratory in France where he could make measurements with the accuracy he was seeking, Allègre received a NATO grant and spent the summer of 1965 working at the California Institute of Technology in Pasadena, California. Here, Allègre began working with Jerry Wasserburg, and learned the techniques required for rubidium-strontium dating of rocks by mass spectrometry. Allègre returned to France, and over the next three years built a laboratory and began making isotopic measurements. He completed his doctoral thesis, titled 'introduction to the systematic geochronology of open systems', at the University of Paris in 1967. In 1968, he took up a position at the Institut de Physique du Globe de Paris (IPGP), where he then spent much of his scientific career, including a ten-year stint as IPGP director from 1976 to 1986. Over the next thirty years, Allègre and his research students, post-doctoral researchers and collaborators developed techniques that meant they were able to measure isotope abundances in rocks and minerals by mass spectrometry that set new standards of sensitivity and precision. This allowed Allègre and his team to develop new ideas about the age and chemical evolution of the outer parts of the Earth, and also to provide new information and insight into the early history of the solar system by dating meteorites. Allègre defined the new field of 'chemical geodynamics'. This combined data from isotope geochemistry with constraints from geophysics to develop ideas about the long-term chemical evolution of the planet, from core-formation to crustal growth. Allègre's work had a substantial impact on the field of geochemistry, for which he received a number of awards and elections to national academies, including the US National Academy of Sciences in 1985, and the Royal Society in 2002. He was also awarded senior medals for his work, from the Geochemical Society (V.M. Goldschmidt award, 1986) and the American Geophysical Union (Bowie medal, in 1995). In 1986, he was jointly awarded the Crafoord Prize with Wasserburg, in recognition of their 'pioneering work in isotope geology'. Scientific administration Allègre made many contributions to the organisation of the geological and geochemical sciences in France and Europe throughout his career. In 1981, he became the first president of the European Union of Geosciences (EUG), which was established to coordinate a biennial scientific congress for geoscientists across Europe. The EUG later merged with the European Geophysical Union, to become the European Geosciences Union (EGU), in 2004. 
In 1988, Allègre created the European Association of Geochemistry and presided over an inaugural international conference on geochemistry in Paris. This led to the establishment of the annual 'Goldschmidt Conferences' of the international geochemistry community, in cooperation with the Geochemical Society which are held in alternate years in Europe, and in the United States. From 1992 to 1997, Allègre was director of the French national geological survey, the Bureau de Recherches Géologiques et Minières. In 2004, Allègre was presented with the distinguished service award of the Geochemical Society for his 'enormous' service to the geochemical profession. In his citation, Al Hofmann commented that Allègre's 'actions have not always been popular ... but they have always been guided by far-sighted strategic thinking and planning, and usually by deep insight.' He also characterised Allègre's approach to service as one that involved 'hatching a far-flung idea ... hand picking a few people ... and then letting them do the work.' Scientific works Over the course of his career, Allègre published many scientific papers. He also authored a number of scientific monographs and textbooks, including: Introduction to geochemistry (1974) Trace elements in igneous petrology : a volume in memory of Paul W. Gast (1978) From stone to star : a view of modern geology (1992) Isotope geology (2008) Allègre also wrote a number of popular science texts, on topics such as the history of the Earth and the plate tectonic revolution. His 1988 book, The behaviour of the Earth, gained praise from reviewers for presenting a perspective on the French scientific contributions to the history of plate tectonics. Historian of geology, David Leveson, cautioned that the narrative promoted a 'Whiggish' telling of the story of plate tectonics as one of progress, from the viewpoint of an insider. While Allègre's account of the new global geology of plate tectonics was 'lyrical' and 'rhapsodic', Leveson argued that Allègre's focus on progress meant that he was not able to successfully place 'mobilist geology in its "proper" sociological context' in this book. Scientific controversies In 1976, Allègre and volcanologist Haroun Tazieff became involved in an intense and public quarrel about whether inhabitants should evacuate the areas surrounding the la Soufrière volcano on the Caribbean island of Guadeloupe, which had begun to show signs of unrest, including steam explosions. Allègre held that inhabitants should be evacuated, while Tazieff held that the Soufrière was harmless because all analyses pointed to a purely phreatic eruption with no sign of fresh magma. In part out of caution, the authorities decided to follow Allègre's advice and evacuate. The eruptive crisis did not result in any damage, but the evacuation had very significant negative consequences. Allègre, as the director of Institut de Physique du Globe de Paris, subsequently expelled Tazieff from that institute. The controversy dragged on for many years after the end of the eruption, and ended up in court. Political career A one time member of the French Socialist Party, Allègre is better known to the general public for his political responsibilities, which included serving as Minister of Education of France in the Jospin cabinet from 4 June 1997 to March 2000, when he was replaced by Jack Lang. His outpourings of critiques against teaching personnel, as well as his reforms, made him increasingly unpopular in the teaching world. 
In 1996, Allègre published La Défaite de Platon ("The defeat of Plato"), described by mathematician Pierre Schapira in the Spring 1997 edition of Mathematical Intelligencer as "one of the most savage broadsides against conceptual thought." In the run-up to the 2007 French presidential election, he endorsed Lionel Jospin, then Dominique Strauss-Kahn, for the Socialist nomination, and finally sided with the ex-Socialist Jean-Pierre Chevènement, against Ségolène Royal. When Chevènement decided not to run, he publicly declined to support Royal's bid for the presidency, citing differences over nuclear energy, GMOs and stem-cell research. He later became close to conservative president Nicolas Sarkozy. Views on global warming In an article in 2006 entitled "The Snows of Kilimanjaro" in L'Express, a French weekly, Allègre cited evidence that Antarctica is gaining ice and suggested that Mount Kilimanjaro's retreating snow caps, among other global-warming concerns, might be due to natural causes. He said that "[t]he cause of this climate change is unknown". This represented a change of mind, since Allègre wrote in 1987 that "By burning fossil fuels, man increased the concentration of carbon dioxide in the atmosphere which, for example, has raised the global mean temperature by half a degree in the last century". Allègre accused those agreeing with the mainstream scientific view of global warming of being motivated by money, saying that "the ecology of helpless protesting has become a very lucrative business for some people!" In 2009, when it was suggested that Claude Allègre might be offered a position as minister in President Nicolas Sarkozy's government, TV presenter and environmental activist Nicolas Hulot stated: "He doesn't think the same as the 2,500 scientists of the IPCC, who are warning the world about a disaster; that's his right. But if he were to be recruited in government, it would become policy, and it would be a bras d'honneur to those scientists. [...] [It] would be a tragic signal, six months before the Copenhagen Conference, and something incomprehensible coming from France, which has been a leading country for years in the fight against climate change!" In a 2010 petition, more than 500 French researchers asked Science Minister Valérie Pécresse to dismiss Allègre's book L'imposture climatique, claiming the book was "full of factual mistakes, distortions of data, and plain lies". Allègre described the petition as "useless and stupid". Later life and death Allègre suffered a heart attack while at a scientific conference in Chile in 2013. He was hospitalised, but survived. He died in Paris on 4 January 2025, at the age of 87. Awards and honors Foreign Associate of the National Academy of Sciences (1985) V. M. Goldschmidt Award (1986) Crafoord Prize for geology along with Gerald J. 
Wasserburg, (1986) Foreign Honorary Member of the American Academy of Arts and Sciences, (1987) Wollaston Medal of the Geological Society of London, (1987) Member of the American Philosophical Society (1992) Gold Medal of the CNRS, (1994) French Academy of Sciences, (1995) William Bowie Medal, American Geophysical Union (1995) Arthur Holmes Medal, European Geosciences Union (1995) Honorary Doctorate, Université Libre de Bruxelles (1998) Foreign Member, Royal Society (2002) Atoms for Peace prize (2011) National honours Commander, Legion of Honour (2000) Commander, Ordre des Palmes académiques (2000) See also Politics of France References Sources External links Senate Article – Global Warming Skepticism Canada National Post Article — Allegre's second thoughts 1937 births 2025 deaths Politicians from Paris Scientists from Paris Socialist Party (France) politicians French geochemists Foreign members of the Royal Society Members of the French Academy of Sciences Foreign associates of the National Academy of Sciences Fellows of the American Academy of Arts and Sciences Foreign fellows of the Indian National Science Academy Wollaston Medal winners Ministers of national education of France Members of the American Philosophical Society Recipients of the V. M. Goldschmidt Award
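Allègre's early scientific work centred on rubidium-strontium dating, mentioned in the Background section above. As a rough illustration of the calculation involved, the sketch below converts a hypothetical isochron slope into an age; the slope is invented for the example and the decay constant used is only the long-standing conventional value, so the numbers are illustrative rather than a description of Allègre's own measurements.

```python
import math

# Hypothetical isochron slope from a regression of 87Sr/86Sr against 87Rb/86Sr
# for a suite of cogenetic samples; this value is invented for the example.
slope = 0.0425

# Conventional decay constant of 87Rb, roughly 1.42e-11 per year; modern
# recalibrations differ slightly, so treat this as an assumption as well.
lam = 1.42e-11  # 1/yr

# Isochron relation: slope = exp(lam * t) - 1, so t = ln(1 + slope) / lam.
age_years = math.log(1.0 + slope) / lam
print(f"Isochron age: {age_years / 1e9:.2f} billion years")  # ~2.93 Gyr here
```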
Claude Allègre
[ "Chemistry" ]
2,377
[ "Geochemists", "Recipients of the V. M. Goldschmidt Award", "French geochemists" ]
319,671
https://en.wikipedia.org/wiki/Met%20Office
The Met Office, until November 2000 officially the Meteorological Office, is the United Kingdom's national weather and climate service. It is an executive agency and trading fund of the Department for Science, Innovation and Technology and is led by Penelope Endersby, who took on the role of Chief Executive in December 2018 and is the first woman to do so. The Met Office makes meteorological predictions across all timescales, from weather forecasts to climate change. History The Met Office was established on 1 August 1854 as a small department within the Board of Trade under Vice Admiral Robert FitzRoy as a service to mariners. The loss of the passenger vessel, the Royal Charter, and 459 lives off the coast of Anglesey in a violent storm in October 1859 led to the first gale warning service. FitzRoy established a network of 15 coastal stations from which visual gale warnings could be provided for ships at sea. The new electric telegraph enabled rapid dissemination of warnings and also led to the development of an observational network which could then be used to provide synoptic analysis. In 1861 the Met Office started to provide weather forecasts to newspapers. FitzRoy requested the daily traces of the photo-barograph at Kew Observatory (invented by Francis Ronalds) to assist in this task, and similar barographs, as well as instruments to continuously record other meteorological parameters, were later provided to stations across the observing network. Publication of forecasts ceased in May 1866 after FitzRoy's death but recommenced in April 1879. Connection with the Ministry of Defence Following the First World War, the Met Office became part of the Air Ministry in 1919, the weather observed from the top of Adastral House (where the Air Ministry was based) giving rise to the phrase "The weather on the Air Ministry roof". As a result of the need for weather information for aviation, the Met Office located many of its observation and data collection points on RAF airfields, and this accounts for the large number of military airfields mentioned in weather reports even today. In 1936 the Met Office split, with services to the Royal Navy thereafter provided by the Navy's own forecasting service. It became an executive agency of the Ministry of Defence in April 1990, a quasi-governmental role, being required to act commercially. Changes of ministry Following a machinery of government change, the Met Office became part of the Department for Business, Innovation and Skills on 18 July 2011, and subsequently part of the Department for Business, Energy and Industrial Strategy following the merger of BIS and the Department of Energy and Climate Change on 14 July 2016. Although no longer part of the MOD, the Met Office maintains strong links with the military through its front line offices at RAF and Army bases both in the UK and overseas and its involvement in the Joint Operations Meteorology and Oceanography Centre (JOMOC) with the Royal Navy. The Mobile Met Unit (MMU) is a unit of Met Office staff who are also RAF reservists and who accompany forward units in times of conflict, advising the armed forces, particularly the RAF, of the conditions for battle. Locations In September 2003 the Met Office moved its headquarters from Bracknell in Berkshire to a purpose-built £80m structure at Exeter Business Park, near junction 29 of the M5 motorway. The new building was officially opened on 21 June 2004 – a few weeks short of the Met Office's 150th anniversary – by Robert May, Baron May of Oxford. 
It has a worldwide presence, including a forecasting centre in Aberdeen, and offices in Gibraltar and on the Falklands. Other outposts lodge in establishments such as the MetOffice@Reading (formerly the Joint Centre for Mesoscale Meteorology) at the University of Reading in Berkshire, the Joint Centre for Hydro-Meteorological Research (JCHMR) site at Wallingford in Oxfordshire, and there is a Met Office presence at Army and Air Force bases within the UK and abroad (including frontline units in conflict zones). Royal Navy weather forecasts are generally provided by naval officers, not Met Office personnel. Forecasts Shipping Forecast The Shipping Forecast is produced by the Met Office and broadcast on BBC Radio 4, for those traversing the seas around the British Isles. Weather forecasting and warnings The Met Office issues Severe Weather Warnings for the United Kingdom through the National Severe Weather Warning Service (NSWWS). These warn of weather events that may affect transport infrastructure and endanger people's lives. In March 2008, the system was improved and a new stage of warning was introduced, the 'Advisory'. The Met Office, along with its Irish counterpart Met Éireann, introduced a storm naming system in September 2015 to provide a single authoritative naming system for the storms that affect the UK and Ireland. The first named storm under this system, Abigail, was announced on 10 November 2015. In 2019, the Met Office and Met Éireann were joined by the Dutch national weather forecasting service, the Royal Netherlands Meteorological Institute (KNMI). Weather prediction models The main role of the Met Office is to produce forecast models by gathering information from weather satellites in space and observations on Earth, then processing it with a variety of models, based on a software package known as the Unified Model. The principal weather products for UK customers are 36-hour forecasts from the operational 1.5 km resolution UKV model covering the UK and surroundings (replacing the 4 km model), 48-hour forecasts from the 12 km resolution NAE model covering Europe and the North Atlantic, and 144-hour forecasts from the 25 km resolution global model (replacing the 40 km global model). The Met Office's Global Model forecast has consistently been in the top 3 for global weather forecast performance (in the decades up to 2010) in independent verification to WMO standards. Products for other regions of the globe are sold to customers abroad, provided for MOD operations abroad or provided free to developing countries in Africa. If necessary, forecasters may make adjustments to the computer forecasts. Data is stored in the Met Office's own PP-format. Flood Forecasting Centre Formed in 2009, the Flood Forecasting Centre (FFC) is a joint venture between the Environment Agency and the Met Office to provide flood risk guidance for England and Wales. The Centre is jointly staffed from both parent organisations and is based in the Operations Centre at the Met Office headquarters in Exeter. In Scotland this role is performed by the Scottish Flood Forecasting Service, a joint venture between the Scottish Environment Protection Agency (SEPA) and the Met Office. Seasonal forecasts The Met Office makes seasonal and long-range forecasts and distributes them to customers and users globally. 
The Met Office was the first climate and weather forecast provider to be recognised as a Global Producing Centre of long range forecasts by the World Meteorological Organisation and continues to provide forecasts to the WMO for dissemination to other national meteorological services worldwide. Met Office research has broken new ground in seasonal forecasting for the extratropics and has demonstrated its abilities in its seasonal predictions of the North Atlantic Oscillation and winter climate for Europe and North America. Supply of forecasts for broadcasting companies One of the main media companies, ITV produce forecasts for ITV Weather using the Met Office's data and animated weather symbols. The BBC used to use Met Office forecasts for all of its output, but on 23 August 2015, it was announced that the BBC would be replacing the Met Office with MeteoGroup, a competing provider, as part of the corporation's legal obligation to provide best value for money for the licence fee payers. The BBC still uses some Met Office data for certain forecasts, particularly severe weather warnings and the Shipping Forecast. World Area Forecast Centre The Met Office is one of only two World Area Forecast Centres or WAFCs, and is referred to as WAFC London. The other WAFC is located in Kansas City, Missouri, and known as WAFC Washington. WAFC data is used daily to safely and economically route aircraft, particularly on long-haul journeys. The data provides details of wind speed and direction, air temperature, cloud type and tops, and other features. Volcanic Ash Advisory Centre As part of its aviation forecast operation the Met Office operates the London Volcanic Ash Advisory Centre (VAAC). This provides forecasts to the aviation industry of volcanic ash clouds that could enter aircraft flight paths and impact aviation safety. The London VAAC, one of nine worldwide, is responsible for the area covering the British Isles, the north east Atlantic and Iceland. The VAAC were set up by the International Civil Aviation Organization (ICAO), an agency of the United Nations, as part of the International Airways Volcano Watch (IAVW). The London VAAC makes use of satellite images, plus seismic, radar and visual observation data from Iceland, the location of all of the active volcanoes in its area of responsibility. The NAME dispersion model developed by the Met Office is used to forecast the movement of the ash clouds 6, 12 and 18 hours from the time of the alert at different flight levels. Air quality The Met Office issues air quality forecasts made using NAME, the Met Office's medium-to-long-range atmospheric dispersion model. It was developed as a nuclear accident model following the Chernobyl accident in 1986, but has since evolved into an all-purpose dispersion model capable of predicting the transport, transformation and deposition of a wide class of airborne materials. NAME is used operationally by the Met Office as an emergency response model as well as for routine air quality forecasting. Aerosol dispersion is calculated using the United Kingdom Chemistry and Aerosols model. The forecast is produced for pollutants and their typical health effects are shown in the following table. Decadal Predictions The Met Office coordinates the production and collation of decadal climate prediction from climate centres around the world as part of its responsibilities as World Meteorological Organisation Lead Centre for Annual to Decadal Climate Prediction. 
These predictions are updated each year and a summary, the Global Annual to Decadal Climate Update is published each year. IPCC Until 2001 the Met Office hosted the Intergovernmental Panel on Climate Change working group, chaired by John Houghton, on climate science. In 2001 the working group moved to the National Oceanic and Atmospheric Administration. High performance computing Due to the large amount of computation needed for Numerical Weather Prediction and the Unified model, the Met Office has had some of the most powerful supercomputers in the world. In November 1997 the Met Office supercomputer was ranked third in the world. Customer service Since 2012 the Met Office Contact Centre (known as the Weather Desk) has been part of Contact Centre Panel's 'Top 50 Companies for Customer Service' programme. In 2015 the Met Office won awards in the following categories: Rated 1st Overall for Combined Channels Most Improved Overall for Social Media Rated 2nd Overall for Call Service Rated 1st Overall for Email Service Best in Public Sector Best Extra Small Centre Weather stations Reports (observations) from weather stations can be automatic (totally machine produced), semi-automatic (part-machine and part manual), or manual. Some stations produce manual observations during business hours and revert to automatic observations outside these times. Many stations feature "present weather" sensors, CCTV, etc. There is also a network of 'upper air' stations, using radiosondes. The six main radiosonde stations in the UK are Camborne, Lerwick, Albemarle, Watnall, Castor Bay and Herstmonceux. Some stations have limited reporting times, while other report continuously, mainly RAF and Army Air Corps stations where a staffed met office is provided for military operations. The "standard" is a once-hourly reporting schedule, but automatic stations can often be "polled" as required, whilst stations at airfields report twice-hourly, with additional (often frequent in times of bad weather) special reports as necessary to inform airfield authorities of changes to the weather that may affect aviation operations. Some stations report only CLIMAT data (e.g. maximum and minimum temperatures, rainfall totals over a period, etc.) and these are usually recorded at 0900 and 2100 hours daily. Weather reports are often performed by observers not specifically employed by the Met Office, such as Air traffic control staff, coastguards, university staff and so on. Eskdalemuir Observatory Lerwick Observatory Penkridge weather station Prestatyn weather station Stonyhurst Sutton Bonington Wye weather station RAF Benson RAF Brize Norton weather station RAF Coningsby RAF Cottesmore RAF Cranwell weather station RAF Kinloss weather station RAF Leeming weather station RAF Leuchars weather station RAF Linton-on-Ouse weather station RAF Little Rissington weather station (supported by RAF Brize Norton) RAF Lossiemouth weather station RAF Lyneham weather station RAF Marham weather station RAF Northolt weather station 51.55 N 0.417 W RAF Odiham weather station RAF Shawbury RAF Waddington weather station Wattisham Flying Station weather station RAF Valley Middle Wallop Flying Station weather station Meteorological Research Unit and the Facility for Airborne Atmospheric Measurements (FAAM) Meteorological Research was carried out at RAE Bedford with instruments being carried by barrage balloons until the RAE facility closed in the 1980s. The Met Office association with Cardington continues by maintaining a Meteorological Research Unit (MRU). 
This is responsible for conducting research into part of the atmosphere called the boundary layer by using a tethered balloon which is kept in a small portable hangar. FAAM The Facility for Airborne Atmospheric Measurements (FAAM), part of the National Centre for Atmospheric Science, is based at Cranfield Airport. It is a collaboration with the Natural Environment Research Council. The FAAM was established as part of the National Centre for Atmospheric Science (NCAS), itself part of NERC, to provide aircraft measurement for use by UK atmospheric research organisations on worldwide campaigns. The main equipment is a modified BAe 146 type 301 aircraft, registration G-LUXE, owned and operated by BAE Systems on behalf of Directflight Limited. Areas of application include: Radiative transfer studies in clear and cloudy air; Tropospheric chemistry measurements; Cloud physics and dynamic studies; Dynamics of mesoscale weather systems; Boundary layer and turbulence studies; Remote sensing: verification of ground-based instruments; Satellite ground truth: radiometric measurements and winds; Satellite instrument test-bed; Campaigns in the UK and abroad. Directors General and Chief Executives Sir William Napier Shaw 1905–1920 Sir Graham Sutton 1954–1965 Sir Basil John Mason 1965–1983 Sir John Houghton 1983–1991 Julian Hunt 1992–1997 Peter Ewins 1997–2004 David Rogers 2004–2005 Mark Hutchinson 2005–2007 John Hirst 2007–2014 Rob Varley 2014–2018 Penelope Endersby 2018– See also Climatic Research Unit email controversy Climate of the United Kingdom Climate change in the United Kingdom Burns' Day storm Eskdalemuir Observatory European Centre for Medium-Range Weather Forecasts Great Storm of 1987 Met Éireann, the Irish meteorological service, which separated from the UK Met Office in 1936. North West Shelf Operational Oceanographic System Weather system naming in Europe References Further reading Hunt, Roger, "The end of weather forecasting at Met Office London", Weather magazine, Royal Meteorological Society, June 2007, v.62, no.6, pp. 143–146 Walker, Malcolm (J M), History of the Meteorological Office (December 2011) Cambridge University Press External links BBC Weather Centre BBC Shipping Forecast page Met Office (National Meteorological) Archive MetOffice@Reading at University of Reading Joint Centre for Hydro-Meteorological Research Met Office news blog Atmospheric dispersion modeling Climate of the United Kingdom Companies based in Exeter Department for Science, Innovation and Technology Economy of Devon Executive agencies of the United Kingdom government Governmental meteorological agencies in Europe Information technology organisations based in the United Kingdom Organisations based in Devon Organizations established in 1854 Science and technology in Devon Trading funds of the United Kingdom government 1854 establishments in the United Kingdom
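The NAME model mentioned in the sections on volcanic ash and air quality above is a Lagrangian particle dispersion model, and its internals are far more involved than anything shown here. As a much simpler illustration of the kind of quantity a dispersion forecast estimates, the sketch below evaluates a textbook steady-state Gaussian plume at ground level; all input values, including the dispersion parameters, are assumptions chosen for the example rather than Met Office data.

```python
import math

def gaussian_plume_ground(Q, u, H, y, sigma_y, sigma_z):
    """Ground-level (z = 0) concentration of a steady-state Gaussian plume.

    Q        emission rate (g/s)
    u        wind speed at release height (m/s)
    H        effective release height (m)
    y        crosswind distance of the receptor from the plume centreline (m)
    sigma_y  horizontal dispersion parameter at the receptor's downwind distance (m)
    sigma_z  vertical dispersion parameter at the same distance (m)
    """
    return (Q / (math.pi * u * sigma_y * sigma_z)
            * math.exp(-y**2 / (2 * sigma_y**2))
            * math.exp(-H**2 / (2 * sigma_z**2)))

# Illustrative inputs only (not Met Office data): a 10 g/s release from a 50 m
# stack in a 5 m/s wind, evaluated on the plume centreline about 2 km downwind,
# with dispersion parameters assumed for roughly neutral conditions.
Q, u, H, y = 10.0, 5.0, 50.0, 0.0
sigma_y, sigma_z = 150.0, 70.0  # metres, assumed

c = gaussian_plume_ground(Q, u, H, y, sigma_y, sigma_z)
print(f"Ground-level concentration: {c * 1e6:.0f} µg/m³")  # about 47 µg/m³ here
```

Operational models such as NAME instead track large numbers of model particles through analysed and forecast wind fields, which is what allows them to handle the transport, transformation and deposition processes described above.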
Met Office
[ "Chemistry", "Engineering", "Environmental_science" ]
3,224
[ "Atmospheric dispersion modeling", "Environmental modelling", "Environmental engineering" ]
319,712
https://en.wikipedia.org/wiki/Well-founded%20relation
In mathematics, a binary relation R is called well-founded (or wellfounded or foundational) on a set or, more generally, a class X if every non-empty subset S ⊆ X has a minimal element with respect to R; that is, there exists an m ∈ S such that, for every s ∈ S, one does not have s R m. In other words, a relation is well-founded if every non-empty subset of its domain contains an element to which no other element of that subset is related. Some authors include an extra condition that R is set-like, i.e., that the elements less than any given element form a set. Equivalently, assuming the axiom of dependent choice, a relation is well-founded when it contains no infinite descending chains; that is, when there is no infinite sequence x0, x1, x2, ... of elements of X such that xn+1 R xn for every natural number n. In order theory, a partial order is called well-founded if the corresponding strict order is a well-founded relation. If the order is a total order then it is called a well-order. In set theory, a set x is called a well-founded set if the set membership relation is well-founded on the transitive closure of x. The axiom of regularity, which is one of the axioms of Zermelo–Fraenkel set theory, asserts that all sets are well-founded. A relation R is converse well-founded, upwards well-founded or Noetherian on X, if the converse relation is well-founded on X. In this case R is also said to satisfy the ascending chain condition. In the context of rewriting systems, a Noetherian relation is also called terminating. Induction and recursion An important reason that well-founded relations are interesting is that a version of transfinite induction can be used on them: if (X, R) is a well-founded relation, P(x) is some property of elements of X, and we want to show that P(x) holds for all elements x of X, it suffices to show that: If x is an element of X and P(y) is true for all y such that y R x, then P(x) must also be true. Well-founded induction is sometimes called Noetherian induction, after Emmy Noether. On par with induction, well-founded relations also support construction of objects by transfinite recursion. Let (X, R) be a set-like well-founded relation and F a function that assigns an object F(x, g) to each pair consisting of an element x ∈ X and a function g on the initial segment {y : y R x} of X. Then there is a unique function G such that for every x ∈ X, G(x) = F(x, G restricted to {y : y R x}). That is, if we want to construct a function G on X, we may define G(x) using the values of G(y) for y R x. As an example, consider the well-founded relation (N, S), where N is the set of all natural numbers, and S is the graph of the successor function x ↦ x + 1. Then induction on S is the usual mathematical induction, and recursion on S gives primitive recursion. If we consider the order relation (N, <), we obtain complete induction, and course-of-values recursion. The statement that (N, <) is well-founded is also known as the well-ordering principle. There are other interesting special cases of well-founded induction. When the well-founded relation is the usual ordering on the class of all ordinal numbers, the technique is called transfinite induction. When the well-founded set is a set of recursively-defined data structures, the technique is called structural induction. When the well-founded relation is set membership on the universal class, the technique is known as ∈-induction. See those articles for more details. Examples Well-founded relations that are not totally ordered include: The positive integers {1, 2, 3, ...}, with the order defined by a < b if and only if a divides b and a ≠ b. The set of all finite strings over a fixed alphabet, with the order defined by s < t if and only if s is a proper substring of t. The set N × N of pairs of natural numbers, ordered by (n1, n2) < (m1, m2) if and only if n1 < m1 and n2 < m2.
Every class whose elements are sets, with the relation ∈ ("is an element of"). This is the axiom of regularity. The nodes of any finite directed acyclic graph, with the relation R defined such that a R b if and only if there is an edge from a to b. Examples of relations that are not well-founded include: The negative integers {−1, −2, −3, ...}, with the usual order, since any unbounded subset has no least element. The set of strings over a finite alphabet with more than one element, under the usual (lexicographic) order, since the sequence "B" > "AB" > "AAB" > "AAAB" > ... is an infinite descending chain. This relation fails to be well-founded even though the entire set has a minimum element, namely the empty string. The set of non-negative rational numbers (or reals) under the standard ordering, since, for example, the subset of positive rationals (or reals) lacks a minimum. Other properties If (X, R) is a well-founded relation and x is an element of X, then the descending chains starting at x are all finite, but this does not mean that their lengths are necessarily bounded. Consider the following example: Let X be the union of the positive integers with a new element ω that is bigger than any integer. Then X is a well-founded set, but there are descending chains starting at ω of arbitrarily great (finite) length; the chain ω, n − 1, n − 2, ..., 2, 1 has length n for any n. The Mostowski collapse lemma implies that set membership is universal among the extensional well-founded relations: for any set-like well-founded relation R on a class X that is extensional, there exists a class C such that (X, R) is isomorphic to (C, ∈). Reflexivity A relation R is said to be reflexive if a R a holds for every a in the domain of the relation. Every reflexive relation on a nonempty domain has infinite descending chains, because any constant sequence is a descending chain. For example, in the natural numbers with their usual order ≤, we have 1 ≥ 1 ≥ 1 ≥ .... To avoid these trivial descending sequences, when working with a partial order ≤, it is common to apply the definition of well-foundedness (perhaps implicitly) to the alternate relation < defined such that a < b if and only if a ≤ b and a ≠ b. More generally, when working with a preorder ≤, it is common to use the relation < defined such that a < b if and only if a ≤ b and b ≰ a. In the context of the natural numbers, this means that the relation <, which is well-founded, is used instead of the relation ≤, which is not. In some texts, the definition of a well-founded relation is changed from the definition above to include these conventions. References Just, Winfried and Weese, Martin (1998) Discovering Modern Set Theory. I, American Mathematical Society. Karel Hrbáček & Thomas Jech (1999) Introduction to Set Theory, 3rd edition, "Well-founded relations", pages 251–5, Marcel Dekker
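The induction and recursion principles described above can be made concrete in a few lines of code. The following Python sketch is illustrative only: the helper names (is_well_founded, wf_recursion) and the divisibility example are choices made here, not part of the article, and the finiteness check relies on the fact that a finite relation is well-founded exactly when minimal elements can be peeled off repeatedly.

```python
from functools import lru_cache

def is_well_founded(elements, rel):
    """Check well-foundedness of a finite relation.

    rel(a, b) returns True when a R b ("a is below b").  A finite relation
    is well-founded exactly when every non-empty subset has an R-minimal
    element, which for finite sets we can test by repeatedly removing
    minimal elements; if at some point none exists, the relation has a
    cycle (or a reflexive point) and is not well-founded.
    """
    remaining = set(elements)
    while remaining:
        minimal = next((m for m in remaining
                        if not any(rel(s, m) for s in remaining)), None)
        if minimal is None:
            return False
        remaining.remove(minimal)
    return True

def wf_recursion(below, step):
    """Define G by well-founded recursion: G(x) = step(x, {y: G(y) for y R x}).

    below(x) must return the set {y : y R x}; termination is guaranteed
    only when that relation really is well-founded.
    """
    @lru_cache(maxsize=None)
    def G(x):
        return step(x, {y: G(y) for y in below(x)})
    return G

# Example: recursion over the strict divisibility order on positive integers,
# counting the divisors of n (each value depends only on values at proper divisors).
count_divisors = wf_recursion(
    below=lambda n: frozenset(d for d in range(1, n) if n % d == 0),
    step=lambda n, smaller: 1 + len(smaller),
)
print(count_divisors(12))                                                  # 6
print(is_well_founded(range(1, 20), lambda a, b: b % a == 0 and a != b))   # True
```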
Well-founded relation
[ "Mathematics" ]
1,335
[ "Mathematical induction", "Order theory", "Wellfoundedness" ]
319,834
https://en.wikipedia.org/wiki/Rutherford%20scattering%20experiments
The Rutherford scattering experiments were a landmark series of experiments by which scientists learned that every atom has a nucleus where all of its positive charge and most of its mass is concentrated. They deduced this after measuring how an alpha particle beam is scattered when it strikes a thin metal foil. The experiments were performed between 1906 and 1913 by Hans Geiger and Ernest Marsden under the direction of Ernest Rutherford at the Physical Laboratories of the University of Manchester. The physical phenomenon was explained by Rutherford in a classic 1911 paper that eventually led to the widespread use of scattering in particle physics to study subatomic matter. Rutherford scattering or Coulomb scattering is the elastic scattering of charged particles by the Coulomb interaction. The paper also initiated the development of the planetary Rutherford model of the atom and eventually the Bohr model. Rutherford scattering is now exploited by the materials science community in an analytical technique called Rutherford backscattering. Summary Thomson's model of the atom The prevailing model of atomic structure before Rutherford's experiments was devised by J. J. Thomson. Thomson had discovered the electron through his work on cathode rays and proposed that they existed within atoms, and that an electric current is electrons hopping from one atom to an adjacent one in a series. There logically had to be a commensurate amount of positive charge to balance the negative charge of the electrons and hold those electrons together. Having no idea what the source of this positive charge was, he tentatively proposed that the positive charge was everywhere in the atom, adopting a spherical shape for simplicity. Thomson imagined that the balance of electrostatic forces would distribute the electrons throughout this sphere in a more or less even manner. Thomson also believed the electrons could move around in this sphere, and in that regard he likened the substance of the sphere to a liquid. In fact the positive sphere was more of an abstraction than anything material. He did not propose a positively charged subatomic particle as a counterpart to the electron. Thomson was never able to develop a complete and stable model that could predict any of the other known properties of the atom, such as emission spectra and valencies. The Japanese scientist Hantaro Nagaoka rejected Thomson's model on the grounds that opposing charges cannot penetrate each other. He proposed instead that electrons orbit the positive charge like the rings around Saturn. However, this model was also known to be unstable. Alpha particles and the Thomson atom An alpha particle is a positively charged particle of matter that is spontaneously emitted from certain radioactive elements. Alpha particles are so tiny as to be invisible, but they can be detected with the use of phosphorescent screens, photographic plates, or electrodes. Rutherford discovered them in 1899. In 1906, by studying how alpha particle beams are deflected by magnetic and electric fields, he deduced that they were essentially helium atoms stripped of two electrons. Thomson and Rutherford knew nothing about the internal structure of alpha particles. Prior to 1911 they were thought to have a diameter similar to that of helium atoms and to contain ten or so electrons. Thomson's model was consistent with the experimental evidence available at the time.
Thomson studied beta particle scattering which showed small angle deflections modelled as interactions of the particle with many atoms in succession. Each interaction of the particle with the electrons of the atom and the positive background sphere would lead to a tiny deflection, but many such collisions could add up. The scattering of alpha particles was expected to be similar. Rutherford's team would show that the multiple scattering model was not needed: single scattering from a compact charge at the centre of the atom would account for all of the scattering data. Rutherford, Geiger, and Marsden Ernest Rutherford was Langworthy Professor of Physics at the Victoria University of Manchester (now the University of Manchester). He had already received numerous honours for his studies of radiation. He had discovered the existence of alpha rays, beta rays, and gamma rays, and had proved that these were the consequence of the disintegration of atoms. In 1906, he received a visit from the German physicist Hans Geiger, and was so impressed that he asked Geiger to stay and help him with his research. Ernest Marsden was a physics undergraduate student studying under Geiger. In 1908, Rutherford sought to independently determine the charge and mass of alpha particles. To do this, he wanted to count the number of alpha particles and measure their total charge; the ratio would give the charge of a single alpha particle. Alpha particles are too tiny to see, but Rutherford knew about Townsend discharge, a cascade effect from ionisation leading to a pulse of electric current. On this principle, Rutherford and Geiger designed a simple counting device which consisted of two electrodes in a glass tube. (See #1908 experiment.) Every alpha particle that passed through the tube would create a pulse of electricity that could be counted. It was an early version of the Geiger counter. The counter that Geiger and Rutherford built proved unreliable because the alpha particles were being too strongly deflected by their collisions with the molecules of air within the detection chamber. The highly variable trajectories of the alpha particles meant that they did not all generate the same number of ions as they passed through the gas, thus producing erratic readings. This puzzled Rutherford because he had thought that alpha particles were too heavy to be deflected so strongly. Rutherford asked Geiger to investigate how far matter could scatter alpha rays. The experiments they designed involved bombarding a metal foil with a beam of alpha particles to observe how the foil scattered them in relation to its thickness and material. They used a phosphorescent screen to measure the trajectories of the particles. Each impact of an alpha particle on the screen produced a tiny flash of light. Geiger worked in a darkened lab for hours on end, counting these tiny scintillations using a microscope. For the metal foil, they tested a variety of metals, but favoured gold because they could make the foil very thin, as gold is the most malleable metal. As a source of alpha particles, Rutherford's substance of choice was radium, which is thousands of times more radioactive than uranium. Scattering theory and the new atomic model In a 1909 experiment, Geiger and Marsden discovered that the metal foils could scatter some alpha particles in all directions, sometimes more than 90°. This should have been impossible according to Thomson's model. According to Thomson's model, all the alpha particles should have gone straight through. 
In Thomson's model of the atom, the sphere of positive charge that fills the atom and encapsulates the electrons is permeable; the electrons could move around in it, after all. Therefore, an alpha particle should be able to pass through this sphere if the electrostatic forces within permit it. Thomson himself did not study how an alpha particle might be scattered in such a collision with an atom, but he did study beta particle scattering. He calculated that a beta particle would only experience very small deflection when passing through an atom, and even after passing through many atoms in a row, the total deflection should still be less than 1°. Alpha particles typically have much more momentum than beta particles and therefore should likewise experience only the slightest deflection. The extreme scattering observed forced Rutherford to revise the model of the atom. The issue in Thomson's model was that the charges were too diffuse to produce a sufficiently strong electrostatic force to cause such repulsion. Therefore they had to be more concentrated. In Rutherford's new model, the positive charge does not fill the entire volume of the atom but instead constitutes a tiny nucleus at least 10,000 times smaller than the atom as a whole. All that positive charge concentrated in a much smaller volume produces a much stronger electric field near its surface. The nucleus also carried most of the atom's mass. This meant that it could deflect alpha particles by up to 180° depending on how close they pass. The electrons surround this nucleus, spread throughout the atom's volume. Because their negative charge is diffuse and their combined mass is low, they have a negligible effect on the alpha particle. To verify his model, Rutherford developed a scientific model to predict the intensity of alpha particles at the different angles they scattered coming out of the gold foil, assuming all of the positive charge was concentrated at the centre of the atom. This model was validated in an experiment performed in 1913. His model explained both the beta scattering results of Thomson and the alpha scattering results of Geiger and Marsden. Legacy There was little reaction to Rutherford's now-famous 1911 paper in the first years. The paper was primarily about alpha particle scattering in an era before particle scattering was a primary tool for physics. The probability techniques he used and confusing collection of observations involved were not immediately compelling. Nuclear physics The first impacts were to encourage new focus on scattering experiments. For example the first results from a cloud chamber, by C.T.R. Wilson shows alpha particle scattering and also appeared in 1911. Over time, particle scattering became a major aspect of theoretical and experimental physics; Rutherford's concept of a "cross-section" now dominates the descriptions of experimental particle physics. The historian Silvan S. Schweber suggests that Rutherford's approach marked the shift to viewing all interactions and measurements in physics as scattering processes. After the nucleus - a term Rutherford introduced in 1912 - became the accepted model for the core of atoms, Rutherford's analysis of the scattering of alpha particles created a new branch of physics, nuclear physics. Atomic model Rutherford's new atom model caused no stir. Rutherford explicitly ignores the electrons, only mentioning Hantaro Nagaoka's Saturnian model of electrons orbiting a tiny "sun", a model that had been previously rejected as mechanically unstable. 
By ignoring the electrons Rutherford also ignores any potential implications for atomic spectroscopy for chemistry. Rutherford himself did not press the case for his atomic model: his own 1913 book on "Radioactive substances and their radiations" only mentions the atom twice; other books by other authors around this time focus on Thomson's model. The impact of Rutherford's nuclear model came after Niels Bohr arrived as a post-doctoral student in Manchester at Rutherford's invitation. Bohr dropped his work on the Thomson model in favour of Rutherford's nuclear model, developing the Rutherford–Bohr model over the next several years. Eventually Bohr incorporated early ideas of quantum mechanics into the model of the atom, allowing prediction of electronic spectra and concepts of chemistry. Hantaro Nagaoka, who had proposed a Saturnian model of the atom, wrote to Rutherford from Tokyo in 1911: "I have been struck with the simpleness of the apparatus you employ and the brilliant results you obtain." The astronomer Arthur Eddington called Rutherford's discovery the most important scientific achievement since Democritus proposed the atom ages earlier. Rutherford has since been hailed as "the father of nuclear physics". In a lecture delivered on 15 October 1936 at Cambridge University, Rutherford described his shock at the results of the 1909 experiment: Rutherford's claim of surprise makes a good story but by the time of the Geiger-Marsden experiment the result confirmed suspicions Rutherford developed from his many previous experiments. Experiments Alpha particle scattering: 1906 and 1908 experiments Rutherford's first steps towards his discovery of the nature of the atom came from his work to understand alpha particles. In 1906, Rutherford noticed that alpha particles passing through sheets of mica were deflected by the sheets by as much as 2 degrees. Rutherford placed a radioactive source in a sealed tube ending with a narrow slits followed by a photographic plate. Half of the slit was covered by a thin layer of mica. A magnetic field around the tube was altered every 10 minutes to reject the effect of beta rays, known to be sensitive to magnetic fields. The tube was evacuated to different amounts and a series of images recorded. At the lowest pressure the image of the open slit was clear, while images of the mica covered slit or the open slit at higher pressures were fuzzy. Rutherford explained these results as alpha-particle scattering in a paper published in 1906. He already understood the implications of the observation for models of atoms: "such a result brings out clearly the fact that the atoms of matter must be the seat of very intense electrical forces". A 1908 paper by Geiger, On the Scattering of α-Particles by Matter, describes the following experiment. He constructed a long glass tube, nearly two metres long. At one end of the tube was a quantity of "radium emanation" (R) as a source of alpha particles. The opposite end of the tube was covered with a phosphorescent screen (Z). In the middle of the tube was a 0.9 mm-wide slit. The alpha particles from R passed through the slit and created a glowing patch of light on the screen. A microscope (M) was used to count the scintillations on the screen and measure their spread. Geiger pumped all the air out of the tube so that the alpha particles would be unobstructed, and they left a neat and tight image on the screen that corresponded to the shape of the slit. 
Geiger then allowed some air into the tube, and the glowing patch became more diffuse. Geiger then pumped out the air and placed one or two gold foils over the slit at AA. This too caused the patch of light on the screen to become more spread out, with the larger spread for two layers. This experiment demonstrated that both air and solid matter could markedly scatter alpha particles. Alpha particle reflection: the 1909 experiment The results of the initial alpha particle scattering experiments were confusing. The angular spread of the particle on the screen varied greatly with the shape of the apparatus and its internal pressure. Rutherford suggested that Ernest Marsden, a physics undergraduate student studying under Geiger, should look for diffusely reflected or back-scattered alpha particles, even though these were not expected. Marsden's first crude reflector got results, so Geiger helped him create a more sophisticated apparatus. They were able to demonstrate that 1 in 8000 alpha particle collisions were diffuse reflections. Although this fraction was small, it was much larger than the Thomson model of the atom could explain. These results were published in a 1909 paper, On a Diffuse Reflection of the α-Particles, where Geiger and Marsden described the experiment by which they proved that alpha particles can indeed be scattered by more than 90°. In their experiment, they prepared a small conical glass tube (AB) containing "radium emanation" (radon), "radium A" (polonium-218), and "radium C" (bismuth-214); its open end was sealed with mica. This was their alpha particle emitter. They then set up a lead plate (P), behind which they placed a fluorescent screen (S). The tube was held on the opposite side of the plate, such that the alpha particles it emitted could not directly strike the screen. They noticed a few scintillations on the screen because some alpha particles got around the plate by bouncing off air molecules. They then placed a metal foil (R) to the side of the lead plate. They tested with lead, gold, tin, aluminium, copper, silver, iron, and platinum. They pointed the tube at the foil to see if the alpha particles would bounce off it and strike the screen on the other side of the plate, and observed an increase in the number of scintillations on the screen. Counting the scintillations, they observed that metals with higher atomic mass, such as gold, reflected more alpha particles than lighter ones such as aluminium. Geiger and Marsden then wanted to estimate the total number of alpha particles that were reflected. The previous setup was unsuitable for doing this because the tube contained several radioactive substances (radium plus its decay products) and thus the alpha particles emitted had varying ranges, and because it was difficult for them to ascertain at what rate the tube was emitting alpha particles. This time, they placed a small quantity of radium C (bismuth-214) on the lead plate; its alpha particles bounced off a platinum reflector (R) and onto the screen. They concluded that approximately 1 in 8,000 of the alpha particles that struck the reflector bounced onto the screen. By measuring the reflection from thin foils they showed that the effect was a volume effect and not a surface effect. When contrasted with the vast number of alpha particles that pass unhindered through a metal foil, this small number of large angle reflections was a strange result that meant very large forces were involved.
Dependence on foil material and thickness: the 1910 experiment A 1910 paper by Geiger, The Scattering of the α-Particles by Matter, describes an experiment to measure how the most probable angle through which an alpha particle is deflected varies with the material it passes through, the thickness of the material, and the velocity of the alpha particles. He constructed an airtight glass tube from which the air was pumped out. At one end was a bulb (B) containing "radium emanation" (radon-222). By means of mercury, the radon in B was pumped up the narrow glass pipe whose end at A was plugged with mica. At the other end of the tube was a fluorescent zinc sulfide screen (S). The microscope which he used to count the scintillations on the screen was affixed to a vertical millimetre scale with a vernier, which allowed Geiger to precisely measure where the flashes of light appeared on the screen and thus calculate the particles' angles of deflection. The alpha particles emitted from A was narrowed to a beam by a small circular hole at D. Geiger placed a metal foil in the path of the rays at D and E to observe how the zone of flashes changed. He tested gold, tin, silver, copper, and aluminium. He could also vary the velocity of the alpha particles by placing extra sheets of mica or aluminium at A. From the measurements he took, Geiger came to the following conclusions: the most probable angle of deflection increases with the thickness of the material the most probable angle of deflection is proportional to the atomic mass of the substance the most probable angle of deflection decreases with the velocity of the alpha particles Rutherford's Structure of the Atom paper (1911) Considering the results of these experiments, Rutherford published a landmark paper in 1911 titled "The Scattering of α and β Particles by Matter and the Structure of the Atom" wherein he showed that single scattering from a very small and intense electric charge predicts primarily small-angle scattering with small but measurable amounts of backscattering. For the purpose of his mathematical calculations he assumed this central charge was positive, but he admitted he could not prove this and that he had to wait for other experiments to develop his theory. Rutherford developed a mathematical equation that modelled how the foil should scatter the alpha particles if all the positive charge and most of the atomic mass was concentrated in a point at the centre of an atom. From the scattering data, Rutherford estimated the central charge qn to be about +100 units. Rutherford's paper does not discuss any electron arrangement beyond discussions on the scattering from Thomson's plum pudding model and Nagaoka's Saturnian model. He shows that the scattering results predicted by Thomson's model are also explained by single scattering, but that Thomson's model does not explain large angle scattering. He says that Nagaoka's model, having a compact charge, would agree with the scattering data. The Saturnian model had previously been rejected on other grounds. The so-called Rutherford model of the atom with orbiting electrons was not proposed by Rutherford in the 1911 paper. Confirming the scattering theory: the 1913 experiment In a 1913 paper, The Laws of Deflexion of α Particles through Large Angles, Geiger and Marsden describe a series of experiments by which they sought to experimentally verify Rutherford's equation. 
Rutherford's equation predicted that the number of scintillations per minute s that will be observed at a given angle Φ should be proportional to: cosec^4(Φ/2); the thickness of the foil t; the magnitude of the square of the central charge, Qn^2; and the inverse fourth power of the velocity of the incident alpha particles. Their 1913 paper describes four experiments by which they proved each of these four relationships. To test how the scattering varied with the angle of deflection (i.e. if s ∝ csc^4(Φ/2)), Geiger and Marsden built an apparatus that consisted of a hollow metal cylinder mounted on a turntable. Inside the cylinder was a metal foil (F) and a radiation source containing radon (R), mounted on a detached column (T) which allowed the cylinder to rotate independently. The column was also a tube by which air was pumped out of the cylinder. A microscope (M) with its objective lens covered by a fluorescent zinc sulfide screen (S) penetrated the wall of the cylinder and pointed at the metal foil. They tested with silver and gold foils. By turning the table, the microscope could be moved a full circle around the foil, allowing Geiger to observe and count alpha particles deflected by up to 150°. Correcting for experimental error, Geiger and Marsden found that the number of alpha particles that are deflected by a given angle Φ is indeed proportional to csc^4(Φ/2). Geiger and Marsden then tested how the scattering varied with the thickness of the foil (i.e. if s ∝ t). They constructed a disc (S) with six holes drilled in it. The holes were covered with metal foil (F) of varying thickness, or none for control. This disc was then sealed in a brass ring (A) between two glass plates (B and C). The disc could be rotated by means of a rod (P) to bring each window in front of the alpha particle source (R). On the rear glass pane was a zinc sulfide screen (Z). Geiger and Marsden found that the number of scintillations that appeared on the screen was indeed proportional to the thickness, as long as the thickness was small. Geiger and Marsden reused the apparatus to measure how the scattering pattern varied with the square of the nuclear charge (i.e. if s ∝ Qn^2). Geiger and Marsden did not know what the positive charge of the nucleus of their metals was (they had only just discovered the nucleus existed at all), but they assumed it was proportional to the atomic weight, so they tested whether the scattering was proportional to the atomic weight squared. Geiger and Marsden covered the holes of the disc with foils of gold, tin, silver, copper, and aluminium. They measured each foil's stopping power by equating it to an equivalent thickness of air. They counted the number of scintillations per minute that each foil produced on the screen. They divided the number of scintillations per minute by the respective foil's air equivalent, then divided again by the square root of the atomic weight (Geiger and Marsden knew that for foils of equal stopping power, the number of atoms per unit area is proportional to the square root of the atomic weight). Thus, for each metal, Geiger and Marsden obtained the number of scintillations that a fixed number of atoms produce. For each metal, they then divided this number by the square of the atomic weight, and found that the ratios were about the same. Thus they proved that s ∝ Qn^2. Finally, Geiger and Marsden tested how the scattering varied with the velocity of the alpha particles (i.e. if s ∝ 1/v^4). Using the same apparatus, they slowed the alpha particles by placing extra sheets of mica in front of the alpha particle source.
They found that, within the range of experimental error, the number of scintillations was indeed proportional to . Positive charge on nucleus: 1913 In his 1911 paper (see above), Rutherford assumed that the central charge of the atom was positive, but a negative charge would have fitted his scattering model just as well. In a 1913 paper, Rutherford declared that the "nucleus" (as he now called it) was indeed positively charged, based on the result of experiments exploring the scattering of alpha particles in various gases. In 1917, Rutherford and his assistant William Kay began exploring the passage of alpha particles through gases such as hydrogen and nitrogen. In this experiment, they shot a beam of alpha particles through hydrogen, and they carefully placed their detector—a zinc sulfide screen—just beyond the range of the alpha particles, which were absorbed by the gas. They nonetheless picked up charged particles of some sort causing scintillations on the screen. Rutherford interpreted this as alpha particles knocking the hydrogen nuclei forwards in the direction of the beam, not backwards. Rutherford's scattering model Rutherford begins his 1911 paper with a discussion of Thomson's results on scattering of beta particles, a form of radioactivity that results in high velocity electrons. Thomson's model had electrons circulating inside of a sphere of positive charge. Rutherford highlights the need for compound or multiple scattering events: the deflections predicted for each collision are much less than one degree. He then proposes a model which will produce large deflections on a single encounter: place all of the positive charge at the centre of the sphere and ignore the electron scattering as insignificant. The concentrated charge will explain why most alpha particles do not scatter to any measurable degree – they fly past too far from the charge – and yet particles that do pass very close to the centre scatter through large angles. Maximum nuclear size estimate Rutherford begins his analysis by considering a head-on collision between the alpha particle and atom. This will establish the minimum distance between them, a value which will be used throughout his calculations. Assuming there are no external forces and that initially the alpha particles are far from the nucleus, the inverse-square law between the charges on the alpha particle and nucleus gives the potential energy gained by the particle as it approaches the nucleus. For head-on collisions between alpha particles and the nucleus, all the kinetic energy of the alpha particle is turned into potential energy and the particle stops and turns back. Where the particle stops at a distance from the centre, the potential energy matches the original kinetic energy: where Rearranging: For an alpha particle: (mass) = = (for the alpha particle) = 2 × = (for gold) = 79 × = (initial velocity) = (for this example) The distance from the alpha particle to the centre of the nucleus () at this point is an upper limit for the nuclear radius. Substituting these in gives the value of about , or 27 fm. (The true radius is about 7.3 fm.) The true radius of the nucleus is not recovered in these experiments because the alphas do not have enough energy to penetrate to more than 27 fm of the nuclear centre, as noted, when the actual radius of gold is 7.3 fm. 
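The head-on estimate above is easy to reproduce numerically. The following Python sketch uses an alpha-particle speed of 2.0×10^7 m/s; the speed is an assumed illustrative value (not taken from the sources), chosen so that the result reproduces the roughly 27 fm figure quoted above.

```python
# Distance of closest approach for a head-on alpha-gold collision:
# all kinetic energy is converted to Coulomb potential energy,
# (1/2) m v^2 = k * q_alpha * q_gold / r_min.

K = 8.9875e9          # Coulomb constant, N m^2 / C^2
E_CHARGE = 1.602e-19  # elementary charge, C
M_ALPHA = 6.645e-27   # mass of the alpha particle, kg

q_alpha = 2 * E_CHARGE    # alpha particle carries charge 2e
q_gold = 79 * E_CHARGE    # gold nucleus carries charge 79e
v = 2.0e7                 # assumed initial speed of the alpha particle, m/s

kinetic_energy = 0.5 * M_ALPHA * v**2
r_min = K * q_alpha * q_gold / kinetic_energy

print(f"kinetic energy = {kinetic_energy / 1.602e-13:.1f} MeV")
print(f"closest approach r_min = {r_min * 1e15:.0f} fm")   # about 27 fm
```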
Rutherford's 1911 paper started with a slightly different formula suitable for head-on collision with a sphere of positive charge: In Rutherford's notation, e is the elementary charge, N is the charge number of the nucleus (now also known as the atomic number), and E is the charge of an alpha particle. The convention in Rutherford's time was to measure charge in electrostatic units, distance in centimeters, force in dynes, and energy in ergs. The modern convention is to measure charge in coulombs, distance in meters, force in newtons, and energy in joules. Using coulombs requires using the Coulomb constant (k) in the equation. Rutherford used b as the turning point distance (called rmin above) and R is the radius of the atom. The first term is the Coulomb repulsion used above. This form assumes the alpha particle could penetrate the positive charge. At the time of Rutherford's paper, Thomson's plum pudding model proposed a positive charge with the radius of an atom, thousands of times larger than the rmin found above. Figure 1 shows how concentrated this potential is compared to the size of the atom. Many of Rutherford's results are expressed in terms of this turning point distance rmin, simplifying the results and limiting the need for units to this calculation of turning point. Single scattering by a heavy nucleus From his results for a head on collision, Rutherford knows that alpha particle scattering occurs close to the centre of an atom, at a radius 10,000 times smaller than the atom. The electrons have negligible effect. He begins by assuming no energy loss in the collision, that is he ignores the recoil of the target atom. He will revisit each of these issues later in his paper. Under these conditions, the alpha particle and atom interact through a central force, a physical problem studied first by Isaac Newton. A central force only acts along a line between the particles and when the force varies with the inverse square, like Coulomb force in this case, a detailed theory was developed under the name of the Kepler problem. The well-known solutions to the Kepler problem are called orbits and unbound orbits are hyperbolas. Thus Rutherford proposed that the alpha particle will take a hyperbolic trajectory in the repulsive force near the centre of the atom as shown in Figure 2. To apply the hyperbolic trajectory solutions to the alpha particle problem, Rutherford expresses the parameters of the hyperbola in terms of the scattering geometry and energies. He starts with conservation of angular momentum. When the particle of mass and initial velocity is far from the atom, its angular momentum around the centre of the atom will be where is the impact parameter, which is the lateral distance between the alpha particle's path and the atom. At the point of closest approach, labeled A in Figure 2, the angular momentum will be . Therefore Rutherford also applies the law of conservation of energy between the same two points: The left hand side and the first term on the right hand side are the kinetic energies of the particle at the two points; the last term is the potential energy due to the Coulomb force between the alpha particle and atom at the point of closest approach (A). qa is the charge of the alpha particle, qg is the charge of the nucleus, and k is the Coulomb constant. 
The energy equation can then be rearranged thus: For convenience, the non-geometric physical variables in this equation can be contained in a variable , which is the point of closest approach in a head-on collision scenario that was explored in a previous section of this article: This allows Rutherford to simplify the energy equation to: This leaves two simultaneous equations for , the first derived from the conservation of momentum equation and the second from the conservation of energy equation. Eliminating and gives a new formula for : The next step is to find a formula for . From Figure 2, is the sum of two distances related to the hyperbola, SO and OA. Using the following logic, these distances can be expressed in terms of angle and impact parameter . The eccentricity of a hyperbola is a value that describes the hyperbola's shape. It can be calculated by dividing the focal distance by the length of the semi-major axis, which per Figure 2 is . As can be seen in Figure 3, the eccentricity is also equal to , where is the angle between the major axis and the asymptote. Therefore: As can be deduced from Figure 2, the focal distance SO is and therefore With these formulas for SO and OA, the distance can be written in terms of and simplified using a trigonometric identity known as a half-angle formula: Applying a trigonometric identity known as the cotangent double angle formula and the previous equation for gives a simpler relationship between the physical and geometric variables: The scattering angle of the particle is and therefore . With the help of a trigonometric identity known as a reflection formula, the relationship between θ and b can be resolved to: which can be rearranged to give Rutherford gives some illustrative values as shown in this table: Rutherford's approach to this scattering problem remains a standard treatment in textbooks on classical mechanics. Intensity vs angle To compare with experiments, the relationship between impact parameter and scattering angle needs to be converted to probability versus angle. The scattering cross section gives the relative intensity by angles: In classical mechanics, the scattering angle is uniquely determined by the initial kinetic energy of the incoming particles and the impact parameter . Therefore, the number of particles scattered into an angle between and must be the same as the number of particles with associated impact parameters between and . For an incident intensity , this implies: Thus the cross section depends on scattering angle as: Using the impact parameter as a function of angle, , from the single scattering result above produces the Rutherford scattering cross section: s = the number of alpha particles falling on unit area at an angle of deflection Φ r = distance from point of incidence of α rays on scattering material X = total number of particles falling on the scattering material n = number of atoms in a unit volume of the material t = thickness of the foil qn = positive charge of the atomic nucleus qa = positive charge of the alpha particles m = mass of an alpha particle v = velocity of the alpha particle This formula predicted the results that Geiger measured in the coming year. The scattering probability into small angles greatly exceeds the probability into larger angles, reflecting the tiny nucleus surrounded by empty space. However, for rare close encounters, large angle scattering occurs with just a single target.
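The relationship between impact parameter and scattering angle, and the resulting cross section, can be sketched numerically. In this illustrative Python snippet the relation cot(θ/2) = 2b/r_min and the cross section dσ/dΩ = (r_min/4)^2 / sin^4(θ/2) are written in terms of the head-on closest-approach distance; the value of r_min is the roughly 27 fm estimate from the earlier section, and the sample impact parameters and angles are only chosen for illustration.

```python
import math

def scattering_angle(b, r_min):
    """Deflection angle (radians) for impact parameter b, using
    cot(theta/2) = 2*b / r_min, i.e. theta = 2*arctan(r_min / (2*b))."""
    return 2.0 * math.atan2(r_min, 2.0 * b)

def rutherford_cross_section(theta, r_min):
    """Differential cross section dsigma/dOmega = (r_min/4)^2 / sin^4(theta/2)."""
    return (r_min / 4.0) ** 2 / math.sin(theta / 2.0) ** 4

r_min = 2.7e-14  # metres, the closest-approach estimate for an alpha on gold

# A particle aimed well off-centre barely deflects; a near head-on one backscatters.
for b in (1e-12, 1e-13, 1e-14, 1e-15):
    theta = scattering_angle(b, r_min)
    print(f"b = {b:.0e} m  ->  theta = {math.degrees(theta):6.2f} deg")

# The cross section falls off steeply with angle: the csc^4(theta/2) law
# that Geiger and Marsden verified in 1913.
for deg in (5, 30, 90, 150):
    ds = rutherford_cross_section(math.radians(deg), r_min)
    print(f"theta = {deg:3d} deg  ->  dsigma/dOmega = {ds:.2e} m^2/sr")
```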
At the end of his development of the cross section formula, Rutherford emphasises that the results apply to single scattering and thus require measurements with thin foils. For thin foils the degree of scattering is proportional to the foil thickness in agreement with Geiger's measurements. Comparison to JJ Thomson's results At the time of Rutherford's paper, JJ Thomson was the "undisputed world master in the design of atoms". Rutherford needed to compare his new approach to Thomson's. Thomson's model, presented in 1910, modelled the electron collisions with hyperbolic orbits from his 1906 paper combined with a factor for the positive sphere. Multiple resulting small deflections compounded using a random walk. In his paper Rutherford emphasised that single scattering alone could account for Thomson's results if the positive charge were concentrated in the centre. Rutherford computes the probability of single scattering from a compact charge and demonstrates that it is 3 times larger than Thomson's multiple scattering probability. Rutherford completes his analysis including the effects of density and foil thickness, then concludes that thin foils are governed by single scattering, not multiple scattering. Later analysis showed Thomson's scattering model could not account for large scattering. The maximum angular deflection from electron scattering or from the positive sphere each come to less than 0.02°; even many such scattering events compounded would result in less than a one degree average deflection and a probability of scattering through 90° of less than one in 103500. Target recoil Rutherford's analysis assumed that alpha particle trajectories turned at the centre of the atom but the exit velocity was not reduced. This is equivalent to assuming that the concentrated charge at the centre had infinite mass or was anchored in place. Rutherford discusses the limitations of this assumption by comparing scattering from lighter atoms like aluminium with heavier atoms like gold. If the concentrated charge is lighter it will recoil from the interaction, gaining momentum while the alpha particle loses momentum and consequently slows down. Modern treatments analyze this type of Coulomb scattering in the centre of mass reference frame. The six coordinates of the two particles (also called "bodies") are converted into three relative coordinates between the two particles and three centre-of-mass coordinates moving in space (called the lab frame). The interaction only occurs in the relative coordinates, giving an equivalent one-body problem just as Rutherford solved, but with different interpretations for the mass and scattering angle. Rather than the mass of the alpha particle, the more accurate formula including recoil uses reduced mass: For Rutherford's alpha particle scattering from gold, with mass of 197, the reduced mass is very close to the mass of the alpha particle: For lighter aluminium, with mass 27, the effect is greater: a 13% difference in mass. Rutherford notes this difference and suggests experiments be performed with lighter atoms. The second effect is a change in scattering angle. The angle in the relative coordinate system or centre of mass frame needs to be converted to an angle in the lab frame. In the lab frame, denoted by a subscript L, the scattering angle for a general central potential is For a heavy particle like gold used by Rutherford, the factor can be neglected at almost all angles. Then the lab and relative angles are the same, . 
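The recoil corrections described above can be illustrated with a few lines of Python. The masses used are the mass numbers quoted in the text (4 for the alpha particle, 197 for gold, 27 for aluminium), the lab-frame conversion is the standard classical two-body result tan(θ_lab) = sin(θ) / (cos(θ) + m_projectile/m_target), and the function names are illustrative choices rather than anything from the sources.

```python
import math

def reduced_mass(m1, m2):
    """Reduced mass of a two-body system, mu = m1*m2 / (m1 + m2)."""
    return m1 * m2 / (m1 + m2)

def lab_angle(theta_cm, m_projectile, m_target):
    """Convert a centre-of-mass scattering angle to the lab frame."""
    return math.atan2(math.sin(theta_cm),
                      math.cos(theta_cm) + m_projectile / m_target)

m_alpha = 4.0  # mass number of the alpha particle, in atomic mass units
for name, m_target in (("gold", 197.0), ("aluminium", 27.0)):
    mu = reduced_mass(m_alpha, m_target)
    shift = (m_alpha - mu) / m_alpha          # fractional reduction from m_alpha
    theta_lab = lab_angle(math.radians(90.0), m_alpha, m_target)
    print(f"{name:9s}: mu = {mu:.2f} u ({shift:.0%} below the alpha mass), "
          f"90 deg in the CM frame appears at {math.degrees(theta_lab):.1f} deg in the lab")
# gold     : mu = 3.92 u (2% below), 90 deg -> 88.8 deg
# aluminium: mu = 3.48 u (13% below), 90 deg -> 81.6 deg
```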
The change in scattering angle alters the formula for differential cross-section needed for comparison to experiment. For any central potential, the differential cross-section in the lab frame is related to that in the centre-of-mass frame by where Limitations to Rutherford's scattering formula Very light nuclei and higher energies In 1919 Rutherford analyzed alpha particle scattering from hydrogen atoms, showing the limits of the 1911 formula even with corrections for reduced mass. Similar issues with smaller deviations for helium, magnesium, aluminium lead to the conclusion that the alpha particle was penetrating the nucleus in these cases. This allowed the first estimates of the size of atomic nuclei. Later experiments based on cyclotron acceleration of alpha particles striking heavier nuclei provided data for analysis of interaction between the alpha particle and the nuclear surface. However at energies that push the alpha particles deeper they are strongly absorbed by the nuclei, a more complex interaction. Quantum mechanics Rutherford's treatment of alpha particle scattering seems to rely on classical mechanics and yet the particles are of sub-atomic dimensions. However the critical aspects of the theory ultimately rely on conservation of momentum and energy. These concepts apply equally in classical and quantum regimes: the scattering ideas developed by Rutherford apply to subatomic elastic scattering problems like neutron-proton scattering. An alternative method to find the scattering angle This section presents an alternative method to find the relation between the impact parameter and deflection angle in a single-atom encounter, using a force-centric approach as opposed to the energy-centric one that Rutherford used. The scattering geometry is shown in this diagram The impact parameter b is the distance between the alpha particle's initial trajectory and a parallel line that goes through the nucleus. Smaller values of b bring the particle closer to the atom so it feels more deflection force resulting in a larger deflection angle θ. The goal is to find the relationship between b and the deflection angle. The alpha particle's path is a hyperbola and the net change in momentum runs along the axis of symmetry. From the geometry in the diagram and the magnitude of the initial and final momentum vectors, , the magnitude of can be related to the deflection angle: A second formula for involving b will give the relationship to the deflection angle. The net change in momentum can also be found by adding small increments to momentum all along the trajectory using the integral where is the distance between the alpha particle and the centre of the nucleus and is its angle from the axis of symmetry. These two are the polar coordinates of the alpha particle at time . Here the Coulomb force exerted along the line between the alpha particle and the atom is and the factor gives that part of the force causing deflection. The polar coordinates r and φ depend on t in the integral, but they must be related to each other as they both vary as the particle moves. Changing the variable and limits of integration from t to φ makes this connection explicit: The factor is the reciprocal of the angular velocity the particle. 
Since the force is only along the line between the particle and the atom, the angular momentum, which is proportional to the angular velocity, is constant: This law of conservation of angular momentum gives a formula for : Replacing in the integral for ΔP simultaneously eliminates the dependence on r: Applying the trigonometric identities and to simplify this result gives the second formula for : Solving for θ as a function of b gives the final result Why the plum pudding model was wrong J. J. Thomson himself didn't study alpha particle scattering, but he did study beta particle scattering. In his 1910 paper "On the Scattering of rapidly moving Electrified Particles", Thomson presented equations that modelled how beta particles scatter in a collision with an atom. Rutherford adapted those equations to alpha particle scattering in his 1911 paper "The Scattering of α and β Particles by Matter and the Structure of the Atom". Deflection by the positive sphere In Thomson's 1910 paper "On the Scattering of rapidly moving Electrified Particles", Thomson presented the following equation (in this article's notation) that isolates the effect of the positive sphere in the plum pudding model on an incoming beta particle. Thomson did not explain how he arrived at this equation, but this section provides an educated guess and at the same time adapts the equation to alpha particle scattering. Consider an alpha particle passing by a positive sphere of pure positive charge (no electrons) with a radius R and mass equal to those of a gold atom. The alpha particle passes just close enough to graze the edge of the sphere, which is where the electric field of the sphere is strongest. An earlier section of this article presented an equation which models how an incoming charged particle is deflected by another charged particle at a fixed position. This equation can be used to calculate the deflection angle in the special case in Figure 4 by setting the impact parameter b to the same value as the radius of the sphere R. So long as the alpha particle does not penetrate the sphere, there is no practical difference between a sphere of charge and a point charge. qg = positive charge of the gold atom = = qa = charge of the alpha particle = = R = radius of the gold atom = v = speed of the alpha particle = m = mass of the alpha particle = k = Coulomb constant = This shows that the largest possible deflection will be very small, to the point that the path of the alpha particle passing through the positive sphere of a gold atom is almost a straight line. Therefore in computing the average deflection, which will be smaller still, we will treat the particle's path through the sphere as a chord of length L. Inside a sphere of uniformly distributed positive charge, the force exerted on the alpha particle at any point along its path through the sphere is The lateral component of this force is The lateral change in momentum py is therefore The deflection angle is given by where px is the average horizontal momentum, which is first reduced then restored as horizontal force changes direction as the alpha particle goes across the sphere. Since the deflection is very small, can be treated as equal to . The chord length , per Pythagorean theorem. The average deflection angle sums the angle for values of b and L across the entire sphere and divides by the cross-section of the sphere: This matches Thomson's formula in his 1910 paper. 
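To put a number on the grazing-incidence deflection just described, here is a small Python sketch. The atomic radius (10^−10 m) and alpha-particle speed (2.0×10^7 m/s) are assumed illustrative values consistent with the orders of magnitude quoted in this article, not figures from Thomson's paper; with them the maximum deflection by the positive sphere comes out well under a tenth of a degree, in line with the figure of less than 0.02° mentioned earlier.

```python
import math

K = 8.9875e9          # Coulomb constant, N m^2 / C^2
E_CHARGE = 1.602e-19  # elementary charge, C
M_ALPHA = 6.645e-27   # alpha particle mass, kg

R = 1.0e-10   # assumed radius of the gold atom, m
v = 2.0e7     # assumed alpha particle speed, m/s

q_alpha = 2 * E_CHARGE
q_gold = 79 * E_CHARGE

# Head-on closest-approach distance, then the grazing deflection with b = R,
# using the point-charge relation theta = 2*arctan(r_min / (2*b)).
r_min = 2 * K * q_alpha * q_gold / (M_ALPHA * v**2)
theta_max = 2 * math.atan(r_min / (2 * R))

print(f"maximum deflection by the positive sphere ~ {math.degrees(theta_max):.4f} deg")
# about 0.016 deg: far too small for the plum pudding model to explain
# the large-angle scattering that Geiger and Marsden observed.
```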
Deflection by the electrons Consider an alpha particle passing through an atom of radius R along a path of length L. The effect of the positive sphere is ignored so as to isolate the effect of the atomic electrons. As with the positive sphere, deflection by the electrons is expected to be very small, to the point that the path is practically a straight line. For the electrons within an arbitrary distance s of the alpha particle's path, their mean distance will be s. Therefore, the average deflection per electron will be where qe is the elementary charge. The average net deflection by all the electrons within this arbitrary cylinder of effect around the alpha particle's path is where N0 is the number of electrons per unit volume and is the volume of this cylinder. Treating L as a straight line, where b is the distance of this line from the centre. The mean of is therefore To obtain the mean deflection , replace in the equation for : where N is the number of electrons in the atom, equal to . Cumulative effect Applying Thomson's equations described above to an alpha particle colliding with a gold atom, using the following values: qg = positive charge of the gold atom; qa = charge of the alpha particle; qe = elementary charge; R = radius of the gold atom; v = speed of the alpha particle; m = mass of the alpha particle; k = Coulomb constant; N = number of electrons in the gold atom = 79. These values give the average angle by which the alpha particle should be deflected by the atomic electrons as: The average angle by which an alpha particle should be deflected by the positive sphere is: The net deflection for a single atomic collision is: On average the positive sphere and the electrons alike provide very little deflection in a single collision. Thomson's model combined many single-scattering events from the atom's electrons and a positive sphere. Each collision may increase or decrease the total scattering angle. Only very rarely would a series of collisions all line up in the same direction. The result is similar to the standard statistical problem called a random walk. If the average deflection angle of the alpha particle in a single collision with an atom is , then the average deflection after n collisions is The probability that an alpha particle will be deflected by a total of more than 90° after n deflections is given by: where e is Euler's number (≈2.71828...). A gold foil with a thickness of 1.5 micrometers would be about 10,000 atoms thick. If the average deflection per atom is 0.008°, the average deflection after 10,000 collisions would be 0.8°. The probability of an alpha particle being deflected by more than 90° after passing through the foil is therefore vanishingly small. While in Thomson's plum pudding model it is mathematically possible that an alpha particle could be deflected by more than 90° after 10,000 collisions, the probability of such an event is so low as to be undetectable. This extremely small probability shows that Thomson's model cannot explain the results of the Geiger-Marsden experiment of 1909. Notes on historical measurements Rutherford assumed the radius of atoms in general to be on the order of 10^−10 m and the positive charge of a gold atom to be about 100 times that of hydrogen. The atomic weight of gold was known to be around 197 since early in the 19th century. From an experiment in 1906, Rutherford measured alpha particles to have a charge of and an atomic weight of 4, and alpha particles emitted by radon to have velocity of .
Rutherford deduced that alpha particles are essentially helium atoms stripped of two electrons, but at the time scientists only had a rough idea of how many electrons atoms have and so the alpha particle was thought to have up to 10 electrons left. In 1906, J. J. Thomson measured the elementary charge to be about (). In 1909 Robert A. Millikan provided a more accurate measurement of , only 0.6% off the current accepted measurement. Jean Perrin in 1909 measured the mass of hydrogen to be , and if alpha particles are four times as heavy as that, they would have an absolute mass of . The convention in Rutherford's time was to measure charge in electrostatic units, distance in centimeters, force in dynes, and energy in ergs. The modern convention is to measure charge in coulombs, distance in meters, force in newtons, and energy in joules. Using coulombs requires using the Coulomb constant (k) in certain equations. In this article, Rutherford and Thomson's equations have been rewritten to fit modern notation conventions. See also Atomic theory Rutherford backscattering spectroscopy List of scattering experiments References Bibliography Chapter 4 Central forces External links Description of the experiment, from cambridgephysics.org Foundational quantum physics Physics experiments 1909 in science Ernest Rutherford Fixed-target experiments
Rutherford scattering experiments
[ "Physics" ]
9,803
[ "Quantum mechanics", "Foundational quantum physics", "Experimental physics", "Physics experiments" ]
319,888
https://en.wikipedia.org/wiki/Blame
Blame is the act of censuring, holding responsible, or making negative statements about an individual or group that their actions or inaction are socially or morally irresponsible, the opposite of praise. When someone is morally responsible for doing something wrong, their action is blameworthy. By contrast, when someone is morally responsible for doing something right, it may be said that their action is praiseworthy. There are other senses of praise and blame that are not ethically relevant. One may praise someone's good dress sense, and blame their own sense of style for their own dress sense. Philosophy Philosophers discuss the concept of blame as one of the reactive attitudes, a term coined by P. F. Strawson, which includes attitudes like blame, praise, gratitude, resentment, and forgiveness. In contrast to physical or intellectual concepts, reactive attitudes are formed from the point of view of an active participant regarding objects. This is to be distinguished from the objective standpoint. Neurology Blaming appears to relate to include brain activity in the temporoparietal junction (TPJ). The amygdala has been found to contribute when we blame others, but not when we respond to their positive actions. Sociology and psychology Humans—consciously and unconsciously—constantly make judgments about other people. The psychological criteria for judging others may be partly ingrained, negative, and rigid, indicating some degree of grandiosity. Blaming provides a way of devaluing others, with the result that the blamer feels superior, seeing others as less worthwhile and/or making the blamer "perfect". Off-loading blame means putting the other person down by emphasizing their flaws. Victims of manipulation and abuse frequently feel responsible for causing negative feelings in the manipulator/abuser towards them and the resultant anxiety in themselves. This self-blame often becomes a major feature of victim status. The victim gets trapped into a self-image of victimization. The psychological profile of victimization includes a pervasive sense of helplessness, passivity, loss of control, pessimism, negative thinking, strong feelings of guilt, shame, remorse, self-blame, and depression. This way of thinking can lead to hopelessness and despair. Self-blame Two main types of self-blame exist: behavioral self-blame – undeserved blame based on actions. Victims who experience behavioral self-blame feel that they should have done something differently, and therefore feel at fault. characterological self-blame – undeserved blame based on character. Victims who experience characterological self-blame feel there is something inherently wrong with them which has caused them to deserve to be victimized. Behavioral self-blame is associated with feelings of guilt within the victim. While the belief that one had control during the abuse (past control) is associated with greater psychological distress, the belief that one has more control during the recovery process (present control) is associated with less distress, less withdrawal, and more cognitive reprocessing. Counseling responses found helpful in reducing self-blame include: supportive responses psychoeducational responses (for example, learning about rape trauma syndrome) responses addressing the issue of blame. A helpful type of therapy for self-blame is cognitive restructuring or cognitive–behavioral therapy. 
Cognitive reprocessing is the process of taking the facts and forming a logical conclusion from them that is less influenced by shame or guilt. Victim blaming Victim blaming is holding the victims of a crime, an accident, or any type of abusive maltreatment to be entirely or partially responsible for the incident that has occurred. The fundamental attribution error concept explains how people tend to blame negative behavior more on the victims traits than the situation at the time of the event. Individual blame versus system blame In sociology, individual blame is the tendency of a group or society to hold the individual responsible for their situation, whereas system blame is the tendency to focus on social factors that contribute to one's fate. Blame shifting Blaming others can lead to a "kick the dog" effect where individuals in a hierarchy blame their immediate subordinate, and this propagates down a hierarchy until the lowest rung (the "dog"). A 2009 experimental study has shown that blaming can be contagious even for uninvolved onlookers. In complex international organizations, such as enforcers of national and supranational policies and regulations, the blame is usually attributed to the last echelon, the implementing actors. As a propaganda technique Labeling theory accounts for blame by postulating that when intentional actors act out to continuously blame an individual for nonexistent psychological traits and for nonexistent variables, those actors aim to induce irrational guilt at an unconscious level. Blame in this case becomes a propaganda tactic, using repetitive blaming behaviors, innuendos, and hyperbole in order to assign negative status to normative humans. When innocent people are blamed fraudulently for nonexistent psychological states and nonexistent behaviors, and there is no qualifying deviance for the blaming behaviors, the intention is to create a negative valuation of innocent humans to induce fear, by using fear mongering. For centuries, governments have used blaming in the form of demonization to influence public perceptions of various other governments, as well as to induce feelings of nationalism in the public. Blame can objectify people, groups, and nations, typically negatively influencing the intended subjects of propaganda, compromising their objectivity. Blame is utilized as a social-control technique. In organizations The flow of blame in an organization may be a primary indicator of that organization's robustness and integrity. Blame flowing downwards, from management to staff, or laterally between professionals or partner organizations, indicates organizational failure. In a blame culture, problem-solving is replaced by blame-avoidance. Blame coming from the top generates "fear, malaise, errors, accidents, and passive-aggressive responses from the bottom", with those at the bottom feeling powerless and lacking emotional safety. Employees have expressed that organizational blame culture made them fear prosecution for errors and/or accidents and thus unemployment, which may make them more reluctant to report accidents, since trust is crucial to encourage accident reporting. This makes it less likely that weak and/or long-term indicators of safety threats get picked up, thus preventing the organization from taking adequate measures to prevent minor problems from escalating into uncontrollable situations. Several issues identified in organizations with a blame culture contradict the best practices adopted by high reliability organizations. 
Organizational chaos, such as confused roles and responsibilities, is strongly associated with blame culture and workplace bullying. Blame culture promotes a risk-averse approach, which prevents organizations and their agents from adequately assessing risks. According to Mary Douglas, blame is systematically used in the micro-politics of institutions, with three latent functions: explaining disasters, justifying allegiances, and stabilizing existing institutional regimes. Within a politically stable regime, blame tends to be asserted on the weak or unlucky one, but in a less stable regime, blame shifting may involve a battle between rival factions. Douglas was interested in how blame stabilizes existing power structures within institutions or social groups. She devised a two-dimensional typology of institutions, the first attribute being named "group", which is the strength of boundaries and social cohesion, the second "grid", the degree and strength of the hierarchy. According to Douglas, blame will fall on different entities depending on the institutional type. For markets, blame is used in power struggles between potential leaders. In bureaucracies, blame tends to flow downwards and is attributed to a failure to follow rules. In a clan, blame is asserted on outsiders or involves allegations of treachery, to suppress dissidence and strengthen the group's ties. In the 4th type, isolation, the individuals are facing the competitive pressures of the marketplace alone; in other words, there is a condition of fragmentation with a loss of social cohesion, potentially leading to feelings of powerlessness and fatalism, and this type was renamed "donkey jobs" by various other authors. It is suggested that the progressive changes in managerial practices in healthcare are leading to an increase in donkey jobs. The requirement of accountability and transparency, assumed to be key for good governance, worsens the behaviors of blame avoidance, both at the individual and institutional levels, as is observed in various domains such as politics and healthcare. Indeed, institutions tend to be risk-averse and blame-averse, and where the management of societal risks (the threats to society) and institutional risks (threats to the organizations managing the societal risks) are not aligned, there may be organizational pressures to prioritize the management of institutional risks at the expense of societal risks. Furthermore, "blame-avoidance behaviour at the expense of delivering core business is a well-documented organizational rationality". The desire to maintain one's reputation may be a key factor explaining the relationship between accountability and blame avoidance. This may produce a "risk colonization", where institutional risks are transferred to societal risks, as a strategy of risk management. Some researchers argue that there is "no risk-free lunch" and "no blame-free risk", an analogy to the "no free lunch" adage. See also References Further reading Douglas, Tom. Scapegoats: Transferring Blame, London-New York, Routledge, 1995. Wilcox, Clifton W. Scapegoat: Targeted for Blame, Denver, Outskirts Press, 2009. External links Blaming Moral Responsibility (also on praise and blame), in the Stanford Encyclopedia of Philosophy Praise and Blame, in the Internet Encyclopedia of Philosophy Social psychology Concepts in ethics Behavior Accountability Moral psychology
Blame
[ "Biology" ]
1,968
[ "Behavior" ]
319,941
https://en.wikipedia.org/wiki/Thaumatin
Thaumatin (also known as talin) is a low-calorie sweetener and taste modifier. The protein is often used primarily for its flavor-modifying properties and not exclusively as a sweetener. The thaumatins were first found as a mixture of proteins isolated from the katemfe fruit (Thaumatococcus daniellii) (Marantaceae) of West Africa. Although very sweet, thaumatin's taste is markedly different from sugar's. The sweetness of thaumatin builds very slowly. Perception lasts a long time, leaving a liquorice-like aftertaste at high concentrations. Thaumatin is highly water soluble, stable to heating, and stable under acidic conditions. Biological role Thaumatin production is induced in katemfe in response to an attack upon the plant by viroid pathogens. Several members of the thaumatin protein family display significant in vitro inhibition of hyphal growth and sporulation by various fungi. The thaumatin protein is considered a prototype for a pathogen-response protein domain. This thaumatin domain has been found in species as diverse as rice and Caenorhabditis elegans. Thaumatins are pathogenesis-related (PR) proteins, which are induced by various agents ranging from ethylene to pathogens themselves, and are structurally diverse and ubiquitous in plants: They include thaumatin, osmotin, tobacco major and minor PR proteins, alpha-amylase/trypsin inhibitor, and P21 and PWIR2 soybean and wheat leaf proteins. The proteins are involved in systematically-acquired stress resistance and stress responses in plants, although their precise role is unknown. Thaumatin is an intensely sweet-tasting protein (on a molar basis about 100,000 times as sweet as sucrose) found in the fruit of the West African plant Thaumatococcus daniellii: it is induced by attack by viroids, which are single-stranded unencapsulated RNA molecules that do not code for protein. The thaumatin protein I consists of a single polypeptide chain of 207 residues. Like other PR proteins, thaumatin is predicted to have a mainly beta structure, with a high content of beta-turns and little helix. Tobacco cells exposed to gradually increased salt concentrations develop a greatly increased tolerance to salt, due to the expression of osmotin, a member of the PR protein family. Wheat plants attacked by barley powdery mildew express a PR protein (PWIR2), which results in resistance against that infection. The similarity between this PR protein and other PR proteins and the maize alpha-amylase/trypsin inhibitor has suggested that PR proteins may act as some form of inhibitor. Within West Africa, the katemfe fruit has been locally cultivated and used to flavour foods and beverages for some time. The fruit's seeds are encased in a membranous sac, or aril, that is the source of thaumatin. In the 1970s, Tate and Lyle began extracting thaumatin from the fruit. In 1990, researchers at Unilever reported the isolation and sequencing of the two principal proteins found in thaumatin, which they dubbed thaumatin I and thaumatin II. These researchers were also able to express thaumatin in genetically engineered bacteria. Thaumatin has been approved as a sweetener in the European Union (E957), Israel, and Japan. In the United States, it is generally recognized as safe as a flavouring agent (FEMA GRAS 3732) but not as a sweetener. Crystallization Since thaumatin crystallizes very quickly and easily in the presence of tartrate ions, thaumatin-tartrate mixtures are frequently used as model systems to study protein crystallization. 
The solubility of thaumatin, its crystal habit, and mechanism of crystal formation are dependent upon the chirality of precipitant used. When crystallized with L- tartrate, thaumatin forms bipyramidal crystals and displays a solubility that increases with temperature; with D- and meso-tartrate, it forms stubby and prismatic crystals and displays a solubility that decreases with temperature. This suggests control of precipitant chirality may be an important factor in protein crystallization in general. Characteristics As a food ingredient, thaumatin is considered to be safe for consumption. In a chewing gum production plant, thaumatin has been identified as an allergen. Switching from using powdered thaumatin to liquid thaumatin reduced symptoms among affected workers. Additionally, eliminating contact with powdered gum arabic (a known allergen) resulted in the disappearance of symptoms in all affected workers. Thaumatin interacts with human TAS1R3 receptor to produce a sweet taste. The interacting residues are specific to old world monkeys and apes (including humans); only these animals can perceive it as sweet. See also Curculin, a sweet protein from Malaysia with taste-modifying activity Miraculin, a protein from West Africa with taste-modifying activity Monellin, a sweet protein found in West Africa Stevia, a non-nutritive sweetener up to 150 times sweeter than sugar Lugduname, a sweetening agent up to 300,000 times sweeter than sugar References Further reading External links Sugar substitutes Protein domains Taste modifiers Food additives Plant proteins E-number additives
Thaumatin
[ "Biology" ]
1,130
[ "Protein domains", "Protein classification" ]
320,025
https://en.wikipedia.org/wiki/Automatic%20label%20placement
Automatic label placement, sometimes called text placement or name placement, comprises the computer methods of placing labels automatically on a map or chart. This is related to the typographic design of such labels. The typical features depicted on a geographic map are line features (e.g. roads), area features (countries, parcels, forests, lakes, etc.), and point features (villages, cities, etc.). In addition to depicting the map's features in a geographically accurate manner, it is of critical importance to place the names that identify these features, in a way that the reader knows instantly which name describes which feature. Automatic text placement is one of the most difficult, complex, and time-consuming problems in mapmaking and GIS (Geographic Information System). Other kinds of computer-generated graphics – like charts, graphs etc. – require good placement of labels as well, not to mention engineering drawings, and professional programs which produce these drawings and charts, like spreadsheets (e.g. Microsoft Excel) or computational software programs (e.g. Mathematica). Naively placed labels overlap excessively, resulting in a map that is difficult or even impossible to read. Therefore, a GIS must allow a few possible placements of each label, and often also an option of resizing, rotating, or even removing (suppressing) the label. Then, it selects a set of placements that results in the least overlap, and has other desirable properties. For all but the most trivial setups, the problem is NP-hard. Rule-based algorithms Rule-based algorithms try to emulate an experienced human cartographer. Over centuries, cartographers have developed the art of mapmaking and label placement. For example, an experienced cartographer repeats road names several times for long roads, instead of placing them once, or in the case of Ocean City depicted by a point very close to the shore, the cartographer would place the label "Ocean City" over the land to emphasize that it is a coastal town. Cartographers work based on accepted conventions and rules, such as those itemized by Swiss cartographer Eduard Imhof in 1962. For example, New York City, Vienna, Berlin, Paris, or Tokyo must show up on country maps because they are high-priority labels. Once those are placed, the cartographer places the next most important class of labels, for example major roads, rivers, and other large cities. In every step they ensure that (1) the text is placed in a way that the reader easily associates it with the feature, and (2) the label does not overlap with those already placed on the map. However, if a particular label placement problem can be formulated as a mathematical optimization problem, using mathematics to solve the problem is usually better than using a rule-based algorithm. Local optimization algorithms The simplest greedy algorithm places consecutive labels on the map in positions that result in minimal overlap of labels. Its results are not perfect even for very simple problems, but it is extremely fast. Slightly more complex algorithms rely on local optimization to reach a local optimum of a placement evaluation function – in each iteration placement of a single label is moved to another position, and if it improves the result, the move is preserved. It performs reasonably well for maps that are not too densely labelled. Slightly more complex variations try moving 2 or more labels at the same time. The algorithm ends after reaching some local optimum. 
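As an illustration of the local-optimization approach just described, the sketch below assumes each point feature has four candidate label positions (modelled as axis-aligned rectangles) and repeatedly moves one label at a time to whichever candidate reduces its overlaps with the other labels, stopping at a local optimum. The rectangle model, the four fixed candidates, and the plain overlap-counting objective are simplifying assumptions made for this sketch, not a specification from the literature.

/* A label candidate is an axis-aligned rectangle anchored at a point feature. */
typedef struct { double x, y, w, h; } Rect;

static int overlaps(Rect a, Rect b) {
    return a.x < b.x + b.w && b.x < a.x + a.w &&
           a.y < b.y + b.h && b.y < a.y + a.h;
}

/* Rectangle obtained by placing label i in candidate position p:
   0 = right/above, 1 = left/above, 2 = right/below, 3 = left/below. */
static Rect place(double px, double py, double w, double h, int p) {
    Rect r = { px, py, w, h };
    if (p == 1 || p == 3) r.x = px - w;
    if (p == 2 || p == 3) r.y = py - h;
    return r;
}

/* Number of other labels that label i would overlap in position p. */
static int conflicts(int i, int p, int n, const double *px, const double *py,
                     const double *w, const double *h, const int *pos) {
    int c = 0;
    Rect ri = place(px[i], py[i], w[i], h[i], p);
    for (int j = 0; j < n; ++j)
        if (j != i && overlaps(ri, place(px[j], py[j], w[j], h[j], pos[j])))
            ++c;
    return c;
}

/* Local optimization: start from a default placement, then move single labels
   to their least-conflicting candidate until no move improves the result. */
void place_labels(int n, const double *px, const double *py,
                  const double *w, const double *h, int *pos) {
    for (int i = 0; i < n; ++i) pos[i] = 0;
    int improved = 1;
    while (improved) {
        improved = 0;
        for (int i = 0; i < n; ++i) {
            int best = pos[i];
            int best_c = conflicts(i, best, n, px, py, w, h, pos);
            for (int p = 0; p < 4; ++p) {
                int c = conflicts(i, p, n, px, py, w, h, pos);
                if (c < best_c) { best_c = c; best = p; }
            }
            if (best != pos[i]) { pos[i] = best; improved = 1; }
        }
    }
}

A caller would fill px and py with feature coordinates, w and h with label extents, and read the chosen candidate index for each label from pos; an evaluation function that also penalizes cartographically poor positions, as rule-based systems do, could be substituted for the plain overlap count.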
A simple algorithm – simulated annealing – yields good results with relatively good performance. It works like local optimization, but it may keep a change even if it worsens the result. The chance of keeping such a change is exp(−Δ/T), where Δ is the change (worsening) of the evaluation function, and T is the temperature. The temperature is gradually lowered according to the annealing schedule. When the temperature is high, simulated annealing performs almost random changes to the label placement, being able to escape a local optimum. Later, when hopefully a very good local optimum has been found, it behaves in a manner similar to local optimization. The main challenges in developing a simulated annealing solution are choosing a good evaluation function and a good annealing schedule. Generally too fast cooling will degrade the solution, and too slow cooling will degrade the performance, but the schedule is usually quite a complex algorithm, with more than just one parameter. Another class of direct search algorithms are the various evolutionary algorithms, e.g. genetic algorithms. Divide-and-conquer algorithms One simple optimization that is important on real maps is dividing a set of labels into smaller sets that can be solved independently. Two labels are rivals if they can overlap in one of the possible placements. Transitive closure of this relation divides the set of labels into possibly much smaller sets. On uniformly and densely labelled maps, usually a single set will contain the majority of labels, and on maps for which the labelling is not uniform it may bring very big performance benefits. For example, when labelling a map of the world, America is labelled independently from Eurasia etc. 2-satisfiability algorithms If a map labeling problem can be reduced to a situation in which each remaining label has only two potential positions in which it can be placed, then it may be solved efficiently by using an instance of 2-satisfiability to find a placement avoiding any conflicting pairs of placements; several exact and approximate label placement algorithms for more complex types of problems are based on this principle. Other algorithms Automatic label placement algorithms can use any of the algorithms for finding the maximum disjoint set from the set of potential labels. Other algorithms can also be used, like various graph solutions, integer programming etc. Integer programming Some versions of the map label placement problem can be formulated as multiple-choice integer programming (MCIP) problems where the objective function is to minimize the sum of numerical penalties for moving individual labels away from their optimal placement to avoid overlaps. The problem constraints are that each label be placed in one of a finite number of allowed positions on the map. (Or deleted from the map to allow other labels to be placed.) A close-to-optimal solution to this MCIP can usually be found in a practical amount of computer time using Lagrangian relaxation to solve the dual formulation of the optimization problem. The first commercial solution to the map label problem, formulated as an MCIP problem and solved by Lagrangian relaxation, was to place well and seismic shot point labels on petroleum industry base maps. Since that first solution was published, many other mathematical optimization algorithms have been proposed and used to solve this MCIP for other cartographic applications. Notes References Freeman, H., Map data processing and the annotation problem, Proc. 3rd Scandinavian Conf.
on Image Analysis, Chartwell-Bratt Ltd. Copenhagen, 1983. Ahn, J. and Freeman, H., "A program for automatic name placement," Proc. AUTO-CARTO 6, Ottawa, 1983. 444–455. Freeman, H., "Computer Name Placement," ch. 29, in Geographical Information Systems, 1, D.J. Maguire, M.F. Goodchild, and D.W. Rhind, John Wiley, New York, 1991, 449–460. Podolskaya, N. N. Automatic Label De-Confliction Algorithms for Interactive Graphics Applications. Information technologies (ISSN 1684-6400), 9, 2007, p. 45–50. In Russian: Подольская Н.Н. Алгоритмы автоматического отброса формуляров для интерактивных графических приложений. Информационные технологии, 9, 2007, с. 45–50. Kameda, T. and K. Imai. 2003. Map label placement for points and curves. IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences. E86A(4):835–840. Ribeiro, Glaydston and Luiz Lorena. 2006. Heuristics for cartographic label placement problems. Computers & Geosciences. 32:739–748. Wagner, F., A. Wolff, V. Kapoor, and T. Strijk. 2001. Three Rules Suffice for Good Label Placement. Algorithmica. 30:334–349. External links Alexander Wolff's Map Labeling Site The Map-Labeling Bibliography Label placement An Empirical Study of Algorithms for Point-Feature Label Placement Optimization algorithms and methods Geographic information systems
Automatic label placement
[ "Technology" ]
1,806
[ "Information systems", "Geographic information systems" ]
320,026
https://en.wikipedia.org/wiki/Index%20notation
In mathematics and computer programming, index notation is used to specify the elements of an array of numbers. The formalism of how indices are used varies according to the subject. In particular, there are different methods for referring to the elements of a list, a vector, or a matrix, depending on whether one is writing a formal mathematical paper for publication or writing a computer program. In mathematics It is frequently helpful in mathematics to refer to the elements of an array using subscripts. The subscripts can be integers or variables. The array takes the form of tensors in general, since these can be treated as multi-dimensional arrays. Special (and more familiar) cases are vectors (1d arrays) and matrices (2d arrays). The following is only an introduction to the concept: index notation is used in more detail in mathematics (particularly in the representation and manipulation of tensor operations). See the main article for further details. One-dimensional arrays (vectors) A vector can be treated as an array of numbers by writing it as a row vector or column vector (whichever is used depends on convenience or context): Index notation allows indication of the elements of the array by simply writing ai, where the index i is known to run from 1 to n, because the vector has n dimensions. For example, given the vector: then some entries are . The notation can be applied to vectors in mathematics and physics. The following vector equation can also be written in terms of the elements of the vector (aka components), that is where the indices take a given range of values. This expression represents a set of equations, one for each index. If the vectors each have n elements, meaning i = 1,2,…n, then the equations are explicitly Hence, index notation serves as an efficient shorthand for representing the general structure of an equation, while applicable to individual components. Two-dimensional arrays More than one index is used to describe arrays of numbers, in two or more dimensions, such as the elements of a matrix. The entry of a matrix A is written using two indices, say i and j, with or without commas to separate the indices: aij or ai,j, where the first subscript is the row number and the second is the column number. Juxtaposition is also used as notation for multiplication; this may be a source of confusion. For example, if then some entries are . For indices larger than 9, the comma-based notation may be preferable (e.g., a3,12 instead of a312). Matrix equations are written similarly to vector equations, such as in terms of the elements of the matrices (aka components) for all values of i and j. Again this expression represents a set of equations, one for each index. If the matrices each have m rows and n columns, meaning and , then there are mn equations. Multi-dimensional arrays The notation allows a clear generalization to multi-dimensional arrays of elements: tensors. For example, representing a set of many equations. In tensor analysis, superscripts are used instead of subscripts to distinguish covariant from contravariant entities, see covariance and contravariance of vectors and raising and lowering indices. In computing In several programming languages, index notation is a way of addressing elements of an array. This method is used since it is closest to how it is implemented in assembly language whereby the address of the first element is used as a base, and a multiple (the index) of the element size is used to address inside the array.
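As a minimal sketch of this base-plus-index addressing (the array name, its contents, and the printed format are illustrative assumptions, not taken from the article), the following C program prints each element's address alongside the base address plus i times the element size, showing that the two coincide:

#include <stdio.h>

int main(void) {
    int a[5] = {10, 20, 30, 40, 50};
    char *base = (char *)a;                    /* base address of the array */
    for (int i = 0; i < 5; ++i) {
        /* &a[i] is the same address as base + i * sizeof(int) */
        printf("a[%d] is at %p; base + %zu is %p\n",
               i, (void *)&a[i],
               i * sizeof(int), (void *)(base + i * sizeof(int)));
    }
    return 0;
}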
For example, if an array of integers is stored in a region of the computer's memory starting at the memory cell with address 3000 (the base address), and each integer occupies four cells (bytes), then the elements of this array are at memory locations 3000, 3004, 3008, …, 3000 + 4(n − 1) (note the zero-based numbering). In general, the address of the ith element of an array with base address b and element size s is b + i × s. Implementation details In the C programming language, we can write the above as (pointer form) or (array indexing form), which is exactly equivalent because the C standard defines the array indexing form as a transformation to pointer form. Coincidentally, since pointer addition is commutative, this allows for obscure expressions such as which is equivalent to . Multidimensional arrays Things become more interesting when we consider arrays with more than one index, for example, a two-dimensional table. We have three possibilities: (1) make the two-dimensional array one-dimensional by computing a single index from the two; (2) consider a one-dimensional array where each element is another one-dimensional array, i.e. an array of arrays; (3) use additional storage to hold the array of addresses of each row of the original array, and store the rows of the original array as separate one-dimensional arrays. In C, all three methods can be used. When the first method is used, the programmer decides how the elements of the array are laid out in the computer's memory, and provides the formulas to compute the location of each element. The second method is used when the number of elements in each row is the same and known at the time the program is written. The programmer declares the array to have, say, three columns by writing e.g. . One then refers to a particular element of the array by writing . The compiler computes the total number of memory cells occupied by each row, uses the first index to find the address of the desired row, and then uses the second index to find the address of the desired element in the row. When the third method is used, the programmer declares the table to be an array of pointers, like in . When the programmer subsequently specifies a particular element , the compiler generates instructions to look up the address of the row specified by the first index, and use this address as the base when computing the address of the element specified by the second index. The following function, which multiplies two 3 × 3 matrices, illustrates the second method:

void mult3x3f(float result[][3], const float A[][3], const float B[][3])
{
    int i, j, k;
    for (i = 0; i < 3; ++i) {
        for (j = 0; j < 3; ++j) {
            result[i][j] = 0;
            for (k = 0; k < 3; ++k)
                result[i][j] += A[i][k] * B[k][j];
        }
    }
}

In other languages In other programming languages such as Pascal, indices may start at 1, so indexing in a block of memory can be changed to fit a start-at-1 addressing scheme by a simple linear transformation – in this scheme, the memory location of the ith element with base address b and element size s is b + (i − 1) × s. References Programming with C++, J. Hubbard, Schaum's Outlines, McGraw Hill (USA), 1996. Tensor Calculus, D.C. Kay, Schaum's Outlines, McGraw Hill (USA), 1988. Mathematical methods for physics and engineering, K.F. Riley, M.P. Hobson, S.J. Bence, Cambridge University Press, 2010. Mathematical notation Programming constructs
Index notation
[ "Mathematics" ]
1,530
[ "nan" ]
320,033
https://en.wikipedia.org/wiki/Habitat%20conservation
Habitat conservation is a management practice that seeks to conserve, protect and restore habitats and prevent species extinction, fragmentation or reduction in range. It is a priority of many groups that cannot be easily characterized in terms of any one ideology. History of the conservation movement For much of human history, nature was seen as a resource that could be controlled by the government and used for personal and economic gain. The idea was that plants only existed to feed animals and animals only existed to feed humans. The value of land was limited only to the resources it provided such as fertile soil, timber, and minerals. Throughout the 18th and 19th centuries, social views started to change and conservation principles were first practically applied to the forests of British India. The conservation ethic that began to evolve included three core principles: 1) human activities damage the environment, 2) there was a civic duty to maintain the environment for future generations, and 3) scientific, empirically-based methods should be applied to ensure this duty was carried out. Sir James Ranald Martin was prominent in promoting this ideology, publishing numerous medico-topographical reports that demonstrated the damage from large-scale deforestation and desiccation, and lobbying extensively for the institutionalization of forest conservation activities in British India through the establishment of Forest Departments. The Madras Board of Revenue started local conservation efforts in 1842, headed by Alexander Gibson, a professional botanist who systematically adopted a forest conservation program based on scientific principles. This was the first case of state conservation management of forests in the world. Governor-General Lord Dalhousie introduced the first permanent and large-scale forest conservation program in 1855, a model that soon spread to other colonies, as well to the United States, where Yellowstone National Park was opened in 1872 as the world's first national park. Rather than focusing on the economic or material benefits from nature, humans began to appreciate the value of nature itself and the need to protect it. By the mid-20th century, countries such as the United States, Canada, and Britain instigated laws and legislation in order to ensure that the most fragile and beautiful environments would be protected for posterity. Today, with the help of NGO's and governments worldwide, a strong movement is mobilizing with the goal of protecting habitats and preserving biodiversity on a global scale. The commitments and actions of small volunteer associations in villages and towns, that endeavour to emulate the work of well known conservation organisations, are paramount in ensuring generations that follow understand the importance of natural resource conservation. Values of natural habitat Natural habitats can provide Ecosystem services to humans, which are "any positive benefit that wildlife or ecosystems provide to people." The natural environment is a source for a wide range of resources that can be exploited for economic profit, for example timber is harvested from forests and clean water is obtained from natural streams. However, land development from anthropogenic economic growth often causes a decline in the ecological integrity of nearby natural habitat. For instance, this was an issue in the northern Rocky Mountains of the US. However, there is also the economic value in conserving natural habitats. 
Financial profit can be made from tourist revenue, for example in the tropics where species diversity is high, or in recreational sports which take place in natural environments such as hiking and mountain biking. The cost of repairing damaged ecosystems is considered to be much higher than the cost of conserving natural ecosystems. Measuring the worth of conserving different habitat areas is often criticized as being too utilitarian from a philosophical point of view. Biodiversity Habitat conservation is important in maintaining biodiversity, which refers to the variability in populations, organisms, and gene pools, as well as habitats and ecosystems. Biodiversity is also an essential part of global food security. There is evidence to support a trend of accelerating erosion of the genetic resources of agricultural plants and animals. An increase in genetic similarity of agricultural plants and animals means an increased risk of food loss from major epidemics. Wild species of agricultural plants have been found to be more resistant to disease, for example the wild corn species teosinte is resistant to 4 corn diseases that affect human-grown crops. A combination of seed banking and habitat conservation has been proposed to maintain plant diversity for food security purposes. It has been shown that focusing conservation efforts on ecosystems "within multiple trophic levels" can lead to a better functioning ecosystem with more biomass. Classifying environmental values Pearce and Moran outlined the following method for classifying environmental uses: Direct extractive uses: e.g. timber from forests, food from plants and animals Indirect uses: e.g. ecosystem services like flood control, pest control, erosion protection Optional uses: future possibilities e.g. unknown but potential use of plants in chemistry/medicine Non-use values: Bequest value (benefit of an individual who knows that others may benefit from it in future) Passive use value (sympathy for natural environment, enjoyment of the mere existence of a particular species) Impacts Natural causes Habitat loss and destruction can occur both naturally and through anthropogenic causes. Events leading to natural habitat loss include climate change, catastrophic events such as volcanic explosions, and interactions between invasive and non-invasive species. Natural climate change events have previously been the cause of many widespread and large scale losses in habitat. For example, some of the mass extinction events generally referred to as the "Big Five" have coincided with large-scale climatic changes, such as the Earth entering an ice age, or alternate warming events. Other events in the Big Five also have their roots in natural causes, such as volcanic explosions and meteor collisions. The Chicxulub impact is one such example, which has previously caused widespread losses in habitat as the Earth either received less sunlight or grew colder, causing certain fauna and flora to flourish whilst others perished. Previously known warm areas in the tropics, the most sensitive habitats on Earth, grew colder, and areas such as Australia developed radically different flora and fauna to those seen today. The Big Five mass extinction events have also been linked to sea level changes, indicating that large scale marine species loss was strongly influenced by loss in marine habitats, particularly shelf habitats. Methane-driven oceanic eruptions have also been shown to have caused smaller mass extinction events.
Human impacts Humans have been the cause of many species’ extinction. Due to humans’ changing and modifying their environment, the habitat of other species often become altered or destroyed as a result of human actions. The altering of habitats will cause habitat fragmentation, reducing the species' habitat and decreasing their dispersal range. This increases species isolation which then causes their population to decline. Even before the modern industrial era, humans were having widespread, and major effects on the environment. A good example of this is found in Aboriginal Australians and Australian megafauna. Aboriginal hunting practices, which included burning large sections of forest at a time, eventually altered and changed Australia's vegetation so much that many herbivorous megafauna species were left with no habitat and were driven into extinction. Once herbivorous megafauna species became extinct, carnivorous megafauna species soon followed. In the recent past, humans have been responsible for causing more extinctions within a given period of time than ever before. Deforestation, pollution, anthropogenic climate change and human settlements have all been driving forces in altering or destroying habitats. The destruction of ecosystems such as rainforests has resulted in countless habitats being destroyed. These biodiversity hotspots are home to millions of habitat specialists, which do not exist beyond a tiny area. Once their habitat is destroyed, they cease to exist. This destruction has a follow-on effect, as species which coexist or depend upon the existence of other species also become extinct, eventually resulting in the collapse of an entire ecosystem. These time-delayed extinctions are referred to as the extinction debt, which is the result of destroying and fragmenting habitats. As a result of anthropogenic modification of the environment, the extinction rate has climbed to the point where the Earth is now within a sixth mass extinction event, as commonly agreed by biologists. This has been particularly evident, for example, in the rapid decline in the number of amphibian species worldwide. Approaches and methods of habitat conservation Adaptive management addresses the challenge of scientific uncertainty in habitat conservation plans by systematically gathering and applying reliable information to enhance conservation strategies over time. This approach allows for adjustments in management practices based on new insights, making conservation efforts more effective. Determining the size, type and location of habitat to conserve is a complex area of conservation biology. Although difficult to measure and predict, the conservation value of a habitat is often a reflection of the quality (e.g. species abundance and diversity), endangerment of encompassing ecosystems, and spatial distribution of that habitat. Habitat Restoration Habitat restoration is a subset of habitat conservation and its goals include improving the habitat and resources ranging from one species to several species The Society for Ecological Restoration's International Science and Policy Working Group define restoration as "the process of assisting the recovery of an ecosystem that has been degraded, damaged, or destroyed." The scale of habitat restoration efforts can range from small to large areas of land depending on the goal of the project. Elements of habitat restoration include developing a plan and embedding goals within that plan, and monitoring and evaluating species. 
Considerations such as the species type, environment, and context are aspects of planning a habitat restoration project. Efforts to restore habitats that have been altered by anthropogenic activities has become a global endeavor, and is used to counteract the effects of habitat destruction by humans. Miller and Hobbs state three constraints on restoration: "ecological, economic, and social" constraints. Habitat restoration projects include Marine Debris Mitigation for Navassa Island National Wildlife Refuge in Haiti and Lemon Bay Preserve Habitat Restoration in Florida. Identifying priority habitats for conservation Habitat conservation is vital for protecting species and ecological processes. It is important to conserve and protect the space/ area in which that species occupies. Therefore, areas classified as ‘biodiversity hotspots’, or those in which a flagship, umbrella, or endangered species inhabits are often the habitats that are given precedence over others. Species that possess an elevated risk of extinction are given the highest priority and as a result of conserving their habitat, other species in that community are protected thus serving as an element of gap analysis. In the United States of America, a Habitat Conservation Plan (HCP) is often developed to conserve the environment in which a specific species inhabits. Under the U.S. Endangered Species Act (ESA) the habitat that requires protection in an HCP is referred to as the ‘critical habitat’. Multiple-species HCPs are becoming more favourable than single-species HCPs as they can potentially protect an array of species before they warrant listing under the ESA, as well as being able to conserve broad ecosystem components and processes . As of January 2007, 484 HCPs were permitted across the United States, 40 of which covered 10 or more species. The San Diego Multiple Species Conservation Plan (MSCP) encompasses 85 species in a total area of 26,000-km2. Its aim is to protect the habitats of multiple species and overall biodiversity by minimizing development in sensitive areas. HCPs require clearly defined goals and objectives, efficient monitoring programs, as well as successful communication and collaboration with stakeholders and land owners in the area. Reserve design is also important and requires a high level of planning and management in order to achieve the goals of the HCP. Successful reserve design often takes the form of a hierarchical system with the most valued habitats requiring high protection being surrounded by buffer habitats that have a lower protection status. Like HCPs, hierarchical reserve design is a method most often used to protect a single species, and as a result habitat corridors are maintained, edge effects are reduced and a broader suite of species are protected. How much habitat is needed A range of methods and models currently exist that can be used to determine how much habitat is to be conserved in order to sustain a viable population, including Resource Selection Function and Step Selection models. Modelling tools often rely on the spatial scale of the area as an indicator of conservation value. There has been an increase in emphasis on conserving few large areas of habitat as opposed to many small areas. This idea is often referred to as the "single large or several small", SLOSS debate, and is a highly controversial area among conservation biologists and ecologists. 
The reasons behind the argument that "larger is better" include the reduction in the negative impacts of patch edge effects, the general idea that species richness increases with habitat area, and the ability of larger habitats to support greater populations with lower extinction probabilities. Noss & Cooperrider support the "larger is better" claim and developed a model that implies areas of habitat less than 1000 ha are "tiny" and of low conservation value. However, Shwartz suggests that although "larger is better", this does not imply that "small is bad". Shwartz argues that human-induced habitat loss leaves no alternative to conserving small areas. Furthermore, he suggests many endangered species which are of high conservation value may only be restricted to small isolated patches of habitat, and thus would be overlooked if larger areas were given a higher priority. The shift to conserving larger areas is somewhat justified in society by placing more value on larger vertebrate species, which naturally have larger habitat requirements. Examples of current conservation organizations The Nature Conservancy Since its formation in 1951 The Nature Conservancy has slowly developed into one of the world's largest conservation organizations. Currently operating in over 30 countries, across five continents worldwide, The Nature Conservancy aims to protect nature and its assets for future generations. The organization purchases land or accepts land donations with the intention of conserving its natural resources. In 1955 The Nature Conservancy purchased its first 60-acre plot near the New York/Connecticut border in the United States of America. Today the Conservancy has expanded to protect over 119 million acres of land and 5,000 river miles, as well as participating in over 1,000 marine protection programs across the globe. Since its beginnings The Nature Conservancy has understood the benefit in taking a scientific approach towards habitat conservation. For the last decade the organization has been using a collaborative, scientific method known as "Conservation by Design." By collecting and analyzing scientific data The Conservancy is able to holistically approach the protection of various ecosystems. This process determines the habitats that need protection, specific elements that should be conserved as well as monitoring progress so more efficient practices can be developed for the future. The Nature Conservancy currently has a large number of diverse projects in operation. They work with countries around the world to protect forests, river systems, oceans, deserts and grasslands. In all cases the aim is to provide a sustainable environment for both the plant and animal life forms that depend on them as well as all future generations to come. World Wildlife Fund (WWF) The World Wildlife Fund (WWF) was first formed in 1961, after a group of passionate conservationists signed what is now referred to as the Morges Manifesto. WWF is currently operating in over 100 countries across 5 continents with a current listing of over 5 million supporters. One of the first projects of WWF was assisting in the creation of the Charles Darwin Research Foundation, which aided in the protection of the diverse range of unique species existing on the Galápagos Islands, Ecuador.
It was also a WWF grant that helped with the formation of the College of African Wildlife Management in Tanzania, which today focuses on teaching a wide range of protected area management skills in areas such as ecology, range management and law enforcement. The WWF has since gone on to aid in the protection of land in Spain, creating the Coto Doñana National Park in order to conserve migratory birds, and The Democratic Republic of Congo, home to the world's largest protected wetlands. The WWF also initiated a debt-for-nature concept which allows a country to put funds normally allocated to paying off national debt into conservation programs that protect its natural landscapes. Countries currently participating include Madagascar, the first country to participate, which since 1989 has generated over US$50 million towards preservation, Bolivia, Costa Rica, Ecuador, Gabon, the Philippines and Zambia. Rare Conservation Rare has been in operation since 1973 with current global partners in over 50 countries and offices in the United States of America, Mexico, the Philippines, China and Indonesia. Rare focuses on the human activity that threatens biodiversity and habitats, such as overfishing and unsustainable agriculture. By engaging local communities and changing behaviour Rare has been able to launch campaigns to protect areas in most need of conservation. The key aspect of Rare's methodology is their "Pride Campaigns". For example, in the Andes in South America, Rare has created incentives to develop watershed protection practices. In Southeast Asia's "coral triangle" Rare is training fishers in local communities to better manage the areas around the coral reefs in order to lessen human impact. Such programs last for three years with the aim of changing community attitudes so as to conserve fragile habitats and provide ecological protection for years to come. WWF Netherlands WWF Netherlands, along with ARK Nature, Wild Wonders of Europe, and Conservation Capital, has started the Rewilding Europe project. This project intends to rewild several areas in Europe. See also Biodiversity Biotope Conservation biology Conservation ethic Ecology Ecotope Environment Environmental impact of reservoirs Environmental protection Environmentalism Habitat corridor Habitat fragmentation Marine conservation Natural capital Natural environment Natural landscape Natural resource Nature Recycling Refuge (ecology) Renewable resource Sustainability Sustainable agriculture Sustainable development Sustainable land management Trail ethics Water conservation Wildlife Wildlife corridor Wildlife crossing International Union for Conservation of Nature References External links A-Z of Areas of Biodiversity Importance: Habitat/Species Management Area Economics of Habitat Protection & Restoration NOAA Economics A Technical Guide for Monitoring Wildlife Habitat U.S. Forest Service Conservation biology Habitats Conservation-reliant species Habitat Wildlife conservation
Habitat conservation
[ "Biology" ]
3,624
[ "Wildlife conservation", "Conservation biology", "Biodiversity" ]
320,056
https://en.wikipedia.org/wiki/Pyrex
Pyrex (trademarked as PYREX and pyrex) is a brand introduced by Corning Inc. in 1915, initially for a line of clear, low-thermal-expansion borosilicate glass used for laboratory glassware and kitchenware. It was later expanded in the 1930s to include kitchenware products made of soda–lime glass and other materials. Its name has become famous for making rectangular glass roasters. In 1998, the kitchenware division of Corning Inc. responsible for the development of Pyrex spun off from its parent company as Corning Consumer Products Company, subsequently renamed Corelle Brands. Corning Inc. no longer manufactures or markets consumer products, only industrial ones. History Borosilicate glass was first made by German chemist and glass technologist Otto Schott, founder of Schott AG in 1893, 22 years before Corning produced the Pyrex brand. Schott AG sells the product under the name "Duran". In 1908, Eugene Sullivan, director of research at Corning Glass Works, developed Nonex, a borosilicate low-expansion glass, to reduce breakage in shock-resistant lantern globes and battery jars. Sullivan had learned about Schott's borosilicate glass as a doctoral student in Leipzig, Germany. Jesse Littleton of Corning discovered the cooking potential of borosilicate glass by giving his wife Bessie Littleton a casserole dish made from a cut-down Nonex battery jar. Corning removed the lead from Nonex and developed it as a consumer product. Pyrex made its public debut in 1915 during World War I, positioned as an American-produced alternative to Duran. A Corning executive gave the following account of the etymology of the name "Pyrex": Corning purchased the Macbeth-Evans Glass Company in 1936 and their Charleroi, PA plant was used to produce Pyrex opal ware bowls and bakeware made of tempered soda–lime glass. In 1958 an internal design department was started by John B. Ward. He redesigned the Pyrex ovenware and Flameware. Over the years, designers such as Penny Sparke, Betty Baugh, Smart Design, TEAMS Design, and others have contributed to the design of the line. Corning divested itself of the Corning Consumer Products Company (now known as Corelle Brands) in 1998 and production of consumer Pyrex products went with it. Its previous licensing of the name to Newell Cookware Europe remained in effect. France-based cookware maker Arc International acquired Newell's European business in early 2006 to own rights to the brand in Europe, the Middle East and Africa. In 2007, Arc closed the Pyrex soda–lime factory in Sunderland, UK moving all European production to France. The Sunderland factory had first started making Pyrex in 1922. In 2014, Arc International sold off its Arc International Cookware division which operated the Pyrex business to Aurora Capital for its Resurgence Fund II. The division was renamed the International Cookware group. London-based private equity firm Kartesia purchased International Cookware in 2020. In 2021, Pyrex rival Duralex was acquired by International Cookware group for €3.5 million (US$4.2m). In March 2019, Corelle Brands, the makers of Pyrex in the United States, merged with Instant Brands, the makers of the Instant Pot. On June 12, 2023, Instant Brands filed for Chapter 11 bankruptcy after high interest rates and waning access to credit hit its cash position and made its debts unsustainable. The company emerged from bankruptcy on February 27, 2024 under the previous Corelle Brands moniker, after having sold off its appliance business ("Instant" branded products). 
Trademark In Europe, Africa, and the Middle East, a variation of the PYREX (all uppercase) trademark is licensed by International Cookware for bakeware that has been made of numerous materials including borosilicate and soda–lime glass, stoneware, metal, plus vitroceramic cookware. The pyrex (all lowercase, introduced in 1975) trademark is now used for kitchenware sold in the United States, South America, and Asia. In the past, the brand name has also been used for kitchen utensils and bakeware by other companies in regions such as Japan and Australia. It is a common misconception that the logo style alone indicates the type of glass used to manufacture the bakeware. Additionally, Corning's introduction of soda-lime-glass-based Pyrex in the 1940s predates the introduction of the all lowercase logo by nearly 30 years. Composition Older clear-glass Pyrex manufactured by Corning, Arc International's Pyrex products, and Pyrex laboratory glassware are made of borosilicate glass. According to the National Institute of Standards and Technology, borosilicate Pyrex is composed of (as percentage of weight): 4.0% boron, 54.0% oxygen, 2.8% sodium, 1.1% aluminum, 37.7% silicon, and 0.3% potassium. According to glass supplier Pulles and Hannique, borosilicate Pyrex is made of Corning 7740 glass and is equivalent in formulation to Schott Glass 8330 glass sold under the "Duran" brand name. The composition of both Corning 7740 and Schott 8330 is given as 80.6% , 12.6% , 4.2% , 2.2% , 0.1% , 0.1% , 0.05% , and 0.04% . In the late 1930s and 1940s, Corning also introduced new product lines under the Pyrex brand using different types of glass. Opaque tempered soda–lime glass was used to create decorated opal ware bowls and bakeware, and aluminosilicate glass was used for Pyrex Flameware stovetop cookware. The latter product had a bluish tint caused by the addition of alumino-sulfate. Beginning in the 1980s, production of clear Pyrex glass products manufactured in the USA by Corning was also shifted to tempered soda–lime glass, like their popular opal bakeware. This change was justified by stating that soda–lime glass has higher mechanical strength than borosilicatemaking it more resistant to physical damage when dropped, which is believed to be the most common cause of breakage in glass bakeware. The glass is also cheaper to produce and more environmentally friendly. Its thermal shock resistance is lower than borosilicate's, leading to potential breakage from heat stress if used contrary to recommendations. Since the closure of the soda–lime plant in England in 2007, European Pyrex has been made solely from borosilicate. The differences between Pyrex-branded glass products has also led to controversy regarding safety issuesin 2008, the U.S. Consumer Product Safety Commission reported it had received 66 complaints by users reporting that their Pyrex glassware had shattered over the prior ten years yet concluded that Pyrex glass bakeware does not present a safety concern. The consumer affairs magazine Consumer Reports investigated the issue and released test results, in January 2011, confirming that borosilicate glass bakeware was less susceptible to thermal shock breakage than tempered soda lime bakeware. They admitted their testing conditions were "contrary to instructions" provided by the manufacturer. 
STATS analyzed the data available and found that the most common way that users were injured by glassware was via mechanical breakage, being hit or dropped, and that "the change to soda lime represents a greater net safety benefit." Use in telescopes Because of its low expansion characteristics, borosilicate glass is often the material of choice for reflective optics in astronomy applications. In 1932, George Ellery Hale approached Corning with the challenge of fabricating the telescope mirror for the California Institute of Technology's Palomar Observatory project. A previous effort to fabricate the optic from fused quartz had failed, with the cast blank having voids. The mirror was cast by Corning during 1934–1936 out of borosilicate glass. After a year of cooling, during which it was almost lost to a flood, the blank was completed in 1935. The first blank now resides in the Corning Museum of Glass. See also Jena glass Citations General and cited references External links Pyrex Love, a vintage Pyrex reference site American brands Boron compounds Corning Inc. Glass trademarks and brands Kitchenware brands Kitchenware Low-expansion glass Products introduced in 1915 Companies that filed for Chapter 11 bankruptcy in 2023 Transparent materials
Pyrex
[ "Physics" ]
1,788
[ "Physical phenomena", "Optical phenomena", "Materials", "Transparent materials", "Matter" ]
320,233
https://en.wikipedia.org/wiki/Instrumentation%20amplifier
An instrumentation amplifier (sometimes shorthanded as in-amp or InAmp) is a type of differential amplifier that has been outfitted with input buffer amplifiers, which eliminate the need for input impedance matching and thus make the amplifier particularly suitable for use in measurement and test equipment. Additional characteristics include very low DC offset, low drift, low noise, very high open-loop gain, very high common-mode rejection ratio, and very high input impedances. Instrumentation amplifiers are used where great accuracy and stability of the circuit both short- and long-term are required. Although the instrumentation amplifier is usually shown schematically identical to a standard operational amplifier (op-amp), the electronic instrumentation amplifier is almost always internally composed of 3 op-amps. These are arranged so that there is one op-amp to buffer each input (+, −), and one to produce the desired output with adequate impedance matching for the function. The most commonly used instrumentation amplifier circuit is shown in the figure. The gain of the circuit is The rightmost amplifier, along with the resistors labelled and is just the standard differential-amplifier circuit, with gain and differential input resistance . The two amplifiers on the left are the buffers. With removed (open-circuited), they are simple unity-gain buffers; the circuit will work in that state, with gain simply equal to and high input impedance because of the buffers. The buffer gain could be increased by putting resistors between the buffer inverting inputs and ground to shunt away some of the negative feedback; however, the single resistor between the two inverting inputs is a much more elegant method: it increases the differential-mode gain of the buffer pair while leaving the common-mode gain equal to 1. This increases the common-mode rejection ratio (CMRR) of the circuit and also enables the buffers to handle much larger common-mode signals without clipping than would be the case if they were separate and had the same gain. Another benefit of the method is that it boosts the gain using a single resistor rather than a pair, thus avoiding a resistor-matching problem and very conveniently allowing the gain of the circuit to be changed by changing the value of a single resistor. A set of switch-selectable resistors or even a potentiometer can be used for , providing easy changes to the gain of the circuit, without the complexity of having to switch matched pairs of resistors. The ideal common-mode gain of an instrumentation amplifier is zero. In the circuit shown, common-mode gain is caused by mismatch in the resistor ratios and by the mismatch in common-mode gains of the two input op-amps. Obtaining very closely matched resistors is a significant difficulty in fabricating these circuits, as is optimizing the common-mode performance. An instrumentation amplifier can also be built with two op-amps to save on cost, but the gain must be higher than two (+6 dB). Instrumentation amplifiers can be built with individual op-amps and precision resistors, but are also available in integrated circuit from several manufacturers (including Texas Instruments, Analog Devices, and Renesas Electronics). An IC instrumentation amplifier typically contains closely matched laser-trimmed resistors, and therefore offers excellent common-mode rejection. Examples include INA128, AD8221, LT1167 and MAX4194. 
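The gain of the three-op-amp topology is commonly quoted as (1 + 2·R1/Rgain) × (R3/R2), where R1 denotes the two buffer feedback resistors, Rgain the single resistor between the buffer inverting inputs, and R2 and R3 the resistors of the output difference amplifier; since the figure and its resistor labels are not reproduced here, this labelling is an assumption following the usual textbook convention rather than something taken from the article. A small sketch of the calculation:

#include <stdio.h>

/* Differential gain of the classic three-op-amp instrumentation amplifier,
   using the common textbook labelling described above (an assumption, since
   the article's figure is not reproduced here). */
static double inamp_gain(double r1, double rgain, double r2, double r3) {
    return (1.0 + 2.0 * r1 / rgain) * (r3 / r2);
}

int main(void) {
    /* Hypothetical example values: R1 = 25 kOhm, R2 = R3 = 10 kOhm. */
    double r1 = 25e3, r2 = 10e3, r3 = 10e3;
    for (double rgain = 50.0; rgain <= 50e3; rgain *= 10.0)
        printf("Rgain = %8.0f ohm -> gain = %.1f\n",
               rgain, inamp_gain(r1, rgain, r2, r3));
    return 0;
}

This reflects the point made above: a single resistor, Rgain, sets the differential gain over a wide range while the matched resistor pairs stay fixed.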
Instrumentation amplifiers can also be designed using "indirect current-feedback architecture", which extends the operating range of these amplifiers to the negative power supply rail, and in some cases the positive power supply rail. This can be particularly useful in single-supply systems, where the negative power rail is simply the circuit ground (GND). Examples of parts utilizing this architecture are MAX4208/MAX4209 and AD8129/AD8130. Types Feedback-free instrumentation amplifier A feedback-free instrumentation amplifier is a high-input-impedance differential amplifier designed without an external feedback network. This allows a reduction in the number of amplifiers (one instead of three), reduced noise (no thermal noise is brought on by the feedback resistors) and increased bandwidth (no frequency compensation is needed). Chopper-stabilized (or zero-drift) instrumentation amplifiers such as the LTC2053 use a switching-input frontend to eliminate DC offset errors and drift. See also Isolation amplifier Operational amplifier applications References External links Interactive analysis of the Instrumentation Amplifier Opamp Instrumentation Amplifier Lessons In Electric Circuits — Volume III — The instrumentation amplifier A Practical Review of Common Mode and Instrumentation Amplifiers Instrumentation Amplifier Solutions, Circuits and Applications Fixed-gain CMOS differential amplifiers with no external feedback for a wide temperature range (Cryogenics) Electronic amplifiers
Instrumentation amplifier
[ "Technology" ]
967
[ "Electronic amplifiers", "Amplifiers" ]
1,505,909
https://en.wikipedia.org/wiki/Radio%20silence
In telecommunications, radio silence or emissions control (EMCON) is a status in which all fixed or mobile radio stations in an area are asked to stop transmitting for safety or security reasons. The term "radio station" may include anything capable of transmitting a radio signal. A single ship, aircraft, or spacecraft, or a group of them, may also maintain radio silence. Amateur radio Wilderness Protocol The Wilderness Protocol recommends that those stations able to do so should monitor the primary (and secondary, if possible) frequency every three hours starting at 7 AM, local time, for 5 minutes starting at the top of every hour, or even continuously. The Wilderness Protocol is now included in both the ARRL ARES Field Resources Manual and the ARES Emergency Resources Manual. Per the manual, the protocol is: The Wilderness protocol (see page 101, August 1995 QST) calls for hams in the wilderness to announce their presence on, and to monitor, the national calling frequencies for five minutes beginning at the top of the hour, every three hours from 7 AM to 7 PM while in the back country. A ham in a remote location may be able to relay emergency information through another wilderness ham who has better access to a repeater. National calling frequencies: 52.525, 146.52, 223.50, 446.00, 1294.50 MHz. Priority transmissions should begin with the LITZ (Long Interval Tone Zero or Long Time Zero) DTMF signal for at least 5 seconds. CQ like calls (to see who is out there) should not take place until after 4 minutes after the hour. Maritime mobile service Distress calls Radio silence can be used in nautical and aeronautical communications to allow faint distress calls to be heard (see Mayday). In the latter case, the controlling station can order other stations to stop transmitting with the proword "Seelonce Seelonce Seelonce". (The word uses an approximation of the French pronunciation of the word silence, "See-LAWNCE."). Once the need for radio silence is finished, the controlling station lifts radio silence by the prowords "Seelonce FINI." Disobeying a Seelonce Mayday order constitutes a serious criminal offence in most countries. The aviation equivalent of Seelonce Mayday is the phrase or command "Stop Transmitting - Distress (or Mayday)". "Distress traffic ended" is the phrase used when the emergency is over. Again, disobeying such an order is extremely dangerous and is therefore a criminal offence in most countries. Silent periods Up until the procedure was replaced by the Global Maritime Distress and Safety System (August 1, 2013 in the U.S.), maritime radio stations were required to observe radio silence on 500 kHz (radiotelegraph) for the three minutes between 15 and 18 minutes past the top of each hour, and for the three minutes between 45 and 48 minutes past the top of the hour; and were also required to observe radio silence on 2182 kHz (upper-sideband radiotelephony) for the first three minutes of each hour (H+00 to H+03) and for the three minutes following the bottom of the hour (H+30 to H+33). For 2182 kHz, this is still a legal requirement, according to 47 CFR 80.304 - Watch requirement during silence periods. Military An order for Radio silence is generally issued by the military where any radio transmission may reveal troop positions, either audibly from the sound of talking, or by radio direction finding. In extreme scenarios Electronic Silence ('Emissions Control' or EMCON) may also be put into place as a defence against interception. 
In the British Army, the imposition and lifting of radio silence will be given in orders or ordered by control using 'Battle Code' (BATCO). Control is the only authority to impose or lift radio silence either fully or selectively. The lifting of radio silence can only be ordered on the authority of the HQ that imposed it in the first place. During periods of radio silence a station may, with justifiable cause, transmit a message. This is known as Breaking Radio Silence. The necessary replies are permitted, but radio silence is automatically re-imposed afterwards. The breaking station transmits its message using BATCO to break radio silence. The command for imposing radio silence is: Hello all stations, this is 0. Impose radio silence. Over. Other countermeasures are also applied to protect secrets against enemy signals intelligence. Electronic emissions can be used to plot a line of bearing to an intercepted signal, and if more than one receiver detects it, triangulation can estimate its location. Radio direction finding (RDF) was critically important during the Battle of Britain and, under the pressures of the continuing Battle of the Atlantic during World War II, reached a high state of maturity in early 1943 with the aid of United States institutions supporting British research and development in locating U-boats. One key breakthrough was marrying MIT/Raytheon-developed CRT technology with pairs of RDF antennas to give a differentially derived instant bearing useful in tactical situations, enabling escorts to run down the bearing to an intercept. The U-boat command required its wolfpacks to make at least one communications check-in daily, which allowed the new hunter-killer groups to localize U-boats tactically from April onwards. This led to dramatic swings in the fortunes of war between March, when the U-boats sank over 300 Allied ships, and "Black May", when the Allies sank at least 44 U-boats—in each case without orders to exercise EMCON/radio silence. Other uses Radio silence can be maintained for other purposes, such as for highly sensitive radio astronomy. Radio silence can also occur for spacecraft whose antenna is temporarily pointed away from Earth in order to perform observations, or when there is insufficient power to operate the radio transmitter, or during re-entry when the hot plasma surrounding the spacecraft blocks radio signals. In the USA, CONELRAD and EBS (which are now discontinued), and EAS (which is currently active) are also ways of maintaining radio silence, mainly in broadcasting, in the event of an attack. Examples of radio silence orders Radio silencing helped hide the Japanese attack on Pearl Harbor in World War II. The attackers had used AM radio station KGU in Honolulu as a homing signal. On June 2, 1942, during World War II, a nine-minute air-raid alert was issued, including, at 9:22 pm, a radio silence order that applied to all radio stations from Mexico to Canada. In January 1965, the Syrian Armed Forces observed a period of radio silence which led to the detection of Mossad spy Eli Cohen, who was transmitting his espionage work to Israel. See also Dead air Guard band Mapimí Silent Zone Radio quiet zone CONELRAD References Military communications Radio communications Spacecraft communication Emergency Alert System Civil defense Silence
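Relating to the maritime silent periods described earlier in this article (500 kHz: H+15–H+18 and H+45–H+48; 2182 kHz: H+00–H+03 and H+30–H+33), here is a minimal Python sketch that checks whether a given minute past the hour falls inside one of those windows. The function name and the half-open treatment of the window boundaries are illustrative assumptions, not part of any regulation:

```python
def in_silence_period(minute_past_hour, band="2182kHz"):
    """Return True if the given minute past the hour falls inside one of the
    historical maritime silence periods:
      500 kHz  -> H+15 to H+18 and H+45 to H+48
      2182 kHz -> H+00 to H+03 and H+30 to H+33
    """
    windows = {
        "500kHz": [(15, 18), (45, 48)],
        "2182kHz": [(0, 3), (30, 33)],
    }
    return any(start <= minute_past_hour < end for start, end in windows[band])

print(in_silence_period(16, "500kHz"))   # True
print(in_silence_period(31))             # True (2182 kHz default)
print(in_silence_period(10))             # False
```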
Radio silence
[ "Engineering" ]
1,386
[ "Telecommunications engineering", "Spacecraft communication", "Radio communications", "Military communications", "Aerospace engineering" ]
1,505,927
https://en.wikipedia.org/wiki/Coliform%20bacteria
Coliform bacteria are defined as either motile or non-motile Gram-negative non-spore forming bacilli that possess β-galactosidase to produce acids and gases under their optimal growth temperature of 35–37 °C. They can be aerobes or facultative aerobes, and are a commonly used indicator of low sanitary quality of foods, milk, and water. Coliforms can be found in the aquatic environment, in soil and on vegetation; they are universally present in large numbers in the feces of warm-blooded animals as they are known to inhabit the gastrointestinal system. While coliform bacteria are not normally the cause of serious illness, they are easy to culture, and their presence is used to infer that other pathogenic organisms of fecal origin may be present in a sample, or that said sample is not safe to consume. Such pathogens include disease-causing bacteria, viruses, or protozoa and many multicellular parasites. Every drinking water source must be tested for the presence of these total coliform bacteria. Genera Typical genera include: Citrobacter are peritrichous facultative anaerobic bacilli between 0.6–6 μm in length. Citrobacter species inhabit intestinal flora without causing harm, but can lead to urinary tract infections, bacteremia, brain abscesses, pneumonia, intra-abdominal sepsis, meningitis, and joint infections if they are given the opportunity. Infections with a Citrobacter species have a mortality rate between 33% and 48%, with infants and immunocompromised individuals being more susceptible. Enterobacter are motile, flagellated bacilli known for causing infections such as bacteremia, respiratory tract infections, urinary tract infections, infections of areas where surgery occurred, and in extreme cases meningitis, sinusitis and osteomyelitis. To determine the presence of Enterobacter in a sample, they are first grown on MacConkey agar to confirm they are lactose fermenting. An indole test will differentiate Enterobacter from Escherichia, as Enterobacter are indole negative and Escherichia is positive. Enterobacter are distinguished from Klebsiella because of their differences in motility. Klebsiella are non-motile, Gram-negative bacilli ranging from 1–2 μm in length. They are facultative anaerobes with a capsule composed of complex acid polysaccharides that allows them to withstand drying for several months. Klebsiella pneumoniae is the most common Klebsiella species found in humans, the gastrointestinal tracts of animals, in sewage and in soil. On carbohydrate-rich media, Klebsiella colonies appear greyish-white in colour with a mucosal outer surface. The media used for selecting for Klebsiella species in a mixed sample is an agar including ornithine, raffinose, and Koser citrate, where members of this genus will form yellow, wet-looking colonies. Escherichia species normally inhabit the human intestine and those of other warm-blooded animals, and are those most commonly responsible for causing disease in humans. Escherichia coli specifically is the most common organism seen in the human intestine and is known to cause a variety of diseases in humans. Most E. coli strains are motile and have obtained many of their virulence features from horizontal gene transfer. There are several different pathotypes of E. coli causing gastrointestinal syndromes: diarrheagenic E. coli (DEC); enterotoxigenic E. coli (ETEC); enteropathogenic E. coli (EPEC); Shiga toxin–producing E. coli (STEC), which includes EHEC; enteroaggregative E. coli (EAEC); and enteroinvasive E. coli (EIEC). There are different ways to identify E.
coli based on variation of their O, H and K polysaccharides on their cell surface or by using selective media. Escherichia coli (E. coli) can be distinguished from most other coliforms by its ability to ferment lactose at 44 °C in the fecal coliform test, and by its growth and color reaction on certain types of culture media. When cultured on an eosin methylene blue (EMB) plate, a positive result for E. coli is metallic green colonies on a dark purple medium. It can also be cultured on Tryptone Bile X-Glucuronide (TBX), where it appears as blue or green colonies after an incubation period of 24 hours. Escherichia coli have an incubation period of 12–72 hours with the optimal growth temperature being 37 °C. Unlike the general coliform group, E. coli are almost exclusively of fecal origin and their presence is thus an effective confirmation of fecal contamination. Most strains of E. coli are harmless, but some can cause serious illness in humans. Infection symptoms and signs include bloody diarrhea, stomach cramps, vomiting and, occasionally, fever. The bacteria can also cause pneumonia, other respiratory illnesses and urinary tract infections. An easy way to differentiate between different types of coliform bacteria is by using an eosin methylene blue agar plate. This plate is partially inhibitory to Gram (+) bacteria, and will produce a color change in the Gram (-) bacterial colonies based on lactose fermentation abilities. Strong lactose fermenters will appear as dark blue/purple/black, and E. coli (which also ferments lactose) colonies will be dark colored, but will also appear to have a metallic green sheen. Other coliform bacteria will appear as thick, slimy colonies, with non-fermenters being colorless, and weak fermenters being pink. Incidence of coliform outbreaks Escherichia coli O157 As of November 15, 2021, seven states in the USA had reported ten cases of illness from an E. coli O157:H7 strain. These cases were reported from October 15, 2021 through October 27, 2021, and an investigation was carried out by the Minnesota Department of Agriculture and the FDA. It was concluded that packages of spinach collected from homes of infected people were contaminated with a strain of E. coli that matched the strain causing illness. This was determined by performing whole genome sequencing on the strain extracted from the spinach and comparing it to the strain taken from infected individuals. As of February 7, 2022, the provinces of Alberta and Saskatchewan in Canada reported a collective fourteen confirmed cases of E. coli O157 strain illnesses. These were reported between December 2021 and January 2022, and the Public Health Agency of Canada (PHAC), the Canadian Food Inspection Agency (CFIA), and Health Canada were able to determine a specific brand of Original Kimchi to be the source of the organism. On January 28, 2022 and February 6, 2022, the CFIA issued a recall on Hankook Original Kimchi. Detection of coliform bacteria in drinking water PCR Amplification of the beta-galactosidase gene is used to detect coliforms in general, because all coliform organisms produce this compound. Amplification of the beta-D-glucuronidase gene is used to detect E. coli, or the amplification of their verotoxin gene(s) to detect verotoxin-producing E. coli. Chemiluminescent in-situ hybridization Specific areas of the 16S rRNA in the family Enterobacteriaceae are bound by oligonucleotide probes, which aids in monitoring the quality of drinking water. Specifically, E.
coli is labelled with soybean peroxidase-labeled peptide nucleic acid (PNA) probes that bind to a specific sequence in its 16S rRNA. When used in conjunction with a chemiluminescent substrate, light is produced where each colony of E. coli is located, indicating that they are present in the sample. Violet red bile agar The solid medium is used to grow lactose-fermenting coliforms and utilizes a neutral red pH indicator. Pink colonies appear when lactose is fermented and are surrounded by bile that has precipitated out. To confirm that these colonies are coliforms, they are transferred to brilliant green lactose bile (BGLB) broth and incubated. If gas is visible after incubation, it can be confirmed that the sample had coliforms present. Membrane filter method Test samples are filtered through standard filter paper and then transferred to M-endo or LES Endo agar media. Colonies appear pinkish-red with a green metallic sheen after 22–24 hours of incubation. These colonies can be confirmed as coliforms if they are inoculated in lauryl tryptose (LST), produce gas, and are then inoculated in BGLB. If there is gas production in the BGLB tubes, the test is positive for the presence of coliform bacteria. See also Bacteriological water analysis Coliform index Fecal coliform Indicator bacteria Pathogenic Escherichia coli References Bacteria Foodborne illnesses Water quality indicators
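The membrane filter method above yields a colony count from a known filtered volume; such counts are conventionally reported as colony-forming units (CFU) per 100 mL, a convention assumed here rather than stated in the text. A minimal Python sketch of that conversion:

```python
def cfu_per_100ml(colonies_counted, volume_filtered_ml):
    """Convert a membrane-filter colony count into CFU per 100 mL of sample."""
    return colonies_counted * 100.0 / volume_filtered_ml

# Example: 27 coliform colonies grown from a 50 mL filtered sample
print(cfu_per_100ml(27, 50))  # 54.0 CFU per 100 mL
```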
Coliform bacteria
[ "Chemistry", "Biology", "Environmental_science" ]
1,943
[ "Prokaryotes", "Water pollution", "Water quality indicators", "Bacteria", "Microorganisms" ]
1,505,956
https://en.wikipedia.org/wiki/Noosfera%20%28icebreaker%29
Noosfera () is a polar supply and research ship operated by the National Antarctic Scientific Center of Ukraine. Until 2021, she was operated by the British Antarctic Survey and named RRS James Clark Ross. History British Antarctic Survey RRS James Clark Ross was constructed at Swan Hunter Shipbuilders in Wallsend, UK, and was named after the British explorer James Clark Ross. She replaced the in 1991. She was launched by Her Majesty Queen Elizabeth II on 1 December 1990. In March 2018, RRS James Clark Ross was due to sample the marine life around the world's biggest iceberg, A-68, but was unable to reach the site due to sea ice conditions. After 30 years' service, James Clark Ross was sold to the National Antarctic Scientific Center of Ukraine in August 2021. Gallery See also Vernadsky Research Base , a former British Antarctic Survey Royal Research Ship. , a new Royal Research Ship which entered service in 2021. James Ross Island Footnotes History of Antarctica Hydrography Icebreakers of the United Kingdom Oceanographic instrumentation Research vessels of the United Kingdom 1990 ships Ships built by Swan Hunter Ships built on the River Tyne British Antarctic Survey Icebreakers of Ukraine Research vessels of Ukraine Ukraine and the Antarctic
Noosfera (icebreaker)
[ "Technology", "Engineering", "Environmental_science" ]
246
[ "Hydrography", "Hydrology", "Oceanographic instrumentation", "Measuring instruments" ]
1,505,977
https://en.wikipedia.org/wiki/Bishop%27s%20Ring
A Bishop's Ring is a diffuse brown or bluish halo observed around the sun. It is typically observed after large volcanic eruptions. The first recorded observation of a Bishop's Ring was by Rev. Sereno Edwards Bishop of Honolulu, after the Krakatoa eruption of August 27, 1883. This gigantic explosion threw a vast quantity of dust and volatile gases into the atmosphere. Sulfate aerosols remained in the stratosphere, causing colorful sunrises and sunsets for several years. The first observation of this ring was published in 1883, describing it as a "faint halo" around the sun. Bishop observed the phenomenon on September 5, 1883; the phenomenon was subsequently named after him, and was the subject of an 1886 professorial dissertation (Habilitationsschrift) by Albert Riggenbach. Most observations agree that the inner rim of the ring is whitish or bluish white and that its outside is reddish, brownish or purple. The area enclosed by the ring is significantly brighter than its surroundings. From the sequence of colors, with the red on the outside, one can conclude that the phenomenon is caused by diffraction, because halos always have their red part on the inside. On average, the radius of the ring is about 28°, but it can vary between 10° and 30°, depending on the dust size. The maximum of 30° is a rather large radius, which can only be caused by very small dust particles (0.002 mm) that all have to be of about the same size. Sulfur compound aerosols derived from volcanic eruptions have been found to be the source of the Bishop's Ring effect. A Bishop's Ring was observed for a long period of time in Japan after the eruption of Mt. Pinatubo. References External links Photograph of a Bishop's Ring, with commentary. Meteorology glossary entry for Bishop's Ring. Atmospheric optical phenomena
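The inverse relation between particle size and ring radius noted above can be illustrated with the simple circular-aperture diffraction estimate sin θ ≈ 1.22 λ/d for the first minimum. This is a rough sketch only; the real ring geometry behind the 28–30° figures quoted above is more involved, so treat the output as showing the trend rather than reproducing the observed radii.

```python
import math

def first_minimum_angle_deg(wavelength_um, particle_diameter_um):
    """Angular radius (degrees) of the first diffraction minimum for light of
    the given wavelength scattered by particles of the given diameter,
    using the circular-aperture estimate sin(theta) = 1.22 * lambda / d."""
    s = 1.22 * wavelength_um / particle_diameter_um
    if s >= 1.0:
        return None  # particles too small to give a well-defined ring
    return math.degrees(math.asin(s))

# Smaller particles push the ring outward; larger particles pull it inward.
for d_um in (1.5, 2.0, 3.0):                          # particle diameters in micrometres
    print(d_um, first_minimum_angle_deg(0.65, d_um))  # red light, ~0.65 um
```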
Bishop's Ring
[ "Physics" ]
387
[ "Optical phenomena", "Physical phenomena", "Atmospheric optical phenomena", "Earth phenomena" ]
1,506,024
https://en.wikipedia.org/wiki/Crystal%20violet
Crystal violet or gentian violet, also known as methyl violet 10B or hexamethyl pararosaniline chloride, is a triarylmethane dye used as a histological stain and in Gram's method of classifying bacteria. Crystal violet has antibacterial, antifungal, and anthelmintic (vermicide) properties and was formerly important as a topical antiseptic. The medical use of the dye has been largely superseded by more modern drugs, although it is still listed by the World Health Organization. The name gentian violet was originally used for a mixture of methyl pararosaniline dyes (methyl violet), but is now often considered a synonym for crystal violet. The name refers to its colour, being like that of the petals of certain gentian flowers; it is not made from gentians or violets. Production A number of possible routes can be used to prepare crystal violet. The original procedure developed by the German chemists Kern and Caro involved the reaction of dimethylaniline with phosgene to give 4,4′-bis(dimethylamino)benzophenone (Michler's ketone) as an intermediate. This was then reacted with additional dimethylaniline in the presence of phosphorus oxychloride and hydrochloric acid. The dye can also be prepared by the condensation of formaldehyde and dimethylaniline to give a leuco dye: CH2O + 3 C6H5N(CH3)2 → CH(C6H4N(CH3)2)3 + H2O Second, this colourless compound is oxidized to the coloured cationic form (here shown with oxygen, but a typical oxidizing agent is manganese dioxide, MnO2): CH(C6H4N(CH3)2)3 + HCl + ½ O2 → [C(C6H4N(CH3)2)3]Cl + H2O Dye colour When dissolved in water, the dye has a blue-violet colour with an absorbance maximum at 590 nm and an extinction coefficient of 87,000 M−1 cm−1. The colour of the dye depends on the acidity of the solution. At a pH of +1.0, the dye is green with absorption maxima at 420 nm and 620 nm, while in a strongly acidic solution (pH −1.0), the dye is yellow with an absorption maximum at 420 nm. The different colours are a result of the different charged states of the dye molecule. In the yellow form, all three nitrogen atoms carry a positive charge, of which two are protonated, while the green colour corresponds to a form of the dye with two of the nitrogen atoms positively charged. At neutral pH, both extra protons are lost to the solution, leaving only one of the nitrogen atoms positively charged. The pKa values for the loss of the two protons are approximately 1.15 and 1.8. In alkaline solutions, nucleophilic hydroxyl ions attack the electrophilic central carbon to produce the colourless triphenylmethanol or carbinol form of the dye. Some triphenylmethanol is also formed under very acidic conditions when the positive charges on the nitrogen atoms lead to an enhancement of the electrophilic character of the central carbon, which allows the nucleophilic attack by water molecules. This effect produces a slight fading of the yellow colour. Applications Industry Crystal violet is used as a textile and paper dye, and is a component of navy blue and black inks for printing, ball-point pens, and inkjet printers. Historically, it was the most common dye used in early duplication machines, such as the mimeograph and the ditto machine. It is sometimes used to colourize diverse products such as fertilizer, antifreeze, detergent, and leather. Marking blue, used to mark out pieces in metalworking, is composed of methylated spirits, shellac, and gentian violet.
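Given the absorbance maximum near 590 nm and the extinction coefficient of about 87,000 M−1 cm−1 quoted in the Dye colour section, the Beer–Lambert law A = ε·c·l lets one estimate the concentration of a dilute aqueous solution from a spectrophotometer reading. A minimal Python sketch; the absorbance value in the example is an arbitrary illustration, not a measurement from the text:

```python
def crystal_violet_concentration(absorbance, path_length_cm=1.0, epsilon=87_000.0):
    """Estimate molar concentration from absorbance at 590 nm
    using the Beer-Lambert law A = epsilon * c * l."""
    return absorbance / (epsilon * path_length_cm)

# Example: an absorbance of 0.435 in a 1 cm cuvette corresponds to about 5e-6 M.
print(crystal_violet_concentration(0.435))  # 5.0e-06
```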
Science When conducting DNA gel electrophoresis, crystal violet can be used as a nontoxic DNA stain as an alternative to fluorescent, intercalating dyes such as ethidium bromide. Used in this manner, it may be either incorporated into the agarose gel or applied after the electrophoresis process is finished. Used at a 10 ppm concentration and allowed to stain a gel after electrophoresis for 30 minutes, it can detect as little as 16 ng of DNA. Through use of a methyl orange counterstain and a more complex staining method, sensitivity can be improved further to 8 ng of DNA. When crystal violet is used as an alternative to fluorescent stains, it is not necessary to use ultraviolet illumination; this has made crystal violet popular as a means of avoiding UV-induced DNA destruction when performing DNA cloning in vitro. In biomedical research, crystal violet can be used to stain the nuclei of adherent cells. In this application, crystal violet works as an intercalating dye and allows the quantification of DNA which is proportional to the number of cells. The dye is used as a histological stain, particularly in Gram staining for classifying bacteria. In forensics, crystal violet was used to develop fingerprints. Crystal violet is also used as a tissue stain in the preparation of light microscopy sections. In the laboratory, solutions containing crystal violet and formalin are often used to simultaneously fix and stain cells grown in tissue culture to preserve them and make them easily visible, since most cells are colourless. It is also sometimes used as a cheap way to put identification markings on laboratory mice; since many strains of lab mice are albino, the purple colour stays on their fur for several weeks. Crystal violet can be used as an alternative to Coomassie brilliant blue (CBB) in staining of proteins separated by SDS-PAGE, reportedly showing a 5× improvement in sensitivity versus CBB. Medical Gentian violet has antibacterial, antifungal, antihelminthic, antitrypanosomal, antiangiogenic, and antitumor properties. It is used medically for these properties, in particular for dentistry, and is also known as "pyoctanin" (or "pyoctanine"). It is commonly used for: Marking the skin for surgery preparation and allergy testing; Treating Candida albicans and related fungal infections, such as thrush, yeast infections, various types of tinea (ringworm, athlete's foot, jock itch); Treating impetigo; it was used primarily before the advent of antibiotics, but still useful to persons who may be allergic to penicillin. In resource-limited settings, gentian violet is used to manage burn wounds, inflammation of the umbilical cord stump (omphalitis) in the neonatal period, oral candidiasis in HIV-infected patients and mouth ulcers in children with measles. In body piercing, gentian violet is commonly used to mark the location for placing piercings, including surface piercings. Veterinary Because of its antimicrobial activity, it is used to treat ich in fish. However, it usually is illegal to use in fish intended for human consumption. History Synthesis Crystal violet is one of the components of methyl violet, a dye first synthesized by Charles Lauth in 1861. From 1866, methyl violet was manufactured by the Saint-Denis-based firm of Poirrier et Chappat and marketed under the name "Violet de Paris". It was a mixture of the tetra-, penta- and hexamethylated pararosanilines. Crystal violet itself was first synthesized in 1883 by Alfred Kern (1850–1893) working in Basel at the firm of Bindschedler & Busch.
To optimize the difficult synthesis, which used the highly toxic phosgene, Kern entered into a collaboration with the German chemist Heinrich Caro at BASF. Kern also found that by starting with diethylaniline rather than dimethylaniline, he could synthesize the closely related violet dye now known as C.I. 42600 or C.I. Basic violet 4. Gentian violet The name "gentian violet" (or Gentianaviolett in German) is thought to have been introduced by the German pharmacist Georg Grübler, who in 1880 started a company in Leipzig that specialized in the sale of staining reagents for histology. The gentian violet stain marketed by Grübler probably contained a mixture of methylated pararosaniline dyes. The stain proved popular and in 1884 was used by Hans Christian Gram to stain bacteria. He credited Paul Ehrlich for the aniline-gentian violet mixture. Grübler's gentian violet was probably very similar, if not identical, to Lauth's methyl violet, which had been used as a stain by Victor André Cornil in 1875. Although the name gentian violet continued to be used for the histological stain, the name was not used in the dye and textile industries. The composition of the stain was not defined and different suppliers used different mixtures. In 1922, the Biological Stain Commission appointed a committee chaired by Harold Conn to look into the suitability of the different commercial products. In his book Biological Stains, Conn describes gentian violet as a "poorly defined mixture of violet rosanilins". The German ophthalmologist Jakob Stilling is credited with discovering the antiseptic properties of gentian violet. He published a monograph in 1890 on the bactericidal effects of a solution that he christened "pyoctanin", which was probably a mixture of aniline dyes similar to gentian violet. He set up a collaboration with E. Merck & Co. to market "Pyoktanin caeruleum" as an antiseptic. In 1902, Drigalski and Conradi found that although crystal violet inhibited the growth of many bacteria, it had little effect on Bacillus coli (Escherichia coli) and Bacillus typhi (Salmonella typhi), which are both gram-negative bacteria. A much more detailed study of the effects of Grübler's gentian violet on different strains of bacteria was published by John Churchman in 1912. He found that most gram-positive bacteria (stained) were sensitive to the dye, while most gram-negative bacteria (not stained) were not, and observed that the dye tended to act as a bacteriostatic agent rather than a bactericide. Precautions One study in mice demonstrated dose-related carcinogenic potential at several different organ sites. The Food and Drug Administration in the US (FDA) has determined that gentian violet has not been shown by adequate scientific data to be safe for use in animal feed. Use of gentian violet in animal feed causes the feed to be adulterated and is a violation of the Federal Food, Drug, and Cosmetic Act in the US. On June 28, 2007, the FDA issued an "import alert" on farm-raised seafood from China because unapproved antimicrobials, including gentian violet, had been consistently found in the products. The FDA report states: "Like MG (malachite green), CV (crystal violet) is readily absorbed into fish tissue from water exposure and is reduced metabolically by fish to the leuco moiety, leucocrystal violet (LCV). Several studies by the National Toxicology Program reported the carcinogenic and mutagenic effects of crystal violet in rodents. The leuco form induces renal, hepatic and lung tumor in mice."
In 2019, Health Canada found medical devices that use gentian violet to be safe for use, but recommended that the use of all drug products containing gentian violet, including on animals, be stopped; this prompted Canadian engineering schools to revisit the use of this dye during orientation. See also Methyl green Methyl violet Fluorescein Prussian blue Egyptian blue Methyl blue Methylene blue New methylene blue Han purple Potassium ferrocyanide Potassium ferricyanide References Further reading External links Triarylmethane dyes Antifungals Disinfectants Staining dyes PH indicators Chlorides Dimethylamino compounds
Crystal violet
[ "Chemistry", "Materials_science" ]
2,545
[ "Chlorides", "Titration", "Inorganic compounds", "PH indicators", "Chromism", "Chemical tests", "Salts", "Equilibrium chemistry" ]
1,506,031
https://en.wikipedia.org/wiki/Lean-burn
Lean-burn refers to the burning of fuel with an excess of air in an internal combustion engine. In lean-burn engines the air–fuel ratio may be as lean as 65:1 (by mass). The air–fuel ratio needed to stoichiometrically combust gasoline, by contrast, is 14.64:1. The excess air in a lean-burn engine results in far lower hydrocarbon emissions. High air–fuel ratios can also be used to reduce losses caused by other engine power management systems such as throttling losses. Principle A lean burn mode is a way to reduce throttling losses. An engine in a typical vehicle is sized for providing the power desired for acceleration, but must operate well below that point in normal steady-speed operation. Ordinarily, the power is cut by partially closing a throttle. However, the extra work done in pulling air through the throttle reduces efficiency. If the fuel/air ratio is reduced, then lower power can be achieved with the throttle closer to fully open, and the efficiency during normal driving (below the maximum torque capability of the engine) can be higher. Engines designed for lean-burning can employ higher compression ratios and thus provide better performance, more efficient fuel use and lower exhaust hydrocarbon emissions than conventional gasoline engines. Ultra lean mixtures with very high air–fuel ratios can only be achieved by direct injection engines. The main drawback of lean-burning is that a complex catalytic converter system is required to reduce NOx emissions. Lean-burn engines do not work well with modern 3-way catalytic converters—which require a pollutant balance at the exhaust port so they can carry out oxidation and reduction reactions—so most modern engines tend to cruise and coast down at or near the stoichiometric point. Chrysler Electronic Lean-Burn From 1976 through 1989, Chrysler equipped many vehicles with its Electronic Lean-Burn (ELB) system, which consisted of a spark control computer and various sensors and transducers. The computer adjusted spark timing based on manifold vacuum, engine speed, engine temperature, throttle position over time, and incoming air temperature. Engines equipped with ELB used fixed-timing distributors without the traditional vacuum and centrifugal timing advance mechanisms. The ELB computer also directly drove the ignition coil, eliminating the need for a separate ignition module. ELB was produced in both open-loop and closed-loop variants; the open-loop systems produced exhaust clean enough for many vehicle variants so equipped to pass 1976 and 1977 US Federal emissions regulations, and Canadian emissions regulations through 1980, without a catalytic converter. The closed-loop version of ELB used an oxygen sensor and a feedback carburetor, and was phased into production as emissions regulations grew more stringent starting in 1981, but open-loop ELB was used as late as 1990 in markets with lax emissions regulations, on vehicles such as the Mexican Chrysler Spirit. The spark control and engine parameter sensing and transduction strategies introduced with ELB remained in use through 1995 on Chrysler vehicles equipped with throttle-body fuel injection. Heavy-duty gas engines Lean-burn concepts are often used for the design of heavy-duty natural gas, biogas, and liquefied petroleum gas (LPG) fuelled engines.
These engines can either be full-time lean-burn, where the engine runs with a weak air–fuel mixture regardless of load and engine speed, or part-time lean-burn (also known as "lean mix" or "mixed lean"), where the engine runs lean only during low load and at high engine speeds, reverting to a stoichiometric air–fuel mixture in other cases. Heavy-duty lean-burn gas engines admit twice as much air as theoretically needed for complete combustion into the combustion chambers. The extremely weak air–fuel mixtures lead to lower combustion temperatures and therefore lower NOx formation. While lean-burn gas engines offer higher theoretical thermal efficiencies, transient response and performance may be compromised in certain situations. However, advances in fuel control and closed-loop technology by companies like North American Repower have led to production of modern CARB-certified lean-burn heavy-duty engines for use in commercial vehicle fleets. Lean-burn gas engines are almost always turbocharged, resulting in high power and torque figures not achievable with stoichiometric engines due to high combustion temperatures. Heavy-duty gas engines may employ precombustion chambers in the cylinder head. A lean gas and air mixture is first highly compressed in the main chamber by the piston. A much richer, though much smaller, volume of gas/air mixture is introduced into the precombustion chamber and ignited by a spark plug. The flame front spreads to the lean gas/air mixture in the cylinder. This two-stage lean-burn combustion produces low NOx and no particulate emissions. Thermal efficiency is better as higher compression ratios are achieved. Manufacturers of heavy-duty lean-burn gas engines include MTU, Cummins, Caterpillar, MWM, GE Jenbacher, MAN Diesel & Turbo, Wärtsilä, Mitsubishi Heavy Industries, Dresser-Rand Guascor, Waukesha Engine and Rolls-Royce Holdings. Honda lean-burn systems One of the newest lean-burn technologies available in automobiles currently in production uses very precise control of fuel injection, a strong air–fuel swirl created in the combustion chamber, a new linear air–fuel sensor (LAF type O2 sensor) and a lean-burn NOx catalyst to further reduce the resulting NOx emissions that increase under "lean-burn" conditions and meet NOx emissions requirements. This stratified-charge approach to lean-burn combustion means that the air–fuel ratio is not equal throughout the cylinder. Instead, precise control over fuel injection and intake flow dynamics allows a greater concentration of fuel closer to the spark plug tip (richer), which is required for successful ignition and flame spread for complete combustion. The remainder of the cylinders' intake charge is progressively leaner, with an overall average air–fuel ratio falling into the lean-burn category of up to 22:1. The older Honda engines that used lean-burn (not all did) accomplished this by having a parallel fuel and intake system that fed a pre-chamber the "ideal" ratio for initial combustion. This burning mixture was then opened to the main chamber, where a much larger and leaner mix then ignited to provide sufficient power. During the time this design was in production, this system (CVCC, Compound Vortex Controlled Combustion) primarily allowed lower emissions without the need for a catalytic converter. These were carbureted engines, and their relatively imprecise fuel metering limited the fuel-economy potential of the concept; multi-port fuel injection (MPI) now allows higher fuel economy as well.
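The air–fuel ratios quoted in this article (a stoichiometric 14.64:1 for gasoline, lean-burn mixtures of 22:1 or more, and heavy-duty gas engines running on roughly twice the theoretical air) are often summarized by the excess-air ratio λ = AFRactual / AFRstoichiometric. A minimal Python sketch, with the stoichiometric value left as a parameter because it differs between gasoline and gaseous fuels:

```python
def excess_air_ratio(afr_actual, afr_stoich=14.64):
    """Lambda: the actual air-fuel ratio divided by the stoichiometric one.
    lambda = 1 is stoichiometric, lambda > 1 is lean, lambda < 1 is rich."""
    return afr_actual / afr_stoich

print(excess_air_ratio(14.64))  # 1.0   stoichiometric
print(excess_air_ratio(22.0))   # ~1.5  stratified-charge lean limit quoted above
print(excess_air_ratio(65.0))   # ~4.4  the extreme lean figure quoted in the lead
```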
The newer Honda stratified-charge (lean-burn) engines operate on air–fuel ratios as high as 22:1. The amount of fuel drawn into the engine is much lower than in a typical gasoline engine, which operates at 14.7:1—the chemical stoichiometric ideal for complete combustion when averaging gasoline to the petrochemical industries' accepted standard of C8H18. Because of the limits of physics and of combustion chemistry as it applies to a current gasoline engine, this lean-burn ability must be limited to light-load and lower-RPM conditions. A "top" speed cut-off point is required since leaner gasoline fuel mixtures burn more slowly, and for power to be produced combustion must be "complete" by the time the exhaust valve opens. Applications 1992–95 Civic VX 1996–2005 Civic Hx 2002–05 Civic Hybrid 2000–06 Insight Manual transmission & Japanese-spec CVT only Toyota lean-burn engines In 1984, Toyota released the 4A-ELU engine. This was the first engine in the world to use a lean-burn combustion control system with a lean mixture sensor, called "TTC-L" (Toyota Total Clean-Lean-Burn) by Toyota. Toyota also referred to an earlier lean burn system as "Turbulence Generating Pot" (TGP). TTC-L was used in Japan on the Toyota Carina T150 (replacing the TTC-V (Vortex) exhaust gas recirculation approach used earlier), Toyota Corolla E80, and Toyota Sprinter. The lean mixture sensor was provided in the exhaust system to detect air–fuel ratios leaner than the theoretical air–fuel ratio. The fuel injection volume was then accurately controlled by a computer using this detection signal to achieve lean air–fuel ratio feedback. For optimal combustion, the following items were applied: program independent injection that accurately changed the injection volume and timing for individual cylinders, platinum plugs for improving ignition performance with lean mixtures, and high performance igniters. The lean-burn versions of the 1587cc 4A-FE and 1762cc 7A-FE 4-cylinder engines have 2 inlet and 2 exhaust valves per cylinder. Toyota uses a set of butterflies to restrict flow in every second inlet runner during lean-burn operation. This creates a large amount of swirl in the combustion chamber. Injectors are mounted in the head, rather than conventionally in the intake manifold. Compression ratio 9.5:1. The 1998cc 3S-FSE engine is a direct injection petrol lean-burn engine. Compression ratio 10:1. Applications Nissan lean-burn engines Nissan QG engines are a lean-burn aluminum DOHC 4-valve design with variable valve timing and optional NEO Di direct injection. The 1497cc QG15DE has a compression ratio of 9.9:1 and the 1769cc QG18DE 9.5:1. Applications Mitsubishi Vertical Vortex (MVV) In 1991, Mitsubishi developed and began producing the MVV (Mitsubishi Vertical Vortex) lean-burn system, first used in Mitsubishi's 1.5 L 4G15 straight-4 single-overhead-cam 1,468-cc engine. The vertical vortex engine has an idle speed of 600 rpm and a compression ratio of 9.4:1 compared with respective figures of 700 rpm and 9.2:1 for the conventional version. The lean-burn MVV engine can achieve complete combustion with an air–fuel ratio as high as 25:1; this yields a 10–20% gain in fuel economy (on the Japanese 10-mode urban cycle) in bench tests compared with its conventional MPI powerplant of the same displacement, which means lower CO2 emissions. The heart of Mitsubishi's MVV system is the linear air–fuel ratio exhaust gas oxygen sensor.
Compared with standard oxygen sensors, which essentially are on-off switches set to a single air/fuel ratio, the lean oxygen sensor is more of a measurement device covering the air/fuel ratio range from about 15:1 to 26:1. To speed up the otherwise slow combustion of lean mixtures, the MVV engine uses two intake valves and one exhaust valve per cylinder. The separate specially shaped (twin intake port design) intake ports are the same size, but only one port receives fuel from an injector. This creates two vertical vortices of identical size, strength and rotational speed within the combustion chamber during the intake stroke: one vortex of air, the other of an air/fuel mixture. The two vortices also remain independent layers throughout most of the compression stroke. Near the end of the compression stroke, the layers collapse into uniform minute turbulences, which effectively promote lean-burn characteristics. More importantly, ignition occurs in the initial stages of breakdown of the separate layers while substantial amounts of each layer still exist. Because the spark plug is located closer to the vortex consisting of air/fuel mixture, ignition arises in an area of the pentroof-design combustion chamber where fuel density is higher. The flame then spreads through the combustion chamber via the small turbulences. This provides stable combustion even at normal ignition-energy levels, thereby realizing lean-burn. The engine computer stores optimum air–fuel ratios for all engine-operating conditions—from lean (for normal operation) to richest (for heavy acceleration) and all points in between. Full-range oxygen sensors (used for the first time) provide essential information that allows the computers to properly regulate fuel delivery. Diesel engines All diesel engines can be considered to be lean-burning with respect to the total volume; however, the fuel and air are not well mixed before combustion. Most of the combustion occurs in rich zones around small droplets of fuel. Locally rich combustion is a source of particulate matter (PM) emissions. See also Engine knocking Hydrogen fuel enhancement Footnotes Citations References "Advanced Technology Vehicle Modeling in PERE, EPA, Office of Transportation and Air Quality" Engine technology
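As described above, part-time lean-burn engines run lean only under light load, revert to stoichiometric operation elsewhere, and go rich for heavy acceleration, with the engine computer storing target air–fuel ratios for each operating condition. The sketch below is a deliberately crude illustration of that mode-selection idea; the load and rpm thresholds and the target ratios are invented for the example, not taken from any manufacturer's calibration.

```python
def target_afr(load_fraction, rpm, lean_load_limit=0.35, lean_rpm_limit=3000):
    """Pick a target air-fuel ratio from a crude operating-point map:
    lean at light load and moderate rpm, stoichiometric in the mid-range,
    rich under heavy load for acceleration. All thresholds are illustrative."""
    if load_fraction < lean_load_limit and rpm < lean_rpm_limit:
        return 22.0   # lean cruise
    if load_fraction > 0.85:
        return 12.5   # rich, full-load acceleration
    return 14.64      # stoichiometric operation

print(target_afr(0.20, 2200))  # 22.0
print(target_afr(0.60, 3500))  # 14.64
print(target_afr(0.95, 4000))  # 12.5
```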
Lean-burn
[ "Technology" ]
2,585
[ "Engine technology", "Engines" ]
1,506,069
https://en.wikipedia.org/wiki/Outline%20of%20electrical%20engineering
The following outline is provided as an overview of and topical guide to electrical engineering. Electrical engineering – field of engineering that generally deals with the study and application of electricity, electronics and electromagnetism. The field first became an identifiable occupation in the late nineteenth century after commercialization of the electric telegraph and electrical power supply. It now covers a range of subtopics including power, electronics, control systems, signal processing and telecommunications. Classification Electrical engineering can be described as all of the following: Academic discipline – branch of knowledge that is taught and researched at the college or university level. Disciplines are defined (in part), and recognized by the academic journals in which research is published, and the learned societies and academic departments or faculties to which their practitioners belong. Branch of engineering – discipline, skill, and profession of acquiring and applying scientific, economic, social, and practical knowledge, in order to design and build structures, machines, devices, systems, materials and processes. Branches of electrical engineering Power engineering Control engineering Electronic engineering Microelectronics Signal processing Radio-frequency engineering and Radar Telecommunications engineering Instrumentation engineering Electro-Optical Engineering and Optoelectronics Computer engineering Related disciplines Biomedical engineering Engineering physics Mechanical engineering Mechatronics History of electrical engineering History of electrical engineering Timeline of electrical and electronic engineering General electrical engineering concepts Electromagnetism Electromagnetism Electricity Magnetism Electromagnetic spectrum Optical spectrum Electrostatics Electric charge Coulomb's law Electric field Gauss's law Electric potential Magnetostatics Electric current Ampère's law Magnetic field Magnetic moment Electrodynamics Lorentz force law Electromotive force Electromagnetic induction Faraday-Lenz law Displacement current Maxwell's equations Electromagnetic field Electromagnetic radiation Electrical circuits Antenna Electrical resistance Capacitance Inductance Impedance Resonant cavity Transmission line Waveguide Physical laws Physical laws Ampère's law Coulomb's law Faraday's law of induction/Faraday-Lenz law Gauss's law Kirchhoff's circuit laws Current law Voltage law Maxwell's equations Gauss's law Faraday's law of induction Ampère's law Ohm's law Control engineering Control engineering Control theory Adaptive control Control theory Digital control Nonlinear control Optimal control Intelligent control Fuzzy control Model predictive control System properties: Exponential stability Marginal stability BIBO stability Lyapunov stability (i.e., asymptotic stability) Input-to-state (ISS) stability Controllability Observability Negative feedback Positive feedback System modeling and analysis: System identification State observer First principles modeling Least squares Kalman filter Root locus Extended Kalman filter Signal-flow graph State space representation Artificial neural networks Controllers: Closed-loop controller PID controller Programmable logic controller Embedded controller Field oriented controller Direct torque controller Digital signal controller Pulse-width modulation controller Control applications: Industrial Control Systems Process Control Distributed Control System Mechatronics Motion control Supervisory control (SCADA) Electronics 
Electronics Electrical network/Circuit Circuit laws Kirchhoff's circuit laws Current law Voltage law Y-delta transform Ohm's law Electrical element/Discretes Passive elements: Capacitor Inductor Resistor Hall effect sensor Active elements: Microcontroller Operational amplifier Semiconductors: Diode Zener diode Light-emitting diode PIN diode Schottky diode Avalanche diode Laser diode DIAC Thyristor Transistor Bipolar transistor (BJT) Field effect transistor (FET) Darlington transistor IGBT TRIAC Mosfet Electronic design automation Power engineering Power engineering Generation Electrical generator Renewable electricity Hydropower Transmission Electricity pylon Transformer Transmission line Distribution Processes: Alternating current Direct current Single-phase electric power Two-phase electric power Three-phase power Power electronics / Electro-mechanical Inverter Static VAR compensator Variable-frequency drive Ward Leonard control Electric vehicles Electric vehicles Electric motor Hybrid electric vehicle Plug-in hybrid Rechargeable battery Vehicle-to-grid Smart Grid Signal processing Signal processing Analog signal processing Digital signal processing Quantization Sampling Analog-to-digital converter, Digital-to-analog converter Continuous signal, Discrete signal Down sampling Nyquist frequency Nyquist–Shannon sampling theorem Oversampling Sample and hold Sampling frequency Undersampling Upsampling Audio signal processing Audio noise reduction Speech processing Equalization (audio) Digital image processing Geometric transformation Color correction Computer vision Image noise reduction Edge detection Image editing Segmentation Data compression Lossless data compression Lossy data compression Filtering Analog filter Audio filter Digital filter Finite impulse response Infinite impulse response Electronic filter Analogue filter Filter (signal processing) Band-pass filter Band-stop filter Butterworth filter Chebyshev filter High-pass filter Kalman filter Low-pass filter Notch filter Sallen Key filter Wiener filter Transforms Advanced Z-transform Bilinear transform Continuous Fourier transform Discrete cosine transform Discrete Fourier transform, Fast Fourier transform (FFT) Discrete sine transform Fourier transform Hilbert transform Laplace transform, Two-sided Laplace transform Z-transform Instrumentation Actuator Electric motor Oscilloscope Telecommunication Telecommunication Telephone Pulse-code modulation (PCM) Main distribution frame (MDF) Carrier system Mobile phone Wireless network Optical fiber Modulation Carrier wave Communication channel Information theory Error correction and detection Digital television Digital audio broadcasting Satellite radio Satellite Electrical engineering occupations Occupations in electrical/electronics engineering Electrical Technologist Electrical engineering organizations International Electrotechnical Commission (IEC) Electrical engineering publications IEEE Spectrum IEEE series of journals Hawkins Electrical Guide Iterative Receiver Design Journal of Electrical Engineering Persons influential in electrical engineering List of electrical engineers and their contributions List of Russian electrical engineers See also Index of electrical engineering articles Outline of engineering References External links International Electrotechnical Commission (IEC) MIT OpenCourseWare in-depth look at Electrical Engineering - online courses with video lectures. 
IEEE Global History Network A wiki-based site with many resources about the history of IEEE, its members, their professions and electrical and informational technologies and sciences. Electrical engineering Electrical engineering
Outline of electrical engineering
[ "Engineering" ]
1,206
[ "Electrical engineering", "Electrical-engineering-related lists" ]
1,506,351
https://en.wikipedia.org/wiki/Magnetoreception
Magnetoreception is a sense which allows an organism to detect the Earth's magnetic field. Animals with this sense include some arthropods, molluscs, and vertebrates (fish, amphibians, reptiles, birds, and mammals). The sense is mainly used for orientation and navigation, but it may help some animals to form regional maps. Experiments on migratory birds provide evidence that they make use of a cryptochrome protein in the eye, relying on the quantum radical pair mechanism to perceive magnetic fields. This effect is extremely sensitive to weak magnetic fields, and readily disturbed by radio-frequency interference, unlike a conventional iron compass. Birds have iron-containing materials in their upper beaks. There is some evidence that this provides a magnetic sense, mediated by the trigeminal nerve, but the mechanism is unknown. Cartilaginous fish including sharks and stingrays can detect small variations in electric potential with their electroreceptive organs, the ampullae of Lorenzini. These appear to be able to detect magnetic fields by induction. There is some evidence that these fish use magnetic fields in navigation. History Biologists have long wondered whether migrating animals such as birds and sea turtles have an inbuilt magnetic compass, enabling them to navigate using the Earth's magnetic field. Until late in the 20th century, evidence for this was essentially only behavioural: many experiments demonstrated that animals could indeed derive information from the magnetic field around them, but gave no indication of the mechanism. In 1972, Roswitha and Wolfgang Wiltschko showed that migratory birds responded to the direction and inclination (dip) of the magnetic field. In 1977, M. M. Walker and colleagues identified iron-based (magnetite) magnetoreceptors in the snouts of rainbow trout. In 2003, G. Fleissner and colleagues found iron-based receptors in the upper beaks of homing pigeons, both seemingly connected to the animal's trigeminal nerve. Research took a different direction in 2000, however, when Thorsten Ritz and colleagues suggested that a photoreceptor protein in the eye, cryptochrome, was a magnetoreceptor, working at a molecular scale by quantum entanglement. Proposed mechanisms In animals In animals, the mechanism for magnetoreception is still under investigation. Two main hypotheses are currently being discussed: one proposing a quantum compass based on a radical pair mechanism, the other postulating a more conventional iron-based magnetic compass with magnetite particles. Cryptochrome According to the first model, magnetoreception is possible via the radical pair mechanism, which is well-established in spin chemistry. The mechanism requires two molecules, each with unpaired electrons, at a suitable distance from each other. When these can exist in states either with their spin axes in the same direction, or in opposite directions, the molecules oscillate rapidly between the two states. That oscillation is extremely sensitive to magnetic fields. Because the Earth's magnetic field is extremely weak, at 0.5 gauss, the radical pair mechanism is currently the only credible way that the Earth's magnetic field could cause chemical changes (as opposed to the mechanical forces which would be detected via magnetic crystals acting like a compass needle). In 1978, Schulten and colleagues proposed that this was the mechanism of magnetoreception. 
In 2000, scientists proposed that cryptochrome – a flavoprotein in the rod cells in the eyes of birds – was the "magnetic molecule" behind this effect. It is the only protein known to form photoinduced radical-pairs in animals. The function of cryptochrome varies by species, but its mechanism is always the same: exposure to blue light excites an electron in a chromophore, which causes the formation of a radical-pair whose electrons are quantum entangled, enabling the precision needed for magnetoreception. Many lines of evidence point to cryptochrome and radical pairs as the mechanism of magnetoreception in birds: Despite 20 years of searching, no biomolecule other than cryptochrome has been identified capable of supporting radical pairs. In cryptochrome, a yellow molecule flavin adenine dinucleotide (FAD) can absorb a photon of blue light, putting the cryptochrome into an activated state: an electron is transferred from a tryptophan amino acid to the FAD molecule, forming a radical pair. Of the six types of cryptochrome in birds, cryptochrome-4a (Cry4a) binds FAD much more tightly than the rest. Cry4a levels in migratory birds, which rely on navigation for their survival, are highest during the spring and autumn migration periods, when navigation is most critical. The Cry4a protein from the European robin, a migratory bird, is much more sensitive to magnetic fields than similar but not identical Cry4a from pigeons and chickens, which are non-migratory. These findings together suggest that the Cry4a of migratory birds has been selected for its magnetic sensitivity. Behavioral experiments on migratory birds also support this theory. Caged migratory birds such as robins display migratory restlessness, known by ethologists as Zugunruhe, in spring and autumn: they often orient themselves in the direction in which they would migrate. In 2004, Thorsten Ritz showed that a weak radio-frequency electromagnetic field, chosen to be at the same frequency as the singlet-triplet oscillation of cryptochrome radical pairs, effectively interfered with the birds' orientation. The field would not have interfered with an iron-based compass. Further, birds are unable to detect a 180 degree reversal of the magnetic field, something they would straightforwardly detect with an iron-based compass. From 2007 onwards, Henrik Mouritsen attempted to replicate this experiment. Instead, he found that robins were unable to orient themselves in the wooden huts he used. Suspecting extremely weak radio-frequency interference from other electrical equipment on the campus, he tried shielding the huts with aluminium sheeting, which blocks electrical noise but not magnetic fields. When he earthed the sheeting, the robins oriented correctly; when the earthing was removed, the robins oriented at random. Finally, when the robins were tested in a hut far from electrical equipment, the birds oriented correctly. These effects imply a radical-pair compass, not an iron one. In 2016, Wiltschko and colleagues showed that European robins were unaffected by local anaesthesia of the upper beak, showing that in these test conditions orientation was not from iron-based receptors in the beak. In their view, cryptochrome and its radical pairs provide the only model that can explain the avian magnetic compass. A scheme with three radicals rather than two has been proposed as more resistant to spin relaxation and explaining the observed behaviour better. 
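One way to see why megahertz-range radio-frequency fields can disturb a radical-pair compass, as in the Ritz experiments described above, is to compute the electron Larmor frequency in the geomagnetic field, which sets the scale of the singlet–triplet oscillation. A minimal Python sketch assuming the free-electron gyromagnetic ratio; the 50 µT field value is a typical mid-latitude figure, not a number from the text:

```python
GYROMAGNETIC_RATIO_HZ_PER_T = 28.024e9  # free electron, about 28 GHz per tesla

def electron_larmor_frequency_hz(field_tesla):
    """Larmor precession frequency of a free electron spin in a magnetic field;
    for a radical pair this is roughly the singlet-triplet mixing frequency."""
    return GYROMAGNETIC_RATIO_HZ_PER_T * field_tesla

# Earth's field is roughly 25-65 microtesla; 50 uT gives about 1.4 MHz,
# i.e. the radio-frequency range found to disrupt the birds' orientation.
print(electron_larmor_frequency_hz(50e-6) / 1e6, "MHz")  # ~1.4
```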
Iron-based The second proposed model for magnetoreception relies on clusters composed of iron, a natural mineral with strong magnetism, used by magnetotactic bacteria. Iron clusters have been observed in the upper beak of homing pigeons, and other taxa. Iron-based systems could form a magnetoreceptive basis for many species including turtles. Both the exact location and ultrastructure of birds' iron-containing magnetoreceptors remain unknown; they are believed to be in the upper beak, and to be connected to the brain by the trigeminal nerve. This system is in addition to the cryptochrome system in the retina of birds. Iron-based systems of unknown function might also exist in other vertebrates. Electromagnetic induction Another possible mechanism of magnetoreception in animals is electromagnetic induction in cartilaginous fish, namely sharks, stingrays, and chimaeras. These fish have electroreceptive organs, the ampullae of Lorenzini, which can detect small variations in electric potential. The organs are mucus-filled and consist of canals that connect pores in the skin of the mouth and nose to small sacs within the animal's flesh. They are used to sense the weak electric fields of prey and predators. These organs have been predicted to sense magnetic fields, by means of Faraday's law of induction: as a conductor moves through a magnetic field an electric potential is generated. In this case the conductor is the animal moving through a magnetic field, and the potential induced (Vind) depends on the time (t)-varying rate of magnetic flux (Φ) through the conductor according to Vind = −dΦ/dt. The ampullae of Lorenzini detect very small fluctuations in the potential difference between the pore and the base of the electroreceptor sac. An increase in potential results in a decrease in the rate of nerve activity. This is analogous to the behavior of a current-carrying conductor. Sandbar sharks, Carcharhinus plumbeus, have been shown to be able to detect magnetic fields; the experiments provided non-definitive evidence that the animals had a magnetoreceptor, rather than relying on induction and electroreceptors. Electromagnetic induction has not been studied in non-aquatic animals. The yellow stingray, Urobatis jamaicensis, is able to distinguish between the intensity and inclination angle of a magnetic field in the laboratory. This suggests that cartilaginous fishes may use the Earth's magnetic field for navigation. Passive alignment in bacteria Magnetotactic bacteria of multiple taxa contain sufficient magnetic material in the form of magnetosomes, nanometer-sized particles of magnetite, that the Earth's magnetic field passively aligns them, just as it does with a compass needle. The bacteria are thus not actually sensing the magnetic field. A possible but unexplored mechanism of magnetoreception in animals is through endosymbiosis with magnetotactic bacteria, whose DNA is widespread in animals. This would involve having these bacteria living inside an animal, and their magnetic alignment being used as part of a magnetoreceptive system. Unanswered questions It remains likely that two or more complementary mechanisms play a role in magnetic field detection in animals. Of course, this potential dual mechanism theory raises the question of to what degree each method is responsible for the stimulus, and how they produce a signal in response to the weak magnetic field of the Earth. In addition, it is possible that magnetic senses may be different for different species.
Some species may only be able to detect north and south, while others may only be able to differentiate between the equator and the poles. Although the ability to sense direction is important in migratory navigation, many animals have the ability to sense small fluctuations in earth's magnetic field to map their position to within a few kilometers. Taxonomic range Magnetoreception is widely distributed taxonomically. It is present in many of the animals so far investigated. These include arthropods, molluscs, and among vertebrates in fish, amphibians, reptiles, birds, and mammals. Its status in other groups remains unknown. The ability to detect and respond to magnetic fields may exist in plants, possibly as in animals mediated by cryptochrome. Experiments by different scientists have identified multiple effects, including changes to growth rate, seed germination, mitochondrial structure, and responses to gravity (geotropism). The results have sometimes been controversial, and no mechanism has been definitely identified. The ability may be widely distributed, but its taxonomic range in plants is unknown. In molluscs The giant sea slug Tochuina gigantea (formerly T. tetraquetra), a mollusc, orients its body between north and east prior to a full moon. A 1991 experiment offered a right turn to geomagnetic south and a left turn to geomagnetic east (a Y-shaped maze). 80% of Tochuina made a turn to magnetic east. When the field was reversed, the animals displayed no preference for either turn. Tochuinas nervous system is composed of individually identifiable neurons, four of which are stimulated by changes in the applied magnetic field, and two which are inhibited by such changes. The tracks of the similar species Tritonia exsulans become more variable in direction when close to strong rare-earth magnets placed in their natural habitat, suggesting that the animal uses its magnetic sense continuously to help it travel in a straight line. In insects The fruit fly Drosophila melanogaster may be able to orient to magnetic fields. In one choice test, flies were loaded into an apparatus with two arms that were surrounded by electric coils. Current was run through each of the coils, but only one was configured to produce a 5-Gauss magnetic field (about ten times stronger than the Earth's magnetic field) at a time. The flies were trained to associate the magnetic field with a sucrose reward. Flies with an altered cryptochrome, such as with an antisense mutation, were not sensitive to magnetic fields. Magnetoreception has been studied in detail in insects including honey bees, ants and termites. Ants and bees navigate using their magnetic sense both locally (near their nests) and when migrating. In particular, the Brazilian stingless bee Schwarziana quadripunctata is able to detect magnetic fields using the thousands of hair-like sensilla on its antennae. In vertebrates In fish Studies of magnetoreception in bony fish have been conducted mainly with salmon. Both sockeye salmon (Oncorhynchus nerka) and Chinook salmon (Oncorhynchus tschawytscha) have a compass sense. This was demonstrated in experiments in the 1980s by changing the axis of a magnetic field around a circular tank of young fish; they reoriented themselves in line with the field. In amphibians Some of the earliest studies of amphibian magnetoreception were conducted with cave salamanders (Eurycea lucifuga). Researchers housed groups of cave salamanders in corridors aligned with either magnetic north–south, or magnetic east–west. 
In tests, the magnetic field was experimentally rotated by 90°, and salamanders were placed in cross-shaped structures (one corridor along the new north–south axis, one along the new east–west axis). The salamanders responded to the field's rotation. Red-spotted newts (Notophthalmus viridescens) respond to drastic increases in water temperature by heading for land. The behaviour is disrupted if the magnetic field is experimentally altered, showing that the newts use the field for orientation. Both European toads (Bufo bufo) and natterjack toads (Epidalea calamita) toads rely on vision and olfaction when migrating to breeding sites, but magnetic fields may also play a role. When randomly displaced from their breeding sites, these toads can navigate their way back, but this ability can be disrupted by fitting them with small magnets. In reptiles The majority of study on magnetoreception in reptiles involves turtles. Early support for magnetoreception in turtles was provided in a 1991 study on hatchling loggerhead turtles which demonstrated that loggerheads can use the magnetic field as a compass to determine direction. Subsequent studies have demonstrated that loggerhead and green turtles can also use the magnetic field of the earth as a map, because different parameters of the Earth's magnetic field vary with geographic location. The map in sea turtles was the first ever described though similar abilities have now been reported in lobsters, fish, and birds. Magnetoreception by land turtles was shown in a 2010 experiment on Terrapene carolina, a box turtle. After teaching a group of these box turtles to swim to either the east or west end of an experimental tank, a strong magnet disrupted the learned routes. Orientation toward the sea, as seen in turtle hatchlings, may rely partly on magnetoreception. In loggerhead and leatherback turtles, breeding takes place on beaches, and, after hatching, offspring crawl rapidly to the sea. Although differences in light density seem to drive this behaviour, magnetic alignment appears to play a part. For instance, the natural directional preferences held by these hatchlings (which lead them from beaches to the sea) reverse upon experimental inversion of the magnetic poles. In birds Homing pigeons use magnetic fields as part of their complex navigation system. William Keeton showed that time-shifted homing pigeons (acclimatised in the laboratory to a different time-zone) are unable to orient themselves correctly on a clear, sunny day; this is attributed to time-shifted pigeons being unable to compensate accurately for the movement of the sun during the day. Conversely, time-shifted pigeons released on overcast days navigate correctly, suggesting that pigeons can use magnetic fields to orient themselves; this ability can be disrupted with magnets attached to the birds' backs. Pigeons can detect magnetic anomalies as weak as 1.86 gauss. For a long time the trigeminal system was the suggested location for a magnetite-based magnetoreceptor in the pigeon. This was based on two findings: First, magnetite-containing cells were reported in specific locations in the upper beak. However, the cells proved to be immune system macrophages, not neurons able to detect magnetic fields. Second, pigeon magnetic field detection is impaired by sectioning the trigeminal nerve and by application of lidocaine, an anaesthetic, to the olfactory mucosa. 
However, lidocaine treatment might lead to unspecific effects and not represent a direct interference with potential magnetoreceptors. As a result, an involvement of the trigeminal system is still debated. In the search for magnetite receptors, a large iron-containing organelle (the cuticulosome) of unknown function was found in the inner ear of pigeons. Areas of the pigeon brain that respond with increased activity to magnetic fields are the posterior vestibular nuclei, dorsal thalamus, hippocampus, and visual hyperpallium. Domestic hens have iron mineral deposits in the sensory dendrites in the upper beak and are capable of magnetoreception. Beak trimming causes loss of the magnetic sense. In mammals Some mammals are capable of magnetoreception. When woodmice are removed from their home area and deprived of visual and olfactory cues, they orient towards their homes until an inverted magnetic field is applied to their cage. When the same mice are allowed access to visual cues, they are able to orient themselves towards home despite the presence of inverted magnetic fields. This indicates that woodmice use magnetic fields to orient themselves when no other cues are available. The magnetic sense of woodmice is likely based on a radical-pair mechanism. The Zambian mole-rat, a subterranean mammal, uses magnetic fields to aid in nest orientation. In contrast to woodmice, Zambian mole-rats do not rely on radical-pair based magnetoreception, perhaps due to their subterranean lifestyle. Experimental exposure to magnetic fields leads to an increase in neural activity within the superior colliculus, as measured by immediate gene expression. The activity level of neurons within two levels of the superior colliculus, the outer sublayer of the intermediate gray layer and the deep gray layer, were elevated in a non-specific manner when exposed to various magnetic fields. However, within the inner sublayer of the intermediate gray layer (InGi) there were two or three clusters of cells that respond in a more specific manner. The more time the mole rats were exposed to a magnetic field, the greater the immediate early gene expression within the InGi. Magnetic fields appear to play a role in bat orientation. They use echolocation to orient themselves over short distances, typically ranging from a few centimetres up to 50 metres. When non-migratory big brown bats (Eptesicus fuscus) are taken from their home roosts and exposed to magnetic fields rotated 90 degrees from magnetic north, they become disoriented; it is unclear whether they use the magnetic sense as a map, a compass, or a compass calibrator. Another bat species, the greater mouse-eared bat (Myotis myotis), appears to use the Earth's magnetic field in its home range as a compass, but needs to calibrate this at sunset or dusk. In migratory soprano pipistrelles (Pipistrellus pygmaeus), experiments using mirrors and Helmholtz coils show that they calibrate the magnetic field using the position of the solar disk at sunset. Red foxes (Vulpes vulpes) may be influenced by the Earth's magnetic field when predating small rodents like mice and voles. They attack these prey using a specific high-jump, preferring a north-eastern compass direction. Successful attacks are tightly clustered to the north. It is unknown whether humans can sense magnetic fields. The ethmoid bone in the nose contains magnetic materials. Magnetosensitive cryptochrome 2 (cry2) is present in the human retina. 
Human alpha brain waves are affected by magnetic fields, but it is not known whether behaviour is affected. See also Electroreception Magnetobiology Quantum biology Salmon run References Biophysics Magnetism Magnetoreception Quantum biology
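As a rough plausibility check on the induction hypothesis discussed above, the order of magnitude of the induced voltage can be estimated from the motional-EMF relation E = vBL. The sketch below is illustrative only and not from the original article; the swimming speed, field strength, and body span are assumed round numbers, and the nanovolt-per-centimetre electroreceptor sensitivity often quoted for the ampullae of Lorenzini is used purely as a point of comparison.

```python
# Illustrative order-of-magnitude estimate for the electromagnetic induction
# hypothesis. All numbers below are assumptions, not values from the article.
def motional_emf_volts(speed_m_per_s, field_tesla, span_m):
    """EMF of order v*B*L for a conductor of span L moving at speed v across B."""
    return speed_m_per_s * field_tesla * span_m

v, b, span = 1.0, 50e-6, 0.10     # 1 m/s swim speed, ~50 uT field, 10 cm span
emf = motional_emf_volts(v, b, span)
gradient_v_per_cm = emf / (span * 100)
print(emf)                 # ~5e-6 V across the span
print(gradient_v_per_cm)   # ~5e-7 V/cm, well above nV/cm-scale electroreceptor thresholds
```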
Magnetoreception
[ "Physics", "Biology" ]
4,361
[ "Applied and interdisciplinary physics", "Quantum mechanics", "Biophysics", "nan", "Quantum biology" ]
1,506,399
https://en.wikipedia.org/wiki/Secondary%20forest
A secondary forest (or second-growth forest) is a forest or woodland area which has regenerated through largely natural processes after human-caused disturbances, such as timber harvest or agricultural clearing, or equivalently disruptive natural phenomena. It is distinguished from an old-growth forest (primary or primeval forest), which has not recently undergone such disruption, and complex early seral forest, as well as third-growth forests that result from harvest in second growth forests. Secondary forest regrowing after timber harvest differs from forest regrowing after natural disturbances such as fire, insect infestation, or windthrow because the dead trees remain to provide nutrients, structure, and water retention after natural disturbances. Secondary forests are notably different from primary forests in their composition and biodiversity; however, they may still be helpful in providing habitat for native species, preserving watersheds, and restoring connectivity between ecosystems. The legal definition of what constitutes a secondary forest varies between countries. Some legal systems allow a certain degree of subjectivity in assigning a forest as secondary. Development Secondary forestation is common in areas where forests have been degraded or destroyed by agriculture or timber harvesting; this includes abandoned pastures or fields that were once forests. Additionally, secondary forestation can be seen in regions where forests have been lost by the slash-and-burn method, a component of some shifting cultivation systems of agriculture. While many definitions of secondary forests limit the cause of degradation to human activities, other definitions include forests that experienced similar degradation under natural phenomena like fires or landslides. Secondary forests re-establish by the process of succession. Openings created in the forest canopy allow sunlight to reach the forest floor. An area that has been cleared will first be colonized by pioneer species, followed by shrubs and bushes. Over time, trees that were characteristic of the original forest begin to dominate the forest again. It typically takes a secondary forest 40 to 100 years to begin to resemble the original old-growth forest; however, in some cases a secondary forest will not succeed, due to erosion or soil nutrient loss in certain tropical forests. Depending on the forest, the development of primary characteristics that mark a successful secondary forest may take anywhere from a century to several millennia. Hardwood forests of the eastern United States, for example, can develop primary characteristics in one or two generations of trees, or 150–500 years. Today, most of the forests of the United States – especially those in the eastern part of the country – as well as forests of Europe consist of secondary forest. Characteristics Secondary forests tend to have more closely spaced trees than primary forests and contain less undergrowth than primary forests. Usually, secondary forests have only one canopy layer, whereas primary forests have several. Species composition in the canopy of secondary forests is usually markedly different, as well. Secondary forests can also be classified by the way in which the original forest was disturbed; examples of these proposed categories include post-extraction secondary forests, rehabilitated secondary forests, and post-abandonment secondary forests.
Biodiversity When forests are harvested, they either regenerate naturally or artificially (by planting and seeding select tree species). The result is often a second growth forest which is less biodiverse than the old growth forest. Patterns of regeneration in secondary forests show that species richness can quickly recover to pre-disturbance levels via secondary succession; however, relative abundances and identities of species can take much longer to recover. Artificially restored forests, in particular, are highly unlikely to compare to their old-growth counterparts in species composition. Successful recovery of biodiversity is also dependent upon local conditions, such as soil fertility, water availability, forest size, existing vegetation and seed sources, edge effect stressors, toxicity (resulting from human operations like mining), and management strategies (in assisted restoration scenarios). Low to moderate disturbances have been shown to be extremely beneficial to increase in biodiversity in secondary forests. These secondary disturbances can clear the canopies to encourage lower canopy growth as well as provide habitats for small organisms such as insects, bacteria and fungi which may feed on the decaying plant material. Additionally, forest restoration techniques such as agroforestry and intentionally planting/seeding native species can be combined with natural regeneration to restore biodiversity more effectively. This has also been shown to improve ecosystem service functionality, as well as rural independence and livelihoods. Some of these techniques are less successful at restoring original plant-soil interactions. In certain cases (as in Amazon tropical ecosystems), agroforestry practices have led to soil microbiomes that favor bacterial communities rather than the fungal communities seen in old-growth forests or naturally regenerated secondary forests. Climate change mitigation Deforestation is one of the main causes of anthropogenic carbon dioxide emissions, making it one of the largest contributors to climate change. Though preserving old-growth forests is most effective at maintaining biodiversity and ecosystem functionality, secondary forests may play a role in climate change mitigation. Despite the species loss that occurs with primary forest removal, secondary forests can still be beneficial to ecological and anthropogenic communities. They protect the watershed from further erosion and provide habitat; secondary forests may also buffer edge effects around mature forest fragments and increase connectivity between them. Secondary forests may also be a source of wood and other forest products for rural communities. Though not as effective as primary forests, secondary forests store more soil carbon than other land-uses, such as tree plantations. Land-use conversions from secondary forests to rubber plantations in Asia are expected to rise by millions of hectares by 2050; as such, the carbon stored within the biomass and soil of secondary forests is anticipated to be released into the atmosphere. In other places, forest restoration – namely the development of secondary forests – has been a governmental priority in order to meet national and international targets on biodiversity and carbon emissions. Recommendations from the Intergovernmental Panel on Climate Change (IPCC), Convention on Biological Diversity, and REDD+ have led to efforts to reduce and combat deforestation in places like Panama and Indonesia. 
Natural and human-assisted growth of secondary forests can offset carbon emissions and help countries meet climate targets. Biomes Rainforests In the case of semi-tropical rainforests, where soil nutrient levels are characteristically low, the soil quality may be significantly diminished following the removal of primary forest. In addition to soil nutrient levels, two areas of concern with tropical secondary forest restoration are plant biodiversity and carbon storage; it has been suggested that it takes longer for a tropical secondary forest to recover its biodiversity levels than its carbon pools. In Panama, growth of new forests from abandoned farmland exceeded loss of primary rainforest in 1990. However, due to the diminished quality of soil, among other factors, the presence of a significant majority of primary forest species fail to recover in these second-growth forests. See also Land use, land-use change and forestry Land use Overlogging Old-growth forest Ecological succession Notes General references CIFOR Secondary Forest FAO Forestry World Resource Institute External links M. van Breugel, 2007, Dynamics of secondary forests. PhD Thesis Wageningen University. Uzay. U Sezen, 2007, Parentage analysis of a regenerating palm tree in a tropical second-growth forest. Ecological Society of America, Ecology 88: 3065-3075. Rozendaal et al., 2019, Biodiversity recovery of Neotropical secondary forests Science Advances, 2019-03-06 Forest ecology Forests Reforestation Environmental issues with forests
Secondary forest
[ "Biology" ]
1,504
[ "Forests", "Ecosystems" ]
1,506,522
https://en.wikipedia.org/wiki/International%20Heliophysical%20Year
The International Heliophysical Year is a UN-sponsored scientifically driven international program of scientific collaboration to understand external drivers of planetary environments and universal processes in solar-terrestrial-planetary-heliospheric physics. The IHY focused on advancements in all aspects of the heliosphere and its interaction with the interstellar medium. This effort culminated in the "International Heliophysical Year" (IHY) in 2007-2008. The IHY concluded in February 2009, but was largely continued via the International Space Weather Initiative (ISWI). The term "Heliophysical" was coined to refer specifically to this activity of studying the interconnectedness of the entire solar-heliospheric-planetary system. It is a broadening of the concept of "geophysical," extending the connections from the Earth to the Sun and interplanetary space. On the 50th anniversary of the International Geophysical Year, the 2007 IHY activities built on the success of IGY 1957 by continuing its legacy of system-wide studies of the extended heliophysical domain. History The IHY 2007 was planned to coincide with the fiftieth anniversary of the International Geophysical Year (IGY) in 1957-1958, one of the most successful international science programs of all time. The IGY was a broad-based and all-encompassing effort to push the frontiers of geophysics, which resulted in tremendous progress in space physics, Sun-Earth connections, planetary science and the heliosphere in general. The tradition of international science years began almost 125 years ago with the first International Polar Year and international scientific studies of global processes at the North Pole in 1882-1883. The IHY received substantial support from the United Nations, and various space agencies around the world. Objectives The IHY has three primary objectives: Advancing our Understanding of the Heliophysical Processes that Govern the Sun, Earth and Heliosphere; Continuing the tradition of international research and advancing the legacy on the 50th anniversary of the International Geophysical Year; Demonstrating the Beauty, Relevance and Significance of Space and Earth Science to the World. Science goals The IHY team has also identified the following science goals for 2007-2008: Develop the basic science of heliophysics through cross-disciplinary studies of universal processes. Determine the response of terrestrial and planetary magnetospheres and atmospheres to external drivers. Promote research on the Sun-heliosphere system outward to the local interstellar medium – the new frontier. Foster international scientific cooperation in the study of heliophysical phenomena now and in the future. Communicate unique IHY results to the scientific community and the general public See also Magnetic Data Acquisition System (MAGDAS) Sun-Earth Day External links IHY Home Page eGY Home Page IPY Home Page IYPE Home Page IHY Japan Home Page MARP Malaysian Home Page QuakeFinder Home Page 2007 in science 2008 in science NASA programs Sun United Nations observances Space science
International Heliophysical Year
[ "Astronomy" ]
592
[ "Space science", "Outer space" ]
1,506,742
https://en.wikipedia.org/wiki/Hoechst%20stain
Hoechst stains are part of a family of blue fluorescent dyes used to stain DNA. These bis-benzimides were originally developed by Hoechst AG, which numbered all their compounds so that the dye Hoechst 33342 is the 33,342nd compound made by the company. There are three related Hoechst stains: Hoechst 33258, Hoechst 33342, and Hoechst 34580. The dyes Hoechst 33258 and Hoechst 33342 are the ones most commonly used and they have similar excitation–emission spectra. Molecular characteristics Both dyes are excited by ultraviolet light at around 350 nm, and both emit blue-cyan fluorescent light around an emission spectrum maximum at 461 nm. Unbound dye has its maximum fluorescence emission in the 510–540 nm range. Hoechst stains can be excited with a xenon- or mercury-arc lamp or with an ultraviolet laser. There is a considerable Stokes shift between the excitation and emission spectra that makes Hoechst dyes useful in experiments in which multiple fluorophores are used. The fluorescence intensity of Hoechst dyes also increases with the pH of the solvent. Hoechst dyes are soluble in water and in organic solvents such as dimethyl formamide or dimethyl sulfoxide. Concentrations can be achieved of up to 10 mg/mL. Aqueous solutions are stable at 2–6 °C for at least six months when protected from light. For longterm storage the solutions are instead frozen at −20 °C or below. The dyes bind to the minor groove of double-stranded DNA with a preference for sequences rich in adenine and thymine. Although the dyes can bind to all nucleic acids, AT-rich double-stranded DNA strands enhance fluorescence considerably. Hoechst dyes are cell-permeable and can bind to DNA in live or fixed cells. Thus, these stains are often called supravital, meaning that live cells survive a treatment with these compounds. Cells that express specific ATP-binding cassette transporter proteins can also actively transport these stains out of their cytoplasm. Applications A concentration of 0.1–12 μg/ml is commonly used to stain DNA in bacteria or eukaryote cells. Cells are stained for 1-30 min at room temperature or 37 °C and then washed to remove unbound dye. A green fluorescence of unbound Hoechst dye may be observed on samples which are stained with too much dye or which are washed partially. Hoechst dyes are often used as substitutes for another nucleic acid stain called DAPI. Key differences between Hoechst dyes and DAPI are: Hoechst dyes are less toxic than DAPI, which ensures a higher viability of stained cells. The additional ethyl group in certain Hoechst dyes (Hoechst 33342) renders them more cell-permeable. There are nuclei staining dyes that allow for viability of cells after staining. Hoechst 33342 and 33258 are quenched by bromodeoxyuridine (BrdU), which is commonly used to detect dividing cells. Hoechst 33342 exhibits a 10 fold greater cell-permeability than H 33258. Cells can integrate BrdU in newly synthesized DNA as a substitute for thymidine. When BrdU is integrated into DNA, it is supposed that the bromine deforms the minor groove so that Hoechst dyes cannot reach their optimal binding site. Binding of Hoechst dyes is even stronger to BrdU-substituted DNA; however, no fluorescence ensues. Hoechst dyes can be used with BrdU to monitor cell cycle progression. Hoechst dyes are commonly used to stain genomic DNA in the following applications: Fluorescence microscopy and immunohistochemistry, often with other fluorophores Flow cytometry to count or sort out cells. 
An example is the use of Hoechst dyes to analyse how many cells of a population are in which phase of the cell cycle Detecting DNA in the presence of RNA in agarose gels Automated DNA determination Chromosome sorting Hoechst efflux is also used to study hematopoietic and embryonic stem cells. As these cells are able to effectively efflux the dye, they can be detected via flow cytometry in what is termed the side population. This is done by passing the fluorescence emitted from the excited hoechst through both red and blue filters, and plotting hoechst red and blue against each other. Toxicity and safety Because Hoechst stains bind to DNA, they interfere with DNA replication during cell division. Consequently, they are potentially mutagenic and carcinogenic, so care should be used in their handling and disposal. Hoechst stain is used to sort sperm in livestock and humans. Its safety has been debated. See also References External links Spectral traces for fluorescent dyes Manual for Hoechst stains An online guide to fluorescent probes and commercial labeling technologies Staining dyes Fluorescent dyes DNA-binding substances
Hoechst stain
[ "Biology" ]
1,067
[ "Genetics techniques", "DNA-binding substances" ]
1,506,941
https://en.wikipedia.org/wiki/Neuraminidase
Exo-α-sialidase (, sialidase, neuraminidase; systematic name acetylneuraminyl hydrolase) is a glycoside hydrolase that cleaves the glycosidic linkages of neuraminic acids: Hydrolysis of α-(2→3)-, α-(2→6)-, α-(2→8)- glycosidic linkages of terminal sialic acid residues in oligosaccharides, glycoproteins, glycolipids, colominic acid and synthetic substrates Neuraminidase enzymes are a large family, found in a range of organisms. The best-known neuraminidase is the viral neuraminidase, a drug target for the prevention of the spread of influenza infection. Viral neuraminidase was the first neuraminidase to be identified. It was discovered in 1957 by Alfred Gottschalk at the Walter and Eliza Hall Institute in Melbourne. The viral neuraminidases are frequently used as antigenic determinants found on the surface of the influenza virus. Some variants of the influenza neuraminidase confer more virulence to the virus than others. Other homologues are found in mammalian cells, which have a range of functions. At least four mammalian sialidase homologues have been described in the human genome (see NEU1, NEU2, NEU3, NEU4). Sialidases may act as pathogenic factors in microbial infections. Reaction There are two major classes of neuraminidase that cleave exo or endo poly-sialic acids: Exo hydrolysis of α-(2→3)-, α-(2→6)-, α-(2→8)-glycosidic linkages of terminal sialic acid residues Endo hydrolysis of (2→8)-α-sialosyl linkages in oligo- or poly(sialic) acids (see endo-α-sialidase.) Function Sialidases, also called neuraminidases, catalyze the hydrolysis of terminal sialic acid residues from the newly formed virions and from the host cell receptors. Sialidase activities include assistance in the mobility of virus particles through the respiratory tract mucus and in the elution of virion progeny from the infected cell. Subtypes Swiss-Prot lists 137 types of neuraminidase from various species as of October 18, 2006. Nine subtypes of influenza neuraminidase are known; many occur only in various species of duck and chicken. Subtypes N1 and N2 have been positively linked to epidemics in humans, and strains with N3 or N7 subtypes have been identified in a number of isolated deaths. CAZy defines a total of 85 glycosyl hydrolase families, of which families GH34 (viral), GH33 (cellular organisms), GH58 (viral and bacterial), GH83 (viral) are major families that contain this enzyme. GH58 is the only endo-acting family. The following is a list of major classes of neuraminidase enzymes: Viral neuraminidase Bacterial neuraminidase Mammalian neuraminidases: Structure Influenza neuraminidase is a mushroom-shaped projection on the surface of the influenza virus. It has a head consisting of four co-planar and roughly spherical subunits, and a hydrophobic region that is embedded within the interior of the virus' membrane. It comprises a single polypeptide chain that is oriented in the opposite direction to the hemagglutinin antigen. The composition of the polypeptide is a single chain of six conserved polar amino acids, followed by hydrophilic, variable amino acids. β-Sheets predominate as the secondary level of protein conformation. The structure of trans-sialidase includes a catalytic β-propeller domain, a N-terminal lectin-like domain and an irregular beta-stranded domain inserted into the catalytic domain. 
Recent emergence of oseltamivir and zanamivir resistant human influenza A(H1N1) H274Y has emphasized the need for suitable expression systems to obtain large quantities of highly pure and stable, recombinant neuraminidase through two separate artificial tetramerization domains that facilitate the formation of catalytically active neuraminidase homotetramers from yeast and Staphylothermus marinus, which allow for secretion of FLAG-tagged proteins and further purification. Mechanism The enzymatic mechanism of influenza virus sialidase has been studied by Taylor et al., shown in Figure 1. The enzyme catalysis process has four steps. The first step involves the distortion of the α-sialoside from a 2C5 chair conformation (the lowest-energy form in solution) to a pseudoboat conformation when the sialoside binds to the sialidase. The second step leads to an oxocarbocation intermediate, the sialosyl cation. The third step is the formation of Neu5Ac initially as the α-anomer, and then mutarotation and release as the more thermodynamically stable β-Neu5Ac. Inhibitors Neuraminidase inhibitors are useful for combating influenza infection: zanamivir, administered by inhalation; oseltamivir, administered orally; peramivir administered parenterally, that is through intravenous or intramuscular injection; and laninamivir which is in phase III clinical trials. There are two major proteins on the surface of influenza virus particles. One is the lectin haemagglutinin protein with three relatively shallow sialic acid-binding sites and the other is enzyme sialidase with the active site in a pocket. Because of the relative deep active site in which low-molecular-weight inhibitors can make multiple favorable interactions and approachable methods of designing transition-state analogues in the hydrolysis of sialosides, the sialidase becomes more attractive anti-influenza drug target than the haemagglutinin. After the X-ray crystal structures of several influenza virus sialidases were available, the structure-based inhibitor design was applied to discover potent inhibitors of this enzyme. The unsaturated sialic acid (N-acetylneuraminic acid [Neu5ac]) derivative 2-deoxy-2, 3-didehydro-D-N-acetylneuraminic acid (Neu5Ac2en), a sialosyl cation transition-state (Figure 2) analogue, is believed the most potent inhibitor core template. Structurally modified Neu5Ac2en derivatives may give more effective inhibitors. Many Neu5Ac2en-based compounds have been synthesized and tested for their influenza virus sialidase inhibitory potential. For example: The 4-substituted Neu5Ac2en derivatives (Figure 3), 4-amino-Neu5Ac2en (Compound 1), which showed two orders of magnitude better inhibition of influenza virus sialidase than Neu5Ac2en5 and 4-guanidino-Neu5Ac2en (Compound 2), known as Zanamivir, which is now marketed for treatment of influenza virus as a drug, have been designed by von Itzstein and coworkers. A series of amide-linked C9 modified Neu5Ac2en have been reported by Megesh and colleagues as NEU1 inhibitors. See also Glycoside hydrolase family 33 Neuraminidase inhibitors Hemagglutinin (influenza) References External links Orthomyxoviruses, Robert B. Couch, UTMB. Article includes a good clear line drawing of a neuraminidase on an influenza virus. Carbohydrate chemistry EC 3.2.1 Glycobiology Neuraminidase inhibitors
Neuraminidase
[ "Chemistry", "Biology" ]
1,704
[ "Neuraminidase inhibitors", "Carbohydrate chemistry", "nan", "Chemical synthesis", "Biochemistry", "Glycobiology" ]
1,507,254
https://en.wikipedia.org/wiki/Motorola%20DynaTAC
The DynaTAC is a series of cellular telephones manufactured by Motorola from 1983 to 1994. The Motorola DynaTAC 8000X received approval from the U.S. FCC on September 21, 1983. A full charge took roughly 10 hours, and it offered 30 minutes of talk time. It also offered an LED display for dialing or recall of one of 30 phone numbers. It was priced at US$3,995 in 1984, its commercial release year. DynaTAC was an abbreviation of "Dynamic Adaptive Total Area Coverage". Several models followed, starting in 1985 with the 8000s and continuing with periodic updates of increasing frequency until 1993's Classic II. The DynaTAC was replaced in most roles by the much smaller Motorola MicroTAC when it was first introduced in 1989, and by the time of the Motorola StarTAC's release in 1996, it was obsolete. History The first cellular phone was the culmination of efforts begun at Bell Labs, which first proposed the idea of a cellular system in 1947, and continued to petition the Federal Communications Commission (FCC) for channels through the 1950s and 1960s, and research conducted at Motorola. In 1960, electrical engineer John F. Mitchell became Motorola's chief engineer for its mobile communication products. Mitchell oversaw the development and marketing of the first pager to use transistors. Motorola had long produced mobile telephones for cars that were large and heavy and consumed too much power to allow their use without the automobile's engine running. Mitchell's team, which included Martin Cooper, developed portable cellular telephony, and Mitchell was among the Motorola employees granted a patent for this work in 1973; the first call on the prototype was completed, reportedly, to a wrong number. Motorola announced the development of the Dyna-Tac in April 1973, saying that it expected to have it fully operational within three years. Motorola said that the Dyna-Tac would weigh and would cost between $60 and $100 per month. Motorola predicted that the cost would decrease to $10 or $12 per month in no more than 20 years. Motorola said that, while the Dyna-Tac would not use the same network as the existing mobile service network, it anticipated resolving this so that all mobile devices would use the same network by around 1980. By 1975, Motorola's expectations had changed; the Dyna-Tac was anticipated to be released to the public by 1985 because of U.S. Federal Communications Commission proceedings. While Motorola was developing the cellular phone itself, from 1968 to 1983, Bell Labs worked on the system called AMPS, while others designed cell phones for that and other cellular systems. Martin Cooper, a former general manager for the systems division at Motorola, led a team that produced the DynaTAC 8000X, the first commercially available cellular phone small enough to be easily carried, and made the first phone call from it. Martin Cooper was the first person to make an analog cellular mobile phone call on a prototype in 1973. The Motorola DynaTAC 8000X was very large compared to phones today. This first cell phone was very expensive when it was released in the US in 1984.
The DynaTAC's retail price, $3,995, ensured that it would not become a mass-market item (the minimum wage in the United States was $3.35 per hour in 1984, which meant that it required more than 1192 hours of work – more than 7 months at a standard 40-hour work week – just working for the phone, without taxes); by 1998, when Mitchell retired, cellphones and associated services made up two thirds of Motorola's $30 billion in revenue. On October 13, 1983, David D. Meilahn placed the first commercial wireless call on a DynaTAC from his 1983 Mercedes-Benz 380SL to Bob Barnett, former president of Ameritech Mobile Communications, who then placed a call on a DynaTAC from inside a Chrysler convertible to the grandson of Alexander Graham Bell, who was in Germany for the event. The call, made at Soldier Field in Chicago, is considered to be a major turning point in communications. Later, Richard H. Frenkiel, the head of system development at Bell Laboratories, said about the DynaTAC: "It was a real triumph; a great breakthrough." Publications U.S. Patent 3,906,166, September 16, 1975 for a Radio Telephone System for the first cell phone was granted to Martin Cooper, Richard W. Dronsurth, Albert J. Leitich, Charles N. Lynk, James J. Mikulski, John F. Mitchell, Roy A. Richardson, and John H. Sangster. Two names were botched in the original filing; Leitich's surname was erroneously omitted, and Mikulski's first name was omitted. The original document was refiled by Motorola's legal staff, but has not yet been identified. The seeds of the idea for a portable cell phone can be traced to Mikulski, which were rejected by Mitchell for lack of sufficient business justifications. It is rumored that when Mitchell suddenly recognized during an attempted phone call that his 400 MHz phone had inherent limitations, he immediately reversed his previous decision and championed the portable cell phone concept. Description Several prototypes were made between 1973 and 1983. The product accepted by the FCC weighed 28 ounces (790 g) and was 10 inches (25 cm) high, not including its flexible "rubber duck" whip antenna. In addition to the typical 12-key telephone keypad, it had nine additional special keys: Rcl (recall) Clr (clear) Snd (send) Sto (store) Fcn (function) End Pwr (power) Lock Vol (volume) It employed some of the technology previously used in the ALOHAnet system, including metal–oxide–semiconductor (MOS) transceiver and modem technology. Variants The DynaTAC 8 Series, Classic, Classic II, Ultra Classic, and Ultra Classic II had an LED display, with red LEDs; the DynaTAC International Series with green LEDs, and the DynaTAC 6000XL used a vacuum fluorescent display. These displays were severely limited in what information they could show. The battery allowed for a call of up to 60 minutes, after which it was necessary to charge the phone up to 10 hours in a trickle charger or one hour in a fast charger, which was a separate accessory. While still retaining the DynaTAC name, the 6000XL was completely unrelated to the DynaTAC 8000 Series, in that it was a transportable phone meant for installation in a vehicle. The 6000XL was later reconfigured as the Motorola Tough Talker, with a ruggedized build intended for construction sites, emergency workers, and special events planners. The DynaTAC Series was succeeded by the MicroTAC Series in 1989. Legacy With the removal of analog network cells nearly all over the world, the DynaTAC models running on AMPS or other analog networks are mostly obsolete.
Thus, they are more collectors' items than usable cellphones. The International series, however, will still work, but only on GSM 900 cells. The DynaTac 8000X, due to its resemblance in size and weight to a standard clay-fired brick, was nicknamed the brick phone by users, a term later applied to other brands as a contrast to smaller handsets appearing in the 1990s. Portability While it might be considered extremely unwieldy by modern standards, at the time it was considered revolutionary because mobile telephones were bulky affairs installed in vehicles or in heavy briefcases. The DynaTAC 8000X was long and weighed . It was truly the first mobile telephone which could connect to the telephone network without the assistance of a mobile operator and could be carried about by the user. Accessories In certain markets, a brass swivel antenna was one of the aftermarket accessories then available. Motorola also offered a one-hour desktop charger, though the battery could get quite hot while charging at this accelerated rate. In some cases, this could cause major problems with the battery, occasionally short circuiting it and rendering it unusable. Also, charging the battery at a high enough rate to substantially raise its temperature will cause the battery to wear at an accelerated rate, reducing the number of charge-discharge cycles that can be performed before the battery will need to be replaced. (However, considering the high cost of the DynaTAC, the cost of battery replacement would not typically be a concern to DynaTAC owners.) Available, too, was a snug-fitting zippered leather case which covered the entire body of the phone and had a clear plastic front to make the user interface accessible. It featured a sturdy spring-steel belt clip and a small cutaway at the top to allow the antenna to protrude. Charging could still be performed with the cover on, but change of battery required its removal. DynaTAC relates to US phones used on the DynaTAC system in the US, not phones in use in the UK. See also AMPS History of mobile phones References First generation mobile telecommunications DynaTAC Mobile phones introduced in 1984 Computer-related introductions in 1984
Motorola DynaTAC
[ "Technology" ]
1,934
[ "Mobile telecommunications", "First generation mobile telecommunications" ]
1,507,559
https://en.wikipedia.org/wiki/Interdecadal%20Pacific%20oscillation
The Interdecadal Pacific oscillation (IPO) is an oceanographic/meteorological phenomenon similar to the Pacific decadal oscillation (PDO), but occurring in a wider area of the Pacific. While the PDO occurs in mid-latitudes of the Pacific Ocean in the northern hemisphere, the IPO stretches from the southern hemisphere into the northern hemisphere. The period of oscillation is roughly 15–30 years. Positive phases of the IPO are characterized by a warmer than average tropical Pacific and cooler than average northern Pacific. Negative phases are characterized by an inversion of this pattern, with cool tropics and warm northern regions. The IPO had positive phases (southeastern tropical Pacific warm) from 1922 to 1946 and 1978 to 1998, and a negative phase between 1947 and 1976. References Physical oceanography Regional climate effects Pacific Ocean Climate oscillations
Interdecadal Pacific oscillation
[ "Physics" ]
179
[ "Applied and interdisciplinary physics", "Physical oceanography" ]
1,507,685
https://en.wikipedia.org/wiki/Darcy%20%28unit%29
The darcy (or darcy unit) and millidarcy (md or mD) are units of permeability, named after Henry Darcy. They are not SI units, but they are widely used in petroleum engineering and geology. The unit has also been used in biophysics and biomechanics, where the flow of fluids such as blood through capillary beds and cerebrospinal fluid through the brain interstitial space is being examined. A darcy has dimensions of length². Definition Permeability measures the ability of fluids to flow through rock (or other porous media). The darcy is defined using Darcy's law, which can be written as Q = kAΔp/(μL), where Q is the volumetric fluid flow rate through the medium, A is the cross-sectional area of the medium, k is the permeability of the medium, μ is the dynamic viscosity of the fluid, Δp is the applied pressure difference, and L is the thickness of the medium. The darcy is referenced to a mixture of unit systems. A medium with a permeability of 1 darcy permits a flow of 1 cm³/s of a fluid with viscosity 1 cP (1 mPa·s) under a pressure gradient of 1 atm/cm acting across an area of 1 cm². Typical values of permeability range from as high as 100,000 darcys for gravel to less than 0.01 microdarcy for granite. Sand has a permeability of approximately 1 darcy. Tissue permeability, whose measurement is still in its infancy, is somewhere in the range of 0.01 to 100 darcy. Origin The darcy is named after Henry Darcy. Rock permeability is usually expressed in millidarcys (md) because rocks hosting hydrocarbon or water accumulations typically exhibit permeability ranging from 5 to 500 md. The odd combination of units comes from Darcy's original studies of water flow through columns of sand. Water has a viscosity of 1.0019 cP at about room temperature. The unit abbreviation "d" is not capitalized (contrary to industry use). The American Association of Petroleum Geologists uses the following unit abbreviations and grammar in their publications: darcy (plural darcys, not darcies): d; millidarcy (plural millidarcys, not millidarcies): md. Conversions Converted to SI units, 1 darcy is equivalent to 9.869233×10⁻¹³ m², or 0.9869233 μm². This conversion is usually approximated as 1 μm². This is the reciprocal of 1.013250, the conversion factor from atmospheres to bars. Specifically in the hydrology domain, permeability of soil or rock may also be defined as the flux of water under hydrostatic pressure (≈ 0.1 bar/m) at a temperature of 20 °C. In this specific setup, 1 darcy is equivalent to 0.831 m/day. References Richard Selley's "Elements of Petroleum Geology (2nd edition)," page 250. Units of measurement Hydraulics Hydraulic engineering Hydrology Hydrogeology Soil mechanics Soil physics
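Because the darcy is defined with a mixture of unit systems, a small numerical check is often helpful. The Python sketch below is illustrative and not part of the original article: it converts a permeability in darcys to SI, evaluates Darcy's law in SI units, and reproduces the hydrological conversion of roughly 0.83 m/day quoted above; the water properties are assumed round-number values for about 20 °C.

```python
# Illustrative sketch: darcy-to-SI conversion and Darcy's law in SI units.
# Water properties below are assumed approximate values at ~20 degrees C.
DARCY_IN_M2 = 9.869233e-13        # 1 darcy expressed in m^2

def darcy_flow_m3_per_s(k_darcys, area_m2, dp_pa, mu_pa_s, length_m):
    """Volumetric flow rate Q = k*A*dp/(mu*L), with k converted from darcys."""
    k_m2 = k_darcys * DARCY_IN_M2
    return k_m2 * area_m2 * dp_pa / (mu_pa_s * length_m)

# Hydraulic conductivity of a 1-darcy medium for water, in metres per day:
rho, g, mu = 998.2, 9.81, 1.002e-3            # kg/m^3, m/s^2, Pa*s
conductivity_m_per_day = DARCY_IN_M2 * rho * g / mu * 86400
print(conductivity_m_per_day)                 # ~0.83, close to the 0.831 m/day figure
```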
Darcy (unit)
[ "Physics", "Chemistry", "Mathematics", "Engineering", "Environmental_science" ]
649
[ "Hydrology", "Applied and interdisciplinary physics", "Hydrogeology", "Quantity", "Soil mechanics", "Soil physics", "Physical systems", "Hydraulics", "Civil engineering", "Environmental engineering", "Hydraulic engineering", "Units of measurement", "Fluid dynamics" ]
1,507,717
https://en.wikipedia.org/wiki/Mark%20and%20recapture
Mark and recapture is a method commonly used in ecology to estimate an animal population's size where it is impractical to count every individual. A portion of the population is captured, marked, and released. Later, another portion will be captured and the number of marked individuals within the sample is counted. Since the number of marked individuals within the second sample should be proportional to the number of marked individuals in the whole population, an estimate of the total population size can be obtained by dividing the number of marked individuals by the proportion of marked individuals in the second sample. The method assumes, rightly or wrongly, that the probability of capture is the same for all individuals. Other names for this method, or closely related methods, include capture-recapture, capture-mark-recapture, mark-recapture, sight-resight, mark-release-recapture, multiple systems estimation, band recovery, the Petersen method, and the Lincoln method. Another major application for these methods is in epidemiology, where they are used to estimate the completeness of ascertainment of disease registers. Typical applications include estimating the number of people needing particular services (e.g. services for children with learning disabilities, services for medically frail elderly living in the community), or with particular conditions (e.g. illegal drug addicts, people infected with HIV, etc.). Field work related to mark-recapture Typically a researcher visits a study area and uses traps to capture a group of individuals alive. Each of these individuals is marked with a unique identifier (e.g., a numbered tag or band), and then is released unharmed back into the environment. A mark-recapture method was first used for ecological study in 1896 by C.G. Johannes Petersen to estimate plaice, Pleuronectes platessa, populations. Sufficient time should be allowed to pass for the marked individuals to redistribute themselves among the unmarked population. Next, the researcher returns and captures another sample of individuals. Some individuals in this second sample will have been marked during the initial visit and are now known as recaptures. Other organisms captured during the second visit, will not have been captured during the first visit to the study area. These unmarked animals are usually given a tag or band during the second visit and then are released. Population size can be estimated from as few as two visits to the study area. Commonly, more than two visits are made, particularly if estimates of survival or movement are desired. Regardless of the total number of visits, the researcher simply records the date of each capture of each individual. The "capture histories" generated are analyzed mathematically to estimate population size, survival, or movement. When capturing and marking organisms, ecologists need to consider the welfare of the organisms. If the chosen identifier harms the organism, then its behavior might become irregular. Notation Let N = Number of animals in the population n = Number of animals marked on the first visit K = Number of animals captured on the second visit k = Number of recaptured animals that were marked A biologist wants to estimate the size of a population of turtles in a lake. She captures 10 turtles on her first visit to the lake, and marks their backs with paint. A week later she returns to the lake and captures 15 turtles. Five of these 15 turtles have paint on their backs, indicating that they are recaptured animals. 
This example is (n, K, k) = (10, 15, 5). The problem is to estimate N. Lincoln–Petersen estimator The Lincoln–Petersen method (also known as the Petersen–Lincoln index or Lincoln index) can be used to estimate population size if only two visits are made to the study area. This method assumes that the study population is "closed". In other words, the two visits to the study area are close enough in time so that no individuals die, are born, or move into or out of the study area between visits. The model also assumes that no marks fall off animals between visits to the field site by the researcher, and that the researcher correctly records all marks. Given those conditions, the estimated population size is N̂ = nK/k. Derivation It is assumed that all individuals have the same probability of being captured in the second sample, regardless of whether they were previously captured in the first sample (with only two samples, this assumption cannot be tested directly). This implies that, in the second sample, the proportion of marked individuals that are caught (k/K) should equal the proportion of the total population that is marked (n/N). For example, if half of the marked individuals were recaptured, it would be assumed that half of the total population was included in the second sample. In symbols, k/K = n/N. A rearrangement of this gives N̂ = nK/k, the formula used for the Lincoln–Petersen method. Sample calculation In the example (n, K, k) = (10, 15, 5) the Lincoln–Petersen method estimates that there are 30 turtles in the lake, since 10 × 15/5 = 30. Chapman estimator The Lincoln–Petersen estimator is asymptotically unbiased as sample size approaches infinity, but is biased at small sample sizes. An alternative less biased estimator of population size is given by the Chapman estimator: N̂ = (n + 1)(K + 1)/(k + 1) − 1. Sample calculation The example (n, K, k) = (10, 15, 5) gives (10 + 1)(15 + 1)/(5 + 1) − 1 ≈ 28.3. Note that the answer provided by this equation must be truncated, not rounded. Thus, the Chapman method estimates 28 turtles in the lake. Surprisingly, Chapman's estimate was one conjecture from a range of possible estimators: "In practice, the whole number immediately less than (K+1)(n+1)/(k+1) or even Kn/(k+1) will be the estimate. The above form is more convenient for mathematical purposes."(see footnote, page 144). Chapman also found the estimator could have considerable negative bias for small Kn/N (page 146), but was unconcerned because the estimated standard deviations were large for these cases. Confidence interval An approximate confidence interval for the population size N can be obtained, where z_(α/2) corresponds to the upper α/2 quantile of a standard normal random variable. The example (n, K, k) = (10, 15, 5) gives the estimate N ≈ 30 with a 95% confidence interval of 22 to 65. It has been shown that this confidence interval has actual coverage probabilities that are close to the nominal level even for small populations and extreme capture probabilities (near to 0 or 1), in which cases other confidence intervals fail to achieve the nominal coverage levels. Bayesian estimate The mean value ± standard deviation is μ ± σ, where μ = (n − 1)(K − 1)/(k − 2) for k > 2 and σ² = (n − 1)(K − 1)(n − k + 1)(K − k + 1)/((k − 2)²(k − 3)) for k > 3. A derivation is found here: Talk:Mark and recapture#Statistical treatment. The example (n, K, k) = (10, 15, 5) gives the estimate N ≈ 42 ± 21.5. Capture probability The capture probability refers to the probability of detecting an individual animal or person of interest, and has been used in both ecology and epidemiology for detecting animal or human diseases, respectively.
The capture probability is often defined as a two-variable model, in which f is defined as the fraction of a finite resource devoted to detecting the animal or person of interest from a high risk sector of an animal or human population, and q is the frequency of time that the problem (e.g., an animal disease) occurs in the high-risk versus the low-risk sector. For example, an application of the model in the 1920s was to detect typhoid carriers in London, who were either arriving from zones with high rates of tuberculosis (probability q that a passenger with the disease came from such an area, where q>0.5), or low rates (probability 1−q). It was posited that only 5 out of 100 of the travelers could be detected, and 10 out of 100 were from the high risk area. Then the capture probability P was defined as: where the first term refers to the probability of detection (capture probability) in a high risk zone, and the latter term refers to the probability of detection in a low risk zone. Importantly, the formula can be re-written as a linear equation in terms of f: Because this is a linear function, it follows that for certain versions of q for which the slope of this line (the first term multiplied by f) is positive, all of the detection resource should be devoted to the high-risk population (f should be set to 1 to maximize the capture probability), whereas for other value of q, for which the slope of the line is negative, all of the detection should be devoted to the low-risk population (f should be set to 0. We can solve the above equation for the values of q for which the slope will be positive to determine the values for which f should be set to 1 to maximize the capture probability: which simplifies to: This is an example of linear optimization. In more complex cases, where more than one resource f is devoted to more than two areas, multivariate optimization is often used, through the simplex algorithm or its derivatives. More than two visits The literature on the analysis of capture-recapture studies has blossomed since the early 1990s. There are very elaborate statistical models available for the analysis of these experiments. A simple model which easily accommodates the three source, or the three visit study, is to fit a Poisson regression model. Sophisticated mark-recapture models can be fit with several packages for the Open Source R programming language. These include "Spatially Explicit Capture-Recapture (secr)", "Loglinear Models for Capture-Recapture Experiments (Rcapture)", and "Mark-Recapture Distance Sampling (mrds)". Such models can also be fit with specialized programs such as MARK or E-SURGE. Other related methods which are often used include the Jolly–Seber model (used in open populations and for multiple census estimates) and Schnabel estimators (an expansion to the Lincoln–Petersen method for closed populations). These are described in detail by Sutherland. Integrated approaches Modelling mark-recapture data is trending towards a more integrative approach, which combines mark-recapture data with population dynamics models and other types of data. The integrated approach is more computationally demanding, but extracts more information from the data improving parameter and uncertainty estimates. See also German tank problem, for estimation of population size when the elements are numbered. Tag and release Abundance estimation GPS wildlife tracking Shadow Effect (Genetics) References Further reading Petersen, C. G. J. (1896). 
"The Yearly Immigration of Young Plaice Into the Limfjord From the German Sea", Report of the Danish Biological Station (1895), 6, 5–84. Schofield, J. R. (2007). "Beyond Defect Removal: Latent Defect Estimation With Capture-Recapture Method", Crosstalk, August 2007; 27–29. External links A historical introduction to capture-recapture methods Analysis of capture-recapture data Ecological techniques Epidemiology Statistical data types Environmental statistics Environmental Sampling Equipment Population ecology
Mark and recapture
[ "Biology", "Environmental_science" ]
2,262
[ "Epidemiology", "Environmental social science", "Environmental Sampling Equipment", "Ecological techniques" ]
1,507,752
https://en.wikipedia.org/wiki/DNS%20spoofing
DNS spoofing, also referred to as DNS cache poisoning, is a form of computer security hacking in which corrupt Domain Name System data is introduced into the DNS resolver's cache, causing the name server to return an incorrect result record, e.g. an IP address. This results in traffic being diverted to any computer that the attacker chooses. Overview of the Domain Name System A Domain Name System server translates a human-readable domain name (such as example.com) into a numerical IP address that is used to route communications between nodes. Normally if the server does not know a requested translation it will ask another server, and the process continues recursively. To increase performance, a server will typically remember (cache) these translations for a certain amount of time. This means if it receives another request for the same translation, it can reply without needing to ask any other servers, until that cache expires. When a DNS server has received a false translation and caches it for performance optimization, it is considered poisoned, and it supplies the false data to clients. If a DNS server is poisoned, it may return an incorrect IP address, diverting traffic to another computer (often an attacker's). Cache poisoning attacks Normally, a networked computer uses a DNS server provided by an Internet service provider (ISP) or the computer user's organization. DNS servers are used in an organization's network to improve resolution response performance by caching previously obtained query results. Poisoning attacks on a single DNS server can affect the users serviced directly by the compromised server or those serviced indirectly by its downstream server(s) if applicable. To perform a cache poisoning attack, the attacker exploits flaws in the DNS software. A server should correctly validate DNS responses to ensure that they are from an authoritative source (for example by using DNSSEC); otherwise the server might end up caching the incorrect entries locally and serve them to other users that make the same request. This attack can be used to redirect users from a website to another site of the attacker's choosing. For example, an attacker spoofs the IP address DNS entries for a target website on a given DNS server and replaces them with the IP address of a server under their control. The attacker then creates files on the server under their control with names matching those on the target server. These files usually contain malicious content, such as computer worms or viruses. A user whose computer has referenced the poisoned DNS server gets tricked into accepting content coming from a non-authentic server and unknowingly downloads the malicious content. This technique can also be used for phishing attacks, where a fake version of a genuine website is created to gather personal details such as bank and credit/debit card details. The vulnerability of systems to DNS cache poisoning goes beyond its immediate effects as it can open users up to further risks such as phishing, malware injections, denial of service, and website hijacking due to system vulnerabilities. Various methods, ranging from the use of social engineering tactics to the exploitation of weaknesses present in the DNS server software, can lead to these attacks. Variants In the following variants, the entries for the server would be poisoned and redirected to the attacker's name server at IP address . These attacks assume that the name server for is . 
To accomplish the attacks, the attacker must force the target DNS server to make a request for a domain controlled by one of the attacker's nameservers. Redirect the target domain's name server The first variant of DNS cache poisoning involves redirecting the name server of the attacker's domain to the name server of the target domain, then assigning that name server an IP address specified by the attacker. DNS server's request: what are the address records for ? Attacker's response: Answer: (no response) Authority section: Additional section: A vulnerable server would cache the additional A-record (IP address) for , allowing the attacker to resolve queries to the entire domain. Redirect the NS record to another target domain The second variant of DNS cache poisoning involves redirecting the nameserver of another domain unrelated to the original request to an IP address specified by the attacker. DNS server's request: what are the address records for ? Attacker's response: Answer: (no response) Authority section: Additional section: A vulnerable server would cache the unrelated authority information for 's NS-record (nameserver entry), allowing the attacker to resolve queries to the entire domain. Prevention and mitigation Many cache poisoning attacks against DNS servers can be prevented by being less trusting of the information passed to them by other DNS servers, and ignoring any DNS records passed back that are not directly relevant to the query. For example, versions of BIND 9.5.0-P1 and above perform these checks. Source port randomization for DNS requests, combined with the use of cryptographically secure random numbers for selecting both the source port and the 16-bit cryptographic nonce, can greatly reduce the probability of successful DNS race attacks. However, when routers, firewalls, proxies, and other gateway devices perform network address translation (NAT), or more specifically, port address translation (PAT), they may rewrite source ports in order to track connection state. When modifying source ports, PAT devices may remove source port randomness implemented by nameservers and stub resolvers. Secure DNS (DNSSEC) uses cryptographic digital signatures signed with a trusted public key certificate to determine the authenticity of data. DNSSEC can counter cache poisoning attacks. In 2010 DNSSEC was implemented in the Internet root zone servers., but needs to be deployed on all top level domain servers as well. The DNSSEC readiness of these is shown in the list of Internet top-level domains. As of 2020, all of the original TLDs support DNSSEC, as do country code TLDs of most large countries, but many country-code TLDs still do not. This kind of attack can be mitigated at the transport layer or application layer by performing end-to-end validation once a connection is established. A common example of this is the use of Transport Layer Security and digital signatures. For example, by using HTTPS (the secure version of HTTP), users may check whether the server's digital certificate is valid and belongs to a website's expected owner. Similarly, the secure shell remote login program checks digital certificates at endpoints (if known) before proceeding with the session. For applications that download updates automatically, the application can embed a copy of the signing certificate locally and validate the signature stored in the software update against the embedded certificate. 
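As a rough illustration of the mitigation described above, namely ignoring records that are not directly relevant to the query, the sketch below is a simplified "bailiwick" check written for this article; it is not the logic of any particular resolver such as BIND, and the record format is made up:

```python
def in_bailiwick(record_name: str, zone: str) -> bool:
    """True if record_name is the queried zone itself or a subdomain of it."""
    record_name = record_name.rstrip(".").lower()
    zone = zone.rstrip(".").lower()
    return record_name == zone or record_name.endswith("." + zone)

def filter_response(records, queried_zone):
    """Keep only records a cautious resolver would be willing to cache."""
    return [r for r in records if in_bailiwick(r["name"], queried_zone)]

# A response to a query inside example.com that tries to smuggle in data for another domain.
response = [
    {"name": "ns.example.com", "type": "A", "data": "192.0.2.1"},
    {"name": "ns.target.example", "type": "A", "data": "198.51.100.7"},  # out of bailiwick
]
print(filter_response(response, "example.com"))  # only the example.com record survives
```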
See also DNS hijacking DNS rebinding Mausezahn Pharming Root name server Dan Kaminsky References Computer security exploits Domain Name System Hacking (computer security) Internet security Internet ethics Internet service providers Types of cyberattacks
DNS spoofing
[ "Technology" ]
1,459
[ "Internet ethics", "Computer security exploits", "Ethics of science and technology" ]
1,507,852
https://en.wikipedia.org/wiki/DOM%20event
DOM (Document Object Model) Events are a signal that something has occurred, or is occurring, and can be triggered by user interactions or by the browser. Client-side scripting languages like JavaScript, JScript, VBScript, and Java can register various event handlers or listeners on the element nodes inside a DOM tree, such as in HTML, XHTML, XUL, and SVG documents. Examples of DOM Events: When a user clicks the mouse When a web page has loaded When an image has been loaded When the mouse moves over an element When an input field is changed When an HTML form is submitted When a user presses a key Historically, like DOM, the event models used by various web browsers had some significant differences which caused compatibility problems. To combat this, the event model was standardized by the World Wide Web Consortium (W3C) in DOM Level 2. Events HTML events Common events There is a huge collection of events that can be generated by most element nodes: Mouse events. Keyboard events. HTML frame/object events. HTML form events. User interface events. Mutation events (notification of any changes to the structure of a document). Progress events (used by XMLHttpRequest and File API). Note that the event classification above is not exactly the same as W3C's classification. Note that the events whose names start with "DOM" are currently not well supported, and for this and other performance reasons are deprecated by the W3C in DOM Level 3. Mozilla and Opera support DOMAttrModified, DOMNodeInserted, DOMNodeRemoved and DOMCharacterDataModified. Chrome and Safari support these events, except for DOMAttrModified. Touch events Web browsers running on touch-enabled devices, such as Apple's iOS and Google's Android, generate additional events. In the W3C draft recommendation, a TouchEvent delivers a TouchList of Touch locations, the modifier keys that were active, a TouchList of Touch locations within the targeted DOM element, and a TouchList of Touch locations that have changed since the previous TouchEvent. Apple didn't join this working group, and delayed W3C recommendation of its Touch Events Specification by disclosing patents late in the recommendation process. Pointer events Web browsers on devices with various types of input devices including mouse, touch panel, and pen may generate integrated input events. Users can see what type of input device is pressed, what button is pressed on that device, and how strongly the button is pressed when it comes to a stylus pen. As of October 2013, this event is only supported by Internet Explorer 10 and 11. Indie UI events Not yet really implemented, the Indie UI working groups want to help web application developers to be able to support standard user interaction events without having to handle different platform specific technical events that could match with it. Scripting usable interfaces can be difficult, especially when one considers that user interface design patterns differ across software platforms, hardware, and locales, and that those interactions can be further customized based on personal preference. Individuals are accustomed to the way the interface works on their own system, and their preferred interface frequently differs from that of the web application author's preferred interface. For example, web application authors, wishing to intercept a user's intent to undo the last action, need to "listen" for all the following events: Control+Z on Windows and Linux. Command+Z on Mac OS X. Shake events on some mobile devices. 
It would be simpler to listen for a single, normalized request to "undo" the previous action. Internet Explorer-specific events In addition to the common (W3C) events, two major types of events are added by Internet Explorer. Some of the events have been implemented as de facto standards by other browsers. Clipboard events. Data binding events. Note that Mozilla, Safari and Opera also support the readystatechange event for the XMLHttpRequest object. Mozilla also supports the beforeunload event using the traditional event registration method (DOM Level 0). Mozilla and Safari also support contextmenu, but Internet Explorer for Mac does not. Note that Firefox 6 and later support the beforeprint and afterprint events. XUL events In addition to the common (W3C) events, Mozilla defined a set of events that work only with XUL elements. Other events For Mozilla and Opera 9, there are also undocumented events known as DOMContentLoaded and DOMFrameContentLoaded which fire when the DOM content is loaded. These are different from "load" as they fire before the loading of related files (e.g., images). However, DOMContentLoaded has been added to the HTML 5 specification. The DOMContentLoaded event was also implemented in the Webkit rendering engine build 500+. This correlates to all versions of Google Chrome and Safari 3.1+. DOMContentLoaded is also implemented in Internet Explorer 9. Opera 9 also supports the Web Forms 2.0 events DOMControlValueChanged, invalid, forminput and formchange. Event flow Consider the situation when two event targets participate in a tree. Both have event listeners registered on the same event type, say "click". When the user clicks on the inner element, there are two possible ways to handle it: Trigger the elements from outer to inner (event capturing). This model is implemented in Netscape Navigator. Trigger the elements from inner to outer (event bubbling). This model is implemented in Internet Explorer and other browsers. W3C takes a middle position in this struggle. According to the W3C, events go through three phases when an event target participates in a tree: The capture phase: the event travels down from the root event target to the target of an event The target phase: the event travels through the event target The bubble phase (optional): the event travels back up from the target of an event to the root event target. The bubble phase will only occur for events that bubble (where event.bubbles == true) You can find a visualization of this event flow at https://domevents.dev Stopping events While an event is travelling through event listeners, the event can be stopped with event.stopPropagation() or event.stopImmediatePropagation() event.stopPropagation(): the event is stopped after all event listeners attached to the current event target in the current event phase are finished event.stopImmediatePropagation(): the event is stopped immediately and no further event listeners are executed When an event is stopped it will no longer travel along the event path. Stopping an event does not cancel an event. Legacy mechanisms to stop an event Set the event.cancelBubble to true (Internet Explorer) Set the event.returnValue property to false Canceling events A cancelable event can be canceled by calling event.preventDefault(). Canceling an event will opt out of the default browser behaviour for that event. When an event is canceled, the event.defaultPrevented property will be set to true. Canceling an event will not stop the event from traveling along the event path. 
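The three-phase dispatch order and the effect of stopping propagation can be sketched outside the browser. The short Python simulation below is not the DOM API (the class, fields, and the "stop" convention are invented for illustration); it simply walks listeners from the root down to the target and back up:

```python
class Node:
    def __init__(self, name, parent=None):
        self.name, self.parent = name, parent
        self.capture_listeners, self.bubble_listeners = [], []

def dispatch(target):
    """Run capture listeners from root to target, then bubble listeners back up."""
    path, node = [], target
    while node is not None:
        path.append(node)
        node = node.parent
    path.reverse()                       # root ... target
    stopped = False
    for node in path:                    # capture phase (includes the target)
        for fn in node.capture_listeners:
            if fn(node) == "stop":       # remaining listeners on this node still run
                stopped = True
        if stopped:
            return
    for node in reversed(path):          # bubble phase (includes the target)
        for fn in node.bubble_listeners:
            if fn(node) == "stop":
                stopped = True
        if stopped:
            return

root = Node("document")
div = Node("div", parent=root)
button = Node("button", parent=div)
root.capture_listeners.append(lambda n: print("capture at", n.name))
button.bubble_listeners.append(lambda n: print("target/bubble at", n.name))
div.bubble_listeners.append(lambda n: print("bubble at", n.name))
dispatch(button)
# capture at document
# target/bubble at button
# bubble at div
```

As in the DOM, stopping propagation in this sketch still lets the remaining listeners on the current node finish before the walk ends.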
Event object The Event object provides a lot of information about a particular event, including information about target element, key pressed, mouse button pressed, mouse position, etc. Unfortunately, there are very serious browser incompatibilities in this area. Hence only the W3C Event object is discussed in this article. Event handling models DOM Level 0 This event handling model was introduced by Netscape Navigator, and remains the most cross-browser model . There are two model types: the inline model and the traditional model. Inline model In the inline model, event handlers are added as attributes of elements. In the example below, an alert dialog box with the message "Hey Joe" appears after the hyperlink is clicked. The default click action is cancelled by returning false in the event handler. <!doctype html> <html lang="en"> <head> <meta charset="utf-8"> <title>Inline Event Handling</title> </head> <body> <h1>Inline Event Handling</h1> <p>Hey <a href="http://www.example.com" onclick="triggerAlert('Joe'); return false;">Joe</a>!</p> <script> function triggerAlert(name) { window.alert("Hey " + name); } </script> </body> </html> One common misconception with the inline model is the belief that it allows the registration of event handlers with custom arguments, e.g. name in the triggerAlert function. While it may seem like that is the case in the example above, what is really happening is that the JavaScript engine of the browser creates an anonymous function containing the statements in the onclick attribute. The onclick handler of the element would be bound to the following anonymous function: function () { triggerAlert('Joe'); return false; } This limitation of the JavaScript event model is usually overcome by assigning attributes to the function object of the event handler or by using closures. Traditional model In the traditional model, event handlers can be added or removed by scripts. Like the inline model, each event can only have one event handler registered. The event is added by assigning the handler name to the event property of the element object. To remove an event handler, simply set the property to null: <!doctype html> <html lang="en"> <head> <meta charset="utf-8"> <title>Traditional Event Handling</title> </head> <body> <h1>Traditional Event Handling</h1> <p>Hey Joe!</p> <script> var triggerAlert = function () { window.alert("Hey Joe"); }; // Assign an event handler document.onclick = triggerAlert; // Assign another event handler window.onload = triggerAlert; // Remove the event handler that was just assigned window.onload = null; </script> </body> </html> To add parameters: var name = 'Joe'; document.onclick = (function (name) { return function () { alert('Hey ' + name + '!'); }; }(name)); Inner functions preserve their scope. DOM Level 2 The W3C designed a more flexible event handling model in DOM Level 2. Some useful things to know : To prevent an event from bubbling, developers must call the stopPropagation() method of the event object. To prevent the default action of the event to be called, developers must call the preventDefault() method of the event object. The main difference from the traditional model is that multiple event handlers can be registered for the same event. The useCapture option can also be used to specify that the handler should be called in the capture phase instead of the bubbling phase. This model is supported by Mozilla, Opera, Safari, Chrome and Konqueror. 
A rewrite of the example used in the traditional model <!doctype html> <html lang="en"> <head> <meta charset="utf-8"> <title>DOM Level 2</title> </head> <body> <h1>DOM Level 2</h1> <p>Hey Joe!</p> <script> var heyJoe = function () { window.alert("Hey Joe!"); } // Add an event handler document.addEventListener( "click", heyJoe, true ); // capture phase // Add another event handler window.addEventListener( "load", heyJoe, false ); // bubbling phase // Remove the event handler just added window.removeEventListener( "load", heyJoe, false ); </script> </body> </html> Internet Explorer-specific model Microsoft Internet Explorer prior to version 8 does not follow the W3C model, as its own model was created prior to the ratification of the W3C standard. Internet Explorer 9 follows DOM level 3 events, and Internet Explorer 11 deletes its support for Microsoft-specific model. Some useful things to know : To prevent an event bubbling, developers must set the event's cancelBubble property. To prevent the default action of the event to be called, developers must set the event's returnValue property. The this keyword refers to the global window object. Again, this model differs from the traditional model in that multiple event handlers can be registered for the same event. However the useCapture option can not be used to specify that the handler should be called in the capture phase. This model is supported by Microsoft Internet Explorer and Trident based browsers (e.g. Maxthon, Avant Browser). A rewrite of the example used in the old Internet Explorer-specific model <!doctype html> <html lang="en"> <head> <meta charset="utf-8"> <title>Internet Explorer-specific model</title> </head> <body> <h1>Internet Explorer-specific model</h1> <p>Hey Joe!</p> <script> var heyJoe = function () { window.alert("Hey Joe!"); } // Add an event handler document.attachEvent("onclick", heyJoe); // Add another event handler window.attachEvent("onload", heyJoe); // Remove the event handler just added window.detachEvent("onload", heyJoe); </script> </body> </html> References Further reading Deitel, Harvey. (2002). Internet and World Wide Web: how to program (Second Edition). The Mozilla Organization. (2009). DOM Event Reference. Retrieved August 25, 2009. Quirksmode (2008). Event compatibility tables. Retrieved November 27, 2008. http://www.sitepen.com/blog/2008/07/10/touching-and-gesturing-on-the-iphone/ External links Document Object Model (DOM) Level 2 Events Specification Document Object Model (DOM) Level 3 Events Working Draft DOM4: Events (Editor's Draft) UI Events Working Draft Pointer Events W3C Candidate Recommendation MSDN PointerEvent domevents.dev - A visualizer to learn about DOM Events through exploration JS fiddle for Event Bubbling and Capturing World Wide Web Consortium standards Application programming interfaces Events (computing)
DOM event
[ "Technology" ]
3,135
[ "Information systems", "Events (computing)" ]
1,508,243
https://en.wikipedia.org/wiki/Push-button
A push-button (also spelled pushbutton) or simply button is a simple switch mechanism to control some aspect of a machine or a process. Buttons are typically made out of hard material, usually plastic or metal. The surface is usually flat or shaped to accommodate the human finger or hand, so as to be easily depressed or pushed. Buttons are most often biased switches, although many un-biased buttons (due to their physical nature) still require a spring to return to their un-pushed state. Terms for the "pushing" of a button include pressing, depressing, mashing, slapping, hitting, and punching. Uses The "push-button" has been utilized in calculators, push-button telephones, kitchen appliances, and various other mechanical and electronic devices, home and commercial. In industrial and commercial applications, push buttons can be connected together by a mechanical linkage so that the act of pushing one button causes the other button to be released. In this way, a stop button can "force" a start button to be released. This method of linkage is used in simple manual operations in which the machine or process has no electrical circuits for control. Red pushbuttons can also have large heads (called mushroom heads) for easy operation and to facilitate the stopping of a machine. These pushbuttons are called emergency stop buttons and for increased safety are mandated by the electrical code in many jurisdictions. This large mushroom shape can also be found in buttons for use with operators who need to wear gloves for their work and could not actuate a regular flush-mounted push button. As an aid for operators and users in industrial or commercial applications, a pilot light is commonly added to draw the attention of the user and to provide feedback if the button is pushed. Typically this light is included into the center of the pushbutton and a lens replaces the pushbutton hard center disk. The source of the energy to illuminate the light is not directly tied to the contacts on the back of the pushbutton but to the action the pushbutton controls. In this way a start button when pushed will cause the process or machine operation to be started and a secondary contact designed into the operation or process will close to turn on the pilot light and signify the action of pushing the button caused the resultant process or action to start. To avoid an operator from pushing the wrong button in error, pushbuttons are often color-coded to associate them with their function. Commonly used colors are red for stopping the machine or process and green for starting the machine or process. In popular culture, the phrase "the button" (sometimes capitalized) refers to a (usually fictional) button that a military or government leader could press to launch nuclear weapons. Scram and scramble switches Akin to fire alarm switches, some big red buttons, when deployed with suitable visual and audible warnings such as flashing lights and sirens for extreme exigent emergencies, are known as "scram switches" (from the slang term scram, "get out of here"). Generally, such buttons are connected to large scale functions, beyond a regular fire alarm, such as automated shutdown procedures, complete facility power cut, fire suppression like halon release, etc. A variant of this is the scramble switch which triggers an alarm to activate emergent personnel to proactively attend to and go to such disasters. 
An air raid siren at an air base initiates such action, where the fighter pilots are alerted and "scrambled" to their planes to defend the base. History Push buttons were invented sometime in the late 19th century, certainly no later than 1880. The name came from the French word (something that sticks out), rather than from the kind of buttons used on clothing. The initial public reaction was curiosity mixed with fear, some of which was due to widespread fear of electricity, which was a relatively new technology at the time. See also Event-driven programming Button accordion Button (computing) Keyboard (computing) Panic button Placebo button Push-button telephone Reset button Shutter button Turbo button References Further reading Rachel Plotnick, Power Button: A History of Pleasure, Panic and the Politics of Pushing, MIT Press, 2018, , reviewed in David Trotter, "Making doorbells ring", London Review of Books 22 November 2018 External links Spring Return Button by Sándor Kabai, The Wolfram Demonstrations Project. Human–machine interaction Switches
Push-button
[ "Physics", "Technology", "Engineering", "Biology" ]
895
[ "Machines", "Behavior", "Physical systems", "Human–machine interaction", "Design", "Human behavior" ]
1,508,434
https://en.wikipedia.org/wiki/Coplanarity
In geometry, a set of points in space are coplanar if there exists a geometric plane that contains them all. For example, three points are always coplanar, and if the points are distinct and non-collinear, the plane they determine is unique. However, a set of four or more distinct points will, in general, not lie in a single plane. Two lines in three-dimensional space are coplanar if there is a plane that includes them both. This occurs if the lines are parallel, or if they intersect each other. Two lines that are not coplanar are called skew lines. Distance geometry provides a solution technique for the problem of determining whether a set of points is coplanar, knowing only the distances between them. Properties in three dimensions In three-dimensional space, two linearly independent vectors with the same initial point determine a plane through that point. Their cross product is a normal vector to that plane, and any vector orthogonal to this cross product through the initial point will lie in the plane. This leads to the following coplanarity test using a scalar triple product: Four distinct points, , are coplanar if and only if, which is also equivalent to If three vectors are coplanar, then if (i.e., and are orthogonal) then where denotes the unit vector in the direction of . That is, the vector projections of on and on add to give the original . Coplanarity of points in n dimensions whose coordinates are given Since three or fewer points are always coplanar, the problem of determining when a set of points are coplanar is generally of interest only when there are at least four points involved. In the case that there are exactly four points, several ad hoc methods can be employed, but a general method that works for any number of points uses vector methods and the property that a plane is determined by two linearly independent vectors. In an -dimensional space where , a set of points are coplanar if and only if the matrix of their relative differences, that is, the matrix whose columns (or rows) are the vectors is of rank 2 or less. For example, given four points if the matrix is of rank 2 or less, the four points are coplanar. In the special case of a plane that contains the origin, the property can be simplified in the following way: A set of points and the origin are coplanar if and only if the matrix of the coordinates of the points is of rank 2 or less. Geometric shapes A skew polygon is a polygon whose vertices are not coplanar. Such a polygon must have at least four vertices; there are no skew triangles. A polyhedron that has positive volume has vertices that are not all coplanar. See also Collinearity Plane of incidence References External links Planes (geometry)
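The rank test above is easy to apply numerically. The sketch below, which assumes NumPy is available and uses made-up sample points, checks four points via the rank of the matrix of relative differences and, equivalently in three dimensions, via the scalar triple product:

```python
import numpy as np

def coplanar(points, tol=1e-9):
    """Points (rows of an array) are coplanar iff their relative differences have rank <= 2."""
    pts = np.asarray(points, dtype=float)
    diffs = pts[1:] - pts[0]
    return np.linalg.matrix_rank(diffs, tol=tol) <= 2

p1, p2, p3, p4 = [0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]   # all in the z = 0 plane
print(coplanar([p1, p2, p3, p4]))   # True

# Equivalent 3-D test: the scalar triple product of the difference vectors vanishes.
triple = np.dot(np.array(p4) - np.array(p1),
                np.cross(np.array(p2) - np.array(p1), np.array(p3) - np.array(p1)))
print(abs(triple) < 1e-9)           # True
```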
Coplanarity
[ "Mathematics" ]
583
[ "Planes (geometry)", "Mathematical objects", "Infinity" ]
1,508,442
https://en.wikipedia.org/wiki/Logarithmically%20concave%20function
In convex analysis, a non-negative function is logarithmically concave (or log-concave for short) if its domain is a convex set, and if it satisfies the inequality for all and . If is strictly positive, this is equivalent to saying that the logarithm of the function, , is concave; that is, for all and . Examples of log-concave functions are the 0-1 indicator functions of convex sets (which requires the more flexible definition), and the Gaussian function. Similarly, a function is log-convex if it satisfies the reverse inequality for all and . Properties A log-concave function is also quasi-concave. This follows from the fact that the logarithm is monotone implying that the superlevel sets of this function are convex. Every concave function that is nonnegative on its domain is log-concave. However, the reverse does not necessarily hold. An example is the Gaussian function  =  which is log-concave since  =  is a concave function of . But is not concave since the second derivative is positive for || > 1: From above two points, concavity log-concavity quasiconcavity. A twice differentiable, nonnegative function with a convex domain is log-concave if and only if for all satisfying , , i.e. is negative semi-definite. For functions of one variable, this condition simplifies to Operations preserving log-concavity Products: The product of log-concave functions is also log-concave. Indeed, if and are log-concave functions, then and are concave by definition. Therefore is concave, and hence also is log-concave. Marginals: if  :  is log-concave, then is log-concave (see Prékopa–Leindler inequality). This implies that convolution preserves log-concavity, since  =  is log-concave if and are log-concave, and therefore is log-concave. Log-concave distributions Log-concave distributions are necessary for a number of algorithms, e.g. adaptive rejection sampling. Every distribution with log-concave density is a maximum entropy probability distribution with specified mean μ and Deviation risk measure D. As it happens, many common probability distributions are log-concave. Some examples: the normal distribution and multivariate normal distributions, the exponential distribution, the uniform distribution over any convex set, the logistic distribution, the extreme value distribution, the Laplace distribution, the chi distribution, the hyperbolic secant distribution, the Wishart distribution, if n ≥ p + 1, the Dirichlet distribution, if all parameters are ≥ 1, the gamma distribution if the shape parameter is ≥ 1, the chi-square distribution if the number of degrees of freedom is ≥ 2, the beta distribution if both shape parameters are ≥ 1, and the Weibull distribution if the shape parameter is ≥ 1. Note that all of the parameter restrictions have the same basic source: The exponent of non-negative quantity must be non-negative in order for the function to be log-concave. The following distributions are non-log-concave for all parameters: the Student's t-distribution, the Cauchy distribution, the Pareto distribution, the log-normal distribution, and the F-distribution. Note that the cumulative distribution function (CDF) of all log-concave distributions is also log-concave. However, some non-log-concave distributions also have log-concave CDF's: the log-normal distribution, the Pareto distribution, the Weibull distribution when the shape parameter < 1, and the gamma distribution when the shape parameter < 1. 
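Log-concavity of a one-variable density can be spot-checked numerically by testing whether the second differences of log f are non-positive on a grid. The sketch below (illustrative only, assuming NumPy) applies this to the Gaussian density and to the Cauchy density, which the text lists as log-concave and non-log-concave respectively:

```python
import numpy as np

def is_log_concave(f, lo, hi, num=2001, tol=1e-9):
    """Numerically test whether log f has non-positive second differences on [lo, hi]."""
    x = np.linspace(lo, hi, num)
    logf = np.log(f(x))
    return bool(np.all(np.diff(logf, n=2) <= tol))

gaussian = lambda x: np.exp(-x**2 / 2)            # log-concave: log f = -x^2/2
cauchy   = lambda x: 1.0 / (np.pi * (1 + x**2))   # not log-concave in the tails

print(is_log_concave(gaussian, -5, 5))  # True
print(is_log_concave(cauchy, -5, 5))    # False
```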
The following are among the properties of log-concave distributions: If a density is log-concave, so is its cumulative distribution function (CDF). If a multivariate density is log-concave, so is the marginal density over any subset of variables. The sum of two independent log-concave random variables is log-concave. This follows from the fact that the convolution of two log-concave functions is log-concave. The product of two log-concave functions is log-concave. This means that joint densities formed by multiplying two probability densities (e.g. the normal-gamma distribution, which always has a shape parameter ≥ 1) will be log-concave. This property is heavily used in general-purpose Gibbs sampling programs such as BUGS and JAGS, which are thereby able to use adaptive rejection sampling over a wide variety of conditional distributions derived from the product of other distributions. If a density is log-concave, so is its survival function. If a density is log-concave, it has a monotone hazard rate (MHR), and is a regular distribution since the derivative of the logarithm of the survival function is the negative hazard rate, and by concavity is monotone i.e. which is decreasing as it is the derivative of a concave function. See also logarithmically concave sequence logarithmically concave measure logarithmically convex function convex function Notes References Mathematical analysis Convex analysis
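The monotone hazard rate property listed above can likewise be spot-checked. This sketch (an illustration assuming SciPy is available) computes the hazard rate f/S of the standard normal distribution on a grid and confirms that it is non-decreasing:

```python
import numpy as np
from scipy.stats import norm

x = np.linspace(-5, 5, 1001)
hazard = norm.pdf(x) / norm.sf(x)               # hazard rate h(x) = f(x) / S(x)
print(bool(np.all(np.diff(hazard) >= -1e-12)))  # True: h is (numerically) non-decreasing
```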
Logarithmically concave function
[ "Mathematics" ]
1,102
[ "Mathematical analysis" ]
1,508,445
https://en.wikipedia.org/wiki/Ergosphere
In astrophysics, the ergosphere is a region located outside a rotating black hole's outer event horizon. Its name was proposed by Remo Ruffini and John Archibald Wheeler during the Les Houches lectures in 1971 and is derived . It received this name because it is theoretically possible to extract energy and mass from this region. The ergosphere touches the event horizon at the poles of a rotating black hole and extends to a greater radius at the equator. A black hole with modest angular momentum has an ergosphere with a shape approximated by an oblate spheroid, while faster spins produce a more pumpkin-shaped ergosphere. The equatorial (maximal) radius of an ergosphere is the Schwarzschild radius, the radius of a non-rotating black hole. The polar (minimal) radius is also the polar (minimal) radius of the event horizon which can be as little as half the Schwarzschild radius for a maximally rotating black hole. Rotation As a black hole rotates, it twists spacetime in the direction of the rotation at a speed that decreases with distance from the event horizon. This process is known as the Lense–Thirring effect or frame-dragging. Because of this dragging effect, an object within the ergosphere cannot appear stationary with respect to an outside observer at a great distance unless that object were to move at faster than the speed of light (an impossibility) with respect to the local spacetime. The speed necessary for such an object to appear stationary decreases at points further out from the event horizon, until at some distance the required speed is negligible. The set of all such points defines the ergosphere surface, called ergosurface. The outer surface of the ergosphere is called the static surface or static limit. This is because world lines change from being time-like outside the static limit to being space-like inside it. It is the speed of light that arbitrarily defines the ergosphere surface. Such a surface would appear as an oblate that is coincident with the event horizon at the pole of rotation, but at a greater distance from the event horizon at the equator. Outside this surface, space is still dragged, but at a lesser rate. Radial pull A suspended plumb, held stationary outside the ergosphere, will experience an infinite/diverging radial pull as it approaches the static limit. At some point it will start to fall, resulting in a gravitomagnetically induced spinward motion. An implication of this dragging of space is the existence of negative energies within the ergosphere. Since the ergosphere is outside the event horizon, it is still possible for objects that enter that region with sufficient velocity to escape from the gravitational pull of the black hole. An object can gain energy by entering the black hole's rotation and then escaping from it, thus taking some of the black hole's energy with it (making the maneuver similar to the exploitation of the Oberth effect around "normal" space objects). This process of removing energy from a rotating black hole was proposed by the mathematician Roger Penrose in 1969 and is called the Penrose process. The maximal amount of energy gain possible for a single particle via this process is 20.7% in terms of its mass equivalence, and if this process is repeated by the same mass, the theoretical maximal energy gain approaches 29% of its original mass-energy equivalent. As this energy is removed, the black hole loses angular momentum, and thus the limit of zero rotation is approached as spacetime dragging is reduced. 
In the limit, the ergosphere no longer exists. This process is considered a possible explanation for the energy source of such energetic phenomena as gamma-ray bursts. Results from computer models show that the Penrose process is capable of producing the high-energy particles that are observed being emitted from quasars and other active galactic nuclei. Ergosphere size The size of the ergosphere, the distance between the ergosurface and the event horizon, is not necessarily proportional to the radius of the event horizon, but rather depends on the black hole's gravity and its angular momentum. A point at the poles does not move, and thus has no angular momentum, while a point at the equator has the greatest angular momentum. This variation of angular momentum, which extends from the poles to the equator, is what gives the ergosphere its oblate shape. As the mass of the black hole or its rotation speed increases, the size of the ergosphere increases as well. References Further reading External links Black Hole Thermodynamics The Gravitomagnetic Field and Penrose Processes A Rotating Black Hole Black holes Physical phenomena
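The dependence of the ergosurface on spin can be made concrete with the standard Kerr-metric radii. These formulas are not given in the article above and are quoted here as textbook results in geometrized units (G = c = 1, Boyer-Lindquist coordinates): the outer event horizon lies at r+ = M + sqrt(M^2 - a^2) and the outer ergosurface at r_E(theta) = M + sqrt(M^2 - a^2 cos^2 theta), where a is the spin parameter with 0 <= a <= M.

```python
import numpy as np

def horizon_radius(M, a):
    """Outer event horizon radius of a Kerr black hole, geometrized units (G = c = 1)."""
    return M + np.sqrt(M**2 - a**2)

def ergosurface_radius(M, a, theta):
    """Outer ergosurface radius at polar angle theta (theta = 0 on the rotation axis)."""
    return M + np.sqrt(M**2 - a**2 * np.cos(theta)**2)

M = 1.0
for a in (0.0, 0.5, 1.0):  # non-rotating, moderately spinning, maximally spinning
    r_plus = horizon_radius(M, a)
    r_pole = ergosurface_radius(M, a, 0.0)          # touches the horizon at the poles
    r_equator = ergosurface_radius(M, a, np.pi / 2) # always 2M (the Schwarzschild radius)
    print(f"a = {a}: horizon {r_plus:.3f}, pole {r_pole:.3f}, equator {r_equator:.3f}")
```

For a = M the horizon radius equals M, half the Schwarzschild radius 2M, while the equatorial ergosurface stays at 2M, matching the statements above.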
Ergosphere
[ "Physics", "Astronomy" ]
977
[ "Black holes", "Physical phenomena", "Physical quantities", "Unsolved problems in physics", "Astrophysics", "Density", "Stellar phenomena", "Astronomical objects" ]
1,508,474
https://en.wikipedia.org/wiki/Robert%20W.%20Holley
Robert William Holley (January 28, 1922 – February 11, 1993) was an American biochemist. He shared the Nobel Prize in Physiology or Medicine in 1968 (with Har Gobind Khorana and Marshall Warren Nirenberg) for describing the structure of alanine transfer RNA, linking DNA and protein synthesis. Holley was born in Urbana, Illinois, and graduated from Urbana High School in 1938. He went on to study chemistry at the University of Illinois at Urbana-Champaign, graduating in 1942 and commencing his PhD studies in organic chemistry at Cornell University. During World War II Holley spent two years working under Professor Vincent du Vigneaud at Cornell University Medical College, where he was involved in the first chemical synthesis of penicillin. Holley completed his PhD studies in 1947. Following his graduate studies Holley remained associated with Cornell. He became an assistant professor of organic chemistry in 1948, and was appointed as professor of biochemistry in 1962. He began his research on RNA after spending a year's sabbatical (1955–1956) studying with James F. Bonner at the California Institute of Technology. Holley's research on RNA focused first on isolating transfer RNA (tRNA), and later on determining the sequence and structure of alanine tRNA, the molecule that incorporates the amino acid alanine into proteins. Holley's team of researchers determined the tRNA's structure by using two ribonucleases to split the tRNA molecule into pieces. Each enzyme split the molecule at location points for specific nucleotides. By a process of "puzzling out" the structure of the pieces split by the two different enzymes, then comparing the pieces from both enzyme splits, the team eventually determined the entire structure of the molecule. The group of researchers include Elizabeth Beach Keller, who developed the cloverleaf model that describes transfer RNA, during the course of the research. The structure was completed in 1964, and was a key discovery in explaining the synthesis of proteins from messenger RNA. It was also the first nucleotide sequence of a ribonucleic acid ever determined. Holley was awarded the Nobel Prize in Physiology or Medicine in 1968 for this discovery, and Har Gobind Khorana and Marshall W. Nirenberg were also awarded the prize that year for contributions to the understanding of protein synthesis. Using the Holley team's method, other scientists determined the structures of the remaining tRNA's. A few years later the method was modified to help track the sequence of nucleotides in various bacterial, plant, and human viruses. In 1968 Holley became a resident fellow at the Salk Institute for Biological Studies in La Jolla, California. According to the New York Times obituary, "He was an avid outdoorsman and an amateur sculptor of bronze." His widow Ann died in 1996. See also History of RNA biology List of RNA biologists References External links 1922 births 1993 deaths Nobel laureates in Physiology or Medicine American Nobel laureates 20th-century American biochemists History of biotechnology History of genetics Weill Cornell Medical College alumni People from Urbana, Illinois University of Illinois Urbana-Champaign alumni Recipients of the Albert Lasker Award for Basic Medical Research Members of the United States National Academy of Sciences Salk Institute for Biological Studies people
Robert W. Holley
[ "Biology" ]
668
[ "History of biotechnology" ]
1,508,507
https://en.wikipedia.org/wiki/Vector%20projection
The vector projection (also known as the vector component or vector resolution) of a vector on (or onto) a nonzero vector is the orthogonal projection of onto a straight line parallel to . The projection of onto is often written as or . The vector component or vector resolute of perpendicular to , sometimes also called the vector rejection of from (denoted or ), is the orthogonal projection of onto the plane (or, in general, hyperplane) that is orthogonal to . Since both and are vectors, and their sum is equal to , the rejection of from is given by: To simplify notation, this article defines and Thus, the vector is parallel to the vector is orthogonal to and The projection of onto can be decomposed into a direction and a scalar magnitude by writing it as where is a scalar, called the scalar projection of onto , and is the unit vector in the direction of . The scalar projection is defined as where the operator ⋅ denotes a dot product, ‖a‖ is the length of , and θ is the angle between and . The scalar projection is equal in absolute value to the length of the vector projection, with a minus sign if the direction of the projection is opposite to the direction of , that is, if the angle between the vectors is more than 90 degrees. The vector projection can be calculated using the dot product of and as: Notation This article uses the convention that vectors are denoted in a bold font (e.g. ), and scalars are written in normal font (e.g. a1). The dot product of vectors and is written as , the norm of is written ‖a‖, the angle between and is denoted θ. Definitions based on angle θ Scalar projection The scalar projection of on is a scalar equal to where θ is the angle between and . A scalar projection can be used as a scale factor to compute the corresponding vector projection. Vector projection The vector projection of on is a vector whose magnitude is the scalar projection of on with the same direction as . Namely, it is defined as where is the corresponding scalar projection, as defined above, and is the unit vector with the same direction as : Vector rejection By definition, the vector rejection of on is: Hence, Definitions in terms of a and b When is not known, the cosine of can be computed in terms of and , by the following property of the dot product Scalar projection By the above-mentioned property of the dot product, the definition of the scalar projection becomes: In two dimensions, this becomes Vector projection Similarly, the definition of the vector projection of onto becomes: which is equivalent to either or Scalar rejection In two dimensions, the scalar rejection is equivalent to the projection of onto , which is rotated 90° to the left. Hence, Such a dot product is called the "perp dot product." Vector rejection By definition, Hence, By using the Scalar rejection using the perp dot product this gives Properties Scalar projection The scalar projection on is a scalar which has a negative sign if 90 degrees < θ ≤ 180 degrees. It coincides with the length of the vector projection if the angle is smaller than 90°. More exactly: if , if . Vector projection The vector projection of on is a vector which is either null or parallel to . More exactly: if , and have the same direction if , and have opposite directions if . Vector rejection The vector rejection of on is a vector which is either null or orthogonal to . More exactly: if or , is orthogonal to if , Matrix representation The orthogonal projection can be represented by a projection matrix. 
To project a vector onto the unit vector , it would need to be multiplied with this projection matrix: Uses The vector projection is an important operation in the Gram–Schmidt orthonormalization of vector space bases. It is also used in the separating axis theorem to detect whether two convex shapes intersect. Generalizations Since the notions of vector length and angle between vectors can be generalized to any n-dimensional inner product space, this is also true for the notions of orthogonal projection of a vector, projection of a vector onto another, and rejection of a vector from another. In some cases, the inner product coincides with the dot product. Whenever they don't coincide, the inner product is used instead of the dot product in the formal definitions of projection and rejection. For a three-dimensional inner product space, the notions of projection of a vector onto another and rejection of a vector from another can be generalized to the notions of projection of a vector onto a plane, and rejection of a vector from a plane. The projection of a vector on a plane is its orthogonal projection on that plane. The rejection of a vector from a plane is its orthogonal projection on a straight line which is orthogonal to that plane. Both are vectors. The first is parallel to the plane, the second is orthogonal. For a given vector and plane, the sum of projection and rejection is equal to the original vector. Similarly, for inner product spaces with more than three dimensions, the notions of projection onto a vector and rejection from a vector can be generalized to the notions of projection onto a hyperplane, and rejection from a hyperplane. In geometric algebra, they can be further generalized to the notions of projection and rejection of a general multivector onto/from any invertible k-blade. See also Scalar projection Vector notation References External links Projection of a vector onto a plane projection Transformation (function) Functions and mappings
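A compact numerical illustration of the projection, rejection, and projection-matrix ideas above (a sketch assuming NumPy; the function names are not from the article):

```python
import numpy as np

def project(a, b):
    """Vector projection of a onto b: (a.b / b.b) * b."""
    b = np.asarray(b, dtype=float)
    return (np.dot(a, b) / np.dot(b, b)) * b

def reject(a, b):
    """Vector rejection of a from b: the component of a orthogonal to b."""
    return np.asarray(a, dtype=float) - project(a, b)

a = np.array([2.0, 3.0, 1.0])
b = np.array([1.0, 0.0, 0.0])
a1 = project(a, b)          # [2. 0. 0.]
a2 = reject(a, b)           # [0. 3. 1.]
print(np.allclose(a1 + a2, a), np.isclose(np.dot(a2, b), 0.0))  # True True

# Projection onto the unit vector b_hat as a matrix: P = b_hat b_hat^T, so P @ a == project(a, b).
b_hat = b / np.linalg.norm(b)
P = np.outer(b_hat, b_hat)
print(np.allclose(P @ a, a1))  # True
```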
Vector projection
[ "Mathematics" ]
1,118
[ "Functions and mappings", "Mathematical analysis", "Transformation (function)", "Mathematical objects", "Mathematical relations", "Geometry" ]
1,508,678
https://en.wikipedia.org/wiki/Kansei%20engineering
Kansei engineering (Japanese: 感性工学 kansei kougaku, emotional or affective engineering) aims at the development or improvement of products and services by translating the customer's psychological feelings and needs into the domain of product design (i.e. parameters). It was founded by Mitsuo Nagamachi, professor emeritus of Hiroshima University (also former Dean of Hiroshima International University and CEO of International Kansei Design Institute). Kansei engineering parametrically links the customer's emotional responses (i.e. physical and psychological) to the properties and characteristics of a product or service. In consequence, products can be designed to bring forward the intended feeling. It has been adopted as one of the topics for professional development by the Royal Statistical Society. Introduction Product design has become increasingly complex as products contain more functions and have to meet increasing demands such as user-friendliness, manufacturability and ecological considerations. With a shortened product lifecycle, development costs are likely to increase. Since errors in the estimations of market trends can be very expensive, companies therefore perform benchmarking studies that compare with competitors on strategic, process, marketing, and product levels. However, success in a certain market segment not only requires knowledge about the competitors and the performance of competing products, but also about the impressions which a product leaves to the customer. The latter requirement becomes much more important as products and companies are becoming mature. Customers purchase products based on subjective terms such as brand image, reputation, design, impression etc.. A large number of manufacturers have started to consider such subjective properties and develop their products in a way that conveys the company image. A reliable instrument is therefore needed: an instrument which can predict the reception of a product on the market before the development costs become too large. This demand has triggered the research dealing with the translation of the customer's subjective, hidden needs into concrete products. Research is done foremost in Asia, including Japan and Korea. In Europe, a network has been forged under the 6th EU framework. This network refers to the new research field as "emotional design" or "affective engineering". History People want to use products that are functional at the physical level, usable at the psychological level and attractive at the emotional level. Affective engineering is the study of the interactions between the customer and the product at that third level. It focuses on the relationships between the physical traits of a product and its affective influence on the user. Thanks to this field of research, it is possible to gain knowledge on how to design more attractive products and make the customers satisfied. Methods in affective engineering (or Kansei engineering) is one of the major areas of ergonomics (human factor engineering). The study of integrating affective values in artifacts is not new at all. Already in the 18th century philosophers such as Baumgarten and Kant established the area of aesthetics. In addition to pure practical values, artifacts always also had an affective component. One example is jewellery found in excavations from the Stone Ages. The period of Renaissance is also a good example. In the middle of the 20th century, the idea of aesthetics was deployed in scientific contexts. Charles E. 
Osgood developed his semantic differential method in which he quantified the peoples' perceptions of artifacts. Some years later, in 1960, Professors Shigeru Mizuno and Yoji Akao developed an engineering approach in order to connect peoples' needs to product properties. This method was called quality function deployment (QFD). Another method, the Kano model, was developed in the field of quality in the early 1980s by Professor Noriaki Kano, of Tokyo University. Kano's model is used to establish the importance of individual product features for the customer's satisfaction and hence it creates the optimal requirement for process oriented product development activities. A pure marketing technique is conjoint analysis. Conjoint analysis estimates the relative importance of a product's attributes by analysing the consumer's overall judgment of a product or service. A more artistic method is called Semantic description of environments. It is mainly a tool for examining how a single person or a group of persons experience a certain (architectural) environment. Although all of these methods are concerned with subjective impact, none of them can translate this impact to design parameters sufficiently. This can, however, be accomplished by Kansei engineering. Kansei engineering (KE) has been used as a tool for affective engineering. It was developed in the early 70s in Japan and is now widely spread among Japanese companies. In the middle of the 90s, the method spread to the United States, but cultural differences may have prevented the method to enfold its whole potential. Procedure As mentioned above, Kansei engineering can be considered as a methodology within the research field of 'affective engineering'. Some researchers have identified the content of the methodology. Shimizu et al. state that 'Kansei Engineering is used as a tool for product development and the basic principles behind it are the following: identification of product properties and correlation between those properties and the design characteristics'. According to Nagasawa, one of the forerunners of Kansei engineering, there are three focal points in the method: How to accurately understand consumer Kansei How to reflect and translate Kansei understanding into product design How to create a system and organization for Kansei orientated design A model on methodology Different types of Kansei engineering are identified and applied in various contexts. Schütte examined different types of Kansei engineering and developed a general model covering the contents of Kansei engineering. Choice of Domain Domain in this context describes the overall idea behind an assembly of products, i.e. the product type in general. Choosing the domain includes the definition of the intended target group and user type, market-niche and type, and the product group in question. Choosing and defining the domain are carried out on existing products, concepts and on design solutions yet unknown. From this, a domain description is formulated, serving as the basis for further evaluation. The process is necessary and has been described by Schütte in detail in a couple of publications. Span the Semantic Space The expression Semantic space was addressed for the first time by Osgood et al.. He posed that every artifact can be described in a certain vector space defined by semantic expressions (words). This is done by collecting a large number of words that describe the domain. 
Suitable sources are pertinent literature, commercials, manuals, specification lists, experts, etc. The number of words gathered varies according to the product, typically between 100 and 1000 words. In a second step, the words are grouped using manual methods (e.g. affinity diagrams) or mathematical methods (e.g. factor and/or cluster analysis). Finally, a few representative words are selected from this set; these words span the Semantic Space. These words are called "Kansei words" or "Kansei Engineering words". Span the Space of Properties The next step is to span the Space of Product Properties, which is similar to the Semantic Space. The Space of Product Properties collects products representing the domain, identifies key features and selects product properties for further evaluation. The collection of products representing the domain is done from different sources such as existing products, customer suggestions, possible technical solutions and design concepts. The key features are found using specification lists for the products in question. To select properties for further evaluation, a Pareto diagram can assist the decision between important and less important features. Synthesis In the synthesis step, the Semantic Space and the Space of Properties are linked together, as displayed in Figure 3. Compared to other methods in affective engineering, Kansei engineering is the only method that can establish and quantify connections between abstract feelings and technical specifications. For every Kansei word, a number of product properties are found that affect that Kansei word. The research into constructing these links has been a core part of Nagamachi's work with Kansei engineering in the last few years. Nowadays, a number of different tools are available. Some of the most common tools are: Category Identification Regression Analysis / Quantification Theory Type I Rough Sets Theory Genetic Algorithm Fuzzy Sets Theory Model building and Test of Validity After completing the preceding stages, the final step of validation remains. This is done in order to check whether the prediction model is reliable and realistic. However, if the prediction model fails, it is necessary to update the Space of Properties and the Semantic Space, and consequently refine the model. The process of refinement is difficult due to the shortage of methods, which shows the need for new tools to be integrated. The existing tools can partially be found in the previously mentioned methods for the synthesis. Software tools Kansei engineering has always been a statistically and mathematically advanced methodology. Most types require good expert knowledge and a reasonable amount of experience to carry out the studies sufficiently. This has also been the major obstacle to a widespread application of Kansei engineering. In order to facilitate application, some software packages have been developed in recent years, most of them in Japan. There are two different types of software packages available: user consoles, and data collection and analysis tools. User consoles are software programs that calculate and propose a product design based on the users' subjective preferences (Kanseis). However, such software requires a database that quantifies the connections between Kanseis and combinations of product attributes. For building such databases, data collection and analysis tools can be used. This section describes some of the tools. 
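As a minimal sketch of what such a quantified Kansei-to-property database can look like, the example below fits an ordinary least-squares model with dummy-coded (categorical) design attributes, which is the basic idea behind the Quantification Theory Type I analysis mentioned among the synthesis tools. The product descriptions, attribute levels and ratings are invented for illustration only; a real study would repeat the fit for every selected Kansei word.

```python
# Minimal sketch: relate a Kansei word rating (e.g. "elegant", scored 1-7 by users)
# to dummy-coded product properties via ordinary least squares (QT1-style analysis).
# All product descriptions and ratings below are invented example data.
import numpy as np

# Each product: (colour, shape) categories and its mean "elegant" rating.
products = [
    ("black",  "rounded", 6.1),
    ("black",  "angular", 5.0),
    ("silver", "rounded", 5.6),
    ("silver", "angular", 4.2),
    ("red",    "rounded", 4.8),
    ("red",    "angular", 3.5),
]

colours = ["black", "silver", "red"]
shapes = ["rounded", "angular"]

def encode(colour, shape):
    """Dummy-code the categorical properties (dropping one reference level per attribute)."""
    row = [1.0]                                               # intercept
    row += [1.0 if colour == c else 0.0 for c in colours[1:]]
    row += [1.0 if shape == s else 0.0 for s in shapes[1:]]
    return row

X = np.array([encode(c, s) for c, s, _ in products])
y = np.array([rating for _, _, rating in products])

coef, *_ = np.linalg.lstsq(X, y, rcond=None)
names = ["intercept", "colour=silver", "colour=red", "shape=angular"]
for name, value in zip(names, coef):
    print(f"{name:>14}: {value:+.2f}")   # sign and size show each property's pull on "elegant"
```

The signs and magnitudes of the category scores then indicate which design properties strengthen or weaken the target feeling, which is the information a user console would draw on when proposing a design.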
There are many more tools used in companies and universities, which might not be available to the public. User consoles Software As described above, Kansei data collection and analysis is often complex and connected with statistical analysis. Depending on which synthesis method is used, different computer software is used. Kansei Engineering Software (KESo), developed at Linköping University in Sweden, uses QT1 for linear analysis. The software generates online questionnaires for the collection of Kansei raw data. Another software package (Kn6) was developed at the Polytechnic University of Valencia in Spain. Both software packages improve the collection and evaluation of Kansei data. In this way, even users with no specialist competence in advanced statistics can use Kansei engineering. See also Affective computing Gandhian engineering – for low cost, frugal, large distribution product design. Fahrvergnügen Japanese quality References External links KANSEI Innovation (Hiroshima, JAPAN) European Kansei Engineering group Ph.D thesis on Kansei Engineering (europe) Ph.D thesis on Website Emotional UX and Kansei Engineering The Japan Society of Kansei Engineering The Malaysian Research Intensive Group for Kansei/Affective Engineering International Conference on Kansei Engineering & Intelligent Systems KEIS QFD Institute KESoft Engineering disciplines
Kansei engineering
[ "Engineering" ]
2,192
[ "nan" ]
1,508,709
https://en.wikipedia.org/wiki/Idrialin
Idrialin is a mineral wax which can be distilled from the mineral idrialite. According to G. Goldschmidt of the Chemical Society of London, it can be extracted by means of xylene, amyl alcohol or turpentine; also, without decomposition, by distillation in a current of hydrogen or carbon dioxide. It is a white crystalline body, fusible only with difficulty, boiling above 440 °C (824 °F). Oxidation of its solution in glacial acetic acid with chromic acid yielded a red powdery solid and a fatty acid fusing at 62 °C and exhibiting all the characteristics of a mixture of palmitic acid and stearic acid. References Waxes
Idrialin
[ "Physics" ]
151
[ "Materials", "Matter", "Waxes" ]
1,508,809
https://en.wikipedia.org/wiki/Dunes%20%28hotel%20and%20casino%29
The Dunes was a hotel and casino on the Las Vegas Strip in Paradise, Nevada. It opened on May 23, 1955, as the tenth resort on the Strip. It was initially owned by a group of businessmen from out of state, but failed to prosper under their management. It also opened at a time of decreased tourism, while the Strip was simultaneously becoming overbuilt with hotel rooms. A few months after the opening, management was taken over by the operators of the Sands resort, also on the Strip. This group failed to improve business and relinquished control less than six months later. Businessman Major Riddle turned business around after taking over operations in 1956. He was involved with the resort until his death in 1980. He had several partners, including Sid Wyman, who worked for the Dunes from 1961 until his death in 1978. Mafia attorney Morris Shenker joined in 1975, following one of the most extensive routine investigations ever conducted by the Nevada Gaming Control Board. The Dunes had frequent connections with Mafia figures, some of whom were alleged to have hidden ownership in the resort, and state officials were concerned about Shenker's association with such figures. In 1957, the Dunes debuted Las Vegas' first topless show, Minsky Goes to Paris, prompting other resorts to follow suit. Two other successful shows, by Frederic Apcar, would later debut at the Dunes. The resort also offered amenities such as the Emerald Green golf course, which opened in 1964. The Dunes was one of two Strip resorts to include a golf course, the other one being the Desert Inn. The Emerald Green was the longest course in Nevada, at 7,240 yards. The Dunes opened with 194 rooms, while a 21-story tower brought the total to 960. The tower was among the tallest buildings in Nevada, and was opened in 1965. By this time, the resort also had the tallest free-standing sign in the world, rising 181 feet. Several popular restaurants were also added in the 1960s, including the underwater-themed Dome of the Sea, and the Top O' the Strip, located at the top of the hotel tower. Another tower, 17 stories in height, was opened in 1979, giving the resort a total of 1,282 rooms. The Dunes added a second gaming facility, the Oasis Casino, in 1982. The Dunes experienced financial problems in the 1980s, and had many prospective buyers during this time, including businessman Steve Wynn. Japanese investor Masao Nangaku eventually bought the resort in 1987, at a cost of $157 million. Nangaku intended to renovate and expand the Dunes, although his plans were derailed by an unusually lengthy control board investigation, which dissuaded financiers. Wynn's company, Mirage Resorts, bought the Dunes in November 1992, paying $75 million. Plans were announced to replace it with a lake resort. The Dunes closed on January 26, 1993. The original North Tower was imploded on October 27, 1993, during a highly publicized ceremony which helped promote Wynn's new Treasure Island resort, located about a mile north. The demolition event garnered 200,000 spectators. The newer South Tower was imploded on July 20, 1994, without the fanfare of the first implosion; it attracted 3,000 spectators. Wynn's new resort, Bellagio, eventually opened on the former Dunes site in 1998. History The Dunes was initially owned by a group of businessmen that included Robert Rice of Beverly Hills, James A. Sullivan of Rhode Island, Milton Gettinger of New York, and Alfred Gottesman, a wealthy theater operator in Florida. Rice and Gottesman were new to the gaming industry. 
The group proposed the project, originally called the Araby, in July 1953. It was later renamed the Vegas Plaza, and then Hotel Deauville. Groundbreaking took place on June 22, 1954, with the resort now known as the Dunes. It was built by the Los Angeles-based McNeil Construction Company, which spent 11 months working on the resort. The Dunes opened on May 23, 1955, as the tenth resort on the Las Vegas Strip. The opening attracted many celebrities, including Cesar Romero, Spike Jones, and Rita Moreno. Gottesman and Sullivan were majority stockholders, and also served as 50-50 partners in the operation of the casino. Businessman Kirk Kerkorian bought a three-percent interest a couple months after the opening, marking his first Las Vegas investment. The Dunes was one of four new Las Vegas resorts to open within a six-week period, resulting in financial trouble for each of them. The Las Vegas Valley had been overbuilt with hotel rooms during a time of lessened demand, and the Dunes was also the southernmost resort on the Strip, located a considerable distance from other properties. A Dunes attorney blamed the resort's financial trouble on a persistent losing streak in its casino. Rice believed that the financial problems were the result of it competing with other resorts for expensive live entertainment. In addition, the Dunes had numerous creditors. Among these was McNeil Construction, which filed a $166,000 lien against the ownership group, representing unpaid salary. The group said it would not pay the balance, stating that the construction contract had been violated. In August 1955, an agreement was reached for Sands Hotel Corporation, owner of the Sands Hotel and Casino, to lease and operate the struggling Dunes. To mark the management change, a three-day celebration was held starting on September 9, 1955. Singer Frank Sinatra headlined the ceremony and entered on a camel. Sands closed the casino portion in January 1956, due to falling profits. It was the third Las Vegas casino to close in recent months, following the Moulin Rouge Hotel and Royal Nevada. Live entertainment also ceased, although the hotel remained open. Rice blamed disagreements within Sands for the casino's failure. The group lost $1.2 million operating the Dunes, and relinquished control of the resort on February 1, 1956. Businessman Major Riddle subsequently partnered with local hotel operator William Miller to reopen the casino. They would be equal partners with 44-percent ownership, while Rice would own the remainder. The Dunes casino reopened in June 1956. Seven months later, plans were announced for Sullivan and Gottesman to sell the property to Jacob Gottlieb, owner of a Chicago trucking firm. Gottlieb became the resort's landlord through Western Realty Company, and Miller departed the property as president and general manager. The resort was managed through Riddle's operating company, M&R Investment. The Dunes was sold in a Clark County sheriff's auction at the end of 1957, to satisfy the debt owed to McNeil Construction. It sold for $115,000, but was valued at $3.5 million. Gottesman, Sullivan, and Gettinger bought it back in November 1958. The resort thrived under Riddle, who added several new shows and facilities. On April 15, 1959, the Dunes hosted the first double groundbreaking ceremony in Las Vegas history: one for a convention center, built south of the existing resort facilities, and another for a 500-space parking lot directly north of the resort. In 1961, St. 
Louis businessmen Sid Wyman, Charlie Rich, and George Duckworth invested in the Dunes and became the new operators through a lease agreement. Wyman was put in charge of casino operations, and Riddle remained as the majority owner. The following year, he sold 15 percent of the operating corporation to the three men, reducing his interest to 37 percent. Several notable individuals were married at the Dunes, including Mary Tyler Moore and Grant Tinker (1962), Cary Grant and Dyan Cannon (1965), and Jane Fonda and Roger Vadim (1965). Mike Goodman, author of the best-selling 1963 book How to Win: At Cards, Dice, Races, Roulette, was a pit boss at the Dunes during the 1960s. Gambling author Barney Vinson also worked there. During the 1960s, the resort's western edge was condemned for construction of Interstate 15. The resort added a golf course in 1964. A 21-story hotel tower, initially known as the Diamond of the Dunes, was opened in May 1965, to mark the resort's 10th anniversary. It was part of a $20 million expansion project, and later became the North Tower, following the addition of another hotel building to the south. In 1969, M&R merged with Continental Connector Corporation, a New York-based electronics firm. M&R became a subsidiary of Continental Connector, which owned the Dunes and the land beneath it. Later in 1969, the U.S. Securities and Exchange Commission filed suit against Continental Connector, accusing it of making inaccurate financial statements regarding earnings at the Dunes. The company subsequently sought a buyer for the resort. In 1970, businessman Howard Hughes was in discussions to purchase the Dunes, although negotiations ended without a deal. Rapid-American Corporation began discussions to acquire the resort, but eventually dropped out. Rice, Wyman, Duckworth and three other top resort officials were indicted in 1971 by a federal grand jury, alleging that they filed false corporate income tax returns and that they conspired to skim money from the gaming tables. The officials pleaded innocent, and Wyman later divested his ownership, but remained with the Dunes as a consultant. Mafia connections The Dunes had numerous Mafia connections for much of its history. Sullivan's early ownership in the resort was actually held by Raymond Patriarca, and Gottlieb was affiliated with Jimmy Hoffa, president of the Teamsters Union. During the 1950s and 1960s, the union financed many casino expansions in Las Vegas through its pension fund. This included a $5 million loan for the Dunes' original hotel tower. Allen Dorfman, who handled negotiations on behalf of the pension fund, was alleged to have hidden ownership in the Dunes. The Dunes occasionally provided first-class treatment to Mafia figures such as Anthony Giordano, who was arrested at the resort in 1969, while visiting Wyman. The FBI planted surveillance bugs at the Dunes during the 1960s, and certain resort employees worked as informants for the agency during the 1970s. In 1972, a new group emerged as a prospective buyer for the resort, still under the ownership of Continental Connector. The group included San Diego developer Irvin Kahn and partner Morris Shenker, a St. Louis attorney who was representing Wyman and other resort officials in their case. The Nevada Gaming Control Board launched a routine investigation into Shenker and Kahn's financing, but halted its probe in 1973, following Kahn's death. 
In 1974, Shenker owned 37 percent of the Dunes through stock holdings in Continental Connector, and he sought to buy out the remainder, prompting the control board to reopen and expand its investigation into his financial background. It was one of the most extensive investigations in Nevada gaming history, as state officials had concerns about Mafia figures with whom Shenker was associated. Shenker later denied allegations that his ownership in the resort was a front for Nick Civella, whom Shenker had represented previously as attorney. Civella had a comped visit at the resort in 1974, but Shenker noted that he had not yet taken control of the Dunes at that time, and said he would not have allowed Civella to stay there if he had been in charge. In 1975, Tony "The Ant" Spilotro began spending extensive time in the Dunes casino, where he would take phone calls routed to the poker room. The gaming control board accused him of treating the Dunes as his personal office, and questioned Shenker and Riddle as to why he was allowed on the premises, given his Black Book status. The men denied knowing Spilotro or his background, and said they only had an outdated photograph of him from 20 years earlier, making it difficult to identify him. The control board alleged that management was, in fact, aware of Spilotro and had already been warned about his presence at the resort. M&R had negotiated a $40 million loan from the Teamsters Union pension fund in 1974. A $75 million expansion was planned to begin in 1976, and would include two additional hotel towers. The project would be financed in part by the Teamsters loan. However, the union withheld the funds, citing the Employee Retirement Income Security Act of 1974. Specifically, the union stated that the loan could not be granted because Continental Connector owned a trucking company which employed teamsters who had contributed to the pension fund. Shenker criticized the pension fund's reasoning, saying that Continental Connector had already divested itself of ownership in the trucking company. A second tower, rising 17 stories, eventually opened in 1979. In 1980, members of the Colombo crime family received comped stays at the resort. Later years Wyman died of cancer in June 1978, and gaming at the Dunes was halted for two minutes in his honor. In 1979, Continental Connector was renamed Dunes Hotels and Casinos Inc., amid plans for a second Dunes resort in Atlantic City. Riddle died in 1980, and Shenker suffered a heart attack that year, prompting him to seriously consider selling the Dunes. In 1982, the resort added a second casino building, known as the Oasis Casino. In December 1982, it was announced that the resort would be sold to brothers Stuart and Clifford Perlman for $185 million, which would include the assumption of $105 million in debt. The Perlmans provided a $10 million loan to prevent the Dunes from being seized by the Internal Revenue Service, but later backed out of the purchase after learning that the debt would be $20 million more than initially expected. Circus Circus Enterprises subsequently considered a purchase, as did Golden Nugget chairman Steve Wynn, who made a $115 million offer. In May 1984, the Dunes was sold to John Anderson, a farmer in Davis, California, who also owned the Maxim hotel-casino in Las Vegas. Shenker maintained a 26-percent stake. M&R filed for Chapter 11 bankruptcy in November 1985. 
Later that month, Wynn made another $115 million offer, which was rejected by Anderson and Shenker, deeming it too low and valuing the Dunes at $143.5 million. Numerous other offers would be made over the next two years, including one by New York businessman Donald Trump. Blumenfeld Properties, a Philadelphia real estate development company, made a $145.5 million offer for the Dunes, but ultimately did not purchase the resort. Burton Cohen was named as the resort's president in January 1986, following the departure of its previous president. Financial firm EF Hutton eventually formed a partnership that was interested in purchasing the Dunes, while a separate group led by Kerkorian was also in discussions. Talks with the two prospective buyers ended in February 1987, without a deal. Shortly thereafter, Texas-based lender Southmark Corporation purchased the first and second mortgages of the Dunes from Valley Bank and First Security Leasing, the Dunes' two major creditors. Later in 1987, Hilton Hotels and Japanese investor Masao Nangaku both considered buying the Dunes. Foreclosure was delayed to allow more time for a possible purchase. Hilton offered $122.5 million, and planned to refurbish the existing rooms while adding a third tower, at an additional cost of $110 million. Cohen believed that the resort needed 2,000 hotel rooms to adequately compete with other resorts. Kerkorian re-emerged as a prospective buyer, and Sheldon Adelson also considered purchasing the 163-acre resort. Nangaku ultimately prevailed, offering a $157.7 million bid in August 1987. His purchase was finalized four months later. While Nangaku waited to receive a gaming license, he hired Dennis Gomes to operate the Dunes, replacing Cohen as president. Nangaku underwent an unusually long gaming control board probe. Investigators suspected that unlicensed people from Nangaku's company, Minami Group, were involved in the resort. The control board encountered difficulty when looking into Nangaku's business associates because of differences in how Japan handles documents, which are generally kept confidential. Investigators also suspected that the associates were making attempts to hinder their efforts. In December 1988, Nangaku received a limited two-year gaming license while investigators continued their probe. Nangaku planned up to $280 million in renovations, including a new hotel tower and the demolition of the original motel-style structures, although little work had been done by mid-1989. He blamed the limited gaming license, stating that financiers were hesitant to lend money because of uncertainty about whether he would remain licensed in the near future. The first phase of Nangaku's multimillion renovation eventually began in September 1989. The following year, Nangaku announced a planned $200 million remodeling project. He also hired the architectural firm Hellmuth, Obata & Kassabaum to design the new high-rise tower. Nangaku eventually received a permanent gaming license in May 1991, at which point he was seeking a partner to help renovate and operate the Dunes. The resort had laid off hundreds of workers that year, due to financial troubles brought on by the early 1990s recession. Despite Nangaku's expansion plans for the resort, he ultimately invested only $12 million in basic repairs. The Las Vegas Review-Journal had written in 1988 that the Dunes had lost its "mystical luster" over the past 20 years, with its high rollers migrating to "more attractive" resorts. The newspaper's John L. 
Smith wrote that the Dunes had lost its "classy resort" reputation and had become "a dump by Strip standards" despite its name recognition and prime location on the central Strip. The Dunes failed to stay competitive against new megaresorts opening on the Strip, including The Mirage in 1989, and the Excalibur a year later. During 1990, the resort was losing $500,000 monthly. Wynn's company, since renamed as Mirage Resorts, agreed to purchase the Dunes in October 1992. It was sold the following month for $75 million. At the time, the property was losing $2 million a month. Wynn planned to demolish the Dunes and redevelop the site. Gaming executive Richard Goeglein led a team which helped operate the Dunes in the months leading up to its closure. Closure and demolition The Dunes closed on January 26, 1993. Wynn said: "It's becoming in death a much better place than it was in life. This thing about melancholy in its passing is sorta strange. No one felt that while it [the Dunes] was laying there, terminally ill. It's been laying there on life support systems for many years". At the time of its closing, the Dunes employed more than 1,200 people. Employees held reunions each year following the closure. An on-site sale of the Dunes inventory, including light fixtures and carpeting, began in March 1993. Demolition started on September 16, 1993. A four-alarm fire began on-site that afternoon, after workers accidentally ran over an electrical outlet in a bulldozer. The fire affected a two-story hotel building and eventually spread across the property. More than 200 firefighters responded, and six blocks of the Strip were closed off for more than four hours until the fire was contained. The original North Tower was demolished on the night of October 27, 1993, one day after the opening of Wynn's new Strip resort Treasure Island, located about a mile north. The tower was imploded with great fanfare in an event emceed by Wynn that incorporated his new resort; on his command, a faux pirate ship at Treasure Island shot its cannon several times, simulating the Dunes' destruction by cannonballs as the implosion began. The tower was brought down around 10:10 p.m., following a six-minute fireworks show. The $1.5 million demolition event attracted 200,000 spectators. The Dunes was the first Las Vegas resort to be imploded, and numerous others would follow suit into the next decade. The tower's implosion was handled by Controlled Demolition, Inc. The demolition required 365 pounds of dynamite, and 550 gallons of aviation fuel were also used, creating fireballs that went up each floor of the tower's east side, facing the Strip and spectators. The Oasis Casino and the Dunes' two-story casino building were not part of the implosion. Fireworks sparked two small fires on the roof of the Oasis, and numerous small fires began in the Dunes' casino area, all put out by on-site firefighters. Both facilities were bulldozed following the implosion. A three-month clean-up project began to remove the debris left from the imploded tower. During the clean-up, workers discovered hundreds of $100 Dunes casino chips in the resort's foundation; some casinos executives would dispose of outdated chips by burying them in the foundation of their buildings. The South Tower was briefly used as a job center for Treasure Island. It was eventually imploded on the morning of July 20, 1994, without the fanfare of the first implosion. 
Mirage Resorts had urged people not to show up for the second implosion, which attracted approximately 3,000 spectators. Commenting on the end of the Dunes, Wynn said, "This is not an execution; this is a phoenix rising". His new resort, Bellagio, eventually opened on the former Dunes site in 1998. The resort's lake covers much of the land once occupied by the Dunes' casino and hotel structures. Fire safety and 1986 arson spree New fire-safety rules were implemented in Las Vegas following the MGM Grand fire (1980) and Las Vegas Hilton fire (1981). In 1985, the Dunes was one of seven hotels that failed to comply with the new safety rules, receiving six citations. The Dunes agreed to close its main showroom and convention center in exchange for a county extension, allowing time to raise $13.5 million needed to bring the facilities up to standard. In February 1986, the Dunes won additional extensions to meet the fire-safety requirements. Later that month, a series of arson fires was set at several Strip resorts, including the Dunes, the Holiday Casino, and the Sands. As a precaution, 1,650 hotel guests were evacuated from the Dunes just before midnight. On the casino floor, many gamblers refused to leave and continued playing. Firefighters quickly determined that the fires posed no threat to the casino area. Crews battled a total of five fires at the Dunes, and guests were allowed to return to their rooms after three hours. Six people were treated for smoke inhalation, and damage was estimated at $55,000. The Dunes offered a $10,000 reward for information leading to the arrest of the arsonist. A man was eventually arrested for the arson spree and sentenced to 10 years in prison. In light of the recent fires, the county reconsidered the extensions previously granted to the Dunes. By May 1986, the resort had made significant progress on its fire retrofit work. Features The Dunes featured an Arabian theme, and was designed by Robert Dorr Jr. and John Replogle. The resort initially occupied 85 acres. The casino opened with 120 slot machines. The convention center, opened in 1959, included seating for 800 people. The casino was remodeled in 1961, and a keno lounge would be added 10 years later, part of a $2 million renovation project. In 1965, the Dunes became the first Strip business to offer a nursery, which would supervise children while their parents enjoyed the resort's amenities. By that point, the Dunes also had two swimming pools and a dozen shops, while additional retailers would be added in 1979. An addition, containing various amenities, was approved by the county in 1981. The expansion cost $15 million, and included the Oasis Casino, which opened on August 20, 1982. The structure, with an exterior of black mirrored glass, was built at a cost of $17 million. The Oasis provided the Dunes property with additional gaming space. Although the Oasis was a two-story building, it opened without the second floor, which was unfinished and sealed off. The Oasis Casino featured curved neon palm trees at its entrance, standing 70 feet with fronds 20 feet in length. They were designed by Ad-Art sign designer Jack DuBois, based on early design work by Raul Rodriguez. The palms were dismantled in April 1993, after being sold during the liquidation sale to a buyer in Taiwan. By 1999, the palms had been installed at the entrance to the NASA nightclub in Bangkok. The club closed some time after that, and the whereabouts of the palms are unknown. 
Hotel The Dunes opened with 194 rooms, and plans for additional rooms were already in the works, although it would be years before they came to fruition. In 1957, plans were announced for a $2 million expansion that would include a 14-story tower. A year later, the proposed tower was increased to 18 stories. An additional 246 rooms were eventually added in 1960, with the opening of the Olympic Wing, joining the existing Seahorse Wing. Groundbreaking for the tower eventually took place on October 20, 1962. It was designed by Milton Schwartz, and the opening was pushed back because of design changes. The tower eventually opened in May 1965. It had 510 rooms, bringing the total room count to 960. At 21 stories, it was among the tallest buildings in Nevada. The tower was originally known as Diamond of the Dunes, and was later called the North Tower, following the addition of the South Tower. Construction of the latter began on July 26, 1978, part of a $100 million expansion and remodeling project. The 17-story South Tower was topped off on April 12, 1979, and was opened that December. The second tower was designed by Maxwell Starkman and included 464 rooms, for a new total of 1,282. Golf course The Dunes opened its Emerald Green golf driving range in November 1961. The Emerald Green golf course debuted in 1964, and had its formal opening in April 1965. Since then, the resort was sometimes known as the Dunes Hotel and Country Club, reflecting its golf amenities. The Emerald Green measured 7,240 yards, making it the longest course in Las Vegas. It stretched south from Flamingo Road to Tropicana Avenue, occupying roughly 80 acres along the eastern edge of I-15. Riddle bought the site from banker Jerry Mack and Mel Close, bringing the resort a total of 163 acres. The Emerald Green's closure in 1993 left the Desert Inn as the only other Strip resort with a golf course. At the time, the Emerald Green had seen an average of 65,000 golfers each year, second only to the Las Vegas Municipal Golf Course. It was especially popular among celebrities. The Emerald Green site is now occupied by parts of Park MGM (opened in 1996) and CityCenter (2009), as well as T-Mobile Arena (2016). Sultan and neon sign The Dunes originally featured a 30-foot-(9-meter-)high sultan statue above its entrance. The fiberglass statue was created by sculptor Kermit Hawkins. The sultan's turban included a diamond that lit up at night, and which was actually a car headlamp that had been put in place. In 1964, the sultan was moved to the edge of the golf course along I-15, serving as an advertisement to motorists. The sultan was destroyed by fire, caused by a short circuit, on the night of December 31, 1985. Lee Klay of the Federal Sign and Signal Company designed a roadside sign for the Dunes, activated on November 12, 1964. Klay recalled that the resort owners asked him to create "a big phallic symbol going up in the sky as far as you can make it". At 181 feet (55 meters), it was the tallest free-standing sign in the world. The foundation measured 80 feet in width, and supported two white-colored columns forming a bulbous onion dome or stylized spade shape at the top. Contained within this shape were two-story-high letters spelling out "Dunes", with a large diamond atop the lettering. The sign contained 16,000 feet of neon tubing, including 7,200 lamps. At night, the sign lit up in red coloring. Blackout curtains were added in hotel rooms facing the sign, as some guests had trouble sleeping because of the neon lighting. 
Schwartz objected to the construction of the sign, believing that it conflicted with the design of his hotel tower, although Riddle overrode him. A full-time, three-man team worked to maintain the sign, which had a service elevator going up one of its columns to the top. The sign was intentionally destroyed as part of the 1993 implosion event, with the use of 18-grain detonating cord. Architectural historian Alan Hess had advocated for saving the sign, although Mirage Resorts stated that it was in extremely poor condition, with demolition being cheaper than preservation. Saving the sign would have required it to be disassembled in eight-foot sections, at a cost of up to $100,000. A smaller, similar sign exists at the city's Neon Museum. In 2019, filmmaker Tim Burton also debuted a Dunes-inspired sign as part of Lost Vegas: Tim Burton, an exhibit at the Neon Museum. An original neon entrance sign from the resort is also located at the Nevada State Museum in Las Vegas. Restaurants A popular fine-dining restaurant, Sultan's Table, opened on March 4, 1961. It was designed by Schwartz, and included live music for diners. Riddle was inspired to build Sultan's Table after visiting an upscale restaurant, the Villa Fontana, in Mexico City. Sultan's Table was the first gourmet restaurant to open on the Strip, and Diners Club named it "America's finest and most beautiful new restaurant". The Dunes opened its Dome of the Sea on June 12, 1964. It was a seafood restaurant with an underwater theme. It was also designed by Schwartz, who created the exterior as a circular building that "looked like it came from outer space". Schwartz collaborated with designer Sean Kenny on the interior, which had a budget of $150,000. Images of fish and seaweed were projected onto the restaurant's interior walls. It also featured a harpist, dressed as a mermaid, who performed in the center of the room. For a brief period starting in 1972, the restaurant would transform into Dome After Hours, offering cocktails and continuous live entertainment between the hours of 1:00 and 5:00 a.m. A restaurant and lounge, Top O' the Strip, opened on June 4, 1965. It was located on the top floor of the new hotel tower, providing views of the city. It was popular among tourists, and also featured live entertainment. It was renamed Top O' the Dunes in 1979. Live entertainment Comedian Wally Cox was an early entertainer at the Dunes, opening there in July 1955, although he was fired due to poor audience reception. Gottesman acknowledged that Cox was ill-prepared and brought no new material to his performances. Cox had been signed for four weeks, but only gave three performances. Comedian Stan Irwin briefly filled in for Cox, who was then hired back later in the month. Entertainers at Top O' the Strip included Art and Dotty Todd, Russ Morgan, and Bob Anderson. The Dunes also opened a Comedy Store location in 1984, hosting numerous comedians. It relocated to the Golden Nugget hotel-casino in 1990, but briefly returned to the Dunes in 1992. Shows The Dunes' 1955 opening included Vera-Ellen in a production show titled New York-Paris-Paradise, which was contracted for a four-week run. It was part of Gottesman's policy to focus on shows rather than big-name stars; he said, "There aren't enough name stars in the world to play all the Vegas hotels". New York-Paris-Paradise was directed by Robert Nesbitt and played in the Dunes' showroom, known as the Arabian Room. 
On January 10, 1957, Riddle debuted Las Vegas' first topless show, titled Minsky Goes to Paris. Riddle said, "We have something people can't get on television". The show's success inspired other resorts to debut their own topless shows. During 1958, the show was attracting 9,000 viewers weekly. Later known as Minsky's Follies, the show ran until 1961. Riddle brought Tenderloin, a Broadway musical, to the Dunes in May 1961. The Broadway show Guys and Dolls, starring Betty Grable and Dan Dailey, also played at the Dunes for about six months, starting in 1962. The Dunes opened a new venue, the Persian Room, in December 1961. It replaced the Sinbad Cocktail Lounge. The Persian Room debuted with Vive Les Girls, a French musical revue by Frederic Apcar. It was successful, becoming an annual show at the Dunes. It closed in 1971, when the Persian Room was replaced by the keno lounge. The Dunes had also debuted another show by Apcar in December 1963, titled Casino de Paris and initially starring Line Renaud. The show cost approximately $6 million to create, featuring 100 cast members and more than 500 costumes. The show incorporated a custom stage known as the Octopus or Octuramic. Designed by Schwartz and Kenny, the stage had several arms capable of extending 50 feet above the audience. Circular dancing platforms, 20 feet in diameter, were built at the end of each arm, allowing showgirls to dance above the audience. The show ended in June 1981, due to the high costs of putting it on each week. Showstoppers, a family show by Jeff Kutash, was planned to open in 1990, but was canceled before its premiere. Boxing Many major professional boxing events took place at the Dunes from 1975 to 1990; notably the May 20, 1983, undercard that featured Ossie Ocasio retaining his WBA's world Cruiserweight title by fifteen round unanimous decision over Randy Stephens, Greg Page beat Renaldo Snipes by twelve rounds unanimous decision in a WBC's Heavyweight division elimination bout, Michael Dokes retained his WBA world Heavyweight title with a fifteen-round draw (tie) over Mike Weaver in their rematch, and Larry Holmes won over Tim Witherspoon by a twelve-round split decision to retain his WBC world Heavyweight title. This was the first time in history that two world Heavyweight championship fights took place on the same day. In popular culture The Dunes made numerous appearances in television, including a 1964 episode of Arrest and Trial. It is featured in a 1977 episode of The Bionic Woman titled "Fembots in Las Vegas", and a 1978 episode of Charlie's Angels titled "Angels in Vegas". The Dunes sign is used in the intro of the television series Vega$, and the resort is seen in the pilot episode of the 1980s television series Knight Rider, titled "Knight of the Phoenix". It also appears in the season-two premiere episode "Goliath". The Dunes made film appearances as well, including the 1971 James Bond movie Diamonds Are Forever, in which it serves as the office of Whyte House casino manager Bert Saxby. The Dunes sign also makes an appearance in the film, and a deleted scene, available on home media releases, takes place in the Dome of the Sea restaurant. In the 1984 film Oxford Blues, the main character (portrayed by Rob Lowe) works as a parking attendant at the Dunes. The sign and hotel also appear in the 1984 film Cannonball Run II, and are seen in the closing credits of the 1989 film K-9. 
The sign also appears in the 1991 comedy Hot Shots!, when the pilot nicknamed "Wash Out" mistakes a runway and lands near the hotel. The 1991 film Harley Davidson and the Marlboro Man includes footage of the casino and hotel, including its rooftop. The hotel's 1993 implosion was filmed for Treasure Island: The Adventure Begins, a television special promoting Wynn's Treasure Island resort. The implosion is also among other Las Vegas resort demolitions featured during the closing credits of the 2003 film The Cooler. The Dunes is shown across from the fictional Tangiers casino at the beginning of the 1995 film Casino, directed by Martin Scorsese. The Dunes is also seen during the Las Vegas sequence of Scorsese's 2019 film The Irishman. See also List of Las Vegas Strip hotels Notes References External links Footage of the Dunes' grand opening with Frank Sinatra Implosion of the Dunes 1955 establishments in Nevada 1993 disestablishments in Nevada Casinos completed in 1955 Casino hotels Buildings and structures demolished by controlled implosion Demolished hotels in Clark County, Nevada Hotel buildings completed in 1955 Hotel buildings completed in 1965 Hotel buildings completed in 1979 Hotels established in 1955 Defunct casinos in the Las Vegas Valley Defunct hotels in the Las Vegas Valley Buildings and structures demolished in 1993 Buildings and structures demolished in 1994 Las Vegas Strip Resorts in the Las Vegas Valley Skyscraper hotels in Paradise, Nevada Former skyscraper hotels
Dunes (hotel and casino)
[ "Engineering" ]
7,571
[ "Buildings and structures demolished by controlled implosion", "Architecture" ]
1,509,102
https://en.wikipedia.org/wiki/Assisted%20GNSS
Assisted GNSS (A-GNSS) is a GNSS augmentation system that often significantly improves the startup performance—i.e., time-to-first-fix (TTFF)—of a global navigation satellite system (GNSS). A-GNSS works by providing the necessary data to the device via a radio network instead of the slow satellite link, essentially "warming up" the receiver for a fix. When applied to GPS, it is known as assisted GPS or augmented GPS (abbreviated generally as A-GPS and less commonly as aGPS). Other local names include A-GANSS for Galileo and A-Beidou for BeiDou. A-GPS is extensively used with GPS-capable cellular phones, as its development was accelerated by the U.S. FCC's 911 requirement to make cell phone location data available to emergency call dispatchers. Background Every GPS device requires orbital data about the satellites to calculate its position. The data rate of the satellite signal is only 50 bit/s, so downloading orbital information like ephemerides and the almanac directly from the satellites typically takes a long time, and if the satellite signals are lost during the acquisition of this information, it is discarded and the standalone system has to start from scratch. In exceptionally poor signal conditions, for example in urban areas, satellite signals may exhibit multipath propagation, in which signals reflect off structures, or may be weakened by meteorological conditions or tree canopies. Some standalone GPS navigators used in poor conditions cannot fix a position because of fragmented satellite signals and must wait for better satellite reception. A regular GPS unit may need as long as 12.5 minutes (the time needed to download the GPS almanac and ephemerides) to resolve the problem and be able to provide a correct location. Operation In A-GPS, the network operator deploys an A-GPS server, a cache server for GPS data. These A-GPS servers download the orbital information from the satellites and store it in a database. An A-GPS-capable device can connect to these servers and download this information using mobile-network radio bearers such as GSM, CDMA, WCDMA, LTE or even using other radio bearers such as Wi-Fi or LoRa. Usually the data rate of these bearers is high, hence downloading orbital information takes less time. Utilizing this system can come at a cost to the user. For billing purposes, network providers often count this as a data access, which can cost money, depending on the tariff. To be precise, A-GPS features depend mostly on an Internet connection to an ISP (or to a CNP, in the case of a cell phone or other mobile device linked to a cellular network provider's data service). A mobile device with just an L1 front-end radio receiver and no GPS acquisition, tracking, and positioning engine only works when it has an internet connection to an ISP/CNP, where the position fix is calculated off-board the device itself. It doesn't work in areas with no coverage or internet link (or with no nearby base transceiver station (BTS) towers, in the case of CNP service coverage). Without any of those resources, it can't connect to the A-GPS servers usually provided by CNPs. On the other hand, a mobile device with a GPS chipset requires no data connection to capture and process GPS data into a position solution, since it receives data directly from the GPS satellites and is able to calculate a position fix itself. However, the availability of a data connection can provide assistance to improve the performance of the GPS chip on the mobile device. 
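To make the startup-time difference concrete, the arithmetic below compares a cold-start download of the navigation data over the 50 bit/s satellite broadcast with the same amount of data delivered over a mobile-data bearer. The frame sizes are the standard GPS legacy navigation (LNAV) figures; the 100 kbit/s network throughput is a deliberately modest assumed value, not a property of any particular A-GPS server.

```python
# Rough comparison of cold-start data delivery: satellite broadcast vs. assistance network.
# GPS LNAV figures: 50 bit/s broadcast rate, 1,500-bit frames (30 s each),
# full navigation message (complete almanac) = 25 frames.

SAT_BIT_RATE = 50            # bit/s, GPS L1 C/A navigation message
FRAME_BITS = 1_500           # one frame: ephemeris subframes plus one page of almanac
FRAMES_FULL_MESSAGE = 25     # frames needed for the complete almanac

frame_time_s = FRAME_BITS / SAT_BIT_RATE                     # ~30 s per frame
full_message_time_s = FRAMES_FULL_MESSAGE * frame_time_s     # ~750 s = 12.5 min

NETWORK_BIT_RATE = 100_000   # bit/s, assumed example bearer throughput
assisted_time_s = FRAMES_FULL_MESSAGE * FRAME_BITS / NETWORK_BIT_RATE

print(f"One frame over the satellite link:        {frame_time_s:.0f} s")
print(f"Full almanac over the satellite link:     {full_message_time_s / 60:.1f} min")
print(f"Same data over a 100 kbit/s network link: {assisted_time_s:.2f} s")
```

Even before protocol overhead is considered, the slow broadcast rate rather than the computation dominates the cold-start time-to-first-fix, which is exactly the bottleneck the assistance server removes.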
Modes of operation Assistance falls into two categories: Mobile Station Based (MSB) Information used to acquire satellites more quickly. It can supply orbital data or almanac for the GPS satellites to the GPS receiver, enabling the GPS receiver to lock to the satellites more rapidly in some cases. The network can provide precise time. Mobile Station Assisted (MSA) Calculation of position by the server using information from the GPS receiver. The device captures a snapshot of the GPS signal, with approximate time, for the server to later process into a position. The assistance server has a good satellite signal and plentiful computation power, so it can compare fragmentary signals relayed to it. Accurate, surveyed coordinates for the cell site towers allow better knowledge of local ionospheric conditions and other conditions affecting the GPS signal than the GPS receiver alone, enabling more precise calculation of position. Not every A-GNSS server provides MSA mode operation due to the computational cost and the declining number of mobile terminals incapable of performing their own calculations. Google's SUPL server is one that doesn't. A typical A-GPS-enabled receiver uses a data connection (Internet or other) to contact the assistance server for aGPS information. If it also has functioning autonomous GPS, it may use standalone GPS, which is sometimes slower on time to first fix, but does not depend on the network, and therefore can work beyond network range and without incurring data-usage fees. Some A-GPS devices do not have the option of falling back to standalone or autonomous GPS. Related technologies Many mobile phones combine A-GPS and other location services, including Wi-Fi positioning system and cell-site multilateration and sometimes a hybrid positioning system. High-Sensitivity GPS is an allied technology that addresses some of these issues in a way that does not require additional infrastructure. However, unlike some forms of A-GPS, high-sensitivity GPS cannot provide a fix instantaneously when the GPS receiver has been off for some time. Standards A-GPS protocols are part of Positioning Protocol defined by two different standardization bodies, 3GPP and Open Mobile Alliance (OMA). Control Plane Protocol Defined by the 3GPP for various generations of mobile phone systems. These protocols are defined for circuit switched networks. The following positioning protocols have been defined. RRLP – 3GPP defined RRLP (Radio Resource Location Protocol) to support positioning protocol on GSM networks. TIA 801 – CDMA2000 family defined this protocol for CDMA 2000 networks. RRC position protocol – 3GPP defined this protocol as part of the RRC standard for UMTS network. LPP – 3GPP defined LPP or LTE positioning protocol for LTE networks. User Plane Protocol Defined by the OMA to support positioning protocols in packet switched networks. Three generations of Secure User Plane Location (SUPL) protocol have been defined, from version 1.0 to 3.0. SUPL The SUPL (Secure User Plane Location) protocol, unlike its control-plane equivalents restricted to mobile networks, runs on the Internet's TCP/IP infrastructure. Consequently, its application extends beyond the original intended use of mobile devices and may be used by general-purpose computers. SUPL 3.0 legitimizes such use by adding admission for WLAN and broadband connections. Actions defined by SUPL 3.0 include a wide range of services like geofencing and billing. The A-GNSS functions are defined in the SUPL Positioning Functional Group. 
It includes: SUPL Assistance Delivery Function (SADF), which provides the basic information sent to the device in both A-GNSS modes. SUPL Reference Retrieval Function (SRRF), which tells the server to prepare the information mentioned above by receiving it from the satellites. SUPL Position Calculation Function (SPCF), which lets the client or the server ask for the client's location. The server-generated location may result from MSA or from the mobile cell. If an MSB (SET-based) mode is used, the client reports its location to the server instead. The specifics of communication are defined in the ULP (Userplane Location Protocol) substandard of the SUPL suite. As of December 2018, GNSS systems supported include GPS, Galileo, GLONASS, and BeiDou. See also Mobile phone tracking GNSS enhancement References Mobile technology Global Positioning System
Assisted GNSS
[ "Technology", "Engineering" ]
1,647
[ "Wireless locating", "Aircraft instruments", "nan", "Aerospace engineering", "Global Positioning System" ]
1,509,165
https://en.wikipedia.org/wiki/IT%20University
IT University is a joint department between Chalmers University of Technology and the University of Gothenburg in Sweden. This joint venture offers a great scope for cooperation between researchers with different areas of expertise and academic specialties within the field of information technology. The programmes offered are based on advanced research and are in a constant state of development. IT University was established in the autumn of 2001. Today it offers programs at both Bachelor's and Master's level, mostly with a focus on applied information technology. Programs in English C:Art:Media Master Program Intelligent Systems Design Master's in Software Engineering and Management Software Engineering and Management International MSc in Applied Data Science External links IT University website Educational institutions established in 2001 Chalmers University of Technology University of Gothenburg Information technology organizations Information technology education Joint ventures 2001 establishments in Sweden
IT University
[ "Technology" ]
159
[ "Information technology", "Information technology education", "Information technology organizations" ]
1,509,191
https://en.wikipedia.org/wiki/Cohort%20%28statistics%29
In statistics, epidemiology, marketing and demography, a cohort is a group of subjects who share a defining characteristic (typically subjects who experienced a common event in a selected time period, such as birth or graduation). Comparison with period data Cohort data can often be more advantageous to demographers than period data. Because cohort data follows a specifically defined group over a specific time period, it is usually more accurate and can be tailored to provide exactly the data a particular study requires. In addition, cohort data is not affected by tempo effects, unlike period data. However, cohort data can be disadvantageous in the sense that collecting the necessary data can take a long time. Another disadvantage of cohort studies is that they can be extremely costly to carry out: because the study goes on for a long period of time, demographers often require substantial funds to sustain it. Demography often contrasts cohort perspectives and period perspectives. For instance, the total cohort fertility rate is an index of the average completed family size for cohorts of women, but since it can only be known for women who have finished child-bearing, it cannot be measured for currently fertile women. It can be calculated as the sum of the cohort's age-specific fertility rates that obtain as it ages through time. In contrast, the total period fertility rate uses current age-specific fertility rates to calculate the completed family size for a notional woman, were she to experience these fertility rates through her life. Cohort studies A study on a cohort is a cohort study. Two important types of cohort studies are: Prospective Cohort Study: In this type of study, exposure data (baseline data) are collected from the recruited subjects before the outcomes of interest develop. The subjects are then followed through time (into the future) to record when each subject develops the outcome of interest. Ways to follow up with subjects of the study include phone interviews, face-to-face interviews, physical exams, medical/laboratory tests, and mail questionnaires. An example of a prospective cohort study would be a demographer who wanted to measure all the males born in the year 2018: the demographer would have to wait for the event to be over, since the year 2018 must come to an end before all the necessary data exist. Retrospective Cohort Study: Retrospective studies start with subjects that are at risk of having the outcome or disease of interest and trace backwards from where the subjects are when the study starts into their past to identify the exposure. Retrospective studies use records—clinical, educational, birth certificates, death certificates, etc.—but this may be difficult because the data needed for the study being initiated may not exist. These studies may also involve multiple exposures, which can make them difficult to carry out. An example of a retrospective cohort study, on the other hand, would be a demographer examining a group of people born in 1970 who have type 1 diabetes: the demographer would begin by looking at historical data. However, if the demographer were working from inadequate data in an attempt to deduce the source of type 1 diabetes, the demographer's results would not be accurate. 
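The cohort/period contrast described above can be made concrete with a small numerical sketch. The age-specific fertility rates below are invented for illustration only and are expressed as births per woman over each 5-year age group; the point is the difference in how the two indices are assembled, with the cohort index summing one cohort's rates diagonally as it ages through successive periods, and the period index summing the rates observed across all ages in a single period.

```python
# Toy illustration of cohort vs. period total fertility rates.
# Keys = calendar years (periods), values = invented age-specific fertility rates
# (births per woman over each 5-year age group, youngest to oldest).
years = [1990, 1995, 2000, 2005, 2010, 2015, 2020]
age_groups = ["15-19", "20-24", "25-29", "30-34", "35-39", "40-44", "45-49"]
asfr = {
    1990: [0.05, 0.40, 0.55, 0.35, 0.15, 0.05, 0.01],
    1995: [0.04, 0.35, 0.55, 0.40, 0.18, 0.05, 0.01],
    2000: [0.03, 0.30, 0.50, 0.45, 0.20, 0.06, 0.01],
    2005: [0.03, 0.25, 0.45, 0.50, 0.22, 0.06, 0.01],
    2010: [0.02, 0.22, 0.42, 0.50, 0.25, 0.07, 0.01],
    2015: [0.02, 0.20, 0.40, 0.48, 0.26, 0.08, 0.01],
    2020: [0.02, 0.18, 0.38, 0.45, 0.28, 0.08, 0.01],
}

# Period TFR for 2020: a notional woman experiences all of 2020's rates.
period_tfr_2020 = sum(asfr[2020])

# Cohort TFR for women aged 15-19 in 1990: follow that cohort diagonally,
# taking the next-older age group from each successive period as it ages.
cohort_tfr_1990 = sum(asfr[year][i] for i, year in enumerate(years))

print(f"Period TFR (2020):        {period_tfr_2020:.2f}")
print(f"Cohort TFR (1990 cohort): {cohort_tfr_1990:.2f}")
```

The two figures differ whenever the timing of childbearing is shifting across periods, which is the tempo effect that distorts period measures but not cohort measures.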
See also Age grade Bureau of Labor Statistics Case mix Cohort study Generational cohort National Longitudinal Surveys Prospective cohort study References Further reading External links Cohort in a glossary, U.S. Bureau of Labor Statistics Division of Information Services Centre for Longitudinal Studies - the UK resource centre for national birth cohort studies. Biostatistics Demography Applied statistics
Cohort (statistics)
[ "Mathematics", "Environmental_science" ]
752
[ "Demography", "Applied mathematics", "Applied statistics", "Environmental social science" ]
1,509,250
https://en.wikipedia.org/wiki/Intelligent%20Parking%20Assist%20System
Intelligent Parking Assist System (IPAS), also known as Advanced Parking Guidance System (APGS) for Toyota models in the United States, is the first production automatic parking system, developed by Toyota Motor Corporation in 1999, initially for the Japanese-market hybrid Prius models and Lexus models. The technology assists drivers in parking their vehicle. On vehicles equipped with the IPAS, via an in-dash screen and button controls, the car can steer itself into a parking space with little input from the user. The first version of the system was deployed on the Prius Hybrid sold in Japan in 2003. In 2006, an upgraded version debuted for the first time outside Japan on the Lexus LS luxury sedan, which featured the automatic parking technology among other brand new inventions from Toyota. In 2009, the system appeared on the third generation Prius sold in the U.S. In Asia and Europe, the parking technology is marketed as the Intelligent Park Assist System for both Lexus and Toyota models, while in the U.S. the Advanced Parking Guidance System name is only used for the Lexus system. Development The initial version of the Intelligent Parking Assist System, launched in 2003, was designed for reverse parallel parking. Driver intervention was not required, as the system estimated the size of the parking space and maneuvered the vehicle appropriately. This was done by an onboard computer which used cameras built into the front and rear of the car. Sensors located at similar locations detected the proximity of nearby vehicles. The dashboard displayed an image of the parking area, and the driver then had to indicate the exact target position of the vehicle in the space via the arrows which appeared on the screen. Using the arrows, the user would set the location of the vehicle in the space. When satisfied, the user pressed the "Set" button, which then activated the IPAS. The system then took over steering control to maneuver the vehicle. Early versions of this system had difficulty detecting objects, including cats, baby prams and pedestrians. In addition, when the driver activated the system in too small a space, the system constantly flashed warning signals to inform the user of the danger of hitting an adjacent vehicle. User assistance is required in such situations. In 2005, an upgraded version added recognition capability for parking stripes. A later version of this parking technology, launched in 2006, added integration with parking sensors. This latest version could calculate the steering maneuvers needed for parallel or reverse parking, and help determine that the car has enough clearance for a particular space, with colored screen displays which indicated adequate or inadequate space. How it works Technology The IPAS/APGS use computer processors which are tied to the vehicle's sonar warning system feature, backup camera, and two additional forward sensors on the front side fenders. The sonar park sensors, known as "Intuitive Parking Assist" or "Lexus Park Assist", include multiple sensors on the forward and rear bumpers which detect obstacles, allowing the vehicle to sound warnings and calculate optimum steering angles during regular parking. These sensors plus the two additional parking sensors are tied to a central computer processor, which in turn is integrated with the backup camera system to provide the driver parking information. 
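The sources describe the system checking whether a space offers enough clearance before committing to an automated maneuver, but not the algorithm it uses. Purely as an illustration of that kind of check, the toy sketch below estimates the length of a parallel-parking gap from side-facing ultrasonic range readings sampled as the car drives past, then compares it with the vehicle length plus a maneuvering margin. The method and every numeric threshold here are invented for illustration and are not taken from Toyota's implementation.

```python
# Toy illustration (not Toyota's algorithm): estimate a parallel-parking gap from
# side-facing ultrasonic readings sampled while driving past, then check clearance.
# All numbers below are invented example values.

CAR_LENGTH_M = 4.8            # assumed vehicle length
REQUIRED_MARGIN_M = 1.2       # assumed extra clearance needed to maneuver
GAP_RANGE_THRESHOLD_M = 1.5   # side distance beyond which we assume no car beside us

def gap_length(samples, speed_m_s, sample_dt_s, threshold=GAP_RANGE_THRESHOLD_M):
    """Return the longest open stretch (in metres) seen in the side-sensor samples."""
    longest = current = 0.0
    step = speed_m_s * sample_dt_s          # distance travelled between samples
    for distance in samples:
        if distance > threshold:            # nothing close on our side: inside a gap
            current += step
            longest = max(longest, current)
        else:                               # an obstacle (parked car) beside us
            current = 0.0
    return longest

# Example: ~0.5 m readings alongside parked cars, ~2.5 m readings across the open gap.
readings = [0.5] * 20 + [2.5] * 60 + [0.5] * 20
gap = gap_length(readings, speed_m_s=2.0, sample_dt_s=0.05)

if gap >= CAR_LENGTH_M + REQUIRED_MARGIN_M:
    print(f"Gap ~{gap:.1f} m: space looks adequate (green box).")
else:
    print(f"Gap ~{gap:.1f} m: space too small (red box).")
```

The production system combines this kind of clearance decision with camera imagery and the driver-adjusted target box described below, and then computes the steering trajectory rather than merely a pass/fail verdict.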
When the sonar park sensors feature is used, the processor(s) calculate steering angle data which are displayed on the navigation/camera touchscreen along with obstacle information. The Intelligent Parking Assist System expands on this capability and is accessible when the vehicle is shifted to reverse (which automatically activates the backup camera). When in reverse, the backup camera screen features parking buttons which can be used to activate automated parking procedures. When the Intelligent Parking Assist System is activated, the central processor calculates the optimum parallel or reverse park steering angles and then interfaces with the Electric Power Steering systems of the vehicle to guide the car into the parking spot. Functions Newer versions of the system allow parallel or reverse parking. When parallel parking with the system, drivers first pull up alongside the parking space. They move forward until the vehicle's rear bumper passes the rear wheel of the car parked in front of the open space. Then, shifting to reverse automatically activates the backup camera system, and the car's rear view appears on dash navigation/camera display. The driver's selection of the parallel park guidance button on the navigation/camera touchscreen causes a grid to appear (with green or red lines, a flag symbol representing the corner of the parking spot, and adjustment arrows). The driver is responsible for checking to see if the representative box on the screen correctly identifies the parking space; if the space is large enough to park, the box will be green in color; if the box is incorrectly placed, or lined in red, using the arrow buttons moves the box until it turns green. Once the parking space is correctly identified, the driver presses OK and takes his/her hands off the steering wheel, while keeping the foot on the brake pedal. When the driver slowly releases the brake, while keeping the foot on the brake pedal, the car will then begin to back up and steer itself into the parking space. The reverse parking procedure is virtually identical to the parallel parking procedure. The driver approaches the parking space, moving forward and turning, positioning the car in place for backing into the reverse parking spot. The vehicle rear has to be facing the reverse parking spot, allowing the backup camera to 'see' the parking area. Shifting to reverse automatically activates the backup camera system, and the driver selects the reverse park guidance button on the navigation/camera touchscreen (the grid appears with green or red lines, a flag symbol representing the corner of the parking spot, and adjustment arrows; reverse parking adds rotation selection). After checking the parking space and engaging the reverse park procedure, the same exact parking process occurs as the car reverse parks into the spot. The system is set up so that at any time the steering wheel is touched or the brake firmly pressed, the automatic parking will disengage. The vehicle also cannot exceed a set speed, or the system will deactivate. When the car's computer voice issues the statement "The guidance is finished", the system has finished parking the car. The driver can then shift to drive and make adjustments in the space if necessary. Media coverage Press reports The debut of the parking technology in the United States in 2006 received widespread media attention, with demonstrations performed on television shows ranging from cable news programming to The Oprah Winfrey Show. 
In automotive publications, the feature garnered mixed reviews, with opinions on its utility varying from useful to impractical, depending on the parking situation and driver. A video from CNBC showed the system working "quite effectively" with a first-time user, and other reviewers found that the system worked smoothly. A video produced by Automobile Magazine demonstrated how the system could make parking more difficult, owing to the complexity of the touchscreen controls, and Winding Road magazine also published a demonstration of the system. Advertising Lexus capitalized on the debut of the parking system in the U.S. with its LS flagship with two ads. The first, "Pyramid," depicted a driver parking a car between two stacks of glasses using the system. A second ad showed a montage of different technologies, followed finally by a demonstration of the parallel park feature and a man stating that he never thought this technology could possibly exist. The system was also referenced by competitors Audi and Hyundai in their own advertisements. Audi marketed their 2007 A4 as "the luxury car for people who can park themselves," and showed a professional driver swinging into a tight parallel parking space. Hyundai's advertisement for the 2007 Azera listed a side-by-side feature comparison between the Azera and the LS460, reaching the conclusion that although the Azera lacks the style, luxury, performance, class, and comfort of the LS, its savings can pay for valet. See also Automatic parking Lexus Toyota Motor Corporation Parallel parking Parking sensors Backup camera References External links Lexus.com Advanced Parking Guidance System description and video demo Toyota Prius - Intelligent Park Assist System description and demo Gizmodo.com Advanced Parking Guidance System technical review and non-affiliated video demo FQuick.com Advanced Parking Guidance System non-affiliated video demo Automotive accessories Automotive technology tradenames Self-driving cars Lexus Toyota
Intelligent Parking Assist System
[ "Engineering" ]
1,663
[ "Automotive engineering", "Self-driving cars" ]
1,509,289
https://en.wikipedia.org/wiki/Magnetostatics
Magnetostatics is the study of magnetic fields in systems where the currents are steady (not changing with time). It is the magnetic analogue of electrostatics, where the charges are stationary. The magnetization need not be static; the equations of magnetostatics can be used to predict fast magnetic switching events that occur on time scales of nanoseconds or less. Magnetostatics is even a good approximation when the currents are not static – as long as the currents do not alternate rapidly. Magnetostatics is widely used in applications of micromagnetics such as models of magnetic storage devices as in computer memory. Applications Magnetostatics as a special case of Maxwell's equations Starting from Maxwell's equations and assuming that charges are either fixed or move as a steady current J, the equations separate into two equations for the electric field (see electrostatics) and two for the magnetic field. The fields are independent of time and each other. The magnetostatic equations are, in differential and integral form respectively: ∇ · B = 0 and ∮ B · dS = 0 (Gauss's law for magnetism); ∇ × H = J and ∮ H · dl = I_enc (Ampère's law). Here ∇ with the dot denotes divergence, B is the magnetic flux density, and the first integral is over a closed surface with oriented surface element dS. ∇ with the cross denotes curl, J is the current density, and H is the magnetic field intensity; the second integral is a line integral around a closed loop with line element dl, and the current going through the loop is I_enc. The quality of this approximation may be guessed by comparing the above equations with the full version of Maxwell's equations and considering the importance of the terms that have been removed. Of particular significance is the comparison of the J term against the displacement-current term ∂D/∂t. If the J term is substantially larger, then the smaller ∂D/∂t term may be ignored without significant loss of accuracy. Re-introducing Faraday's law A common technique is to solve a series of magnetostatic problems at incremental time steps and then use these solutions to approximate the term ∂B/∂t. Plugging this result into Faraday's law finds a value for the electric field E (which had previously been ignored). This method is not a true solution of Maxwell's equations but can provide a good approximation for slowly changing fields. Solving for the magnetic field Current sources If all currents in a system are known (i.e., if a complete description of the current density J(r) is available) then the magnetic field can be determined, at a position r, from the currents by the Biot–Savart equation: B(r) = (μ0/4π) ∫ J(r′) × (r − r′) / |r − r′|³ d³r′. This technique works well for problems where the medium is a vacuum or air or some similar material with a relative permeability of 1. This includes air-core inductors and air-core transformers. One advantage of this technique is that, if a coil has a complex geometry, it can be divided into sections and the integral evaluated for each section. Since this equation is primarily used to solve linear problems, the contributions can be added. For a very difficult geometry, numerical integration may be used. For problems where the dominant magnetic material is a highly permeable magnetic core with relatively small air gaps, a magnetic circuit approach is useful. When the air gaps are large in comparison to the magnetic circuit length, fringing becomes significant and usually requires a finite element calculation. The finite element calculation uses a modified form of the magnetostatic equations above in order to calculate the magnetic potential. The value of B can be found from the magnetic potential. The magnetic field B can also be derived from the vector potential A. 
Since the divergence of the magnetic flux density is always zero, B = ∇ × A, and the relation of the vector potential to current is: ∇²A = −μ0 J. Magnetization Strongly magnetic materials (i.e., ferromagnetic, ferrimagnetic or paramagnetic) have a magnetization that is primarily due to electron spin. In such materials the magnetization must be explicitly included using the relation B = μ0(H + M). Except in the case of conductors, electric currents can be ignored. Then Ampère's law is simply ∇ × H = 0. This has the general solution H = −∇Φ, where Φ is a scalar potential. Substituting this in Gauss's law for magnetism (∇ · B = 0) gives ∇²Φ = ∇ · M. Thus, the divergence of the magnetization, ∇ · M, has a role analogous to the electric charge in electrostatics and is often referred to as an effective charge density ρ_M = −∇ · M. The vector potential method can also be employed with an effective current density J_M = ∇ × M. See also Darwin Lagrangian References External links Electric and magnetic fields in matter Potentials
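As a concrete illustration of the current-source approach above, the Biot–Savart integral can be evaluated numerically by dividing a current path into short segments and summing their contributions. The following sketch is a minimal example rather than anything from the article's sources; the circular-loop geometry, segment count, and function name are illustrative assumptions.

```python
import numpy as np

MU_0 = 4e-7 * np.pi  # vacuum permeability, T*m/A

def biot_savart_loop(r_obs, radius=0.1, current=1.0, n_seg=2000):
    """Flux density B at point r_obs (m) from a circular loop of the given
    radius (m) centered at the origin in the x-y plane, by summing
    dB = (mu0/4pi) * I * dl x (r - r') / |r - r'|^3 over short segments."""
    phi = np.linspace(0.0, 2.0 * np.pi, n_seg, endpoint=False)
    r_src = radius * np.column_stack((np.cos(phi), np.sin(phi), np.zeros_like(phi)))
    dl = (radius * 2.0 * np.pi / n_seg) * np.column_stack(
        (-np.sin(phi), np.cos(phi), np.zeros_like(phi)))
    sep = np.asarray(r_obs, dtype=float) - r_src           # r - r' for each segment
    dist3 = np.linalg.norm(sep, axis=1) ** 3
    dB = MU_0 / (4.0 * np.pi) * current * np.cross(dl, sep) / dist3[:, None]
    return dB.sum(axis=0)

# Field at the loop center; the analytic value mu0*I/(2R) is about 6.28e-6 T
# for a 0.1 m loop carrying 1 A, so the numerical sum can be checked against it.
print(biot_savart_loop([0.0, 0.0, 0.0]))
```

Because the problem is linear, the fields of several such loops or coil sections can simply be added, as noted above.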
Magnetostatics
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
873
[ "Condensed matter physics", "Electric and magnetic fields in matter", "Materials science" ]
1,509,475
https://en.wikipedia.org/wiki/Argo%20D-4%20Javelin
Javelin (Argo D-4) was the designation of an American sounding rocket. The four-stage Javelin rocket had a payload of around 125 pounds (57 kg), an apogee of 1,100 kilometers, a liftoff thrust of 365 kilonewtons (82,100 lbf), a total mass of 3,385 kilograms (7,463 lb), and a core diameter of 580 millimeters (22.8 in). It was launched 82 times between 1959 and 1976. The vehicle consisted of an Honest John first stage, two Nike Ajax stages, and an X-248 stage. NASA first used it in 1959; it could lift 45 kg (100 lb) to 800 km (500 mi). References External links https://web.archive.org/web/20050212234911/http://astronautix.com/lvs/javelin.htm https://history.nasa.gov/SP-4401/sp4401.htm Sounding rockets of the United States
Argo D-4 Javelin
[ "Astronomy" ]
222
[ "Rocketry stubs", "Astronomy stubs" ]
1,509,502
https://en.wikipedia.org/wiki/Nike-Hydac
Nike Hydac is the designation of an American sounding rocket with two stages, based upon the Nike Ajax booster. The Nike Hydac was launched 87 times from many missile sites. These sites included White Sands Missile Range, Poker Flat Research Range ("Poker Flats"), Kwajalein Missile Range, the Cassino Site at Rio Grande Airport in Brazil, and North Truro Air Force Station in Massachusetts, the last during Operation Have Horn in 1969. The directing agency for Nike Hydac was the Air Force Cambridge Research Laboratories (AFCRL) in Cambridge, Massachusetts. The AFCRL traced its origins to the Cambridge Field Station, formed in 1945 to analyze and build on Massachusetts Institute of Technology (MIT) wartime work on electronic countermeasures and atmospheric research. Nike platform Section source: Astronautix Type: two stage Stage 1: Nike - solid propellant rocket stage, loaded/empty mass 599/256 kg Stage 2: Hydac - solid propellant rocket stage, loaded mass 300 kg Gross mass: 900 kg (1,980 lb) Height: 9.10 m (29.80 ft) Diameter: 0.42 m (1.37 ft) Thrust: 217.00 kN (48,783 lbf) Apogee: 150 km (90 mi) First date: 1966-11-05 Last date: 1983-06-16 Number: 87 launches Other Nike sounding rockets Nike-Apache Nike-Asp Nike-Cajun Nike-Deacon Nike-Iroquois Nike Javelin Nike Malemute Nike-Nike Nike Orion Nike Recruit Nike T40 T55 Nike Tomahawk Nike Viper References Nike (rocket family)
Nike-Hydac
[ "Astronomy" ]
328
[ "Rocketry stubs", "Astronomy stubs" ]
1,509,536
https://en.wikipedia.org/wiki/Nike-Hawk
Nike Hawk is the designation of an American sounding rocket. It has an apogee of 160 km, a liftoff thrust of 217 kN, a total mass of 1100 kg and a total length of 9.00 m. It is a two-stage rocket made from a Nike and a Hawk anti-aircraft missile motor, and was designed to launch a 90-kg research payload to an altitude of 160 km. References External links More information about Nike Hawk (part way down) Nike (rocket family)
Nike-Hawk
[ "Astronomy" ]
107
[ "Rocketry stubs", "Astronomy stubs" ]
1,509,644
https://en.wikipedia.org/wiki/Desert%20Botanical%20Garden
Desert Botanical Garden is a botanical garden located in Papago Park, at 1201 N. Galvin Parkway in Phoenix, central Arizona. Founded by the Arizona Cactus and Native Flora Society in 1937 and established at this site in 1939, the garden now has more than 50,000 plants in more than 4,000 taxa, one-third of which are native to the area, including 379 species which are rare, threatened or endangered. Of special note are the rich collections of agave (4,026 plants in 248 taxa) and cacti (13,973 plants in 1,320 taxa), especially the Opuntia sub-family. Plants from less extreme climate conditions are protected under shadehouses. It focuses on plants adapted to desert conditions, including an Australian collection, a Baja California collection and a South American collection. Several ecosystems are represented: a mesquite bosque, semi-desert grassland, and upland chaparral. Desert Botanical Garden has been designated as a Phoenix Point of Pride. History In the 1930s, a small group of local citizens became interested in conserving the fragile desert environment. One was Swedish botanist Gustaf Starck, who found like-minded residents by posting a sign, "Save the desert", with an arrow pointing to his home. In April 1934 they formed the Arizona Cactus and Native Flora Society (ACNFS) to sponsor a botanical garden to encourage an understanding, appreciation and promotion of the uniqueness of the world's deserts, particularly the local Sonoran Desert. Eventually Gertrude Webster, whose home encompassed all of what is today the neighborhood of Arcadia, joined the Society. She offered her encouragement, connections and financial support to establish the botanical garden in Papago Park. Margaret Bell Douglas provided support as well, donating 1,500 specimens to the herbarium. Webster served as president of the Society's first Board of Directors and Gustaf Starck, W. E. Walker, Rell Hasket, L. L. Kreigbaum, and Samuel Wilson were the five vice president. The latter also served as Treasurer. Paul G. Olsen was Secretary. In 1938, after much work by the ACNFS, the board hired the Garden's first executive director, George Lindsay, who oversaw the first planting on the grounds. The Desert Botanical Garden opened in 1939 as a non-profit museum dedicated to research, education, conservation and display of desert plants. Education and art The Garden offers specialized tours, workshops and lectures on desert landscaping and horticulture, nature art and photography, health and wellness. The Garden presents Spring and Fall open-air acoustic concert series, art exhibitions, and Las Noches de las Luminarias since 1978. The Luminarias Festival became a Southwestern Holiday tradition featuring live music by the flickering lights of 8000 hand-lit luminaria. Volunteerism Volunteers were essential in the Garden's creation and development, when the staff was small and finances tight. These early supporters, including a few amateur botanists who donated their own plant collections, helped plan and execute plant sales, photography and art exhibits, and numerous public events. Volunteers remain a Garden asset, sharing their time, talents and professional expertise. They work closely with staff to maintain the Garden's status as a premier plant research institution and serve as members of the Board of Trustees, setting policy and governing the Garden. 
Gallery See also List of botanical gardens and arboretums in Arizona List of historic properties in Phoenix, Arizona List of botanical gardens in the United States References External links Botanical gardens in Arizona Butterfly houses Cactus gardens Phoenix Points of Pride Institutions accredited by the American Alliance of Museums Tourist attractions in Phoenix, Arizona
Desert Botanical Garden
[ "Biology" ]
748
[ "Flora", "Desert flora" ]
1,510,233
https://en.wikipedia.org/wiki/Manual%20communication
Manual communication systems use articulation of the hands (hand signs, gestures, etc.) to mediate a message between persons. Being expressed manually, they are received visually and sometimes tactually. When it is the primary form of communication, it may be enhanced by body language and facial expressions. Manual communication is employed in sign languages and manually coded languages, though sign languages also possess non-manual elements. Other systems of manual communication have been developed for specific purposes, typically in situations where speech is not practical (such as loud environments) or permitted, or where secrecy is desired. Examples Charades Diving signals — hand communication methods while scuba diving Flag semaphores — telegraphy systems using hand-held flags, other objects, or the hands themselves Finger counting Chinese number gestures Open outcry hand signaling Fingerspelling or manual alphabets Gang signals — signs used to signify allegiance to a gang or local gang branches Hand signals in traffic Monastic sign languages — symbolic gestural communication used by monastic communities Rueda de Casino — a dance that uses hand motions to "call" other dancers Tic-tac — a traditional method of hand signs used by bookmakers in horse racing U.S. Army hand and arm signals External links ASL Resource Site Free online lessons, ASL dictionary, and resources for teachers, students, and parents. Human communication Gestures
Manual communication
[ "Biology" ]
277
[ "Human communication", "Behavior", "Gestures", "Human behavior" ]
1,510,249
https://en.wikipedia.org/wiki/Bering%20Strait%20crossing
A Bering Strait crossing is a hypothetical bridge or tunnel that would span the relatively narrow and shallow Bering Strait between the Chukotka Peninsula in Russia and the Seward Peninsula in the U.S. state of Alaska. The crossing would provide a connection linking the Americas and Afro-Eurasia. With the two Diomede Islands between the peninsulas, the Bering Strait could be spanned by a bridge or tunnel. There have been several proposals for a Bering Strait crossing made by various individuals and media outlets. The names used for them include "The Intercontinental Peace Bridge" and "EurasiaAmerica Transport Link". Tunnel names have included "TKMWorld Link", "AmerAsian Peace Tunnel" and InterBering. In April 2007, Russian government officials told the press that the Russian government would back a US$65 billion plan by a consortium of companies to build a Bering Strait tunnel. History 19th century The concept of an overland connection crossing the Bering Strait goes back before the 20th century. William Gilpin, first governor of the Colorado Territory, envisaged a vast "Cosmopolitan Railway" in 1890 linking the entire world through a series of railways. Two years later, Joseph Strauss, who went on to design over 400 bridges, and then serve as the project engineer for the Golden Gate Bridge, put forward the first proposal for a Bering Strait rail bridge in his senior thesis. The project was presented to the government of the Russian Empire, but it was rejected. 20th century In 1904, a syndicate of American railroad magnates proposed (through a French spokesman) a SiberianAlaskan railroad from Cape Prince of Wales in Alaska through a tunnel under the Bering Strait and across northeastern Siberia to Irkutsk via Cape Dezhnev, Verkhnekolymsk, and Yakutsk (around of railroad to build, plus over in North America). The proposal was for a 90-year lease, and exclusive mineral rights for each side of the right-of-way. It was debated by officials and finally turned down on March 20, 1907. Czar Nicholas II approved the American proposal in 1905 (only as a permission, not much financing from the Czar). Its cost was estimated at $65 million and $300 million, including all the railroads. These hopes were dashed with the outbreak of the 1905 Russian Revolution followed by World War I. A Nazi plan to create a wide-gauge railroad called the Breitspurbahn was mooted to connect the cities of Europe, India, China and ultimately North America via the Bering Strait. The railroad was never built. Interest was renewed during World War II with the completion in 19421943 of the Alaska Highway, linking the remote territory of Alaska with Canada and the continental United States. In 1942, the Foreign Policy Association envisioned the highway continuing to link with Nome near the Bering Strait, linked by highway to the railhead at Yakutsk, using an alternative sea-and-air ferry service across the Bering Strait. At the same time the road on the Russian side was extended by building the Kolyma Highway. In 1958, engineer Tung-Yen Lin suggested the construction of a bridge across the Bering Strait "to foster commerce and understanding between the people of the United States and the Soviet Union". Ten years later he organized the Inter-Continental Peace Bridge, Inc., a non-profit institution organized to further this proposal. At that time he made a feasibility study of a Bering Strait bridge and estimated the cost to be $1 billion for the span. In 1994 he updated the cost to more than $4 billion. 
Like Gilpin, Lin envisioned the project as a symbol of international cooperation and unity, and dubbed the project the Intercontinental Peace Bridge. 21st century According to a report in the Beijing Times in May 2014, Chinese transport experts had proposed building a roughly high-speed rail line from northeast China to the United States. The project would include a tunnel under the Bering Strait and connect to the contiguous United States via Wales, Alaska, along the river to Fairbanks, Alaska, and along the Alaska Highway to Edmonton, Alberta, Canada. Several American entrepreneurs have also advanced private-sector proposals, such as an Alaska-based limited-liability company InterBering founded in 2010 to lobby for a cross-straits connection, and a 2018 cryptocurrency offering to fund the construction of a tunnel. In 2005, investor Neil Bush, younger brother of U.S. President George W. Bush and son of President George H. W. Bush, traveled abroad with Sun Myung Moon of the Unification Church as he promoted a proposal to dig a transportation corridor beneath the Bering Strait. When questioned by Mother Jones during the Republican primary campaign of his brother Jeb Bush a decade later in 2015, he denied having supported the tunnel project and said that he had traveled with Moon because he supported "efforts by faith leaders to call their flock into service to others." Strategic military concerns Proposals to build a crossing predate the Russian invasion of Ukraine and the Russian-Ukrainian War, which started in February 2022. It is not known how those events have affected strategic concerns relating to the proposed crossing, which would facilitate access by Russia to North America. Even before the invasion, commentators on the proposed link have flagged strategic military concerns as a factor in any decision to build the crossing. Technical concerns Distance The straight distance between Russia and Alaska is . If building bridges and using the Diomede Islands, the straight distance over water for the three parts would be , and , in total . Depth of water The depth of the water is a minor problem, as the strait is no deeper than , comparable to the English Channel. The tides and currents in the area are not severe. Weather-related challenges Restrictions on construction work The route is just south of the Arctic Circle, and the location has long, dark winters and extreme weather, including average winter lows of and temperatures approaching in cold snaps. This would mean that construction work would likely be restricted to five months of the year, around May to September, and centered during summer. Exposed steel The weather also poses challenges to exposed steel. In Lin's design, concrete covers all structures, to simplify maintenance and to offer additional stiffening. Ice floes Although there are no icebergs in the Bering Strait, ice floes up to thick are in constant motion during certain seasons, which could produce forces on the order of on a pier. Tundra in surrounding regions Roads on either side of the strait would likely have to cross tundra, requiring either an unpaved road or some way to avoid the effects of permafrost. Likely route and expenses Bridge option If the crossing is chosen as a bridge, it would probably connect Wales, Alaska, to a location south of Uelen. The bridge would also likely be divided by the Diomede Islands, which are at the middle of the Bering Strait. In 1994, Lin estimated the cost of a bridge to be "a few billion" dollars. 
The roads and railways on each side were estimated to cost $50 billion. Lin contrasted this cost to petroleum resources "worth trillions". Discovery Channel's Extreme Engineering estimates the cost of a highway, electrified double-track high-speed rail, and pipelines at $105 billion (in 2007 US dollars), five times the original cost of the 1994 Channel Tunnel. Connections to the rest of the world This excludes the cost of new roads and railways to reach the bridge. Aside from the technical challenges of building two bridges or a more than tunnel across the strait, another major challenge is that, , there is nothing on either side of the Bering Strait to connect the bridge to. Russian side The Russian side of the strait, in particular, is severely lacking in infrastructure. No railways exist for over in any direction from the strait. The nearest major connecting highway is the M56 Kolyma Highway, which is currently unpaved and around from the strait. However, by 2042, the Anadyr Highway is expected to be completed connecting Ola and Anadyr, which is only about from the strait. U.S. side On the U.S. side, an estimated of highways or railroads would have to be built around Norton Sound, through a pass along the Unalakleet River, and along the Yukon River to connect to Manley Hot Springs Road – in other words, a route similar to that of the Iditarod Trail Race. A project to connect Nome, from the strait, to the rest of Alaska by a paved highway (part of Alaska Route 2) has been proposed by the Alaskan state government, although the very high cost ($2.3 to $2.7 billion, about $3 million per kilometer, or $5 million per mile) has so far prevented construction. In 2016, the Alaskan road network was extended westwards by to Tanana, from the strait, by building a fairly simple road. The Alaska Department of Transportation & Public Facilities project was supported by local indigenous groups such as the Tanana Tribal Council. Track gauge Another complicating factor is the different track gauges in use. Mainline rail in the US, Canada, China, and the Koreas uses standard gauge of 1435 millimeters. Russia uses the slightly broader Russian gauge of 1520 mm. Solutions to this break of gauge include: To have all cargo in containers, which are fairly easily reloaded from one train to another. This is used on the increasingly popular China–Europe rail freight route, which has two breaks of gauge. It is possible to transfer a 60-container train in one hour. Another solution is variable gauge axles for locomotives and rolling stock, such as those made by Talgo. A gauge changer modifies the gauge of the wheels while the train traverses the GC equipment at a speed of , which is about 4 seconds per railcar. This is faster than is possible with the transfer of ISO containers. The TKMWorld Link The TKMWorld Link (Russian: ТрансКонтинентальная магистраль, English: Transcontinental Railway), also called ICL-World Link (Intercontinental link), was a planned link between Siberia and Alaska to deliver oil, natural gas, electricity, and rail passengers to the United States from Russia. Proposed in 2007, the plan included provisions to build a tunnel under the Bering Strait, which, if built, would have been the longest tunnel in the world, surpassing the Line 3 (Guangzhou Metro) tunnel. The tunnel was intended to be part of a railway joining Yakutsk, the capital of the Russian republic of Yakutia, and Komsomolsk-on-Amur, in the Russian Far East, with the western coast of Alaska. 
The Bering Strait tunnel was estimated to cost between $10 billion and $12 billion, while the entire project was estimated to cost $65 billion. In 2008, Russian Prime Minister Vladimir Putin approved the plan to build a railway to the Bering Strait area, as a part of the development plan to run until 2030. The more than tunnel would have run under the Bering Strait between Chukotka, in the Russian far east, and Alaska. The cost was estimated as $66 billion. In late August 2011, at a conference in Yakutsk in eastern Russia, the plan was backed by some of President Dmitry Medvedev's top officials, including Aleksandr Levinthal, the deputy federal representative for the Russian Far East. Supporters of the idea believed that it would be a faster, safer, and cheaper way to move freight around the world than container ships. They estimated it could carry about 3% of global freight and make about $7 billion a year. Shortly after, the Russian government approved the construction of the $65 billion Siberia-Alaska rail and tunnel across the Bering Strait. Observers doubted that the rail link would be cheaper than ship, bearing in mind that the cost for rail transport from China to Europe is higher than by ship (except for expensive cargo where lead time is important). In 2013, the Amur Yakutsk Mainline connecting the Yakutsk railway ( from the strait) with the Trans-Siberian Railway was completed. However, this railway is meant for freight and is too tightly curved for high-speed passenger trains. Future projects include the and Kolyma–Anadyr highway. The Kolyma–Anadyr highway has started construction, but will be a narrow gravel road. USCanadaRussiaChina railway In 2014, China was considering construction of a US-Canada-Russia-China bullet train that would include a undersea tunnel crossing the Bering Strait and would allow passengers to travel between the United States and China in about two days. Although the press was skeptical of the project, China's state-run China Daily claimed that China possessed the necessary technology. It was unknown who was expected to pay for the construction, although China had in other projects offered to build and finance them, and expected the money back in the end through fees or rents. Trans-Eurasian Belt Development In 2015, another possible collaboration between China and Russia was reported, part of the Trans-Eurasian Belt Development, a transportation corridor across Siberia that would also include a road bridge with gas and oil pipelines between the easternmost point of Siberia and the westernmost point of Alaska. It would link London and New York by rail and superhighway via Russia if it were to go ahead. China's Belt and Road Initiative has similar plans, so the project would work in parallel for both countries. 
See also Alaska-Alberta Railway Development Corporation Artificial island Beringia Cosmopolitan Railway Eurasian Land Bridge Intercontinental and transoceanic fixed links Land reclamation Pan-American Highway Transportation in Alaska Transport in Russia References Further reading External links Discovery Channel's Extreme Engineering World Peace King Tunnel Trans-Global Highway The Bering Strait Crossing Alaska Canada Rail Link - Project Feasibility Study The Bridge Over the Bering Strait by James Cotter A Superhighway Across the Bering Strait, The Atlantic BART's Underwater Tunnel Withstands Test InterBering, LLC - Alaskan company, founded in 2010 by Fyodor Soloview, promoting a tunnel under the Bering Strait and a railroad between North America and Asia: InterBering.com Crossing Cross-sea bridges in Russia Exploratory engineering International bridges Proposals in the Soviet Union Proposed bridges in Russia Proposed bridges in the United States Proposed railway bridges Proposed railway lines in Alaska Proposed railway tunnels in Asia Proposed railway tunnels in North America Proposed roads in the United States Proposed transcontinental crossings Proposed transport infrastructure in Russia Proposed transportation infrastructure in the United States Proposed tunnels in Russia Proposed tunnels in the United States Proposed undersea tunnels in Asia Proposed undersea tunnels in North America Railroad bridges in Alaska Railroad tunnels in Alaska Railway bridges in Russia Railway tunnels in Russia Transport in the Russian Far East Transportation in Unorganized Borough, Alaska
Bering Strait crossing
[ "Technology" ]
3,056
[ "Exploratory engineering" ]
1,510,452
https://en.wikipedia.org/wiki/Roll%20center
The roll center of a vehicle is the notional point at which the cornering forces in the suspension are reacted to the vehicle body. There are two definitions of roll center. The most commonly used is the geometric (or kinematic) roll center, whereas the Society of Automotive Engineers uses a force-based definition. Definition Geometric roll center is solely dictated by the suspension geometry, and can be found using principles of the instant center of rotation. Force based roll center, according to the US Society of Automotive Engineers, is "The point in the transverse vertical plane through any pair of wheel centers at which lateral forces may be applied to the sprung mass without producing suspension roll". The lateral location of the roll center is typically at the center-line of the vehicle when the suspension on the left and right sides of the car are mirror images of each other. The significance of the roll center can only be appreciated when the vehicle's center of mass is also considered. If there is a difference between the position of the center of mass and the roll center a moment arm is created. When the vehicle experiences angular velocity due to cornering, the length of the moment arm, combined with the stiffness of the springs and possibly anti-roll bars (also called 'anti-sway bar'), defines how much the vehicle will roll. This has other effects too, such as dynamic load transfer. Application When the vehicle rolls the roll centers migrate. The roll center height has been shown to affect behavior at the initiation of turns such as nimbleness and initial roll control. Testing methods Current methods of analyzing individual wheel instant centers have yielded more intuitive results of the effects of non-rolling weight transfer effects. This type of analysis is better known as the lateral-anti method. This is where one takes the individual instant center locations of each corner of the car and then calculates the resultant vertical reaction vector due to lateral force. This value then is taken into account in the calculation of a jacking force and lateral weight transfer. This method works particularly well in circumstances where there are asymmetries in left to right suspension geometry. The practical equivalent of the above is to push laterally at the tire contact patch and measure the ratio of the change in vertical load to the horizontal force. See also Weight distribution Vehicle metrics References Classical mechanics Geometric centers Vehicle technology
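To make the moment-arm idea concrete, the roll moment that the springs and anti-roll bars must react can be estimated from the sprung mass, the lateral acceleration, and the heights of the center of mass and roll center. The sketch below is a simplified rigid-body approximation with made-up numbers, not a definitive vehicle-dynamics model; the lumped roll stiffness and all parameter values are assumptions.

```python
def roll_moment(sprung_mass_kg, lat_accel_ms2, h_cg_m, h_rc_m):
    """Roll moment (N*m) about the roll axis: lateral force acting through
    the moment arm between the center of mass and the roll center."""
    return sprung_mass_kg * lat_accel_ms2 * (h_cg_m - h_rc_m)

def roll_angle_deg(moment_nm, roll_stiffness_nm_per_deg):
    """Steady-state body roll, assuming one lumped roll stiffness for
    springs plus anti-roll bars."""
    return moment_nm / roll_stiffness_nm_per_deg

# Example: 1400 kg sprung mass cornering at 0.8 g, CG at 0.55 m,
# roll center at 0.10 m, lumped roll stiffness 1600 N*m/deg (all assumed).
m = roll_moment(1400, 0.8 * 9.81, 0.55, 0.10)
print(round(m), round(roll_angle_deg(m, 1600), 2))
```

Raising the roll center shortens the moment arm and reduces body roll, but it also changes the jacking forces, which is one reason the lateral-anti analysis described above works from the individual instant centers instead.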
Roll center
[ "Physics", "Mathematics", "Engineering" ]
472
[ "Point (geometry)", "Geometric centers", "Classical mechanics", "Vehicle technology", "Mechanics", "Mechanical engineering by discipline", "Symmetry" ]
1,510,587
https://en.wikipedia.org/wiki/Bayesian%20search%20theory
Bayesian search theory is the application of Bayesian statistics to the search for lost objects. It has been used several times to find lost sea vessels, for example USS Scorpion, and has played a key role in the recovery of the flight recorders in the Air France Flight 447 disaster of 2009. It has also been used in the attempts to locate the remains of Malaysia Airlines Flight 370. Procedure The usual procedure is as follows: Formulate as many reasonable hypotheses as possible about what may have happened to the object. For each hypothesis, construct a probability density function for the location of the object. Construct a function giving the probability of actually finding an object in location X when searching there if it really is in location X. In an ocean search, this is usually a function of water depth — in shallow water chances of finding an object are good if the search is in the right place. In deep water chances are reduced. Combine the above information coherently to produce an overall probability density map. (Usually this simply means multiplying the two functions together.) This gives the probability of finding the object by looking in location X, for all possible locations X. (This can be visualized as a contour map of probability.) Construct a search path which starts at the point of highest probability and 'scans' over high probability areas, then intermediate probabilities, and finally low probability areas. Revise all the probabilities continuously during the search. For example, if the hypotheses for location X imply the likely disintegration of the object and the search at location X has yielded no fragments, then the probability that the object is somewhere around there is greatly reduced (though not usually to zero) while the probabilities of its being at other locations is correspondingly increased. The revision process is done by applying Bayes' theorem. In other words, first search where it most probably will be found, then search where finding it is less probable, then search where the probability is even less (but still possible due to limitations on fuel, range, water currents, etc.), until insufficient hope of locating the object at acceptable cost remains. The advantages of the Bayesian method are that all information available is used coherently (i.e., in a "leak-proof" manner) and the method automatically produces estimates of the cost for a given success probability. That is, even before the start of searching, one can say, hypothetically, "there is a 65% chance of finding it in a 5-day search. That probability will rise to 90% after a 10-day search and 97% after 15 days" or a similar statement. Thus the economic viability of the search can be estimated before committing resources to a search. Apart from the USS Scorpion, other vessels located by Bayesian search theory include the MV Derbyshire, the largest British vessel ever lost at sea, and the SS Central America. It also proved successful in the search for a lost hydrogen bomb following the 1966 Palomares B-52 crash in Spain, and the recovery in the Atlantic Ocean of the crashed Air France Flight 447. Bayesian search theory is incorporated into the CASP (Computer Assisted Search Program) mission planning software used by the United States Coast Guard for search and rescue. This program was later adapted for inland search by adding terrain and ground cover factors for use by the United States Air Force and Civil Air Patrol. 
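A minimal sketch of steps 2–5 of the procedure above: a prior location grid is combined with a detection-probability grid (here a made-up function of water depth) to give the probability of finding the object in each cell, and cells are then visited in decreasing order of that product. The grid values and the depth-to-detection relation are illustrative assumptions, not data from any actual search.

```python
import numpy as np

# Prior probability that the object is in each grid cell (sums to 1).
prior = np.array([[0.05, 0.10, 0.05],
                  [0.10, 0.30, 0.10],
                  [0.05, 0.20, 0.05]])

# Water depth per cell (m); deeper water is assumed to mean a lower
# chance of detecting the object even when searching the right cell.
depth = np.array([[ 200.,  500., 1500.],
                  [ 300., 1000., 2500.],
                  [ 800., 2000., 4000.]])
p_detect = np.clip(1.0 - depth / 5000.0, 0.1, 0.95)

# Probability of finding the object by searching each cell once.
p_success = prior * p_detect

# Search order: highest probability of success first.
for flat_idx in np.argsort(-p_success, axis=None):
    row, col = np.unravel_index(flat_idx, p_success.shape)
    print(f"search cell ({row},{col}): p_success = {p_success[row, col]:.3f}")
```

Step 6 of the procedure, revising the probabilities after each unsuccessful search, is the Bayes update described in the next section.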
Mathematics Suppose a grid square has a probability p of containing the wreck and that the probability of successfully detecting the wreck if it is there is q. If the square is searched and no wreck is found, then, by Bayes' theorem, the revised probability of the wreck being in the square is given by p′ = p(1 − q) / (1 − pq). For every other grid square, if its prior probability is r, its posterior probability is given by r′ = r / (1 − pq). USS Scorpion In May 1968, the U.S. Navy's nuclear submarine USS Scorpion (SSN-589) failed to arrive as expected at her home port of Norfolk, Virginia. The command officers of the U.S. Navy were nearly certain that the vessel had been lost off the Eastern Seaboard, but an extensive search there failed to discover the remains of Scorpion. Then, a Navy deep-water expert, John P. Craven, suggested that Scorpion had sunk elsewhere. Craven organised a search southwest of the Azores based on a controversial approximate triangulation by hydrophones. He was allocated only a single ship, Mizar, and he took advice from Metron Inc., a firm of consultant mathematicians, in order to maximise his resources. A Bayesian search methodology was adopted. Experienced submarine commanders were interviewed to construct hypotheses about what could have caused the loss of Scorpion. The sea area was divided up into grid squares and a probability assigned to each square, under each of the hypotheses, to give a number of probability grids, one for each hypothesis. These were then added together to produce an overall probability grid. The probability attached to each square was then the probability that the wreck was in that square. A second grid was constructed with probabilities that represented the probability of successfully finding the wreck if that square were to be searched and the wreck were to be actually there. This was a known function of water depth. The result of combining this grid with the previous grid is a grid which gives the probability of finding the wreck in each grid square of the sea if it were to be searched. At the end of October 1968, the Navy's oceanographic research ship Mizar located sections of the hull of Scorpion on the seabed, about southwest of the Azores, under more than of water. This was after the Navy had released sound tapes from its underwater "SOSUS" listening system, which contained the sounds of the destruction of Scorpion. The court of inquiry was subsequently reconvened and other vessels, including the bathyscaphe Trieste II, were dispatched to the scene, collecting many pictures and other data. Although Craven received much credit for locating the wreckage of Scorpion, Gordon Hamilton, an acoustics expert who pioneered the use of hydroacoustics to pinpoint Polaris missile splashdown locations, was instrumental in defining a compact "search box" wherein the wreck was ultimately found. Hamilton had established a listening station in the Canary Islands that obtained a clear signal of what some scientists believe was the noise of the vessel's pressure hull imploding as she passed crush depth. A Naval Research Laboratory scientist named Chester "Buck" Buchanan, using a towed camera sled of his own design aboard Mizar, finally located Scorpion. The towed camera sled, which was fabricated by J. L. "Jac" Hamm of Naval Research Laboratory's Engineering Services Division, is housed in the National Museum of the United States Navy. Buchanan had located the wrecked hull of Thresher in 1964 using this technique. 
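The updating rule quoted above can be applied cell by cell: the searched square's probability is multiplied by (1 − q) and the whole grid is renormalized by (1 − pq). The short sketch below applies that rule to a flat prior; the grid size, detection probability, and searched cell are illustrative assumptions.

```python
import numpy as np

def update_after_miss(prior, p_detect, cell):
    """Posterior location probabilities after searching `cell` and finding
    nothing. The searched cell becomes p(1-q)/(1-pq); any other cell with
    prior r becomes r/(1-pq)."""
    p, q = prior[cell], p_detect[cell]
    posterior = prior / (1.0 - p * q)            # renormalize every cell
    posterior[cell] = p * (1.0 - q) / (1.0 - p * q)
    return posterior

prior = np.full((3, 3), 1.0 / 9.0)               # flat prior over nine cells
p_detect = np.full((3, 3), 0.7)                  # assumed detection probability
post = update_after_miss(prior, p_detect, (1, 1))
print(post.round(3), post.sum())                 # searched cell drops, the rest rise
```

Repeating this update after every fruitless search, and always searching the cell with the highest current probability of success, reproduces the iterative procedure used in the Scorpion effort described above.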
Optimal distribution of search effort The classical book on this subject, The Theory of Optimal Search (Operations Research Society of America, 1975) by Lawrence D. Stone of Metron Inc., won the 1975 Lanchester Prize of the American Operations Research Society. Searching in boxes Assume that a stationary object is hidden in one of n boxes (locations). For each location i there are three known parameters: the cost c_i of a single search, the probability α_i of finding the object by a single search if the object is there, and the probability p_i that the object is there. A searcher looks for the object. They know the a priori probabilities at the beginning and update them by Bayes' law after each (unsuccessful) attempt. The problem of finding the object at minimal expected cost is a classical problem solved by David Blackwell; later, David Assaf expanded Blackwell's result to more than one object. The optimal policy is: at each stage, look into the location which maximizes p_i α_i / c_i, the probability of success per unit cost. This is a special case of the Gittins index. See also References Bibliography Stone, Lawrence D., The Theory of Optimal Search, Operations Research Society of America, 1975. Stone, Lawrence D., In Search of Air France Flight 447, Institute of Operations Research and the Management Sciences, 2011. https://www.informs.org/ORMS-Today/Public-Articles/August-Volume-38-Number-4/In-Search-of-Air-France-Flight-447 Iida, Koji, Studies on the Optimal Search Plan, Vol. 70, Lecture Notes in Statistics, Springer-Verlag, 1992. De Groot, Morris H., Optimal Statistical Decisions, Wiley Classics Library, 2004. Richardson, Henry R., and Stone, Lawrence D., Operations Analysis during the underwater search for Scorpion, Naval Research Logistics Quarterly, June 1971, Vol. 18, Number 2, Office of Naval Research. Stone, Lawrence D., Search for the SS Central America: Mathematical Treasure Hunting, Technical Report, Metron Inc., Reston, Virginia. Koopman, B.O., Search and Screening, Operations Research Evaluation Group Report 56, Center for Naval Analyses, Alexandria, Virginia, 1946. Richardson, Henry R., and Discenza, J.H., The United States Coast Guard computer-assisted search planning system (CASP), Naval Research Logistics Quarterly, Vol. 27, Number 4, pp. 659–680, 1980. Ross, Sheldon M., An Introduction to Stochastic Dynamic Programming, Academic Press, 1983. Search theory Search algorithms Operations research
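A compact sketch of the box-search policy just described: at every stage the searcher examines the box with the largest value of p_i α_i / c_i, then applies the Bayes update if the search fails. The number of boxes, the parameter values, and the number of steps are illustrative assumptions.

```python
def search_boxes(p, alpha, cost, steps=5):
    """Greedy policy for the stationary-object box search: repeatedly
    search the box maximizing p[i]*alpha[i]/cost[i] (probability of
    success per unit cost), then apply the unsuccessful-search update."""
    p = list(p)
    for _ in range(steps):
        i = max(range(len(p)), key=lambda j: p[j] * alpha[j] / cost[j])
        print(f"search box {i}: index value {p[i] * alpha[i] / cost[i]:.3f}")
        miss = 1.0 - p[i] * alpha[i]             # probability this search fails
        p = [p[j] * ((1.0 - alpha[j]) if j == i else 1.0) / miss
             for j in range(len(p))]
    return p

# Three boxes: prior probabilities, detection probabilities, search costs (assumed).
print(search_boxes(p=[0.5, 0.3, 0.2], alpha=[0.3, 0.8, 0.9], cost=[1.0, 2.0, 1.5]))
```

The printed index values show how the preferred box changes as repeated misses push probability mass toward the boxes that have not yet been searched.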
Bayesian search theory
[ "Mathematics" ]
1,917
[ "Applied mathematics", "Operations research" ]
1,510,685
https://en.wikipedia.org/wiki/Kesterson%20National%20Wildlife%20Refuge
The Kesterson National Wildlife Refuge was an artificial wetland environment, created using agricultural runoff from farmland in California's Central Valley. The irrigation water is transported to the valley from sources in the Sierra Nevada via the California Aqueduct. Minerals from these sources are carried in the water and concentrated by evaporation from aqueducts, canals, and fields. This has resulted in an exceptionally high accumulation of selenium and other minerals in the wetlands. Wildlife in this region suffered deformities due to selenium poisoning, drawing the attention of news media and leading to the closure of the refuge. Kesterson Reservoir was a unit of the refuge but is now part of San Luis National Wildlife Refuge. Westlands-Kesterson Timeline 1952 - Westlands Water District is formed and would become the nation's largest water district, covering 600,000 acres 1961 - United States Bureau of Reclamation (USBR) agrees to build San Luis Reservoir and a runoff drain that would benefit the Westlands 1975 - Funding for the Westlands drain cut by Congress resulting in toxic run-off diverted to Kesterson National Wildlife Refuge evaporation ponds 1977 Nov 5 - 529 page federal report finds USBR failed to breakup corporate ownership in Westlands over 160 acre limit 1978 - 7,000 acre-feet of toxic water laced with selenium and pesticides sent annually to Kesterson evaporation ponds flows into wildlife reserve 1983 - 60% of baby birds are deformed within Kesterson National Wildlife Refuge where contaminated water is sent 1985 - Publication of "Tragedy at Kesterson Reservoir: Death of a Wildlife Refuge Illustrates Failings of Water Law" 1996 - USBR and State of California form Grasslands Bypass Project to divert contaminated water from going into Kesterson 2000 Mar 1 - Court of Appeals orders USBR to construct Central Valley Project (CVP) Drain Jun 9 - $450 million water plan proposed by Governor Gray Davis includes raising Shasta Dam height 2002 Feb 13 - Natural Resources Defense Council appeals judge's ruling over how much CVP water can be retained for wildlife Nov 17 - USBR close to making deal to buy contaminated lands in Westlands 2004 Apr 22 - Sac Express The Rich get wetter Jul 14 - Court order allows for protection of fish in Trinity River with water Sep 14 - EWG Less Land, More Water Soaking Uncle Sam 2007 Aug 30 - California fishing e-magazine publishes, "Westlands wants to raise Shasta Dam and grab $40 billion in subsidized water" 2009 Jun 10 - TJA CVP pumping changes to protect fish 2012 Mar 2 - Court of Appeals ends thirteen-year legal battle between Westlands and Interior Department in government's favor 2017 Mar 17 - San Luis Unit Drainage Resolution Act (H.R.1769) proposed to deal with Kesterson drainage May 18 - Ex-Westlands Water District lobbyist picked for key Department of the Interior post Nov 10 - USBR and Westlands Water District settlement in limbo 2018 Jan 23 - Deadline passes but Westlands confident of help from Congress May 3 - Billions at play over Kesterson impacts and the growing pressure to accept deal from Westland's big farmers 2019 May 1 - USBR-Westlands drainage deal for CVP water: Who's who and what's involved May 23 - Congressional Research Service releases new report on the CVP and legislative proposals Sept 6 - Environmentalists win Appeals Court victory against San Joaquin Valley agricultural polluters Nov 15 - Interior Secretary David Bernhardt, who was the former lawyer for Westlands proposes permanent CVP water contract See also Selenium 
pollution References External links Environmental Encyclopedia San Luis Drain National Wildlife Refuges in California Protected areas of Merced County, California Wetlands of California Disasters in California Environmental disasters in the United States Constructed wetlands Environmental issues in California Landforms of Merced County, California
Kesterson National Wildlife Refuge
[ "Chemistry", "Engineering", "Biology" ]
774
[ "Bioremediation", "Constructed wetlands", "Environmental engineering" ]
1,510,738
https://en.wikipedia.org/wiki/Desorption
Desorption is the physical process where adsorbed atoms or molecules are released from a surface into the surrounding vacuum or fluid. This occurs when a molecule gains enough energy to overcome the activation barrier and the binding energy that keep it attached to the surface. Desorption is the reverse of the process of adsorption, which differs from absorption in that adsorption refers to substances bound to the surface, rather than being absorbed into the bulk. Desorption can occur from any of several processes, or a combination of them: it may result from heat (thermal energy); incident light such as infrared, visible, or ultraviolet photons; or an incident beam of energetic particles such as electrons. It may also occur following chemical reactions such as oxidation or reduction in an electrochemical cell or after a chemical reaction of a adsorbed compounds in which the surface may act as a catalyst. Mechanisms Depending on the nature of the adsorbent-to-surface bond, there are a multitude of mechanisms for desorption. The surface bond of a sorbant can be cleaved thermally, through chemical reactions or by radiation, all which may result in desorption of the species. Thermal desorption Thermal desorption is the process by which an adsorbate is heated and this induces desorption of atoms or molecules from the surface. The first use of thermal desorption was by LeRoy Apker in 1948. It is one of the most frequently used modes of desorption, and can be used to determine surface coverages of adsorbates and to evaluate the activation energy of desorption. Thermal desorption is typically described by the Polanyi-Wigner equation: where r is the rate of desorption, is the adsorbate coverage, t the time, n is the order of desorption, the pre-exponential factor, E is the activation energy, R is the gas constant and T is the absolute temperature. The adsorbate coverage is defined as the ratio between occupied and available adsorption sites. The order of desorption, also known as the kinetic order, describes the relationship between the adsorbate coverage and the rate of desorption. In first order desorption, , the rate of the particles is directly proportional to adsorbate coverage. Atomic or simple molecular desorption tend to be of the first order and in this case the temperature at which maximum desorption occurs is independent of initial adsorbate coverage. Whereas, in second order desorption the temperature of maximum rate of desorption decreases with increased initial adsorbate coverage. This is because second order is re-combinative desorption and with a larger initial coverage there is a higher probability the two particles will find each other and recombine into the desorption product. An example of second order desorption, , is when two hydrogen atoms on the surface desorb and form a gaseous molecule. There is also zeroth order desorption which commonly occurs on thick molecular layers, in this case the desorption rate does not depend on the particle concentration. In the case of zeroth order, , the desorption will continue to increase with temperature until a sudden drop once all the molecules have been desorbed. In a typical thermal desorption experiment, one would often assume a constant heating of the sample, and so temperature will increase linearly with time. The rate of heating can be represented by Therefore, the temperature can be represented by: where is the starting time and is the initial temperature. 
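Combining the Polanyi–Wigner rate law with the linear heating ramp just described, a first-order desorption trace can be generated numerically, and the peak temperature it produces can be fed into Redhead's relation (discussed below) to recover an activation-energy estimate. The sketch below assumes the standard first-order form dθ/dT = −(ν/β)·θ·exp(−E/RT); the pre-exponential factor, heating rate, and activation energy are illustrative values, not data from any particular system.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol*K)

def first_order_tpd(E, nu=1e13, beta=2.0, theta0=1.0, T0=300.0, T1=900.0, n=6000):
    """Integrate dtheta/dT = -(nu/beta)*theta*exp(-E/(R*T)) for a linear
    ramp T = T0 + beta*t and return temperatures and desorption rate."""
    T = np.linspace(T0, T1, n)
    dT = T[1] - T[0]
    theta = np.empty_like(T)
    theta[0] = theta0
    for k in range(1, n):                        # simple explicit Euler step
        step = (nu / beta) * theta[k - 1] * np.exp(-E / (R * T[k - 1])) * dT
        theta[k] = max(theta[k - 1] - step, 0.0)
    rate = (nu / beta) * theta * np.exp(-E / (R * T))
    return T, rate

E_true = 150e3                                   # assumed binding energy, J/mol
T, rate = first_order_tpd(E_true)
Tp = T[np.argmax(rate)]                          # peak desorption temperature

# Redhead's first-order approximation E ~ R*Tp*(ln(nu*Tp/beta) - 3.64),
# valid for typical ratios of pre-exponential factor to heating rate.
E_est = R * Tp * (np.log(1e13 * Tp / 2.0) - 3.64)
print(f"peak at {Tp:.0f} K, Redhead estimate {E_est / 1000:.0f} kJ/mol")
```

With these assumed values the recovered energy lies within a few percent of the input, consistent with the roughly 30% accuracy quoted for the Redhead method below.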
At the "desorption temperature", there is sufficient thermal energy for the molecules to escape the surface. One can use the thermal desorption as a technique to investigate the binding energy of a metal. There are several different procedures for performing analysis of thermal desorption. For example, Redhead's peak maximum method is one of the ways to determine the activation energy in desorption experiments. For first order desorption, the activation energy is estimated from the temperature (Tp) at which the desorption rate is a maximum. Using the equation for rate of desorption (Polyani Winer equation), one can find Tp, and Redhead shows that the relationship between Tp and E can be approximated to be linear, given that the ratio of the rate constant to the heating rate is within the range 10 – 10. By varying the heating rate, and then plotting a graph of against , one can find the activation energy using the following equation: This method is straightforward, routinely applied and can give a value for activation energy within an error of 30%. However a drawback of this method, is that the rate constant in the Polanyi-Wigner equation and the activation energy are assumed to be independent of the surface coverage. Due to improvement in computational power, there are now several ways to perform thermal desorption analysis without assuming independence of the rate constant and activation energy. For example, the "complete analysis" method uses a family of desorption curves for several different surface coverages and integrates to obtain coverage as a function of temperature. Next, the desorption rate for a particular coverage is determined from each curve and an Arrhenius plot of the logarithm of the rate of desorption against 1/T is made. An example of an Arrhenius plot can be seen in the figure on the right. The activation energy can be found from the gradient of this Arrhenius plot. It also became possible to account for an effect of the disorder on the value of activation energy E, that leads to a non-Debye desorption kinetics at large times and allows to explain both desorption from close-to-perfect silicon surfaces and desorption from microporous adsorbents like NaX zeolites. Another analysis technique involves simulating thermal desorption spectra and comparing to experimental data. This technique relies on kinetic Monte Carlo simulations and requires an understanding of the lattice interactions of the adsorbed atoms. These interactions are described from first principles by the Lattice Gas Hamiltonian, which varies depending on the arrangement of the atoms. An example of this method used to investigate the desorption of oxygen from rhodium can be found in the following paper: "Kinetic Monte Carlo simulations of temperature programed desorption of O/Rh(111)". Reductive or oxidative desorption In some cases, the adsorbed molecule is chemically bonded to the surface/material, providing a strong adhesion and limiting desorption. If this is the case, desorption requires a chemical reaction which cleaves the chemical bonds. One way to accomplish this is to apply a voltage to the surface, resulting in either reduction or oxidation of the adsorbed molecule (depending on the bias and the adsorbed molecules). In a typical example of reductive desorption, a self-assembled monolayer of alkyl thiols on a gold surface can be removed by applying a negative bias to the surface resulting in reduction of the sulfur head-group. 
The chemical reaction for this process would be R–S–Au + e− → R–S− + Au, where R is an alkyl chain (e.g. CH3), S is the sulfur atom of the thiol group, Au is a gold surface atom and e− is an electron supplied by an external voltage source. Another application for reductive/oxidative desorption is to clean activated carbon material through electrochemical regeneration. Electron-stimulated desorption Electron-stimulated desorption occurs as a result of an electron beam incident upon a surface in vacuum, as is common in particle physics and industrial processes such as scanning electron microscopy (SEM). At atmospheric pressure, molecules may weakly bond to surfaces in what is known as adsorption. These molecules may form monolayers at a density of about 10^15 atoms/cm^2 for a perfectly smooth surface. One monolayer or several may form, depending on the bonding capabilities of the molecules. If an electron beam is incident upon the surface, it provides energy to break the bonds between the surface and molecules in the adsorbed monolayer(s), causing pressure to increase in the system. Once a molecule is desorbed into the vacuum volume, it is removed via the vacuum's pumping mechanism (re-adsorption is negligible). Hence, fewer molecules are available for desorption, and an increasing number of electrons are required to maintain constant desorption. One of the leading models of electron stimulated desorption is described by Peter Antoniewicz. In short, his theory is that the adsorbate becomes ionized by the incident electrons and then the ion experiences an image charge potential which attracts it towards the surface. As the ion moves closer to the surface, the possibility of electron tunnelling from the substrate increases, and through this process ion neutralisation can occur. The neutralised ion still has kinetic energy from before, and if this energy plus the gained potential energy is greater than the binding energy then the ion can desorb from the surface. As ionisation is required for this process, this suggests the atom cannot desorb at low excitation energies, which agrees with experimental data on electron stimulated desorption. Understanding electron stimulated desorption is crucial for accelerators such as the Large Hadron Collider, where surfaces are subjected to an intense bombardment of energetic electrons. In particular, in the beam vacuum systems the desorption of gases can strongly impact the accelerator's performance by modifying the secondary electron yield of the surfaces. IR photodesorption IR photodesorption is a type of desorption that occurs when infrared light hits a surface and activates processes involving the excitation of an internal vibrational mode of the previously adsorbed molecules, followed by the desorption of the species into the gas phase. One can selectively excite electrons or vibrations of the adsorbate or of the adsorbate-substrate coupled system. This relaxation of the bonds, together with a sufficient energy exchange from the incident light to the system, will eventually lead to desorption. Generally, the phenomenon is more effective for weakly bound physisorbed species, which have a smaller adsorption potential depth compared to that of chemisorbed ones. In fact, a shallower potential requires lower laser intensities to set a molecule free from the surface and make IR-photodesorption experiments feasible, because the measured desorption times are usually longer than the inverse of the other relaxation rates in the problem.
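The desorption criterion in the Antoniewicz picture described above can be illustrated with a toy calculation. In the sketch below (an illustration only; the distances, binding energy and the use of a simple classical image potential are assumptions, not values from the article), the ionized adsorbate is accelerated toward the metal by its image potential, and it is allowed to desorb if the kinetic energy it has gained by the time it is neutralised exceeds the binding energy of the neutral species.

```python
# Toy energy balance for Antoniewicz-type electron-stimulated desorption.
# All numerical inputs are illustrative assumptions.
import math

E_CHARGE = 1.602176634e-19   # elementary charge, C
EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m

def image_potential_eV(z_m):
    """Classical image potential energy, in eV, of a singly charged ion a
    distance z from a perfect conductor: U = -e^2 / (16 * pi * eps0 * z)."""
    return -E_CHARGE / (16 * math.pi * EPS0 * z_m)   # U/e, numerically the energy in eV

def can_desorb(z_ionized_nm, z_neutralized_nm, binding_energy_eV, initial_ke_eV=0.0):
    """Compare the kinetic energy gained while falling from the ionization point
    to the neutralisation point with the binding energy of the neutral species."""
    gained = (image_potential_eV(z_ionized_nm * 1e-9)
              - image_potential_eV(z_neutralized_nm * 1e-9))
    total_ke = initial_ke_eV + gained
    return total_ke > binding_energy_eV, total_ke

escapes, ke = can_desorb(z_ionized_nm=0.30, z_neutralized_nm=0.15, binding_energy_eV=0.3)
print(f"kinetic energy at neutralisation = {ke:.2f} eV -> desorbs: {escapes}")
```

With these assumed numbers the ion gains roughly an electronvolt before neutralisation, comfortably more than the assumed 0.3 eV binding energy, which is the qualitative point of the model: it is the ionisation step, not thermal energy alone, that supplies the energy for desorption.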
Phonon activated desorption In 2005, a mode of desorption was discovered by John Weaver et al. that has elements of both thermal and electron stimulated desorption. This mode is of particular interest as desorption can occur in a closed system without external stimulus. The mode was discovered whilst investigating bromine adsorbed on silicon using scanning tunnelling microscopy. In the experiment, the Si-Br wafers were heated to temperatures ranging from 620 to 775 K. However, it was not simple thermal desorption bond breaking that was observed, as the activation energies calculated from Arrhenius plots were found to be lower than the Si-Br bond strength. Instead, the optical phonons of the silicon weaken the surface bond through vibrations and also provide the energy for an electron to be excited into the antibonding state. Application Desorption is a physical process that can be very useful for several applications. In this section two applications of thermal desorption are explained. One of them, temperature programmed desorption, is strictly a measurement technique based on thermal desorption rather than an application in itself, but it has many important applications. The other is the application of thermal desorption with the aim of reducing pollution. Temperature programmed desorption (TPD) Temperature programmed desorption (TPD) is one of the most widely used surface analysis techniques available for materials research. It has several applications, such as determining the desorption rates and binding energies of chemical compounds and elements, evaluating active sites on catalyst surfaces, understanding the mechanisms of catalytic reactions including adsorption, surface reaction and desorption, and analysing material compositions, surface interactions and surface contaminants. Therefore, TPD is increasingly important in many industries including, but not limited to, quality control and industrial research on products such as polymers, pharmaceuticals, clays and minerals, food packaging, and metals and alloys. When TPD is used with the aim of determining the desorption rates of products that were previously adsorbed on a surface, it consists of heating, at a controlled rate, a cold crystal surface that has adsorbed a gas or a mixture of gases. The adsorbates then react as they are heated and desorb from the surface. The results of applying TPD are the desorption rates of each of the product species that have been desorbed as a function of the temperature of the surface; this is called the TPD spectrum of the product. Also, as the temperature at which each of the surface compounds desorbed is known, it is possible to compute the energy that bound the desorbed compound to the surface, the activation energy. Thermal desorption for removal of pollution Desorption, specifically thermal desorption, can be applied as an environmental remediation technique. This physical process is designed to remove contaminants at relatively low temperatures, ranging from 90 to 560 °C, from the solid matrix. The contaminated medium is heated to volatilize water and organic contaminants; the volatilized contaminants are then transported, using a carrier gas or vacuum, to a gas treatment system in which they are collected, thermally destroyed, or transformed into less toxic compounds. Thermal desorption systems operate at a lower design temperature, which is nonetheless sufficiently high to achieve adequate volatilization of organic contaminants.
Temperatures and residence times are designed to volatilize selected contaminants but typically will not oxidize them. It is applicable at sites where high direct waste burial is present, and a short timeframe is necessary to allow for continued use or redevelopment of the site. See also Adsorption Chemisorption Desorptive capacity Gibbs isotherm Langmuir equation Moisture sorption isotherm Sorption isotherm References Surface science Articles containing video clips
Desorption
[ "Physics", "Chemistry", "Materials_science" ]
2,975
[ "Condensed matter physics", "Surface science" ]
1,510,928
https://en.wikipedia.org/wiki/Chromium%28III%29%20chloride
Chromium(III) chloride (also called chromic chloride) is an inorganic chemical compound with the chemical formula CrCl3. It forms several hydrates with the formula CrCl3·nH2O, among which are hydrates where n can be 5 (chromium(III) chloride pentahydrate, CrCl3·5H2O) or 6 (chromium(III) chloride hexahydrate, CrCl3·6H2O). The anhydrous compound CrCl3 forms violet crystals, while the most common form of chromium(III) chloride is the dark green crystalline hexahydrate, CrCl3·6H2O. Chromium chlorides find use as catalysts and as precursors to dyes for wool. Structure Anhydrous chromium(III) chloride adopts the YCl3 structure, with Cr3+ occupying one third of the octahedral interstices in alternating layers of a pseudo-cubic close packed lattice of Cl− ions. The absence of cations in alternate layers leads to weak bonding between adjacent layers. For this reason, crystals of CrCl3 cleave easily along the planes between layers, which results in the flaky (micaceous) appearance of samples of chromium(III) chloride. The anhydrous compound is exfoliable down to the monolayer limit. If pressurized to 9.9 GPa it undergoes a phase transition. Chromium(III) chloride hydrates The hydrated chromium(III) chlorides display the somewhat unusual property of existing in a number of distinct chemical forms (isomers), which differ in terms of the number of chloride anions that are coordinated to Cr(III) and the water of crystallization. The different forms exist both as solids and in aqueous solutions. Several members are known of the series of hydration isomers [CrCln(H2O)6−n]Cl3−n·nH2O (n = 0, 1, 2). The common hexahydrate can be more precisely described as [CrCl2(H2O)4]Cl·2H2O. It consists of the cation trans-[CrCl2(H2O)4]+ and additional molecules of water and a chloride anion in the lattice. Two other hydrates are known, pale green [CrCl(H2O)5]Cl2·H2O and violet [Cr(H2O)6]Cl3. Similar hydration isomerism is seen with other chromium(III) compounds. Preparation Anhydrous chromium(III) chloride may be prepared by chlorination of chromium metal directly, or indirectly by carbothermic chlorination of chromium(III) oxide at 650–800 °C: Cr2O3 + 3 C + 3 Cl2 → 2 CrCl3 + 3 CO. The hydrated chlorides are prepared by treatment of chromate with hydrochloric acid and aqueous methanol. Reactions Slow reaction rates are common with chromium(III) complexes. The low reactivity of the d3 ion Cr3+ can be explained using crystal field theory. One way of opening CrCl3 up to substitution in solution is to reduce even a trace amount to CrCl2, for example using zinc in hydrochloric acid. This chromium(II) compound undergoes substitution easily, and it can exchange electrons with CrCl3 via a chloride bridge, allowing all of the CrCl3 to react quickly. With the presence of some chromium(II), solid CrCl3 dissolves rapidly in water. Similarly, ligand substitution reactions of solutions of chromium(III) are accelerated by chromium(II) catalysts. With molten alkali metal chlorides such as potassium chloride, CrCl3 gives salts of the type M3CrCl6 and K3Cr2Cl9, the latter of which is also octahedral but where the two chromiums are linked via three chloride bridges. The hexahydrate can also be dehydrated with thionyl chloride: CrCl3·6H2O + 6 SOCl2 → CrCl3 + 6 SO2 + 12 HCl. Complexes with organic ligands CrCl3 is a Lewis acid, classified as "hard" according to the Hard-Soft Acid-Base theory. It forms a variety of adducts of the type CrCl3L3, where L is a Lewis base. For example, it reacts with pyridine (C5H5N) to form the pyridine complex: CrCl3 + 3 C5H5N → CrCl3(C5H5N)3. Treatment with trimethylsilyl chloride in THF gives the anhydrous THF complex CrCl3(THF)3. Precursor to organochromium complexes Chromium(III) chloride is used as the precursor to many organochromium compounds, for example bis(benzene)chromium, an analogue of ferrocene. Phosphine complexes derived from CrCl3 catalyse the trimerization of ethylene to 1-hexene.
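As a quick worked example of the carbothermic chlorination stoichiometry given above (Cr2O3 + 3 C + 3 Cl2 → 2 CrCl3 + 3 CO), the short Python sketch below estimates the masses of carbon and chlorine consumed, and of CrCl3 produced, per gram of chromium(III) oxide. It is only an illustration of the balanced equation; the atomic masses are standard values and complete conversion is assumed.

```python
# Stoichiometry of the carbothermic chlorination
#   Cr2O3 + 3 C + 3 Cl2 -> 2 CrCl3 + 3 CO
# assuming complete conversion (illustrative only).
ATOMIC_MASS = {"Cr": 51.996, "O": 15.999, "C": 12.011, "Cl": 35.453}  # g/mol

M_Cr2O3 = 2 * ATOMIC_MASS["Cr"] + 3 * ATOMIC_MASS["O"]
M_CrCl3 = ATOMIC_MASS["Cr"] + 3 * ATOMIC_MASS["Cl"]
M_Cl2 = 2 * ATOMIC_MASS["Cl"]
M_C = ATOMIC_MASS["C"]

grams_Cr2O3 = 1.0
mol_Cr2O3 = grams_Cr2O3 / M_Cr2O3        # moles of oxide in one gram

print(f"per {grams_Cr2O3:.1f} g Cr2O3:")
print(f"  C   consumed: {3 * mol_Cr2O3 * M_C:.3f} g")
print(f"  Cl2 consumed: {3 * mol_Cr2O3 * M_Cl2:.3f} g")
print(f"  CrCl3 formed: {2 * mol_Cr2O3 * M_CrCl3:.3f} g")
```

Running the sketch gives roughly 0.24 g of carbon and 1.40 g of chlorine consumed and about 2.08 g of CrCl3 formed per gram of Cr2O3, consistent with overall mass balance once the CO by-product is included.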
Use in organic synthesis One niche use of CrCl3 in organic synthesis is for the in situ preparation of chromium(II) chloride, a reagent for the reduction of alkyl halides and for the synthesis of (E)-alkenyl halides. The reaction is usually performed using two moles of CrCl3 per mole of lithium aluminium hydride, although if aqueous acidic conditions are appropriate zinc and hydrochloric acid may be sufficient. Chromium(III) chloride has also been used as a Lewis acid in organic reactions, for example to catalyse the nitroso Diels-Alder reaction. Dyestuffs A number of chromium-containing dyes are used commercially for wool. Typical dyes are triarylmethanes consisting of ortho-hydroxybenzoic acid derivatives. Precautions Although trivalent chromium is far less poisonous than hexavalent, chromium salts are generally considered toxic. References Further reading Handbook of Chemistry and Physics, 71st edition, CRC Press, Ann Arbor, Michigan, 1990. The Merck Index, 7th edition, Merck & Co, Rahway, New Jersey, USA, 1960. J. March, Advanced Organic Chemistry, 4th ed., p. 723, Wiley, New York, 1992. K. Takai, in Handbook of Reagents for Organic Synthesis, Volume 1: Reagents, Auxiliaries and Catalysts for C-C Bond Formation, (R. M. Coates, S. E. Denmark, eds.), pp. 206–211, Wiley, New York, 1999. External links International Chemical Safety Card 1316 (anhydr. CrCl3) International Chemical Safety Card 1532 (CrCl3·6H2O) National Pollutant Inventory – Chromium (III) compounds fact sheet NIOSH Pocket Guide to Chemical Hazards IARC Monograph "Chromium and Chromium compounds" Chromium(III) compounds Chlorides Metal halides Coordination complexes
Chromium(III) chloride
[ "Chemistry" ]
1,265
[ "Chlorides", "Inorganic compounds", "Coordination complexes", "Coordination chemistry", "Salts", "Metal halides" ]
1,511,014
https://en.wikipedia.org/wiki/Jeffries%20Projects
The Jeffries Homes, also called the Jeffries Housing Projects, was a public housing project located in Detroit, Michigan, near the Lodge Freeway. It included 13 high-rises and hundreds of row house units, and was named for Detroit Recorder's Court Judge Edward J. Jeffries, Sr., who was also the father of Detroit Mayor Edward J. Jeffries, Jr. History The first phase, Jeffries West, opened in 1953 as a complex of eight 14-story towers. The second phase included five additional towers in Jeffries West and Jeffries East, 415 apartments in a set of low-rise apartment blocks, added in 1955. In total, the project included 2,170 housing units on 47 acres. At first, the complex was popular among many Detroit residents who were eager to move into the new buildings. But by the late 1960s, the buildings had become a haven for drug dealers and an area with a high crime rate. Redevelopment Five towers of the complex were demolished in 1997, and four additional towers were imploded in 2001 to make way for the redevelopment of the site. An additional tower was demolished approximately two years later. The remaining tenants of the Jeffries were moved to Freedom Place and Research Park housing complexes, approximately 8 city blocks from the Jeffries, while the redevelopment took place. A development by Scripps Park Associates was built on the site of Jeffries West and named "Woodbridge Estates" at a cost of $92 million. Woodbridge Estates includes 281 mixed-income housing rental units, 101 owner-occupancy attached and detached single-family homes, a 100-unit senior housing apartment building, plus 297 units of senior housing in the three remaining towers of the former Jeffries West. Jeffries East was demolished in 2008 and the site redeveloped as a mixed-income complex named "Cornerstone," completed in late 2012 and developed in three phases. It included the development of 180 rental units in 30 buildings of townhomes and duplexes, consisting of 138 public housing units and 42 affordable housing rental units. Former residents of Jeffries East in good standing with the Detroit Housing Commission were permitted to return to the new complex. References External links Jeffries towers tumble — 2001 article in the Detroit News about the demolition of many of the Jeffries Projects towers. — Information about the buildings in Jeffries Project. Public housing in Detroit Urban decay in the United States Residential buildings completed in 1953 1953 establishments in Michigan Buildings and structures demolished in 1997 Buildings and structures demolished in 2001 Demolished buildings and structures in Detroit Buildings and structures demolished by controlled implosion
Jeffries Projects
[ "Engineering" ]
521
[ "Buildings and structures demolished by controlled implosion", "Architecture" ]
1,511,304
https://en.wikipedia.org/wiki/Wetting%20layer
A wetting layer is a monolayer of atoms that is epitaxially grown on a flat surface. The atoms forming the wetting layer can be semimetallic elements/compounds or metallic alloys (for thin films). Wetting layers form when depositing a lattice-mismatched material on a crystalline substrate. This article refers to the wetting layer connected to the growth of self-assembled quantum dots (e.g. InAs on GaAs). These quantum dots form on top of the wetting layer. The wetting layer can influence the states of the quantum dot for applications in quantum information processing and quantum computation. Process The wetting layer is epitaxially grown on a surface using molecular beam epitaxy (MBE). The temperatures required for wetting layer growth typically range from 400 to 500 degrees Celsius. When a material A is deposited on a surface of a lattice-mismatched material B, the first atomic layer of material A often adopts the lattice constant of B. This monolayer of material A is called the wetting layer. When the thickness of layer A increases further, it becomes energetically unfavorable for material A to keep the lattice constant of B. Due to the high strain of layer A, additional atoms group together once a certain critical thickness of layer A is reached. This island formation reduces the elastic energy. Overgrown with material B, the wetting layer forms a quantum well if material A has a lower bandgap than B. In this case, the formed islands are quantum dots. Further annealing can be used to modify the physical properties of the wetting layer/quantum dot. Properties The wetting layer is a close-to-monoatomic layer with a thickness of typically 0.5 nanometers. The electronic properties of the quantum dot can change as a result of the wetting layer. Also, the strain of the quantum dot can change due to the wetting layer. Notes External links Wetting layer on arxiv.org group website of M. Dähne Quantum electronics Thin film deposition
Wetting layer
[ "Physics", "Chemistry", "Materials_science", "Mathematics" ]
427
[ "Quantum electronics", "Thin film deposition", "Coatings", "Thin films", "Quantum mechanics", "Condensed matter physics", "Nanotechnology", "Planes (geometry)", "Solid state engineering" ]
14,398,725
https://en.wikipedia.org/wiki/Katanosin
Katanosins are a group of antibiotics (also known as lysobactins). They are natural products with strong antibacterial potency. So far, katanosin A and katanosin B (lysobactin) have been described. Sources Katanosins have been isolated from the fermentation broth of microorganisms, such as Cytophaga or the Gram-negative bacterium Lysobacter sp. Structure Katanosins are cyclic depsipeptides (acylcyclodepsipeptides). These non-proteinogenic structures are not ordinary proteins derived from primary metabolism. Rather, they originate from bacterial secondary metabolism. Accordingly, various non-proteinogenic (non-ribosomal) amino acids are found in katanosins, such as 3-hydroxyleucine, 3-hydroxyasparagine, allothreonine and 3-hydroxyphenylalanine. All katanosins have a cyclic and a linear segment (“lariat structure”). The peptidic ring is closed with an ester bond (lactone). Katanosin A and B differ at amino acid position 7. The minor metabolite katanosin A has a valine in this position, whereas the main metabolite katanosin B carries an isoleucine. Biological activity Katanosin antibiotics target bacterial cell wall biosynthesis. They are highly potent against problematic Gram-positive hospital pathogens such as staphylococci and enterococci. Their promising biological activity has attracted various biological and chemical research groups. Their in-vitro potency is comparable with that of the current “last defence” antibiotic vancomycin. Chemical synthesis The first total syntheses of katanosin B (lysobactin) were described in 2007. References Antibiotics Depsipeptides
Katanosin
[ "Biology" ]
392
[ "Antibiotics", "Biocides", "Biotechnology products" ]
14,403,153
https://en.wikipedia.org/wiki/Mongol%20mythology
Mongol mythology is the traditional religion of the Mongols. Creation There are many Mongol creation myths. In one, the creation of the world is attributed to a Buddhist deity Lama. At the start of time, there was only water, and from the heavens, Lama came down to it holding an iron rod with which he began to stir. As he began to stir the water, the stirring brought about a wind and fire which caused a thickening at the centre of the waters to form the earth. Another narrative also attributes the creation of heaven and earth to a lama who is called Udan. Udan began by separating earth from heaven, and then dividing heaven and earth both into nine stories, and creating nine rivers. After the creation of the earth itself, the first male and female couple were created out of clay. They would become the progenitors of all humanity. In another example the world began as an agitating gas which grew increasingly warm and damp, precipitating a heavy rain that created the oceans. Dust and sand emerged to the surface and became earth. Yet another account tells of the Buddha Sakyamuni searching the surface of the sea for a means to create the earth and spotting a golden frog. From its east side, Buddha pierced the frog through, causing it to spin and face north. From its mouth burst fire, and its rump streamed water. Buddha tossed golden sand on its back, which became land. And this was the origin of the five earthly elements: wood and metal from the arrow, and fire, water and sand. These myths date from the 17th century when Yellow Shamanism (Tibetan Buddhism using shamanistic forms) was established in Mongolia. Black Shamanism and White Shamanism from pre-Buddhist times survive only in far-northern Mongolia (around Lake Khuvsgul) and the region around Lake Baikal where Lamaist persecution had not been effective. Deities Bai-Ulgan and Esege Malan are creator deities. Ot is the goddess of marriage. Tung-ak is the patron god of tribal chiefs and the ruler of the lesser spirits of Mongol mythology. Erlik Khan is the King of the Underworld. Daichi Tengri is the red god of war to whom enemy soldiers were sometimes sacrificed during battle campaigns. Zaarin Tengri is a spirit who gives Khorchi (in the Secret History of the Mongols) a vision of a cow mooing "Heaven and earth have agreed to make Temujin (later Genghis Khan) the lord of the nation". The sky god Tengri is attested from the Xiongnu of the 2nd century BC. The Xiongnu may not have been Mongol, but Tengri is common to several Central Asian peoples, including the Mongols. The wolf, falcon, deer and horse were important symbolic animals. Texts and myths The Uliger are traditional epic tales, and the Epic of King Gesar is shared with much of Central Asia and Tibet. The Epic of King Gesar (Ges'r, Kesar) is a Mongol religious epic about Geser (also known as Buche Beligte), a prophet of Tengriism. See also Alpamysh Epic of Manas Manchurian mythology Mongolian cosmogony Scythian mythology Shamanism in Siberia The Secret History of the Mongols Tibetan mythology Tungusic mythology Turco-Mongol tradition Turkic mythology Notes References Walter Heissig, The Religions of Mongolia, Kegan Paul (2000). Myths Connected With Mongol Religion, A Journey in Southern Siberia, by Jeremiah Curtin. Gerald Hausman, Loretta Hausman, The Mythology of Horses: Horse Legend and Lore Throughout the Ages (2003), 37–46. Yves Bonnefoy, Wendy Doniger, Asian Mythologies, University Of Chicago Press (1993), 315–339.
满都呼, 中国阿尔泰语系诸民族神话故事 (folklores of Chinese Altaic races). 民族出版社, 1997. 贺灵, 新疆宗教古籍资料辑注 (materials of old texts of Xinjiang religions). Xinjiang People's Press, May 2006. S. G. Klyashtornyj, 'Political Background of the Old Turkic Religion' in: Oelschlägel, Nentwig, Taube (eds.), "Roter Altai, gib dein Echo!" (FS Taube), Leipzig, 2005, 260–265. External links Alpamysh Shamanism in Mongolia and Tibet The Altaic Epic Tengri on Mars Creation myths Tengriism
Mongol mythology
[ "Astronomy" ]
939
[ "Cosmogony", "Creation myths" ]
14,403,539
https://en.wikipedia.org/wiki/List%20of%20people%20by%20Erd%C5%91s%20number
Paul Erdős (1913–1996) was a Hungarian mathematician. He considered mathematics to be a social activity and often collaborated on his papers, having 511 joint authors, many of whom also have their own collaborators. The Erdős number measures the "collaborative distance" between an author and Erdős. Thus, his direct co-authors have Erdős number one, theirs have number two, and so forth. Erdős himself has Erdős number zero. There are more than 12,500 people with an Erdős number of two. This is a partial list of authors with an Erdős number of three or less. For more complete listings of Erdős numbers, see the databases maintained by the Erdős Number Project or the collaboration distance calculators maintained by the American Mathematical Society and by zbMATH. Zero Paul Erdős One A János Aczél Ron Aharoni Martin Aigner Miklós Ajtai Leonidas Alaoglu Yousef Alavi Krishnaswami Alladi Noga Alon Nesmith Ankeny Joseph Arkin Boris Aronov David Avis B László Babai Frederick Bagemihl Leon Bankoff Paul T. Bateman James Earl Baumgartner Mehdi Behzad Richard Bellman Vitaly Bergelson Arie Bialostocki Andreas Blass Ralph P. Boas Jr Béla Bollobás John Adrian Bondy Joel Lee Brenner John Brillhart Thomas Craig Brown W. G. Brown Nicolaas Govert de Bruijn R. Creighton Buck Stefan Burr Steve Butler C Neil J. Calkin Peter Cameron Paul A. Catlin Gary Chartrand Phyllis Chinn Sarvadaman Chowla Fan Chung Kai Lai Chung Václav Chvátal Charles Colbourn John Horton Conway Arthur Herbert Copeland Imre Csiszár D Harold Davenport Dominique de Caen Jean-Marie De Koninck Jean-Marc Deshouillers Michel Deza Persi Diaconis Gabriel Andrew Dirac Jacques Dixmier Yael Dowker Underwood Dudley Aryeh Dvoretzky E György Elekes Peter D. T. A. Elliott F Vance Faber Siemion Fajtlowicz Ralph Faudree László Fejes Tóth William Feller Peter C. Fishburn Géza Fodor Aviezri Fraenkel Péter Frankl Gregory Freiman Wolfgang Heinrich Johannes Fuchs Zoltán Füredi G Steven Gaal Janos Galambos Tibor Gallai Fred Galvin Joseph E. Gillis Leonard Gillman Abraham Ginzburg Chris Godsil Michael Golomb Adolph Winkler Goodman Basil Gordon Ronald J. Gould Ronald Graham Sidney Graham Andrew Granville Peter M. Gruber Branko Grünbaum Hansraj Gupta Richard K. Guy Michael Guy András Gyárfás H András Hajnal Gábor Halász Haim Hanani Frank Harary Zdeněk Hedrlín Hans Heilbronn Pavol Hell Fritz Herzog Alan J. Hoffman Verner Emil Hoggatt Jr. I Albert Ingham Aleksandar Ivić J Eri Jabotinsky Steve Jackson Michael Scott Jacobson Svante Janson Vojtěch Jarník K Mark Kac Paul Chester Kainen Shizuo Kakutani Egbert van Kampen Irving Kaplansky Jovan Karamata Ke Zhao Paul Kelly Péter Kiss Murray S. Klamkin Maria Klawe Daniel Kleitman Yoshiharu Kohayakawa Jurjen Ferdinand Koksma Péter Komjáth János Komlós Steven G. Krantz Michael Krivelevich Ewa Kubicka Kenneth Kunen L Jean A. Larson Renu C. Laskar Joseph Lehner William J. LeVeque Winnie Li Jack van Lint Nati Linial László Lovász Florian Luca Tomasz Łuczak M Robert McEliece Brendan McKay Menachem Magidor Kurt Mahler Helmut Maier Michael Makkai Solomon Marcus Giuseppe Melfi Eric Charles Milner Leon Mirsky Hugh Montgomery Peter Montgomery Shlomo Moran Leo Moser M. Ram Murty V. Kumar Murty N Melvyn B. Nathanson Jaroslav Nešetřil Elisha Netanyahu Donald J. Newman Jean-Louis Nicolas Ivan M. Niven O Andrew Odlyzko Ortrud Oellermann Cyril Offord Patrick O'Neil P János Pach Péter Pál Pálfy Torrence Parsons George Piranian Richard Pollack Harry Pollard Carl Pomerance Lajos Pósa Karl Prachar David Preiss Norman J. Pullman George B. 
Purdy László Pyber R Richard Rado Kanakanahalli Ramachandra S. B. Rao Alfréd Rényi Bruce Reznick Hans Riesel Vojtěch Rödl Paul C. Rosenbloom Bruce Lee Rothschild Cecil C. Rousseau Lee Albert Rubel Arthur Rubin Mary Ellen Rudin Imre Z. Ruzsa S Horst Sachs Michael Saks Peter Salamon Tibor Šalát András Sárközy Gábor N. Sárközy Richard Schelp Andrzej Schinzel Leonard Schulman Sanford Segal Wladimir Seidel John Selfridge Jeffrey Shallit Harold S. Shapiro Saharon Shelah Allen Shields Tarlok Nath Shorey Ruth Silverman Gustavus Simmons Miklós Simonovits Navin M. Singhi Alexander Soifer Vera Sós Ernst Specker Joel Spencer Cameron Leigh Stewart Doug Stinson Arthur Harold Stone Ernst G. Straus Mathukumalli V. Subbarao Henda Swart Mario Szegedy Gábor Szegő Esther Szekeres George Szekeres Endre Szemerédi Peter Szüsz T Alfred Tarski Alan D. Taylor Gérald Tenenbaum Prasad V. Tetali Carsten Thomassen Robert Tijdeman Vilmos Totik William T. Trotter Pál Turán W. T. Tutte U Stanislaw Ulam Kazimierz Urbanik V Bob Vaughan Andrew Vázsonyi Katalin Vesztergombi István Vincze W Samuel S. Wagstaff Jr. Douglas West R. M. Wilson Robin Wilson Peter Winkler Nick Wormald Y Frances Yao Z Shmuel Zaks Stanisław Krystyn Zaremba Abraham Ziv Two A Karen Aardal Shreeram Shankar Abhyankar Maya Ackerman Sibel Adalı Leonard Adleman Yehuda Afek Pankaj K. Agarwal Gordon Agnew Dorit Aharonov Rudolf Ahlswede Jin Akiyama Ian F. Akyildiz Michael H. Albert David Aldous W. R. (Red) Alford Ahmet Alkan Eric Allender Brian Alspach Andris Ambainis Warren Ambrose Robert Ammann Jane Ammons Titu Andreescu Cabiria Andreian Cazacu Hajnal Andréka George Andrews Tom M. Apostol David Applegate Zvi Arad Dan Archdeacon Richard Friederich Arens Sandra Arlinghaus Sanjeev Arora Emil Artin Shiri Artstein Tetsuo Asano Michael Aschbacher Richard Askey James Aspnes Idris Assani Mikhail Atallah A. O. L. Atkin Hagit Attiya Herman Auerbach Franz Aurenhammer Baruch Awerbuch Sheldon Axler B Eric Bach Christine Bachoc Joan Bagaria David H. Bailey Rosemary A. Bailey Alan Baker Egon Balas Ramachandran Balasubramanian Bohuslav Balcar Pierre Baldi Zoltán Tibor Balogh Stefan Banach Prith Banerjee Maya Bar-Hillel Dror Bar-Natan Imre Bárány Ruth Aaronson Bari Martin T. Barlow Michael Barnsley John D. Barrow Tomek Bartoszyński Jon Barwise Serafim Batzoglou Dave Bayer Cristina Bazgan József Beck Edwin F. Beckenbach Laurel Beckett William Beckner L. W. Beineke Nuel Belnap Valentin Danilovich Belousov Alexandra Bellow Arthur T. Benjamin Georgia Benkart Claude Berge Bonnie Berger George Bergman Peter Bergmann Elwyn Berlekamp Leah Berman Bruce C. Berndt R. Stephen Berry Tom Berson Valérie Berthé Elisa Bertino Andrea Bertozzi Abram Samoilovitch Besicovitch Evert Willem Beth Albrecht Beutelspacher Manjul Bhargava Vasanti N. Bhat-Nayak Louis Billera Andrzej Białynicki-Birula R. H. Bing Kenneth Binmore Bryan John Birch Garrett Birkhoff Pamela J. Bjorkman David Blackwell Brian Blank Woody Bledsoe Vincent Blondel Manuel Blum Mary L. Boas Salomon Bochner Mary Ellen Bock Hans L. Bodlaender Anna Bogomolnaia Enrico Bombieri Dan Boneh Carl R. de Boor Richard Borcherds Christian Borgs Allan Borodin Karol Borsuk David Borwein Jonathan Borwein Peter Borwein Jit Bose Raj Chandra Bose Fernanda Botelho Jean Bourgain Stephen R. Bourne Mireille Bousquet-Mélou Jonathan Bowen David William Boyd Stephen P. Boyd Achi Brandt Dietrich Braess Steven Brams Gilles Brassard Richard Brauer Mya Breitbart Charles Brenner Richard P. 
Brent David Bressoud Keith Briggs Graham Brightwell Andrei Broder Henk Broer Andries Brouwer Gavin Brown Kenneth Brown Richard A. Brualdi Andrew M. Bruckner Janusz Brzozowski Johannes Buchmann Joe P. Buhler Edward Burger Herbert Busemann C Eugenio Calabi Robert Calderbank Cristian S. Calude M. Elizabeth Cannon Charles Cantor Sylvain Cappell Lennart Carleson Gunnar Carlsson Leonard Carlitz Pierre Cartier J. W. S. Cassels Yair Censor Vint Cerf Timothy M. Chan K. S. Chandrasekharan Subrahmanyan Chandrasekhar Zoé Chatzidakis Jennifer Tour Chayes Bernard Chazelle Elliott Ward Cheney Jr. Eugenia Cheng Otfried Cheong Shiing-Shen Chern Amanda Chetwynd S. A. Choudum Maria Chudnovsky William Gemmell Cochran Henri Cohen Henry Cohn Alina Carmen Cojocaru Sidney Coleman Edward Collingwood Marston Conder Anne Condon Robert Connelly William J. Cook Cristina Conati Brian Conrey Irving Copi Philip Coppens Don Coppersmith Derek Corneil Johannes van der Corput Sylvie Corteel Collette Coullard Thomas M. Cover Lenore Cowen Harold Scott MacDonald Coxeter Richard Crandall Claude Crépeau Ernest S. Croot III Ákos Császár Sándor Csörgő Marianna Csörnyei D Raissa D’Souza Ivan Damgård David van Dantzig George Dantzig Henri Darmon Gautam Das Sandip Das Chantal David Kenneth Davidson Donald A. Dawson Mark de Berg Rina Dechter Ermelinda DeLaViña Erik Demaine Arthur P. Dempster Cyrus Derman Nachum Dershowitz Claire Deschênes Keith Devlin Ronald DeVore Luc Devroye Alexander Dewdney Tamal Dey Brenda L. Dietrich Jeff Dinitz Michael Dinneen Irit Dinur Stanislav George Djorgovski Hans Dobbertin David P. Dobkin Danny Dolev Shlomi Dolev Ron Donagi David Donoho Monroe D. Donsker Joseph L. Doob Adrien Douady Ronald G. Douglas Eric van Douwen Rod Downey Pauline van den Driessche Qiang Du Alexandra Duel-Hallen Tom Duff Dwight Duffus Michel Duflo Andrej Dujella Ioana Dumitriu Peter Duren Rick Durrett Pierre Dusart Bernard Dwork Cynthia Dwork Nira Dyn Freeman Dyson E Peter Eades A. Ross Eckler Jr. Katsuya Eda Herbert Edelsbrunner Jack Edmonds Michelle Effros Bradley Efron Tatyana Pavlovna Ehrenfest Andrzej Ehrenfeucht Tamar Eilam Samuel Eilenberg Albert Einstein David Eisenbud Kirsten Eisenträger Noam Elkies Edith Elkind Joanne Elliott Jo Ellis-Monaghan Per Enflo David Eppstein Tamás Erdélyi Alexandre Eremenko Shimon Even Hugh Everett III Howard Eves F Ronald Fagin Kenneth Falconer Jean-Claude Falmagne Ky Fan Kaitai Fang Martin Farach-Colton Odile Favaron Solomon Feferman Charles Fefferman Uriel Feige Lipót Fejér Michael Fekete Michal Feldman Pedro Felipe Felzenszwalb Amos Fiat Leah Findlater Nathan Fine Michael J. Fischer Josh Fisher Ronald Fisher Mary Flahive Philippe Flajolet Harley Flanders Wendell Fleming Ciprian Foias Jon Folkman Matthew Foreman Ferenc Forgó M. K. Fort Jr. Lance Fortnow Lorraine Foster Michael Fredman Dan Freed David A. Freedman Michael Freedman Chris Freiling Juliana Freire Peter Freund Shmuel Friedland John Friedlander Harvey Friedman Sy Friedman Kurt Otto Friedrichs Alan M. Frieze Monique Frize Zdeněk Frolík László Fuchs D. R. Fulkerson William Fulton Hillel Furstenberg G Lisl Gaal Dov Gabbay Haim Gaifman David Gale Zvi Galil Alexander Gamburd Mario Garavaglia Martin Gardner Michael Garey John B. Garnett Joachim von zur Gathen Mai Gehrke William Gehrlein Jane F. Gentleman Ira Gessel Ellen Gethner Nassif Ghoussoub Edgar Gilbert Moti Gitik Paul Glaister Sheldon Glashow Michel Goemans Edray Herber Goins Stanisław Gołąb Warren Goldfarb Dorian M. 
Goldfeld Oded Goldreich Judy Goldsmith Martin Goldstern Herman Goldstine Daniel Goldston Shafi Goldwasser Eric Goles Solomon W. Golomb Gene H. Golub Marty Golubitsky Martin Charles Golumbic Ralph E. Gomory Amy Ashurst Gooch Jacob E. Goodman Cameron Gordon Mark Goresky Henry W. Gould Jim Gray Ben Green Anne Greenbaum Curtis Greene Catherine Greenhill Kasper Green Larsen Russell Greiner Ulf Grenander Thomas N. E. Greville Robert C. Griffiths Dima Grigoriev Geoffrey Grimmett Rami Grossberg Emil Grosswald Martin Grötschel Helen G. Grundman Chen Guanrong Leonidas J. Guibas Max Gunzburger Yuri Gurevich Dan Gusfield Gregory Gutin H Ruth Haas Hugo Hadwiger Jaroslav Hájek Mohammad Hajiaghayi György Hajós S. L. Hakimi Heini Halberstam Alfred W. Hales Marshall Hall Paul Halmos Dan Halperin Joseph Halpern Joel David Hamkins Katalin Hangos Sariel Har-Peled Heiko Harborth G. H. Hardy Glyn Harman Leo Harrington Michael Harris Pamela E. Harris Hiroshi Haruki Joel Hass Helmut Hasse Babak Hassibi Johan Håstad David Haussler Penny Haxell Patrick Hayden John P. Hayes Walter Hayman Teresa W. Haynes Emilie Virginia Haynsworth Neil Heffernan Pinar Heggernes Katherine Heinrich Christine Heitsch Harald Helfgott John William Helton Leon Henkin Gabor Herman John Hershberger Israel Nathan Herstein Agnes M. Herzberg Silvia Heubach Edwin Hewitt Graham Higman Einar Hille Peter Hilton David Hinkley James William Peter Hirschfeld Pascal Hitzler Edmund Hlawka Dorit S. Hochbaum Wilfrid Hodges Torsten Hoefler Leslie Hogben Susan P. Holmes Alfred Horn Haruo Hosoya Roger Evans Howe John Mackintosh Howie Ehud Hrushovski John F. Hughes Roger Hui Birge Huisgen-Zimmermann Eugénie Hunsicker Ferran Hurtado Joan Hutchinson Martin Huxley I John Iacono Oscar H. Ibarra Lucian Ilie Neil Immerman Russell Impagliazzo Wilfried Imrich Piotr Indyk Aubrey William Ingleton Kori Inkpen Martin Isaacs Mourad Ismail Kazuo Iwama Henryk Iwaniec J David M. Jackson Brigitte Jaumard Thomas Jech David Jerison Meyer Jerison Mark Jerrum Børge Jessen Jia Rongqing Carl Jockusch Charles Royal Johnson David S. Johnson Ellis L. Johnson Norman Johnson Norman Lloyd Johnson William B. Johnson F. Burton Jones Peter Jones Roger Jones Nataša Jonoska Bjarni Jónsson Dominic Joyce Matti Jutila K Richard Kadison Jean-Pierre Kahane Jeff Kahn Gil Kalai Olav Kallenberg László Kalmár Akihiro Kanamori Ravindran Kannan Kao Cheng-yan Lila Kari Anna Karlin Samuel Karlin Narendra Karmarkar Richard M. Karp Marek Karpinski Gyula O. H. Katona Gyula Y. Katona Ephraim Katzir Yitzhak Katznelson Louis Kauffman Bruria Kaufman Ken-ichi Kawarabayashi Alexander S. Kechris Klara Kedem Howard Jerome Keisler Joseph Keller Leroy Milton Kelly Julia Kempe Ken Kennedy Michel Kervaire Pınar Keskinocak Harry Kesten Tanya Khovanova Jack Kiefer Clark Kimberling Valerie King John Kingman William English Kirwan Signe Kjelstrup John R. Klauder Sandi Klavžar Victor Klee Robert Kleinberg Bronisław Knaster Konrad Knopp Donald Knuth William Lawrence Kocay Christiane Koch Simon B. Kochen Waldemar W. Koczkodaj Kunihiko Kodaira Sven Koenig Phokion G. Kolaitis János Kollár Sergei Konyagin Eugene Koonin Ádám Korányi Jacob Korevaar András Kornai Thomas William Körner S. Rao Kosaraju Ronnie Kosloff Bertram Kostant Samuel Kotz Anton Kotzig Dexter Kozen Dmitri Ilyich Kozlov Bryna Kra Daniel Kráľ Sarit Kraus Marc van Kreveld Clyde Kruskal Joseph Kruskal Marek Kuczma Harold W. 
Kuhn Markus Kuhn Greg Kuperberg Krystyna Kuperberg Włodzimierz Kuperberg Kazimierz Kuratowski Věra Kůrková L Miklós Laczkovich Jeffrey Lagarias Radha Laha Ming-Jun Lai Tsit Yuen Lam Brian LaMacchia Joachim Lambek Edmund Landau Eric Lander Stefan Langerman Robert Langlands Michael Langston Monique Laurent Kristin Lauter Richard Laver Lucien Le Cam Imre Leader Jon Lee Charles Leedham-Green Jan van Leeuwen Derrick Henry Lehmer Emma Lehmer F. Thomson Leighton Abraham Lempel László Lempert Arjen Lenstra Hendrik Lenstra Jan Karel Lenstra Hanfried Lenz Nancy Leveson Leonid Levin Raphael David Levine Norman Levinson Donald John Lewis Paul Leyland André Lichnerowicz Katrina Ligett Elliott H. Lieb Karl Lieberherr Thomas M. Liggett Joram Lindenstrauss Yuri Linnik Jacques-Louis Lions Richard Lipton Barbara Liskov John Little John Edensor Littlewood Andy Liu Chung Laung Liu Peter A. Loeb Ling Long Charles Loewner Benjamin F. Logan Darrell Long Judith Q. Longyear Lee Lorch Paola Loreti Catherine A. Lozupone Anna Lubiw Alexander Lubotzky Michael Luby R. Duncan Luce Edith Hirsch Luchins Malwina Łuczak Monika Ludwig Eugene M. Luks Carsten Lund Joaquin Mazdak Luttinger Roger Lyndon M Kevin McCurley Ian G. Macdonald Eugene McDonnell Joanna McGrenere Angus Macintyre Sheila Scott Macintyre David J. C. MacKay John McKay Henry McKean George Mackey Lester Mackey Richard McKelvey Jeanette McLeod Peter McMullen Dugald Macpherson Jessie MacWilliams Roger Maddux Thomas L. Magnanti Dorothy Maharam Ebadollah S. Mahmoodian Kira Makarova Jitendra Malik Dahlia Malkhi Maryanthe Malliaris Paul Malliavin Claudia Malvenuto Udi Manber Henry Mann Heikki Mannila Renata Mansini Adam Marcus Edward Marczewski Harry Markowitz Alison Marr Robert Marshak Donald A. Martin Gaven Martin Anders Martin-Löf Katalin Marton Dragan Marušič Eric Maskin David Masser James Massey William A. Massey Claire Mathieu Yossi Matias Yuri Matiyasevich Jiří Matoušek Barry Mazur Peter Mazur Stanisław Mazur Victor Mazurov Catherine Meadows Elizabeth Meckes Nimrod Megiddo Kurt Mehlhorn Nicholas Metropolis Albert R. Meyer Yves Meyer Paul G. Mezey Ernest Michael Jan Mikusiński Olgica Milenkovic J. C. P. Miller Steven J. Miller Victor S. Miller Vitali Milman Tova Milo Michał Misiurewicz Joseph S. B. Mitchell Michael Mitzenmacher Karyn Moffat Bojan Mohar Joanne Moldenhauer Cristopher Moore Bernard Moret Louis J. Mordell Anne C. Morel Carlos J. Moreno Frank Morgan Dana Moshkovitz Frederick Mosteller Andrzej Mostowski Rajeev Motwani Theodore Motzkin David Mount Jennifer Mueller Alec Muffett Rahul Mukerjee Colm Mulcahy David Mumford J. Ian Munro Klaus-Robert Müller Jan Mycielski Kieka Mynhardt Wendy Myrvold Lawrence A. Mysak N David Naccache Isaac Namioka Assaf Naor Joseph Seffi Naor Moni Naor Crispin Nash-Williams Evelyn Nelson Evi Nemeth George Nemhauser Yuri Valentinovich Nesterenko Nathan Netanyahu Nancy Neudauer Bernhard Neumann Peter M. Neumann Víctor Neumann-Lara Charles M. Newman Miron Nicolescu Rolf Niedermeier Harald Niederreiter Noam Nisan Abraham Nitzan Simon P. Norton Isabella Novik Ruth Nussinov O Frédérique Oggier Jim K. Omura Mary Jo Ondrechen Elizabeth O'Neil Ken Ono Paul van Oorschot Donald Samuel Ornstein Joseph O'Rourke Patrice Ossona de Mendez Deryk Osthus Rafail Ostrovsky Alexander Ostrowski James Oxley P Lior Pachter Igor Pak Ilona Palásti Vladimír Palko Rohit Parikh Jeff Paris Harold R. 
Parks Michal Parnas Jonathan Partington Oren Patashnik Mike Paterson Raj Pathria Gheorghe Păun Jean Pedersen Heinz-Otto Peitgen David Peleg Magda Peligrad Peng Tsu Ann Yuval Peres Hazel Perfect Micha Perles Ed Perkins Charles S. Peskin Robert Phelps Cynthia A. Phillips Christine Piatko Subbayya Sivasankaranarayana Pillai János Pintz Nick Pippenger Tomaž Pisanski Gilles Pisier Toniann Pitassi David Plaisted Vera Pless Michael D. Plummer Amir Pnueli Henry O. Pollak George Pólya Irith Pomeranz Christian Pommerenke Bjorn Poonen Alfred van der Poorten Victoria Powers Cheryl Praeger Vaughan Pratt Franco P. Preparata Ariel D. Procaccia Gordon Preston Calton Pu William R. Pulleyblank Q Jean-Jacques Quisquater R Michael O. Rabin Charles Rackoff Charles Radin Stanisław Radziszowski Adrian Raftery Stefan Ralescu Kavita Ramanan K. G. Ramanathan Olivier Ramaré Dana Randall C. R. Rao Sofya Raskhodnikova Steen Rasmussen Michel Raynaud Dijen K. Ray-Chaudhuri Alexander Razborov Ronald C. Read László Rédei Raymond Redheffer Bruce Reed Irving S. Reed Oded Regev Eva Regnier K. B. Reid Gesine Reinert Edward Reingold Omer Reingold Jennifer Rexford Dana S. Richards Frigyes Riesz John Rigby Gerhard Ringel Alexander Rinnooy Kan John Riordan Jorma Rissanen Ivan Rival Ron Rivest Herbert Robbins Fred S. Roberts Stephen E. Robertson Abraham Robinson Burton Rodin Judith Roitman Dana Ron Colva Roney-Dougal Frances A. Rosamond Bill Roscoe Jonathan Rosenberg Azriel Rosenfeld Gian-Carlo Rota Klaus Roth Tim Roughgarden Bimal Kumar Roy Marie-Françoise Roy Gordon Royle Jean E. Rubin Ronitt Rubinfeld Ariel Rubinstein J. Hyam Rubinstein Steven Rudich Walter Rudin Zeev Rudnick Arunas Rudvalis Sushmita Ruj Czesław Ryll-Nardzewski Robert Rumely Frank Ruskey H. J. Ryser S Donald G. Saari Thomas L. Saaty Gert Sabidussi Jörg-Rüdiger Sack Edward B. Saff Shmuel Safra Jawad Salehi Raphaël Salem Lee Sallows Arto Salomaa Wojciech Samotij E. Sampathkumar Peter Sanders David Sankoff Palash Sarkar Peter Sarnak Daihachiro Sato Lisa Sauermann Carla Savage John E. Savage Najiba Sbihi Mathias Schacht Marion Scheepers Boris M. Schein Ed Scheinerman Wolfgang M. Schmidt Claus P. Schnorr Hans Schneider Isaac Jacob Schoenberg Norman Schofield Arnold Schönhage Oded Schramm Alexander Schrijver Richard Schroeppel Issai Schur Jacob T. Schwartz Dana Scott Jennifer Seberry Thomas Dyer Seeley Raimund Seidel Gary Seitz Atle Selberg Jean-Pierre Serre Brigitte Servatius Simone Severini Paul Seymour Freydoon Shahidi Aner Shalev Adi Shamir Eli Shamir Ron Shamir Daniel Shanks Micha Sharir Dennis Shasha Nir Shavit Scott Shenker G. C. Shephard Lawrence Shepp Goro Shimura David Shmoys Peter Shor Richard Shore Robert Shostak S. S. Shrikhande Wacław Sierpiński Joseph H. Silverman Barry Simon Alistair Sinclair Steven Skiena Christopher Skinner Thoralf Skolem Brian Skyrms Gordon Douglas Slade David Slepian Neil Sloane Cedric Smith Temple F. Smith Hunter Snevily Marc Snir Gustave Solomon Ronald Solomon Robert M. Solovay József Solymosi Kannan Soundararajan Diane Souvaine Gene Spafford Bettina Speckmann Terry Speed M. Grazia Speranza Sarah Spurgeon Katherine St. John Kaye Stacey Jessica Staddon Ludwig Staiger Richard P. Stanley Ralph Gordon Stanton Michael Starbird Harold Stark Sergey Stechkin Mike Steel J. Michael Steele Angelika Steger Kenneth Steiglitz Elias M. 
Stein Maya Stein Hugo Steinhaus Jacques Stern Shlomo Sternberg Bonnie Stewart Lorna Stewart Larry Stockmeyer Ivan Stojmenović Jorge Stolfi Marshall Harvey Stone Michael Stonebraker Volker Strassen Dona Strauss Ileana Streinu Daniel W. Stroock Bernd Sturmfels Francis Su Benny Sudakov Madhu Sudan David Sumner Patrick Suppes Sun Zhiwei Subhash Suri Klaus Sutner Peter Swinnerton-Dyer Balázs Szegedy Stefan Szeider Gábor J. Székely Ágnes Szendrei Lajos Szilassi Wanda Szmielew Tamás Szőnyi Jayme Luiz Szwarcfiter T Tomohiro Tachi Gaisi Takeuti Dov Tamari David Tannor Terence Tao Richard A. Tapia Éva Tardos Gábor Tardos Robert Tarjan Olga Taussky-Todd Herman te Riele Vanessa Teague Max Tegmark Shang-Hua Teng Bridget Tenner Katrin Tent Audrey Terras Diana Thomas Robin Thomas Heidi Thornquist Mikkel Thorup William Thurston Robert Tichy Andrey Nikolayevich Tikhonov Naftali Tishby John Todd Stevo Todorčević Nicole Tomczak-Jaegermann Tatiana Toro Godfried Toussaint Marilyn Tremaine Ann Trenk Luca Trevisan Michael Trick Jan Trlifaj Věra Trnková John Truss Marcello Truzzi Alan Tucker Albert W. Tucker Thomas W. Tucker Bryant Tuckerman John Tukey Helge Tverberg U George Uhlenbeck Jeffrey Ullman Chris Umans Eli Upfal Jorge Urrutia V Jouko Väänänen Robert J. Vanderbei Harry Vandiver Scott Vanstone S. R. Srinivasa Varadhan Alexander Vardy Richard S. Varga George Varghese Robert Lawson Vaught Umesh Vazirani Vijay Vazirani Santosh Vempala Michèle Vergne Anatoly Vershik Victor Vianu Jonathan David Victor Mathukumalli Vidyasagar Uzi Vishkin Vadim G. Vizing Margit Voigt Marc Voorhoeve Petr Vopěnka Gheorghe Vrănceanu Van H. Vu W Dorothea Wagner Stan Wagon Abraham Wald Michel Waldschmidt David J. Wales Arnold Walfisz Judy L. Walker Joseph L. Walsh Yusu Wang Ian Wanless John Clive Ward Tandy Warnow Stefan E. Warschawski Johan Wästlund Michael Waterman Gerhard Weikum Hans Weinberger Peter J. Weinberger Benjamin Weiss Guido Weiss Mary Ann Weitnauer Lloyd R. Welch Dominic Welsh Emo Welzl Carola Wenk Harald Wergeland Andrew B. Whinston Douglas R. White Sue Whitesides Hassler Whitney David Widder Harold Widom Norbert Wiener Avi Wigderson Mark Wilde Herbert Wilf Alex Wilkie Yorick Wilks Hugh C. Williams Ruth J. Williams Robert Arnott Wilson Shmuel Winograd Andreas Winter Hans Witsenhausen Gerhard J. Woeginger Jack Wolf Marek Wolf Thomas Wolff Jacob Wolfowitz Stephen Wolfram Carol Wood W. Hugh Woodin Trevor Wooley John Wrench E. M. Wright Rebecca N. Wright Mario Wschebor Angela Y. Wu Donald Wunsch Max Wyman Aaron D. Wyner Y Catherine Yan Mihalis Yannakakis Martin Yarmush Andrew Yao Shing-Tung Yau Bülent Yener Cem Yıldırım Yiqun Lisa Yin J. W. T. Youngs Moti Yung Z Adriaan Cornelis Zaanen Stathis Zachos Don Zagier Alexandru Zaharescu Thomas Zaslavsky Hans Zassenhaus Lenka Zdeborová Doron Zeilberger Karl Longin Zeller Ping Zhang Günter M. Ziegler Tamar Ziegler Paul Zimmermann Nivio Ziviani Vaclav Zizler Štefan Znám David Zuckerman Wadim Zudilin Uri Zwick William S. Zwicker Antoni Zygmund Three A Ronald Aarts Wil van der Aalst Scott Aaronson Eduardo Abeliuk Hal Abelson Ralph Abraham Carlisle Adams Colin Adams Frank Adams Jeffrey D. Adams Alejandro Adem Roy Adler Vaneet Aggarwal Ian Agol Manindra Agrawal Yakir Aharonov Lars Ahlfors Alfred Aho Michael Aizenman Naum Akhiezer Stephanie B. Alexander Pavel Alexandrov Elizabeth S. Allman Jonathan Lazare Alperin Rajeev Alur Nancy M. Amato Nina Amenta Shimshon Amitsur Martyn Amos Ross J. 
Anderson Dmitri Anosov Huzihiro Araki Shlomo Argamon Lars Arge Alexander Arhangelskii Vladimir Arnold Donald Aronson Nachman Aronszajn Kenneth Arrow Sergei N. Artemov James Arthur Michael Artin Matthias Aschenbrenner Richard Aster Karl Johan Åström Krassimir Atanassov Frederick Valentine Atkinson David August Robert Aumann Jeremy Avigad Artur Avila Luchezar L. Avramov Steve Awodey James Ax Richard Axel Ofer Azar B Franz Baader John C. Baez Ricardo Baeza Rodríguez Jennifer Balakrishnan John M. Ball Keith Martin Ball W. W. Rouse Ball Thomas Banchoff Yehoshua Bar-Hillel Rina Foygel Barber Nina Bari Dwight Barkley Paulo S. L. M. Barreto Nayandeep Deka Baruah Kaye Basford Hyman Bass Richard F. Bass Hannah Bast Peter W. Bates Victor Batyrev Paul Baum Gilbert Baumslag Glen E. Baxter Samuel Beatty Arnaud Beauville Carlo Beenakker Robert Behringer Jason Behrstock John Lane Bell Mihir Bellare Mordechai Ben-Ari John Benedetto Yoshua Bengio Charles H. Bennett Jonathan Bennett Henri Berestycki Zvi Bern Robert Bernecky Douglas Bernheim Daniel J. Bernstein Dimitri Bertsekas Michele Besso Hans Bethe Stefano Bianchini Edward Bierstone Eli Biham Sara Billey Sundance Bilson-Thompson Katalin Bimbó George David Birkhoff Joan Birman Alex Biryukov Richard L. Bishop Robert G. Bland David Blei Spencer Bloch Richard Earl Block Leonard Blumenthal Harald Bohr Andrei Bolibrukh George Boolos Armand Borel Max Born Nigel Boston Raoul Bott Onno J. Boxma Robert S. Boyer Samuel L. Braunstein Marilyn Breen Alberto Bressan Emmanuel Breuillard Haïm Brezis Martin Bridson Sergey Brin Roger W. Brockett Michel Broué William Browder Lawrie Brown Morton Brown W. Dale Brownawell Babette Brumback Viggo Brun David Buchsbaum Adhemar Bultheel Donald Burkholder C Luis Caffarelli Russel E. Caflisch Guido Caldarelli Alberto Calderón Danny Calegari David Callaway Robert Horton Cameron James W. Cannon John Canny Jaime Carbonell Walter Carnielli Élie Cartan Henri Cartan Mary Cartwright Carlos Castillo-Chavez Zoia Ceaușescu Gregory Chaitin Venkat Chandrasekaran Sun-Yung Alice Chang Ruth Charney Georges Charpak Bidyut Baran Chaudhuri David Chaum Jeff Cheeger Xiuxiong Chen Shiu-Yuen Cheng Cheon Jung-hee Herman Chernoff Frederic T. Chong Gustave Choquet Yvonne Choquet-Bruhat Howie Choset Demetrios Christodoulou Isaac Chuang Philippe G. Ciarlet Mitrofan Cioban Kenneth L. Clarkson Alfred H. Clifford Tim Cochran David X. Cohen Robert F. Coleman Peter Coles Francis Collins Jean-Louis Colliot-Thélène David Colquhoun Pierre Conner Alain Connes Stephen Cook Jerome Cornfield Athel Cornish-Bowden Richard Courant Michael Cowling David A. Cox David Cox Michael G. Crandall Marc Culler Joachim Cuntz Charles W. Curtis Thomas Curtright D Angelo Dalli Sajal K. Das Ingrid Daubechies James H. Davenport Paul Davies Chandler Davis Martin Davis Louis de Branges de Bourcia Gérard Debreu Percy Deift Pierre Deligne Ben Delo Jesús A. De Loera Martin Demaine Laura DeMarco James Demmel Jan Denef Dennis DeTurck Mike Develin Florin Diacu Matthew T. Dickerson Jean Dieudonné Whitfield Diffie Robbert Dijkgraaf Robert P. Dilworth Peter Dinda Stanislav George Djorgovski Roland Dobrushin Manfredo do Carmo Simon Donaldson Jack Dongarra Sergio Doplicher Dov Dori Michael R. Douglas Paul Dourish Clifford Hugh Dowker Lester Dubins Harvey Dubner Richard M. Dudley George F. D. Duff Iain S. Duff James Dugundji Jon Michael Dunn Marcus du Sautoy Eugene Dynkin E John Eccles Beno Eckmann Jean-Pierre Eckmann Alan Edelman William Edge A. W. F. 
Edwards Michelle Effros Nikolai Efimov Ellen Eischen Yakov Eliashberg Jordan Ellenberg George A. Elliott Robert J. Elliott George F. R. Ellis Richard Elman Ryszard Engelking Charles Epstein Arthur Erdélyi Karin Erdmann Alex Eskin Pavel Etingof Lawrence C. Evans F Ludvig Faddeev Gerd Faltings Benson Farb Herbert Federer Anita Burdman Feferman Joan Feigenbaum Walter Feit Edward Felten Werner Fenchel Enrico Fermi Arran Fernandez Richard Feynman Tim Finin Thomas Fink Raphael Finkel Hilary Finucane Melvin Fitting Luciano Floridi Gerald Folland Sergey Fomin Irene Fonseca L. R. Ford Jr. Ian Foster Ralph Fox Paul Frampton Maurice René Fréchet Benedict Freedman Laurent Freidel Herta Freitag Edward Frenkel Peter J. Freyd Susan Friedlander Avner Friedman Orrin Frink Hans Freudenthal Uriel Frisch Ferdinand Georg Frobenius Jürg Fröhlich Bent Fuglede Kenichi Fukui Wayne A. Fuller G David Gabai Ofer Gabber Robert G. Gallager Giovanni Gallavotti Joseph Gallian Irene M. Gamba Héctor García-Molina Artur d'Avila Garcez Richard Garfield Skip Garibaldi Adriano Garsia Erol Gelenbe Israel Gelfand Alexander Gelfond Murray Gell-Mann Stuart Geman Darren Gergle Teena Gerhardt Fritz Gesztesy Ezra Getzler Eknath Prabhakar Ghate Jayanta Kumar Ghosh Gary Gibbons Garth Gibson Peter B. Gilkey Jane Piore Gilman Seymour Ginsburg Ennio De Giorgi Samuel Gitler Hammer Leslie Greengard George Glauberman Andrew M. Gleason Paul Glendinning James Glimm Roland Glowinski Kurt Gödel William Goldman Andrew Goldstein Jerome Goldstein Robert Gompf Francisco Javier González-Acuña Michael T. Goodrich Maria Gordina Carolyn S. Gordon Rudolf Gorenflo Daniel Gorenstein Lothar Göttsche Ian Goulden Jeremy Gray Mary W. Gray Matthew D. Green Brian Greene Robert Griess Phillip Griffiths Rostislav Grigorchuk Uwe Grimm Mikhail Leonidovich Gromov Alexander Grothendieck Benedict Gross Edna Grossman Marcel Grossmann John Guckenheimer Victor Guillemin Robert C. Gunning H Rudolf Haag Christopher Hacon Mark Haiman Petr Hájek Thomas Callister Hales Peter Gavin Hall Joseph Halpern Richard S. Hamilton Michael Handel Robin Hanson David Harbater David Harel Peter G. Harrison George W. Hart Peter E. Hart Vi Hart James Hartle Juris Hartmanis Robin Hartshorne Allen Hatcher Herbert A. Hauptman Jane M. Hawkins Michael Heath Roger Heath-Brown Dennis Hejhal Sigurður Helgason Martin Hellman Nadia Heninger Monika Henzinger Maurice Herlihy Dudley R. Herschbach Theophil Henry Hildebrandt Geoffrey Hinton Morris Hirsch Friedrich Hirzebruch Tony Hoare Gerhard Hochschild Melvin Hochster Roald Hoffmann Maria Hoffmann-Ostenhof Guido Hoheisel Helge Holden Christopher Hooley John Hopcroft Lars Hörmander Susan Howson Juraj Hromkovič Hua Luogeng Nicolaas Marinus Hugenholtz June Huh András P Huhn Gerhard Huisken Craig Huneke Julian Huppert Michael Hutchings Rob J. Hyndman I Jun-Ichi Igusa Masatoshi Gündüz Ikeda Nicole Immorlica Leopold Infeld Adrian Ioana Dmitry Ioffe Victor Isakov Giuseppe F. Italiano Kiyosi Itô Kenneth E. Iverson Victor Ivrii Shokichi Iyanaga J William Jaco Nathan Jacobson Hervé Jacquet Arthur Jaffe Antal Jákli Markus Jakobsson Zvonimir Janko Jens Carsten Jantzen Ronald Jensen Aimee Johnson Selmer M. Johnson Peter Johnstone Jonathan A. Jones Vaughan Jones Jerzy Jurka K Victor Kac Daniel Kahneman Yael Tauman Kalai Burt Kaliski Rudolf E. Kálmán Daniel Kane Max Karoubi Kevin Karplus Daniel Kastler Michael Katehakis Anatole Katok Svetlana Katok David Kazhdan Kiran Kedlaya John L. 
Kelley Frank Kelly George Kempf Brian Kernighan Barbara Keyfitz Rima Khalaf Chandrashekhar Khare Olga Kharlampovich Subhash Khot Clive W. Kilmister Robion Kirby Alexandre Kirillov Frances Kirwan Laszlo B. Kish Mark Kisin Steven Kleiman Jon Kleinberg Anthony W. Knapp Julia F. Knight Lars Knudsen Christof Koch Eddie Kohler Joseph J. Kohn Walter Kohn Daphne Koller Maxim Kontsevich Robert Kottwitz Christoph Koutschan Irwin Kra Lawrence M. Krauss Hans-Peter Kriegel Saul Kripke Jonas Kubilius Vera Kublanovskaya Benjamin Kuipers Ravi S. Kulkarni Shrawan Kumar Subodha Kumar H. T. Kung Ray Kunze Philip Kutzko L Izabella Łaba Michael Lacey Olga Ladyzhenskaya Xuejia Lai Nan Laird Monica S. Lam Willis Lamb Leslie Lamport Peter Landweber Carl Landwehr Oscar Lanford Robert J. Lang Serge Lang Michael J. Larsen Irena Lasiecka Greg Lawler Ruth Lawrence H. Blaine Lawson William Lawvere Peter Lax Robert Lazarsfeld Joel Lebowitz John Leech Solomon Lefschetz Olli Lehto Richard Lenski James Lepowsky Jean Leray Randall J. LeVeque Simon A. Levin David K. Levine Harold Levine Michael Levitt Azriel Lévy Marta Lewicka Harry R. Lewis Hans Lewy Mark Liberman Victor Lidskii Lin Fanghua Michael Lin Yehuda Lindell Elon Lindenstrauss Joseph Lipman Michael L. Littman Chiu-Chu Melissa Liu Jean-Louis Loday François Loeser Nikos Logothetis Eduard Looijenga Cristina Lopes Peter James Lorimer Jerzy Łoś Michael Loss Abraham S. Luchins Yuri Luchko John Edwin Luecke Jacob Lurie George Lusztig Wilhelmus Luxemburg Nancy Lynch Richard Lyons Mikhail Lyubich M Clyde Martin William G. McCallum Dusa McDuff Saunders Mac Lane Curtis T. McMullen Robert MacPherson Gilean McVean Kathleen Madden Ib Madsen Mark Mahowald Andrew Majda David Makinson Benoit Mandelbrot Michelle Manes Yuri Manin Norman Margolus Grigory Margulis Robert J. Marks II Marco Marra David B. Massey Varghese Mathai John N. Mather Mitsuru Matsui Arthur Mattuck Ueli Maurer Margaret Maxfield J. Peter May Vladimir Mazya Lambert Meertens Alexander Merkurjev N. David Mermin Chikako Mese Silvio Micali Haynes Miller Pierre Milman James Milne John Milnor Hermann Minkowski James Mirrlees Maryam Mirzakhani Irina Mitrea Yurii Mitropolskiy Yoichi Miyaoka Alexander Molev Faron Moller Christopher Monroe Richard Montague Deane Montgomery Susan Montgomery Calvin C. Moore Greg Moore John Coleman Moore Manfred Morari Cathleen Synge Morawetz Ernesto Mordecki Boris Mordukhovich Shigefumi Mori Richard I. Morimoto Kiiti Morita David R. Morrison Yiannis N. Moschovakis Jürgen Moser Emmy Murphy Sean Murphy John Myhill N Leopoldo Nachbin M. G. Nadkarni Masayoshi Nagata Daniel K. Nakano Tadashi Nakayama Seema Nanda John Forbes Nash Jr. Frank Natterer Dana S. Nau Anil Nerode Claudia Neuhauser John von Neumann Heidi Jo Newberg Mark Newman Ngô Bảo Châu Oscar Nierstrasz Albert Nijenhuis Louis Nirenberg James R. Norris Pyotr Novikov Sergei Novikov H. Pierre Noyes O Joseph Oesterlé Hee Oh Peter O'Hearn Andrei Okounkov Dianne P. O'Leary Olga Oleinik Peter J. Olver Rosa Orellana Lars Onsager Alexander Oppenheim Øystein Ore Stanley Osher Steven J. Ostro Ross Overbeek Mark Overmars Michael Overton P Larry Page Richard Palais Raymond Paley Jacob Palis Victor Pan Greta Panova Christos Papadimitriou George C. 
Papanicolaou Seymour Papert Raman Parimala Stephen Parke Karen Parshall Mihai Pătrașcu Wolfgang Pauli Jean Pedersen Irena Peeva Kirsi Peltonen Steven Pemberton Roger Penrose Colin Percival Grigori Perelman Asher Peres John Perry (philosopher) Miodrag Petković Linda Petzold Jean Piaget Ragni Piene Josef Pieprzyk Jill Pipher John Platt Rudolf Podgornik Martyn Poliakoff Hugh David Politzer Harriet Pollatsek Lev Pontryagin Florian Pop Vladimir Leonidovich Popov Tom Porter Gopal Prasad Bart Preneel William H. Press Ilya Prigogine Jose Principe Enrique Pujals Geoffrey K. Pullum Hilary Putnam Ilya Piatetski-Shapiro Q Daniel Quillen Frank Quinn R Paul Rabinowitz Tibor Radó M. S. Raghunathan Srinivasa Ramanujan Norman Ramsey Helena Rasiowa Douglas Ravenel Michael C. Reed David Rees Aviv Regev Zinovy Reichstein John H. Reif Howard L. Resnikoff Paulo Ribenboim Ken Ribet John T. Riedl Marc Rieffel Emily Riehl Vincent Rijmen Donald Ringe Dennis Ritchie Igor Rivin Neil Robertson Julia Robinson Tony Robinson Matt Robshaw Phillip Rogaway Murray Rosenblatt Pierre Rosenstiehl Hugo Rossi Markus Rost Alvin E. Roth Paul W. K. Rothemund Linda Preiss Rothschild Samarendra Nath Roy Karl Rubin David Ruelle Mari-Jo P. Ruiz Mary Beth Ruskai S Pardis Sabeti Gerald Sacks Amit Sahai Anupam Saikia Abdus Salam Paul Sally Alexander Samarskii Leonard Sarason Michael Saunders Pierre Schapira Doris Schattschneider Anne Schilling Michael Schlessinger Tamar Schlick Wilfried Schmid Gavin Schmidt Bruce Schneier Richard Schoen Robert Scholtz Jan Arnoldus Schouten Erwin Schrödinger G. Peter Scott Robert Thomas Seeley Jasjeet S. Sekhon Kristian Seip Marjorie Senechal Reinhard Selten Caroline Series C. S. Seshadri Suresh P. Sethi Igor Shafarevich Chehrzad Shakiban Claude Shannon Norman Shapiro Lloyd Shapley Diana Shelstad Nicholas Shepherd-Barron Amin Shokrollahi Chi-Wang Shu Michael Shub Mikhail Shubin Laurent C. Siebenmann Carl Ludwig Siegel Joseph Sifakis Israel Michael Sigal Roman Sikorski Jack Silver Alice Silverberg Rodica Simion Herbert A. Simon Leon Simon Jim Simons Charles Sims Yakov Sinai Alistair Sinclair Isadore Singer Theodore Slaman Ian Sloan Nigel Smart Stanislav Smirnov Hamilton O. Smith Agata Smoktunowicz Robert I. Soare Sergei Sobolev Alan Sokal Eduardo D. Sontag Ralf J. Spatzier Donald C. Spencer Thomas Spencer Emanuel Sperner David Spiegelhalter Daniel Spielman Herbert Spohn Vasudevan Srinivas Bhama Srinivasan J. N. Srivastava Gigliola Staffilani John R. Steel Guy L. Steele Jr. Norman Steenrod Charles M. Stein Robert Steinberg Otto Stern Ian Stewart Clifford Stoll Gary Stormo Erling Størmer Gilbert Strang Steven Strogatz Michael Struwe Catherine Sulem Endre Süli Dennis Sullivan John M. Sullivan John Sulston Nike Sun Michio Suzuki Richard Swan Katia Sycara Stanisław Szarek Tibor Szele Lucien Szpiro Daniel B. Szyld Bolesław Szymański T Sergei Tabachnikov Eitan Tadmor Daina Taimiņa Floris Takens Masamichi Takesaki Desney Tan Yutaka Taniyama Daniel Tătaru John Tate Angus Ellis Taylor Jean Taylor Michael E. Taylor Richard Taylor Sridhar Tayur Bernard Teissier Edward Teller Roger Temam Chuu-Lian Terng Joseph Terwilliger Gerald Teschl Bernhard Thalheim Morwen Thistlethwaite George B. Thomas Rekha R. Thomas Richard Thomas Abigail Thompson John G. Thompson Ulrike Tillmann Wolfgang A. Tomé Craig Tracy Joseph F. Traub Lloyd N. Trefethen François Trèves Satish K. Tripathi Hale Trotter Stanisław Trybuła Boris Tsirelson Laurette Tuckerman U Karen Uhlenbeck Gunther Uhlmann William G. 
Unruh Alasdair Urquhart V Cumrun Vafa Ravi Vakil Leslie Valiant Michel Van den Bergh Bartel Leendert van der Waerden Leon van der Torre Moshe Vardi Serge Vaudenay Helmut Veith T. N. Venkataramana Craig Venter Rineke Verbrugge Sergiy Vilkomir Cédric Villani Vernor Vinge Oleg Viro Monica Vișan Michael Viscardi Karen Vogtmann Dan-Virgil Voiculescu Paul Vojta Dieter Vollhardt Vlad Voroninski W Michelle L. Wachs David A. Wagner Nolan Wallach Brian Wandell Hao Wang G. N. Watson Richard Weber Charles Weibel Steven Weinberg Shmuel Weinberger Alan Weinstein Sherman Weissman Tsachy Weissman Eric W. Weisstein Philip Welch Barry Wellman Raymond O. Wells Jr. Katrin Wendland Elisabeth M. Werner Wendelin Werner Jeff Westbrook Hermann Weyl John Archibald Wheeler Arthur Whitney Jennifer Widom Gio Wiederhold Sylvia Wiegand Eugene Wigner Frank Wilczek Andrew Wiles Lauren Williams Ryan Williams Virginia Vassilevska Williams David J. Wineland Erik Winfree Terry Winograd Jean-Pierre Wintenberger Daniel Wise Edward Witten Joseph A. Wolf Julia Wolf Melanie Wood William Wootters Margaret H. Wright Jang-Mei Wu Jie Wu Joshua Wurman Gisbert Wüstholz X Dianna Xu Jinchao Xu Y Chen-Ning Yang Horng-Tzer Yau Vadim Yefremovich Cem Yıldırım Jean-Christophe Yoccoz James A. Yorke Noriko Yui Z Norman Zabusky Lotfi A. Zadeh George Zames Oscar Zariski Eduard Zehnder Andrei Zelevinsky Efim Zelmanov Robert Zimmer Yaakov Ziv Max August Zorn Steven Zucker See also Mathematics Genealogy Project Six degrees of separation References External links Erdős Number Project list of all people with Erdős number 1 list of all people with Erdős number 2 Distance calculator from MathSciNet Distance calculator from zbMATH Lists of scholars and academics Lists of mathematicians Erdos number
List of people by Erdős number
[ "Technology" ]
10,494
[ "Lists of people in STEM fields", "Lists of mathematicians" ]
14,404,325
https://en.wikipedia.org/wiki/Delco%20Carousel
The Delco Carousel (proper name Carousel IV) was an inertial navigation system (INS) for aircraft developed by Delco Electronics. Before the advent of sophisticated flight management systems, Carousel IV allowed pilots to automate navigation of an aircraft along a series of waypoints that they entered via a control console in the cockpit. Carousel IV consisted of an inertial measurement unit (IMU) as its position reference, a digital computer to compute the navigation solution, and a control panel mounted in an aircraft's cockpit. It was used for long over-water and polar (over the North Pole) aircraft navigation. Many aircraft were equipped with dual or triple Carousels for redundancy. Operation was relatively simple: a pilot or flight engineer entered the starting location and then each individual waypoint as latitude and longitude coordinates. The system used spinning-mass gyroscopes and proof-mass accelerometers to measure movement from the start point. The system repeatedly sampled those sensors and performed the calculations needed to determine the current position relative to the surface of the Earth. The Carousel IV system derives its name from the fact that the inertial reference platform was rotated 360° every 60 seconds as a technique to reduce drift and increase accuracy by countering systematic errors. Low-drift operation was aided by maintaining the gyroscopes and accelerometers at a constant temperature of 60 °C. The elevated temperature was maintained whenever the system was switched on in either the 'Standby', 'Align', 'Navigate' or 'Attitude' mode, as selected on the Control Display Unit (CDU). During the 1982 Falklands War, RAF Avro Vulcans were fitted with Carousels from RAF Vickers VC10s to enable Operation Black Buck. Applications Military: C-5A/B, KC-135 and its derivatives, C-141 Missiles: Thor IRBM, Titan II ICBM, Titan III heavy-lift launch system Spacecraft: Apollo Command Module (IMU only) Commercial: Boeing 747 (early variants), Airbus A300 (early variants), Concorde, McDonnell Douglas DC-10, Vickers VC-10 References Avionics
Delco Carousel
[ "Technology" ]
445
[ "Avionics", "Aircraft instruments" ]
14,404,885
https://en.wikipedia.org/wiki/Grain%20boundary%20strengthening
In materials science, grain-boundary strengthening (or Hall–Petch strengthening) is a method of strengthening materials by changing their average crystallite (grain) size. It is based on the observation that grain boundaries are insurmountable borders for dislocations and that the number of dislocations within a grain has an effect on how stress builds up in the adjacent grain, which will eventually activate dislocation sources and thus enabling deformation in the neighbouring grain as well. By changing grain size, one can influence the number of dislocations piled up at the grain boundary and yield strength. For example, heat treatment after plastic deformation and changing the rate of solidification are ways to alter grain size. Theory In grain-boundary strengthening, the grain boundaries act as pinning points impeding further dislocation propagation. Since the lattice structure of adjacent grains differs in orientation, it requires more energy for a dislocation to change directions and move into the adjacent grain. The grain boundary is also much more disordered than inside the grain, which also prevents the dislocations from moving in a continuous slip plane. Impeding this dislocation movement will hinder the onset of plasticity and hence increase the yield strength of the material. Under an applied stress, existing dislocations and dislocations generated by Frank–Read sources will move through a crystalline lattice until encountering a grain boundary, where the large atomic mismatch between different grains creates a repulsive stress field to oppose continued dislocation motion. As more dislocations propagate to this boundary, dislocation 'pile up' occurs as a cluster of dislocations are unable to move past the boundary. As dislocations generate repulsive stress fields, each successive dislocation will apply a repulsive force to the dislocation incident with the grain boundary. These repulsive forces act as a driving force to reduce the energetic barrier for diffusion across the boundary, such that additional pile up causes dislocation diffusion across the grain boundary, allowing further deformation in the material. Decreasing grain size decreases the amount of possible pile up at the boundary, increasing the amount of applied stress necessary to move a dislocation across a grain boundary. The higher the applied stress needed to move the dislocation, the higher the yield strength. Thus, there is then an inverse relationship between grain size and yield strength, as demonstrated by the Hall–Petch equation. However, when there is a large direction change in the orientation of the two adjacent grains, the dislocation may not necessarily move from one grain to the other but instead create a new source of dislocation in the adjacent grain. The theory remains the same that more grain boundaries create more opposition to dislocation movement and in turn strengthens the material. Obviously, there is a limit to this mode of strengthening, as infinitely strong materials do not exist. Grain sizes can range from about (large grains) to (small grains). Lower than this, the size of dislocations begins to approach the size of the grains. At a grain size of about , only one or two dislocations can fit inside a grain (see Figure 1 above). This scheme prohibits dislocation pile-up and instead results in grain boundary diffusion. The lattice resolves the applied stress by grain boundary sliding, resulting in a decrease in the material's yield strength. 
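As a rough numerical illustration of the inverse grain-size dependence described above, the short sketch below evaluates the Hall–Petch form of the yield strength over a range of average grain diameters. The constants sigma0 and ky are illustrative placeholder values of roughly the right order of magnitude for a mild steel; they are assumptions for the example, not figures taken from this entry.

```python
import math

def hall_petch_yield_strength(d_m: float, sigma0_mpa: float = 70.0, ky_mpa_sqrt_m: float = 0.74) -> float:
    """Hall-Petch estimate of yield strength (MPa) for an average grain diameter d_m in metres.

    sigma0_mpa and ky_mpa_sqrt_m are assumed, order-of-magnitude constants for a mild steel.
    """
    return sigma0_mpa + ky_mpa_sqrt_m / math.sqrt(d_m)

# Yield strength rises as the average grain diameter shrinks from 100 um to 1 um.
for d_um in (100, 10, 1):
    d_m = d_um * 1e-6
    print(f"d = {d_um:>3} um  ->  sigma_y ~ {hall_petch_yield_strength(d_m):6.0f} MPa")
```

The same expression written with a general exponent, sigma_y = sigma0 + k·d^(−x), recovers the more general power-law form quoted later in this entry.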
To understand the mechanism of grain boundary strengthening one must understand the nature of dislocation-dislocation interactions. Dislocations create a stress field around them given (up to angular factors) by σ ≈ Gb/(2πr), where G is the material's shear modulus, b is the Burgers vector, and r is the distance from the dislocation. If the dislocations are in the right alignment with respect to each other, the local stress fields they create will repel each other. This helps dislocation movement along grains and across grain boundaries. Hence, the more dislocations are present in a grain, the greater the stress field felt by a dislocation near a grain boundary: for a pile-up of n dislocations under an applied resolved shear stress τ, the stress at the head of the pile-up is approximately nτ. Interphase boundaries can also contribute to grain boundary strengthening, particularly in composite materials and precipitation-hardened alloys. Coherent IPBs, in particular, can provide additional barriers to dislocation motion, similar to grain boundaries. In contrast, non-coherent IPBs and partially coherent IPBs can act as sources of dislocations, which can lead to localized deformation and affect the mechanical properties of the material. Subgrain strengthening A subgrain is a part of the grain that is only slightly disoriented from other parts of the grain. Current research is being done to see the effect of subgrain strengthening in materials. Depending on the processing of the material, subgrains can form within the grains of the material. For example, when Fe-based material is ball-milled for long periods of time (e.g. 100+ hours), subgrains of 60–90 nm are formed. It has been shown that the higher the density of the subgrains, the higher the yield stress of the material due to the increased subgrain boundary area. The strength of the metal was found to vary reciprocally with the size of the subgrain, which is analogous to the Hall–Petch equation. The subgrain boundary strengthening also has a breakdown point at around a subgrain size of 0.1 μm, below which any further reduction in subgrain size would decrease the yield strength. Types of Strengthening Boundaries Coherent Interphase Boundaries Coherent grain boundaries are those in which the crystal lattice of adjacent grains is continuous across the boundary. In other words, the crystallographic orientation of the grains on either side of the boundary is related by a small rotation or translation. Coherent grain boundaries are typically observed in materials with small grain sizes or in highly ordered materials such as single crystals. Because the crystal lattice is continuous across the boundary, there are no defects or dislocations associated with coherent grain boundaries. As a result, they do not act as barriers to the motion of dislocations and have little effect on the strength of a material. However, they can still affect other properties such as diffusion and grain growth. When solid solutions become supersaturated and precipitation occurs, tiny particles are formed. These particles typically have interphase boundaries that match up with the matrix, despite differences in interatomic spacing between the particle and the matrix. This creates a coherency strain, which causes distortion. Dislocations respond to the stress field of a coherent particle in a way similar to how they interact with solute atoms of different sizes. It is worth noting that the interfacial energy can also influence the kinetics of phase transformations and precipitation processes.
For instance, the energy associated with a strained coherent interface can reach a critical level as the precipitate grows, leading to a transition from a coherent to a disordered (non-coherent) interface. This transition occurs when the energy associated with maintaining the coherency becomes too high, and the system seeks a lower energy configuration. A similar interplay arises when a dispersion of particles is introduced into a matrix. Dislocations pass through small particles and bow between large particles or particles with disordered interphase boundaries. The predominant slip mechanism determines the contribution to strength, which depends on factors such as particle size and volume fraction. Partially-coherent Interphase Boundaries A partially coherent interphase boundary is an intermediate type of IPB that lies between the completely coherent and non-coherent IPBs. In this type of boundary, there is a partial match between the atomic arrangements of the particle and the matrix, but not a perfect match. As a result, coherency strains are partially relieved, but not completely eliminated. The periodic introduction of dislocations along the boundary plays a key role in partially relieving the coherency strains. These dislocations act as periodic defects that accommodate the lattice mismatch between the particle and the matrix. The dislocations can be introduced during the precipitation process or during subsequent annealing treatments. Non-coherent Interphase Boundaries Incoherent grain boundaries are those in which there is a significant mismatch in crystallographic orientation between adjacent grains. This results in a discontinuity in the crystal lattice across the boundary, and the formation of a variety of defects such as dislocations, stacking faults, and grain boundary ledges. The presence of these defects creates a barrier to the motion of dislocations and leads to a strengthening effect. This effect is more pronounced in materials with smaller grain sizes, as there are more grain boundaries to impede dislocation motion. In addition to the barrier effect, incoherent grain boundaries can also act as sources and sinks for dislocations. This can lead to localized plastic deformation and affect the overall mechanical response of a material. When small particles are formed through precipitation from supersaturated solid solutions, their interphase boundaries may not be coherent with the matrix. In such cases, the atomic bonds do not match up across the interface and there is a misfit between the particle and the matrix. This misfit gives rise to a non-coherency strain, which can cause the formation of dislocations at the grain boundary. As a result, the properties of the small particle can be different from those of the matrix. The size at which non-coherent grain boundaries form depends on the lattice misfit and the interfacial energy. Interfacial Energy Understanding the interfacial energy of materials with different types of interphase boundaries (IPBs) provides valuable insights into several aspects of their behavior, including thermodynamic stability, deformation behavior, and phase evolution. Grain Boundary Sliding and Dislocation Transmission Interfacial energy affects the mechanisms of grain boundary sliding and dislocation transmission. Higher interfacial energy promotes greater resistance to grain boundary sliding, as the higher energy barriers inhibit the relative movement of adjacent grains.
Additionally, dislocations that encounter grain boundaries can either transmit across the boundary or be reflected back into the same grain. The interfacial energy influences the likelihood of dislocation transmission, with higher interfacial energy barriers impeding dislocation motion and enhancing grain boundary strengthening. Grain Boundary Orientation High-angle grain boundaries, which have large misorientations between adjacent grains, tend to have higher interfacial energy and are more effective in impeding dislocation motion. In contrast, low-angle grain boundaries with small misorientations and lower interfacial energy may allow for easier dislocation transmission and exhibit weaker grain boundary strengthening effects. Grain Boundary Engineering Grain boundary engineering involves manipulating the grain boundary structure and energy to enhance mechanical properties. By controlling the interfacial energy, it is possible to engineer materials with desirable grain boundary characteristics, such as increased interfacial area, higher grain boundary density, or specific grain boundary types. Alloying Introducing alloying elements into the material can alter the interfacial energy of grain boundaries. Alloying can result in segregation of solute atoms at the grain boundaries, which can modify the atomic arrangements and bonding, and thereby influence the interfacial energy. Surface Treatments and Coatings Applying surface treatments or coatings can modify the interfacial energy of grain boundaries. Surface modification techniques, such as chemical treatments or deposition of thin films, can alter the surface energy and consequently affect the grain boundary energy. Heat Treatments and Annealing Thermal treatments can be employed to modify the interfacial energy of grain boundaries. Annealing at specific temperatures and durations can induce atomic rearrangements, diffusion, and stress relaxation at the grain boundaries, leading to changes in the interfacial energy. Once the interfacial energy is controlled, grain boundaries can be manipulated to enhance their strengthening effects. Severe Plastic Deformation Applying severe plastic deformation techniques, such as equal-channel angular pressing (ECAP) or high-pressure torsion (HPT), can lead to grain refinement and the creation of new grain boundaries with tailored characteristics. These refined grain structures can exhibit a high density of grain boundaries, including high-angle boundaries, which can contribute to enhanced grain boundary strengthening. Thermomechanical Processing Utilizing specific thermomechanical processing routes, such as rolling, forging, or extrusion, can result in the creation of a desired texture and the development of specific grain boundary structures. These processing routes can promote the formation of specific grain boundary types and orientations, leading to improved grain boundary strengthening. Hall–Petch relationship There is an inverse relationship between the yield strength increment Δσ_y and grain size d to some power x: Δσ_y = k·d^(−x), where k is the strengthening coefficient and both k and x are material specific. Assuming a narrow monodisperse grain size distribution in a polycrystalline material, the smaller the grain size, the smaller the repulsion stress felt by a grain boundary dislocation and the higher the applied stress needed to propagate dislocations through the material.
The relation between yield stress and grain size is described mathematically by the Hall–Petch equation: σ_y = σ_0 + k_y·d^(−1/2), where σ_y is the yield stress, σ_0 is a materials constant for the starting stress for dislocation movement (or the resistance of the lattice to dislocation motion), k_y is the strengthening coefficient (a constant specific to each material), and d is the average grain diameter. It is important to note that the H-P relationship is an empirical fit to experimental data, and that the notion that a pileup length of half the grain diameter causes a critical stress for transmission to or generation in an adjacent grain has not been verified by actual observation in the microstructure. Theoretically, a material could be made infinitely strong if the grains are made infinitely small. This is impossible though, because the lower limit of grain size is a single unit cell of the material. Even then, if the grains of a material are the size of a single unit cell, then the material is in fact amorphous, not crystalline, since there is no long-range order, and dislocations cannot be defined in an amorphous material. It has been observed experimentally that the microstructure with the highest yield strength is a grain size of about , because grains smaller than this undergo another yielding mechanism, grain boundary sliding. Producing engineering materials with this ideal grain size is difficult because only thin films can be reliably produced with grains of this size. In materials having a bi-disperse grain size distribution, for example those exhibiting abnormal grain growth, hardening mechanisms do not strictly follow the Hall–Petch relationship and divergent behavior is observed. History In the early 1950s two groundbreaking series of papers were written independently on the relationship between grain boundaries and strength. In 1951, while at the University of Sheffield, E. O. Hall wrote three papers which appeared in volume 64 of the Proceedings of the Physical Society. In his third paper, Hall showed that the length of slip bands or crack lengths corresponds to grain sizes and thus a relationship could be established between the two. Hall concentrated on the yielding properties of mild steels. Based on his experimental work carried out in 1946–1949, N. J. Petch of the University of Leeds, England published a paper in 1953 independently of Hall's. Petch's paper concentrated more on brittle fracture. By measuring the variation in cleavage strength with respect to ferritic grain size at very low temperatures, Petch found a relationship identical to Hall's. Thus this important relationship is named after both Hall and Petch. Reverse or inverse Hall–Petch relation The Hall–Petch relation predicts that as the grain size decreases the yield strength increases. The Hall–Petch relation was experimentally found to be an effective model for materials with grain sizes ranging from 1 millimeter to 1 micrometer. Consequently, it was believed that if average grain size could be decreased even further to the nanometer length scale the yield strength would increase as well. However, experiments on many nanocrystalline materials demonstrated that if the grains reached a small enough size, below the critical grain size, which is typically around , the yield strength would either remain constant or decrease with decreasing grain size. This phenomenon has been termed the reverse or inverse Hall–Petch relation. A number of different mechanisms have been proposed for this relation.
As suggested by Carlton et al., they fall into four categories: (1) dislocation-based, (2) diffusion-based, (3) grain-boundary shearing-based, (4) two-phase-based. There have been several works done to investigate the mechanism behind the inverse Hall–Petch relationship in numerous materials. In Han's work, a series of molecular dynamics simulations were done to investigate the effect of grain size on the mechanical properties of nanocrystalline graphene under uniaxial tensile loading, with random shapes and random orientations of graphene rings. The simulation was run at grain sizes of nm and at room temperature. It was found that in the grain size range of 3.1 nm to 40 nm, an inverse Hall–Petch relationship was observed. This is because when the grain size decreases at the nm scale, there is an increase in the density of grain boundary junctions, which serve as a source of crack growth or weak bonding. However, it was also observed that at grain sizes below 3.1 nm, a pseudo Hall–Petch relationship was observed, which results in an increase in strength. This is due to a decrease in the stress concentration at grain boundary junctions and also due to the stress distribution of 5-7 defects along the grain boundary, where the compressive and tensile stresses are produced by the pentagon and heptagon rings, etc. Chen et al. have done research on the inverse Hall–Petch relations of high-entropy CoNiFeAlxCu1–x alloys. In the work, polycrystalline models of FCC structured CoNiFeAl0.3Cu0.7 with grain sizes ranging from 7.2 nm to 18.8 nm were constructed to perform uniaxial compression using molecular dynamics simulations. All compression simulations were done after setting the periodic boundary conditions across the three orthogonal directions. It was found that when the grain size is below 12.1 nm the inverse Hall–Petch relation was observed. This is because as the grain size decreases, partial dislocations become less prominent, and so does deformation twinning. Instead, it was observed that there is a change in the grain orientation and migration of grain boundaries, which cause the growth and shrinkage of neighboring grains. These are the mechanisms behind the inverse Hall–Petch relation. Sheinerman et al. also studied the inverse Hall–Petch relation for nanocrystalline ceramics. It was found that the critical grain size for the transition from direct Hall–Petch to inverse Hall–Petch fundamentally depends on the activation energy of grain boundary sliding. This is because in direct Hall–Petch the dominant deformation mechanism is intragrain dislocation motion while in inverse Hall–Petch the dominant mechanism is grain boundary sliding. It was concluded that by plotting both the volume fraction of grain boundary sliding and the volume fraction of intragrain dislocation motion as a function of grain size, the critical grain size could be found where the two curves cross. Other explanations that have been proposed to rationalize the apparent softening of metals with nanosized grains include poor sample quality and the suppression of dislocation pileups. The pileup of dislocations at grain boundaries is a hallmark mechanism of the Hall–Petch relationship. Once grain sizes drop below the equilibrium distance between dislocations, though, this relationship should no longer be valid. Nevertheless, it is not entirely clear what exactly the dependency of yield stress should be on grain sizes below this point.
Grain refinement Grain refinement, also known as inoculation, is the set of techniques used to implement grain boundary strengthening in metallurgy. The specific techniques and corresponding mechanisms will vary based on what materials are being considered. One method for controlling grain size in aluminum alloys is by introducing particles to serve as nucleants, such as Al–5%Ti. Grains will grow via heterogeneous nucleation; that is, for a given degree of undercooling beneath the melting temperature, aluminum particles in the melt will nucleate on the surface of the added particles. Grains will grow in the form of dendrites growing radially away from the surface of the nucleant. Solute particles can then be added (called grain refiners) which limit the growth of dendrites, leading to grain refinement. Al-Ti-B alloys are the most common grain refiner for Al alloys; however, novel refiners such as Al3Sc have been suggested. One common technique is to induce a very small fraction of the melt to solidify at a much higher temperature than the rest; this will generate seed crystals that act as a template when the rest of the material falls to its (lower) melting temperature and begins to solidify. Since a huge number of minuscule seed crystals are present, a nearly equal number of crystallites result, and the size of any one grain is limited. See also Strengthening mechanisms of materials References Bibliography External links Grain boundary strengthening in alumina by rare earth impurities Mechanism of grain boundary strengthening of steels An open source Matlab toolbox for analysis of slip transfer through grain boundaries Strengthening mechanisms of materials Metallurgy
Grain boundary strengthening
[ "Chemistry", "Materials_science", "Engineering" ]
4,400
[ "Strengthening mechanisms of materials", "Metallurgy", "Materials science", "nan" ]
14,405,160
https://en.wikipedia.org/wiki/Synaptic%20weight
In neuroscience and computer science, synaptic weight refers to the strength or amplitude of a connection between two nodes, corresponding in biology to the amount of influence the firing of one neuron has on another. The term is typically used in artificial and biological neural network research. Computation In a computational neural network, a vector or set of inputs x and outputs y, representing pre- and post-synaptic neurons respectively, are interconnected with synaptic weights represented by the matrix w, where for a linear neuron y = w·x; the rows of the synaptic matrix represent the vector of synaptic weights for the output indexed by i, so that y_i = Σ_j w_ij·x_j. The synaptic weight is changed by using a learning rule, the most basic of which is Hebb's rule, which is usually stated in biological terms as Neurons that fire together, wire together. Computationally, this means that if a large signal from one of the input neurons results in a large signal from one of the output neurons, then the synaptic weight between those two neurons will increase. The rule is unstable, however, and is typically modified using such variations as Oja's rule, radial basis functions or the backpropagation algorithm (a minimal numerical sketch of such an update is given at the end of this entry). Biology For biological networks, the effect of synaptic weights is not as simple as for linear neurons or Hebbian learning. However, biophysical models such as BCM theory have seen some success in mathematically describing these networks. In the mammalian central nervous system, signal transmission is carried out by interconnected networks of nerve cells, or neurons. For the basic pyramidal neuron, the input signal is carried by the axon, which releases neurotransmitter chemicals into the synapse, where they are picked up by the dendrites of the next neuron, which can then generate an action potential that is analogous to the output signal in the computational case. The synaptic weight in this process is determined by several variable factors: How well the input signal propagates through the axon (see myelination), The amount of neurotransmitter released into the synapse and the amount that can be absorbed in the following cell (determined by the number of AMPA and NMDA receptors on the cell membrane and the amount of intracellular calcium and other ions), The number of such connections made by the axon to the dendrites, How well the signal propagates and integrates in the postsynaptic cell. The changes in synaptic weight that occur are known as synaptic plasticity, and the process behind long-term changes (long-term potentiation and depression) is still poorly understood. Hebb's learning rule was originally applied to biological systems, but has had to undergo many modifications as a number of theoretical and experimental problems came to light. See also Neural network Synaptic plasticity Hebbian theory References Artificial neural networks Neural circuitry Neuroplasticity
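As a minimal numerical sketch of the update rules discussed in the Computation section above, the snippet below applies a plain Hebbian step and, for comparison, Oja's normalized variant to the weight vector of a single linear output neuron. The learning rate and the input values are arbitrary illustrative numbers, not values from this entry.

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.normal(size=4)        # presynaptic activities (illustrative values)
w = 0.1 * rng.normal(size=4)  # synaptic weights for one output neuron
eta = 0.01                    # learning rate (arbitrary)

y = w @ x                     # linear neuron: output is the weighted sum of its inputs

# Plain Hebbian update: a weight grows whenever its input and the output are active together.
w_hebb = w + eta * y * x

# Oja's rule adds a decay term that keeps the weight vector from growing without bound.
w_oja = w + eta * y * (x - y * w)

print("Hebb:", np.round(w_hebb, 4))
print("Oja :", np.round(w_oja, 4))
```

Repeating the Oja step over many input samples drives the weight vector toward the first principal component of the input, which is one standard way the instability of the plain Hebbian rule is tamed.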
Synaptic weight
[ "Technology" ]
596
[ "Computing stubs", "Computer science", "Computer science stubs" ]
14,407,606
https://en.wikipedia.org/wiki/Erwin%20Madelung
Erwin Madelung (18 May 1881 – 1 August 1972) was a German physicist. He was born in 1881 in Bonn. His father was the surgeon Otto Wilhelm Madelung. He earned a doctorate in 1905 from the University of Göttingen, specializing in crystal structure, and eventually became a professor. It was during this time that he developed the Madelung constant, which characterizes the net electrostatic effects of all ions in a crystal lattice, and is used to determine the energy of one ion. In 1921 he succeeded Max Born as the Chair of Theoretical Physics at the Goethe University Frankfurt, which he held until his retirement in 1949. He specialized in atomic physics and quantum mechanics, and it was during this time that he developed the Madelung equations, an alternative form of the Schrödinger equation. He is also known for the Madelung rule, which states that atomic orbitals are filled in order of increasing values of n + ℓ, the sum of the principal and azimuthal quantum numbers (and, for equal n + ℓ, in order of increasing n). Publications Magnetisierung durch schnell verlaufende Stromvorgänge mit Rücksicht auf Marconis Wellendetektor. Göttingen, Univ., Phil. Fak., Diss., 1905. Die mathematischen Hilfsmittel des Physikers, Springer Verlag, Berlin 1922. subsequent editions: 1925, 1936, 1950, 1953, 1957, 1964. References External links Portrait drawing at Frankfurt University 1881 births 1972 deaths 20th-century German physicists University of Göttingen alumni Burials at Frankfurt Main Cemetery People involved with the periodic table
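As a side note on the Madelung constant mentioned above, the sketch below estimates its value for the rock-salt (NaCl) structure by a direct lattice sum with Evjen-style fractional weights on the boundary ions. This is only a small numerical illustration; the cube half-width n is an arbitrary choice, and the summation scheme is a later convergence trick rather than a method from Madelung's own publications.

```python
import math

def madelung_nacl(n: int = 8) -> float:
    """Estimate the NaCl Madelung constant from a (2n+1)^3 cube of alternating point charges.

    Ions on the cube surface get Evjen fractional weights (1/2 face, 1/4 edge, 1/8 corner)
    so that the truncated sum stays nearly charge-neutral and converges.
    """
    total = 0.0
    for i in range(-n, n + 1):
        for j in range(-n, n + 1):
            for k in range(-n, n + 1):
                if i == 0 and j == 0 and k == 0:
                    continue
                weight = 1.0
                for c in (i, j, k):
                    if abs(c) == n:
                        weight *= 0.5
                sign = -1.0 if (i + j + k) % 2 else 1.0  # alternating +/- lattice charges
                total += sign * weight / math.sqrt(i * i + j * j + k * k)
    return -total  # reported as a positive number for the attractive NaCl lattice

print(madelung_nacl(8))  # approaches roughly 1.7476 as n grows
```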
Erwin Madelung
[ "Chemistry" ]
311
[ "Periodic table", "People involved with the periodic table" ]
14,407,845
https://en.wikipedia.org/wiki/Sextuple%20bond
A sextuple bond is a type of covalent bond involving 12 bonding electrons and in which the bond order is 6. The only known molecules with true sextuple bonds are the diatomic dimolybdenum (Mo2) and ditungsten (W2), which exist in the gaseous phase and have boiling points of and respectively. Theoretical analysis Roos et al. argue that no stable element can form bonds of higher order than a sextuple bond, because the latter corresponds to a hybrid of the s orbital and all five d orbitals, and f orbitals contract too close to the nucleus to bond in the lanthanides. Indeed, quantum mechanical calculations have revealed that the dimolybdenum bond is formed by a combination of two σ bonds, two π bonds and two δ bonds. (Also, the σ and π bonds contribute much more significantly to the sextuple bond than the δ bonds.) Although no φ bonding has been reported for transition metal dimers, it is predicted that if any sextuply-bonded actinides were to exist, at least one of the bonds would likely be a φ bond as in quintuply-bonded diuranium and dineptunium. No sextuple bond has been observed in lanthanides or actinides. For the majority of elements, even the possibility of a sextuple bond is foreclosed, because the d electrons ferromagnetically couple, instead of bonding. The only known exceptions are dimolybdenum and ditungsten. Quantum-mechanical treatment The formal bond order (FBO) of a molecule is half the number of bonding electrons surplus to antibonding electrons; for a typical molecule, it attains exclusively integer values. A full quantum treatment requires a more nuanced picture, in which electrons may exist in a superposition, contributing fractionally to both bonding and antibonding orbitals. In a formal sextuple bond, there would be six different electron pairs; an effective sextuple bond would then have all six contributing almost entirely to bonding orbitals. In Roos et al.'s calculations, the effective bond order (EBO) could be determined by a formula in which η_b,i is the proportion of formal bonding orbital occupation for an electron pair i, η_ab,i is the proportion of the formal antibonding orbital occupation, and c is a correction factor accounting for deviations from equilibrium geometry. Several metal-metal bonds' EBOs are given in the table at right, compared to their formal bond orders. Dimolybdenum and ditungsten are the only molecules with effective bond orders above 5, with a quintuple bond and a partially formed sixth covalent bond. Dichromium, while formally described as having a sextuple bond, is best described as a pair of chromium atoms with all electron spins exchange-coupled to each other. While diuranium is also formally described as having a sextuple bond, relativistic quantum mechanical calculations have determined it to be a quadruple bond with four electrons ferromagnetically coupled to each other rather than in two formal bonds. Previous calculations on diuranium did not treat the electronic molecular Hamiltonian relativistically and produced higher bond orders of 4.2 with two ferromagnetically coupled electrons. Known instances: dimolybdenum and ditungsten Laser evaporation of a molybdenum sheet at low temperatures (7 K) produces gaseous dimolybdenum (Mo2). The resulting molecules can then be imaged with, for instance, near-infrared spectroscopy or UV spectroscopy. Both ditungsten and dimolybdenum have very short bond lengths compared to neighboring metal dimers. For example, sextuply-bonded dimolybdenum has an equilibrium bond length of 1.93 Å.
This equilibrium internuclear distance is significantly lower than in the dimer of any neighboring 4d transition metal, and suggestive of higher bond orders. However, the bond dissociation energies of ditungsten and dimolybdenum are rather low, because the short internuclear distance introduces geometric strain. One empirical technique to determine bond order is spectroscopic examination of bond force constants. Linus Pauling investigated the relationships between bonding atoms and developed a formula that predicts that bond order is roughly proportional to the force constant; that is, n ≈ k/k1, where n is the bond order, k is the force constant of the interatomic interaction and k1 is the force constant of a single bond between the atoms. The table at right shows some select force constants for metal-metal dimers compared to their EBOs; consistent with a sextuple bond, molybdenum's summed force constant is substantially more than quintuple the single-bond force constant. Like dichromium, dimolybdenum and ditungsten are expected to exhibit a 1Σg+ singlet ground state. However, in tungsten, this ground state arises from a hybrid of either two 5D0 ground states or two 7S3 excited states. Only the latter corresponds to the formation of a stable, sextuply-bonded ditungsten dimer. Ligand effects Although sextuple bonding in homodimers is rare, it remains a possibility in larger molecules. Aromatics Theoretical computations suggest that bent dimetallocenes have a higher bond order than their linear counterparts. For this reason, the Schaefer lab has investigated dimetallocenes for natural sextuple bonds. However, such compounds tend to exhibit Jahn-Teller distortion, rather than a true sextuple bond. For example, dirhenocene is bent. Calculating its frontier molecular orbitals suggests the existence of relatively stable singlet and triplet states, with a sextuple bond in the singlet state. But that state is the excited one; the triplet ground state should exhibit a formal quintuple bond. Similarly, for the dibenzene complexes Cr2(C6H6)2, Mo2(C6H6)2, and W2(C6H6)2, molecular bonding orbitals for the triplet states with symmetries D6h and D6d indicate the possibility of an intermetallic sextuple bond. Quantum chemistry calculations reveal, however, that the corresponding D2h singlet geometry is more stable than the D6h triplet state by , depending on the central metal. Oxo ligands Both quantum mechanical calculations and photoelectron spectroscopy of the tungsten oxide clusters W2On (n = 1-6) indicate that increased oxidation state reduces the bond order in ditungsten. At first, the weak δ bonds break to yield a quadruply-bonded W2O; further oxidation generates the ditungsten complex W2O6 with two bridging oxo ligands and no direct W-W bonds. References Further reading Chemical bonding
Sextuple bond
[ "Physics", "Chemistry", "Materials_science" ]
1,437
[ "Chemical bonding", "Condensed matter physics", "nan" ]
14,407,895
https://en.wikipedia.org/wiki/National%20Health%20and%20Nutrition%20Examination%20Survey
The National Health and Nutrition Examination Survey (NHANES) is a survey research program conducted by the National Center for Health Statistics (NCHS) to assess the health and nutritional status of adults and children in the United States, and to track changes over time. The survey combines interviews, physical examinations and laboratory tests. The NHANES interview includes demographic, socioeconomic, dietary, and health-related questions. The examination component consists of medical, dental, and physiological measurements, as well as laboratory tests administered by medical personnel. The National Health Survey Act was passed in 1956. It provided legislative authorization for the collection of current statistical data on the amount, distribution, and effects of illness and disability in the United States. The first three national health examination surveys were conducted in the 1960s: 1960-62—National Health Examination Survey I (NHES I); 1963-65—National Health Examination Survey II (NHES II); and 1966-70—National Health Examination Survey III (NHES III). The first NHANES was conducted in 1971, and in 1999 the surveys became an annual event; the first report on the topic was published in 2001. NHANES findings are used to determine the prevalence of major diseases and risk factors for diseases. Information is used to assess nutritional status and its association with health promotion and disease prevention. NHANES findings are also the basis for national standards for such measurements as height, weight, and blood pressure. NHANES data are used in epidemiological studies and health sciences research (including biomarkers of aging), which help develop sound public health policy, direct and design health programs and services, expand health knowledge, and extend healthspan and lifespan. Follow-up studies using NHANES data were made possible by creating linked mortality files and files based on Medicare and Medicaid data. See also National Archive of Computerized Data on Aging References External links Official website page for NHANES 1999-2000 DSDR page for NHANES 2001-2002 DSDR page for NHANES 2003-2004 DSDR page for NHANES 2005-2006 DSDR page for NHANES 2007-2008 Validity of U.S. Nutritional Surveillance: National Health and Nutrition Examination Survey Caloric Energy Intake Data, 1971–2010 Centers for Disease Control and Prevention Gerontology Health surveys
National Health and Nutrition Examination Survey
[ "Biology" ]
470
[ "Gerontology" ]
14,408,410
https://en.wikipedia.org/wiki/Floral%20Genome%20Project
The Floral Genome Project is a collaborative research cooperation primarily between Penn State University, University of Florida, and Cornell University. The initial funding came from a grant of $7.4 million from the National Science Foundation. The Floral Genome Project was initiated to bridge the genomic gap between the most broadly studied plant model systems. According to the website, the following are the aims of the project: External links Official Website Genome projects Botany University of Florida Cornell University
Floral Genome Project
[ "Biology" ]
91
[ "Plants", "Genome projects", "Botany" ]
14,408,479
https://en.wikipedia.org/wiki/Biological%20neuron%20model
Biological neuron models, also known as spiking neuron models, are mathematical descriptions of the conduction of electrical signals in neurons. Neurons (or nerve cells) are electrically excitable cells within the nervous system, able to fire electric signals, called action potentials, across a neural network. These mathematical models describe the role of the biophysical and geometrical characteristics of neurons on the conduction of electrical activity. Central to these models is the description of how the membrane potential (that is, the difference in electric potential between the interior and the exterior of a biological cell) across the cell membrane changes over time. In an experimental setting, stimulating neurons with an electrical current generates an action potential (or spike), that propagates down the neuron's axon. This axon can branch out and connect to a large number of downstream neurons at sites called synapses. At these synapses, the spike can cause the release of neurotransmitters, which in turn can change the voltage potential of downstream neurons. This change can potentially lead to even more spikes in those downstream neurons, thus passing down the signal. As many as 95% of neurons in the neocortex, the outermost layer of the mammalian brain, consist of excitatory pyramidal neurons, and each pyramidal neuron receives tens of thousands of inputs from other neurons. Thus, spiking neurons are a major information processing unit of the nervous system. One such example of a spiking neuron model may be a highly detailed mathematical model that includes spatial morphology. Another may be a conductance-based neuron model that views neurons as points and describes the membrane voltage dynamics as a function of trans-membrane currents. A mathematically simpler "integrate-and-fire" model significantly simplifies the description of ion channel and membrane potential dynamics (initially studied by Lapique in 1907). Biological background, classification, and aims of neuron models Non-spiking cells, spiking cells, and their measurement Not all the cells of the nervous system produce the type of spike that defines the scope of the spiking neuron models. For example, cochlear hair cells, retinal receptor cells, and retinal bipolar cells do not spike. Furthermore, many cells in the nervous system are not classified as neurons but instead are classified as glia. Neuronal activity can be measured with different experimental techniques, such as the "Whole cell" measurement technique, which captures the spiking activity of a single neuron and produces full amplitude action potentials. With extracellular measurement techniques, one or more electrodes are placed in the extracellular space. Spikes, often from several spiking sources, depending on the size of the electrode and its proximity to the sources, can be identified with signal processing techniques. Extracellular measurement has several advantages: It is easier to obtain experimentally; It is robust and lasts for a longer time; It can reflect the dominant effect, especially when conducted in an anatomical region with many similar cells. Overview of neuron models Neuron models can be divided into two categories according to the physical units of the interface of the model. 
Each category could be further divided according to the abstraction/detail level: Electrical input–output membrane voltage models – These models produce a prediction for membrane output voltage as a function of electrical stimulation given as current or voltage input. The various models in this category differ in the exact functional relationship between the input current and the output voltage and in the level of detail. Some models in this category predict only the moment of occurrence of the output spike (also known as "action potential"); other models are more detailed and account for sub-cellular processes. The models in this category can be either deterministic or probabilistic. Natural stimulus or pharmacological input neuron models – The models in this category connect the input stimulus, which can be either pharmacological or natural, to the probability of a spike event. The input stage of these models is not electrical but rather has either pharmacological (chemical) concentration units, or physical units that characterize an external stimulus such as light, sound, or other forms of physical pressure. Furthermore, the output stage represents the probability of a spike event and not an electrical voltage. Although it is not unusual in science and engineering to have several descriptive models for different abstraction/detail levels, the number of different, sometimes contradicting, biological neuron models is exceptionally high. This situation is partly the result of the many different experimental settings, and the difficulty to separate the intrinsic properties of a single neuron from measurement effects and interactions of many cells (network effects). Aims of neuron models Ultimately, biological neuron models aim to explain the mechanisms underlying the operation of the nervous system. However, several approaches can be distinguished, from more realistic models (e.g., mechanistic models) to more pragmatic models (e.g., phenomenological models). Modeling helps to analyze experimental data and address questions. Models are also important in the context of restoring lost brain functionality through neuroprosthetic devices. Electrical input–output membrane voltage models The models in this category describe the relationship between neuronal membrane currents at the input stage and membrane voltage at the output stage. This category includes (generalized) integrate-and-fire models and biophysical models inspired by the work of Hodgkin–Huxley in the early 1950s using an experimental setup that punctured the cell membrane and allowed to force a specific membrane voltage/current. Most modern electrical neural interfaces apply extra-cellular electrical stimulation to avoid membrane puncturing, which can lead to cell death and tissue damage. Hence, it is not clear to what extent the electrical neuron models hold for extra-cellular stimulation (see e.g.). Hodgkin–Huxley The Hodgkin–Huxley model (H&H model) is a model of the relationship between the flow of ionic currents across the neuronal cell membrane and the membrane voltage of the cell. It consists of a set of nonlinear differential equations describing the behavior of ion channels that permeate the cell membrane of the squid giant axon. Hodgkin and Huxley were awarded the 1963 Nobel Prize in Physiology or Medicine for this work. 
It is important to note the voltage-current relationship, with multiple voltage-dependent currents charging the cell membrane of capacity C_m: C_m dV(t)/dt = −Σ_i I_i(t, V). The above equation is the time derivative of the law of capacitance, Q = CV, where the change of the total charge must be explained as the sum over the currents. Each current is given by I(t, V) = g(t, V)·(V − V_eq), where g(t, V) is the conductance, or inverse resistance, which can be expanded in terms of its maximal conductance ḡ and the activation and inactivation fractions m and h, respectively, that determine how many ions can flow through available membrane channels. This expansion is given by g(t, V) = ḡ·m(t, V)^p·h(t, V)^q, and our fractions follow the first-order kinetics dm(t, V)/dt = [m_∞(V) − m(t, V)]/τ_m(V), with similar dynamics for h, where we can use either τ and m_∞ or α and β to define our gate fractions. The Hodgkin–Huxley model may be extended to include additional ionic currents. Typically, these include inward Ca2+ and Na+ input currents, as well as several varieties of K+ outward currents, including a "leak" current. The result can be at the small end of 20 parameters which one must estimate or measure for an accurate model. In a model of a complex system of neurons, numerical integration of the equations is computationally expensive. Careful simplifications of the Hodgkin–Huxley model are therefore needed. The model can be reduced to two dimensions thanks to the dynamic relations which can be established between the gating variables. It is also possible to extend it to take into account the evolution of the concentrations (considered fixed in the original model). Perfect Integrate-and-fire One of the earliest models of a neuron is the perfect integrate-and-fire model (also called non-leaky integrate-and-fire), first investigated in 1907 by Louis Lapicque. A neuron is represented by its membrane voltage V_m, which evolves in time during stimulation with an input current I(t) according to I(t) = C_m dV_m(t)/dt, which is just the time derivative of the law of capacitance, Q = CV. When an input current is applied, the membrane voltage increases with time until it reaches a constant threshold V_th, at which point a delta function spike occurs and the voltage is reset to its resting potential, after which the model continues to run. The firing frequency of the model thus increases linearly without bound as input current increases. The model can be made more accurate by introducing a refractory period t_ref that limits the firing frequency of a neuron by preventing it from firing during that period. For constant input I the threshold voltage is reached after an integration time t_int = C_m·V_th/I after starting from zero. After a reset, the refractory period introduces a dead time so that the total time until the next firing is t_ref + t_int. The firing frequency is the inverse of the total inter-spike interval (including dead time). The firing frequency as a function of a constant input current is therefore f(I) = I/(C_m·V_th + t_ref·I). A shortcoming of this model is that it describes neither adaptation nor leakage. If the model receives a below-threshold short current pulse at some time, it will retain that voltage boost forever, until another input later makes it fire. This characteristic is not in line with observed neuronal behavior. The following extensions make the integrate-and-fire model more plausible from a biological point of view. Leaky integrate-and-fire The leaky integrate-and-fire model, which can be traced back to Louis Lapicque, contains a "leak" term in the membrane potential equation that reflects the diffusion of ions through the membrane, unlike the non-leaky integrate-and-fire model.
The model equation looks like C_m dV_m(t)/dt = I(t) − V_m(t)/R_m, where V_m is the voltage across the cell membrane and R_m is the membrane resistance. (The non-leaky integrate-and-fire model is retrieved in the limit of R_m to infinity, i.e. if the membrane is a perfect insulator). The model equation is valid for arbitrary time-dependent input until a threshold V_th is reached; thereafter the membrane potential is reset. For constant input, the minimum input to reach the threshold is I_th = V_th/R_m. Assuming a reset to zero, the firing frequency thus looks like f(I) = [t_ref + R_m·C_m·ln(R_m·I/(R_m·I − V_th))]^(−1) for I > I_th (and zero otherwise), which converges for large input currents to the previous leak-free model with the refractory period. The model can also be used for inhibitory neurons. The most significant disadvantage of this model is that it does not contain neuronal adaptation, so that it cannot describe an experimentally measured spike train in response to constant input current. This disadvantage is removed in generalized integrate-and-fire models that also contain one or several adaptation-variables and are able to predict spike times of cortical neurons under current injection to a high degree of accuracy. Adaptive integrate-and-fire Neuronal adaptation refers to the fact that even in the presence of a constant current injection into the soma, the intervals between output spikes increase. An adaptive integrate-and-fire neuron model combines the leaky integration of voltage with one or several adaptation variables w_k (see Chapter 6.1. in the textbook Neuronal Dynamics): τ_m dV_m(t)/dt = −[V_m(t) − E_m] + R_m·I(t) − R_m·Σ_k w_k(t), τ_k dw_k(t)/dt = a_k·[V_m(t) − E_m] − w_k(t) + b_k·τ_k·Σ_f δ(t − t^f), where τ_m is the membrane time constant, w_k is the adaptation current number, with index k, τ_k is the time constant of adaptation current w_k, E_m is the resting potential and t^f is the firing time of the neuron, and the Greek delta denotes the Dirac delta function. Whenever the voltage reaches the firing threshold the voltage is reset to a value V_r below the firing threshold. The reset value V_r is one of the important parameters of the model. The simplest model of adaptation has only a single adaptation variable and the sum over k is removed. Integrate-and-fire neurons with one or several adaptation variables can account for a variety of neuronal firing patterns in response to constant stimulation, including adaptation, bursting, and initial bursting. Moreover, adaptive integrate-and-fire neurons with several adaptation variables are able to predict spike times of cortical neurons under time-dependent current injection into the soma. Fractional-order leaky integrate-and-fire Recent advances in computational and theoretical fractional calculus lead to a new form of model called Fractional-order leaky integrate-and-fire. An advantage of this model is that it can capture adaptation effects with a single variable. The model has the following form: C_m d^αV_m(t)/dt^α = I(t) − V_m(t)/R_m, where d^α/dt^α denotes a fractional time derivative of order α (0 < α ≤ 1). Once the voltage hits the threshold it is reset. Fractional integration has been used to account for neuronal adaptation in experimental data. 'Exponential integrate-and-fire' and 'adaptive exponential integrate-and-fire' In the exponential integrate-and-fire model, spike generation is exponential, following the equation: τ_m dV/dt = E_m − V + Δ_T·exp((V − V_T)/Δ_T) + R_m·I(t), where V is the membrane potential, V_T is the intrinsic membrane potential threshold, τ_m is the membrane time constant, E_m is the resting potential, and Δ_T is the sharpness of action potential initiation, usually around 1 mV for cortical pyramidal neurons. Once the membrane potential crosses V_T, it diverges to infinity in finite time. In numerical simulation the integration is stopped if the membrane potential hits an arbitrary threshold V_peak (much larger than V_T) at which the membrane potential is reset to a value V_r.
The voltage reset value is one of the important parameters of the model. Importantly, the right-hand side of the above equation contains a nonlinearity that can be directly extracted from experimental data. In this sense the exponential nonlinearity is strongly supported by experimental evidence. In the adaptive exponential integrate-and-fire neuron the above exponential nonlinearity of the voltage equation is combined with an adaptation variable w where denotes the adaptation current with time scale . Important model parameters are the voltage reset value , the intrinsic threshold , the time constants and as well as the coupling parameters and . The adaptive exponential integrate-and-fire model inherits the experimentally derived voltage nonlinearity of the exponential integrate-and-fire model. But going beyond this model, it can also account for a variety of neuronal firing patterns in response to constant stimulation, including adaptation, bursting, and initial bursting. However, since the adaptation is in the form of a current, aberrant hyperpolarization may appear. This problem was solved by expressing it as a conductance. Adaptive Threshold Neuron Model In this model, a time-dependent function is added to the fixed threshold after every spike, causing an adaptation of the threshold. The threshold potential gradually returns to its steady-state value depending on the threshold adaptation time constant. This is one of the simpler techniques to achieve spike frequency adaptation. The expression for the adaptive threshold is given by: where is defined by: When the membrane potential reaches the threshold, it is reset. A simpler version of this, with a single time constant in the threshold decay combined with an LIF neuron, has been used to build LSTM-like recurrent spiking neural networks that achieve accuracy close to that of ANNs on a few spatio-temporal tasks. Double Exponential Adaptive Threshold (DEXAT) The DEXAT neuron model is a flavor of adaptive neuron model in which the threshold voltage decays with a double exponential having two time constants. Double exponential decay is governed by a fast initial decay and then a slower decay over a longer period of time. This neuron, used in SNNs trained through surrogate gradients, creates an adaptive learning rate, yielding higher accuracy and faster convergence as well as flexible long short-term memory compared to existing counterparts in the literature. The membrane potential dynamics are described through equations and the threshold adaptation rule is: The dynamics of and are given by , , where and . Further, a multi-time-scale adaptive threshold neuron model showing more complex dynamics has also been described. Stochastic models of membrane voltage and spike timing The models in this category are generalized integrate-and-fire models that include a certain level of stochasticity. Cortical neurons in experiments are found to respond reliably to time-dependent input, albeit with a small degree of variation between one trial and the next if the same stimulus is repeated. Stochasticity in neurons has two important sources. First, even in a very controlled experiment where input current is injected directly into the soma, ion channels open and close stochastically and this channel noise leads to a small amount of variability in the exact value of the membrane potential and the exact timing of output spikes. Second, for a neuron embedded in a cortical network, it is hard to control the exact input because most inputs come from unobserved neurons somewhere else in the brain.
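Before turning to the stochastic models below, here is a minimal sketch of the adaptive-threshold mechanism described above, with a single threshold time constant; the leaky-integrator parameters and the size of the threshold increment are assumptions chosen only to make the spike-frequency adaptation visible.

import numpy as np

def adaptive_threshold_lif(I, tau_m=0.01, R=1e7, theta0=0.015,
                           d_theta=0.005, tau_theta=0.1,
                           dt=1e-4, t_max=1.0):
    """Leaky integrate-and-fire with a spike-triggered adaptive threshold.
    Each spike raises the threshold by d_theta; between spikes the
    threshold relaxes back to theta0 with time constant tau_theta."""
    V, theta = 0.0, theta0
    spike_times = []
    for step in range(int(t_max / dt)):
        t = step * dt
        V += dt * (-V + R * I) / tau_m
        theta += dt * (theta0 - theta) / tau_theta
        if V >= theta:
            spike_times.append(t)
            V = 0.0            # voltage reset
            theta += d_theta   # threshold adaptation
    return np.array(spike_times)

spikes = adaptive_threshold_lif(2.5e-9)
isi = np.diff(spikes)
print("first ISIs (s):", isi[:5])

Because each spike raises the threshold and the threshold relaxes back only slowly, the inter-spike intervals lengthen over time, which is the signature of spike-frequency adaptation.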
Stochasticity has been introduced into spiking neuron models in two fundamentally different forms: either (i) a noisy input current is added to the differential equation of the neuron model; or (ii) the process of spike generation is noisy. In both cases, the mathematical theory can be developed for continuous time, which is then, if desired for the use in computer simulations, transformed into a discrete-time model. The relation of noise in neuron models to the variability of spike trains and neural codes is discussed in Neural Coding and in Chapter 7 of the textbook Neuronal Dynamics. Noisy input model (diffusive noise) A neuron embedded in a network receives spike input from other neurons. Since the spike arrival times are not controlled by an experimentalist they can be considered as stochastic. Thus a (potentially nonlinear) integrate-and-fire model with nonlinearity f(v) receives two inputs: an input controlled by the experimentalists and a noisy input current that describes the uncontrolled background input. Stein's model is the special case of a leaky integrate-and-fire neuron and a stationary white noise current with mean zero and unit variance. In the subthreshold regime, these assumptions yield the equation of the Ornstein–Uhlenbeck process However, in contrast to the standard Ornstein–Uhlenbeck process, the membrane voltage is reset whenever V hits the firing threshold . Calculating the interval distribution of the Ornstein–Uhlenbeck model for constant input with threshold leads to a first-passage time problem. Stein's neuron model and variants thereof have been used to fit interspike interval distributions of spike trains from real neurons under constant input current. In the mathematical literature, the above equation of the Ornstein–Uhlenbeck process is written in the form where is the amplitude of the noise input and dW are increments of a Wiener process. For discrete-time implementations with time step dt the voltage updates are where y is drawn from a Gaussian distribution with zero mean unit variance. The voltage is reset when it hits the firing threshold . The noisy input model can also be used in generalized integrate-and-fire models. For example, the exponential integrate-and-fire model with noisy input reads For constant deterministic input it is possible to calculate the mean firing rate as a function of . This is important because the frequency-current relation (f-I-curve) is often used by experimentalists to characterize a neuron. The leaky integrate-and-fire with noisy input has been widely used in the analysis of networks of spiking neurons. Noisy input is also called 'diffusive noise' because it leads to a diffusion of the subthreshold membrane potential around the noise-free trajectory (Johannesma, The theory of spiking neurons with noisy input is reviewed in Chapter 8.2 of the textbook Neuronal Dynamics. Noisy output model (escape noise) In deterministic integrate-and-fire models, a spike is generated if the membrane potential hits the threshold . In noisy output models, the strict threshold is replaced by a noisy one as follows. At each moment in time t, a spike is generated stochastically with instantaneous stochastic intensity or 'escape rate' that depends on the momentary difference between the membrane voltage and the threshold . 
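Returning to the noisy-input (diffusive noise) formulation above, the discrete-time voltage update can be implemented directly with the Euler-Maruyama scheme: the deterministic drift is scaled by dt and the Gaussian noise term by sqrt(dt). The sketch below uses a leaky integrate-and-fire neuron with assumed parameter values; the noise amplitude is chosen so that the mean drive sits just below threshold and firing is noise-driven.

import numpy as np

rng = np.random.default_rng(0)

def noisy_lif(I0, sigma=0.02, tau_m=0.01, R=1e7, V_th=0.015,
              dt=1e-4, t_max=2.0):
    """Leaky integrate-and-fire with diffusive (white-noise) input,
    integrated with the Euler-Maruyama scheme."""
    V = 0.0
    spike_times = []
    for step in range(int(t_max / dt)):
        y = rng.standard_normal()            # zero mean, unit variance
        V += dt * (-V + R * I0) / tau_m + sigma * np.sqrt(dt) * y
        if V >= V_th:
            spike_times.append(step * dt)
            V = 0.0
    return np.array(spike_times)

isis = np.diff(noisy_lif(1.45e-9))
print("mean ISI %.3f s, CV %.2f" % (isis.mean(), isis.std() / isis.mean()))

Because firing is driven by fluctuations rather than by a suprathreshold mean drive, the interspike intervals are irregular, which shows up as a coefficient of variation well above zero.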
A common choice for the 'escape rate' (that is consistent with biological data) is where is a time constant that describes how quickly a spike is fired once the membrane potential reaches the threshold and is a sharpness parameter. For the threshold becomes sharp and spike firing occurs deterministically at the moment when the membrane potential hits the threshold from below. The sharpness value found in experiments is which means that neuronal firing becomes non-negligible as soon as the membrane potential is a few mV below the formal firing threshold. The escape rate process via a soft threshold is reviewed in Chapter 9 of the textbook Neuronal Dynamics. For models in discrete time, a spike is generated with probability that depends on the momentary difference between the membrane voltage at time and the threshold . The function F is often taken as a standard sigmoidal with steepness parameter , similar to the update dynamics in artificial neural networks. But the functional form of F can also be derived from the stochastic intensity in continuous time introduced above as where is the threshold distance. Integrate-and-fire models with output noise can be used to predict the peristimulus time histogram (PSTH) of real neurons under arbitrary time-dependent input. For non-adaptive integrate-and-fire neurons, the interval distribution under constant stimulation can be calculated from stationary renewal theory. Spike response model (SRM) main article: Spike response model The spike response model (SRM) is a generalized linear model for the subthreshold membrane voltage combined with a nonlinear output noise process for spike generation. The membrane voltage at time t is where is the firing time of spike number f of the neuron, is the resting voltage in the absence of input, is the input current at time t-s and is a linear filter (also called kernel) that describes the contribution of an input current pulse at time t-s to the voltage at time t. The contributions to the voltage caused by a spike at time are described by the refractory kernel . In particular, describes the reset after the spike and the time course of the spike-afterpotential following a spike. It therefore expresses the consequences of refractoriness and adaptation. The voltage V(t) can be interpreted as the result of an integration of the differential equation of a leaky integrate-and-fire model coupled to an arbitrary number of spike-triggered adaptation variables. Spike firing is stochastic and happens with a time-dependent stochastic intensity (instantaneous rate) with parameters and and a dynamic threshold given by Here is the firing threshold of an inactive neuron and describes the increase of the threshold after a spike at time . In case of a fixed threshold, one sets . For the threshold process is deterministic. The time course of the filters that characterize the spike response model can be directly extracted from experimental data. With optimized parameters the SRM describes the time course of the subthreshold membrane voltage for time-dependent input with a precision of 2mV and can predict the timing of most output spikes with a precision of 4ms. The SRM is closely related to linear-nonlinear-Poisson cascade models (also called Generalized Linear Model). The estimation of parameters of probabilistic neuron models such as the SRM using methods developed for Generalized Linear Models is discussed in Chapter 10 of the textbook Neuronal Dynamics. 
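A compact way to see how the pieces of the spike response model fit together is the following sketch, which keeps only the most recent spike in the refractory term (the SRM0 simplification discussed below) and uses simple exponential kernels together with the exponential escape rate introduced earlier in this section. The kernel shapes and all numerical values are assumptions for illustration; in the SRM proper the filters are extracted from experimental data.

import numpy as np

rng = np.random.default_rng(1)

def srm_sketch(I, dt=1e-4, t_max=1.0, tau_m=0.01, R=1e7,
               eta0=0.02, tau_eta=0.02, theta=0.015,
               beta=5000.0, tau0=0.005):
    """Discrete-time SRM-style neuron: the voltage is the low-pass filtered
    input current plus a refractory kernel eta that depends only on the
    most recent spike; spikes are drawn from the escape rate."""
    V_input = 0.0            # running exponential filter of the input (kappa * I)
    t_last = -np.inf         # time of the most recent spike
    spikes = []
    for step in range(int(t_max / dt)):
        t = step * dt
        V_input += dt * (-V_input + R * I) / tau_m
        eta = -eta0 * np.exp(-(t - t_last) / tau_eta) if np.isfinite(t_last) else 0.0
        V = V_input + eta                               # membrane potential
        rho = (1.0 / tau0) * np.exp(beta * (V - theta)) # escape rate
        if rng.random() < 1.0 - np.exp(-rho * dt):      # stochastic spike
            spikes.append(t)
            t_last = t
    return spikes

print(len(srm_sketch(2e-9)), "spikes in 1 s")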
The name spike response model arises because, in a network, the input current for neuron i is generated by the spikes of other neurons so that in the case of a network the voltage equation becomes where is the firing times of neuron j (i.e., its spike train); describes the time course of the spike and the spike after-potential for neuron i; and and describe the amplitude and time course of an excitatory or inhibitory postsynaptic potential (PSP) caused by the spike of the presynaptic neuron j. The time course of the PSP results from the convolution of the postsynaptic current caused by the arrival of a presynaptic spike from neuron j with the membrane filter . SRM0 The SRM0 is a stochastic neuron model related to time-dependent nonlinear renewal theory and a simplification of the Spike Response Model (SRM). The main difference to the voltage equation of the SRM introduced above is that in the term containing the refractory kernel there is no summation sign over past spikes: only the most recent spike (denoted as the time ) matters. Another difference is that the threshold is constant. The model SRM0 can be formulated in discrete or continuous time. For example, in continuous time, the single-neuron equation is and the network equations of the SRM0 are where is the last firing time neuron i. Note that the time course of the postsynaptic potential is also allowed to depend on the time since the last spike of neuron i to describe a change in membrane conductance during refractoriness. The instantaneous firing rate (stochastic intensity) is where is a fixed firing threshold. Thus spike firing of neuron i depends only on its input and the time since neuron i has fired its last spike. With the SRM0, the interspike-interval distribution for constant input can be mathematically linked to the shape of the refractory kernel . Moreover the stationary frequency-current relation can be calculated from the escape rate in combination with the refractory kernel . With an appropriate choice of the kernels, the SRM0 approximates the dynamics of the Hodgkin-Huxley model to a high degree of accuracy. Moreover, the PSTH response to arbitrary time-dependent input can be predicted. Galves–Löcherbach model The Galves–Löcherbach model is a stochastic neuron model closely related to the spike response model SRM0 and the leaky integrate-and-fire model. It is inherently stochastic and, just like the SRM0, it is linked to time-dependent nonlinear renewal theory. Given the model specifications, the probability that a given neuron spikes in a period may be described by where is a synaptic weight, describing the influence of neuron on neuron , expresses the leak, and provides the spiking history of neuron before , according to Importantly, the spike probability of neuron depends only on its spike input (filtered with a kernel and weighted with a factor ) and the timing of its most recent output spike (summarized by ). Didactic toy models of membrane voltage The models in this category are highly simplified toy models that qualitatively describe the membrane voltage as a function of input. They are mainly used for didactic reasons in teaching but are not considered valid neuron models for large-scale simulations or data fitting. FitzHugh–Nagumo Sweeping simplifications to Hodgkin–Huxley were introduced by FitzHugh and Nagumo in 1961 and 1962. 
Seeking to describe "regenerative self-excitation" by a nonlinear positive-feedback membrane voltage and recovery by a linear negative-feedback gate voltage, they developed the model described by where we again have a membrane-like voltage and input current with a slower general gate voltage and experimentally-determined parameters . Although not derivable from biology, the model allows for a simplified, immediately available dynamic, without being a trivial simplification. The experimental support is weak, but the model is useful as a didactic tool to introduce dynamics of spike generation through phase plane analysis. See Chapter 7 in the textbook Methods of Neuronal Modeling. Morris–Lecar In 1981, Morris and Lecar combined the Hodgkin–Huxley and FitzHugh–Nagumo models into a voltage-gated calcium channel model with a delayed-rectifier potassium channel represented by where . The experimental support of the model is weak, but the model is useful as a didactic tool to introduce dynamics of spike generation through phase plane analysis. See Chapter 7 in the textbook Methods of Neuronal Modeling. A two-dimensional neuron model very similar to the Morris-Lecar model can be derived step-by-step starting from the Hodgkin-Huxley model. See Chapter 4.2 in the textbook Neuronal Dynamics. Hindmarsh–Rose Building upon the FitzHugh–Nagumo model, Hindmarsh and Rose proposed in 1984 a model of neuronal activity described by three coupled first-order differential equations: with , and so that the variable only changes very slowly. This extra mathematical complexity allows a great variety of dynamic behaviors for the membrane potential, described by the variable of the model, which includes chaotic dynamics. This makes the Hindmarsh–Rose neuron model very useful, because it is still simple, allows a good qualitative description of the many different firing patterns of the action potential, in particular bursting, observed in experiments. Nevertheless, it remains a toy model and has not been fitted to experimental data. It is widely used as a reference model for bursting dynamics. Theta model and quadratic integrate-and-fire The theta model, or Ermentrout–Kopell canonical Type I model, is mathematically equivalent to the quadratic integrate-and-fire model which in turn is an approximation to the exponential integrate-and-fire model and the Hodgkin-Huxley model. It is called a canonical model because it is one of the generic models for constant input close to the bifurcation point, which means close to the transition from silent to repetitive firing. The standard formulation of the theta model is The equation for the quadratic integrate-and-fire model is (see Chapter 5.3 in the textbook Neuronal Dynamics ) The equivalence of theta model and quadratic integrate-and-fire is for example reviewed in Chapter 4.1.2.2 of spiking neuron models. For input that changes over time or is far away from the bifurcation point, it is preferable to work with the exponential integrate-and-fire model (if one wants to stay in the class of one-dimensional neuron models), because real neurons exhibit the nonlinearity of the exponential integrate-and-fire model. Sensory input-stimulus encoding neuron models The models in this category were derived following experiments involving natural stimulation such as light, sound, touch, or odor. In these experiments, the spike pattern resulting from each stimulus presentation varies from trial to trial, but the averaged response from several trials often converges to a clear pattern. 
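Returning briefly to the didactic toy models above before continuing with the sensory-encoding models: the FitzHugh-Nagumo equations are easy to integrate directly and make a good first phase-plane exercise. The sketch below uses forward Euler with the commonly quoted parameter values, which are assumptions here rather than values taken from this article.

import numpy as np

def fitzhugh_nagumo(I=0.5, a=0.7, b=0.8, eps=0.08, dt=0.01, t_max=200.0):
    """Euler integration of the FitzHugh-Nagumo equations
        dv/dt = v - v**3/3 - w + I
        dw/dt = eps * (v + a - b*w)
    Returns the voltage-like trace v(t)."""
    n = int(t_max / dt)
    v, w = -1.0, -0.5
    trace = np.empty(n)
    for k in range(n):
        dv = v - v**3 / 3.0 - w + I
        dw = eps * (v + a - b * w)
        v += dt * dv
        w += dt * dw
        trace[k] = v
    return trace

trace = fitzhugh_nagumo()
print("v ranges from %.2f to %.2f" % (trace.min(), trace.max()))

For this choice of constant input the model exhibits repetitive firing (relaxation oscillations); for smaller inputs the trajectory instead settles at a stable rest state.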
Because of this trial-to-trial variability, the models in this category generate a probabilistic relationship between the input stimulus and spike occurrences. Importantly, the recorded neurons are often located several processing steps after the sensory neurons, so that these models summarize the effects of the sequence of processing steps in a compact form. The non-homogeneous Poisson process model (Siebert) Siebert modeled the neuron spike firing pattern using a non-homogeneous Poisson process model, following experiments involving the auditory system. According to Siebert, the probability of a spiking event in a given time interval is proportional to a non-negative function of the raw stimulus. Siebert considered several candidate functions, including one suited to low stimulus intensities. The main advantage of Siebert's model is its simplicity. The shortcomings of the model are its inability to properly reflect the following phenomena: The transient enhancement of the neuronal firing activity in response to a step stimulus. The saturation of the firing rate. The values of the inter-spike-interval histogram at short intervals (close to zero). These shortcomings are addressed by the age-dependent point process model and the two-state Markov model. Refractoriness and age-dependent point process model Berry and Meister studied neuronal refractoriness using a stochastic model that predicts spikes as a product of two terms: a function f(s(t)) that depends on the time-dependent stimulus s(t), and a recovery function that depends on the time since the last spike. The model is also called an inhomogeneous Markov interval (IMI) process. Similar models have been used for many years in auditory neuroscience. Since the model keeps memory of the last spike time it is non-Poisson and falls in the class of time-dependent renewal models. It is closely related to the model SRM0 with exponential escape rate. Importantly, it is possible to fit parameters of the age-dependent point process model so as to describe not just the PSTH response, but also the interspike-interval statistics. Linear-nonlinear Poisson cascade model and GLM The linear-nonlinear-Poisson cascade model is a cascade of a linear filtering process followed by a nonlinear spike generation step. In the case that output spikes feed back, via a linear filtering process, we arrive at a model that is known in the neurosciences as the Generalized Linear Model (GLM). The GLM is mathematically equivalent to the spike response model (SRM) with escape noise; but whereas in the SRM the internal variables are interpreted as the membrane potential and the firing threshold, in the GLM the internal variables are abstract quantities that summarize the net effect of input (and recent output spikes) before spikes are generated in the final step. The two-state Markov model (Nossenson & Messer) The spiking neuron model by Nossenson & Messer produces the probability of the neuron firing a spike as a function of either an external or pharmacological stimulus. The model consists of a cascade of a receptor layer model and a spiking neuron model, as shown in Fig. 4. The connection between the external stimulus and the spiking probability is made in two steps: First, a receptor cell model translates the raw external stimulus to neurotransmitter concentration, and then, a spiking neuron model connects neurotransmitter concentration to the firing rate (spiking probability). Thus, the spiking neuron model by itself depends on neurotransmitter concentration at the input stage.
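The two point-process ideas above (Siebert's non-homogeneous Poisson model and the age-dependent recovery of Berry and Meister) can be sketched with a single Bernoulli-per-time-bin spike generator; the stimulus-dependent rate and the recovery function used here are arbitrary illustrative choices, not the functions used in the original papers.

import numpy as np

rng = np.random.default_rng(3)

def inhomogeneous_spikes(rate_fn, recovery_fn=None, dt=1e-3, t_max=5.0):
    """Generate spikes from a time-dependent firing rate rate_fn(t) in Hz.
    If recovery_fn is given, the instantaneous rate is multiplied by
    recovery_fn(t - t_last_spike), giving an age-dependent point process."""
    t_last = -np.inf
    spikes = []
    for step in range(int(t_max / dt)):
        t = step * dt
        lam = rate_fn(t)
        if recovery_fn is not None:
            lam *= recovery_fn(t - t_last)
        if rng.random() < lam * dt:        # Bernoulli approximation, lam*dt << 1
            spikes.append(t)
            t_last = t
    return spikes

stimulus_rate = lambda t: 20.0 + 15.0 * np.sin(2 * np.pi * t)   # Hz
refractory    = lambda age: 0.0 if age < 0.002 else 1.0 - np.exp(-(age - 0.002) / 0.01)

poisson_train = inhomogeneous_spikes(stimulus_rate)              # Siebert-style
renewal_train = inhomogeneous_spikes(stimulus_rate, refractory)  # age-dependent
print(len(poisson_train), len(renewal_train))

Without the recovery function the generator is a pure inhomogeneous Poisson process; with it, short intervals are suppressed, which is the refractory effect the age-dependent model was introduced to capture.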
An important feature of the Nossenson & Messer model is the prediction for the neuron's firing rate pattern, which captures, using a low number of free parameters, the characteristic edge-emphasized response of neurons to a stimulus pulse, as shown in Fig. 5. The firing rate is identified both as a normalized probability for neural spike firing and as a quantity proportional to the current of neurotransmitters released by the cell. The expression for the firing rate takes the following form: where P0 is the probability of the neuron being "armed" and ready to fire. It is given by the following differential equation: P0 could generally be calculated recursively using the Euler method, but in the case of a pulse of stimulus, it yields a simple closed-form expression. y(t) is the input of the model and is interpreted as the neurotransmitter concentration in the cell's surroundings (in most cases glutamate). For an external stimulus it can be estimated through the receptor layer model: with being a short temporal average of stimulus power (given in watts or another unit of energy per time). R0 corresponds to the intrinsic spontaneous firing rate of the neuron. R1 is the recovery rate of the neuron from the refractory state. Other predictions by this model include: 1) The averaged evoked response potential (ERP) due to the population of many neurons in unfiltered measurements resembles the firing rate. 2) The voltage variance of activity due to multiple neuron activity resembles the firing rate (also known as Multi-Unit-Activity power or MUA). 3) The inter-spike-interval probability distribution takes the form of a gamma-distribution-like function. Pharmacological input stimulus neuron models The models in this category produce predictions for experiments involving pharmacological stimulation. Synaptic transmission (Koch & Segev) According to the model by Koch and Segev, the response of a neuron to individual neurotransmitters can be modeled as an extension of the classical Hodgkin–Huxley model with both standard and nonstandard kinetic currents. Four neurotransmitters primarily influence the CNS. AMPA/kainate receptors are fast excitatory mediators, while NMDA receptors mediate considerably slower currents. Fast inhibitory currents go through GABAA receptors, while GABAB receptors mediate via secondary G-protein-activated potassium channels. This range of mediation produces the following current dynamics: where is the maximal conductance (around 1S) and is the equilibrium potential of the given ion or transmitter (AMPA, NMDA, Cl, or K), while describes the fraction of open receptors. For NMDA, there is a significant effect of magnesium block that depends sigmoidally on the concentration of intracellular magnesium. For GABAB, is the concentration of the G-protein, and describes the dissociation of G in binding to the potassium gates. The dynamics of this more complicated model have been well studied experimentally and produce important results in terms of very quick synaptic potentiation and depression, that is, fast, short-term learning. The stochastic model by Nossenson and Messer translates neurotransmitter concentration at the input stage to the probability of releasing neurotransmitter at the output stage. For a more detailed description of this model, see the two-state Markov model section above. HTM neuron model The HTM neuron model was developed by Jeff Hawkins and researchers at Numenta and is based on a theory called Hierarchical Temporal Memory, originally described in the book On Intelligence.
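In the spirit of the synaptic currents described above, a generic conductance-based synapse can be sketched with a single gating variable per receptor type; the single-exponential kinetics, the parameter values, and the reversal potentials below are illustrative assumptions rather than the exact kinetic schemes of the Koch and Segev model.

import numpy as np

def synaptic_current(spike_times, E_rev, g_max=1e-9, tau_s=0.005,
                     V=-0.065, dt=1e-4, t_max=0.2):
    """Generic conductance-based synapse: each presynaptic spike increments
    a gating variable s (fraction of open receptors), which decays
    exponentially; the current is I = g_max * s * (V - E_rev)."""
    n = int(t_max / dt)
    s = 0.0
    I = np.zeros(n)
    spike_steps = {round(t_sp / dt) for t_sp in spike_times}
    for k in range(n):
        if k in spike_steps:
            s += 1.0                       # receptor channels open
        s -= dt * s / tau_s                # exponential closing
        I[k] = g_max * s * (V - E_rev)
    return I

# Fast excitatory (AMPA-like, E_rev ~ 0 mV) vs. inhibitory (GABA_A-like, ~ -70 mV)
exc = synaptic_current([0.01, 0.05], E_rev=0.0)
inh = synaptic_current([0.01, 0.05], E_rev=-0.070)
print("peak excitatory current %.2e A, peak inhibitory current %.2e A"
      % (exc.min(), inh.max()))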
The HTM model is based on neuroscience and the physiology and interaction of pyramidal neurons in the neocortex of the human brain. Applications Spiking neuron models are used in a variety of applications that need encoding into or decoding from neuronal spike trains in the context of neuroprostheses and brain-computer interfaces, such as retinal prostheses or artificial limb control and sensation. Applications are not part of this article; for more information on this topic please refer to the main article. Relation between artificial and biological neuron models The most basic model of a neuron consists of an input with some synaptic weight vector and an activation function or transfer function inside the neuron determining output. This is the basic structure used for artificial neurons, which in a neural network often looks like where is the output of the th neuron, is the th input neuron signal, is the synaptic weight (or strength of connection) between the neurons and , and is the activation function. While this model has seen success in machine-learning applications, it is a poor model for real (biological) neurons, because it lacks time-dependence in input and output. When an input is switched on at a time t and kept constant thereafter, biological neurons emit a spike train. Importantly, this spike train is not regular but exhibits a temporal structure characterized by adaptation, bursting, or initial bursting followed by regular spiking. Generalized integrate-and-fire models such as the Adaptive Exponential Integrate-and-Fire model, the spike response model, or the (linear) adaptive integrate-and-fire model can capture these neuronal firing patterns. Moreover, neuronal input in the brain is time-dependent. Time-dependent input is transformed by complex linear and nonlinear filters into a spike train in the output. Again, the spike response model or the adaptive integrate-and-fire model enables prediction of the spike train in the output for arbitrary time-dependent input, whereas an artificial neuron or a simple leaky integrate-and-fire model does not. If we take the Hodgkin-Huxley model as a starting point, generalized integrate-and-fire models can be derived systematically in a step-by-step simplification procedure. This has been shown explicitly for the exponential integrate-and-fire model and the spike response model. In the case of modeling a biological neuron, physical analogs are used in place of abstractions such as "weight" and "transfer function". A neuron is filled and surrounded with water containing ions, which carry electric charge. The neuron is bound by an insulating cell membrane and can maintain a concentration of charged ions on either side that determines a capacitance. The firing of a neuron involves the movement of ions into the cell, which occurs when neurotransmitters cause ion channels on the cell membrane to open. We describe this by a physical time-dependent current. With this comes a change in voltage, or the electrical potential energy difference between the cell and its surroundings, which is observed to sometimes result in a voltage spike called an action potential which travels the length of the cell and triggers the release of further neurotransmitters. The voltage, then, is the quantity of interest. If the input current is constant, most neurons emit, after some time of adaptation or initial bursting, a regular spike train.
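For comparison with the spiking models, the rate-based artificial neuron described earlier in this section reduces to a few lines; the weights, inputs, and the choice of a logistic activation are arbitrary illustrative assumptions.

import numpy as np

def artificial_neuron(x, w, b=0.0):
    """Classic rate-based artificial neuron: weighted sum of inputs passed
    through a sigmoidal activation function.  Note the absence of any
    time dependence, in contrast to the spiking models above."""
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))   # logistic activation

x = np.array([0.5, -1.0, 2.0])     # input signals from presynaptic units
w = np.array([0.8, 0.2, -0.5])     # synaptic weights
print(artificial_neuron(x, w))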
The frequency of regular firing in response to a constant current is described by the frequency-current relation, which corresponds to the transfer function of artificial neural networks. Similarly, for all spiking neuron models, the transfer function can be calculated numerically (or analytically). Cable theory and compartmental models All of the above deterministic models are point-neuron models because they do not consider the spatial structure of a neuron. However, the dendrite contributes to transforming input into output. Point neuron models are a valid description in three cases. (i) If input current is directly injected into the soma. (ii) If synaptic input arrives predominantly at or close to the soma (closeness is defined by a length scale introduced below). (iii) If synapses arrive anywhere on the dendrite, but the dendrite is completely linear. In the last case, the cable acts as a linear filter; these linear filter properties can be included in the formulation of generalized integrate-and-fire models such as the spike response model. The filter properties can be calculated from a cable equation. Let us consider a cell membrane in the form of a cylindrical cable. The position on the cable is denoted by x and the voltage across the cell membrane by V. The cable is characterized by a longitudinal resistance per unit length and a membrane resistance. If everything is linear, the voltage changes as a function of time. We introduce a length scale on the left side and a time constant on the right side. The cable equation can now be written in its perhaps best-known form: The above cable equation is valid for a single cylindrical cable. Linear cable theory describes the dendritic arbor of a neuron as a cylindrical structure undergoing a regular pattern of bifurcation, like branches in a tree. For a single cylinder or an entire tree, the static input conductance at the base (where the tree meets the cell body or any such boundary) is defined as , where is the electrotonic length of the cylinder, which depends on its length, diameter, and resistance. A simple recursive algorithm scales linearly with the number of branches and can be used to calculate the effective conductance of the tree. This is given by where is the total surface area of the tree of total length , and is its total electrotonic length. For an entire neuron in which the cell body conductance is and the membrane conductance per unit area is , we find the total neuron conductance for dendrite trees by adding up all tree and soma conductances, given by where we can find the general correction factor experimentally by noting. The linear cable model makes several simplifications to give closed analytic results, namely that the dendritic arbor must branch in diminishing pairs in a fixed pattern and that dendrites are linear. A compartmental model allows for any desired tree topology with arbitrary branches and lengths, as well as arbitrary nonlinearities. It is essentially a discretized computational implementation of nonlinear dendrites. Each piece, or compartment, of a dendrite is modeled by a straight cylinder of arbitrary length and diameter which connects with fixed resistance to any number of branching cylinders. We define the conductance ratio of the th cylinder as , where and is the resistance between the current compartment and the next.
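Before continuing with the compartmental derivation, the passive cable equation given above can be illustrated with a simple explicit finite-difference scheme; the compartment size, length constant, time constant, and boundary handling below are illustrative assumptions.

import numpy as np

def passive_cable(n_comp=100, lam=1.0, tau=0.01, dx=0.1, dt=1e-5,
                  t_max=0.05, inject=0.05):
    """Explicit finite-difference integration of the passive cable equation
        lam**2 * d2V/dx2 = tau * dV/dt + V
    on a cable of n_comp compartments.  A constant depolarization 'inject'
    (volts) is clamped at the left end; dt is chosen below the explicit
    stability limit dx**2 * tau / (2 * lam**2)."""
    V = np.zeros(n_comp)
    for _ in range(int(t_max / dt)):
        d2V = np.zeros(n_comp)
        d2V[1:-1] = (V[2:] - 2 * V[1:-1] + V[:-2]) / dx**2
        d2V[-1] = (V[-2] - V[-1]) / dx**2      # crude no-flux (sealed) end
        V = V + dt * (lam**2 * d2V - V) / tau
        V[0] = inject                          # voltage clamp at x = 0
    return V

V = passive_cable()
print("attenuation after one length constant:", V[10] / V[0])

In the steady state the clamped depolarization decays along the cable roughly as exp(-x divided by the length constant), which is what the printed attenuation checks.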
We obtain a series of equations for conductance ratios in and out of a compartment by making corrections to the normal dynamic , as where the last equation deals with parents and daughters at branches, and . We can iterate these equations through the tree until we get the point where the dendrites connect to the cell body (soma), where the conductance ratio is . Then our total neuron conductance for static input is given by Importantly, static input is a very special case. In biology, inputs are time-dependent. Moreover, dendrites are not always linear. Compartmental models enable to include nonlinearities via ion channels positioned at arbitrary locations along the dendrites. For static inputs, it is sometimes possible to reduce the number of compartments (increase the computational speed) and yet retain the salient electrical characteristics. Conjectures regarding the role of the neuron in the wider context of the brain principle of operation The neurotransmitter-based energy detection scheme The neurotransmitter-based energy detection scheme suggests that the neural tissue chemically executes a Radar-like detection procedure. As shown in Fig. 6, the key idea of the conjecture is to account for neurotransmitter concentration, neurotransmitter generation, and neurotransmitter removal rates as the important quantities in executing the detection task, while referring to the measured electrical potentials as a side effect that only in certain conditions coincide with the functional purpose of each step. The detection scheme is similar to a radar-like "energy detection" because it includes signal squaring, temporal summation, and a threshold switch mechanism, just like the energy detector, but it also includes a unit that emphasizes stimulus edges and a variable memory length (variable memory). According to this conjecture, the physiological equivalent of the energy test statistics is neurotransmitter concentration, and the firing rate corresponds to neurotransmitter current. The advantage of this interpretation is that it leads to a unit-consistent explanation which allows for bridge between electrophysiological measurements, biochemical measurements, and psychophysical results. The evidence reviewed in suggests the following association between functionality to histological classification: Stimulus squaring is likely to be performed by receptor cells. Stimulus edge emphasizing and signal transduction is performed by neurons. Temporal accumulation of neurotransmitters is performed by glial cells. Short-term neurotransmitter accumulation is likely to occur also in some types of neurons. Logical switching is executed by glial cells, and it results from exceeding a threshold level of neurotransmitter concentration. This threshold crossing is also accompanied by a change in neurotransmitter leak rate. Physical all-or-non movement switching is due to muscle cells and results from exceeding a certain neurotransmitter concentration threshold on muscle surroundings. Note that although the electrophysiological signals in Fig.6 are often similar to the functional signal (signal power/neurotransmitter concentration / muscle force), there are some stages in which the electrical observation differs from the functional purpose of the corresponding step. In particular, Nossenson et al. suggested that glia threshold crossing has a completely different functional operation compared to the radiated electrophysiological signal and that the latter might only be a side effect of glia break. 
General comments regarding the modern perspective of scientific and engineering models The models above are still idealizations. Corrections must be made for the increased membrane surface area given by numerous dendritic spines, temperatures significantly hotter than room-temperature experimental data, and nonuniformity in the cell's internal structure. Certain observed effects do not fit into some of these models. For instance, the temperature cycling (with minimal net temperature increase) of the cell membrane during action potential propagation is not compatible with models that rely on modeling the membrane as a resistance that must dissipate energy when current flows through it. The transient thickening of the cell membrane during action potential propagation is also not predicted by these models, nor is the changing capacitance and voltage spike that results from this thickening incorporated into these models. The action of some anesthetics such as inert gases is problematic for these models as well. New models, such as the soliton model attempt to explain these phenomena, but are less developed than older models and have yet to be widely applied. Modern views regarding the role of the scientific model suggest that "All models are wrong but some are useful" (Box and Draper, 1987, Gribbin, 2009; Paninski et al., 2009). Recent conjecture suggests that each neuron might function as a collection of independent threshold units. It is suggested that a neuron could be anisotropically activated following the origin of its arriving signals to the membrane, via its dendritic trees. The spike waveform was also proposed to be dependent on the origin of the stimulus. External links Neuronal Dynamics: from single neurons to networks and models of cognition (W. Gerstner, W. Kistler, R. Naud, L. Paninski, Cambridge University Press, 2014). In particular, Chapters 6 - 10, html online version. Spiking Neuron Models (W. Gerstner and W. Kistler, Cambridge University Press, 2002) See also Binding neuron Bayesian approaches to brain function Brain-computer interfaces Free energy principle Models of neural computation Neural coding Neural oscillation Quantitative models of the action potential Spiking Neural Network References Biophysics Computational neuroscience Neuroscience
Biological neuron model
[ "Physics", "Biology" ]
10,215
[ "Neuroscience", "Applied and interdisciplinary physics", "Biophysics" ]
14,408,971
https://en.wikipedia.org/wiki/VoFR
Voice over Frame Relay (VoFR) is a protocol to transfer voice over Frame Relay networks. VoFR uses two sub-protocols, FRF.11 and FRF.12. FRF.11 defines the frame format of VoFR, and FRF.12 is used for packet fragmentation and reassembly. References Telephone services Network protocols Digital audio Frame Relay
VoFR
[ "Technology" ]
76
[ "Computing stubs", "Computer network stubs" ]
14,409,822
https://en.wikipedia.org/wiki/Adenosine%20A2B%20receptor
{{DISPLAYTITLE:Adenosine A2B receptor}} The adenosine A2B receptor, also known as ADORA2B, is a G-protein coupled adenosine receptor, and also denotes the human adenosine A2b receptor gene which encodes it. Mechanism This integral membrane protein stimulates adenylate cyclase activity in the presence of adenosine. This protein also interacts with netrin-1, which is involved in axon elongation. Gene The gene is located near the Smith-Magenis syndrome region on chromosome 17. Ligands Research into selective A2B ligands has lagged somewhat behind the development of ligands for the other three adenosine receptor subtypes, but a number of A2B-selective compounds have now been developed, and research into their potential therapeutic applications is ongoing. Agonists BAY 60-6583 NECA (N-ethylcarboxamidoadenosine) (S)-PHPNECA - high affinity and efficacy at A2B, but poor selectivity over other adenosine receptor subtypes LUF-5835 LUF-5845 - partial agonist Antagonists and inverse agonists Compound 38: antagonist, high affinity and good subtype selectivity ISAM-R56A: non-xanthinic high affinity selective antagonist (Ki: 1.50 nM) ISAM-140: non-xanthinic selective antagonist (Ki = 3.49 nM). ISAM-R324A: Soluble and metabolically stable non-xanthinic selective antagonist (Ki = 6.10 nM). ATL-801 CVT-6883 MRS-1706 MRS-1754 OSIP-339,391 PSB-603: xanthinic antagonist PSB-0788: xanthinic antagonist PSB-1115: xanthinic antagonist PSB-1901: xanthinic antagonist with picomolar potency References Further reading External links Adenosine receptors
Adenosine A2B receptor
[ "Chemistry" ]
423
[ "Adenosine receptors", "Signal transduction" ]
14,410,003
https://en.wikipedia.org/wiki/Brain-specific%20angiogenesis%20inhibitor%201
Brain-specific angiogenesis inhibitor 1 is a protein that in humans is encoded by the BAI1 gene. It is a member of the adhesion-GPCR family of receptors. Function Angiogenesis is controlled by a local balance between stimulators and inhibitors of new vessel growth and is suppressed under normal physiologic conditions. Angiogenesis has been shown to be essential for growth and metastasis of solid tumors. In order to obtain blood supply for their growth, tumor cells are potently angiogenic and attract new vessels as results of increased secretion of inducers and decreased production of endogenous negative regulators. BAI1 contains at least one 'functional' p53-binding site within an intron, and its expression has been shown to be induced by wildtype p53. There are two other brain-specific angiogenesis inhibitor genes, designated BAI2 and BAI3 which along with BAI1 have similar tissue specificities and structures, however only BAI1 is transcriptionally regulated by p53. BAI1 is postulated to be a member of the secretin receptor family, an inhibitor of angiogenesis and a growth suppressor of glioblastomas. Interactions Brain-specific angiogenesis inhibitor 1 has been shown to interact with BAIAP3 and MAGI1. References External links Further reading G protein-coupled receptors
Brain-specific angiogenesis inhibitor 1
[ "Chemistry" ]
277
[ "G protein-coupled receptors", "Signal transduction" ]
14,410,020
https://en.wikipedia.org/wiki/Brain-specific%20angiogenesis%20inhibitor%202
Brain-specific angiogenesis inhibitor 2 is a protein that in humans is encoded by the BAI2 gene. It is a member of the adhesion-GPCR family of receptors. BAI1, a p53-target gene, encodes brain-specific angiogenesis inhibitor, a seven-span transmembrane protein and is thought to be a member of the secretin receptor family. Brain-specific angiogenesis proteins BAI2 and BAI3 are similar to BAI1 in structure, have similar tissue specificities and may also play a role in angiogenesis. References External links Further reading G protein-coupled receptors
Brain-specific angiogenesis inhibitor 2
[ "Chemistry" ]
128
[ "G protein-coupled receptors", "Signal transduction" ]
14,410,030
https://en.wikipedia.org/wiki/Brain-specific%20angiogenesis%20inhibitor%203
Brain-specific angiogenesis inhibitor 3 is a protein that in humans is encoded by the BAI3 gene. BAI1, a p53-target gene, encodes brain-specific angiogenesis inhibitor, a seven-span transmembrane protein and is thought to be a member of the secretin receptor family. Brain-specific angiogenesis proteins BAI2 and BAI3 are similar to BAI1 in structure, have similar tissue specificities and may also play a role in angiogenesis. The BAI3 receptor has also been found to regulate dendrite morphogenesis, arborization growth and branching in cultured neurons. The adhesion GPCR BaI3 is an orphan receptor that has a long N-terminus consisting of one cub domain, five BaI Thrombospondin type 1 repeats, and one hormone binding domain. BaI3 is expressed in neural tissues of the central nervous system. BaI3 has been shown to have a high affinity for C1q proteins. C1q added to hippocampal neurons expressing BaI3 resulted in a decrease in the number of synapses. References Further reading External links G protein-coupled receptors
Brain-specific angiogenesis inhibitor 3
[ "Chemistry" ]
234
[ "G protein-coupled receptors", "Signal transduction" ]
14,410,190
https://en.wikipedia.org/wiki/CD97
Cluster of differentiation 97 is a protein also known as BL-Ac[F2] encoded by the ADGRE5 gene. CD97 is a member of the adhesion G protein-coupled receptor (GPCR) family. Adhesion GPCRs are characterized by an extended extracellular region often possessing N-terminal protein modules that is linked to a TM7 region via a domain known as the GPCR-Autoproteolysis INducing (GAIN) domain. CD97 is widely expressed on, among others, hematopoietic stem and progenitor cells, immune cells, epithelial cells, muscle cells as well as their malignant counterparts. In the case of CD97 the N-terminal domains consist of alternatively spliced epidermal growth factor (EGF)-like domains. Alternative splicing has been observed for this gene and three variants have been found. The N-terminal fragment of CD97 contains 3-5 EGF-like domains in human and 3-4 EGF-like domains in mice. Ligands Decay accelerating factor (DAF/CD55), a regulatory protein of the complement cascade, interacts with the first and second EGF-like domains of CD97; chondroitin sulfate B with the fourth EGF-like domain; α5β1 and αvβ3 integrins with an RGD downstream the EGF-like domains; and CD90 (Thy-1) with the GAIN domain. N-glycosylation of CD97 within the EGF domains is crucial for CD55 binding. Signaling Transgenic expression of a CD97 in mice enhanced levels of nonphosphorylated membrane-bound β-catenin and phosphorylated Akt. Furthermore, ectopic CD97 expression facilitated RhoA activation through binding of Gα12/13 as well as induced Ki67 expression and phosphorylated ERK and Akt through enhancing lysophosphatidic acid receptor 1 (LPAR1) signaling. Lysophosphatidylethanolamine (LPE; a plasma membrane component) and lysophosphatidic acid (LPA) use heterodimeric LPAR1–CD97 to drive Gi/o protein–phospholipase C–inositol 1,4,5-trisphosphate signaling and induce [Ca2+] in breast cancer cells. Function In the immune system, CD97 is known as a critical mediator of host defense. Upon lymphoid, myeloid cells and neutrophil activation, CD97 is upregulated to promote adhesion and migration to sites of inflammation. Moreover, it has been shown that CD97 regulates granulocyte homeostasis. Mice lacking CD97 or its ligand CD55 have twice as many granulocytes as wild-type mice possibly due to enhanced granulopoiesis. Antibodies against CD97 have been demonstrated to diminish various inflammatory disorders by depleting granulocytes. Notably, CD97 antibody-mediated granulocytopenia only happens under the condition of pro-inflammation via an Fc receptor-associated mechanism. Finally, the interaction between CD97 and its ligand CD55 regulates T-cell activation and increases proliferation and cytokine production. Changes in the expression of CD97 have been described for auto-inflammatory diseases, such as rheumatoid arthritis and multiple sclerosis. The expression of CD97 on macrophage and the abundant presence of its ligand CD55 on fibroblast-like synovial cells suggest that the CD97-CD55 interaction is involved in the recruitment and/or retention of macrophages into the synovial tissue in rheumatoid arthritis. CD97 antibodies and lack of CD97 or CD55 in mice reduced synovial inflammation and joint damage in collagen- and K/BxN serum transfer-induced arthritis. In brain tissue, CD97 is undetectable in normal white matter, and expression of CD55 is fairly restricted to the endothelium. 
In pre-active lesion, increased expression of CD55 in endothelial cells and robust CD97 expression on infiltrating leukocytes suggest a possible role of both molecules in immune cell migration through the blood-brain barrier. Additionally, soluble N-terminal fragment (NTF)s of CD97 are detectable in the serum of patients with rheumatoid arthritis and multiple sclerosis. Outside the immune system, CD97 is likely involved in cell–cell interactions. CD97 in colonic enterocytes strengthens E-cadherin-based adherens junctions to maintain lateral cell-cell contacts and regulates the localization and degradation of β-catenin through glycogen synthase kinase-3β (GSK-3β) and Akt signaling. Ectopic CD97 expression upregulates the expression of N-cadherin and β-catenin in HT1080 fibrosarcoma cells leading to enhanced cell-cell aggregation. CD97 is expressed at the sarcoplasmic reticulum and the peripheral sarcolemma in skeletal muscle. However, lack of CD97 only affects the structure of the sarcoplasmic reticulum, but not the function of skeletal muscle. In addition, CD97 promotes angiogenesis of the endothelium through to α5β1 and αvβ3 integrins, which contributes to cell attachment. Clinical significance CD97 expression in cancer was first reported for dedifferentiated thyroid carcinoma and their lymph node metastases. CD97 is expressed on many types of tumors including thyroid, gastric, pancreatic, esophageal, colorectal, and oral squamous carcinomas as well as glioblastoma and glioblastoma-initiating cells. In addition, enhanced CD97 expression has been found at the invasion front of tumors, suggesting a possible role in tumor migration/invasion, and correlated with a poorer clinical prognosis. CD97 has isoform-specific functions in some tumors. For instance, the small EGF(1,2,5) isoform promoted tumor invasion and metastasis in gastric carcinoma; the small EGF(1,2,5) isoform induced but the full length EGF(1-5) isoform suppressed gastric carcinoma invasion. Forced CD97 expression induced cell migration, activated proteolytic matrix metalloproteinases (MMPs), and enhanced secretion of the chemokines interleukin (IL)-8. Tumor suppressor microRNA-126, often downregulated in cancer, was found to target CD97 thereby modulating cancer progression. CD97 can heterodimerize with the LPAR1, a canonical GPCR that is implied in tumor progression, to modulate synergistic functions and LPA-mediated Rho signaling. It has been shown that CD97 regulates localization and degradation of β-catenin. GSK-3β, inhibited in some cancer, regulates the stability of β-catenin in cytoplasm and subsequently, cytosolic β-catenin moves into the nucleus to facilitate expression of pro-oncogenic genes. Because of its role in tumor invasion and angiogenesis, CD97 is a potential therapeutic target. Several treatments reduce CD97 expression in tumor cells such as cytokine tumor growth factor (TGF)β as well as the compounds sodium butyrate, retinoic acid, and troglitazone. Taken together, experimental evidence indicates that CD97 plays multiple roles in tumor progress. References External links GPCR consortium G protein-coupled receptors
CD97
[ "Chemistry" ]
1,664
[ "G protein-coupled receptors", "Signal transduction" ]
14,410,264
https://en.wikipedia.org/wiki/Proximity%20communication
Proximity communication is a Sun Microsystems technology for wireless chip-to-chip communication, developed in part by Robert Drost and Ivan Sutherland. The research was done as part of the DARPA High Productivity Computing Systems project and was partially funded by a $50 million award from the Defense Advanced Research Projects Agency. Proximity communication replaces wires with capacitive coupling and promises a significant increase in communication speed between chips in an electronic system, among other benefits. Compared with traditional area ball bonding, proximity communication operates at a scale one order of magnitude smaller, so it can be two orders of magnitude denser (in terms of the number of connections/pins) than ball bonding. The technique requires very good alignment between chips and very small gaps between the transmitting (Tx) and receiving (Rx) parts (2-3 micrometers), which can be disrupted by thermal expansion, vibration, dust, etc. The chip transmitter consists (according to a presentation slide) of a large 32x32 array of very small Tx micropads, a 4x4 array of larger Rx micropads (four times bigger than a Tx micropad), and two linear arrays of 14 X-vernier and 14 Y-vernier pads. Proximity communication can be used with 3D packing of chips in a multi-chip module (MCM), allowing several MCMs to be connected without sockets and wires. Speed was up to 1.35 Gbit/s per channel in tests of 16-channel systems, with BER < 10−12. Static power is 3.6 mW/channel, dynamic power is 3.9 pJ/bit. External links Slides by Robert J. Drost List of Drost patents in Sun, most of which are about proximity communication Semiconductors Semiconductor technology Microtechnology Sun Microsystems
Proximity communication
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
334
[ "Electrical resistance and conductance", "Physical quantities", "Microtechnology", "Semiconductors", "Materials science", "Materials", "Electronic engineering", "Condensed matter physics", "Semiconductor technology", "Solid state engineering", "Matter" ]
14,410,379
https://en.wikipedia.org/wiki/Corticotropin-releasing%20hormone%20receptor%202
Corticotropin-releasing hormone receptor 2 (CRHR2) is a protein, also known by the IUPHAR-recommended name CRF2, that is encoded by the CRHR2 gene and occurs on the surfaces of some mammalian cells. CRF2 receptors are type 2 G protein-coupled receptors for corticotropin-releasing hormone (CRH) that are resident in the plasma membranes of hormone-sensitive cells. CRH, a peptide of 41 amino acids synthesized in the hypothalamus, is the principal neuroregulator of the hypothalamic-pituitary-adrenal axis, signaling via guanine nucleotide-binding proteins (G proteins) and downstream effectors such as adenylate cyclase. The CRF2 receptor is a multi-pass membrane protein with a transmembrane domain composed of seven helices arranged in a V-shape. CRF2 receptors are activated by two structurally similar peptides, urocortin II, and urocortin III, as well as CRH. Properties The human CRHR2 gene contains 12 exons. Three major functional isoforms, alpha (411 amino acids), beta (438 amino acids), and gamma (397 amino acids), encoded by transcripts with alternative first exons, differ only in the N-terminal sequence comprising the signal peptide and part of the extracellular domain (amino acids 18-108 of CRHR2 alpha); the unique N-terminal sequence of each isoform (34 amino acids in CRHR2 alpha; 61 amino acids in Hs CRHR2 beta; 20 amino acids in CRHR2 gamma) is followed by a sequence common to all isoforms (377 amino acids) comprising most of the multi-pass transmembrane domain followed by a cytoplasmic domain of 47 amino acids. CRHR2 beta is expressed in human brain; CRHR2 alpha predominates in peripheral tissues. The N-terminal signal peptides of corticotropin-releasing hormone receptor 1 and CRHR2 beta are cleaved off in the endoplasmic reticulum to yield the mature receptors. In contrast, CRHR2 alpha contains a unique pseudo signal peptide that is not removed from the mature receptor. In adenylate cyclase activation assays, CRH-related peptides are 10 times more potent at stimulating CRHR2 beta than CRHR2 alpha and CRHR2 gamma, suggesting that the N-terminal sequence is involved in the ligand-receptor interaction. See also Corticotropin-releasing hormone receptor References Further reading External links G protein-coupled receptors Corticotropin-releasing hormone
Corticotropin-releasing hormone receptor 2
[ "Chemistry" ]
560
[ "G protein-coupled receptors", "Signal transduction" ]
14,410,427
https://en.wikipedia.org/wiki/GPR183
G-protein coupled receptor 183, also known as Epstein-Barr virus-induced G-protein coupled receptor 2 (EBI2), is a G protein-coupled receptor (GPCR) expressed on the surface of some immune cells, namely B cells and T cells; in humans it is encoded by the GPR183 gene. EBI2 expression is a critical mediator of immune cell localization within lymph nodes, responsible in part for the coordination of B cell, T cell, and dendritic cell movement and interaction following antigen exposure. EBI2 is a receptor for oxysterols. The most potent activator is 7α,25-dihydroxycholesterol (7α,25-OHC), with other oxysterols exhibiting varying affinities for the receptor. Oxysterol gradients drive chemotaxis, attracting EBI2-expressing cells to locations of high ligand concentration. The GPR183 gene was identified due to its upregulation during Epstein-Barr virus infection of the Burkitt's lymphoma cell line BL41, hence its name: EBI2.
Tissue distribution and function
B cells
EBI2 helps B cells home to the outer follicular region within a lymph node. Approximately three hours following B cell exposure to plasma-soluble antigen, EBI2 is upregulated via the transcription factor BRRF1. With more surface receptors binding the oxysterol ligand, the cell migrates up the gradient to the outer follicular region. The reason for this early migration is still unknown; however, because soluble antigen enters lymph nodes via afferent lymphatic vasculature near the outer region of the follicle, it is hypothesized that B cell movement is motivated by increased exposure to the antigen.
Six hours after antigen exposure, EBI2 is downregulated to low levels, permitting the B cells to migrate to the border between the B cell and T cell zones of the lymph node. Here, B cells interact with T helper cells previously activated by antigen-presenting dendritic cells. Though CCR7 is the dominant receptor in this stage of B cell migration, EBI2 is still critical: its low expression contributes to an organized distribution along the T zone border that maximizes interactions with T cells.
Following B cell receptor and CD40 co-stimulation, EBI2 is again upregulated. The B cells thus move back toward the outer follicular space, where they begin cell division. At this point, a B cell either downregulates EBI2 expression in order to enter a germinal center or maintains EBI2 expression and remains in outer follicular regions. In germinal centers (GC), B cells downregulate the receptor via the transcriptional repressor B-cell lymphoma-6 (BCL6) and, following somatic hypermutation, differentiate into long-lived antibody-secreting plasma cells or memory B cells. EBI2 must turn off for B cells to move from the periphery into the germinal center, and must turn on for B cells to exit the germinal center and re-enter the periphery. Meanwhile, those remaining outside the follicle differentiate into plasmablasts, eventually becoming short-lived plasma cells. Thus, EBI2 expression modulates B cell differentiation by directing cells toward or away from germinal centers.
T cells
EBI2 also regulates intra-lymphatic T cell migration. Mature T helper cells upregulate EBI2 to follow the oxysterol gradient, migrating to the outer edges of the T cell zone to receive signals from antigen-presenting dendritic cells arriving from the tissues. This migration is critical, as the resulting T cell-DC interaction induces T helper cell differentiation into T follicular helper cells. In concert with upregulation of CXCR5, the downregulation of EBI2 helps T follicular helper cells move toward the follicle center to help B cells undergoing affinity maturation in germinal centers.
Dendritic cells
EBI2 expression on CD4+ dendritic cells is a key initiator of the immune response. Antigen-activated dendritic cells are driven to lymph node bridging channels via the oxysterol-EBI2 pathway. In the spleen, bridging channels connect the marginal zone, where dendritic cells pick up plasma-soluble antigen, to the T cell zone, where they present antigen to T helper cells. This results in T cell proliferation and differentiation. Localization to bridging channels is also associated with dendritic cell reception of lymphotoxin beta signaling, which augments their uptake of blood-borne pathogens, resulting in an increase in T cell responses.
Ligand
Oxysterols bind to and activate EBI2. The highest-affinity oxysterol ligand is 7α,25-dihydroxycholesterol (7α,25-OHC), formed by enzymatic oxidation of cholesterol by the hydroxylases CH25H and CYP7B1. 7α,25-OHC is concentrated in bridging channels and at the outer perimeter of B cell follicles. Conversely, it is not present in follicle centers, germinal centers, or the T zone. The enzymes responsible for ligand biosynthesis, CH25H and CYP7B1, are accordingly abundant in lymphoid stromal cells, whereas the enzyme that deactivates the ligand, HSD3B7, is highly concentrated in the area where the ligand concentration should be lowest: the T zone. Though it is not a cytokine, the EBI2 ligand acts much like a chemokine in that its gradient drives cellular migration.
Virus infection
GPR183 plays a crucial role in driving inflammation in the lungs during severe viral respiratory infections such as influenza A virus (IAV) and SARS-CoV-2. Studies using preclinical murine models of infection revealed that activation of GPR183 by oxidized cholesterols leads to the recruitment of monocytes/macrophages and the production of inflammatory cytokines in the lungs.
References
Further reading
G protein-coupled receptors
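Illustrative note (not part of the original article): the gradient-driven chemotaxis described above can be sketched as a biased random walk in which a cell's drift toward high ligand concentration scales with its EBI2 surface expression. The following Python toy model is a hypothetical illustration only; the gradient shape, step rule, and ebi2_level parameter are invented for clarity and are not values from the literature.

import random

def ligand(x):
    # Hypothetical 7a,25-OHC concentration: lowest in the T zone (x = 0),
    # highest at the outer follicular perimeter (x = 100). Illustrative only.
    return max(0.0, min(x, 100.0)) / 100.0

def migrate(ebi2_level, steps=120, start=50.0):
    # Biased random walk: with probability ebi2_level the cell senses the local
    # gradient and steps toward higher ligand; otherwise it steps at random.
    x = start
    for _ in range(steps):
        if random.random() < ebi2_level and ligand(x + 1) > ligand(x - 1):
            step = 1.0
        else:
            step = random.choice([-1.0, 1.0])
        x = max(0.0, min(x + step, 100.0))
    return x

random.seed(0)
print("EBI2-high cell final position:", round(migrate(0.9), 1))  # upregulated EBI2: strong drift up the gradient
print("EBI2-low cell final position:", round(migrate(0.1), 1))   # downregulated EBI2: mostly undirected movement

In the real system the gradient is shaped by CH25H/CYP7B1 synthesis and HSD3B7 degradation of 7α,25-OHC, and EBI2 levels are switched up and down over time, which this static sketch does not capture.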
GPR183
[ "Chemistry" ]
1,337
[ "G protein-coupled receptors", "Signal transduction" ]
14,410,450
https://en.wikipedia.org/wiki/LPAR1
Lysophosphatidic acid receptor 1, also known as LPA1, is a protein that in humans is encoded by the LPAR1 gene. LPA1 is a G protein-coupled receptor that binds the lipid signaling molecule lysophosphatidic acid (LPA).
Function
The integral membrane protein encoded by this gene is a lysophosphatidic acid (LPA) receptor from a group known as EDG receptors. These receptors are members of the G protein-coupled receptor superfamily. Utilized by LPA for cell signaling, EDG receptors mediate diverse biologic functions, including proliferation, platelet aggregation, smooth muscle contraction, inhibition of neuroblastoma cell differentiation, chemotaxis, and tumor cell invasion. Alternative splicing of this gene has been observed, and two transcript variants encoding the same protein have been described. An alternate translation start codon has been identified, which results in isoforms differing in the N-terminal extracellular tail. In addition, an alternate polyadenylation site has been reported.
Cancer
The LPAR1 gene has been found to be progressively overexpressed in human papillomavirus-positive neoplastic keratinocytes derived from uterine cervical preneoplastic lesions at different levels of malignancy. For this reason, the gene is likely to be associated with tumorigenesis and may be a potential prognostic marker for the progression of uterine cervical preneoplastic lesions.
Evolution
Paralogues
Source:
LPAR2
LPAR3
S1PR1
S1PR3
S1PR5
S1PR4
S1PR2
CNR1
MC3R
MC5R
MC4R
GPR12
MC2R
GPR6
CNR2
GPR3
MC1R
GPR119
See also
Lysophospholipid receptor
References
Further reading
External links
G protein-coupled receptors
LPAR1
[ "Chemistry" ]
385
[ "G protein-coupled receptors", "Signal transduction" ]
14,410,456
https://en.wikipedia.org/wiki/S1PR3
Sphingosine-1-phosphate receptor 3, also known as S1PR3, is a G protein-coupled receptor that in humans is encoded by the S1PR3 gene and binds the lipid signaling molecule sphingosine 1-phosphate (S1P). Hence this receptor is also known as S1P3.
Function
This gene encodes a member of the EDG family of receptors, which are G protein-coupled receptors. This protein has been identified as a functional receptor for sphingosine 1-phosphate and likely contributes to the regulation of angiogenesis and vascular endothelial cell function.
Evolution
Paralogues to the S1PR3 gene
S1PR1
S1PR5
S1PR2
S1PR4
LPAR1
LPAR3
LPAR2
CNR1
MC5R
GPR6
GPR12
MC4R
CNR2
GPR3
MC3R
MC2R
GPR119
MC1R
See also
Lysophospholipid receptor
References
Further reading
External links
G protein-coupled receptors
S1PR3
[ "Chemistry" ]
211
[ "G protein-coupled receptors", "Signal transduction" ]
14,410,475
https://en.wikipedia.org/wiki/CELSR3
Cadherin EGF LAG seven-pass G-type receptor 3 is a protein that in humans is encoded by the CELSR3 gene.
The protein encoded by this gene is a member of the flamingo subfamily, part of the cadherin superfamily. The flamingo subfamily consists of nonclassic-type cadherins, a subpopulation that does not interact with catenins. The flamingo cadherins are located at the plasma membrane and have nine cadherin domains, seven epidermal growth factor-like repeats, and two laminin A G-type repeats in their ectodomain. They also have seven transmembrane domains, a characteristic unique to this subfamily. It is postulated that these proteins are receptors involved in contact-mediated communication, with cadherin domains acting as homophilic binding regions and the EGF-like domains involved in cell adhesion and receptor-ligand interactions. The specific function of this particular member has not been determined.
See also
Flamingo (protein)
References
Further reading
External links
Adhesion G protein-coupled receptors
G protein-coupled receptors
CELSR3
[ "Chemistry" ]
229
[ "G protein-coupled receptors", "Signal transduction" ]
14,410,501
https://en.wikipedia.org/wiki/CELSR2
Cadherin EGF LAG seven-pass G-type receptor 2 is a protein that in humans is encoded by the CELSR2 gene.
The protein encoded by this gene is a member of the flamingo subfamily, part of the cadherin superfamily. The flamingo subfamily consists of nonclassic-type cadherins, a subpopulation that does not interact with catenins. The flamingo cadherins are located at the plasma membrane and have nine cadherin domains, seven epidermal growth factor-like repeats, and two laminin A G-type repeats in their ectodomain. They also have seven transmembrane domains, a characteristic unique to this subfamily. It is postulated that these proteins are receptors involved in contact-mediated communication, with cadherin domains acting as homophilic binding regions and the EGF-like domains involved in cell adhesion and receptor-ligand interactions. The specific function of this particular member has not been determined.
See also
Flamingo (protein)
References
Further reading
External links
Adhesion G protein-coupled receptors
G protein-coupled receptors
CELSR2
[ "Chemistry" ]
229
[ "G protein-coupled receptors", "Signal transduction" ]
14,410,583
https://en.wikipedia.org/wiki/Formyl%20peptide%20receptor%203
N-formyl peptide receptor 3 (FPR3) is a receptor protein that in humans is encoded by the FPR3 gene.
Nomenclature note
Confusingly, there are two nomenclatures for the FPR receptors and their genes: the first to be used, FPR, FPR1, and FPR2, and its replacement, FPR1, FPR2, and FPR3, which correspond directly to the three respective receptors and their genes. The latter nomenclature is recommended by the International Union of Basic and Clinical Pharmacology and is used here. Other previously used names for FPR1 are NFPR and FMLPR; for FPR2, FPRH1, FPRL1, RFP, LXA4R, ALXR, FPR2/ALX, HM63, FMLPX, and FPR2A; and for FPR3, FPRH2, FPRL2, and FMLPY.
FPR3 function
The overall function of FPR3 is quite unclear. Compared to FPR1 and FPR2, FPR3 is highly phosphorylated (a signal for receptor inactivation and internalization) and more localized to small intracellular vesicles. This suggests that FPR3 rapidly internalizes after binding its ligands and thereby may serve as a "decoy" receptor that reduces the binding of its ligands to the FPR1 and FPR2 receptors.
Genes
Humans
The FPR3 gene was cloned and named based on the similarity of the amino acid sequence which it encodes to that encoded by the gene for FPR1 (see formyl peptide receptor 1 for details). These studies indicated that FPR3 is composed of 352 amino acids and that its gene, like FPR1, has an intronless open reading frame encoding a protein with the seven-transmembrane structure of G protein-coupled receptors; FPR3 has 69% and 72% amino acid sequence identity with FPR1 and FPR2, respectively. All three genes localize to chromosome 19q13.3 in the order FPR1 (19q13.410), FPR2 (19q13.3-q13.4), and FPR3 (19q13.3-q13.4), forming a cluster which also includes the genes for another G protein-coupled chemotactic factor receptor, the C5a receptor (also termed CD88), and for a second C5a receptor, C5a2 (also termed C5L2 or GPR77), which has the structure of a G protein-coupled receptor but fails to couple to G proteins and is of debated function.
Mice
Mouse FPR receptors localize to chromosome 17A3.2 in the following order: Fpr1, Fpr-rs2 (or fpr2), Fpr-rs1 (or LXA4R), Fpr-rs4, Fpr-rs7, Fpr-rs6, and Fpr-rs3. Pseudogenes ψFpr-rs2 and ψFpr-rs3 (or ψFpr-rs5) lie just after Fpr-rs2 and Fpr-rs1, respectively. All of the active mouse FPR receptors have ≥50% amino acid sequence identity with each other as well as with the three human FPR receptors. Based on its predominantly intracellular distribution, mFpr-rs1 correlates, and therefore may share functionality, with human FPR3. However, the large number of mouse compared to human FPR receptors makes it difficult to extrapolate human FPR functions based on genetic (e.g. gene knockout or forced overexpression) or other experimental manipulations of the FPR receptors in mice.
Other species
FPR receptors are widely distributed throughout mammalian species. Based on phylogenetic analysis, the FPR1, FPR2, and FPR3 paralogs originate from a common ancestor: an early duplication separated FPR1 from FPR2/FPR3, and FPR3 arose from the latest duplication event, near the origin of primates. Rabbits express an ortholog of FPR1 (78% amino acid sequence identity) with high binding affinity for FMLP; rats express an ortholog of FPR2 (74% amino acid sequence identity) with high affinity for lipoxin A4.
Cellular and tissue distribution
FPR3 is expressed by circulating monocytes, eosinophils, and basophils, but not neutrophils, as well as by tissue macrophages and dendritic cells.
Ligands and potential ligand-based disease-related activities
The functions of FPR3 and the few ligands which activate it have not been fully clarified. Despite its homology to FPR1, FPR3 is unresponsive to many FPR1-stimulating formyl peptides, including FMLP. However, fMMYALF, an N-formyl hexapeptide derived from the mitochondrial protein NADH dehydrogenase subunit 6, is a weak agonist for FPR3 but is >100-fold more potent in stimulating FPR1 and FPR2. F2L is a naturally occurring acylated peptide, derived from the N-terminal sequence of heme-binding protein 1 by cathepsin D cleavage, that potently stimulates chemotaxis through FPR3 in monocytes and monocyte-derived dendritic cells. F2L may thereby be a pro-inflammatory stimulus acting through FPR3. Similar to FPR2 (see FPR2 section), FPR3 is activated by humanin and thereby may be involved in inhibiting the inflammation occurring in, and perhaps contributing to, Alzheimer's disease.
See also
Formyl peptide receptor 1
Formyl peptide receptor 2
N-Formylmethionine-leucyl-phenylalanine
References
Further reading
External links
G protein-coupled receptors
Formyl peptide receptors
Formyl peptide receptor 3
[ "Chemistry" ]
1,247
[ "G protein-coupled receptors", "Signal transduction" ]
14,410,612
https://en.wikipedia.org/wiki/Frizzled-2
Frizzled-2 (Fz-2) is a protein that in humans is encoded by the FZD2 gene.
Members of the 'frizzled' gene family encode 7-transmembrane domain proteins that are receptors for Wnt signaling proteins. The expression of the FZD2 gene appears to be developmentally regulated, with high levels of expression in fetal kidney and lung and in adult colon and ovary.
References
Further reading
External links
G protein-coupled receptors
Frizzled-2
[ "Chemistry" ]
102
[ "G protein-coupled receptors", "Signal transduction" ]
14,410,628
https://en.wikipedia.org/wiki/Galanin%20receptor%201
Galanin receptor 1 (GAL1) is a G-protein coupled receptor encoded by the GALR1 gene.
Function
The neuropeptide galanin elicits a range of biological effects by interaction with specific G-protein-coupled receptors. Galanin receptors are seven-transmembrane proteins shown to activate a variety of intracellular second-messenger pathways. GALR1 inhibits adenylyl cyclase via a G protein of the Gi/Go family. GALR1 is widely expressed in the brain and spinal cord, as well as in peripheral sites such as the small intestine and heart.
See also
Galanin receptor
References
Further reading
External links
G protein-coupled receptors
Galanin receptor 1
[ "Chemistry" ]
148
[ "G protein-coupled receptors", "Signal transduction" ]
14,410,691
https://en.wikipedia.org/wiki/GPR1
G protein-coupled receptor 1, also known as GPR1, is a protein that in humans is encoded by the GPR1 gene. GPR1 is a member of the G protein-coupled receptor family of transmembrane receptors. It functions as a receptor for chemerin. Other receptors for chemerin include CMKLR1 and CCRL2.
References
Further reading
G protein-coupled receptors
GPR1
[ "Chemistry" ]
86
[ "G protein-coupled receptors", "Signal transduction" ]
14,410,724
https://en.wikipedia.org/wiki/GPR4
G-protein coupled receptor 4 is a protein that in humans is encoded by the GPR4 gene.
See also
Proton-sensing G protein-coupled receptors
References
Further reading
G protein-coupled receptors
GPR4
[ "Chemistry" ]
41
[ "G protein-coupled receptors", "Signal transduction" ]
14,410,736
https://en.wikipedia.org/wiki/GPR6
G protein-coupled receptor 6, also known as GPR6, is a protein which in humans is encoded by the GPR6 gene.
Function
GPR6 is a member of the G protein-coupled receptor family of transmembrane receptors. GPR6 has been reported to be constitutively active and to be further activated by sphingosine-1-phosphate. GPR6 up-regulates cyclic AMP levels and promotes neurite outgrowth.
Ligand
Inverse agonist
Cannabidiol
Evolution
Paralogues to the GPR6 gene
Source:
GPR3
GPR12
S1PR5
S1PR1
CNR1
S1PR2
LPAR2
CNR2
MC1R
S1PR3
S1PR4
GPR119
MC3R
MC4R
MC5R
LPAR1
LPAR3
MC2R
See also
Lysophospholipid receptor
References
Further reading
External links
G protein-coupled receptors
GPR6
[ "Chemistry" ]
200
[ "G protein-coupled receptors", "Signal transduction" ]
14,410,739
https://en.wikipedia.org/wiki/Neuropeptides%20B/W%20receptor%201
Neuropeptides B/W receptor 1, also known as NPBW1 and GPR7, is a human protein encoded by the NPBWR1 gene. As implied by its name, it and the related receptor NPBW2 (whose gene, NPBWR2, shares 70% nucleotide identity with NPBWR1) are transmembrane proteins that bind Neuropeptide B (NPB) and Neuropeptide W (NPW), both of which are expressed strongly in parts of the brain that regulate stress and fear, including the extended amygdala and the stria terminalis. When originally discovered in 1995, these receptors had no known ligands ("orphan receptors") and were called GPR7 and GPR8, but at least three groups in the early 2000s independently identified their endogenous ligands, triggering the name change in 2005.
Structure
NPBW1 has seven transmembrane domains, a feature it shares not only with NPBW2 but also with a family of somatostatin and opioid receptors; like these proteins, it couples to Gi-class G proteins.
Functions
In rodent models, NPBWR1 is over-expressed in Schwann cells associated with neuropathic pain, suggesting it inhibits inflammatory pain responses. Mice without NPBW1 exhibited a stronger hostile reaction to intruders, suggesting NPBW1 has a role in stress responses. Early studies indicated that NPB and NPW had a complex effect on appetite, but generally led to anorexia. Similarly, male rats lacking NPBWR1 exhibited hyperphagia and adult-onset obesity, though why female rats are unaffected is unknown. Researchers speculated that activating these pathways might decrease obesity, and synthesized a small-molecule ligand capable of stimulating both receptors at low concentrations.
References
Further reading
External links
G protein-coupled receptors
Neuropeptides B/W receptor 1
[ "Chemistry" ]
383
[ "G protein-coupled receptors", "Signal transduction" ]