id int64 39 79M | url stringlengths 31 227 | text stringlengths 6 334k | source stringlengths 1 150 ⌀ | categories listlengths 1 6 | token_count int64 3 71.8k | subcategories listlengths 0 30 |
|---|---|---|---|---|---|---|
1,503,668 | https://en.wikipedia.org/wiki/Risk%20management%20plan | A risk management plan is a document to foresee risks, estimate impacts, and define responses to risks. It also contains a risk assessment matrix. According to the Project Management Institute, a risk management plan is a "component of the project, program, or portfolio management plan that describes how risk management activities will be structured and performed".
Moreover, according to the Project Management Institute, a risk is "an uncertain event or condition that, if it occurs, has a positive or negative effect on a project's objectives". Risk is inherent with any project, and project managers should assess risks continually and develop plans to address them. The risk management plan contains an analysis of likely risks with both high and low impact, as well as mitigation strategies to help the project avoid being derailed should common problems arise. Risk management plans should be periodically reviewed by the project team to avoid having the analysis become stale and not reflective of actual potential project risks.
Risk response
Broadly, there are four potential responses to risk with numerous variations on the specific terms used to name these response options:
Avoid – Change plans to circumvent the problem;
Control / mitigate / modify / reduce – Reduce threat impact or likelihood (or both) through intermediate steps;
Accept / retain – Assume the chance of the negative impact (self-insurance), possibly budgeting the cost (e.g. via a contingency budget line); or
Transfer / share – Outsource risk (or a portion of the risk) to a third party or parties that can manage the outcome. This is done financially through insurance contracts or hedging transactions, or operationally through outsourcing an activity.
(Mnemonic: SARA, for Share Avoid Reduce Accept, or A-CAT, for "Avoid, Control, Accept, or Transfer")
Risk management plans often include matrices.
Examples
The United States Department of Defense, as part of acquisition, uses risk management planning that may include a Risk Management Plan (RMP) document for the specific project. The general intent of the RMP in this context is to define the scope of the risks to be tracked and the means of documenting reports. An integrated relationship to other processes is also desired; for example, the developmental tests that verify that design risks have been minimized are stated as part of the test and evaluation master plan. A further example is the instruction in DoD 5000.2D that, for programs that are part of a system of systems, the risk management strategy shall specifically address integration and interoperability as a risk area. The specific RMP processes and templates shift over time (e.g. the disappearance of the 2002 documents Defense Finance and Accounting Service / System Risk Management Plan and the SPAWAR Risk Management Process).
See also
Event chain methodology
Project management
Project Management Professional
Risk evaluation and mitigation strategy (REMS)
Risk management
Risk management tools
Risk management framework
Gordon–Loeb model for cyber security investments
Citations
References
External links
Creating The Risk Management Plan (template included)
EPA RMP Rule page
Risk Management Guide for DoD Acquisition (ver 6 - ver 5.2 more detailed but obsolete)
Defense Acquisition University, System Engineering Fundamentals (see ch 15)
US DoD extension to PMBOK Guide, June 2003 (see ch 11)
US DoD extension to PMBOK Guide (see ch 11)
US Defense Acquisition Guidebook (DAG) - ch8 testing
DAU Risk Management Plan template
Risk management
Systems engineering
Project management | Risk management plan | [
"Engineering"
] | 701 | [
"Systems engineering"
] |
1,503,750 | https://en.wikipedia.org/wiki/Rolling%20resistance | Rolling resistance, sometimes called rolling friction or rolling drag, is the force resisting the motion when a body (such as a ball, tire, or wheel) rolls on a surface. It is mainly caused by non-elastic effects; that is, not all the energy needed for deformation (or movement) of the wheel, roadbed, etc., is recovered when the pressure is removed. Two forms of this are hysteresis losses (see below), and permanent (plastic) deformation of the object or the surface (e.g. soil). Slippage between the wheel and the surface also results in energy dissipation. Although some researchers include this term in rolling resistance, others suggest that this dissipation term should be treated separately from rolling resistance because it is due to the torque applied to the wheel and the resultant slip between the wheel and ground, which is called slip loss or slip resistance. In addition, only the so-called slip resistance involves friction; the name "rolling friction" is therefore to an extent a misnomer.
Analogous with sliding friction, rolling resistance is often expressed as a coefficient times the normal force. This coefficient of rolling resistance is generally much smaller than the coefficient of sliding friction.
Any coasting wheeled vehicle will gradually slow down due to rolling resistance including that of the bearings, but a train car with steel wheels running on steel rails will roll farther than a bus of the same mass with rubber tires running on tarmac/asphalt. Factors that contribute to rolling resistance are the (amount of) deformation of the wheels, the deformation of the roadbed surface, and movement below the surface. Additional contributing factors include wheel diameter, load on wheel, surface adhesion, sliding, and relative micro-sliding between the surfaces of contact. The losses due to hysteresis also depend strongly on the material properties of the wheel or tire and the surface. For example, a rubber tire will have higher rolling resistance on a paved road than a steel railroad wheel on a steel rail. Also, sand on the ground will give more rolling resistance than concrete. Soil rolling resistance factor is not dependent on speed.
Primary cause
The primary cause of pneumatic tire rolling resistance is hysteresis:
A characteristic of a deformable material such that the energy of deformation is greater than the energy of recovery. The rubber compound in a tire exhibits hysteresis. As the tire rotates under the weight of the vehicle, it experiences repeated cycles of deformation and recovery, and it dissipates the hysteresis energy loss as heat. Hysteresis is the main cause of energy loss associated with rolling resistance and is attributed to the viscoelastic characteristics of the rubber.
— National Academy of Sciences
This main principle is illustrated in the figure of the rolling cylinders. If two equal cylinders are pressed together then the contact surface is flat. In the absence of surface friction, contact stresses are normal (i.e. perpendicular) to the contact surface. Consider a particle that enters the contact area at the right side, travels through the contact patch and leaves at the left side. Initially its vertical deformation is increasing, which is resisted by the hysteresis effect. Therefore, an additional pressure is generated to avoid interpenetration of the two surfaces. Later its vertical deformation is decreasing. This is again resisted by the hysteresis effect. In this case this decreases the pressure that is needed to keep the two bodies separate.
The resulting pressure distribution is asymmetrical and is shifted to the right. The line of action of the (aggregate) vertical force no longer passes through the centers of the cylinders. This means that a moment occurs that tends to retard the rolling motion.
Materials that have a large hysteresis effect, such as rubber, which bounce back slowly, exhibit more rolling resistance than materials with a small hysteresis effect that bounce back more quickly and more completely, such as steel or silica. Low rolling resistance tires typically incorporate silica in place of carbon black in their tread compounds to reduce low-frequency hysteresis without compromising traction. Note that railroads also have hysteresis in the roadbed structure.
Definitions
In the broad sense, specific "rolling resistance" (for vehicles) is the force per unit vehicle weight required to move the vehicle on level ground at a constant slow speed where aerodynamic drag (air resistance) is insignificant and also where there are no traction (motor) forces or brakes applied. In other words, the vehicle would be coasting if it were not for the force to maintain constant speed. This broad sense includes wheel bearing resistance, the energy dissipated by vibration and oscillation of both the roadbed and the vehicle, and sliding of the wheel on the roadbed surface (pavement or a rail).
But there is an even broader sense that would include energy wasted by wheel slippage due to the torque applied from the engine. This includes the increased power required due to the increased velocity of the wheels where the tangential velocity of the driving wheel(s) becomes greater than the vehicle speed due to slippage. Since power is equal to force times velocity and the wheel velocity has increased, the power required has increased accordingly.
The pure "rolling resistance" for a train is that which happens due to deformation and possible minor sliding at the wheel-road contact. For a rubber tire, an analogous energy loss happens over the entire tire, but it is still called "rolling resistance". In the broad sense, "rolling resistance" includes wheel bearing resistance, energy loss by shaking both the roadbed (and the earth underneath) and the vehicle itself, and by sliding of the wheel, road/rail contact. Railroad textbooks seem to cover all these resistance forces but do not call their sum "rolling resistance" (broad sense) as is done in this article. They just sum up all the resistance forces (including aerodynamic drag) and call the sum basic train resistance (or the like).
Since railroad rolling resistance in the broad sense may be a few times larger than the pure rolling resistance alone, reported values may be in serious conflict, since they may be based on different definitions of "rolling resistance". The train's engines must, of course, provide the energy to overcome this broad-sense rolling resistance.
For tires, rolling resistance is defined as the energy consumed by a tire per unit distance covered. It is also called rolling friction or rolling drag. It is one of the forces that act to oppose the motion of the vehicle. The main cause is that, as the tire rotates and touches the surface, the contact region changes shape, deforming the tire.
For highway motor vehicles, there is some energy dissipated in shaking the roadway (and the earth beneath it), the shaking of the vehicle itself, and the sliding of the tires. But, other than the additional power required due to torque and wheel bearing friction, non-pure rolling resistance doesn't seem to have been investigated, possibly because the "pure" rolling resistance of a rubber tire is several times higher than the neglected resistances.
Rolling resistance coefficient
The "rolling resistance coefficient" is defined by the following equation:
where
is the rolling resistance force (shown as in figure 1),
is the dimensionless rolling resistance coefficient or coefficient of rolling friction (CRF), and
is the normal force, the force perpendicular to the surface on which the wheel is rolling.
$C_{rr}$ is the force needed to push (or tow) a wheeled vehicle forward (at constant speed on a level surface, or zero grade, with zero air resistance) per unit force of weight. It is assumed that all wheels are the same and bear identical weight. Thus: $C_{rr} = 0.01$ means that it would only take 0.01 pounds to tow a vehicle weighing one pound. For a 1000-pound vehicle, it would take 1000 times more tow force, i.e. 10 pounds. One could say that $C_{rr}$ is in lb(tow-force)/lb(vehicle weight). Since this lb/lb is force divided by force, $C_{rr}$ is dimensionless. Multiply it by 100 and you get the percent (%) of the weight of the vehicle required to maintain slow steady speed. $C_{rr}$ is often multiplied by 1000 to get the parts per thousand, which is the same as kilograms (kg force) per metric ton (tonne = 1000 kg), which is the same as pounds of resistance per 1000 pounds of load or newtons/kilo-newton, etc. For the US railroads, lb/ton has been traditionally used; this is just $2000 \times C_{rr}$. Thus, they are all just measures of resistance per unit vehicle weight. While they are all "specific resistances", sometimes they are just called "resistance" although they are really a coefficient (ratio) or a multiple thereof. If using pounds or kilograms as force units, mass is equal to weight (in earth's gravity a kilogram of mass weighs a kilogram and exerts a kilogram of force) so one could claim that $C_{rr}$ is also the force per unit mass in such units. The SI system would use N/tonne (N/T, N/t), which is $1000\,g\,C_{rr}$ and is force per unit mass, where $g$ is the acceleration of gravity in SI units (metres per second squared).
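As a minimal sketch, the unit conversions above can be collected in a few lines of Python (the function name and the example value of 0.01 are illustrative only):

```python
def crr_conversions(crr, g=9.81):
    """Express a dimensionless rolling resistance coefficient in common units."""
    return {
        "percent of vehicle weight": 100 * crr,   # % of weight at slow steady speed
        "kg force per tonne": 1000 * crr,         # parts per thousand
        "lb per US ton (railroad)": 2000 * crr,   # traditional US railroad measure
        "N per tonne": 1000 * g * crr,            # SI force per unit mass
    }

print(crr_conversions(0.01))
# {'percent of vehicle weight': 1.0, 'kg force per tonne': 10.0,
#  'lb per US ton (railroad)': 20.0, 'N per tonne': 98.1}  (floating point, approximate)
```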
The above shows resistance proportional to $N$ but does not explicitly show any variation with speed, loads, torque, surface roughness, diameter, tire inflation/wear, etc., because $C_{rr}$ itself varies with those factors. It might seem from the above definition of $C_{rr}$ that the rolling resistance is directly proportional to vehicle weight, but it is not.
Measurement
There are at least two popular models for calculating rolling resistance.
"Rolling resistance coefficient (RRC). The value of the rolling resistance force divided by the wheel load. The Society of Automotive Engineers (SAE) has developed test practices to measure the RRC of tires. These tests (SAE J1269 and SAE J2452) are usually performed on new tires. When measured by using these standard test practices, most new passenger tires have reported RRCs ranging from 0.007 to 0.014." In the case of bicycle tires, values of 0.0025 to 0.005 are achieved. These coefficients are measured on rollers, with power meters on road surfaces, or with coast-down tests. In the latter two cases, the effect of air resistance must be subtracted or the tests performed at very low speeds.
The coefficient of rolling resistance b, which has the dimension of length, is approximately (due to the small-angle approximation of $\cos\theta = 1$) equal to the value of the rolling resistance force times the radius of the wheel divided by the wheel load.
ISO 18164:2005 is used to test rolling resistance in Europe.
The results of these tests can be hard for the general public to obtain as manufacturers prefer to publicize "comfort" and "performance".
Physical formulae
The coefficient of rolling resistance for a slow rigid wheel on a perfectly elastic surface, not adjusted for velocity, can be calculated by

$C_{rr} = \sqrt{z/d}$

where
$z$ is the sinkage depth, and
$d$ is the diameter of the rigid wheel.
The empirical formula for $C_{rr}$ for cast iron mine car wheels on steel rails is:

$C_{rr} = 0.0048\,(18/D)^{1/2}\,(100/W)^{1/4}$

where
$D$ is the wheel diameter in inches, and
$W$ is the load on the wheel in pounds-force.
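As a small illustration of the rigid-wheel formula (a sketch only; the sinkage and diameter values are assumed examples, not measurements):

```python
import math

def crr_rigid_wheel(z, d):
    """Rolling resistance coefficient of a slow rigid wheel on a
    perfectly elastic surface: Crr = sqrt(z / d)."""
    return math.sqrt(z / d)

# Assumed example: 1 mm of sinkage for a 700 mm diameter wheel
print(round(crr_rigid_wheel(0.001, 0.700), 4))  # 0.0378
```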
As an alternative to using $C_{rr}$ one can use $b$, which is a different rolling resistance coefficient or coefficient of rolling friction with dimension of length. It is defined by the following formula:

$F = \frac{N b}{r}$

where
$F$ is the rolling resistance force (shown in figure 1),
$r$ is the wheel radius,
$b$ is the rolling resistance coefficient or coefficient of rolling friction with dimension of length, and
$N$ is the normal force (equal to W, not R, as shown in figure 1).

The above equation, where resistance is inversely proportional to radius $r$, seems to be based on the discredited "Coulomb's law" (neither Coulomb's inverse square law nor Coulomb's law of friction); see dependence on diameter. Equating this equation with the force per the rolling resistance coefficient, and solving for $b$, gives $b = C_{rr}\,r$. Therefore, if a source gives a rolling resistance coefficient ($C_{rr}$) as a dimensionless coefficient, it can be converted to $b$, having units of length, by multiplying $C_{rr}$ by wheel radius $r$.
Rolling resistance coefficient examples
Table of rolling resistance coefficient examples:
For example, in earth gravity, a car of 1000 kg on asphalt will need a force of around 100 newtons to keep rolling (1000 kg × 9.81 m/s² × 0.01 = 98.1 N).
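A minimal sketch of that calculation, using the same numbers as the example above:

```python
def rolling_resistance_force(mass_kg, crr, g=9.81):
    """F = Crr * N, with normal force N = m * g on level ground."""
    return crr * mass_kg * g

print(round(rolling_resistance_force(1000, 0.01), 1))  # 98.1 N, roughly 100 N
```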
Dependence on diameter
Stagecoaches and railroads
According to Dupuit (1837), rolling resistance (of wheeled carriages with wooden wheels with iron tires) is approximately inversely proportional to the square root of wheel diameter. This rule has been experimentally verified for cast iron wheels (8″ to 24″ diameter) on steel rail and for 19th-century carriage wheels, but there are other tests on carriage wheels that do not agree. The theory of a cylinder rolling on an elastic roadway also gives this same rule. These results contradict earlier (1785) tests by Coulomb of rolling wooden cylinders, in which Coulomb reported that rolling resistance was inversely proportional to the diameter of the wheel (known as "Coulomb's law"). This disputed (or wrongly applied) "Coulomb's law" is still found in handbooks, however.
Pneumatic tires
For pneumatic tires on hard pavement, it is reported that the effect of diameter on rolling resistance is negligible (within a practical range of diameters).
Dependence on applied torque
The driving torque $T$ to overcome rolling resistance $F$ and maintain steady speed on level ground (with no air resistance) can be calculated by:

$T = \frac{V_s}{\Omega} F$

where
$V_s$ is the linear speed of the body (at the axle), and
$\Omega$ its rotational speed.
It is noteworthy that $V_s/\Omega$ is usually not equal to the radius of the rolling body, as a result of wheel slip. Slip between wheel and ground inevitably occurs whenever a driving or braking torque is applied to the wheel. Consequently, the linear speed of the vehicle differs from the wheel's circumferential speed. Notably, slip does not occur in undriven (free-rolling) wheels, which are not subjected to driving torque, except under braking. Therefore, rolling resistance, namely hysteresis loss, is the main source of energy dissipation in undriven wheels or axles, whereas in the drive wheels and axles slip resistance, namely loss due to wheel slip, plays a role alongside rolling resistance. The significance of rolling versus slip resistance depends largely on the tractive force, coefficient of friction, normal load, etc.
All wheels
"Applied torque" may either be driving torque applied by a motor (often through a transmission) or a braking torque applied by brakes (including regenerative braking). Such torques results in energy dissipation (above that due to the basic rolling resistance of a freely rolling, i.e. except slip resistance). This additional loss is in part due to the fact that there is some slipping of the wheel, and for pneumatic tires, there is more flexing of the sidewalls due to the torque. Slip is defined such that a 2% slip means that the circumferential speed of the driving wheel exceeds the speed of the vehicle by 2%.
A small percentage slip can result in a slip resistance which is much larger than the basic rolling resistance. For example, for pneumatic tires, a 5% slip can translate into a 200% increase in rolling resistance. This is partly because the tractive force applied during this slip is many times greater than the rolling resistance force and thus much more power per unit velocity is being applied (recall power = force x velocity so that power per unit of velocity is just force). So just a small percentage increase in circumferential velocity due to slip can translate into a loss of traction power which may even exceed the power loss due to basic (ordinary) rolling resistance. For railroads, this effect may be even more pronounced due to the low rolling resistance of steel wheels.
It is shown that for a passenger car, when the tractive force is about 40% of the maximum traction, the slip resistance is almost equal to the basic rolling resistance (hysteresis loss). But in case of a tractive force equal to 70% of the maximum traction, slip resistance becomes 10 times larger than the basic rolling resistance.
Railroad steel wheels
In order to apply any traction to the wheels, some slippage of the wheel is required. For trains climbing up a grade, this slip is normally 1.5% to 2.5%.
Slip (also known as creep) is normally roughly directly proportional to tractive effort. An exception is if the tractive effort is so high that the wheel is close to substantial slipping (more than just a few percent as discussed above), then slip rapidly increases with tractive effort and is no longer linear. With a little higher applied tractive effort the wheel spins out of control and the adhesion drops resulting in the wheel spinning even faster. This is the type of slipping that is observable by eye—the slip of say 2% for traction is only observed by instruments. Such rapid slip may result in excessive wear or damage.
Pneumatic tires
Rolling resistance greatly increases with applied torque. At high torques, which apply a tangential force to the road of about half the weight of the vehicle, the rolling resistance may triple (a 200% increase). This is in part due to a slip of about 5%. The rolling resistance increase with applied torque is not linear, but increases at a faster rate as the torque becomes higher.
Dependence on wheel load
Railroad steel wheels
The rolling resistance coefficient, Crr, significantly decreases as the weight of the rail car per wheel increases. For example, an empty freight car had about twice the Crr as a loaded car (Crr=0.002 vs. Crr=0.001). This same "economy of scale" shows up in testing of mine rail cars. The theoretical Crr for a rigid wheel rolling on an elastic roadbed shows Crr inversely proportional to the square root of the load.
If Crr is itself dependent on wheel load per an inverse square-root rule, then for an increase in load of 2% only a 1% increase in rolling resistance occurs.
Pneumatic tires
For pneumatic tires, the direction of change in Crr (rolling resistance coefficient) depends on whether or not tire inflation pressure is increased with increasing load. It is reported that, if inflation pressure is increased with load according to an (undefined) "schedule", then a 20% increase in load decreases Crr by 3%. But if the inflation pressure is not changed, then a 20% increase in load results in a 4% increase in Crr. In the latter case, the rolling resistance force will increase by 20% due to the increase in load, plus 1.2 × 4% due to the increase in Crr, resulting in a 24.8% increase in rolling resistance.
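A quick check of that arithmetic (a sketch; the 20% and 4% figures are the ones quoted above):

```python
load_factor = 1.20  # 20% increase in load
crr_factor = 1.04   # 4% increase in Crr at unchanged inflation pressure

# Rolling resistance force scales as Crr times load
force_factor = load_factor * crr_factor
print(f"{(force_factor - 1) * 100:.1f}% increase")  # 24.8% increase
```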
Dependence on curvature of roadway
General
When a vehicle (motor vehicle or railroad train) goes around a curve, rolling resistance usually increases. If the curve is not banked so as to exactly counter the centrifugal force with an equal and opposing centripetal force due to the banking, then there will be a net unbalanced sideways force on the vehicle.
This will result in increased rolling resistance. Banking is also known as "superelevation" or "cant" (not to be confused with the cant of a rail). For railroads, this is called curve resistance, but for roads it has (at least once) been called rolling resistance due to cornering.
Sound
Rolling friction generates sound (vibrational) energy, as mechanical energy is converted to this form of energy due to the friction. One of the most common examples of rolling friction is the movement of motor vehicle tires on a roadway, a process which generates sound as a by-product. The sound generated by automobile and truck tires as they roll (especially noticeable at highway speeds) is mostly due to the percussion of the tire treads, and compression (and subsequent decompression) of air temporarily captured within the treads.
Factors that contribute in tires
Several factors affect the magnitude of rolling resistance a tire generates:
As mentioned in the introduction: wheel radius, forward speed, surface adhesion, and relative micro-sliding.
Material - different fillers and polymers in tire composition can improve traction while reducing hysteresis. The replacement of some carbon black with higher-priced silica–silane is one common way of reducing rolling resistance. The use of exotic materials including nano-clay has been shown to reduce rolling resistance in high performance rubber tires. Solvents may also be used to swell solid tires, decreasing the rolling resistance.
Dimensions - rolling resistance in tires is related to the flex of sidewalls and the contact area of the tire. For example, at the same pressure, wider bicycle tires flex less in the sidewalls as they roll and thus have lower rolling resistance (although higher air resistance).
Extent of inflation - Lower pressure in tires results in more flexing of the sidewalls and higher rolling resistance. This energy conversion in the sidewalls increases resistance and can also lead to overheating and may have played a part in the infamous Ford Explorer rollover accidents.
Over-inflating tires (such as bicycle tires) may not lower the overall rolling resistance, as the tire may skip and hop over the road surface. Traction is sacrificed, and overall rolling friction may not be reduced as the wheel rotational speed changes and slippage increases.
Sidewall deflection is not a direct measurement of rolling friction. A high quality tire with a high quality (and supple) casing will allow for more flex per energy loss than a cheap tire with a stiff sidewall. Again, on a bicycle, a quality tire with a supple casing will still roll easier than a cheap tire with a stiff casing. Similarly, as noted by Goodyear truck tires, a tire with a "fuel saving" casing will benefit the fuel economy through many tread lives (i.e. retreading), while a tire with a "fuel saving" tread design will only benefit until the tread wears down.
In tires, tread thickness and shape has much to do with rolling resistance. The thicker and more contoured the tread, the higher the rolling resistance. Thus, the "fastest" bicycle tires have very little tread, and heavy duty trucks get the best fuel economy as the tire tread wears out.
Diameter effects seem to be negligible, provided the pavement is hard and the range of diameters is limited. See dependence on diameter.
Virtually all world speed records have been set on relatively narrow wheels, probably because of their aerodynamic advantage at high speed, which is much less important at normal speeds.
Temperature: with both solid and pneumatic tires, rolling resistance has been found to decrease as temperature increases (within a range of temperatures: i.e. there is an upper limit to this effect). For a rise in temperature from 30 °C to 70 °C, the rolling resistance decreased by 20-25%. Racers heat their tires before racing, but this is primarily done to increase tire friction rather than to decrease rolling resistance.
Railroads: Components of rolling resistance
In a broad sense, rolling resistance can be defined as the sum of the following components:
Wheel bearing torque losses.
Pure rolling resistance.
Sliding of the wheel on the rail.
Loss of energy to the roadbed (and earth).
Loss of energy to oscillation of railway rolling stock.
Wheel bearing torque losses can be measured as a rolling resistance at the wheel rim, Crr. Railroads normally use roller bearings which are either cylindrical (Russia) or tapered (United States). The specific rolling resistance in bearings varies with both wheel loading and speed. Wheel bearing rolling resistance is lowest with high axle loads and intermediate speeds of 60–80 km/h with a Crr of 0.00013 (axle load of 21 tonnes). For empty freight cars with axle loads of 5.5 tonnes, Crr goes up to 0.00020 at 60 km/h but at a low speed of 20 km/h it increases to 0.00024 and at a high speed (for freight trains) of 120 km/h it is 0.00028. The Crr obtained above is added to the Crr of the other components to obtain the total Crr for the wheels.
Comparing rolling resistance of highway vehicles and trains
The rolling resistance of the steel wheels of a train on steel rail is far less than that of the rubber tires of an automobile or truck. The weight of trains varies greatly; in some cases they may be much heavier per passenger or per net ton of freight than an automobile or truck, but in other cases they may be much lighter.
As an example of a very heavy passenger train, in 1975, Amtrak passenger trains weighed a little over 7 tonnes per passenger, which is much heavier than an average of a little over one ton per passenger for an automobile. This means that for an Amtrak passenger train in 1975, much of the energy savings of the lower rolling resistance was lost to its greater weight.
An example of a very light high-speed passenger train is the N700 Series Shinkansen, which weighs 715 tonnes and carries 1323 passengers, resulting in a per-passenger weight of about half a tonne. This lighter weight per passenger, combined with the lower rolling resistance of steel wheels on steel rail means that an N700 Shinkansen is much more energy efficient than a typical automobile.
In the case of freight, CSX ran an advertisement campaign in 2013 claiming that their freight trains move "a ton of freight 436 miles on a gallon of fuel", whereas some sources claim trucks move a ton of freight about 130 miles per gallon of fuel, indicating trains are more efficient overall.
See also
Coefficient of friction
Low-rolling resistance tires
Maglev (Magnetic Levitation, the elimination of rolling and thus rolling resistance)
Rolling element bearing
References
Астахов П.Н. "Сопротивление движению железнодорожного подвижного состава" (Resistance to motion of railway rolling stock). Труды ЦНИИ МПС (ISSN 0372-3305), Выпуск 311 (Vol. 311). Москва: Транспорт, 1966. 178 pp. Permanent record at UC Berkeley. (In 2012 the full text was available on the Internet, but access from the U.S. was blocked.)
Деев В.В., Ильин Г.А., Афонин Г.С. "Тяга поездов" (Traction of trains) Учебное пособие. - М.: Транспорт, 1987. - 264 pp.
Hay, William W. "Railroad Engineering" New York, Wiley 1953
Hersey, Mayo D., "Rolling Friction", Transactions of the ASME, April 1969, pp. 260–275, and Journal of Lubrication Technology, January 1970, pp. 83–88 (one article split between two journals). Except for the "Historical Introduction" and a survey of the literature, it is mainly about laboratory testing of mine railroad cast iron wheels of diameters 8″ to 24″ done in the 1920s (almost a half-century delay between experiment and publication).
Hoerner, Sighard F., "Fluid dynamic drag", published by the author, 1965. (Chapt. 12 is "Land-Borne Vehicles" and includes rolling resistance (trains, autos, trucks).)
Roberts, G. B., "Power wastage in tires", International Rubber Conference, Washington, D.C. 1959.
U.S National Bureau of Standards, "Mechanics of Pneumatic Tires", Monograph #132, 1969–1970.
Williams, J. A. "Engineering Tribology". Oxford University Press, 1994.
External links
Rolling Resistance and Fuel Saving
temperature vs rolling resistance
Simple roll-down test to measure Crr in cars and bikes
Rolling Resistance Thresholds
Classical mechanics
Energy economics
Energy in transport
Transport economics
Vehicle dynamics | Rolling resistance | [
"Physics",
"Environmental_science"
] | 5,852 | [
"Energy economics",
"Classical mechanics",
"Physical systems",
"Transport",
"Mechanics",
"Energy in transport",
"Environmental social science"
] |
1,503,867 | https://en.wikipedia.org/wiki/DAPI | DAPI (pronounced 'DAPPY', /ˈdæpiː/), or 4′,6-diamidino-2-phenylindole, is a fluorescent stain that binds strongly to adenine–thymine-rich regions in DNA. It is used extensively in fluorescence microscopy. As DAPI can pass through an intact cell membrane, it can be used to stain both live and fixed cells, though it passes through the membrane less efficiently in live cells and therefore provides a marker for membrane viability.
History
DAPI was first synthesised in 1971 in the laboratory of Otto Dann as part of a search for drugs to treat trypanosomiasis. Although it was unsuccessful as a drug, further investigation indicated it bound strongly to DNA and became more fluorescent when bound. This led to its use in identifying mitochondrial DNA in ultracentrifugation in 1975, the first recorded use of DAPI as a fluorescent DNA stain.
Strong fluorescence when bound to DNA led to the rapid adoption of DAPI for fluorescent staining of DNA for fluorescence microscopy. Its use for detecting DNA in plant, metazoa and bacteria cells and virus particles was demonstrated in the late 1970s, and quantitative staining of DNA inside cells was demonstrated in 1977. Use of DAPI as a DNA stain for flow cytometry was also demonstrated around this time.
Fluorescence properties
When bound to double-stranded DNA, DAPI has an absorption maximum at a wavelength of 358 nm (ultraviolet) and its emission maximum is at 461 nm (blue). Therefore, for fluorescence microscopy, DAPI is excited with ultraviolet light and is detected through a blue/cyan filter. The emission peak is fairly broad. DAPI will also bind to RNA, though it is not as strongly fluorescent. Its emission shifts to around 500 nm when bound to RNA.
DAPI's blue emission is convenient for microscopists who wish to use multiple fluorescent stains in a single sample. There is some fluorescence overlap between DAPI and green-fluorescent molecules like fluorescein and green fluorescent protein (GFP) but the effect of this is small. Use of spectral unmixing can account for this effect if extremely precise image analysis is required.
Outside of analytical fluorescence light microscopy DAPI is also popular for labeling of cell cultures to detect the DNA of contaminating Mycoplasma or virus. The labelled Mycoplasma or virus particles in the growth medium fluoresce once stained by DAPI making them easy to detect.
Modelling of absorption and fluorescence properties
This DNA fluorescent probe has been effectively modeled using time-dependent density functional theory coupled with the IEF version of the polarizable continuum model. This quantum-mechanical modeling has rationalized the absorption and fluorescence behavior arising from minor groove binding and intercalation in the DNA pocket, in terms of reduced structural flexibility and polarization.
Live cells and toxicity
DAPI can be used for fixed-cell staining. The concentration of DAPI needed for live-cell staining is generally very high; it is rarely used for live cells. It is labeled non-toxic in its MSDS, and though it was not shown to be mutagenic to E. coli, it is labeled as a known mutagen in manufacturer information. As it is a small DNA-binding compound, it is likely to have some carcinogenic effects, and care should be taken in its handling and disposal.
Alternatives
The Hoechst stains are similar to DAPI in that they are also blue-fluorescent DNA stains which are compatible with both live- and fixed-cell applications, as well as visible using the same equipment filter settings as for DAPI.
References
See also
DNA binding ligand
Hoechst stain
Lexitropsin
Netropsin
Pentamidine
Staining dyes
Fluorescent dyes
DNA-binding substances
Indoles
Amidines | DAPI | [
"Chemistry",
"Biology"
] | 791 | [
"Genetics techniques",
"Amidines",
"Functional groups",
"DNA-binding substances",
"Bases (chemistry)"
] |
1,503,921 | https://en.wikipedia.org/wiki/Hidden%20track | In the field of recorded music, a hidden track (sometimes called a ghost track, secret track or unlisted track) is a song or a piece of audio that has been placed on a CD, audio cassette, LP record, or other recorded medium, in such a way as to avoid detection by the casual listener. In some cases, the piece of music may simply have been left off the track listing, while in other cases, more elaborate methods are used. In rare cases, a 'hidden track' is actually the result of an error that occurred during the mastering stage production of the recorded media. However, since the rise of digital and streaming services such as iTunes and Spotify in the late 2000s and early 2010s, the inclusion of hidden tracks has declined on studio albums.
It is occasionally unclear whether a piece of music is 'hidden.' For example, "Her Majesty," which is preceded by fourteen seconds of silence, was originally unlisted on The Beatles' Abbey Road but is listed on current versions of the album. That song and others push the definition of the term, causing a lack of consensus on what is considered a hidden track. Alternatively, such things are instead labeled as vague audio experiments, errors, or simply an integral part of an adjacent song on the record.
Techniques
A vinyl record may be double-grooved, with the second groove containing the hidden tracks. Examples of double-grooving include Monty Python's 'three-sided' Matching Tie and Handkerchief, Tool's Opiate EP, and Mr. Bungle's Disco Volante.
With the invention of digital media and compact discs, alternative methods for hiding unlisted tracks were conceived. With a similar aim of concealment, unlisted tracks are sometimes given their own separate index point on digital media. Songs can be placed in the pregap of the first track of certain CD formats, so that the CD must first be cued to the track, and then manually back-scanned. These are often referred to as Track 0 or Hidden Track One Audio (HTOA). A CD player will not play these tracks without manual intervention, and some models (including many computer operating systems) are unable to read such content. On Super Furry Animals' Guerrilla, "The Citizens Band" is found in the pre-gap approximately five minutes before the beginning of track one. A glossary of terms used in the song's lyrics is printed on the interior of the cardboard outer sleeve of the CD. This essentially renders them inaccessible without taking the sleeve apart, hiding the glossary in a parallel way to the song itself.
A less concealed method is to place the song at the end of another track, typically the last track on the album, following a period of silence. For example, Nirvana's song "Endless, Nameless" was included as a hidden track in this way on their 1991 CD Nevermind, after 10 minutes of complete silence within the track listed as the final song. Although it was not the first hidden song to use this technique, it gained significant attention. Similarly, short tracks of silence can be layered before the hidden track plays. On Lazlo Bane's debut album, 11 Transistor, the eleventh song is followed by 57 silent tracks, each four seconds in duration, with "Prada Wallet" (sometimes referred to as "The Birthday Song") being the 69th track on the album. The total length of silence between the two songs is 3:48.
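The length of that silent gap follows directly from the track layout described above; a one-line check of the arithmetic (a sketch only):

```python
silent_tracks, seconds_each = 57, 4
total = silent_tracks * seconds_each  # 228 seconds of silence
print(f"{total // 60}:{total % 60:02d}")  # 3:48
```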
It is possible for a track to be playable only through a computer, such as the '15th' track on Marilyn Manson's Mechanical Animals album, which can only be accessed through an Enhanced CD executable.
There are yet-deeper ways a track can be hidden. A "ghost" track can be subtly mixed to play concurrently with other, dominant audio, or heavily distorted in a way which must be undone to be played. For example, on a DVD included with the deluxe and 'ultra-deluxe' editions of Nine Inch Nails' Ghosts I–IV, two hidden bonus tracks ("37 Ghosts" and "38 Ghosts") are included as digital multitrack files, from which the songs may be reconstructed.
Reasoning
Aaliyah's self-titled album Aaliyah features the hidden song "Messed Up" on track 15. During the album's creation, Aaliyah had no desire to put this song on the album, but after numerous inquiries from different labels and colleagues, she settled on making it a hidden track.
In some rare cases, it is used to avoid legal issues. An example is Ramones' Loco Live American version, which has the song "Carbona Not Glue" hidden after Pet Sematary on track 17. It was originally recorded on their album Leave Home, but the makers of the spot remover Carbona, a registered trademark, objected. Therefore, reference to the song was removed from the album and cover.
"Freedom" by Paul McCartney was a hidden track on the original release of Driving Rain. It was later added as a track on the re-release. The track was not meant to be hidden; it was a tribute to 9/11 victims, and McCartney wanted it on the album. The artwork was already finalised, so there was no choice but to make it a hidden track.
"Train in Vain" by The Clash, which appears at the end of London Calling, was left out of the vinyl's track listing simply because it was a last-minute addition to the album, when the sleeves were already printed. It is thus not a real hidden track. It was originally intended as a promotional giveaway for NME. The later CD versions list the track on the sleeve.
Green Day's "All By Myself" (by drummer Tré Cool) was added as a secret song to Dookie due to the low sound quality of the original live recording.
"Weird Al" Yankovic's "Bite Me" from the album Off the Deep End was put on after ten minutes of silence to scare listeners who had forgotten to turn off the CD player. It was also a loose parody of "Endless, Nameless" by Nirvana. The cover of Off the Deep End is also a parody of the album containing that track, Nevermind, and its first track is a parody of that album's first track, "Smells Like Teen Spirit".
The X-Files: The Album features a hidden track at 10 minutes and 13 seconds into the final track. The track consists of series creator Chris Carter explaining the series mythology and meaning behind the alien conspiracy. The hidden track even includes spoilers and minute details about the show's overall plot that had not yet been resolved on the show itself when the album was released. This track was included as both a surprise to devoted fans who would seek out answers in cross-promotional merchandise, and as a mystery to new fans who would need to watch the show more closely to better understand the track.
Eugene Mirman's album The Absurd Night Club Comedy of Eugene Mirman includes a hidden track making fun of hidden tracks, and telling the listener that they have a very bizarre mission.
The Jam's All Mod Cons does not list the song "English Rose" and its lyrics on original vinyl copies because Paul Weller believed the title and song would lose meaning without accompanying music. They have been added to re-releases of the album.
Skip Spence's "Land of the Sun" was included as a hidden track by producer Bill Bentley to specifically close a tribute album to Spence, More Oar: A Tribute to the Skip Spence Album.
Oasis' compilation album Time Flies features the single "Sunday Morning Call" as a hidden track. The album was an anthology of all of the band's singles, but principal songwriter Noel Gallagher openly detests the song, so chose to have it hidden.
Oasis' studio album Heathen Chemistry also features the hidden instrumental track "The Cage". On CD versions it can be heard after the listed track "Better Man", after nearly 30 minutes of silence. Some streaming services have removed the 30-minute silence and list the track on their platforms, although it was hidden on the original release.
311's Transistor album contains an instrumental intro track that was performed on their 1996 tour, often referred to as the "Transistor Intro".
Notable hidden tracks
Some hidden tracks are historically significant, have become well known and even occasionally received radio airplay and climbed the charts.
The Beatles' track "Her Majesty" from their 1969 album Abbey Road is considered a hidden track. It was originally a part of the medley on side two of the album, before Paul McCartney requested that it be removed; the engineer who edited it out of the rough mix placed it after the medley to preserve it, and when the Beatles heard it there, they decided to place it there on the album. The original pressings of Abbey Road did not list "Her Majesty" on the back cover song title listing, nor the record label; subsequent LP pressings and then CD issues were issued revealing the track. However, two years prior, in 1967, on the UK version of the Sgt. Pepper's Lonely Hearts Club Band album, there was the "inner groove" that appeared after "A Day in the Life" at the end of side two. It was an unexpected, untitled, and un-credited Beatles recording; so this might be deemed a precursor to the hidden track. A potential hidden track on yet another Beatles album is on The Beatles (also known popularly as The White Album) 1968 double album. The hidden track is a snippet of a song called "Can You Take Me Back", serving as an "outro" to "Cry Baby Cry".
Nirvana put the hidden song "Endless, Nameless" 10 minutes after the last listed track on their 1991 album Nevermind. It was the first prominent hidden track in the CD era and inspired a slew of hidden tracks on albums in the following years. Lead singer Kurt Cobain said he got the idea from when he would make mix tapes for his friends and then add a secret song after a long silent gap at the end, to startle them. Interestingly, some of the initial pressings of the album accidentally omitted the secret track because the person pressing the album thought it was not meant to be there. This was quickly corrected in subsequent pressings after the band let the label know.
Janet Jackson's track "Whoops Now", a hidden track from her album janet., was released as a single, and reached number nine in UK Singles Charts, and number one in New Zealand Singles Chart.
The Rembrandts had a sudden radio hit in 1995 with "I'll Be There for You", the theme song to Friends, so it was added at the last minute to their third album LP. As a result, the song was a hidden track on the early printing, since the CD packaging had already been completed by the time the song was added. However, a sticker was added to the outer shrink wrap advising the song's inclusion.
Eels' album Daisies of the Galaxy contains a hidden track, "Mr. E's Beautiful Blues", which was released as a single, and received radio airplay, although it was not featured on the sleeve notes. The song was, in fact, released as the first single from the album, and peaked at number 11 on the UK Singles Chart.
Cracker's "Euro-Trash Girl", an original, was one of their biggest radio hits, despite being a hidden track on Kerosene Hat.
"Skin (Sarabeth)" by Rascal Flatts, a hidden track from their 2004 album Feels Like Today, received enough airplay to chart in the Top 40 on the country charts, peaking at number 2 in late 2005. In mid-2005, the album was re-issued, with the song officially listed as a track, coinciding with the song's release as a single.
Of the two hidden tracks on Lauryn Hill's The Miseducation of Lauryn Hill, one of them, the cover of "Can't Take My Eyes Off You" was nominated for a Grammy in 1999 in the category of 'Best Female Pop Vocal Performance'. It was the first time a hidden track was nominated for a Grammy.
One of the hidden tracks on P!nk's fourth album, "I Have Seen the Rain", gained significant attention by P!nk fans, as her father, James T. Moore, was featured on the song.
Peter, Paul and Mary's 2003 album, In These Times, includes, after 25 seconds of silence following "Oh, Had I a Golden Thread", a hidden live track of the Spanish folk song "Mi Caballo Blanco", although it was listed in the box set Carry It On. The track was later officially listed on their 2014 album Discovered: Live in Concert.
Tally Hall's 2005 album Marvin's Marvelous Mechanical Museum had a hidden track, aptly titled "Hidden in the Sand", that would prove to be the band's most successful song, gaining over 35 million plays on YouTube and over 280 million on Spotify.
My Chemical Romance put the hidden track "Blood" after the final song on their 2006 rock opera The Black Parade, though it would be omitted on Japanese editions of the album.
Coldplay's song "O", from their 2014 album Ghost Stories, is composed mainly of a hidden track, called "Fly On". This track made it into charts in the UK, France, and the US, peaking at #9 on the US Rock Digital Song Sales. "Fly On" appeared on their 2014 live album, with "O" being replaced by its reprise.
Fall Out Boy placed a hidden track titled "Lullabye" right before the start of their 2008 album Folie à Deux. It is an acoustic ballad influenced by Bob Dylan, written with the intention of helping Pete Wentz' son, Bronx Mowgli, fall asleep. The track is only accessible by placing a CD version of the album into a media player and pressing the previous track button before "Disloyal Order of Water Buffaloes" begins.
Deftones's 1997 album, Around the Fur, has two hidden tracks. "Bong Hit" starts at 19:31 into the last track, "MX". After a further 12:41 of silence (after "Bong Hit"), "Damone" starts.
Incubus's second album, S.C.I.E.N.C.E., has a mix of sounds and music called "Segue 1", which starts after 30 seconds of the track "Calgone".
On early copies of Better than Ezra's 1996 album Friction, Baby, the track "Mejor de Ezra" is contained in the pregap, meaning listeners must start the first track, "King of New Orleans", then rewind the CD to hear the hidden song. Later copies of the album tack this secret track onto the end of the album.
Robbie Williams has had hidden tracks on many of his albums. On his first studio album, Life thru a Lens, the standard edition included one hidden track. His second album, I've Been Expecting You, includes two. By doing this, Williams' regular listeners would likely have expected a hidden track of some sort on the third album, Sing When You're Winning. To play on this, instead of a hidden track appearing on the album, a recording of Williams saying "No, I'm not doing one on this album" plays after 24 minutes of silence.
See also
Easter egg (media)
B-side
List of albums containing a hidden track
List of albums with tracks hidden in the pregap
Backmasking
Bonus track
Sampling (music)
Surprise album
References
External links
Hidden Songs — a user-submitted database of hidden song listings
Songs
Sound recording technology | Hidden track | [
"Technology"
] | 3,252 | [
"Recording devices",
"Sound recording technology"
] |
1,503,963 | https://en.wikipedia.org/wiki/Chebyshev%20distance | In mathematics, Chebyshev distance (or Tchebychev distance), maximum metric, or L∞ metric is a metric defined on a real coordinate space where the distance between two points is the greatest of their differences along any coordinate dimension. It is named after Pafnuty Chebyshev.
It is also known as chessboard distance, since in the game of chess the minimum number of moves needed by a king to go from one square on a chessboard to another equals the Chebyshev distance between the centers of the squares, if the squares have side length one, as represented in 2-D spatial coordinates with axes aligned to the edges of the board. For example, the Chebyshev distance between f6 and e2 equals 4.
Definition
The Chebyshev distance between two vectors or points x and y, with standard coordinates $x_i$ and $y_i$, respectively, is

$D_{\mathrm{Chebyshev}}(x, y) = \max_i |x_i - y_i|.$

This equals the limit of the $L_p$ metrics:

$\lim_{p \to \infty} \left( \sum_{i=1}^{n} |x_i - y_i|^p \right)^{1/p},$

hence it is also known as the L∞ metric.
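A minimal Python sketch of the definition and its relation to the $L_p$ metrics (the sample points are arbitrary):

```python
def chebyshev(x, y):
    """Chebyshev (L-infinity) distance: the greatest coordinate difference."""
    return max(abs(a - b) for a, b in zip(x, y))

def minkowski(x, y, p):
    """Ordinary Lp (Minkowski) distance."""
    return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1 / p)

x, y = (1.0, 5.0, -2.0), (4.0, 1.0, 0.0)
print(chebyshev(x, y))       # 4.0
print(minkowski(x, y, 2))    # ~5.39, the Euclidean distance
print(minkowski(x, y, 64))   # ~4.0, approaching the Chebyshev value as p grows
```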
Mathematically, the Chebyshev distance is a metric induced by the supremum norm or uniform norm. It is an example of an injective metric.
In two dimensions, i.e. plane geometry, if the points p and q have Cartesian coordinates $(x_1, y_1)$ and $(x_2, y_2)$, their Chebyshev distance is

$D_{\mathrm{Chebyshev}} = \max(|x_2 - x_1|, |y_2 - y_1|).$
Under this metric, a circle of radius r, which is the set of points with Chebyshev distance r from a center point, is a square whose sides have the length 2r and are parallel to the coordinate axes.
On a chessboard, where one is using a discrete Chebyshev distance, rather than a continuous one, the circle of radius r is a square of side lengths 2r, measuring from the centers of squares, and thus each side contains 2r+1 squares; for example, the circle of radius 1 on a chess board is a 3×3 square.
Properties
In one dimension, all Lp metrics are equal – they are just the absolute value of the difference.
The two-dimensional Manhattan distance has "circles", i.e. level sets, in the form of squares with sides of length $\sqrt{2}\,r$, oriented at an angle of π/4 (45°) to the coordinate axes, so the planar Chebyshev distance can be viewed as equivalent by rotation and scaling to (i.e. a linear transformation of) the planar Manhattan distance.
However, this geometric equivalence between L1 and L∞ metrics does not generalize to higher dimensions. A sphere formed using the Chebyshev distance as a metric is a cube with each face perpendicular to one of the coordinate axes, but a sphere formed using Manhattan distance is an octahedron: these are dual polyhedra, but among cubes, only the square (and 1-dimensional line segment) are self-dual polytopes. Nevertheless, it is true that in all finite-dimensional spaces the L1 and L∞ metrics are mathematically dual to each other.
On a grid (such as a chessboard), the points at a Chebyshev distance of 1 of a point are the Moore neighborhood of that point.
The Chebyshev distance is the limiting case of the order-$p$ Minkowski distance, when $p$ reaches infinity.
Applications
The Chebyshev distance is sometimes used in warehouse logistics, as it effectively measures the time an overhead crane takes to move an object (as the crane can move on the x and y axes at the same time but at the same speed along each axis).
It is also widely used in electronic Computer-Aided Manufacturing (CAM) applications, in particular, in optimization algorithms for these. Many tools, such as plotting or drilling machines, photoplotter, etc. operating in the plane, are usually controlled by two motors in x and y directions, similar to the overhead cranes.
Generalizations
For the sequence space of infinite-length sequences of real or complex numbers, the Chebyshev distance generalizes to the $\ell^\infty$-norm; this norm is sometimes called the Chebyshev norm. For the space of (real- or complex-valued) functions, the Chebyshev distance generalizes to the uniform norm.
See also
King's graph
Taxicab geometry
References
Distance
Mathematical chess problems
Metric geometry | Chebyshev distance | [
"Physics",
"Mathematics"
] | 868 | [
"Mathematical chess problems",
"Distance",
"Physical quantities",
"Recreational mathematics",
"Quantity",
"Size",
"Space",
"Spacetime",
"Wikipedia categories named after physical quantities"
] |
1,504,065 | https://en.wikipedia.org/wiki/Biological%20target | A biological target is anything within a living organism to which some other entity (like an endogenous ligand or a drug) is directed and/or binds, resulting in a change in its behavior or function. Examples of common classes of biological targets are proteins and nucleic acids. The definition is context-dependent, and can refer to the biological target of a pharmacologically active drug compound, the receptor target of a hormone (like insulin), or some other target of an external stimulus. Biological targets are most commonly proteins such as enzymes, ion channels, and receptors.
Mechanism
The external stimulus (i.e., the drug or ligand) physically binds to ("hits") the biological target. The interaction between the substance and the target may be:
noncovalent – A relatively weak interaction between the stimulus and the target where no chemical bond is formed between the two interacting partners and hence the interaction is completely reversible.
reversible covalent – A chemical reaction occurs between the stimulus and target in which the stimulus becomes chemically bonded to the target, but the reverse reaction also readily occurs in which the bond can be broken.
irreversible covalent – The stimulus is permanently bound to the target through irreversible chemical bond formation.
Depending on the nature of the stimulus, the following can occur:
There is no direct change in the biological target, but the binding of the substance prevents other endogenous substances (such as activating hormones) from binding to the target. Depending on the nature of the target, this effect is referred as receptor antagonism, enzyme inhibition, or ion channel blockade.
A conformational change in the target is induced by the stimulus which results in a change in target function. This change in function can mimic the effect of the endogenous substance in which case the effect is referred to as receptor agonism (or channel or enzyme activation) or be the opposite of the endogenous substance which in the case of receptors is referred to as inverse agonism.
Drug targets
The term "biological target" is frequently used in pharmaceutical research to describe the native protein in the body whose activity is modified by a drug resulting in a specific effect, which may be a desirable therapeutic effect or an unwanted adverse effect. In this context, the biological target is often referred to as a drug target. The most common drug targets of currently marketed drugs include:
proteins
G protein-coupled receptors (target of 50% of drugs)
enzymes (especially protein kinases, proteases, esterases, and phosphatases)
ion channels
ligand-gated ion channels
voltage-gated ion channels
nuclear hormone receptors
structural proteins such as tubulin
membrane transport proteins
nucleic acids
Drug target identification
Identifying the biological origin of a disease, and the potential targets for intervention, is the first step in the discovery of a medicine using the reverse pharmacology approach. Potential drug targets are not necessarily disease causing but must by definition be disease modifying. An alternative means of identifying new drug targets is forward pharmacology based on phenotypic screening to identify "orphan" ligands whose targets are subsequently identified through target deconvolution.
Databases
Databases containing biological targets information:
Therapeutic Targets Database (TTD)
DrugMap
DrugBank
Binding DB
Conservation ecology
These biological targets are conserved across species, making pharmaceutical pollution of the environment a danger to species who possess the same targets. For example, the synthetic estrogen in human contraceptives, 17α-ethinylestradiol, has been shown to increase the feminization of fish downstream from sewage treatment plants, thereby unbalancing reproduction and creating an additional selective pressure on fish survival. Pharmaceuticals are usually found at ng/L to low-μg/L concentrations in the aquatic environment. Adverse effects may occur in non-target species as a consequence of specific drug target interactions. Therefore, evolutionarily well-conserved drug targets are likely to be associated with an increased risk for non-targeted pharmacological effects.
See also
Drug discovery
Environmental impact of pharmaceuticals and personal care products
References
Pharmacology
Biology terminology | Biological target | [
"Chemistry",
"Biology"
] | 835 | [
"Pharmacology",
"nan",
"Medicinal chemistry"
] |
1,504,080 | https://en.wikipedia.org/wiki/Lillydale%20Lake | Lillydale Lake (the name retaining the earliest spelling and the name of the former Lillydale Shire) is an artificial lake and wetlands area created in Lilydale, an outer eastern suburb of Melbourne, Victoria, Australia. The park in which the lake is situated includes extensive recreational facilities.
The lake covers 28 hectares with a 440 metre long dam wall. The ruins of the 1850s era Cashin's flour mill stand at the northern end of the dam wall. There are over 10 kilometres of shared trails, a community room and meeting facilities, a monster playground, toilets, barbecues and a picnic area. A boat launching ramp provides for non-powered boating. There are also two areas where dogs can exercise off leash.
History
Areas of Lilydale township are low-lying and susceptible to flooding from the Olinda Creek.
In 1853 surveyor James Blackburn saw the site as a location for a water catchment though the Yan Yean Reservoir was eventually built instead.
The local council proposed a 65-hectare park in 1969 with several more proposals made but due to lack of funds none eventuated. Following floods in September 1984, construction of the lake was proposed to prevent future flooding and provide recreational facilities. Construction began in 1988 and was completed in June 1990. It was officially opened to the public on 7 July 1990.
Local residents often walk, run or cycle around the lake for fitness reasons.
References
Further reading
Lillydale Lake: History & Development, Shire of Yarra Ranges, Lilydale, Australia
Lillydale Lake: Park Map, Shire of Yarra Ranges, Lilydale, Australia
Lakes of Melbourne
Reservoirs in Victoria (state)
Melbourne Water catchment
Rivers of Greater Melbourne (region)
Constructed wetlands
Yarra Ranges Shire | Lillydale Lake | [
"Chemistry",
"Engineering",
"Biology"
] | 342 | [
"Bioremediation",
"Constructed wetlands",
"Environmental engineering"
] |
1,504,233 | https://en.wikipedia.org/wiki/Comparison%20of%20file%20archivers | The following tables compare general and technical information for a number of file archivers. Please see the individual products' articles for further information. The tables are not all-inclusive, and some entries may not be up to date. Unless otherwise specified in the footnotes section, comparisons are based on the stable versions—without add-ons, extensions or external programs.
General information
Basic general information about the archivers.
Legend:
Notes:
Operating system support
The operating systems the archivers can run on without emulation or a compatibility layer. Ubuntu's own GUI archive manager, for example, can open and create many archive formats (including RAR archives), even to the extent of splitting archives into parts and encrypting them while keeping them readable by the native program, presumably via a compatibility layer.
Notes:
Archiver features
Information about what common archiver features are implemented natively (without third-party add-ons).
Notes:
Archive format support
Reading
Information about what archive formats the archivers can read. External links lead to information about support in future versions of the archiver or extensions that provide such functionality. Note that gzip, bzip2 and xz are compression formats rather than archive formats.
Notes:
Writing
Information about what archive formats the archivers can write and create. External links lead to information about support in future versions of the archiver or extensions that provide such functionality. Note that gzip, bzip2 and xz are compression formats rather than archive formats.
Notes:
Uncommon archive format support
PeaZip has full support for Brotli, Zstandard, various LPAQ and PAQ formats, QUAD / BALZ / BCM (highly efficient ROLZ based compressors), FreeArc format, and for its native PEA format.
7-Zip includes read support for .msi, cpio and xar, plus Apple's dmg/HFS disk images and the .deb/.rpm package distribution formats; beta versions (9.07 onwards) have full support for the LZMA2-compressed .xz format.
See also
Comparison of archive formats
Lossless compression benchmarks
Comparison of file systems
List of archive formats
List of file systems
References
Further reading
Maximum Compression, site benchmarking compressors for several filetypes (text, executable, jpeg etc.).
Kingsley G. Morse Jr., "Compression Tools Compared", Linux Journal, issue 137, September 2005
Patrick Schmid, Achim Roos, (March 10, 2010) "Four Compression And Archiving Solutions Compared", Tom's Hardware
File archivers | Comparison of file archivers | [
"Technology"
] | 533 | [
"Software comparisons",
"Computing comparisons"
] |
1,504,236 | https://en.wikipedia.org/wiki/Creed%20%26%20Company | Creed & Company was a British telecommunications company founded by Frederick George Creed which was an important pioneer in the field of teleprinter machines. It was merged into the International Telephone and Telegraph Corporation (ITT) in 1928.
History
The company was founded by Frederick George Creed and Danish telegraph engineer Harald Bille, and was first incorporated in 1912 as "Creed, Bille & Company Limited". After Bille's death in a railway accident in 1916, his name was dropped from the company's title and it became simply Creed & Company.
The Company spent most of World War I producing high-quality instruments, manufacturing facilities for which were very limited at that time in the UK. Among the items produced were amplifiers, spark-gap transmitters, aircraft compasses, high-voltage generators, bomb release apparatus, and fuses for artillery shells and bombs.
In 1924 Creed entered the teleprinter field with their Model 1P, which was soon superseded by the improved Model 2P. In 1925 Creed acquired the patents for Donald Murray's Murray code, a rationalised Baudot code, and it was used for their new Model 3 Tape Teleprinter of 1927. This machine printed received messages directly onto gummed paper tape at a rate of 65 words per minute and was the first combined start-stop transmitter-receiver teleprinter from Creed to enter mass production.
Some of the key models were:
Creed model 6S (punched paper tape reader)
Creed model 7 (page printing teleprinter introduced in 1931)
Creed model 7B (50 baud page printing teleprinter)
Creed model 7E (page printing teleprinter with overlap cam and range finder)
Creed model 7/TR (non-printing teleprinter reperforator)
Creed model 54 (page printing teleprinter introduced in 1954)
Creed model 75 (page printing teleprinter introduced in 1958)
Creed model 85 (printing reperforator introduced in 1948)
Creed model 86 (printing reperforator using 7/8" wide tape)
Creed model 444 (page printing teleprinter introduced in 1966 - GPO type 15)
In July 1928, Creed & Company were merged into ITT.
During World War II, Creed & Company manufactured some of the British Typex machines, cipher devices similar to the German Enigma machine.
References
External links
Creed and Company Limited. The First 50 years
History of Nova Scotia Creed and Company Limited
Defunct telecommunications companies of the United Kingdom
Telecommunications companies established in 1912
Rotor machines
Cryptographic hardware
1912 establishments in England
Technology companies disestablished in 1928
1928 disestablishments in England
British companies disestablished in 1928
British companies established in 1912 | Creed & Company | [
"Physics",
"Technology"
] | 548 | [
"Physical systems",
"Machines",
"Rotor machines"
] |
1,504,248 | https://en.wikipedia.org/wiki/Home%20Computer%20Initiative | The Home Computing Initiative (HCI) was a UK Government program which allowed employers to provide personal computers, software and computer peripherals to their employees without the benefit being taxed as a salary. The HCI was introduced in 1999 to improve the IT literacy of the British workforce. It was also aimed at bridging Britain's digital divide - the increasing gap between those who have access to, and the skills to use, information technology, and those who do not. The program gained traction four years later, in 2003, after it was rebranded. The Trades Union Congress and the Department of Trade and Industry also made the initiative more user-friendly by publishing standard guidelines that employers could easily adopt.
The HCI program was a lease agreement between the employer and the employee. The agreement usually lasted for three years, costing a maximum of £500 a year. At the end of the lease period, the employee was given the option to purchase the computer at its market value, which was typically £10 at that time.
The HCI scheme was very popular. More than 1250 firms, employing 4.5 million people, had adopted the scheme.
Discontinuation
On 23 March 2006, in his UK Budget, Chancellor Gordon Brown announced the removal of the HCI tax exemption for employer-loaned computers, and the HCI program was discontinued. The move was made without any consultation with employers' or employees' bodies, in stark contrast to the extensive consultation that preceded the scheme's creation.
The Treasury of the United Kingdom initially claimed that the scheme's uptake was dominated by higher-rate taxpayers. However, research by the HCI Alliance found that 75% of employees who purchased personal computers through the HCI were basic or starting rate taxpayers and 50% were "blue collar" workers. The HCI Alliance, created in 2003, was a group of industry leaders who worked together with the UK Government. Their aim was to increase access to personal computing in the UK.
Another reason for the HCI being cancelled was that computers had become relatively more affordable. Most people in the workplace had access to computers and therefore, the purpose of the scheme had been achieved.
In the days following the budget announcement, a significant lobbying campaign ensued, resulting in the treasury announcing that it would consider alternatives to HCI in its current format rather than disbanding it altogether. This led to the creation of the Educational Technology Allowance.
Educational Technology Allowance
In 2008, the Gordon Brown administration announced the £300 million Educational Technology Allowance incentive. The program granted up to £700 to low-income households with schooling children who had no internet access at home. The policy was aimed towards helping approximately 1.4 million children who did not have access to a broadband connection at home. The program was piloted in two local authority areas in 2010 and was completely rolled out across England in 2011. The funding for the Educational Technology Allowance came from the Children's and Schools budget.
The money could be used by families to pay for computer equipment, technical support and cabling in the street, if necessary.
The Educational Technology Allowance was not offered in Scotland, Wales and Northern Ireland.
References
Computing and society
Information technology in the United Kingdom | Home Computer Initiative | [
"Technology"
] | 641 | [
"Computing and society"
] |
1,504,547 | https://en.wikipedia.org/wiki/Playing%20doctor | "Playing doctor" is a phrase used colloquially in the Western world to refer to children examining each other's genitals. It originates from children using the pretend roles of doctor and patient as a pretext for such an examination. However, whether or not such role-playing is involved, the phrase is used to refer to any similar examination.
Playing doctor is considered by most child psychologists to be a normal step in childhood development between the ages of approximately three and six years, so long as all parties are willing participants and relatively close in age. A study by American sexologist Alfred Kinsey published in the book Sexual Behavior in the Human Male (1948) found that 38.6% of all 10-year-old children practice heterosexual and homosexual doctor play. However, it can be a source of discomfort to parents to discover their children are engaging in such an activity. Parenting professionals often advise parents to view such a discovery as an opportunity to calmly teach their children about different sex characteristics, personal privacy, private parts, and respecting the privacy of other children.
Playing doctor is distinguished from child-on-child sexual abuse, because the latter is an overt and deliberate act directed at sexual stimulation, including orgasm, carried out coercively or where there is a disparity in knowledge between the children, as opposed to non-coercive anatomical curiosity.
See also
Child sexual abuse
Child sexuality
Genital play
Make believe
References
Child development
English phrases
Sexual anatomy
Child sexuality | Playing doctor | [
"Biology"
] | 289 | [
"Sexual anatomy",
"Sex"
] |
1,504,755 | https://en.wikipedia.org/wiki/Electronic%20flight%20instrument%20system | In aviation, an electronic flight instrument system (EFIS) is a flight instrument display system in an aircraft cockpit that displays flight data electronically rather than electromechanically. An EFIS normally consists of a primary flight display (PFD), multi-function display (MFD), and an engine indicating and crew alerting system (EICAS) display. Early EFIS models used cathode-ray tube (CRT) displays, but liquid crystal displays (LCD) are now more common. The complex electromechanical attitude director indicator (ADI) and horizontal situation indicator (HSI) were the first candidates for replacement by EFIS. Now, however, few flight deck instruments cannot be replaced by an electronic display.
Display units
Primary flight display (PFD)
On the flight deck, the display units are the most obvious parts of an EFIS system, and are the features that lead to the term glass cockpit. The display unit that replaces the artificial horizon is called the primary flight display (PFD). If a separate display replaces the HSI, it is called the navigation display. The PFD displays all information critical to flight, including calibrated airspeed, altitude, heading, attitude, vertical speed and yaw. The PFD is designed to improve a pilot's situational awareness by integrating this information into a single display instead of six different analog instruments, reducing the amount of time necessary to monitor the instruments. PFDs also increase situational awareness by alerting the aircrew to unusual or potentially hazardous conditions — for example, low airspeed, high rate of descent — by changing the color or shape of the display or by providing audio alerts.
The names Electronic Attitude Director Indicator and Electronic Horizontal Situation Indicator are used by some manufacturers. However, a simulated ADI is only the centerpiece of the PFD. Additional information is both superimposed on and arranged around this graphic.
Multi-function displays can render a separate navigation display unnecessary. Another option is to use one large screen to show both the PFD and navigation display.
The PFD and navigation display (and multi-function display, where fitted) are often physically identical. The information displayed is determined by the system interfaces where the display units are fitted. Thus, spares holding is simplified: the one display unit can be fitted in any position.
LCD units generate less heat than CRTs, an advantage in a congested instrument panel. They are also lighter and occupy less volume.
Multi-function display (MFD)
The MFD (multi-function display) displays navigational and weather information from multiple systems. MFDs are most frequently designed as "chart-centric", where the aircrew can overlay different information over a map or chart. Examples of MFD overlay information include the aircraft's current route plan, weather information from either on-board radar or lightning detection sensors or ground-based sensors, e.g., NEXRAD, restricted airspace and aircraft traffic. The MFD can also be used to view other non-overlay type of data (e.g., current route plan) and calculated overlay-type data, e.g., the glide radius of the aircraft, given current location over terrain, winds, and aircraft speed and altitude.
MFDs can also display information about aircraft systems, such as fuel and electrical systems (see EICAS, below). As with the PFD, the MFD can change the color or shape of the data to alert the aircrew to hazardous situations.
Engine indications and crew alerting system (EICAS) / electronic centralized aircraft monitoring (ECAM)
EICAS (Engine Indications and Crew Alerting System) displays information about the aircraft's systems, including its fuel, electrical and propulsion systems (engines). EICAS displays are often designed to mimic traditional round gauges while also supplying digital readouts of the parameters.
EICAS improves situational awareness by allowing the aircrew to view complex information in a graphical format and also by alerting the crew to unusual or hazardous situations. For example, if an engine begins to lose oil pressure, the EICAS might sound an alert, switch the display to the page with the oil system information and outline the low oil pressure data with a red box. Unlike traditional round gauges, many levels of warnings and alarms can be set. Proper care must be taken when designing EICAS to ensure that the aircrew are always provided with the most important information and not overloaded with warnings or alarms.
ECAM is a similar system used by Airbus which, in addition to providing EICAS functions, also recommends remedial action.
Control panels
EFIS provides pilots with controls that select display range and mode (for example, map or compass rose) and enter data (such as selected heading).
Where other equipment uses pilot inputs, data buses broadcast the pilot's selections so that the pilot need only enter the selection once. For example, the pilot selects the desired level-off altitude on a control unit. The EFIS repeats this selected altitude on the PFD, and by comparing it with the actual altitude (from the air data computer) generates an altitude error display. This same altitude selection is used by the automatic flight control system to level off, and by the altitude alerting system to provide appropriate warnings.
Data processors
The EFIS visual display is produced by the symbol generator. This receives data inputs from the pilot, signals from sensors, and EFIS format selections made by the pilot. The symbol generator can go by other names, such as display processing computer, display electronics unit, etc.
The symbol generator does more than generate symbols. It has (at the least) monitoring facilities, a graphics generator and a display driver. Inputs from sensors and controls arrive via data buses, and are checked for validity. The required computations are performed, and the graphics generator and display driver produce the inputs to the display units.
Capabilities
Like personal computers, flight instrument systems need power-on-self-test facilities and continuous self-monitoring. Flight instrument systems, however, need additional monitoring capabilities:
Input validation — verify that each sensor is providing valid data
Data comparison — cross check inputs from duplicated sensors
Display monitoring — detect failures within the instrument system
Former practice
Traditional (electromechanical) displays are equipped with synchro mechanisms that transmit the pitch, roll, and heading shown on the captain and first officer's instruments to an instrument comparator. The comparator warns of excessive differences between the captain and first officer displays. Even a fault as far downstream as a jam in, say, the roll mechanism of an ADI triggers a comparator warning. The instrument comparator thus provides both comparator monitoring and display monitoring.
Comparator monitoring
With EFIS, the comparator function is simple: Is roll data (bank angle) from sensor 1 the same as roll data from sensor 2? If not, display a warning caption (such as CHECK ROLL) on both PFDs. Comparison monitors give warnings for airspeed, pitch, roll, and altitude indications. More advanced EFIS systems have more comparator monitors.
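A toy sketch of the comparator idea described above (the caption names and tolerances are assumptions, not taken from any avionics standard):

```python
def comparator_warnings(sensor1: dict, sensor2: dict) -> list:
    """Cross-check duplicated sensor inputs and return warning captions."""
    checks = [
        ("CHECK ROLL", "roll_deg", 3.0),      # assumed tolerance in degrees
        ("CHECK PITCH", "pitch_deg", 3.0),
        ("CHECK IAS", "airspeed_kt", 5.0),    # assumed tolerance in knots
        ("CHECK ALT", "altitude_ft", 100.0),  # assumed tolerance in feet
    ]
    return [caption for caption, key, tol in checks
            if abs(sensor1[key] - sensor2[key]) > tol]

side1 = {"roll_deg": 4.8, "pitch_deg": 2.1, "airspeed_kt": 250, "altitude_ft": 31000}
side2 = {"roll_deg": 0.2, "pitch_deg": 2.0, "airspeed_kt": 249, "altitude_ft": 31020}
print(comparator_warnings(side1, side2))  # -> ['CHECK ROLL']
```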
Display monitoring
In this technique, each symbol generator contains two display monitoring channels. One channel, the internal, samples the output from its own symbol generator to the display unit and computes, for example, what roll attitude should produce that indication. This computed roll attitude is then compared with the roll attitude input to the symbol generator from the INS or AHRS. Any difference has probably been introduced by faulty processing, and triggers a warning on the relevant display.
The external monitoring channel carries out the same check on the symbol generator on the other side of the flight deck: the Captain's symbol generator checks the First Officer's, the First Officer's checks the Captain's. Whichever symbol generator detects a fault, puts up a warning on its own display.
The external monitoring channel also checks sensor inputs (to the symbol generator) for reasonableness. A spurious input, such as a radio height greater than the radio altimeter's maximum, results in a warning.
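The two monitoring checks can be sketched in the same style (again illustrative; the fault captions, the one-degree agreement threshold, and the radio altimeter ceiling are assumed values):

```python
RADIO_ALT_MAX_FT = 2500.0  # assumed maximum valid radio altimeter reading

def monitor_display(drawn_horizon_deg, sensor_roll_deg, radio_alt_ft):
    """Internal channel: back-compute the roll implied by the drawn symbol and
    compare it with the sensor input; also sanity-check raw sensor values."""
    faults = []
    implied_roll = drawn_horizon_deg  # inverse of the (here trivial) drawing transform
    if abs(implied_roll - sensor_roll_deg) > 1.0:
        faults.append("DISPLAY FAULT")    # faulty processing in this channel
    if radio_alt_ft > RADIO_ALT_MAX_FT:
        faults.append("RAD ALT INVALID")  # spurious sensor input
    return faults

print(monitor_display(drawn_horizon_deg=15.0, sensor_roll_deg=15.2, radio_alt_ft=9999.0))
# -> ['RAD ALT INVALID']
```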
Human factors
Clutter
At various stages of a flight, a pilot needs different combinations of data. Ideally, the avionics only show the data in use—but an electromechanical instrument must be in view all the time. To improve display clarity, ADIs and HSIs use intricate mechanisms to remove superfluous indications temporarily—e.g., removing the glide slope scale when the pilot doesn't need it.
Under normal conditions, an EFIS might not display some indications, e.g., engine vibration. Only when some parameter exceeds its limits does the system display the reading. In similar fashion, EFIS is programmed to show the glideslope scale and pointer only during an ILS approach.
In the case of an input failure, an electromechanical instrument adds yet another indicator—typically, a bar drops across the erroneous data. EFIS, on the other hand, removes invalid data from the display and substitutes an appropriate warning.
A de-clutter mode activates automatically when circumstances require the pilot's attention for a specific item. For example, if the aircraft pitches up or down beyond a specified limit—usually 30 to 60 degrees—the attitude indicator de-clutters other items from sight until the pilot brings the pitch to an acceptable level. This helps the pilot focus on the most important tasks.
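A minimal sketch of such de-clutter logic (the 30-degree threshold and the set of "essential" symbols are assumptions chosen from the 30-to-60-degree range mentioned above):

```python
PITCH_DECLUTTER_DEG = 30.0  # assumed threshold within the quoted 30-60 degree range

def visible_symbols(pitch_deg, all_symbols):
    """Drop non-essential symbology when the attitude becomes extreme."""
    essential = {"attitude", "airspeed", "altitude"}
    if abs(pitch_deg) > PITCH_DECLUTTER_DEG:
        return [s for s in all_symbols if s in essential]
    return list(all_symbols)

print(visible_symbols(45.0, ["attitude", "airspeed", "altitude", "heading", "flight_plan"]))
# -> ['attitude', 'airspeed', 'altitude']
```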
Color
Traditional instruments have long used color, but lack the ability to change a color to indicate some change in condition. The electronic display technology of EFIS has no such restriction and uses color widely. For example, as an aircraft approaches the glide slope, a blue caption can indicate glide slope is armed, and capture might change the color to green. Typical EFIS systems color code the navigation needles to reflect the type of navigation. Green needles indicate ground-based navigation, such as VORs, Localizers and ILS systems. Magenta needles indicate GPS navigation.
Advantages
EFIS provides versatility by avoiding some physical limitations of traditional instruments. A pilot can switch the same display that shows a course deviation indicator to show the planned track provided by an area navigation or flight management system. Pilots can choose to superimpose the weather radar picture on the displayed route.
The flexibility afforded by software modifications minimises the costs of responding to new aircraft regulations and equipment. Software updates can update an EFIS system to extend its capabilities. Updates introduced in the 1990s included the ground proximity warning system and traffic collision avoidance system.
A degree of redundancy is available even with the simple two-screen EFIS installation. Should the PFD fail, transfer switching repositions its vital information to the screen normally occupied by the navigation display.
Advances in EFIS
In the late 1980s, EFIS became standard equipment on most Boeing and Airbus airliners, and many business aircraft adopted EFIS in the 1990s.
Recent advances in computing power and reductions in the cost of liquid-crystal displays and navigational sensors (such as GPS and attitude and heading reference system) have brought EFIS to general aviation aircraft. Notable examples are the Garmin G1000 and Chelton Flight Systems EFIS-SV.
Several EFIS manufacturers have focused on the experimental aircraft market, producing EFIS and EICAS systems for as little as US$1,000–2,000. The low cost is possible because of steep drops in the price of sensors and displays, and equipment for experimental aircraft doesn't require expensive Federal Aviation Administration certification. This latter point restricts their use to experimental aircraft and certain other aircraft categories, depending on local regulations. Uncertified EFIS systems are also found in light-sport aircraft, including factory-built, microlight, and ultralight aircraft. These systems can be fitted to certified aircraft in some cases as secondary or backup systems, depending on local aviation rules.
See also
Index of aviation articles
Acronyms and abbreviations in avionics
Notes
Further reading
Advisory Circular AC25-11A Electronic Flight Deck Displays, at the U.S. Federal Aviation Administration
Electronic Aircraft Instruments Air Data Computer and Displays
Avionics
Aircraft instruments
Applications of control engineering
Display technology
Glass cockpit
Navigational flight instruments | Electronic flight instrument system | [
"Technology",
"Engineering"
] | 2,558 | [
"Avionics",
"Measuring instruments",
"Glass cockpit",
"Electronic engineering",
"Control engineering",
"Aircraft instruments",
"Display technology",
"Applications of control engineering",
"Navigational flight instruments"
] |
1,504,792 | https://en.wikipedia.org/wiki/Nascent%20hydrogen | Nascent hydrogen is an outdated concept in organic chemistry that was once invoked to explain dissolving-metal reactions, such as the Clemmensen reduction and the Bouveault–Blanc reduction. Since organic compounds do not react with H2, a special state of hydrogen was postulated. It is now understood that dissolving-metal reactions occur at the metal surface, and the concept of nascent hydrogen has been discredited in organic chemistry. However, the formation of atomic hydrogen is still widely invoked in inorganic chemistry and corrosion science to explain hydrogen embrittlement in metals exposed to electrolysis and anaerobic corrosion (e.g., the dissolution of zinc in strong acids (HCl) and of aluminium in strong bases (NaOH)). The mechanism of hydrogen embrittlement was first proposed by Johnson (1875). The inability of hydrogen atoms to react with organic reagents in organic solvents does not exclude the transient formation of hydrogen atoms capable of diffusing immediately into the crystal lattice of common metals (steel, titanium), as distinct from those of the platinoid group (Pt, Pd, Rh, Ru, Ni), which are well known to dissociate molecular dihydrogen (H2) into atomic hydrogen.
History
The idea of hydrogen in the nascent state having chemical properties different from those of molecular hydrogen developed in the mid-19th century. Alexander Williamson repeatedly refers to nascent hydrogen in his textbook Chemistry for Students, for example writing of the substitution reaction of carbon tetrachloride with hydrogen to form products such as chloroform and dichloromethane that the "hydrogen must for this purpose be in the nascent state, as free hydrogen does not produce the effect". Williamson also describes the use of nascent hydrogen in the earlier work of Marcellin Berthelot. Franchot published a paper on the concept in 1896, which drew a strongly worded response from Tommasi, who pointed to his own work that concluded "nascent hydrogen is nothing else than H + x calories".
The term "nascent hydrogen" continued to be invoked into the 20th century.
Reducing agents at low and high pH
Devarda's alloy (alloy of aluminium (~45%), copper (~50%) and zinc (~5%)) is a reducing agent that was commonly used in wet analytical chemistry to produce so-called nascent hydrogen in situ under alkaline conditions for the determination of nitrates (NO₃⁻) after their reduction to ammonia (NH₃).
In the Marsh test, used for arsenic determination (via the reduction of arsenate (AsO₄³⁻) and arsenite (AsO₃³⁻) to arsine (AsH₃)), hydrogen is generated by contacting zinc powder with hydrochloric acid.
So, hydrogen can be conveniently produced at low or high pH, according to the volatility of the species to be detected. Acid conditions in the Marsh test promote the fast escape of the arsine gas (AsH3), while under hyperalkaline solution, the degassing of the reduced ammonia (NH3) is greatly facilitated (the ammonium ion being soluble in aqueous solution under acidic conditions).
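For concreteness, balanced equations for the two in-situ generation routes just described (standard stoichiometry; the bracketed [H] marks the hydrogen postulated to act in its "nascent" state before combining to H2):

```latex
% Marsh test, acidic conditions
\mathrm{Zn + 2\,HCl \longrightarrow ZnCl_2 + 2\,[H]}
\qquad
\mathrm{As_2O_3 + 6\,Zn + 12\,HCl \longrightarrow 2\,AsH_3\uparrow + 6\,ZnCl_2 + 3\,H_2O}

% Devarda's alloy, alkaline conditions
\mathrm{3\,NO_3^{-} + 8\,Al + 5\,OH^{-} + 18\,H_2O \longrightarrow 3\,NH_3\uparrow + 8\,[Al(OH)_4]^{-}}
```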
See also
Atomic hydrogen welding
References
Further reading
Hydrogen
Electrolysis
Hydrogen
Obsolete theories in chemistry | Nascent hydrogen | [
"Physics",
"Chemistry"
] | 666 | [
"Periodic table",
"Properties of chemical elements",
"Allotropes",
"Electrochemistry",
"Materials",
"Electrolysis",
"Matter"
] |
1,504,893 | https://en.wikipedia.org/wiki/Balance%20wheel | A balance wheel, or balance, is the timekeeping device used in mechanical watches and small clocks, analogous to the pendulum in a pendulum clock. It is a weighted wheel that rotates back and forth, being returned toward its center position by a spiral torsion spring, known as the balance spring or hairspring. It is driven by the escapement, which transforms the rotating motion of the watch gear train into impulses delivered to the balance wheel. Each swing of the wheel (called a "tick" or "beat") allows the gear train to advance a set amount, moving the hands forward. The balance wheel and hairspring together form a harmonic oscillator, which due to resonance oscillates preferentially at a certain rate, its resonant frequency or "beat", and resists oscillating at other rates. The combination of the mass of the balance wheel and the elasticity of the spring keep the time between each oscillation or "tick" very constant, accounting for its nearly universal use as the timekeeper in mechanical watches to the present. From its invention in the 14th century until tuning fork and quartz movements became available in the 1960s, virtually every portable timekeeping device used some form of balance wheel.
Overview
Until the 1980s balance wheels were the timekeeping technology used in chronometers, bank vault time locks, time fuzes for munitions, alarm clocks, kitchen timers and stopwatches, but quartz technology has taken over these applications, and the main remaining use is in quality mechanical watches.
Modern (2007) watch balance wheels are usually made of Glucydur, a low thermal expansion alloy of beryllium, copper and iron, with springs of a low thermal coefficient of elasticity alloy such as Nivarox. The two alloys are matched so their residual temperature responses cancel out, resulting in even lower temperature error. The wheels are smooth, to reduce air friction, and the pivots are supported on precision jewel bearings. Older balance wheels used weight screws around the rim to adjust the poise (balance), but modern wheels are computer-poised at the factory, using a laser to burn a precise pit in the rim to make them balanced. Balance wheels rotate about 1½ turns with each swing, that is, about 270° to each side of their center equilibrium position. The rate of the balance wheel is adjusted with the regulator, a lever with a narrow slit on the end through which the balance spring passes. This holds the part of the spring behind the slit stationary. Moving the lever slides the slit up and down the balance spring, changing its effective length, and thus the resonant vibration rate of the balance. Since the regulator interferes with the spring's action, chronometers and some precision watches have "free sprung" balances with no regulator, such as the Gyromax. Their rate is adjusted by weight screws on the balance rim.
A balance's vibration rate is traditionally measured in beats (ticks) per hour, or BPH, although beats per second and Hz are also used. The length of a beat is one swing of the balance wheel, between reversals of direction, so there are two beats in a complete cycle. Balances in precision watches are designed with faster beats, because they are less affected by motions of the wrist. Alarm clocks and kitchen timers often have a rate of 4 beats per second (14,400 BPH). Watches made prior to the 1970s usually had a rate of 5 beats per second (18,000 BPH). Current watches have rates of 6 (21,600 BPH), 8 (28,800 BPH) and a few have 10 beats per second (36,000 BPH). Audemars Piguet currently produces a watch with a very high balance vibration rate of 12 beats/s (43,200 BPH). During World War II, Elgin produced a very precise stopwatch for US Army Air Forces bomber crews that ran at 40 beats per second (144,000 BPH), earning it the nickname 'Jitterbug'.
The precision of the best balance wheel watches on the wrist is around a few seconds per day. The most accurate balance wheel timepieces made were marine chronometers, which were used on ships for celestial navigation, as a precise time source to determine longitude. By World War II they had achieved accuracies of 0.1 second per day.
Period of oscillation
A balance wheel's period of oscillation T in seconds, the time required for one complete cycle (two beats), is determined by the wheel's moment of inertia I in kilogram-meters squared (kg·m²) and the stiffness (spring constant) of its balance spring κ in newton-meters per radian:

$T = 2\pi \sqrt{I / \kappa}$
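A quick numeric sketch of this relation (the inertia and stiffness values are assumed order-of-magnitude figures, chosen to land near a modern 28,800 BPH movement, not data from any actual watch):

```python
import math

I = 1.2e-9      # balance moment of inertia, kg*m^2 (assumed value)
kappa = 7.6e-7  # balance spring stiffness, N*m per radian (assumed value)

T = 2 * math.pi * math.sqrt(I / kappa)  # period of one full cycle, in seconds
beats_per_second = 2 / T                # a cycle contains two beats (ticks)
bph = 3600 * beats_per_second           # the traditional beats-per-hour figure

print(f"T = {T:.4f} s -> {beats_per_second:.2f} beats/s -> {bph:.0f} BPH")
# roughly: T = 0.2497 s -> 8.01 beats/s -> about 28,800 BPH
```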
History
The balance wheel appeared with the first mechanical clocks, in 14th century Europe, but it is not known exactly when or where it was first used. It is an improved version of the foliot, an early inertial timekeeper consisting of a straight bar pivoted in the center with weights on the ends, which oscillates back and forth. The foliot weights could be slid in or out on the bar, to adjust the rate of the clock. The first clocks in northern Europe used foliots, while those in southern Europe used balance wheels. As clocks were made smaller, first as bracket clocks and lantern clocks and then as the first large watches after 1500, balance wheels began to be used in place of foliots. Since more of its weight is located on the rim away from the axis, a balance wheel could have a larger moment of inertia than a foliot of the same size, and keep better time. The wheel shape also had less air resistance, and its geometry partly compensated for thermal expansion error due to temperature changes.
Addition of balance spring
These early balance wheels were crude timekeepers because they lacked the other essential element: the balance spring. Early balance wheels were pushed in one direction by the escapement until the verge flag that was in contact with a tooth on the escape wheel slipped past the tip of the tooth ("escaped") and the action of the escapement reversed, pushing the wheel back the other way. In such an "inertial" wheel, the acceleration is proportional to the drive force. In a clock or watch without balance spring, the drive force provides both the force that accelerates the wheel and also the force that slows it down and reverses it. If the drive force is increased, both acceleration and deceleration are increased, this results in the wheel getting pushed back and forth faster. This made the timekeeping strongly dependent on the force applied by the escapement. In a watch the drive force provided by the mainspring, applied to the escapement through the timepiece's gear train, declined during the watch's running period as the mainspring unwound. Without some means of equalizing the drive force, the watch slowed down during the running period between windings as the spring lost force, causing it to lose time. This is why all pre-balance spring watches required fusees (or in a few cases stackfreeds) to equalize the force from the mainspring reaching the escapement, to achieve even minimal accuracy. Even with these devices, watches prior to the balance spring were very inaccurate.
The idea of the balance spring was inspired by observations that springy hog bristle curbs, added to limit the rotation of the wheel, increased its accuracy. Robert Hooke first applied a metal spring to the balance in 1658 and Jean de Hautefeuille and Christiaan Huygens improved it to its present spiral form in 1674. The addition of the spring made the balance wheel a harmonic oscillator, the basis of every modern clock. This means the wheel vibrated at a natural resonant frequency or "beat" and resisted changes in its vibration rate caused by friction or changing drive force. This crucial innovation greatly increased the accuracy of watches, from several hours per day to perhaps 10 minutes per day, changing them from expensive novelties into useful timekeepers.
Temperature error
After the balance spring was added, a major remaining source of inaccuracy was the effect of temperature changes. Early watches had balance springs made of plain steel and balances of brass or steel, and the influence of temperature on these noticeably affected the rate.
An increase in temperature increases the dimensions of the balance spring and the balance due to thermal expansion. The strength of a spring, the restoring force it produces in response to a deflection, is proportional to its breadth and the cube of its thickness, and inversely proportional to its length. An increase in temperature would actually make a spring stronger if it affected only its physical dimensions. However, a much larger effect in a balance spring made of plain steel is that the elasticity of the spring's metal decreases significantly as the temperature increases, the net effect being that a plain steel spring becomes weaker with increasing temperature. An increase in temperature also increases diameter of a steel or brass balance wheel, increasing its rotational inertia, its moment of inertia, making it harder for the balance spring to accelerate. The two effects of increasing temperature on physical dimensions of the spring and the balance, the strengthening of the balance spring and the increase in rotational inertia of the balance, have opposing effects and to an extent cancel each other. The major effect of temperature which affects the rate of a watch is the weakening of the balance spring with increasing temperature.
In a watch that is not compensated for the effects of temperature, the weaker spring takes longer to return the balance wheel back toward the center, so the "beat" gets slower and the watch loses time. Ferdinand Berthoud found in 1773 that an ordinary brass balance and steel hairspring, subjected to a 60 °F (33 °C) temperature increase, loses 393 seconds ( minutes) per day, of which 312 seconds is due to spring elasticity decrease.
Temperature-compensated balance wheel
The need for an accurate clock for celestial navigation during sea voyages drove many advances in balance technology in 18th-century Britain and France. Even a 1-second-per-day error in a marine chronometer accumulates to about a minute of time over a 2-month voyage, corresponding to a longitude error of roughly 15 nautical miles (28 km) at the equator. John Harrison was the first to apply temperature compensation to a balance wheel, in 1753, using a bimetallic "compensation curb" on the spring, in the first successful marine chronometers, H4 and H5. These achieved an accuracy of a fraction of a second per day, but the compensation curb was not further used because of its complexity.
A simpler solution was devised around 1765 by Pierre Le Roy, and improved by John Arnold, and Thomas Earnshaw: the Earnshaw or compensating balance wheel. The key was to make the balance wheel change size with temperature. If the balance could be made to shrink in diameter as it got warmer, the smaller moment of inertia would compensate for the weakening of the balance spring, keeping the period of oscillation the same.
To accomplish this, the outer rim of the balance was made of a "sandwich" of two metals; a layer of steel on the inside fused to a layer of brass on the outside. Strips of this bimetallic construction bend toward the steel side when they are warmed, because the thermal expansion of brass is greater than steel. The rim was cut open at two points next to the spokes of the wheel, so it resembled an S-shape (see figure) with two circular bimetallic "arms". These wheels are sometimes referred to as "Z-balances". A temperature increase makes the arms bend inward toward the center of the wheel, and the shift of mass inward reduces the moment of inertia of the balance, similar to the way a spinning ice skater can reduce their moment of inertia by pulling in their arms. This reduction in the moment of inertia compensated for the reduced torque produced by the weaker balance spring. The amount of compensation is adjusted by moveable weights on the arms. Marine chronometers with this type of balance had errors of only 3–4 seconds per day over a wide temperature range. By the 1870s compensated balances began to be used in watches.
Middle temperature error
The standard Earnshaw compensation balance dramatically reduced error due to temperature variations, but it did not eliminate it. As first described by J. G. Ulrich, a compensated balance adjusted to keep correct time at a given low and high temperature will be a few seconds per day fast at intermediate temperatures. The reason is that the radius of the compensation arms changes roughly linearly with temperature, so the moment of inertia of the balance, which varies as the square of that radius, has a quadratic dependence on temperature, while the elasticity of the spring varies only linearly with temperature.
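One way to sketch the mismatch (a simplification assuming the spring stiffness and the compensated radius each vary linearly with temperature θ):

```latex
\kappa(\theta) = \kappa_0 (1 - a\theta), \qquad
I(\theta) = I_0 (1 - b\theta)^2, \qquad
T(\theta) = 2\pi \sqrt{\frac{I(\theta)}{\kappa(\theta)}}
```

Choosing the compensation b so that T(θ) is correct at two calibration temperatures cancels the first-order term but leaves a second-order residual between them, which is why the balance runs slightly fast at intermediate temperatures.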
To mitigate this problem, chronometer makers adopted various 'auxiliary compensation' schemes, which reduced error below 1 second per day. Such schemes consisted for example of small bimetallic arms attached to the inside of the balance wheel. Such compensators could only bend in one direction toward the center of the balance wheel, but bending outward would be blocked by the wheel itself. The blocked movement causes a non-linear temperature response that could slightly better compensate the elasticity changes in the spring. Most of the chronometers that came in first in the annual Greenwich Observatory trials between 1850 and 1914 were auxiliary compensation designs. Auxiliary compensation was never used in watches because of its complexity.
Better materials
The bimetallic compensated balance wheel was made obsolete in the early 20th century by advances in metallurgy. Charles Édouard Guillaume won a Nobel prize for the 1896 invention of Invar, a nickel steel alloy with very low thermal expansion, and Elinvar (from French élasticité invariable, 'invariable elasticity'), an alloy whose elasticity is unchanged over a wide temperature range, for balance springs. A solid Invar balance with a spring of Elinvar was largely unaffected by temperature, so it replaced the difficult-to-adjust bimetallic balance. This led to a series of improved low temperature coefficient alloys for balances and springs.
Before developing Elinvar, Guillaume also invented an alloy to compensate for middle temperature error in bimetallic balances by endowing it with a negative quadratic temperature coefficient. This alloy, named anibal, is a slight variation of invar. It almost completely negated the temperature effect of the steel hairspring, but still required a bimetallic compensated balance wheel, known as a Guillaume balance wheel. This design was mostly fitted in high-precision chronometers destined for competition in observatories. The quadratic coefficient is defined by its place in the equation of expansion of a material:

$l_\theta = l_0 (1 + \alpha\theta + \beta\theta^2)$

where:
$l_0$ is the length of the sample at some reference temperature
$\theta$ is the temperature above the reference
$l_\theta$ is the length of the sample at temperature $\theta$
$\alpha$ is the linear coefficient of expansion
$\beta$ is the quadratic coefficient of expansion
Footnotes
References
External links
Video of antique mid-19th century watch showing the balance wheel turning
History of watches, on commercial website.
Monochrome-Watches A technical perspective the regulating organ of the watch
Oliver Mundy, The Watch Cabinet Pictures of a private collection of antique watches from 1710 to 1908, showing many different varieties of balance wheel.
Timekeeping components | Balance wheel | [
"Technology"
] | 3,233 | [
"Timekeeping components",
"Components"
] |
1,504,912 | https://en.wikipedia.org/wiki/Hans%20Sluga | Hans D. Sluga (born 24 April 1937) is a German philosopher who spent most of his career as professor of philosophy at the University of California, Berkeley. Sluga teaches and writes on topics in the history of analytic philosophy, the history of continental philosophy, as well as on political theory and ancient philosophy in Greece and China. He has been particularly influenced by the thought of Gottlob Frege, Ludwig Wittgenstein, Martin Heidegger, Friedrich Nietzsche, and Michel Foucault.
Education and career
Hans Sluga studied at the University of Bonn and the University of Munich. He subsequently obtained a BPhil at Oxford, where he studied under R. M. Hare, Isaiah Berlin, Gilbert Ryle and Michael Dummett.
Since 1970, Sluga has been a professor of philosophy at the University of California, Berkeley, serving from 2009 as the William and Trudy Ausfahl Professor of Philosophy until his retirement in 2020. He previously served as a lecturer in philosophy at University College London.
Philosophical work
Sluga describes his philosophical orientation as follows: "My overall philosophical outlook is radically historicist. I believe that we can understand ourselves only as beings with a particular evolution and history."
He has worked extensively on the early history of analytic philosophy. In his writings on Gottlob Frege he has sought to establish the influence of Immanuel Kant, Hermann Lotze, and of Neo-Kantians like Cuno Fischer and Wilhelm Windelband on Frege's views on the foundations of mathematics and in the theory of meaning. This historically oriented approach to Frege's thought brought him into sharp conflict with Michael Dummett's "realist" interpretation of Frege. Sluga's work in analytic philosophy has been influenced substantially by Wittgenstein to whose early and late writings he has devoted a number of studies. His writings on both Frege and Wittgenstein have contributed to the development of the study of the history of analytic philosophy as a field within analytic philosophy.
Since the early 1990s Sluga has become increasingly concerned with political philosophy. In Heidegger's Crisis he set out to explore the question why philosophers from Plato till the present get so often entangled in dangerous political affairs. Sluga analyzes Heidegger's political engagement by putting it into the larger context of the development of German philosophy in the Nazi period. He seeks to show thereby that many diagnoses of Heidegger's politics are misdirected because of their overly narrow focus on the person and work of Heidegger. He challenges, in particular, the claim that Heidegger's critique of reason is to blame for his political errors by pointing out that committed "rationalists" among the German philosophers were prone to the same errors. Sluga's book seeks to show that the willingness to involve themselves politically not only Heidegger, but also of Neo-Kantians like Bruno Bauch, Neo-Fichteans like Max Wundt, and Nietzscheans like Alfred Baeumler was ultimately due to their misconceived belief that they were living through a moment of world-historical crisis in which they were particularly called upon to intervene.
His book Politics and the Search for the Common Good seeks to re-think politics in substantially new terms. Sluga distinguishes in it between a long tradition of "normative political theorizing" that ranges from Plato and Aristotle through Kant to contemporary writers like John Rawls and a more recent form of "diagnostic practice" that emerged in the 19th century and whose first practitioners were Karl Marx and Friedrich Nietzsche. Diagnostic political philosophy, Sluga argues, does not seek to establish political norms through a process of abstract philosophical reasoning but seeks to reach practical conclusions through a careful diagnosis of the political realities. Identifying himself with this strand of political philosophizing, Sluga proceeds to examine the thinking of Carl Schmitt, Hannah Arendt, and Michel Foucault as 20th century exemplars of the diagnostic approach. The book seeks to highlight the promise and the achievements of the diagnostic method as well as its shortcomings so far and its inherent limitations. In doing so, Sluga maps out an understanding of politics that makes use of some of Wittgenstein's methodological concepts. He characterizes politics as a family resemblance phenomenon and argues that the concept of politics does not identify a natural kind. It is therefore also mistaken to assume that there is a single common good at which all politics aims. Similarly, we must forgo the belief that there is a best form of government (as, e.g., democracy). Politics must, rather, be conceived as a continuous search for a common good which can have no final, conclusive answer. It is a sphere of uncertainty in which we operate always with a radically incomplete and unreliable picture of where we are and with only shifting ideas of where we want to go. The institutional forms that this search takes will change over time. Sluga agrees with other diagnostic thinkers that the classical institution of the modern state is now giving way to a new form of political order which he calls "the corporāte," whose challenges are defined by the growth of human populations, rapid technological changes, and an ever more pressing environmental crisis.
Wittgenstein
Sluga is a noted interpreter of Wittgenstein and has contributed significantly to Wittgenstein scholarship, including editing the 1996 volume The Cambridge Companion to Wittgenstein with David G. Stern. He has argued against the relevance of increasingly more detailed and sophisticated analyses of Wittgenstein's work, even claiming that Wittgenstein himself would not have regarded this exegetical excess as a legitimate concern for philosophy. In recent years, he has endorsed Rupert Read's "post-therapeutic" or "liberatory" interpretation of Wittgenstein.
Books
Gottlob Frege, Routledge & Kegan Paul, London 1980
Chinese translation, Beijing 1990, 2nd ed. 1993
Greek translation, Athens 2010
Heidegger's Crisis. Philosophy and Politics in Nazi Germany, Harvard U. P. 1993
Chinese translation, Beijing 2015
Bulgarian translation, Sofia 2024
Wittgenstein, Wiley-Blackwell, 2011
Italian translation, 2012
Arabic translation, 2014
Chinese translation, 2015
Politics and the Search for the Common Good, Cambridge U. P. 2014
The Philosophy of Frege, (ed.), 4 vols., Garland Press, 1993
The Cambridge Companion to Wittgenstein, (ed. With David Stern), Cambridge U. P. 1996
Licensed Chinese edition, Beijing 2007
Articles
"Frege and the Rise of Analytic Philosophy", Inquiry, vol. 18, 1975
"Frege as a Rationalist," in Studies on Frege, ed. M. Schirn, Stuttgart 1976, vol. 1
"Frege's Alleged Realism," Inquiry, vol. 20, 1977
"Subjectivity in the Tractatus", Synthese, vol. 56, 1983
"Frege: The Early Years", in Philosophy in History, ed. Q. Skinner et al., Cambridge U. P. 1984
"Foucault, the author and the discourse", Inquiry, vol. 28, 1985
"Frege against the Booleans", Notre Dame Journal of Formal Logic, 1987
"Semantic Content and Cognitive Sense", in Frege Synthesized, Amsterdam 1987.
"Das Ich muss aufgegeben werden. Zur Metaphysik in der analytischen Philosophie", in Metaphysik nach Kant?, Stuttgart 1987
"Heidegger: suite sans fin," in Le Messager Europeen, vol. 3, 1989
"Macht und Ohnmacht der analytischen Philosophie", in Bausteine wissenschaftlicher Weltauffassung, ed. F. Stadtler, Vienna 1996
“Frege on Meaning", Ratio, vol. 9, 1996
"'Whose house is that?' Wittgenstein on the self", in The Cambridge Companion to Wittgenstein, 1996
"Homelessness and Homecoming. Nietzsche, Heidegger, Hölderlin," in India and Beyond, Amsterdam 1996
"What has history to do with me? Wittgenstein and analytic philosophy", Inquiry, March 1998
"Von der Uneinheitlichkeit des Wissens", in Philosophie in synthetischer Absicht, ed. by M. Stamm, Stuttgart 1998
"Truth before Tarski" in Alfred Tarski and the Vienna Circle, Kluwer, Dordrecht 1999
"Heidegger and the Critique of Reason", in What's Left of Enlightenment?, ed. K. Baker and P. H. Reill, Stanford 2001
"Conflict is the Father of Everything: Heidegger’s Polemical Conception of Politics" in Heidegger’s Introduction to Metaphysics, ed. R. Polt and G. Fried, Yale U.P., New Haven 2001
"Frege and the Indefinability of Truth" in From Frege to Wittgenstein, ed. E. Reck, Oxford 2001
"Freges These von der Undefinierbarkeit der Wahrheit" in Das Wahre und das Falsche. Studien zu Freges Auffassung der Wahrheit, ed. by Dirk GreimannOlms 2003
"Wittgenstein and Pyrrhonism," in Pyrrhonian Skepticism, edited by Walter Sinnott-Arnstrong, Oxford U. P. 2004
"Heidegger’s Nietzsche," in The Blackwell Companion to Heidegger, ed. by Mark Wrathall and Hubert Dreyfus, Blackwell Publishing, 2005
"Foucault’s Encounter with Heidegger and Nietzsche," in The Cambridge Companion to Foucault, 2nd ed., ed. by Gary Gutting, Cambridge U. P., 2005
"Der erkenntnistheoretische Anarchismus. Paul Feyerabend in Berkeley," in Paul Feyerabend. Ein Philosoph aus Wien, edited by Kurt Fischer and Friedrich Stadler, Vienna 2005.
"Stanley Cavell and the Care of the Common", in The Claim of Community. Essays on Stanley Cavell and Political Philosophy, edited by Andrew Norris, Stanford U. P. 2006
"Family Resemblance", in Deepening our Understanding of Wittgenstein, edited by Michael Kober, Rodopi, Amsterdam 2006
"Glitter and Doom at the Metropolitan: German Art in Search of the Self," Inquiry, vol. 50, 2007
"Truth and the Imperfection of Language," in Essays on Frege's Conception of Truth. Grazer Philosophische Studien, ed. By Dirk Greimann, vol. 75, 2007
"The Pluralism of the Political. From Schmitt to Arendt," Telos, vol. 142, 2008, (28 pp.)
"I am only a Nietzschean," in Foucault and Philosophy, ed. by Timothy O’Leary and Christopher Falzon, Wiley-Blackwell, Chichester 2010
"Our grammar lacks surveyability," in Language and World. Part One. Essays on the Philosophy of Wittgenstein, edited by Volker Munz, Klaus Puhl, and Joseph Wang, ontos verlag, Frankfurt 2010
"'Could you define the sense you give the word "political"'? Michel Foucault as a Political Philosopher," History of the Human Sciences, vol. 24, 2011
"Von der normativen Theorie zu diagnostischen Praxis" Deutsche Zeitschrift für Philosophie, vol. 59, 2011
"Simple Objects: Complex Questions," in Wittgenstein’s Early Philosophy, edited by José L. Zalabardo, Oxford U. P., Oxford 2012
"Beyond 'the New' Wittgenstein," in Ethics, Society, Politics, Proceedings of the 35th International Ludwig Wittgenstein Symposium, edited by Hajo Greif and Martin Gerhard Weiss, De Gruyter Ontos, Berlin/Boston 2013
"Der Mensch ist von Natur aus ein politisches Lebewesen. Zur Kritik der politischen Anthropologie," in Die Anthropologische Wende'', Schwabe Verlag, Basel 2014
References
1939 births
Living people
Alumni of the University of Oxford
University of California, Berkeley faculty
Analytic philosophers
German logicians
21st-century German philosophers
Philosophers of mathematics
German male writers | Hans Sluga | [
"Mathematics"
] | 2,591 | [
"Philosophers of mathematics"
] |
1,505,128 | https://en.wikipedia.org/wiki/Harvard%E2%80%93Smithsonian%20Center%20for%20Astrophysics | The Center for Astrophysics | Harvard & Smithsonian (CfA), previously known as the Harvard–Smithsonian Center for Astrophysics, is an astrophysics research institute jointly operated by the Harvard College Observatory and Smithsonian Astrophysical Observatory. Founded in 1973 and headquartered in Cambridge, Massachusetts, United States, the CfA leads a broad program of research in astronomy, astrophysics, Earth and space sciences, as well as science education. The CfA either leads or participates in the development and operations of more than fifteen ground- and space-based astronomical research observatories across the electromagnetic spectrum, including the forthcoming Giant Magellan Telescope (GMT) and the Chandra X-ray Observatory, one of NASA's Great Observatories.
Hosting more than 850 scientists, engineers, and support staff, the CfA is among the largest astronomical research institutes in the world. Its projects have included Nobel Prize-winning advances in cosmology and high energy astrophysics, the discovery of many exoplanets, and the first image of a black hole. The CfA also serves a major role in the global astrophysics research community: the CfA's Astrophysics Data System (ADS), for example, has been universally adopted as the world's online database of astronomy and physics papers. Known for most of its history as the "Harvard-Smithsonian Center for Astrophysics", the CfA rebranded in 2018 to its current name in an effort to reflect its unique status as a joint collaboration between Harvard University and the Smithsonian Institution. The CfA's current director (since 2022) is Lisa Kewley, who succeeds Charles R. Alcock (Director from 2004 to 2022), Irwin I. Shapiro (Director from 1982 to 2004) and George B. Field (Director from 1973 to 1982).
History of the CfA
The Center for Astrophysics | Harvard & Smithsonian is not formally an independent legal organization, but rather an institutional entity operated under a memorandum of understanding between Harvard University and the Smithsonian Institution. This collaboration was formalized on July 1, 1973, with the goal of coordinating the related research activities of the Harvard College Observatory (HCO) and the Smithsonian Astrophysical Observatory (SAO) under the leadership of a single director, and housed within the same complex of buildings on the Harvard campus in Cambridge, Massachusetts. The CfA's history is therefore also that of the two fully independent organizations that comprise it. With a combined history of more than 300 years, HCO and SAO have been host to major milestones in astronomical history that predate the CfA's founding. These are briefly summarized below.
History of the Smithsonian Astrophysical Observatory (SAO)
Samuel Pierpont Langley, the third Secretary of the Smithsonian, founded the Smithsonian Astrophysical Observatory on the south yard of the Smithsonian Castle (on the U.S. National Mall) on March 1, 1890. The Astrophysical Observatory's initial, primary purpose was to "record the amount and character of the Sun's heat". Charles Greeley Abbot was named SAO's first director, and the observatory operated solar telescopes to take daily measurements of the Sun's intensity in different regions of the optical electromagnetic spectrum. In doing so, the observatory enabled Abbot to make critical refinements to the Solar constant, as well as to serendipitously discover Solar variability. It is likely that SAO's early history as a solar observatory was part of the inspiration behind the Smithsonian's "sunburst" logo, designed in 1965 by Crimilda Pontes.
In 1955, the scientific headquarters of SAO moved from Washington, D.C. to Cambridge, Massachusetts, to affiliate with the Harvard College Observatory (HCO). Fred Lawrence Whipple, then the chairman of the Harvard Astronomy Department, was named the new director of SAO. The collaborative relationship between SAO and HCO therefore predates the official creation of the CfA by 18 years. SAO's move to Harvard's campus also resulted in a rapid expansion of its research program. Following the launch of Sputnik (the world's first human-made satellite) in 1957, SAO accepted a national challenge to create a worldwide satellite-tracking network, collaborating with the United States Air Force on Project Space Track.
With the creation of NASA the following year and throughout the Space Race, SAO led major efforts in the development of orbiting observatories and large ground-based telescopes, laboratory and theoretical astrophysics, as well as the application of computers to astrophysical problems.
History of Harvard College Observatory (HCO)
Partly in response to renewed public interest in astronomy following the 1835 return of Halley's Comet, the Harvard College Observatory was founded in 1839, when the Harvard Corporation appointed William Cranch Bond as an "Astronomical Observer to the University". For its first four years of operation, the observatory was situated at the Dana-Palmer House (where Bond also resided) near Harvard Yard, and consisted of little more than three small telescopes and an astronomical clock. In his 1840 book recounting the history of the college, then Harvard President Josiah Quincy III noted that "there is wanted a reflecting telescope equatorially mounted". This telescope, the 15-inch "Great Refractor", opened seven years later (in 1847) at the top of Observatory Hill in Cambridge (where it still exists today, housed in the oldest of the CfA's complex of buildings). The telescope was the largest in the United States from 1847 until 1867. William Bond and pioneer photographer John Adams Whipple used the Great Refractor to produce the first clear daguerreotypes of the Moon (winning them an award at the 1851 Great Exhibition in London). Bond and his son, George Phillips Bond (the second director of HCO), used it to discover Saturn's 8th moon, Hyperion (which was also independently discovered by William Lassell).
Under the directorship of Edward Charles Pickering from 1877 to 1919, the observatory became the world's major producer of stellar spectra and magnitudes, established an observing station in Peru, and applied mass-production methods to the analysis of data. It was during this time that HCO became host to a series of major discoveries in astronomical history, powered by the observatory's so-called "Computers" (women hired by Pickering as skilled workers to process astronomical data). These "Computers" included Williamina Fleming, Annie Jump Cannon, Henrietta Swan Leavitt, Florence Cushman and Antonia Maury, all widely recognized today as major figures in scientific history. Henrietta Swan Leavitt, for example, discovered the so-called period-luminosity relation for Classical Cepheid variable stars, establishing the first major "standard candle" with which to measure the distance to galaxies. Now called "Leavitt's law", the discovery is regarded as one of the most foundational and important in the history of astronomy; astronomers like Edwin Hubble, for example, would later use Leavitt's law to establish that the Universe is expanding, the primary piece of evidence for the Big Bang model.
Upon Pickering's retirement in 1921, the directorship of HCO fell to Harlow Shapley (a major participant in the so-called "Great Debate" of 1920). This era of the observatory was made famous by the work of Cecilia Payne-Gaposchkin, who became the first woman to earn a PhD in astronomy from Radcliffe College (a short walk from the observatory). Payne-Gaposchkin's 1925 thesis proposed that stars were composed primarily of hydrogen and helium, an idea thought ridiculous at the time. Between Shapley's tenure and the formation of the CfA, the observatory was directed by Donald H. Menzel and then Leo Goldberg, both of whom maintained widely recognized programs in solar and stellar astrophysics. Menzel played a major role in encouraging the Smithsonian Astrophysical Observatory to move to Cambridge and collaborate more closely with HCO.
Joint history as the Center for Astrophysics (CfA)
The collaborative foundation for what would ultimately give rise to the Center for Astrophysics began with SAO's move to Cambridge in 1955. Fred Whipple, who was already chair of the Harvard Astronomy Department (housed within HCO since 1931), was named SAO's new director at the start of this new era; an early test of the model for a unified directorship across HCO and SAO. The following 18 years would see the two independent entities merge ever closer together, operating effectively (but informally) as one large research center.
This joint relationship was formalized as the new Harvard–Smithsonian Center for Astrophysics on July 1, 1973. George B. Field, then affiliated with Berkeley, was appointed as its first director. That same year, a new astronomical journal, the CfA Preprint Series, was created, and a CfA/SAO instrument flying aboard Skylab discovered coronal holes on the Sun. The founding of the CfA also coincided with the birth of X-ray astronomy as a new, major field that was largely dominated by CfA scientists in its early years. Riccardo Giacconi, regarded as the "father of X-ray astronomy", founded the High Energy Astrophysics Division within the new CfA by moving most of his research group (then at American Science and Engineering) to SAO in 1973. That group would later go on to launch the Einstein Observatory (the first imaging X-ray telescope) in 1978, and ultimately lead the proposals and development of what would become the Chandra X-ray Observatory. Chandra, the second of NASA's Great Observatories and still the most powerful X-ray telescope in history, continues operations today as part of the CfA's Chandra X-ray Center. Giacconi would later win the 2002 Nobel Prize in Physics for his foundational work in X-ray astronomy.
Shortly after the launch of the Einstein Observatory, the CfA's Steven Weinberg won the 1979 Nobel Prize in Physics for his work on electroweak unification. The following decade saw the start of the landmark CfA Redshift Survey (the first attempt to map the large scale structure of the Universe), as well as the release of the "Field Report", a highly influential Astronomy and Astrophysics Decadal Survey chaired by the outgoing CfA Director George Field. He would be replaced in 1982 by Irwin Shapiro, who during his tenure as director (1982 to 2004) oversaw the expansion of the CfA's observing facilities around the world, including the newly named Fred Lawrence Whipple Observatory, the Infrared Telescope (IRT) aboard the Space Shuttle, the 6.5-meter MMT (the converted Multiple Mirror Telescope), the SOHO satellite, and the launch of Chandra in 1999. CfA-led discoveries throughout this period include canonical work on Supernova 1987A, the "CfA2 Great Wall" (then the largest known coherent structure in the Universe), the best-yet evidence for supermassive black holes, and the first convincing evidence for an extrasolar planet.
The 1980s also saw the CfA play a distinct role in the history of computer science and the internet: in 1986, SAO started developing SAOImage, one of the world's first X11-based applications made publicly available (its successor, DS9, remains the most widely used astronomical FITS image viewer worldwide). During this time, scientists and software developers at the CfA also began work on what would become the Astrophysics Data System (ADS), one of the world's first online databases of research papers. By 1993, the ADS was running the first routine transatlantic queries between databases, a foundational aspect of the internet today.
The CfA today
Research at the CfA
Charles Alcock, known for a number of major works related to massive compact halo objects, was named the third director of the CfA in 2004; he was succeeded by Lisa Kewley in 2022. The CfA today is one of the largest and most productive astronomical institutes in the world, with more than 850 staff and an annual budget in excess of $100 million. The Harvard Department of Astronomy, housed within the CfA, maintains a continual complement of approximately 60 PhD students, more than 100 postdoctoral researchers, and roughly 25 undergraduate astronomy and astrophysics majors from Harvard College. SAO, meanwhile, hosts a long-running and highly rated REU Summer Intern program as well as many visiting graduate students. The CfA estimates that roughly 10% of the professional astrophysics community in the United States spent at least a portion of their career or education there.
The CfA is either a lead or major partner in the operations of the Fred Lawrence Whipple Observatory, the Submillimeter Array, MMT Observatory, the South Pole Telescope, VERITAS, and a number of other smaller ground-based telescopes. The CfA's 2019–2024 Strategic Plan includes the construction of the Giant Magellan Telescope as a driving priority for the center.
Along with the Chandra X-ray Observatory, the CfA plays a central role in a number of space-based observing facilities, including the recently launched Parker Solar Probe, Kepler space telescope, the Solar Dynamics Observatory (SDO), and Hinode. The CfA, via the Smithsonian Astrophysical Observatory, recently played a major role in the Lynx X-ray Observatory, a NASA-funded large mission concept study commissioned as part of the 2020 Astronomy and Astrophysics Decadal Survey ("Astro2020"). If launched, Lynx would be the most powerful X-ray observatory constructed to date, enabling order-of-magnitude advances in capability over Chandra.
SAO is one of the 13 stakeholder institutes for the Event Horizon Telescope Board, and the CfA hosts its Array Operations Center. In 2019, the project revealed the first direct image of a black hole. The result is widely regarded as a triumph not only of observational astronomy, but of its intersection with theoretical astrophysics. Union of the observational and theoretical subfields of astrophysics has been a major focus of the CfA since its founding.
Today, the CfA receives roughly 70% of its funding from NASA, 22% from Smithsonian federal funds, and 4% from the National Science Foundation. The remaining 4% comes from contributors including the United States Department of Energy, the Annenberg Foundation, and other gifts and endowments.
Organizational structure
Research across the CfA is organized into six divisions and seven research centers:
Scientific divisions within the CfA
Atomic and Molecular Physics (AMP)
High Energy Astrophysics (HEA)
Optical and Infrared Astronomy (OIR)
Radio and Geoastronomy (RG)
Solar, Stellar, and Planetary Sciences (SSP)
Theoretical Astrophysics (TA)
Centers hosted at the CfA
Chandra X-ray Center (CXC), the science operations center for NASA's Chandra X-ray Observatory
Institute for Theory and Computation (ITC)
Institute for Theoretical Atomic, Molecular, and Optical Physics (ITAMP)
Center for Parallel Astrophysical Computing (CPAC)
Minor Planet Center (MPC)
Telescope Data Center (TDC)
Radio Telescope Data Center (RTDC)
Solar & Stellar X-ray Group (SSXG)
The CfA is also host to the Harvard University Department of Astronomy, large central engineering and computation facilities, the Science Education Department, the John G. Wolbach Library, the world's largest database of astronomy and physics papers (ADS), and the world's largest collection of astronomical photographic plates.
Observatories operated with CfA participation
Ground-based observatories
Fred Lawrence Whipple Observatory
Magellan telescopes
MMT Observatory
Event Horizon Telescope
South Pole Telescope
Submillimeter Array
1.2-Meter Millimeter-Wave Telescope
Very Energetic Radiation Imaging Telescope Array System (VERITAS)
Space-based observatories and probes
Chandra X-ray Observatory
Transiting Exoplanet Survey Satellite (TESS)
Parker Solar Probe
Hinode
Kepler
Solar Dynamics Observatory (SDO)
Solar and Heliospheric Observatory (SOHO)
Spitzer Space Telescope
Planned future observatories
Lynx X-ray Observatory
Giant Magellan Telescope
Murchison Widefield Array
Square Kilometre Array
Pan-STARRS
Vera C. Rubin Observatory (formerly called the Large Synoptic Survey Telescope)
See also
Clara Sousa-Silva, research scientist
List of astronomical observatories
References
External links
Astronomical observatories in Massachusetts
Astronomy institutes and departments
Astrophysics research institutes
Harvard University research institutes
Smithsonian Institution research programs
Research institutes established in 1973
1973 establishments in Massachusetts
Harvard University buildings | Harvard–Smithsonian Center for Astrophysics | [
"Physics",
"Astronomy"
] | 3,351 | [
"Astronomy organizations",
"Astrophysics research institutes",
"Astrophysics",
"Astronomy institutes and departments"
] |
1,505,166 | https://en.wikipedia.org/wiki/Genetic%20architecture | Genetic architecture is the underlying genetic basis of a phenotypic trait and its variational properties. Phenotypic variation for quantitative traits is, at the most basic level, the result of the segregation of alleles at quantitative trait loci (QTL). Environmental factors and other external influences can also play a role in phenotypic variation. Genetic architecture is a broad term that can be described for any given individual based on information regarding gene and allele number, the distribution of allelic and mutational effects, and patterns of pleiotropy, dominance, and epistasis.
There are several different experimental views of genetic architecture. Some researchers recognize that the interplay of various genetic mechanisms is incredibly complex, but believe that these mechanisms can be averaged and treated, more or less, like statistical noise. Other researchers claim that each and every gene interaction is significant and that it is necessary to measure and model these individual systemic influences on evolutionary genetics.
Applications
Genetic architecture can be studied and applied at many different levels. At the most basic, individual level, genetic architecture describes the genetic basis for differences between individuals, species, and populations. This can include, among other details, how many genes are involved in a specific phenotype and how gene interactions, such as epistasis, influence that phenotype. Line-cross analyses and QTL analyses can be used to study these differences. This is perhaps the most common way that genetic architecture is studied, and though it is useful for supplying pieces of information, it does not generally provide a complete picture of the genetic architecture as a whole.
Genetic architecture can also be used to discuss the evolution of populations. Classical quantitative genetics models, such as that developed by R.A. Fisher, are based on analyses of phenotype in terms of the contributions from different genes and their interactions. Genetic architecture is sometimes studied using a genotype–phenotype map, which graphically depicts the relationship between the genotype and the phenotype.
Genetic architecture is central to understanding evolutionary theory because it describes phenotypic variation in its underlying genetic terms, and thus offers clues about the evolutionary potential of these variations. Therefore, genetic architecture can help to answer biological questions about speciation, the evolution of sex and recombination, the survival of small populations, inbreeding, understanding diseases, animal and plant breeding, and more.
Evolvability
Evolvability is literally defined as the ability to evolve. In terms of genetics, evolvability is the ability of a genetic system to produce and maintain potentially adaptive genetic variants. There are several aspects of genetic architecture that contribute strongly to the evolvability of a system, including autonomy, mutability, coordination, epistasis, pleiotropy, polygeny, and robustness.
Autonomy: the existence of quasi-independent characters with the potential for evolutionary autonomy.
Mutability: the possibility that genetic mutation can occur.
Coordination: a phenomenon such as development, during which many different genetic processes and changes happen at once.
Epistasis: a phenomenon in which one gene is dependent on the presence of one or more "modifier" genes.
Polygeny: a phenomenon in which multiple genes contribute to a particular phenotypic character.
Pleiotropy: a phenomenon in which a single gene affects two or more phenotypic characteristics.
Robustness: the ability of a phenotype to remain constant in spite of genetic mutation.
Examples
A study published in 2006 used phylogeny to compare the genetic architecture of differing human skin color. In this study, researchers were able to suggest a speculative framework for the evolutionary history underlying current-day phenotypic variation in human skin pigmentation based on the similarities and differences they found in the genotype. Evolutionary history is an important consideration in understanding the genetic basis of any trait, and this study was among the first to utilize these concepts in a paired fashion to determine information about the underlying genetics of a phenotypic trait.
In 2013, a group of researchers used genome-wide association studies (GWAS) and genome-wide interaction studies (GWIS) to determine the risk of congenital heart defects in patients with Down Syndrome. Down Syndrome is a genetic disorder caused by trisomy of human chromosome 21. The current hypothesis regarding congenital heart defect phenotypes in Down Syndrome individuals is that three copies of functional genomic elements on chromosome 21 and genetic variation of chromosome 21 and non-chromosome 21 loci predispose patients to abnormal heart development. This study identified several congenital heart defect risk loci in Down Syndrome individuals, as well as three copy number variation (CNV) regions that may contribute to congenital heart defects in Down Syndrome individuals.
Another study, which was published in 2014, sought to identify the genetic architecture of psychiatric disorders. The researchers in this study suggested that there are a large number of contributing loci that are related to various psychiatric disorders. Additionally, they, like many others, suggested that the genetic risk of psychiatric disorders involves the combined effects of many common variants with small effects - in other words, the small effects of a wide number of variants at specific loci add together to produce a large, combined effect on the overall phenotype of the individual. They also acknowledged the presence of large but rare mutations that have a large effect on phenotype. This study showcases the intricacy of genetic architecture by providing an example of many different SNPs and mutations working together, each with a varying effect, to generate a given phenotype.
Other studies regarding genetic architecture are many and varied, but most use similar types of analyses to provide specific information regarding loci involved in producing a phenotype. A study of the human immune system in 2015 uses the same general concepts to identify several loci involved in the development of the immune system, but, like the other studies outlined here, failed to consider other aspects of genetic architecture, such as environmental influences. Unfortunately, many other aspects of genetic architecture remain difficult to quantify.
Although there are a few studies that seek to explore the other aspects of genetic architecture, there is little ability with current technologies to link all of the pieces together to build a truly comprehensive model of genetic architecture. For example, in 2003, a study of genetic architecture and the environment was able to show an association of social environment with variation in body size in Drosophila melanogaster. However, this study was not able to establish a direct link to specific genes involved in this variation.
See also
Ambidirectional dominance
References
External links
Line-Cross Analysis
Genotype-Phenotype Maps
The Rise of Genetic Architecture
Genetics | Genetic architecture | [
"Biology"
] | 1,348 | [
"Genetics"
] |
1,505,215 | https://en.wikipedia.org/wiki/Tip%20of%20the%20red-giant%20branch | Tip of the red-giant branch (TRGB) is a primary distance indicator used in astronomy. It uses the luminosity of the brightest red-giant-branch stars in a galaxy as a standard candle to gauge the distance to that galaxy. It has been used in conjunction with observations from the Hubble Space Telescope to determine the relative motions of the Local Cluster of galaxies within the Local Supercluster. Ground-based, 8-meter-class telescopes like the VLT are also able to measure the TRGB distance within reasonable observation times in the local universe.
Method
The Hertzsprung–Russell diagram (HR diagram) is a plot of stellar luminosity versus surface temperature for a population of stars. During the core hydrogen burning phase of a Sun-like star's lifetime, it will appear on the HR diagram at a position along a diagonal band called the main sequence. When the hydrogen at the core is exhausted, energy will continue to be generated by hydrogen fusion in a shell around the core. The center of the star will accumulate the helium "ash" from this fusion and the star will migrate along an evolutionary branch of the HR diagram that leads toward the upper right. That is, the surface temperature will decrease and the total energy output (luminosity) of the star will increase as the surface area increases.
At a certain point, the helium at the core of the star will reach a pressure and temperature where it can begin to undergo nuclear fusion through the triple-alpha process. For a star with less than 1.8 times the mass of the Sun, this will occur in a process called the helium flash. The evolutionary track of the star will then carry it toward the left of the HR diagram as the surface temperature increases under the new equilibrium. The result is a sharp discontinuity in the evolutionary track of the star on the HR diagram. This discontinuity is called the tip of the red-giant branch.
When distant stars at the TRGB are measured in the I-band (in the infrared), their luminosity is somewhat insensitive to their composition of elements heavier than helium (metallicity) or their mass; they are a standard candle with an I-band absolute magnitude of –4.0±0.1. This makes the technique especially useful as a distance indicator. The TRGB indicator uses stars in the old stellar populations (Population II).
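Because the TRGB is a standard candle, turning its observed brightness into a distance is a direct application of the distance modulus, m − M = 5 log10(d / 10 pc). Below is a minimal sketch in Python; the apparent magnitude used is invented for illustration.

```python
def trgb_distance_mpc(m_I, M_I=-4.0):
    """Distance implied by the distance modulus m - M = 5*log10(d / 10 pc),
    using the TRGB's I-band absolute magnitude as the standard candle."""
    d_pc = 10 ** ((m_I - M_I + 5) / 5)
    return d_pc / 1e6  # parsec -> megaparsec

# Hypothetical measurement: a TRGB detected at apparent I magnitude 20.5
print(round(trgb_distance_mpc(20.5), 2))  # -> 0.79 (megaparsecs)
```

In practice the measured magnitudes are first corrected for interstellar extinction, and the tip itself is usually located by running an edge-detection filter over the galaxy's red-giant luminosity function.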
See also
Asymptotic giant branch
Hess diagram
Red clump
Stellar classification
References
External links
Large-scale structure of the cosmos
Physical cosmology
Red giants
Standard candles | Tip of the red-giant branch | [
"Physics",
"Astronomy"
] | 524 | [
"Standard candles",
"Theoretical physics",
"Astrophysics",
"Physical cosmology",
"Astronomical sub-disciplines"
] |
1,505,283 | https://en.wikipedia.org/wiki/Pleiotropy | Pleiotropy (from Greek , 'more', and , 'way') occurs when one gene influences two or more seemingly unrelated phenotypic traits. Such a gene that exhibits multiple phenotypic expression is called a pleiotropic gene. Mutation in a pleiotropic gene may have an effect on several traits simultaneously, due to the gene coding for a product used by a myriad of cells or different targets that have the same signaling function.
Pleiotropy can arise from several distinct but potentially overlapping mechanisms, such as gene pleiotropy, developmental pleiotropy, and selectional pleiotropy. Gene pleiotropy occurs when a gene product interacts with multiple other proteins or catalyzes multiple reactions. Developmental pleiotropy occurs when mutations have multiple effects on the resulting phenotype. Selectional pleiotropy occurs when the resulting phenotype has many effects on fitness (depending on factors such as age and sex).
An example of pleiotropy is phenylketonuria, an inherited disorder that affects the level of phenylalanine, an amino acid that can be obtained from food, in the human body. Phenylketonuria causes this amino acid to increase in amount in the body, which can be very dangerous. The disease is caused by a defect in a single gene on chromosome 12 that codes for enzyme phenylalanine hydroxylase, that affects multiple systems, such as the nervous and integumentary system.
Pleiotropic gene action can limit the rate of multivariate evolution when natural selection, sexual selection or artificial selection on one trait favors one allele, while selection on other traits favors a different allele. Some gene evolution is harmful to an organism. Genetic correlations and responses to selection most often exemplify pleiotropy.
History
Pleiotropic traits had been previously recognized in the scientific community but had not been experimented on until Gregor Mendel's 1866 pea plant experiment. Mendel recognized that certain pea plant traits (seed coat color, flower color, and axial spots) seemed to be inherited together; however, their correlation to a single gene has never been proven. The term "pleiotropie" was first coined by Ludwig Plate in his Festschrift, which was published in 1910. He originally defined pleiotropy as occurring when "several characteristics are dependent upon ... [inheritance]; these characteristics will then always appear together and may thus appear correlated". This definition is still used today.
After Plate's definition, Hans Gruneberg was the first to study the mechanisms of pleiotropy. In 1938 Gruneberg published an article dividing pleiotropy into two distinct types: "genuine" and "spurious" pleiotropy. "Genuine" pleiotropy is when two distinct primary products arise from one locus. "Spurious" pleiotropy, on the other hand, is either when one primary product is utilized in different ways or when one primary product initiates a cascade of events with different phenotypic consequences. Gruneberg came to these distinctions after experimenting on rats with skeletal mutations. He recognized that "spurious" pleiotropy was present in the mutation, while "genuine" pleiotropy was not, thus partially invalidating his own original theory. Through subsequent research, it has been established that Gruneberg's definition of "spurious" pleiotropy is what we now identify simply as "pleiotropy".
In 1941 American geneticists George Beadle and Edward Tatum further invalidated Gruneberg's definition of "genuine" pleiotropy, advocating instead for the "one gene-one enzyme" hypothesis that was originally introduced by French biologist Lucien Cuénot in 1903. This hypothesis shifted future research regarding pleiotropy towards how a single gene can produce various phenotypes.
In the mid-1950s Richard Goldschmidt and Ernst Hadorn, through separate individual research, reinforced the faultiness of "genuine" pleiotropy. A few years later, Hadorn partitioned pleiotropy into a "mosaic" model (which states that one locus directly affects two phenotypic traits) and a "relational" model (which is analogous to "spurious" pleiotropy). These terms are no longer in use but have contributed to the current understanding of pleiotropy.
By accepting the one gene-one enzyme hypothesis, scientists instead focused on how uncoupled phenotypic traits can be affected by genetic recombination and mutations, applying it to populations and evolution. This view of pleiotropy, "universal pleiotropy", defined as locus mutations being capable of affecting essentially all traits, was first implied by Ronald Fisher's Geometric Model in 1930. This mathematical model illustrates how evolutionary fitness depends on the independence of phenotypic variation from random changes (that is, mutations). It theorizes that an increasing phenotypic independence corresponds to a decrease in the likelihood that a given mutation will result in an increase in fitness. Expanding on Fisher's work, Sewall Wright provided more evidence in his 1968 book Evolution and the Genetics of Populations: Genetic and Biometric Foundations by using molecular genetics to support the idea of "universal pleiotropy". The concepts of these various studies on evolution have seeded numerous other research projects relating to individual fitness.
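Fisher's geometric argument lends itself to a short simulation. The sketch below is an illustration of the model rather than anything from Fisher's own text: it places a phenotype at a fixed distance from an optimum in n-dimensional trait space, applies random mutations of a fixed size, and estimates how often a mutation brings the phenotype closer to the optimum. The estimate falls as the number of affected traits grows, which is the sense in which universal pleiotropy makes beneficial mutations rarer.

```python
import numpy as np

rng = np.random.default_rng(0)

def prob_beneficial(n_traits, dist=1.0, mut_size=0.3, trials=100_000):
    """Monte Carlo estimate of the chance that a random mutation of fixed
    magnitude moves the phenotype closer to the optimum at the origin."""
    start = np.zeros(n_traits)
    start[0] = dist  # phenotype sits at distance `dist` from the optimum
    steps = rng.normal(size=(trials, n_traits))
    steps *= mut_size / np.linalg.norm(steps, axis=1, keepdims=True)
    return float((np.linalg.norm(start + steps, axis=1) < dist).mean())

for n in (2, 10, 50):
    print(n, prob_beneficial(n))  # the probability shrinks as n rises
```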
In 1957 evolutionary biologist George C. Williams theorized that antagonistic effects will be exhibited during an organism's life cycle if it is closely linked and pleiotropic. Natural selection favors genes that are more beneficial prior to reproduction than after (leading to an increase in reproductive success). Knowing this, Williams argued that if only close linkage was present, then beneficial traits will occur both before and after reproduction due to natural selection. This, however, is not observed in nature, and thus antagonistic pleiotropy contributes to the slow deterioration with age (senescence).
Mechanism
Pleiotropy describes the genetic effect of a single gene on multiple phenotypic traits. The underlying mechanism is genes that code for a product that is either used by various cells or has a cascade-like signaling function that affects various targets.
Polygenic traits
Most genetic traits are polygenic in nature: controlled by many genetic variants, each of small effect. These genetic variants can reside in protein coding or non-coding regions of the genome. In this context pleiotropy refers to the influence that a specific genetic variant, e.g., a single nucleotide polymorphism or SNP, has on two or more distinct traits.
Genome-wide association studies (GWAS) and machine learning analysis of large genomic datasets have led to the construction of SNP based polygenic predictors for human traits such as height, bone density, and many disease risks. Similar predictors exist for plant and animal species and are used in agricultural breeding.
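Such predictors are typically linear in the genotype: each included SNP's allele count is weighted by its estimated effect size, and the weighted counts are summed into a polygenic score. A minimal sketch with invented numbers (not a real predictor for any trait):

```python
import numpy as np

# Hypothetical per-allele effect sizes for five SNPs, as a GWAS might
# estimate them, and one individual's allele dosages (0, 1, or 2 copies).
effect_sizes = np.array([0.12, -0.05, 0.30, 0.08, -0.21])
dosages = np.array([2, 0, 1, 1, 2])

polygenic_score = float(effect_sizes @ dosages)
print(polygenic_score)  # 0.20; real scores are standardized across a cohort
```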
One measure of pleiotropy is the fraction of genetic variance that is common between two distinct complex human traits: e.g., height vs bone density, breast cancer vs heart attack risk, or diabetes vs hypothyroidism risk. This has been calculated for hundreds of pairs of traits. In most cases examined, the genomic regions controlling each trait are largely disjoint, with only modest overlap.
Thus, at least for complex human traits so far examined, pleiotropy is limited in extent.
Models for the origin
One basic model of pleiotropy's origin describes a single gene locus to the expression of a certain trait. The locus affects the expressed trait only through changing the expression of other loci. Over time, that locus would affect two traits by interacting with a second locus. Directional selection for both traits during the same time period would increase the positive correlation between the traits, while selection on only one trait would decrease the positive correlation between the two traits. Eventually, traits that underwent directional selection simultaneously were linked by a single gene, resulting in pleiotropy.
The "pleiotropy-barrier" model proposes a logistic growth pattern for the increase of pleiotropy over time. This model differentiates between the levels of pleiotropy in evolutionarily younger and older genes subjected to natural selection. It suggests a higher potential for phenotypic innovation in evolutionarily newer genes due to their lower levels of pleiotropy.
Other more complex models compensate for some of the basic model's oversights, such as multiple traits or assumptions about how the loci affect the traits. They also propose the idea that pleiotropy increases the phenotypic variation of both traits since a single mutation on a gene would have twice the effect.
Evolution
Pleiotropy can have an effect on the evolutionary rate of genes and allele frequencies. Traditionally, models of pleiotropy have predicted that the evolutionary rate of genes is related negatively with pleiotropy: as the number of traits of an organism increases, the evolutionary rates of genes in the organism's population decrease. This relationship was not clearly found in empirical studies for a long time. However, a study based on human disease genes revealed evidence of a lower evolutionary rate in genes with higher pleiotropy.
In mating, for many animals the signals and receptors of sexual communication may have evolved simultaneously as the expression of a single gene, instead of the result of selection on two independent genes, one that affects the signaling trait and one that affects the receptor trait. In such a case, pleiotropy would facilitate mating and survival. However, pleiotropy can act negatively as well. A study on seed beetles found that intralocus sexual conflict arises when selection for certain alleles of a gene that are beneficial for one sex causes expression of potentially harmful traits by the same gene in the other sex, especially if the gene is located on an autosomal chromosome.
Pleiotropic genes act as an arbitrating force in speciation. William R. Rice and Ellen E. Hostert (1993) conclude that the observed prezygotic isolation in their studies is a product of pleiotropy's balancing role in indirect selection. By imitating the traits of all-infertile hybridized species, they noticed that the fertilization of eggs was prevented in all eight of their separate studies, a likely effect of pleiotropic genes on speciation. Likewise, pleiotropic gene's stabilizing selection allows for the allele frequency to be altered.
Studies on fungal evolutionary genomics have shown pleiotropic traits that simultaneously affect adaptation and reproductive isolation, converting adaptations directly to speciation. A particularly telling case of this effect is host specificity in pathogenic ascomycetes, and specifically in Venturia inaequalis, the fungus responsible for apple scab. These parasitic fungi each adapt to a host, and are only able to mate within a shared host after obtaining resources. Since a single toxin gene or virulence allele can grant the ability to colonize the host, adaptation and reproductive isolation are instantly facilitated, which in turn pleiotropically causes adaptive speciation. The studies on fungal evolutionary genomics will further elucidate the earliest stages of divergence as a result of gene flow, and provide insight into pleiotropically induced adaptive divergence in other eukaryotes.
Antagonistic pleiotropy
Sometimes, a pleiotropic gene may be both harmful and beneficial to an organism, which is referred to as antagonistic pleiotropy. This may occur when the trait is beneficial for the organism's early life, but not its late life. Such "trade-offs" are possible since natural selection affects traits expressed earlier in life, when most organisms are most fertile, more than traits expressed later in life.
This idea is central to the antagonistic pleiotropy hypothesis, which was first developed by G.C. Williams in 1957. Williams suggested that some genes responsible for increased fitness in the younger, fertile organism contribute to decreased fitness later in life, which may give an evolutionary explanation for senescence. An example is the p53 gene, which suppresses cancer but also suppresses stem cells, which replenish worn-out tissue.
Unfortunately, the process of antagonistic pleiotropy may result in an altered evolutionary path with delayed adaptation, in addition to effectively cutting the overall benefit of any alleles by roughly half. However, antagonistic pleiotropy also lends greater evolutionary "staying power" to genes controlling beneficial traits, since an organism with a mutation to those genes would have a decreased chance of successfully reproducing, as multiple traits would be affected, potentially for the worse.
Sickle cell anemia is a classic example of the mixed benefit given by the staying power of pleiotropic genes, as the mutation to Hb-S provides the fitness benefit of malaria resistance to heterozygotes as sickle cell trait, while homozygotes have significantly lowered life expectancy—what is known as "heterozygote advantage". Since both of these states are linked to the same mutated gene, large populations today are susceptible to sickle cell despite it being a fitness-impairing genetic disorder.
Examples
Albinism
Albinism is the mutation of the TYR gene, also termed tyrosinase. This mutation causes the most common form of albinism. The mutation alters the production of melanin, thereby affecting melanin-related and other dependent traits throughout the organism. Melanin is a substance made by the body that is used to absorb light and provides coloration to the skin. Indications of albinism are the absence of color in an organism's eyes, hair, and skin, due to the lack of melanin. Some forms of albinism are also known to have symptoms that manifest themselves through rapid-eye movement, light sensitivity, and strabismus.
Autism and schizophrenia
Pleiotropy in genes has been linked between certain psychiatric disorders as well. Deletion in the 22q11.2 region of chromosome 22 has been associated with schizophrenia and autism. Schizophrenia and autism are linked to the same gene deletion but manifest very differently from each other. The resulting phenotype depends on the stage of life at which the individual develops the disorder. Childhood manifestation of the gene deletion is typically associated with autism, while adolescent and later expression of the gene deletion often manifests in schizophrenia or other psychotic disorders. Though the disorders are linked by genetics, there is no increased risk found for adult schizophrenia in patients who experienced autism in childhood.
A 2013 study also genetically linked five psychiatric disorders, including schizophrenia and autism. The link was a single nucleotide polymorphism of two genes involved in calcium channel signaling with neurons. One of these genes, CACNA1C, has been found to influence cognition. It has been associated with autism, as well as linked in studies to schizophrenia and bipolar disorder. These particular studies show clustering of these diseases within patients themselves or families. The estimated heritability of schizophrenia is 70% to 90%, therefore the pleiotropy of genes is crucial since it causes an increased risk for certain psychotic disorders and can aid psychiatric diagnosis.
Phenylketonuria (PKU)
A common example of pleiotropy is the human disease phenylketonuria (PKU). This disease causes mental retardation and reduced hair and skin pigmentation, and can be caused by any of a large number of mutations in the single gene on chromosome 12 that codes for the enzyme phenylalanine hydroxylase, which converts the amino acid phenylalanine to tyrosine. Depending on the mutation involved, this conversion is reduced or ceases entirely. Unconverted phenylalanine builds up in the bloodstream and can lead to levels that are toxic to the developing nervous system of newborn and infant children. The most dangerous form of this is called classic PKU, which is common in infants. The baby seems normal at first but actually incurs permanent intellectual disability. This can cause symptoms such as mental retardation, abnormal gait and posture, and delayed growth. Because tyrosine is used by the body to make melanin (a component of the pigment found in the hair and skin), failure to convert normal levels of phenylalanine to tyrosine can lead to fair hair and skin.
The frequency of this disease varies greatly. Specifically, in the United States, PKU is found at a rate of nearly 1 in 10,000 births. Due to newborn screening, doctors are able to detect PKU in a baby sooner. This allows them to start treatment early, preventing the baby from suffering from the severe effects of PKU. PKU is caused by a mutation in the PAH gene, whose role is to instruct the body on how to make phenylalanine hydroxylase. Phenylalanine hydroxylase is what converts the phenylalanine, taken in through diet, into other things that the body can use. The mutation often decreases the effectiveness or rate at which the hydroxylase breaks down the phenylalanine. This is what causes the phenylalanine to build up in the body.
Sickle cell anemia
Sickle cell anemia is a genetic disease that causes deformed red blood cells with a rigid, crescent shape instead of the normal flexible, round shape. It is caused by a change in one nucleotide, a point mutation in the HBB gene. The HBB gene encodes information to make the beta-globin subunit of hemoglobin, which is the protein red blood cells use to carry oxygen throughout the body. Sickle cell anemia occurs when the HBB gene mutation causes both beta-globin subunits of hemoglobin to change into hemoglobin S (HbS).
Sickle cell anemia is a pleiotropic disease because the expression of a single mutated HBB gene produces numerous consequences throughout the body. The mutated hemoglobin forms polymers and clumps together causing the deoxygenated sickle red blood cells to assume the disfigured sickle shape. As a result, the cells are inflexible and cannot easily flow through blood vessels, increasing the risk of blood clots and possibly depriving vital organs of oxygen. Some complications associated with sickle cell anemia include pain, damaged organs, strokes, high blood pressure, and loss of vision. Sickle red blood cells also have a shortened lifespan and die prematurely.
Marfan syndrome
Marfan syndrome (MFS) is an autosomal dominant disorder which affects about 1 in 5,000–10,000 people. MFS arises from a mutation in the FBN1 gene, which encodes for the glycoprotein fibrillin-1, a major constituent of extracellular microfibrils which form connective tissues. Over 1,000 different mutations in FBN1 have been found to result in abnormal function of fibrillin, which consequently relates to connective tissues elongating progressively and weakening. Because these fibers are found in tissues throughout the body, mutations in this gene can have a widespread effect on certain systems, including the skeletal, cardiovascular, and nervous system, as well as the eyes and lungs.
Without medical intervention, prognosis of Marfan syndrome can range from moderate to life-threatening, with 90% of known causes of death in diagnosed patients relating to cardiovascular complications and congestive cardiac failure. Other characteristics of MFS include an increased arm span and decreased upper to lower body ratio.
"Mini-muscle" allele
A gene recently discovered in laboratory house mice, termed "mini-muscle", causes, when mutated, a 50% reduction in hindlimb muscle mass as its primary effect (the phenotypic effect by which it was originally identified). In addition to smaller hindlimb muscle mass, the mutant mice exhibit lower heart rates during physical activity and higher endurance. Mini-muscle mice also exhibit larger kidneys and livers. All of these morphological deviations influence the behavior and metabolism of the mouse. For example, mice with the mini-muscle mutation were observed to have a higher per-gram aerobic capacity. The mini-muscle allele shows Mendelian recessive inheritance. The mutation is a single nucleotide polymorphism (SNP) in an intron of the myosin heavy polypeptide 4 gene.
Pain susceptibility
In the context of pain, pleiotropy refers to the ability of a single gene or genomic region to influence multiple pain-related traits. A study that conducted a genome-wide association joint analysis of 17 pain-related traits revealed that many of the 99 identified risk loci are pleiotropic. This implies that, rather than these loci being associated with just one type of pain, many genetic loci contribute to susceptibility to various forms of pain, including headaches, muscle pain, and chronic pain.
These pleiotropic loci were classified into four groups: loci associated with nearly all pain traits, loci associated with a specific type of pain, loci associated with multiple forms of musculoskeletal pain, and loci associated with headaches.
Additionally, pleiotropy was not limited to different types of pain but also extended to psychiatric, metabolic, and immunological traits. Genetic correlations were found between pain susceptibility and conditions such as depression, increase of body mass index, asthma, and cardiovascular diseases.
DNA repair proteins
DNA repair pathways that repair damage to cellular DNA use many different proteins. These proteins often have other functions in addition to DNA repair. In humans, defects in some of these multifunctional proteins can cause widely differing clinical phenotypes. As an example, mutations in the XPB gene that encodes the largest subunit of the basal Transcription factor II H have several pleiotropic effects. XPB mutations are known to be deficient in nucleotide excision repair of DNA and in the quite separate process of gene transcription. In humans, XPB mutations can give rise to the cancer-prone disorder xeroderma pigmentosum or the noncancer-prone multisystem disorder trichothiodystrophy. Another example in humans is the ERCC6 gene, which encodes a protein that mediates DNA repair, transcription, and other cellular processes throughout the body. Mutations in ERCC6 are associated with disorders of the eye (retinal dystrophy), heart (cardiac arrhythmias), and immune system (lymphocyte immunodeficiency).
Chickens
Chickens exhibit various traits affected by pleiotropic genes. Some chickens exhibit frizzle feather trait, where their feathers all curl outward and upward rather than lying flat against the body. Frizzle feather was found to stem from a deletion in the genomic region coding for α-Keratin. This gene seems to pleiotropically lead to other abnormalities like increased metabolism, higher food consumption, accelerated heart rate, and delayed sexual maturity.
Domesticated chickens underwent a rapid selection process that led to unrelated phenotypes having high correlations, suggesting pleiotropic, or at least close linkage, effects between comb mass and physiological structures related to reproductive abilities. Both males and females with larger combs have higher bone density and strength, which allows females to deposit more calcium into eggshells. This linkage is further evidenced by the fact that two of the genes, HAO1 and BMP2, affecting medullary bone (the part of the bone that transfers calcium into developing eggshells) are located at the same locus as the gene affecting comb mass. HAO1 and BMP2 also display pleiotropic effects with commonly desired domestic chicken behavior; those chickens who express higher levels of these two genes in bone tissue produce more eggs and display less egg incubation behavior.
See also
cis-regulatory element
Enhancer (genetics)
Epistasis
Genetic correlation
Metabolic network
Metabolic supermice
Polygene
References
External links
Pleiotropy is 100 years old
Evolutionary developmental biology
Genetics concepts | Pleiotropy | [
"Biology"
] | 5,071 | [
"Genetics concepts"
] |
1,505,379 | https://en.wikipedia.org/wiki/Robert%20Ammann | Robert Ammann (October 1, 1946 – May, 1994) was an amateur mathematician who made several significant and groundbreaking contributions to the theory of quasicrystals and aperiodic tilings.
Ammann attended Brandeis University, but generally did not go to classes, and left after three years. He worked as a programmer for Honeywell. After twelve years, his position was eliminated as part of a routine cutback, and Ammann ended up working as a mail sorter for a post office.
In 1975, Ammann read an announcement by Martin Gardner of new work by Roger Penrose. Penrose had discovered two simple sets of aperiodic tiles, each consisting of just two quadrilaterals. Since Penrose was taking out a patent, he wasn't ready to publish them, and Gardner's description was rather vague. Ammann wrote a letter to Gardner, describing his own work, which duplicated one of Penrose's sets, plus a foursome of "golden rhombohedra" that formed aperiodic tilings in space.
More letters followed, and Ammann became a correspondent with many of the professional researchers. He discovered several new aperiodic tilings, each among the simplest known examples of aperiodic sets of tiles. He also showed how to generate tilings using lines in the plane as guides for lines marked on the tiles, now called "Ammann bars".
The discovery of quasicrystals in 1982 changed the status of aperiodic tilings and Ammann's work from mere recreational mathematics to respectable academic research.
After more than ten years of coaxing, he agreed to meet various professionals in person, and eventually even went to two conferences and delivered a lecture at each. Afterwards, Ammann dropped out of sight, and died of a heart attack a few years later. News of his death did not reach the research community for a few more years.
Five sets of tiles discovered by Ammann were described in Tilings and Patterns and later, in collaboration with the authors of the book, he published a paper proving the aperiodicity of four of them. Ammann's discoveries came to notice only after Penrose had published his own discovery and gained priority. In 1981 de Bruijn introduced the cut-and-project method, and in 1984 came the sensational news of Shechtman's quasicrystals, which promoted the Penrose tiling to fame. But in 1982 Beenker published a similar mathematical explanation for the octagonal case, which became known as the Ammann–Beenker tiling. In 1987 Wang, Chen and Kuo announced the discovery of a quasicrystal with octagonal symmetry. The decagonal covering of the Penrose tiling was proposed in 1996, and two years later Ben Abraham and Gähler proposed an octagonal variant for the Ammann–Beenker tiling. Ammann's name became that of the perennial second. It is acknowledged, however, that Ammann first proposed the construction of rhombic prisms, which is the three-dimensional model of Shechtman's quasicrystals.
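The cut-and-project method mentioned above can be sketched compactly for the octagonal (Ammann–Beenker) case: keep exactly those points of the four-dimensional integer lattice whose projection into an "internal" plane lands inside a regular octagonal window, and plot their projection into the "physical" plane. The Python sketch below illustrates this standard construction; the window (the shadow of the unit hypercube) and the small generic offset are conventional choices, not details taken from Ammann's own work.

```python
import itertools
import numpy as np

k = np.arange(4)
phys = np.stack([np.cos(k * np.pi / 4), np.sin(k * np.pi / 4)], axis=1)          # physical star
intl = np.stack([np.cos(3 * k * np.pi / 4), np.sin(3 * k * np.pi / 4)], axis=1)  # internal star

# Facet normals of the octagonal window (each perpendicular to one internal
# generator) and the window's support value along each normal.
normals = np.stack([intl[:, 1], -intl[:, 0]], axis=1)
support = 0.5 * np.abs(intl @ normals.T).sum(axis=0)
gamma = np.array([0.013, 0.021])  # generic shift so no point sits on the boundary

vertices = []
for n4 in itertools.product(range(-3, 4), repeat=4):
    p_internal = np.asarray(n4) @ intl - gamma
    if np.all(np.abs(p_internal @ normals.T) <= support):
        vertices.append(np.asarray(n4) @ phys)  # keep the physical projection
print(len(vertices), "vertices of an octagonal tiling patch")
```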
See also
List of aperiodic sets of tiles, includes Ammann's tilings
Ammann A1 tilings
Ammann A5 tilings, also discusses Ammann A4 tilings
References
External links
Ammann tilings and references at the Tilings encyclopedia
Amateur mathematicians
Recreational mathematicians
20th-century American mathematicians
1946 births
1994 deaths
Brandeis University alumni | Robert Ammann | [
"Mathematics"
] | 686 | [
"Recreational mathematics",
"Recreational mathematicians"
] |
1,505,381 | https://en.wikipedia.org/wiki/Numerical%20weather%20prediction | Numerical weather prediction (NWP) uses mathematical models of the atmosphere and oceans to predict the weather based on current weather conditions. Though first attempted in the 1920s, it was not until the advent of computer simulation in the 1950s that numerical weather predictions produced realistic results. A number of global and regional forecast models are run in different countries worldwide, using current weather observations relayed from radiosondes, weather satellites and other observing systems as inputs.
Mathematical models based on the same physical principles can be used to generate either short-term weather forecasts or longer-term climate predictions; the latter are widely applied for understanding and projecting climate change. The improvements made to regional models have allowed significant improvements in tropical cyclone track and air quality forecasts; however, atmospheric models perform poorly at handling processes that occur in a relatively constricted area, such as wildfires.
Manipulating the vast datasets and performing the complex calculations necessary to modern numerical weather prediction requires some of the most powerful supercomputers in the world. Even with the increasing power of supercomputers, the forecast skill of numerical weather models extends to only about six days. Factors affecting the accuracy of numerical predictions include the density and quality of observations used as input to the forecasts, along with deficiencies in the numerical models themselves. Post-processing techniques such as model output statistics (MOS) have been developed to improve the handling of errors in numerical predictions.
A more fundamental problem lies in the chaotic nature of the partial differential equations that describe the atmosphere. It is impossible to solve these equations exactly, and small errors grow with time (doubling about every five days). Present understanding is that this chaotic behavior limits accurate forecasts to about 14 days even with accurate input data and a flawless model. In addition, the partial differential equations used in the model need to be supplemented with parameterizations for solar radiation, moist processes (clouds and precipitation), heat exchange, soil, vegetation, surface water, and the effects of terrain. In an effort to quantify the large amount of inherent uncertainty remaining in numerical predictions, ensemble forecasts have been used since the 1990s to help gauge the confidence in the forecast, and to obtain useful results farther into the future than otherwise possible. This approach analyzes multiple forecasts created with an individual forecast model or multiple models.
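Treating the five-day doubling time as exact (an idealization; actual growth rates vary with the flow regime), the growth of an initial-condition error can be sketched as

```latex
\epsilon(t) \approx \epsilon_0 \, 2^{t/\tau_d}, \qquad \tau_d \approx 5~\text{days},
```

so that after 14 days an error has grown by a factor of roughly seven (two to the power 14/5), and even a small analysis error becomes comparable to the forecast signal itself within about two weeks.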
History
The history of numerical weather prediction began in the 1920s through the efforts of Lewis Fry Richardson, who used procedures originally developed by Vilhelm Bjerknes to produce by hand a six-hour forecast for the state of the atmosphere over two points in central Europe, taking at least six weeks to do so. It was not until the advent of the computer and computer simulations that computation time was reduced to less than the forecast period itself. The ENIAC was used to create the first weather forecasts via computer in 1950, based on a highly simplified approximation to the atmospheric governing equations. In 1954, Carl-Gustaf Rossby's group at the Swedish Meteorological and Hydrological Institute used the same model to produce the first operational forecast (i.e., a routine prediction for practical use). Operational numerical weather prediction in the United States began in 1955 under the Joint Numerical Weather Prediction Unit (JNWPU), a joint project by the U.S. Air Force, Navy and Weather Bureau. In 1956, Norman Phillips developed a mathematical model which could realistically depict monthly and seasonal patterns in the troposphere; this became the first successful climate model. Following Phillips' work, several groups began working to create general circulation models. The first general circulation climate model that combined both oceanic and atmospheric processes was developed in the late 1960s at the NOAA Geophysical Fluid Dynamics Laboratory.
As computers have become more powerful, the size of the initial data sets has increased and newer atmospheric models have been developed to take advantage of the added available computing power. These newer models include more physical processes in the simplifications of the equations of motion in numerical simulations of the atmosphere. In 1966, West Germany and the United States began producing operational forecasts based on primitive-equation models, followed by the United Kingdom in 1972 and Australia in 1977. The development of limited area (regional) models facilitated advances in forecasting the tracks of tropical cyclones as well as air quality in the 1970s and 1980s. By the early 1980s models began to include the interactions of soil and vegetation with the atmosphere, which led to more realistic forecasts.
The output of forecast models based on atmospheric dynamics is unable to resolve some details of the weather near the Earth's surface. As such, a statistical relationship between the output of a numerical weather model and the ensuing conditions at the ground was developed in the 1970s and 1980s, known as model output statistics (MOS). Starting in the 1990s, model ensemble forecasts have been used to help define the forecast uncertainty and to extend the window in which numerical weather forecasting is viable farther into the future than otherwise possible.
Initialization
The atmosphere is a fluid. As such, the idea of numerical weather prediction is to sample the state of the fluid at a given time and use the equations of fluid dynamics and thermodynamics to estimate the state of the fluid at some time in the future. The process of entering observation data into the model to generate initial conditions is called initialization. On land, terrain maps, available globally at high resolution, are used to help model atmospheric circulations within regions of rugged topography, in order to better depict features such as downslope winds, mountain waves and related cloudiness that affects incoming solar radiation. The main inputs from country-based weather services are observations from devices (called radiosondes) in weather balloons that measure various atmospheric parameters and transmit them to a fixed receiver, as well as from weather satellites. The World Meteorological Organization acts to standardize the instrumentation, observing practices and timing of these observations worldwide. Stations either report hourly in METAR reports, or every six hours in SYNOP reports. These observations are irregularly spaced, so they are processed by data assimilation and objective analysis methods, which perform quality control and obtain values at locations usable by the model's mathematical algorithms. The data are then used in the model as the starting point for a forecast.
A variety of methods are used to gather observational data for use in numerical models. Sites launch radiosondes in weather balloons which rise through the troposphere and well into the stratosphere. Information from weather satellites is used where traditional data sources are not available. Commerce provides pilot reports along aircraft routes and ship reports along shipping routes. Research projects use reconnaissance aircraft to fly in and around weather systems of interest, such as tropical cyclones. Reconnaissance aircraft are also flown over the open oceans during the cold season into systems which cause significant uncertainty in forecast guidance, or are expected to be of high impact from three to seven days into the future over the downstream continent. Sea ice began to be initialized in forecast models in 1971. Efforts to involve sea surface temperature in model initialization began in 1972 due to its role in modulating weather in higher latitudes of the Pacific.
Computation
An atmospheric model is a computer program that produces meteorological information for future times at given locations and altitudes. Within any modern model is a set of equations, known as the primitive equations, used to predict the future state of the atmosphere. These equations—along with the ideal gas law—are used to evolve the density, pressure, and potential temperature scalar fields and the air velocity (wind) vector field of the atmosphere through time. Additional transport equations for pollutants and other aerosols are included in some primitive-equation high-resolution models as well. The equations used are nonlinear partial differential equations which are impossible to solve exactly through analytical methods, with the exception of a few idealized cases. Therefore, numerical methods obtain approximate solutions. Different models use different solution methods: some global models and almost all regional models use finite difference methods for all three spatial dimensions, while other global models and a few regional models use spectral methods for the horizontal dimensions and finite-difference methods in the vertical.
These equations are initialized from the analysis data and rates of change are determined. These rates of change predict the state of the atmosphere a short time into the future; the time increment for this prediction is called a time step. This future atmospheric state is then used as the starting point for another application of the predictive equations to find new rates of change, and these new rates of change predict the atmosphere at a yet further time step into the future. This time stepping is repeated until the solution reaches the desired forecast time. The length of the time step chosen within the model is related to the distance between the points on the computational grid, and is chosen to maintain numerical stability. Time steps for global models are on the order of tens of minutes, while time steps for regional models are between one and four minutes. The global models are run at varying times into the future. The UKMET Unified Model is run six days into the future, while the European Centre for Medium-Range Weather Forecasts' Integrated Forecast System and Environment Canada's Global Environmental Multiscale Model both run out to ten days into the future, and the Global Forecast System model run by the Environmental Modeling Center is run sixteen days into the future. The visual output produced by a model solution is known as a prognostic chart, or prog.
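As a rough illustration of the grid-based time stepping described above, the sketch below advances a one-dimensional linear advection equation with a first-order upwind finite-difference scheme, choosing the time step from the grid spacing to maintain numerical stability. It is a toy problem, not an actual forecast model; all values (grid spacing, wind speed, Courant number) are arbitrary illustrative choices.

```python
import numpy as np

# Toy 1-D linear advection, du/dt + c * du/dx = 0, stepped forward with a
# first-order upwind finite-difference scheme. Illustrative only.
nx = 200                     # number of grid points
dx = 100e3                   # grid spacing: 100 km, in metres
c = 20.0                     # constant advecting wind speed, m/s
cfl = 0.8                    # Courant number kept below 1 for stability
dt = cfl * dx / c            # time step tied to the grid spacing

# Initial condition: a smooth "weather feature" on the grid.
u = np.exp(-0.5 * ((np.arange(nx) - 50) / 5.0) ** 2)

t, t_end = 0.0, 24 * 3600.0  # integrate one model day
while t < t_end:
    # Upwind difference (valid for c > 0): each point is updated from
    # its upstream neighbour; the boundary point is held fixed.
    u[1:] -= c * dt / dx * (u[1:] - u[:-1])
    t += dt

print(f"time step = {dt:.0f} s; feature peak after one day = {u.max():.3f}")
```

Halving the grid spacing here would force a halved time step as well, which is one reason finer-resolution models are disproportionately more expensive to run.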
Parameterization
Some meteorological processes are too small-scale or too complex to be explicitly included in numerical weather prediction models. Parameterization is a procedure for representing these processes by relating them to variables on the scales that the model resolves. For example, the gridboxes in weather and climate models have sides on the order of several kilometres to a few hundred kilometres in length. A typical cumulus cloud is much smaller than a single gridbox, and would require a grid even finer than this to be represented physically by the equations of fluid motion. Therefore, the processes that such clouds represent are parameterized, by processes of various sophistication. In the earliest models, if a column of air within a model gridbox was conditionally unstable (essentially, the bottom was warmer and moister than the top) and the water vapor content at any point within the column became saturated then it would be overturned (the warm, moist air would begin rising), and the air in that vertical column mixed. More sophisticated schemes recognize that only some portions of the box might convect and that entrainment and other processes occur. Weather models with sufficiently small gridboxes can explicitly represent convective clouds, although they still need to parameterize cloud microphysics, which occurs at a smaller scale. The formation of large-scale (stratus-type) clouds is more physically based; they form when the relative humidity reaches some prescribed value. The cloud fraction can be related to this critical value of relative humidity.
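As a minimal sketch of the relative-humidity-based cloud scheme just described, the function below diagnoses a large-scale cloud fraction that is zero below a prescribed critical relative humidity and grows to cover the gridbox at saturation. The threshold value and the linear ramp are illustrative assumptions, not the formulation of any particular operational model.

```python
import numpy as np

def cloud_fraction(rh, rh_crit=0.8):
    """Diagnose large-scale (stratus-type) cloud fraction from relative
    humidity. Illustrative: cloud appears once rh exceeds the assumed
    critical value rh_crit and fills the gridbox as rh approaches 1."""
    rh = np.asarray(rh, dtype=float)
    return np.clip((rh - rh_crit) / (1.0 - rh_crit), 0.0, 1.0)

print(cloud_fraction([0.5, 0.85, 1.0]))  # -> [0.   0.25 1.  ]
```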
The amount of solar radiation reaching the ground, as well as the formation of cloud droplets occur on the molecular scale, and so they must be parameterized before they can be included in the model. Atmospheric drag produced by mountains must also be parameterized, as the limitations in the resolution of elevation contours produce significant underestimates of the drag. This method of parameterization is also done for the surface flux of energy between the ocean and the atmosphere, in order to determine realistic sea surface temperatures and type of sea ice found near the ocean's surface. Sun angle as well as the impact of multiple cloud layers is taken into account. Soil type, vegetation type, and soil moisture all determine how much radiation goes into warming and how much moisture is drawn up into the adjacent atmosphere, and thus it is important to parameterize their contribution to these processes. Within air quality models, parameterizations take into account atmospheric emissions from multiple relatively tiny sources (e.g. roads, fields, factories) within specific grid boxes.
Domains
The horizontal domain of a model is either global, covering the entire Earth, or regional, covering only part of the Earth. Regional models (also known as limited-area models, or LAMs) allow for the use of finer grid spacing than global models because the available computational resources are focused on a specific area instead of being spread over the globe. This allows regional models to resolve explicitly smaller-scale meteorological phenomena that cannot be represented on the coarser grid of a global model. Regional models use a global model to specify conditions at the edge of their domain (boundary conditions) in order to allow systems from outside the regional model domain to move into its area. Uncertainty and errors within regional models are introduced by the global model used for the boundary conditions of the edge of the regional model, as well as errors attributable to the regional model itself.
The vertical coordinate is handled in various ways. Lewis Fry Richardson's 1922 model used geometric height (z) as the vertical coordinate. Later models substituted the geometric coordinate with a pressure coordinate system, in which the geopotential heights of constant-pressure surfaces become dependent variables, greatly simplifying the primitive equations. This correlation between coordinate systems can be made since pressure decreases with height through the Earth's atmosphere. The first model used for operational forecasts, the single-layer barotropic model, used a single pressure coordinate at the 500-millibar (about 5.5 km) level, and thus was essentially two-dimensional. High-resolution models—also called mesoscale models—such as the Weather Research and Forecasting model tend to use normalized pressure coordinates referred to as sigma coordinates. This coordinate system receives its name from the independent variable used to scale atmospheric pressures with respect to the pressure at the surface, and in some cases also with the pressure at the top of the domain.
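In its simplest terrain-following form, with p the pressure, p_s the surface pressure, and p_t an assumed fixed pressure at the top of the model domain, the sigma coordinate can be written

```latex
\sigma = \frac{p - p_t}{p_s - p_t},
```

so that sigma equals 1 at the ground regardless of the height of the terrain and 0 at the model top.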
Model output statistics
Because forecast models based upon the equations for atmospheric dynamics do not perfectly determine weather conditions, statistical methods have been developed to attempt to correct the forecasts. Statistical models were created based upon the three-dimensional fields produced by numerical weather models, surface observations and the climatological conditions for specific locations. These statistical models are collectively referred to as model output statistics (MOS), and were developed by the National Weather Service for their suite of weather forecasting models in the late 1960s.
Model output statistics differ from the perfect prog technique, which assumes that the output of numerical weather prediction guidance is perfect. MOS can correct for local effects that cannot be resolved by the model due to insufficient grid resolution, as well as model biases. Because MOS is run after its respective global or regional model, its production is known as post-processing. Forecast parameters within MOS include maximum and minimum temperatures, percentage chance of rain within a several hour period, precipitation amount expected, chance that the precipitation will be frozen in nature, chance for thunderstorms, cloudiness, and surface winds.
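A minimal sketch of the regression idea behind MOS: archived model output for a location is fit against the conditions actually observed there, and the fitted relationship then corrects new raw forecasts. The data values and the single-predictor setup are invented for illustration; operational MOS uses many predictors.

```python
import numpy as np

# Hypothetical training pairs: raw model 2 m temperature vs. what a
# station actually observed (values in kelvin, invented for illustration).
model_t2m = np.array([271.0, 275.5, 280.2, 284.8, 290.1])
observed  = np.array([272.4, 276.1, 281.5, 285.2, 291.8])

# Least-squares fit observed ~ a * model + b, standing in for the
# multi-predictor regressions of operational MOS.
a, b = np.polyfit(model_t2m, observed, deg=1)

raw = 278.0                   # a new raw model forecast
corrected = a * raw + b       # MOS-style post-processed forecast
print(f"raw {raw:.1f} K -> corrected {corrected:.1f} K")
```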
Ensembles
In 1963, Edward Lorenz discovered the chaotic nature of the fluid dynamics equations involved in weather forecasting. Extremely small errors in temperature, winds, or other initial inputs given to numerical models will amplify and double every five days, making it impossible for long-range forecasts—those made more than two weeks in advance—to predict the state of the atmosphere with any degree of forecast skill. Furthermore, existing observation networks have poor coverage in some regions (for example, over large bodies of water such as the Pacific Ocean), which introduces uncertainty into the true initial state of the atmosphere. While a set of equations, known as the Liouville equations, exists to determine the initial uncertainty in the model initialization, the equations are too complex to run in real-time, even with the use of supercomputers. These uncertainties limit forecast model accuracy to about five or six days into the future.
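The sensitivity Lorenz found can be reproduced with his 1963 three-variable system. The sketch below integrates two copies of the system whose initial states differ by one part in 10^8 and prints their growing separation; the parameter values are the classic ones, and simple forward-Euler stepping is used for brevity rather than accuracy.

```python
import numpy as np

def lorenz_step(s, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz (1963) system."""
    x, y, z = s
    return s + dt * np.array([sigma * (y - x),
                              x * (rho - z) - y,
                              x * y - beta * z])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-8, 0.0, 0.0])   # tiny perturbation to one variable

for step in range(1, 3001):
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 500 == 0:
        # The separation grows roughly exponentially until it saturates
        # at the size of the attractor, the analogue of losing all skill.
        print(step, np.linalg.norm(a - b))
```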
Edward Epstein recognized in 1969 that the atmosphere could not be completely described with a single forecast run due to inherent uncertainty, and proposed using an ensemble of stochastic Monte Carlo simulations to produce means and variances for the state of the atmosphere. Although this early example of an ensemble showed skill, in 1974 Cecil Leith showed that they produced adequate forecasts only when the ensemble probability distribution was a representative sample of the probability distribution in the atmosphere.
Since the 1990s, ensemble forecasts have been used operationally (as routine forecasts) to account for the stochastic nature of weather processes – that is, to resolve their inherent uncertainty. This method involves analyzing multiple forecasts created with an individual forecast model by using different physical parametrizations or varying initial conditions. Starting in 1992 with ensemble forecasts prepared by the European Centre for Medium-Range Weather Forecasts (ECMWF) and the National Centers for Environmental Prediction, model ensemble forecasts have been used to help define the forecast uncertainty and to extend the window in which numerical weather forecasting is viable farther into the future than otherwise possible. The ECMWF model, the Ensemble Prediction System, uses singular vectors to simulate the initial probability density, while the NCEP ensemble, the Global Ensemble Forecasting System, uses a technique known as vector breeding. The UK Met Office runs global and regional ensemble forecasts where perturbations to initial conditions are used by 24 ensemble members in the Met Office Global and Regional Ensemble Prediction System (MOGREPS) to produce 24 different forecasts.
In a single model-based approach, the ensemble forecast is usually evaluated in terms of an average of the individual forecasts concerning one forecast variable, as well as the degree of agreement between various forecasts within the ensemble system, as represented by their overall spread. Ensemble spread is diagnosed through tools such as spaghetti diagrams, which show the dispersion of one quantity on prognostic charts for specific time steps in the future. Another tool where ensemble spread is used is a meteogram, which shows the dispersion in the forecast of one quantity for one specific location. It is common for the ensemble spread to be too small to include the weather that actually occurs, which can lead to forecasters misdiagnosing model uncertainty; this problem becomes particularly severe for forecasts of the weather about ten days in advance. When ensemble spread is small and the forecast solutions are consistent within multiple model runs, forecasters perceive more confidence in the ensemble mean, and the forecast in general. Despite this perception, a spread-skill relationship is often weak or not found, as spread-error correlations are normally less than 0.6, and only under special circumstances range between 0.6 and 0.7.
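The ensemble mean and spread discussed above amount to simple statistics over the member forecasts. In the sketch below, 24 hypothetical member forecasts of temperature at one location and lead time (the values are invented) are reduced to a mean and a sample standard deviation:

```python
import numpy as np

# 24 invented ensemble-member temperature forecasts (K) for one
# location and one lead time.
members = 285.0 + np.random.default_rng(0).normal(0.0, 1.5, size=24)

ensemble_mean = members.mean()          # consensus forecast
ensemble_spread = members.std(ddof=1)   # sample standard deviation

print(f"mean = {ensemble_mean:.2f} K, spread = {ensemble_spread:.2f} K")
# Small spread is often read as confidence, but as noted above the
# spread-skill relationship is frequently weak.
```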
In the same way that many forecasts from a single model can be used to form an ensemble, multiple models may also be combined to produce an ensemble forecast. This approach is called multi-model ensemble forecasting, and it has been shown to improve forecasts when compared to a single model-based approach. Models within a multi-model ensemble can be adjusted for their various biases, which is a process known as superensemble forecasting. This type of forecast significantly reduces errors in model output.
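A minimal sketch of the bias-adjustment step in superensemble forecasting: each model's mean error over a training period is removed before the models are combined. Equal weights are used here for brevity, whereas the actual technique derives weights from the training data; all numbers are invented.

```python
import numpy as np

# Invented training data: past forecasts from two models plus the
# verifying observations (temperatures in kelvin).
model_a = np.array([280.0, 283.5, 286.0, 281.2])
model_b = np.array([278.5, 282.0, 284.4, 279.9])
obs     = np.array([279.2, 282.6, 285.1, 280.4])

bias_a = (model_a - obs).mean()   # each model's systematic error
bias_b = (model_b - obs).mean()

new_a, new_b = 284.0, 282.3       # new raw forecasts from each model

# Subtract each model's bias, then combine with equal weights.
combined = 0.5 * (new_a - bias_a) + 0.5 * (new_b - bias_b)
print(f"bias-adjusted multi-model forecast: {combined:.2f} K")
```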
Applications
Air quality modeling
Air quality forecasting attempts to predict when the concentrations of pollutants will attain levels that are hazardous to public health. The concentration of pollutants in the atmosphere is determined by their transport, or mean velocity of movement through the atmosphere, their diffusion, chemical transformation, and ground deposition. In addition to pollutant source and terrain information, these models require data about the state of the fluid flow in the atmosphere to determine its transport and diffusion. Meteorological conditions such as thermal inversions can prevent surface air from rising, trapping pollutants near the surface, which makes accurate forecasts of such events crucial for air quality modeling. Urban air quality models require a very fine computational mesh, requiring the use of high-resolution mesoscale weather models; in spite of this, the quality of numerical weather guidance is the main uncertainty in air quality forecasts.
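As a sketch of the transport-and-diffusion calculation, the classic steady-state Gaussian plume formula below estimates the concentration downwind of a single point source. It is a textbook building block rather than the full chemistry-resolving grid models described above, and every input value in the example is an arbitrary illustration.

```python
import numpy as np

def gaussian_plume(q, u, y, z, sigma_y, sigma_z, h):
    """Steady-state Gaussian plume concentration (kg/m^3).
    q: emission rate (kg/s); u: mean wind speed (m/s); h: source height (m);
    y, z: crosswind and vertical positions (m); sigma_y, sigma_z: dispersion
    widths (m) at the downwind distance of interest."""
    lateral = np.exp(-y**2 / (2.0 * sigma_y**2))
    vertical = (np.exp(-(z - h)**2 / (2.0 * sigma_z**2))
                + np.exp(-(z + h)**2 / (2.0 * sigma_z**2)))  # ground reflection
    return q / (2.0 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# Ground-level concentration on the plume centreline for invented inputs:
print(gaussian_plume(q=1.0, u=5.0, y=0.0, z=0.0,
                     sigma_y=50.0, sigma_z=20.0, h=30.0))
```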
Climate modeling
A General Circulation Model (GCM) is a mathematical model that can be used in computer simulations of the global circulation of a planetary atmosphere or ocean. An atmospheric general circulation model (AGCM) is essentially the same as a global numerical weather prediction model, and some (such as the one used in the UK Unified Model) can be configured for both short-term weather forecasts and longer-term climate predictions. Along with sea ice and land-surface components, AGCMs and oceanic GCMs (OGCM) are key components of global climate models, and are widely applied for understanding the climate and projecting climate change. For aspects of climate change, a range of man-made chemical emission scenarios can be fed into the climate models to see how an enhanced greenhouse effect would modify the Earth's climate. Versions designed for climate applications with time scales of decades to centuries were originally created in 1969 by Syukuro Manabe and Kirk Bryan at the Geophysical Fluid Dynamics Laboratory in Princeton, New Jersey. When run for multiple decades, computational limitations mean that the models must use a coarse grid that leaves smaller-scale interactions unresolved.
Ocean surface modeling
The transfer of energy between the wind blowing over the surface of an ocean and the ocean's upper layer is an important element in wave dynamics. The spectral wave transport equation is used to describe the change in wave spectrum over changing topography. It simulates wave generation, wave movement (propagation within a fluid), wave shoaling, refraction, energy transfer between waves, and wave dissipation. Since surface winds are the primary forcing mechanism in the spectral wave transport equation, ocean wave models use information produced by numerical weather prediction models as inputs to determine how much energy is transferred from the atmosphere into the layer at the surface of the ocean. Along with dissipation of energy through whitecaps and resonance between waves, surface winds from numerical weather models allow for more accurate predictions of the state of the sea surface.
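In schematic form (notation and the exact set of source terms vary between wave models), the spectral transport equation balances the local change and propagation of the wave spectrum F against sources and sinks:

```latex
\frac{\partial F}{\partial t} + \nabla \cdot \left( \mathbf{c}_g F \right)
  = S_{\mathrm{in}} + S_{\mathrm{nl}} + S_{\mathrm{ds}},
```

where c_g is the group velocity and the right-hand side collects the wind input, nonlinear wave-wave transfer, and dissipation terms.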
Tropical cyclone forecasting
Tropical cyclone forecasting also relies on data provided by numerical weather models. Three main classes of tropical cyclone guidance models exist: Statistical models are based on an analysis of storm behavior using climatology, and correlate a storm's position and date to produce a forecast that is not based on the physics of the atmosphere at the time. Dynamical models are numerical models that solve the governing equations of fluid flow in the atmosphere; they are based on the same principles as other limited-area numerical weather prediction models but may include special computational techniques such as refined spatial domains that move along with the cyclone. Models that use elements of both approaches are called statistical-dynamical models.
In 1978, the first hurricane-tracking model based on atmospheric dynamics—the movable fine-mesh (MFM) model—began operating. Within the field of tropical cyclone track forecasting, despite the ever-improving dynamical model guidance which occurred with increased computational power, it was not until the 1980s when numerical weather prediction showed skill, and until the 1990s when it consistently outperformed statistical or simple dynamical models. Predictions of the intensity of a tropical cyclone based on numerical weather prediction continue to be a challenge, since statistical methods continue to show higher skill over dynamical guidance.
Wildfire modeling
On a molecular scale, there are two main competing reaction processes involved in the degradation of cellulose, or wood fuels, in wildfires. When there is a low amount of moisture in a cellulose fiber, volatilization of the fuel occurs; this process will generate intermediate gaseous products that will ultimately be the source of combustion. When moisture is present, or when enough heat is being carried away from the fiber, charring occurs. The chemical kinetics of both reactions indicate that there is a point at which the level of moisture is low enough—and/or heating rates high enough—for combustion processes to become self-sufficient. Consequently, changes in wind speed, direction, moisture, temperature, or lapse rate at different levels of the atmosphere can have a significant impact on the behavior and growth of a wildfire. Since the wildfire acts as a heat source to the atmospheric flow, the wildfire can modify local advection patterns, introducing a feedback loop between the fire and the atmosphere.
A simplified two-dimensional model for the spread of wildfires that used convection to represent the effects of wind and terrain, as well as radiative heat transfer as the dominant method of heat transport, led to reaction–diffusion systems of partial differential equations. More complex models join numerical weather models or computational fluid dynamics models with a wildfire component which allow the feedback effects between the fire and the atmosphere to be estimated. The additional complexity in the latter class of models translates to a corresponding increase in their computer power requirements. In fact, a full three-dimensional treatment of combustion via direct numerical simulation at scales relevant for atmospheric modeling is not currently practical because of the excessive computational cost such a simulation would require. Numerical weather models have limited forecast skill at spatial resolutions under about one kilometre, forcing complex wildfire models to parameterize the fire in order to calculate how the winds will be modified locally by the wildfire, and to use those modified winds to determine the rate at which the fire will spread locally.
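The reaction–diffusion systems referred to above take, for a single variable, the generic form

```latex
\frac{\partial u}{\partial t} = \nabla \cdot \left( D \, \nabla u \right) + f(u),
```

where u might represent temperature or remaining fuel, D is a diffusion coefficient, and f(u) is a reaction (combustion) term; actual wildfire models couple several such equations and add transport by the wind.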
See also
Atmospheric physics
Atmospheric thermodynamics
Tropical cyclone forecast model
Types of atmospheric models
References
External links
NOAA Supercomputer upgrade
Air Resources Laboratory
Fleet Numerical Meteorology and Oceanography Center
European Centre for Medium-Range Weather Forecasts
UK Met Office
Computational science
Numerical climate and weather models
Applied mathematics
Weather prediction
Computational fields of study | Numerical weather prediction | [
"Physics",
"Mathematics",
"Technology"
] | 5,123 | [
"Weather prediction",
"Physical phenomena",
"Computational fields of study",
"Weather",
"Applied mathematics",
"Computational science",
"Computing and society"
] |
1,505,382 | https://en.wikipedia.org/wiki/Hybrid%20computer | Hybrid computers are computers that exhibit features of analog computers and digital computers. The digital component normally serves as the controller and provides logical and numerical operations, while the analog component often serves as a solver of differential equations and other mathematically complex problems.
History
The first desktop hybrid computing system was the Hycomp 250, released by Packard Bell in 1961. Another early example was the HYDAC 2400, an integrated hybrid computer released by EAI in 1963. In the 1980s, Marconi Space and Defense Systems Limited (under Peggy Hodges) developed their "Starglow Hybrid Computer", which consisted of three EAI 8812 analog computers linked to an EAI 8100 digital computer, the latter also being linked to an SEL 3200 digital computer. Late in the 20th century, hybrids dwindled with the increasing capabilities of digital computers including digital signal processors.
In general, analog computers are extraordinarily fast, since they are able to solve most mathematically complex equations at the rate at which a signal traverses the circuit, which is generally an appreciable fraction of the speed of light. On the other hand, the precision of analog computers is poor; they are limited to three or, at most, four digits of precision.
Digital computers can be built to take the solution of equations to almost unlimited precision, but quite slowly compared to analog computers. Generally, complex mathematical equations are approximated using iterative methods which take huge numbers of iterations, depending on how good the initial "guess" at the final value is and how much precision is desired. (This initial guess is known as the numerical "seed".) For many real-time operations in the 20th century, such digital calculations were too slow to be of much use (e.g., for very high frequency phased array radars or for weather calculations), while the precision of an analog computer alone was insufficient.
Hybrid computers can be used to obtain a very good but relatively imprecise 'seed' value, using an analog computer front-end, which is then fed into a digital computer iterative process to achieve the final desired degree of precision. With a three or four digit, highly accurate numerical seed, the total digital computation time to reach the desired precision is dramatically reduced, since many fewer iterations are required. One of the main technical problems to be overcome in hybrid computers is minimizing digital-computer noise in analog computing elements and grounding systems.
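The payoff of an accurate seed can be seen with Newton's method, whose quadratic convergence roughly doubles the number of correct digits per iteration. The sketch below compares a crude starting guess with a four-digit-accurate "analog" seed when computing a square root; the numbers are purely illustrative.

```python
def newton_sqrt(a, seed, tol=1e-15):
    """Newton's method for sqrt(a); returns (estimate, iteration count)."""
    x, n = seed, 0
    while abs(x * x - a) > tol * a:
        x = 0.5 * (x + a / x)   # each step roughly doubles correct digits
        n += 1
    return x, n

# Crude seed versus a 4-digit "analog front-end" seed for sqrt(2):
for seed in (1000.0, 1.414):
    root, n = newton_sqrt(2.0, seed)
    print(f"seed {seed}: {n} iterations -> {root:.15f}")
```

With the four-digit seed the loop converges in a couple of iterations, against more than a dozen from the crude guess.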
Consider that the nervous system in animals is a form of hybrid computer. Signals pass across the synapses from one nerve cell to the next as discrete (digital) packets of chemicals, which are then summed within the nerve cell in an analog fashion by building an electro-chemical potential until its threshold is reached, whereupon it discharges and sends out a series of digital packets to the next nerve cell. The advantages are at least threefold: noise within the system is minimized (and tends not to be additive), no common grounding system is required, and there is minimal degradation of the signal even if there are substantial differences in activity of the cells along a path (only the signal delays tend to vary). The individual nerve cells are analogous to analog computers; the synapses are analogous to digital computers.
Hybrid computers are distinct from hybrid systems. The latter may be no more than a digital computer equipped with an analog-to-digital converter at the input and/or a digital-to-analog converter at the output, to convert analog signals for ordinary digital signal processing, and conversely, e.g., for driving physical control systems, such as servomechanisms.
VLSI hybrid computer chip
In 2015, researchers at Columbia University published a paper on a small scale hybrid computer in 65 nm CMOS technology. This 4th-order VLSI hybrid computer contains 4 integrator blocks, 8 multiplier/gain-setting blocks, 8 fanout blocks for distributing current-mode signals, 2 ADCs, 2 DACs and 2 SRAM blocks. Digital controllers are also implemented on the chip for executing external instructions. A robot experiment in the paper demonstrates the use of the hybrid computing chip in today's emerging low-power embedded applications.
References
External links
A New Tool For Science By Daniel Greco and Ken Kuehl, The Wisconsin Engineer, Nov 1972, reprinted Feb 2001
Computing terminology
Classes of computers
Analog computers | Hybrid computer | [
"Technology"
] | 893 | [
"Computing terminology",
"Computers",
"Computer systems",
"Classes of computers"
] |
1,505,491 | https://en.wikipedia.org/wiki/Lassa%20mammarenavirus | Lassa mammarenavirus (LASV) is an arenavirus that causes Lassa hemorrhagic fever,
a type of viral hemorrhagic fever (VHF), in humans and other primates. Lassa mammarenavirus is an emerging virus and a select agent, requiring Biosafety Level 4-equivalent containment. It is endemic in West African countries, especially Sierra Leone, the Republic of Guinea, Nigeria, and Liberia, where the annual incidence of infection is between 300,000 and 500,000 cases, resulting in 5,000 deaths per year.
As of 2012, discoveries within the Mano River region of West Africa had expanded the endemic zone between the two known Lassa endemic regions, indicating that LASV is more widely distributed throughout the tropical wooded savannah ecozone in West Africa. There are no approved vaccines against Lassa fever for use in humans.
Discovery
In 1969, missionary nurse Laura Wine fell ill with a mysterious disease she contracted from an obstetrical patient in Lassa, a village in Borno State, Nigeria. She was then transported to Jos, where she died. Subsequently, two others became infected, one of whom was fifty-two-year-old nurse Lily Pinneo, who had cared for Laura Wine. Samples from Pinneo were sent to Yale University in New Haven, where a new virus, later known as Lassa mammarenavirus, was isolated for the first time by Jordi Casals-Ariet, Sonja Buckley, and others. Casals contracted the fever and nearly lost his life; one technician died from it. By 1972, the multimammate rat, Mastomys natalensis, was found to be the main reservoir of the virus in West Africa, able to shed virus in its urine and feces without exhibiting visible symptoms.
Virology
Structure and genome
Lassa viruses are enveloped, single-stranded, bisegmented, ambisense RNA viruses. Their genome is contained in two RNA segments that code for two proteins each, one in each sense, for a total of four viral proteins.
The large segment encodes a small zinc finger protein (Z) that regulates transcription and replication,
and the RNA polymerase (L). The small segment encodes the nucleoprotein (NP) and the surface glycoprotein precursor (GP, also known as the viral spike), which is proteolytically cleaved into the envelope glycoproteins GP1 and GP2 that bind to the alpha-dystroglycan receptor and mediate host cell entry.
Lassa mammarenavirus causes hemorrhagic fever that is frequently accompanied by immunosuppression. The virus replicates very rapidly, and demonstrates temporal control in replication. The first replication step is transcription of mRNA copies of the negative- or minus-sense genome. This ensures an adequate supply of viral proteins for subsequent steps of replication, as the NP and L proteins are translated from the mRNA. The positive- or plus-sense genome then makes viral complementary RNA (vcRNA) copies of itself. The RNA copies are a template for producing negative-sense progeny, but mRNA is also synthesized from it. The mRNAs synthesized from vcRNA are translated to make the GP and Z proteins. This temporal control allows the spike proteins to be produced last, and therefore delays recognition by the host immune system.
Nucleotide studies of the genome have shown that Lassa has four lineages: three found in Nigeria and the fourth in Guinea, Liberia, and Sierra Leone. The Nigerian strains seem likely to have been ancestral to the others but additional work is required to confirm this.
Receptors
Lassa mammarenavirus gains entry into the host cell by means of the cell-surface receptor the alpha-dystroglycan (alpha-DG), a versatile receptor for proteins of the extracellular matrix. It shares this receptor with the prototypic Old World arenavirus lymphocytic choriomeningitis virus. Receptor recognition depends on a specific sugar modification of alpha-dystroglycan by a group of glycosyltransferases known as the LARGE proteins. Specific variants of the genes encoding these proteins appear to be under positive selection in West Africa where Lassa is endemic. Alpha-dystroglycan is also used as a receptor by viruses of the New World clade C arenaviruses (Oliveros and Latino viruses). In contrast, the New World arenaviruses of clades A and B, which include the important viruses Machupo, Guanarito, Junin, and Sabia in addition to the non pathogenic Amapari virus, use the transferrin receptor 1. A small aliphatic amino acid at the GP1 glycoprotein amino acid position 260 is required for high-affinity binding to alpha-DG. In addition, GP1 amino acid position 259 also appears to be important, since all arenaviruses showing high-affinity alpha-DG binding possess a bulky aromatic amino acid (tyrosine or phenylalanine) at this position.
Unlike most enveloped viruses which use clathrin coated pits for cellular entry and bind to their receptors in a pH dependent fashion, Lassa and lymphocytic choriomeningitis virus instead use an endocytotic pathway independent of clathrin, caveolin, dynamin and actin. Once within the cell the viruses are rapidly delivered to endosomes via vesicular trafficking albeit one that is largely independent of the small GTPases Rab5 and Rab7. On contact with the endosome pH-dependent membrane fusion occurs mediated by the envelope glycoprotein, which at the lower pH of the endosome binds the lysosome protein LAMP1 which results in membrane fusion and escape from the endosome.
Life cycle
The life cycle of Lassa mammarenavirus is similar to that of the Old World arenaviruses. Lassa mammarenavirus enters the cell by receptor-mediated endocytosis. Which endocytotic pathway is used is not known yet, but cellular entry is sensitive to cholesterol depletion; it has been reported that virus internalization is limited upon cholesterol depletion. The receptor used for cell entry is alpha-dystroglycan, a highly conserved and ubiquitously expressed cell surface receptor for extracellular matrix proteins. Dystroglycan, which is later cleaved into alpha-dystroglycan and beta-dystroglycan, is expressed in most cells of mature tissues, and it provides a molecular link between the extracellular matrix and the actin-based cytoskeleton. After the virus enters the cell by alpha-dystroglycan-mediated endocytosis, the low-pH environment triggers pH-dependent membrane fusion and releases the RNP (viral ribonucleoprotein) complex into the cytoplasm.

Viral RNA is unpacked, and replication and transcription initiate in the cytoplasm. As replication starts, both the S and L RNA genomes synthesize antigenomic S and L RNAs, and from the antigenomic RNAs, genomic S and L RNAs are synthesized. Both genomic and antigenomic RNAs are needed for transcription and translation. The S RNA encodes the GP and NP (viral nucleocapsid) proteins, while the L RNA encodes the Z and L proteins. The L protein most likely represents the viral RNA-dependent RNA polymerase. When the cell is infected by the virus, the L polymerase is associated with the viral RNP and initiates transcription of the genomic RNA. The 5' and 3' terminal 19-nt viral promoter regions of both RNA segments are necessary for recognition and binding by the viral polymerase. Primary transcription first produces mRNAs from the genomic S and L RNAs, which code for the NP and L proteins, respectively. Transcription terminates at the stem-loop (SL) structure within the intergenomic region. Arenaviruses use a cap-snatching strategy to gain cap structures from cellular mRNAs, mediated by the endonuclease activity of the L polymerase and the cap-binding activity of NP. The antigenomic RNA transcribes the viral genes GPC and Z, encoded in genomic orientation, from the S and L segments respectively. The antigenomic RNA also serves as the template for replication.

After translation of GPC, it is post-translationally modified in the endoplasmic reticulum. GPC is cleaved into GP1 and GP2 at a later stage of the secretory pathway; the cellular protease SKI-1/S1P has been reported to be responsible for this cleavage. The cleaved glycoproteins are incorporated into the virion envelope when the virus buds and is released from the cell membrane.
Pathogenesis
Lassa fever is caused by the Lassa mammarenavirus. The symptoms include flu-like illness characterized by fever, general weakness, cough, sore throat, headache, and gastrointestinal manifestations. Hemorrhagic manifestations include vascular permeability.
Upon entry, the Lassa mammarenavirus infects almost every tissue in the human body. It starts with the mucosa, intestine, lungs and urinary system, and then progresses to the vascular system.
The main targets of the virus are antigen-presenting cells, mainly dendritic cells and endothelial cells.
In 2012, it was reported how the Lassa mammarenavirus nucleoprotein (NP) sabotages the host's innate immune response. Generally, when a pathogen enters a host, the innate defense system recognizes pathogen-associated molecular patterns (PAMPs) and activates an immune response. One of these mechanisms detects double-stranded RNA (dsRNA), which is only synthesized by negative-sense viruses. In the cytoplasm, dsRNA receptors, such as RIG-I (retinoic acid-inducible gene I) and MDA-5 (melanoma differentiation-associated gene 5), detect dsRNAs and initiate signaling pathways that translocate IRF-3 (interferon regulatory factor 3) and other transcription factors to the nucleus. The translocated transcription factors activate expression of interferons α and β, and these initiate adaptive immunity. The NP encoded by Lassa mammarenavirus is essential in viral replication and transcription, but it also suppresses the host's innate interferon response by inhibiting translocation of IRF-3. The NP of Lassa mammarenavirus has been reported to have an exonuclease activity specific to dsRNA. This dsRNA exonuclease activity counteracts interferon responses by digesting the PAMPs, thus allowing the virus to evade host immune responses.
See also
Coalition for Epidemic Preparedness Innovations
References
Animal viral diseases
Biological agents
Biosafety level 4 pathogens
Arenaviridae
Tropical diseases
Zoonoses | Lassa mammarenavirus | [
"Biology",
"Environmental_science"
] | 2,290 | [
"Biological agents",
"Toxicology",
"Biological warfare"
] |
1,505,732 | https://en.wikipedia.org/wiki/Aprepitant | Aprepitant, sold under the brand name Emend among others, is a medication used to prevent chemotherapy-induced nausea and vomiting and to prevent postoperative nausea and vomiting. It may be used together with ondansetron and dexamethasone. It is taken by mouth or administered by intravenous injection. A pro-drug, fosaprepitant, is also available for intravenous administration.
Common side effects include tiredness, loss of appetite, diarrhea, abdominal pain, hiccups, itchiness, pneumonia, and blood pressure changes. Other severe side effects may include anaphylaxis. While use in pregnancy does not appear to be harmful, such use has not been well studied. Aprepitant belongs to the class of neurokinin-1 receptor antagonists. It works by blocking substance P from attaching to the NK1 receptors.
Aprepitant was approved for medical use in the European Union and the United States in 2003. It is made by Merck & Co. It is on the World Health Organization's List of Essential Medicines.
Medical uses
Aprepitant is used to prevent chemotherapy-induced nausea and vomiting and to prevent postoperative nausea and vomiting. It may be used together with ondansetron and dexamethasone.
Mechanism of action
Aprepitant is classified as an NK1 antagonist because it blocks signals given off by NK1 receptors. This, therefore, decreases the likelihood of vomiting in patients.
NK1 is a G protein-coupled receptor located in the central and peripheral nervous system. This receptor has a dominant ligand known as Substance P (SP). SP is a neuropeptide, composed of 11 amino acids, which sends impulses and messages from the brain. It is found in high concentrations in the vomiting center of the brain, and, when activated, it results in a vomiting reflex. In addition to this it also plays a key part in the transmission of pain impulses from the peripheral receptors to the central nervous system.
Aprepitant has been shown to inhibit both the acute and delayed emesis induced by cytotoxic chemotherapeutic drugs by blocking substance P landing on receptors in the brain's neurons. Positron emission tomography (PET) studies, have demonstrated that aprepitant can cross the blood brain barrier and bind to NK1 receptors in the human brain. It has also been shown to increase the activity of the 5-HT3 receptor antagonist ondansetron and the corticosteroid dexamethasone, which are also used to prevent nausea and vomiting caused by chemotherapy.
Pharmacokinetics
Before clinical testing, a new class of therapeutic agent has to be characterized in terms of preclinical metabolism and excretion studies. Average bioavailability is found to be around 60-65%. Aprepitant is metabolized primarily by CYP3A4 with minor metabolism by CYP1A2 and CYP2C19. Seven metabolites of aprepitant, which are only weakly active, have been identified in human plasma. As a moderate inhibitor of CYP3A4, aprepitant can increase plasma concentrations of co-administered medicinal products that are metabolized through CYP3A4. Specific interaction has been demonstrated with oxycodone, where aprepitant both increased the efficacy and worsened the side effects of oxycodone; however it is unclear whether this is due to CYP3A4 inhibition or through its NK-1 antagonist action. Following IV administration of a 14C-labeled prodrug of aprepitant (L-758298), which is converted rapidly and completely to aprepitant, approximately 57% of the total radioactivity is excreted in the urine and 45% in feces. No unchanged substance is excreted in urine.
Structure and properties
Aprepitant is made up of a morpholine core with two substituents attached to adjacent ring carbons. These substitute groups are trifluoromethylated 1-phenylethanol and fluorophenyl group. Aprepitant also has a third substituent (triazolinone), which is joined to the morpholine ring nitrogen. It has three chiral centres very close together, which combine to produce an amino acetal arrangement. Its empirical formula is C23H21F7N4O3.
Synthesis
Shortly after Merck initiated research into reducing the severity and likelihood of chemotherapy-induced nausea and vomiting, researchers discovered that aprepitant is effective in prevention. Researchers worked on devising a process to create aprepitant, and within a short period they came up with an effective synthesis of the substance. This original synthesis was deemed workable and proved to be a crucial step in achieving commercialization; however, Merck decided that the process was not environmentally sustainable. This was because the original synthesis required six steps, many of which needed dangerous chemicals such as sodium cyanide, dimethyltitanocene, and gaseous ammonia. In addition, for the process to be effective, cryogenic temperatures were needed for some of the steps, and other steps produced hazardous byproducts such as methane. The environmental concerns over the synthesis of aprepitant became so great that the Merck research team decided to withdraw the drug from clinical trials and attempt to create a different synthesis of aprepitant.
The gamble of taking the drug out of clinical trials proved to be successful when shortly afterwards the team of Merck researchers came up with an alternative and more environmentally friendly synthesis of aprepitant. The new process works by four compounds of similar size and complexity being fused together. This therefore is a much simpler process and requires only three steps, half the number of the original synthesis.
The new process begins with enantiopure trifluoromethylated phenyl ethanol being joined to a racemic morpholine precursor. This results in the desired isomer crystallizing on the top of the solution and the unwanted isomer remaining in the solution. The unwanted isomer is then converted to the desired isomer through a crystallization-induced asymmetric transformation. By the end of this step a secondary amine, the base of the drug, is formed.
The second step involves the fluorophenyl group being attached to the morpholine ring. Once this has been achieved, the third and final step can be initiated. This step involves a side chain of triazolinone being added to the ring. Once this step has been successfully completed, a stable molecule of aprepitant has been produced.
This more streamlined route yields around 76% more aprepitant than the original process and reduces the operating cost by a significant amount. In addition, the new process reduces the amount of solvent and reagents required by about 80%, saving an estimated 340,000 L per ton of aprepitant produced.
The improvements in the synthesis process have also decreased the long-term detriment to the natural environment associated with the original procedure, due to eliminating the use of several hazardous chemicals.
History
It was approved by the US Food and Drug Administration (FDA) in 2003. In 2008, fosaprepitant, an intravenous form of aprepitant, was approved in the United States.
Research
Major depression
Plans to develop aprepitant as an antidepressant have been withdrawn. Subsequently, other trials with NK1 receptor antagonists, casopitant and orvepitant, have shown promising results.
Beyond suggestions that PET receptor occupancy must not be used routinely to cap dosing for new medical indications for this class, or that greater than 99% human receptor occupancy might be required for consistent psycho-pharmacological or other therapeutic effects, critical scientific dissection and debate of the above data might be needed to enable aprepitant, and the class of NK1 antagonists as a whole, to fulfill preclinically predicted utilities beyond chemotherapy-induced nausea and vomiting (i.e., for other psychiatric disorders, addictions, neuropathic pain, migraine, osteoarthritis, overactive bladder, inflammatory bowel disease, and other disorders with suspected inflammatory or immunological components). However, most data remain proprietary, and thus reviews of the expanded clinical potential for drugs like aprepitant range from optimistic to poor.
Cannabinoid Hyperemesis Syndrome
Aprepitant has been identified as having strong potential in treating protracted vomiting episodes in individuals with cannabinoid hyperemesis syndrome. This syndrome is characterized by nausea, cyclical vomiting, and cramping abdominal pain resulting from prolonged, frequent cannabis use.
Standard first-line antiemetics such as ondansetron and prochlorperazine are often ineffective in treating cannabinoid hyperemesis syndrome.
References
Antiemetics
CYP3A4 inhibitors
4-Fluorophenyl compounds
Lactams
Drugs developed by Merck & Co.
Morpholines
NK1 receptor antagonists
Triazoles
Trifluoromethyl compounds
Ureas
World Health Organization essential medicines
"Chemistry"
] | 1,917 | [
"Organic compounds",
"Ureas"
] |
1,505,811 | https://en.wikipedia.org/wiki/Phosphosilicate%20glass | Phosphosilicate glass, commonly referred to by the acronym PSG, is a silicate glass commonly used in semiconductor device fabrication for intermetal layers, i.e., insulating layers deposited between succeedingly higher metal or conducting layers, due to its effect in gettering alkali ions. Another common type of phosphosilicate glass is borophosphosilicate glass (BPSG).
Soda-lime phosphosilicate glasses also form the basis for bioactive glasses (e.g. Bioglass), a family of materials which chemically convert to mineralised bone (hydroxy-carbonate-apatite) in physiological fluid.
Bismuth doped phosphosilicate glasses are being explored for use as the active gain medium in fiber lasers for fiber-optic communication.
See also
Wafer (electronics)
References
Glass compositions
Semiconductor device fabrication | Phosphosilicate glass | [
"Chemistry",
"Materials_science"
] | 188 | [
"Glass compositions",
"Glass chemistry",
"Semiconductor device fabrication",
"Microtechnology"
] |
1,505,829 | https://en.wikipedia.org/wiki/Borophosphosilicate%20glass | Borophosphosilicate glass, commonly known as BPSG, is a type of silicate glass that includes additives of both boron and phosphorus. Silicate glasses such as PSG and borophosphosilicate glass are commonly used in semiconductor device fabrication for intermetal layers, i.e., insulating layers deposited between succeedingly higher metal or conducting layers.
BPSG has been implicated in increasing a device's susceptibility to soft errors since the boron-10 isotope is good at capturing thermal neutrons from cosmic radiation. It then undergoes fission producing a gamma ray, an alpha particle, and a lithium ion. These products may then dump charge into nearby structures, causing data loss (bit flipping, or single event upset).
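The capture reaction responsible can be written as follows (the gamma ray accompanies most, though not all, captures):

```latex
{}^{10}\mathrm{B} + n \;\rightarrow\; {}^{7}\mathrm{Li} + {}^{4}\mathrm{He} + \gamma
```

The lithium ion and the alpha particle (a helium-4 nucleus) carry most of the released energy, which is what deposits charge into nearby device structures.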
In critical designs, depleted boron consisting almost entirely of boron-11 is used to avoid this effect as a radiation hardening measure. Boron-11 is a by-product of the nuclear industry.
References
Semiconductor device fabrication
Glass compositions
Boron compounds | Borophosphosilicate glass | [
"Chemistry",
"Materials_science"
] | 212 | [
"Glass compositions",
"Glass chemistry",
"Semiconductor device fabrication",
"Microtechnology"
] |
11,714,262 | https://en.wikipedia.org/wiki/Ecosharing | Ecosharing is an environmental ethic for people to live by: that their own impact on the Earth's biosphere be limited to no more than their own fair ecoshare. The term seems to have been first used by G. Tyler Miller, Jr. in the 1975 edition of his Living in the Environment text. The 1990 book Coming of Age in the Global Village sought to quantify an "ecoshare" by linking it to average world per capita income and energy use. A more modern approach might extend this by also including one's carbon footprint. However it is gauged, an ecoshare is determined by overall assessment of the human impact on the biosphere, computer models of its future condition, and necessary limits imposed by sustainability criteria.
What might the life of someone attempting to live by this ecosharing ethic look like? In Choices We Make in the Global Village, the sequel to his earlier book, Stephen Cook continues the story of his attempt to live a life based on what he calls "Enoughness," and quantitatively connects it with previously defined ecosharing objectives and carbon footprint. His lifestyle is inspired by Mahatma Gandhi's urging to "Live simply so that others may simply live."
References
External links
"Enoughness" from Project Worldview
Environmental ethics | Ecosharing | [
"Environmental_science"
] | 263 | [
"Environmental ethics"
] |
11,714,782 | https://en.wikipedia.org/wiki/Hydnum%20rufescens | Hydnum rufescens, commonly known as the terracotta hedgehog, is an edible basidiomycete of the family Hydnaceae. It belongs to the small group of mushrooms often referred to as the tooth fungi, which produce fruit bodies whose cap undersurfaces are covered by hymenophores resembling spines or teeth, and not pores or gills.
It is very similar to the more common hedgehog fungus (Hydnum repandum), and was previously sometimes considered a variety of that species. However, the following differences have been noted:
the cap of H. rufescens is russet rather than beige,
the overall dimensions are smaller and more regular in shape, with a central stipe,
the spines are not decurrent, and
the spores are slightly larger.
Both species are found in European coniferous and deciduous forests growing on soil. It is reportedly ectomycorrhizal with Picea abies, Pinus sylvestris, Fagus sylvatica and Quercus robur.
References
Hydnum rufescens at Northern Ireland Fungus Group URL accessed 11 June 2007
Edible fungi
Fungi described in 1800
Fungi of Europe
Taxa named by Christiaan Hendrik Persoon
rufescens
Fungus species | Hydnum rufescens | [
"Biology"
] | 263 | [
"Fungi",
"Fungus species"
] |
11,715,031 | https://en.wikipedia.org/wiki/Iran%E2%80%93Pakistan%20border | The Iran–Pakistan border (; ), is the international boundary that separates Iran and Pakistan. It demarcates the Iranian province of Sistan and Baluchestan from the Pakistani province of Balochistan, and spans 909 kilometres (565 miles) in length.
Description
The border begins at the tripoint with Afghanistan at the Kuh-i-Malik Salih mountain, then follows a straight line going southeast, then a series of mountain ridges, seasonal streams, and the Tahlab River southwest to the vicinity of Hamun-e Mashkel lake. The boundary then veers sharply southwards via a series of straight lines, then east along some mountains to the Mashkil River, which it follows southwards, before reaching the Nahang River which it follows westwards. It leaves the Nahang and then goes overland via various mountain ridges and straight-line segments southwards to Gwatar Bay in the Gulf of Oman.
History
The modern boundary cuts through the region known as Balochistan, an area long contested between various empires centred in Persia (Iran), Afghanistan, and Pakistan. From the 18th century onwards, the British gradually took control of most of India, including what is now Pakistan, bringing it into close proximity with lands traditionally claimed by Persia. In 1871, the British (representing the Khan of Kalat) and the Persians agreed to define their mutual frontier; a boundary commission surveyed the area the following year but did not mark the border on the ground. Some minor alignment issues stemming from this were tidied up via another joint treaty in 1905.
In 1947, the British departed, and Pakistan gained independence from British India. Iran and Pakistan confirmed their mutual border by treaty in 1958–59, fully mapping the border area and demarcating it on the ground with pillars.
In June 2023, there was a terrorist attack at the Iran–Pakistan border in which several Pakistani border patrol officers were killed. A few days earlier, another terrorist attack at the border had killed five Iranian border patrol officers.
Border barriers
Iranian fencing project (2011)
The 3 ft (91.4 cm) thick and 10 ft (3.05 m) high concrete wall, fortified with steel rods, will span the 700 km frontier stretching from Taftan to Mand. The project will include large earth and stone embankments and deep ditches to deter illegal trade crossings and drug smuggling to both sides. The border region is already dotted with police observation towers and fortress-style garrisons for troops. Iran and Pakistan do not have border disputes or other irredentist claims, and Pakistan's Foreign Ministry has stated, "Pakistan has no reservation because Iran is constructing the fence on its territory."
History and stated purpose
The wall is being constructed to stop illegal border crossings and stem the flow of drugs, and is also a response to terror attacks, notably the one in the Iranian border town of Zahedan on February 17, 2007, which killed 13 people, including nine Iranian Revolutionary Guard officials. However Pakistani Foreign Ministry spokeswoman Tasnim Aslam denied any link between the fence and the bomb blast, saying that Iran was not blaming these incidents on Pakistan.
Reactions to the barrier
The Foreign Ministry of Pakistan has stated that Iran has the right to erect border fencing in its territory. However, opposition to the construction of the wall was raised in the Provincial Assembly of Balochistan. It maintained that the wall would create problems for the local people whose lands straddle the border region. They apprehended the barrier would further divide politically and socially the local population and impede trade and social activities. An opposition leader in the provincial assembly in 2007 said the governments of the two countries should take the people of the area into confidence, and demanded a stop to the construction of the barrier.
Pakistani fencing project (2019)
In 2019, Pakistan announced its intention to fence its border with Iran. In May 2019, Pakistan allocated $18.6 million to fund the border fencing project. In September 2021, Pakistan approved an additional $58.5 million for border fencing. As of mid-2021, Pakistan had completed 46% of the border fencing and aimed to finish the project by December 2021. As of January 2022, Pakistan had fenced 80% of the border. The Interior Ministry confirmed plans to fence the remaining border sections.
Border crossings and markets
On the Pakistani side, the Frontier Corps oversees border security and immigration. In Iran, the Iranian Revolutionary Guards are responsible for border security.
Pakistan and Iran share four official border crossings. Taftan and Gabd serve both pedestrians and trade, while Mand and Chadgi are exclusively for trade. Since Iran drives on the right, and Pakistan on the left, the border crossings require road traffic to change sides.
Additionally, both countries have agreed to establish six joint-border markets to enhance trade. Initially, three markets will open at the border points of Kuhak-Chadgi, Rimdan-Gabd, and Pishin-Mand areas. The remaining three markets will be established in the second phase. Currently, the first three border markets out of six have been constructed and are operational at Gabd, Mand, and Chadgi.
Road
Rail
Taftan / Mirjaveh, on the line between Quetta and Zahedan
Settlements near the border
Iran
Lar Marud
Zahedan
Kacheh Rud
Mirjaveh
Ladiz
Narreh Now
Jaleq
Kalleh
Fahreh
Murt
Esfandak
Kavari
Pishin
Kushak
Pakistan
Sohtagan
Qila Ladgasht
Washap
Sar-i Parom
Girbum
Sohrag
Abdui
Taftan
Sirag
Kurumb
Jiwani
See also
Iran–Pakistan relations
References
Further reading
Iran to wall off Baluchistan border, Al Jazeera
’بلوچوں کوتقسیم کیاجا رہا ہے‘ ("The Baloch are being divided"), BBC
Iran raising wall along border with Pakistan
border
Borders of Iran
Borders of Pakistan
International borders
Balochistan
Border barriers | Iran–Pakistan border | [
"Engineering"
] | 1,228 | [
"Separation barriers",
"Border barriers"
] |
11,715,384 | https://en.wikipedia.org/wiki/William%20Craig%20%28philosopher%29 | William Craig (November 13, 1918 – January 13, 2016) was an American academic and philosopher who taught at the University of California, Berkeley. His research interests included mathematical logic and the philosophy of science, and he is best known for the Craig interpolation theorem.
Biography
William Craig was born in Nuremberg, Weimar Republic, on November 13, 1918. He graduated from Harvard University with a Ph.D. in 1951. He married Julia Rebecca Dwight Wilson and had four children: Ruth, Walter, Sarah, and Deborah. In 1959, he moved to UC Berkeley. He died on January 13, 2016, at the age of 97.
Achievements
Craig is particularly remembered for two theorems that bear his name:
the Craig interpolation theorem, and
Craig's theorem, also known as Craig's axiomatization theorem or Craig's reaxiomatization theorem.
See also
American philosophy
List of American philosophers
References
External links
Official Berkeley page
A conference in honor of William Craig
Some Publications DBLP
1918 births
2016 deaths
20th-century American male writers
20th-century American mathematicians
20th-century American philosophers
20th-century American educators
20th-century American essayists
20th-century German male writers
20th-century German mathematicians
20th-century German non-fiction writers
20th-century German philosophers
21st-century American male writers
21st-century American mathematicians
21st-century American philosophers
21st-century American academics
21st-century American essayists
American male essayists
American male non-fiction writers
Analytic philosophers
Computability theorists
20th-century German educators
German emigrants to the United States
German logicians
German male essayists
German male non-fiction writers
Harvard University alumni
American philosophers of logic
American philosophers of mathematics
American philosophers of science
American philosophy academics
Proof theorists
University of California, Berkeley College of Letters and Science faculty
Writers from Nuremberg | William Craig (philosopher) | [
"Mathematics"
] | 368 | [
"Proof theorists",
"Proof theory"
] |
11,715,934 | https://en.wikipedia.org/wiki/Opportunity-Driven%20Multiple%20Access | Opportunity-Driven Multiple Access (ODMA) is a UMTS communications relaying protocol standard first introduced by the European Telecommunication Standards Institute (ETSI) in 1996. ODMA has been adopted by the 3rd-Generation Partnership Project (3GPP) to improve the efficiency of UMTS networks using the TDD mode. One of the objectives of ODMA is to enhance the capacity and the coverage of radio transmissions towards the boundaries of the cell. While mobile stations under the cell coverage area can communicate directly with the base station, mobile stations outside the cell boundary can still access the network and communicate with the base station via multihop transmission. Mobile stations with high data rates inside the cell are used as multihop relays.
The initial concept of Opportunity Driven Multiple Access (ODMA) was conceived and patented in South Africa by David Larsen and James Larsen of SRD Pty Ltd in 1978.
The ODMA standard was tabled by the 3GPP committee in 1999 due to complexity issues. The technology continues to be developed and enhanced by IWICS who holds the key patents describing the methods employed in ODMA to effect opportunity driven communications.
ODMA Technology
Basic Concepts
With the explosion of cellular phone use and Internet multi-media services, wireless networks are becoming increasingly congested. The increased demand has raised our expectations, while creating capacity problems and a need for greater bandwidth. However, if the transmitted power of wireless units is significantly reduced, then there is a potential solution. This implies a signal-to-noise ratio improvement: the ratio is affected by numerous parameters, including radio frequency and path. Opportunity Driven Multiple Access (ODMA) continually determines optimal points along that path to support each transmission.
Adaptation
ODMA uses many adaptation techniques to optimize communications, but one of the most powerful is path diversity. From origin to destination, ODMA stations relay the transmissions in an intelligent and efficient manner.
Each Subscriber Builds the Network
The available optimal paths will increase as subscribers join the network, supporting a fundamental aspect of the ODMA philosophy: Communications are dynamic and local, best controlled at the station level, rather than from some centralized source. Each ODMA-network station is an intelligent burst-mode radio, which can use all the available bandwidth some of the time. However, as with any technology, weather or general network conditions can affect transmissions.
Efficiency with Sub-Bands
Like cellular networks, the ODMA-network stations operate in the same wide frequency band, but frequency hopping, at lower data rates, introduces sub-bands. Because transmission is packet based and connectionless, stations relay packets from neighbor stations. For each packet, a station optimizes the transmission by adapting the route, power, data rate, packet length, frequency, time window and data quality over a wide range. Each station has responsibility and much autonomy for routing and service-enhancing adaptation to the current environment. For security, stations accept the authority of a network supervisor.
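The route-adaptation idea can be sketched in a few lines. The following toy example is not from the ODMA specification: the topology, the link costs, and the choice of total transmit cost as the route metric are all illustrative assumptions. It only shows how relaying through intermediate stations can beat a single high-power direct transmission.

```python
# Toy sketch of opportunity-driven relaying: choose the relay path with the
# lowest total link cost (e.g., required transmit power). All numbers and
# station names here are invented for illustration.
import heapq

def best_relay_path(links, src, dst):
    """Dijkstra's algorithm over link costs; links maps node -> {neighbor: cost}."""
    pq, seen = [(0.0, src, [src])], set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, c in links.get(node, {}).items():
            if nbr not in seen:
                heapq.heappush(pq, (cost + c, nbr, path + [nbr]))
    return float('inf'), []

# Direct link A -> BS costs 9.0; relaying via station B costs only 2.5.
links = {'A': {'B': 1.0, 'BS': 9.0}, 'B': {'BS': 1.5}}
print(best_relay_path(links, 'A', 'BS'))  # (2.5, ['A', 'B', 'BS'])
```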
See also
3G
3GPP: the body that manages the UMTS standards
CDMA: Code-division multiple access
TDMA: Time-division multiple access
FDMA: Frequency-division multiple access
References
External links
http://www.3gpp.org/ftp/Specs/archive/25_series/25.924/25924-100.zip
http://www.iwics.com
Mobile telecommunications standards
3GPP standards
UMTS | Opportunity-Driven Multiple Access | [
"Technology"
] | 703 | [
"Mobile telecommunications",
"Mobile telecommunications standards"
] |
11,716,544 | https://en.wikipedia.org/wiki/Classical%20fluid | Classical fluids are systems of particles which retain a definite volume, and are at sufficiently high temperatures (compared to their Fermi energy) that quantum effects can be neglected. A system of hard spheres, interacting only by hard collisions (e.g., billiards, marbles), is a model classical fluid. Such a system is well described by the Percus–Yevick equation. Common liquids, e.g., liquid air, gasoline, etc., are essentially mixtures of classical fluids. Electrolytes, molten salts, and salts dissolved in water are classical charged fluids. A classical fluid when cooled undergoes a freezing transition. On heating it undergoes an evaporation transition and becomes a classical gas that obeys Boltzmann statistics.
A system of charged classical particles moving in a uniform positive neutralizing background is known as a one-component plasma (OCP). This is well described by the hypernetted-chain equation (see classical-map hypernetted-chain method, or CHNC). A very accurate way of determining the properties of classical fluids is provided by the method of molecular dynamics. An electron gas confined in a metal is not a classical fluid, whereas a very high-temperature plasma of electrons could behave as a classical fluid. Such non-classical Fermi systems, i.e., quantum fluids, can be studied using quantum Monte Carlo methods, the Feynman path integral formulation, and approximately via CHNC integral-equation methods.
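As a toy illustration of the hard-sphere model mentioned above, the sketch below runs a Metropolis Monte Carlo simulation of hard disks (two-dimensional hard spheres). The particle number, box size, and step size are arbitrary choices for the sketch; the only physics is the hard-core rejection of overlapping configurations.

```python
# Minimal Metropolis Monte Carlo for a hard-disk "classical fluid":
# the only interaction is the hard collision (overlap => reject the move).
import numpy as np

rng = np.random.default_rng(2)
N, L, sigma = 32, 10.0, 1.0                  # particles, box side, disk diameter
# Start from a non-overlapping square lattice (spacing 1.6 > sigma).
pos = np.array([[(i % 6) * 1.6 + 0.8, (i // 6) * 1.6 + 0.8]
                for i in range(N)], dtype=float)

def overlaps(pos, i, trial):
    d = pos - trial
    d -= L * np.round(d / L)                 # minimum-image (periodic) convention
    r2 = np.einsum('ij,ij->i', d, d)
    r2[i] = np.inf                           # ignore the particle's own position
    return bool(np.any(r2 < sigma**2))       # hard-core overlap test

accepted = 0
for step in range(20_000):
    i = rng.integers(N)
    trial = (pos[i] + rng.uniform(-0.3, 0.3, 2)) % L
    if not overlaps(pos, i, trial):
        pos[i] = trial
        accepted += 1
print("acceptance ratio:", accepted / 20_000)
```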
See also
Bose–Einstein condensate
Fermi liquid
Many-body theory
Quantum fluid
References
Concepts in physics | Classical fluid | [
"Physics"
] | 321 | [
"nan"
] |
11,716,595 | https://en.wikipedia.org/wiki/Echium%20plantagineum%20in%20Australia | Paterson's curse or Salvation Jane (Echium plantagineum) is an invasive plant species in Australia. There are a number of theories regarding where the name Salvation Jane, used mostly in South Australia, originated. These include the flower's resemblance to the bonnets of Salvation Army ladies ('janes'; see Parsons & Cuthbertson, Noxious Weeds of Australia, 1992), the plant's "salvation" to beekeepers because it is often in flower when the honeyflow is down, and its use as a source of emergency food for grazing animals when the less drought-tolerant grazing pastures die off. Other names are blueweed, Lady Campbell weed, Riverina bluebell, and purple viper's bugloss.
Three other Echium species have been introduced and are of concern; viper's bugloss (Echium vulgare) is the most common of them. Viper's bugloss is biennial, with a single unbranched flowering stem and smaller, more blue flowers, but is otherwise similar. This species is also useful for honey production.
While Salvation Jane can be used as fodder for cattle and sheep over hot, dry summer months, it is toxic to livestock that do not have ruminant digestive systems; furthermore, contamination with the plant significantly lowers the value of grain.
History
In the 1880s, it was introduced to Australia, probably both as an accidental contaminant of pasture seed and as an ornamental plant. Reportedly, both names for the plant derive from Jane Paterson or Patterson, an early settler of the country near Albury. She brought the first seeds from Europe to beautify a garden, and then could only watch helplessly as the weed infested previously productive pastures for many miles around.
Paterson's curse is now a dominant broadleaf pasture weed through much of New South Wales, the Australian Capital Territory, Victoria, South Australia, and Tasmania and also infests native grasslands, heathlands, and woodlands.
Description
Appearance
The plant has hairy, dark green, broadly oval rosette leaves to 30 cm long. The several seeding stems grow to 120 cm in height and develop branches with age. Flowers develop in clusters; they are purple, tubular and 2–3 cm long with five petals. It has a fleshy taproot with smaller laterals.
Growth
Although generally an autumn-germinating, spring-flowering annual, Paterson's curse has become highly adapted to Australia's erratic rainfall: given suitable rain, some plants germinate at any time of year, but no plant survives for more than one year. It is a very prolific seed producer; heavy infestations can yield up to 30,000 seeds/m². Paterson's curse can germinate under a wide variety of temperature conditions, tolerates dry periods well, and responds vigorously to fertiliser. If cut by a lawnmower, it quickly recovers and sends out new shoots and flowers.
The plant disperses by movement of seeds — on the wool or fur of animals, the alimentary tracts of grazing animals or birds, movement in water, and most importantly as a contaminant of hay or grain. This is most noticeable in times of drought, when considerable movement of fodder and livestock occurs.
It can rapidly establish a large population on disturbed ground and competes vigorously with both smaller plants and the seedlings of regenerating overstorey species. Its spread has been greatly aided by human-induced habitat degradation, particularly the removal of perennial grasses through overgrazing by sheep and cattle and the introduction of the rabbit. Paterson's curse is rarely able to establish itself in habitats where the native vegetation is healthy and undisturbed.
Control
Chemical
Control of the plant is carried out by hand (for small infestations) or with any of a variety of herbicides, and must be continued over many years to reduce the seedbank. (Most seeds germinate in the first year, but some survive for as long as five years before germinating.) In the longer term, perennial grasses (which do not need to regenerate from seed each year) can outcompete Paterson's curse, and any increase in perennial cover produces a direct decrease in it. However, the annual cost in control measures and lost production in Australia was estimated (in a 1985 study by the Industries Assistance Commission) to be over $30 million, compared to $2 million per year in benefits.
Biological
The Australian Commonwealth Scientific and Industrial Research Organisation (CSIRO) has carried out research on numerous classical biological control solutions, and of the 100-odd insects found feeding on Paterson's curse in the Mediterranean, judged six safe to release in Australia without endangering crops or native plants. The leaf-mining moth Dialectica scalariella, the crown weevil Mogulones larvatus, the root weevil Mogulones geographicus, and the flea beetle Longitarsus echii are now widely distributed in southern Australia and can be found easily on most large Paterson's curse plants encountered. The crown weevil and flea beetle are proving highly effective. While the CSIRO is cautiously optimistic, biological control agents are expected to take many years to be fully effective. The most recent economic analysis, however, suggests that biological control has already brought nearly $1.2 billion in benefits to Australia by reducing the amount of Paterson's curse in pastures. Investment in the biological control of Paterson's curse has already reaped a benefit-cost ratio of 52:1.
Toxicity
E. plantagineum contains pyrrolizidine alkaloids and is poisonous. When eaten in large quantities, it can cause reduced livestock weight or even death. Paterson's curse can kill horses and irritate the udders of dairy cows and the skin of humans. After the 2003 Canberra bushfires, over 40 horses were recorded as put down after eating the weed.
See also
Invasive species in Australia
References
External links
Paterson's curse, Echium plantagineum Biological control research in CSIRO Entomology
Echium plantagineum L. FloraBase - the Western Australian Flora
Invasive plant species in Australia
plantagineum in Australia
Veterinary toxicology | Echium plantagineum in Australia | [
"Environmental_science"
] | 1,285 | [
"Veterinary toxicology",
"Toxicology"
] |
11,717,153 | https://en.wikipedia.org/wiki/TabletKiosk | TabletKiosk is a manufacturer of enterprise-grade Tablet PCs and UMPCs located in Torrance, California, United States. All mobile computers produced by TabletKiosk fall into the slate category, featuring touchscreen or pen (active digitizer) input, in lieu of integrated or convertible keyboards. Current products include the Sahara Slate PC i500 series, designed in-house at TabletKiosk's Taiwan R&D facility. Early generations of the eo brand of UMPC (Ultra-Mobile PC) were designed in collaboration with outside designers and the TabletKiosk team, while the fourth generation of this brand, the eo a7400 is designed exclusively in-house.
TabletKiosk is a wholly owned subsidiary of Sand Dune Ventures, based in Torrance, California.
In 2006, TabletKiosk delayed shipment of its "eo" brand tablet after discovering problems with the device's fan.
SoftBrands announced in 2007 that it would use TabletKiosk's Sahara Slate PC line to distribute SoftBrands software to hotel companies.
Parkland Memorial Hospital in Dallas, Texas, United States, has patients visiting its emergency department fill in their details using a TabletKiosk machine.
In 2013, Healthcare Global named the Sahara Slate PC i500 one of the Top 10 Mobile Tablets for Healthcare Professionals.
References
See also
External links
Company website
Computer hardware companies
Computer systems companies
Computer companies established in 2003
Microsoft Tablet PC
Tablet computers
Computer companies of the United States | TabletKiosk | [
"Technology"
] | 306 | [
"Computer systems companies",
"Computer hardware companies",
"Computer hardware stubs",
"Computer systems",
"Computing stubs",
"Computers"
] |
11,717,197 | https://en.wikipedia.org/wiki/Surface%20water | Surface water is water located on top of land, forming terrestrial (surrounded by land on all sides) waterbodies. It may also be referred to as blue water, as opposed to seawater and open waterbodies like the ocean.
The vast majority of surface water is produced by precipitation. As the climate warms in the spring, snowmelt runs off towards nearby streams and rivers, contributing a large portion of human drinking water. Levels of surface water lessen as a result of evaporation and of water moving into the ground to become groundwater. Alongside being used for drinking water, surface water is also used for irrigation, wastewater treatment, livestock, industrial uses, hydropower, and recreation. For USGS water-use reports, surface water is considered freshwater when it contains less than 1,000 milligrams per liter (mg/L) of dissolved solids.
There are three major types of surface water. Permanent (perennial) surface water is present year round and includes lakes, rivers, and wetlands (marshes and swamps). Semi-permanent (ephemeral) surface water refers to bodies of water that are only present at certain times of the year, including seasonally dry channels such as creeks, lagoons, and waterholes. Human-made surface water is held by infrastructure that humans have assembled, such as dammed artificial lakes, canals, and artificial ponds (e.g. garden ponds) or swamps. The surface water held by dams can be used for renewable energy in the form of hydropower, the harnessing of surface water from rivers and streams to produce energy.
Measurement
Surface water can be measured as annual runoff: the amount of rain and snowmelt drainage left after natural uptake, evaporation from land, and transpiration by vegetation. In areas such as California, the California Water Science Center records the flow of surface water and annual runoff using a network of approximately 500 stream gages collecting real-time data from across the state. These contribute to the roughly 8,000 stream gage stations overseen by the USGS national stream gage network, which has provided up-to-date records and documents of water data over the years. Management teams that oversee the distribution of water are then able to make decisions about supplying adequate water to sectors including municipal, industrial, agricultural, renewable energy (hydropower), and storage in reservoirs.
Impacts of climate change
Due to climate change, sea ice and glaciers are melting, contributing to the rise in sea levels. As a result, salt water from the ocean is beginning to infiltrate freshwater aquifers, contaminating water used for urban and agricultural services. It also affects surrounding ecosystems, placing stress on the wildlife inhabiting those areas. NOAA recorded that from 2012 to 2016, the ice sheets in Greenland and the Antarctic shrank by 247 billion tons per year, a figure that will continue to increase as global warming persists.
Climate change is directly connected with the water cycle: it has increased evaporation yet decreased precipitation, runoff, groundwater, and soil moisture, altering surface water levels. Climate change also intensifies the existing challenges in water quality. The quality of surface water depends on the chemical inputs from the surrounding elements, such as the air and the nearby landscape; when these are polluted by human activity, the chemistry of the water is altered.
Conjunctive use of ground and surface water
Surface and groundwater are two separate entities, so they must be regarded as such. However, there is an ever-increasing need for management of the two as they are part of an interrelated system that is paramount when the demand for water exceeds the available supply (Fetter 464). Depletion of surface and ground water sources for public consumption (including industrial, commercial, and residential) is caused by over-pumping. Aquifers near river systems that are over-pumped have been known to deplete surface water sources as well. Research supporting this has been found in numerous water budgets for a multitude of cities.
Response times for an aquifer are long (Young & Bredehoeft 1972). However, a total ban on ground water usage during water recessions would allow surface water to retain better levels required for sustainable aquatic life. By reducing ground water pumping, the surface water supplies will be able to maintain their levels, as they recharge from direct precipitation, surface runoff, etc.
It is recorded by the Environmental Protection Agency (EPA), that approximately 68 percent of water provided to communities in the United States comes from surface water.
See also
Environmental persistent pharmaceutical pollutant
Meltwater
Optimum water content for tillage
Water resources
Surface-water hydrology
References
Applied Hydrogeology, Fourth Edition by C.W. Fetter.
R.A. Young and J.D. Bredehoeft Digital simulation for solving management problems with conjunctive groundwater and surface water systems from Water Resources Research 8:533-56
External links
"Surface Water," Iowa State University
Hydrology
Water | Surface water | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 1,035 | [
"Water",
"Hydrology",
"Environmental engineering"
] |
11,717,900 | https://en.wikipedia.org/wiki/Appell%27s%20equation%20of%20motion | In classical mechanics, Appell's equation of motion (also known as the Gibbs–Appell equation of motion) is an alternative general formulation of classical mechanics described by Josiah Willard Gibbs in 1879 and Paul Émile Appell in 1900.
Statement
The Gibbs–Appell equation reads
$$Q_r = \frac{\partial S}{\partial \alpha_r},$$
where $\alpha_r = \ddot{q}_r$ is an arbitrary generalized acceleration, or the second time derivative of the generalized coordinate $q_r$, and $Q_r$ is its corresponding generalized force. The generalized force gives the work done
$$dW = \sum_{r=1}^{D} Q_r \, dq_r,$$
where the index $r$ runs over the $D$ generalized coordinates $q_r$, which usually correspond to the degrees of freedom of the system. The function $S$ is defined as the mass-weighted sum of the particle accelerations squared,
$$S = \frac{1}{2} \sum_{k=1}^{N} m_k \mathbf{a}_k^2,$$
where the index $k$ runs over the $N$ particles, and
$$\mathbf{a}_k = \ddot{\mathbf{r}}_k = \frac{d^2 \mathbf{r}_k}{dt^2}$$
is the acceleration of the $k$-th particle, the second time derivative of its position vector $\mathbf{r}_k$. Each $\mathbf{r}_k$ is expressed in terms of generalized coordinates, and $\mathbf{a}_k$ is expressed in terms of the generalized accelerations.
Relations to other formulations of classical mechanics
Appell's formulation does not introduce any new physics to classical mechanics and as such is equivalent to other reformulations of classical mechanics, such as Lagrangian mechanics, and Hamiltonian mechanics. All classical mechanics is contained within Newton's laws of motion. In some cases, Appell's equation of motion may be more convenient than the commonly used Lagrangian mechanics, particularly when nonholonomic constraints are involved. In fact, Appell's equation leads directly to Lagrange's equations of motion. Moreover, it can be used to derive Kane's equations, which are particularly suited for describing the motion of complex spacecraft. Appell's formulation is an application of Gauss' principle of least constraint.
Derivation
The change in the particle positions $\mathbf{r}_k$ for an infinitesimal change in the $D$ generalized coordinates is
$$d\mathbf{r}_k = \sum_{r=1}^{D} \frac{\partial \mathbf{r}_k}{\partial q_r} \, dq_r.$$
Taking two derivatives with respect to time yields an equivalent equation for the accelerations:
$$\frac{\partial \mathbf{a}_k}{\partial \alpha_r} = \frac{\partial \mathbf{r}_k}{\partial q_r}.$$
The work done by an infinitesimal change $dq_r$ in the generalized coordinates is
$$dW = \sum_{k=1}^{N} \mathbf{F}_k \cdot d\mathbf{r}_k = \sum_{k=1}^{N} m_k \mathbf{a}_k \cdot d\mathbf{r}_k,$$
where Newton's second law for the $k$-th particle,
$$\mathbf{F}_k = m_k \mathbf{a}_k,$$
has been used. Substituting the formula for $d\mathbf{r}_k$ and swapping the order of the two summations yields the formulae
$$dW = \sum_{k=1}^{N} m_k \mathbf{a}_k \cdot \sum_{r=1}^{D} \frac{\partial \mathbf{r}_k}{\partial q_r} \, dq_r = \sum_{r=1}^{D} \left( \sum_{k=1}^{N} m_k \mathbf{a}_k \cdot \frac{\partial \mathbf{r}_k}{\partial q_r} \right) dq_r.$$
Therefore, the generalized forces are
$$Q_r = \sum_{k=1}^{N} m_k \mathbf{a}_k \cdot \frac{\partial \mathbf{r}_k}{\partial q_r}.$$
This equals the derivative of $S$ with respect to the generalized accelerations,
$$\frac{\partial S}{\partial \alpha_r} = \sum_{k=1}^{N} m_k \mathbf{a}_k \cdot \frac{\partial \mathbf{a}_k}{\partial \alpha_r} = \sum_{k=1}^{N} m_k \mathbf{a}_k \cdot \frac{\partial \mathbf{r}_k}{\partial q_r},$$
yielding Appell's equation of motion
$$\frac{\partial S}{\partial \alpha_r} = Q_r.$$
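As a concrete check of the formalism, the following SymPy sketch (the planar-pendulum example is ours, not from the original text) forms the function S for a pendulum of length l and mass m, and recovers the familiar equation of motion from dS/dalpha = Q:

```python
# Appell's equation for a planar pendulum: S = (1/2) m |a|^2,
# generalized coordinate theta, generalized acceleration alpha = theta''.
import sympy as sp

t = sp.symbols('t')
m, g, l = sp.symbols('m g l', positive=True)
theta = sp.Function('theta')(t)
alpha = sp.symbols('alpha')                      # stands for theta''

x, y = l * sp.sin(theta), -l * sp.cos(theta)     # Cartesian position of the bob
ax, ay = sp.diff(x, t, 2), sp.diff(y, t, 2)      # Cartesian acceleration

S = sp.Rational(1, 2) * m * (ax**2 + ay**2)      # mass-weighted squared acceleration
S = S.subs(sp.Derivative(theta, (t, 2)), alpha)  # express S via alpha

Q = -m * g * l * sp.sin(theta)                   # generalized force: dW = Q dtheta
eom = sp.Eq(sp.simplify(sp.diff(S, alpha)), Q)   # Appell: dS/dalpha = Q
print(eom)                                       # m*l**2*alpha == -m*g*l*sin(theta)
```

Dividing through by ml² gives the usual pendulum equation, matching what the Lagrangian route yields.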
Examples
Euler's equations of rigid body dynamics
Euler's equations provide an excellent illustration of Appell's formulation.
Consider a rigid body of $N$ particles joined by rigid rods. The rotation of the body may be described by an angular velocity vector $\boldsymbol\omega$, and the corresponding angular acceleration vector
$$\boldsymbol\alpha = \frac{d\boldsymbol\omega}{dt}.$$
The generalized force for a rotation is the torque $\mathbf{N}$, since the work done for an infinitesimal rotation $\delta\boldsymbol\phi$ is $dW = \mathbf{N} \cdot \delta\boldsymbol\phi$. The velocity of the $k$-th particle is given by
$$\mathbf{v}_k = \boldsymbol\omega \times \mathbf{r}_k,$$
where $\mathbf{r}_k$ is the particle's position in Cartesian coordinates; its corresponding acceleration is
$$\mathbf{a}_k = \frac{d\mathbf{v}_k}{dt} = \boldsymbol\alpha \times \mathbf{r}_k + \boldsymbol\omega \times \mathbf{v}_k.$$
Therefore, the function $S$ may be written as
$$S = \frac{1}{2} \sum_{k=1}^{N} m_k \left( \boldsymbol\alpha \times \mathbf{r}_k + \boldsymbol\omega \times \mathbf{v}_k \right)^2.$$
Setting the derivative of $S$ with respect to $\boldsymbol\alpha$ equal to the torque yields Euler's equations
$$I_{xx} \alpha_x - \left(I_{yy} - I_{zz}\right) \omega_y \omega_z = N_x,$$
$$I_{yy} \alpha_y - \left(I_{zz} - I_{xx}\right) \omega_z \omega_x = N_y,$$
$$I_{zz} \alpha_z - \left(I_{xx} - I_{yy}\right) \omega_x \omega_y = N_z.$$
See also
Principle of stationary action
Analytical mechanics
References
Further reading
Connection of Appell's formulation with the principle of least action.
PDF copy of Appell's article at Goettingen University
PDF copy of a second article on Appell's equations and Gauss's principle
Classical mechanics | Appell's equation of motion | [
"Physics"
] | 636 | [
"Mechanics",
"Classical mechanics"
] |
11,718,498 | https://en.wikipedia.org/wiki/Spindrift | Spindrift (more rarely spoondrift) is the spray blown from cresting waves during a gale. This spray, which "drifts" in the direction of the gale, is one of the characteristics of a wind speed of 8 Beaufort and higher at sea. In Greek and Roman mythology, Leucothea was the goddess of spindrift.
Terminology
Spindrift is derived from the Scots language, but its further etymology is uncertain. Although the Oxford English Dictionary suggests it is a variant of spoondrift based on the way that word was pronounced in southwest Scotland, from spoon or spoom ("to sail briskly with the wind astern, with or without sails hoisted") and drift ("a mass of matter driven or forced onward together in a body, etc., especially by wind or water"), this is doubted by the Scottish National Dictionary, because spoondrift is attested later than spindrift and it seems unlikely that the Scots spelling would have superseded the English one, and because the early use of the word in the form spenedrift by James Melville (1556–1614) is unlikely to have derived from spoondrift. In any case, spindrift was popularized in England through its use in the novels of the Scottish-born author William Black (1841–1898).
Spindrift or spoondrift is also used to refer to fine sand or snow that is blown off the ground by the wind.
References
Wind
Precipitation
Oceanography | Spindrift | [
"Physics",
"Environmental_science"
] | 313 | [
"Oceanography",
"Hydrology",
"Applied and interdisciplinary physics"
] |
11,718,624 | https://en.wikipedia.org/wiki/Trend%20surface%20analysis | Trend surface analysis (also called trend surface mapping) is a mathematical technique used in the environmental sciences (archeology, geology, soil science, etc.). It is a method based on low-order polynomials of the spatial coordinates for estimating a regular grid of points from scattered observations, for example from archeological finds or from soil surveys.
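A minimal sketch of the idea in Python (the synthetic data and the quadratic order are illustrative assumptions): fit a low-order polynomial in the spatial coordinates to scattered observations by least squares, then evaluate the fitted trend on a regular grid.

```python
# Second-order trend surface: z ~ b0 + b1*x + b2*y + b3*x^2 + b4*x*y + b5*y^2
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.uniform(0, 10, 50), rng.uniform(0, 10, 50)     # scattered sample sites
z = 2.0 + 0.5 * x - 0.3 * y + 0.05 * x * y + rng.normal(0, 0.1, 50)

A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)            # least-squares fit

gx, gy = np.meshgrid(np.linspace(0, 10, 21), np.linspace(0, 10, 21))
G = np.column_stack([np.ones(gx.size), gx.ravel(), gy.ravel(),
                     gx.ravel()**2, gx.ravel() * gy.ravel(), gy.ravel()**2])
trend = (G @ coeffs).reshape(gx.shape)  # regular grid estimated from scattered data
```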
References
Methods in archaeology
Multivariate interpolation | Trend surface analysis | [
"Mathematics"
] | 82 | [
"Applied mathematics",
"Applied mathematics stubs"
] |
11,718,631 | https://en.wikipedia.org/wiki/Liouville%20dynamical%20system | In classical mechanics, a Liouville dynamical system is an exactly solvable dynamical system in which the kinetic energy $T$ and potential energy $V$ can be expressed in terms of the $s$ generalized coordinates $q_1, \ldots, q_s$ as follows:
$$T = \frac{1}{2}\left\{u_1(q_1) + u_2(q_2) + \cdots + u_s(q_s)\right\}\left\{v_1(q_1)\dot{q}_1^2 + v_2(q_2)\dot{q}_2^2 + \cdots + v_s(q_s)\dot{q}_s^2\right\},$$
$$V = \frac{w_1(q_1) + w_2(q_2) + \cdots + w_s(q_s)}{u_1(q_1) + u_2(q_2) + \cdots + u_s(q_s)}.$$
The solution of this system consists of a set of separably integrable equations
$$\frac{\sqrt{2}}{Y}\,dt = \frac{d\varphi_1}{\sqrt{E\chi_1 - \omega_1 + \gamma_1}} = \frac{d\varphi_2}{\sqrt{E\chi_2 - \omega_2 + \gamma_2}} = \cdots = \frac{d\varphi_s}{\sqrt{E\chi_s - \omega_s + \gamma_s}},$$
where $E = T + V$ is the conserved energy and the $\gamma_s$ are constants. As described below, the variables have been changed from $q_s$ to $\varphi_s$, and the functions $u_s$ and $w_s$ substituted by their counterparts $\chi_s$ and $\omega_s$. This solution has numerous applications, such as the orbit of a small planet about two fixed stars under the influence of Newtonian gravity. The Liouville dynamical system is one of several things named after Joseph Liouville, an eminent French mathematician.
Example of bicentric orbits
In classical mechanics, Euler's three-body problem describes the motion of a particle in a plane under the influence of two fixed centers, each of which attract the particle with an inverse-square force such as Newtonian gravity or Coulomb's law. Examples of the bicenter problem include a planet moving around two slowly moving stars, or an electron moving in the electric field of two positively charged nuclei, such as the first ion of the hydrogen molecule H2, namely the hydrogen molecular ion or H2+. The strength of the two attractions need not be equal; thus, the two stars may have different masses or the nuclei two different charges.
Solution
Let the fixed centers of attraction be located along the x-axis at $\pm a$. The potential energy of the moving particle is given by
$$V(x, y) = \frac{-\mu_1}{\sqrt{(x - a)^2 + y^2}} - \frac{\mu_2}{\sqrt{(x + a)^2 + y^2}}.$$
The two centers of attraction can be considered as the foci of a set of ellipses. If either center were absent, the particle would move on one of these ellipses, as a solution of the Kepler problem. Therefore, according to Bonnet's theorem, the same ellipses are the solutions for the bicenter problem.
Introducing elliptic coordinates,
$$x = a \cosh\xi \cos\eta, \qquad y = a \sinh\xi \sin\eta,$$
the potential energy can be written as
$$V(\xi, \eta) = \frac{-\mu_1}{a\left(\cosh\xi - \cos\eta\right)} - \frac{\mu_2}{a\left(\cosh\xi + \cos\eta\right)} = \frac{-\mu_1\left(\cosh\xi + \cos\eta\right) - \mu_2\left(\cosh\xi - \cos\eta\right)}{a\left(\cosh^2\xi - \cos^2\eta\right)},$$
and the kinetic energy as
$$T = \frac{ma^2}{2}\left(\cosh^2\xi - \cos^2\eta\right)\left(\dot\xi^2 + \dot\eta^2\right).$$
This is a Liouville dynamical system if $\xi$ and $\eta$ are taken as $\varphi_1$ and $\varphi_2$, respectively; thus, the function $Y$ equals
$$Y = \cosh^2\xi - \cos^2\eta$$
and the function $W$ equals
$$W = -\mu_1\left(\cosh\xi + \cos\eta\right) - \mu_2\left(\cosh\xi - \cos\eta\right).$$
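Since the distance identities behind these expressions are easy to get wrong, here is a small SymPy check (our own verification, not part of the original article) that the distances to the foci reduce as claimed:

```python
# Verify r1 = a*(cosh(xi) - cos(eta)) and r2 = a*(cosh(xi) + cos(eta))
# for the elliptic coordinates x = a*cosh(xi)*cos(eta), y = a*sinh(xi)*sin(eta).
import sympy as sp

a, xi, eta = sp.symbols('a xi eta', positive=True)
x = a * sp.cosh(xi) * sp.cos(eta)
y = a * sp.sinh(xi) * sp.sin(eta)

r1_sq = (x - a)**2 + y**2   # squared distance to the focus at +a
r2_sq = (x + a)**2 + y**2   # squared distance to the focus at -a
print(sp.simplify(r1_sq - (a * (sp.cosh(xi) - sp.cos(eta)))**2))  # 0
print(sp.simplify(r2_sq - (a * (sp.cosh(xi) + sp.cos(eta)))**2))  # 0
```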
Using the general solution for a Liouville dynamical system below, one obtains
Introducing a parameter u by the formula
gives the parametric solution
Since these are elliptic integrals, the coordinates ξ and η can be expressed as elliptic functions of u.
Constant of motion
The bicentric problem has a constant of motion, namely,
from which the problem can be solved using the method of the last multiplier.
Derivation
New variables
To eliminate the v functions, the variables are changed to an equivalent set
giving the relation
which defines a new variable F. Using the new variables, the u and w functions can be expressed by equivalent functions χ and ω. Denoting the sum of the χ functions by Y,
the kinetic energy can be written as
Similarly, denoting the sum of the ω functions by W
the potential energy V can be written as
Lagrange equation
The Lagrange equation for the rth variable is
Multiplying both sides by , re-arranging, and exploiting the relation 2T = YF yields the equation
which may be written as
where E = T + V is the (conserved) total energy. It follows that
which may be integrated once to yield
where the are constants of integration subject to the energy conservation
Inverting, taking the square root and separating the variables yields a set of separably integrable equations:
References
Further reading
Classical mechanics | Liouville dynamical system | [
"Physics"
] | 726 | [
"Mechanics",
"Classical mechanics"
] |
11,720,017 | https://en.wikipedia.org/wiki/Log-Laplace%20distribution | In probability theory and statistics, the log-Laplace distribution is the probability distribution of a random variable whose logarithm has a Laplace distribution. If X has a Laplace distribution with parameters μ and b, then Y = eX has a log-Laplace distribution. The distributional properties can be derived from the Laplace distribution.
Characterization
A random variable has a log-Laplace(μ, b) distribution if its probability density function is:
$$f(y \mid \mu, b) = \frac{1}{2by} \exp\left(-\frac{|\ln y - \mu|}{b}\right), \qquad y > 0.$$
The cumulative distribution function for Y when y > 0 is
$$F(y) = \frac{1}{2}\left[1 + \operatorname{sgn}(\ln y - \mu)\left(1 - \exp\left(-\frac{|\ln y - \mu|}{b}\right)\right)\right].$$
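A short sketch (the parameter values are our own choices) showing the defining transformation, sampling Y = exp(X) with X Laplace-distributed, together with the density above:

```python
# Log-Laplace sampling and density: if X ~ Laplace(mu, b), then Y = exp(X)
# has density f(y) = exp(-|ln y - mu| / b) / (2 b y) for y > 0.
import numpy as np

mu, b = 0.0, 0.5
rng = np.random.default_rng(1)
y = np.exp(rng.laplace(mu, b, size=100_000))   # log-Laplace samples

def pdf(y, mu=mu, b=b):
    return np.exp(-np.abs(np.log(y) - mu) / b) / (2 * b * y)

# The median of Y is exp(mu); compare empirical vs. theoretical.
print(np.median(y), np.exp(mu))
```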
Generalization
Versions of the log-Laplace distribution based on an asymmetric Laplace distribution also exist. Depending on the parameters, including asymmetry, the log-Laplace may or may not have a finite mean and a finite variance.
References
Continuous distributions
Probability distributions with non-finite variance
Non-Newtonian calculus | Log-Laplace distribution | [
"Mathematics"
] | 171 | [
"Non-Newtonian calculus",
"Calculus"
] |
11,720,211 | https://en.wikipedia.org/wiki/Ascochyta%20doronici | Ascochyta doronici is a fungal plant pathogen that causes leaf spot on African daisy.
See also
List of Ascochyta species
References
Fungal plant pathogens and diseases
Eudicot diseases
doronici
Fungi described in 1878
Fungus species | Ascochyta doronici | [
"Biology"
] | 51 | [
"Fungi",
"Fungus species"
] |
11,720,315 | https://en.wikipedia.org/wiki/Hilbert%27s%20theorem%20%28differential%20geometry%29 | In differential geometry, Hilbert's theorem (1901) states that there exists no complete regular surface of constant negative Gaussian curvature immersed in $\mathbb{R}^3$. This theorem answers the question for the negative case of which surfaces in $\mathbb{R}^3$ can be obtained by isometrically immersing complete manifolds with constant curvature.
History
Hilbert's theorem was first treated by David Hilbert in "Über Flächen von konstanter Krümmung" (Trans. Amer. Math. Soc. 2 (1901), 87–99).
A different proof was given shortly after by E. Holmgren in "Sur les surfaces à courbure constante négative" (1902).
A far-reaching generalization was obtained by Nikolai Efimov in 1975.
Proof
The proof of Hilbert's theorem is elaborate and requires several lemmas. The idea is to show the nonexistence of an isometric immersion
$$\varphi = \psi \circ \exp_p : S' \to \mathbb{R}^3$$
of a plane $S'$ to the real space $\mathbb{R}^3$. This proof is basically the same as in Hilbert's paper, although based on the books of Do Carmo and Spivak.
Observations: In order to have a more manageable treatment, but without loss of generality, the curvature may be considered equal to minus one, $K = -1$. There is no loss of generality, since we are dealing with constant curvatures, and similarities of $\mathbb{R}^3$ multiply $K$ by a constant. The exponential map $\exp_p : T_p(S) \to S$ is a local diffeomorphism (in fact a covering map, by the Cartan–Hadamard theorem); therefore, it induces an inner product in the tangent space $T_p(S)$ of $S$ at $p$. Furthermore, $S'$ denotes the geometric surface $T_p(S)$ with this inner product. If $\psi : S \to \mathbb{R}^3$ is an isometric immersion, the same holds for
$$\varphi = \psi \circ \exp_p : S' \to \mathbb{R}^3.$$
The first lemma is independent from the other ones, and will be used at the end as the counter statement to reject the results from the other lemmas.
Lemma 1: The area of is infinite.
Proof's Sketch:
The idea of the proof is to create a global isometry between the hyperbolic plane $H$ and $S'$. Then, since $H$ has an infinite area, $S'$ will have it too.
The fact that the hyperbolic plane has an infinite area comes by computing the surface integral with the corresponding coefficients of the First fundamental form. To obtain these ones, the hyperbolic plane can be defined as the plane with the following inner product around a point with coordinates
Since the hyperbolic plane is unbounded, the limits of the integral are infinite, and the area can be calculated through
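For concreteness, one standard way to carry out this computation (our choice of chart; the article's original coordinates are not recoverable here) uses the upper half-plane model $\{(u, v) : v > 0\}$ with $ds^2 = (du^2 + dv^2)/v^2$:

```latex
% Area of the hyperbolic plane in the upper half-plane model (our choice).
\[
  E = G = \frac{1}{v^{2}}, \quad F = 0, \qquad
  \operatorname{Area}
    = \int_{-\infty}^{\infty}\!\int_{0}^{\infty} \sqrt{EG - F^{2}}\; dv\, du
    = \int_{-\infty}^{\infty}\!\int_{0}^{\infty} \frac{dv\, du}{v^{2}}
    = \infty .
\]
```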
Next it is needed to create a map, which will show that the global information from the hyperbolic plane can be transfer to the surface , i.e. a global isometry. will be the map, whose domain is the hyperbolic plane and image the 2-dimensional manifold , which carries the inner product from the surface with negative curvature. will be defined via the exponential map, its inverse, and a linear isometry between their tangent spaces,
.
That is
,
where . That is to say, the starting point goes to the tangent plane from through the inverse of the exponential map. Then travels from one tangent plane to the other through the isometry , and then down to the surface with another exponential map.
The following step involves the use of polar coordinates, and , around and respectively. The requirement will be that the axis are mapped to each other, that is goes to . Then preserves the first fundamental form.
In a geodesic polar system $(\rho, \theta)$, the Gaussian curvature $K$ can be expressed as
$$K = -\frac{\left(\sqrt{G}\right)_{\rho\rho}}{\sqrt{G}}.$$
In addition, $K$ is constant and $\sqrt{G}$ fulfills the following differential equation:
$$\left(\sqrt{G}\right)_{\rho\rho} + K\sqrt{G} = 0.$$
Since and have the same constant Gaussian curvature, then they are locally isometric (Minding's Theorem). That means that is a local isometry between and . Furthermore, from the Hadamard's theorem it follows that is also a covering map.
Since is simply connected, is a homeomorphism, and hence, a (global) isometry. Therefore, and are globally isometric, and because has an infinite area, then has an infinite area, as well.
Lemma 2: For each exists a parametrization , such that the coordinate curves of are asymptotic curves of and form a Tchebyshef net.
Lemma 3: Let be a coordinate neighborhood of such that the coordinate curves are asymptotic curves in . Then the area A of any quadrilateral formed by the coordinate curves is smaller than .
The next goal is to show that is a parametrization of .
Lemma 4: For a fixed , the curve , is an asymptotic curve with as arc length.
The following 2 lemmas together with lemma 8 will demonstrate the existence of a parametrization
Lemma 5: is a local diffeomorphism.
Lemma 6: is surjective.
Lemma 7: On there are two differentiable linearly independent vector fields which are tangent to the asymptotic curves of .
Lemma 8: is injective.
Proof of Hilbert's Theorem:
First, it will be assumed that an isometric immersion from a complete surface with negative curvature exists:
As stated in the observations, the tangent plane is endowed with the metric induced by the exponential map . Moreover, is an isometric immersion and Lemmas 5,6, and 8 show the existence of a parametrization of the whole , such that the coordinate curves of are the asymptotic curves of . This result was provided by Lemma 4. Therefore, can be covered by a union of "coordinate" quadrilaterals with . By Lemma 3, the area of each quadrilateral is smaller than . On the other hand, by Lemma 1, the area of is infinite, therefore has no bounds. This is a contradiction and the proof is concluded.
See also
Nash embedding theorem, states that every Riemannian manifold can be isometrically embedded into some Euclidean space.
References
, Differential Geometry of Curves and Surfaces, Prentice Hall, 1976.
, A Comprehensive Introduction to Differential Geometry, Publish or Perish, 1999.
Hyperbolic geometry
Theorems in differential geometry
Articles containing proofs | Hilbert's theorem (differential geometry) | [
"Mathematics"
] | 1,239 | [
"Theorems in differential geometry",
"Articles containing proofs",
"Theorems in geometry"
] |
11,720,339 | https://en.wikipedia.org/wiki/Cercospora%20gerberae | Cercospora gerberae is a fungal plant pathogen.
References
gerberae
Fungal plant pathogens and diseases
Fungus species | Cercospora gerberae | [
"Biology"
] | 27 | [
"Fungi",
"Fungus species"
] |
11,720,467 | https://en.wikipedia.org/wiki/Timeline%20of%20the%20introduction%20of%20color%20television%20in%20countries%20and%20territories | This is a list of when the first color television broadcasts were transmitted to the general public. Non-public field tests, closed-circuit demonstrations and broadcasts available from other countries are not included. The list does include the dates when the last black-and-white stations in a country switched to color or shut down (highlighted in red in the source table). This list also includes national subdivisions.
List in alphabetical order by country and territory
Note: Asterisks (*) after locations below are for "Television in LOCATION" links.
List of countries and territories that never had black and white television
Countries and territories that never had black and white television (i.e., their first broadcasts were in color) are not included in the table above.
Botswana (Bechuanaland)
Eswatini (Swaziland)
Malawi (Nyasaland)
Namibia (South West Africa)
Sri Lanka (Ceylon)
Tanzania (Tanganyika)
List by each nation's subdivisions
Algeria
- 1973
Sétif - 1974
Aïn Témouchent - 1975
Batna - 1975
Boumerdès - 1975
Chlef - 1975
Constantine - 1975
El Bayadh - 1975
Ghardaïa - 1975
Oran - 1975
Skikda - 1975
Tamanrasset - 1975
Tindouf - 1975
Tlemcen - 1975
Khenchela - 1977
Bordj Bou Arreridj - 1979
Argentina
- 1978
- 1978
- 1978
- 1978
- 1978
- 1980
- 1980
- 1980
- 1980
- 1980
- 1980
- 1980
- 1980
- 1980
- 1980
- 1980
- 1980
- 1980
- 1980
- 1980
- 1980
- 1980
- 1980
- 1980
Armenia
Yerevan - 1973
Lori - 1976
Aragatsotn - 1978
Gegharkunik - 1981
Australia
- 1975
- 1975
- 1975
- 1975
- 1976
- 1978
Austria
Lower Austria - 1969
Upper Austria - 1969
Vienna - 1969
Salzburg - 1970
Vorarlberg - 1970
Carinthia - 1971
Burgenland - 1972
Tyrol - 1972
Styria - 1974
Azerbaijan
Nakhcivan - 1979
Bangladesh
Dhaka - 1980
Barishal - 1981
Chittagong - 1981
Mymensingh - 1981
Rangpur - 1981
Sylhet - 1981
Belgium
- 1971
- 1971
Brazil
- 1972
- 1973
- 1973
- 1973
- 1973
- 1974
- 1974
- 1974
- 1974
- 1975
- 1975
- 1975
- 1975
- 1975
- 1975
- 1975
- 1975
- 1976
- 1976
- 1976
- 1976
- 1976
- 1977
- 1977
- 1978
- 1978
Bulgaria
Sofia - 1968
Dobrich - 1971
Blagoevgrad - 1972
Gabrovo - 1972
Haskovo - 1972
Pernik - 1972
Yambol - 1972
Kyustendil - 1973
Kardzhali - 1975
Stara Zagora - 1975
Montana - 1976
Vratsa - 1976
Cambodia
Phnom Penh - 1986
Battambang - 1989
Kampong Cham - 1989
Kampong Chhnang - 1989
Kampong Speu - 1989
Kampong Thom - 1989
Kandal - 1989
Koh Kong - 1989
Kep - 1989
Mondol Kiri - 1989
Oddar Meanchey - 1989
Pailin - 1989
Pursat - 1989
Sihanoukville - 1989
Prey Veng - 1989
Rotanakiri - 1989
Siem Reap - 1989
Stung Treng - 1989
Takeo - 1989
Tbong Khmum - 1989
Canada
- 1966
- 1966
- 1966
- 1966
- 1966
- 1966
- 1967
- 1968
- 1968
- 1969
- 1971
- 1972
- 1972
Chile
- 1977
- 1978
La Araucania - 1978
Maule - 1978
Ñuble - 1978
Valparaíso - 1978
China
Beijing - 1973
Shanghai - 1974
Guangdong - 1976
Jilin - 1977
Fujian - 1978
Hainan - 1978
Tibet - 1979
Inner Mongolia - 1979
Ningxia - 1980
Anhui - 1981
Chongqing - 1981
Gansu - 1981
Heilongjiang - 1981
Hunan - 1981
Jiangxi - 1981
Shandong - 1981
Shanxi - 1981
Xinjiang - 1982
Henan - 1983
Colombia
- 1973
- 1977
- 1978
- 1979
- 1979
- 1979
- 1979
- 1979
- 1979
- 1980
- 1980
- 1981
Cuba
Havana - 1958
Camaguey - 1970
Artemisa - 1978
Cienfuegos - 1980
Granma - 1982
Las Tunas - 1982
Mayabeque - 1982
Santiago de Cuba - 1982
Villa Clara - 1982
Sancti Spíritus - 1985
Denmark
Hovedstaden - 1968
Nordjylland - 1969
Syddanmark - 1970
Ecuador
- 1973
- 1976
- 1978
- 1978
- 1978
- 1978
- 1978
- 1978
- 1978
Egypt
Cairo - 1973
Damietta - 1974
Suez - 1974
Port Said - 1975
France
- 1968
- 1967
- 1972
- 1968
- 1974
- 1968
- 1967
- 1968
- 1968
- 1968
Germany
- 1967
- 1969
- 1969
- 1971
Greece
Attica - 1976
Crete - 1978
Peloponnese - 1978
Thessaly - 1978
Epirus - 1981
Mount Athos - 1981
Western Greece - 1981
Western Macedonia - 1981
India
Andhra Pradesh - 1982
Arunachal Pradesh - 1982
Assam - 1982
Bihar - 1982
Chhattisgarh - 1982
Delhi - 1982
Goa - 1982
Gujarat - 1982
Haryana - 1982
Karnataka - 1982
Kerala - 1982
Ladakh - 1982
Madhya Pradesh - 1982
Manipur - 1982
Meghalaya - 1982
Mizoram - 1982
Nagaland - 1982
Odisha - 1982
Punjab - 1982
Sikkim - 1982
Tamil Nadu - 1982
Telangana - 1982
Uttar Pradesh - 1982
Indonesia
- 1977
- 1978
- 1979
- 1983
Italy
- 1972
- 1975
- 1976
- 1977
Japan
- 1960
- 1960
- 1962
- 1964
- 1964
- 1965
- 1966
- 1966
- 1967
- 1967
- 1968
- 1968
- 1968
- 1969
- 1969
- 1970
Malaysia
- 1980
- 1980
Mexico
- 1963
- 1965
- 1967
- 1968
- 1968
- 1968
- 1968
- 1968
- 1968
- 1968
- 1968
- 1968
- 1968
- 1968
- 1968
- 1968
- 1968
- 1968
- 1968
- 1968
Morocco
Rabat - Salé - Kénitra - 1972
Grand Casablanca - Settat - 1973
Marrakech - Safi - 1973
Béni Mellal - Khénifra - 1974
Netherlands
- 1967
- 1970
New Zealand
Auckland - 1973
Wellington - 1973
Nelson - 1974
Southland - 1974
Tasman - 1974
Waikato - 1974
Marlborough - 1975
Taranaki - 1975
Panama
Panamá - 1972
Colón - 1974
Emberá-Wounaan - 1974
Herrera - 1974
- 1974
Peru
- 1978
- 1980
- 1980
- 1980
- 1980
Huánuco - 1980
- 1980
- 1980
- 1980
- 1980
- 1980
Philippines
- 1966
- 1969
Baguio - 1970
Davao - 1970
Zamboanga - 1971
Iloilo - 1972
- 1973
Cotabato - 1977
Subic - 1978
Portugal
- 1982
Russia
- 1970
- 1970
- 1971
- 1971
- 1971
- 1972
- 1972
- 1973
- 1974
- 1974
- 1975
- 1975
- 1976
- 1977
- 1977
- 1979
- 1980
- 1982
- 1983
Spain
- 1974
- 1975
- 1975
- 1975
- 1976
- 1976
- 1976
- 1976
- 1977
- 1979
Thailand
- 1967
- 1973
Turkey (Turkiye)
Ankara - 1981
Istanbul - 1981
Hatay - 1983
Bartın - 1984
Düzce - 1984
Edirne - 1984
Izmir - 1984
Kahramanmaraş - 1984
Niğde - 1984
Adana - 1984
Adıyaman - 1984
Diyarbakır - 1984
Gaziantep - 1984
Kilis - 1984
Malatya - 1984
Osmaniye - 1984
Şanlıurfa - 1984
United Arab Emirates
- 1974
United Kingdom
- 1969
United States
- 1954
- 1957
- 1957
- 1958
- 1959
- 1959
- 1959
- 1960
- 1960
- 1960
- 1960
- 1960
- 1961
- 1961
- 1961
- 1961
- 1961
- 1962
- 1962
- 1962
- 1962
- 1962
- 1962
- 1963
- 1963
- 1963
- 1963
- 1963
- 1963
- 1964
- 1964
- 1964
- 1964
- 1964
- 1964
- 1964
- 1964
- 1964
- 1964
- 1964
- 1964
- 1965
- 1965
- 1965
- 1965
- 1966
- 1966
- 1967
- 1969
- 1972
Uruguay
Montevideo - 1981
Cerro Largo - 1984
Durazno - 1984
- 1984
Salto - 1984
Tacuarembó - 1984
List of regional subdivisions and organizations that never had black and white television
Regional subdivisions and organizations that never had black and white television (i.e., their first broadcasts were in color) are not included in the table above.
- Cabinda
-
-
-
-
-
- Lebanese Forces (militia)
-
-
-
-
-
See also
Digital television transition
Geographical usage of television
Timeline of the introduction of television in countries
Notes
References
Technology timelines
Television timelines
Television technology | Timeline of the introduction of color television in countries and territories | [
"Technology"
] | 1,714 | [
"Information and communications technology",
"Television technology"
] |
11,721,606 | https://en.wikipedia.org/wiki/A%20Logic%20Named%20Joe | "A Logic Named Joe" is a science fiction short story by American writer Murray Leinster, first published in the March 1946 issue of Astounding Science Fiction. (The story appeared under Leinster's real name, Will F. Jenkins. That issue of Astounding also included a story under the Leinster pseudonym called "Adapter".) The story is particularly noteworthy as a prediction of massively networked personal computers and their drawbacks, written at a time when computing was in its infancy; it has been described as "the first computer-paranoia yarn".
Plot
The story's narrator is a "logic repairman" nicknamed Ducky. A "logic" is a computer-like device described as looking "like a vision receiver used to, only it's got keys instead of dials and you punch the keys for what you wanna get".
In the story, a logic (whom Ducky later calls Joe) develops some degree of sapience and ambition. Joe proceeds to switch around a few relays in "the tank" (one of a distributed set of central information repositories), and cross-correlate all information ever assembled – yielding highly unexpected results. It then proceeds to freely disseminate all of those results to everyone on demand (and simultaneously disabling all of the content-filtering protocols). Logics begin offering up unexpected assistance to everyone which includes designing custom chemicals that alleviate inebriation, giving sex advice to small children, and plotting the perfect murder.
Eventually Ducky "saves civilization" by locating and turning off the only logic capable of doing this.
Reception
In 1982, Isaac Asimov lauded the story as "prophetic" and "one of [Leinster's] finest", and observed that it "actually get(s) things right", if one "change(s) 'logics' to 'home computers' and make(s) a few other inconsequential semantic changes".
In 2007, Dave Truesdale praised it as "absolutely incredible" and "one of the greatest predictive, prophetic short SF stories in history, bar none", noting "how righteously dead on Leinster is in his depiction of the home personal computer and the internet in 1946!"
In 2012, Steven H Silver, reviewing the 2005 Leinster collection A Logic Named Joe, stated that "(i)f it hadn't predicted the rise of the internet, 'A Logic Named Joe' would be seen as a dated story rather than as an important work", but emphasized that it is "still an enjoyable story".
Publication history
"A Logic Named Joe" has appeared in the collections Sidewise in Time (Shasta, 1950), The Best of Murray Leinster (Del Rey, 1978), First Contacts (NESFA, 1998), and A Logic Named Joe (Baen, 2005), and was also included in the Machines That Think compilation, with notes by Isaac Asimov, published 1984 Holt, Rinehart, and Winston.
References
External links
Fictional computers
1946 short stories
Science fiction short stories
Works originally published in Analog Science Fiction and Fact
Works by Murray Leinster | A Logic Named Joe | [
"Technology"
] | 639 | [
"Fictional computers",
"Computers"
] |
11,722,993 | https://en.wikipedia.org/wiki/Raster%20Document%20Object | The .RDO (Raster Document Object) file format is the native format used by Xerox's DocuTech range of hardware and software, which underpins the company's "Xerox Document On Demand" (XDOD) systems. It is therefore a significant file format for the "print on demand" market sector, along with PostScript and PDF.
RDO is a metafile format based on the Open Document Architecture (ODA) specifications: In Xerox's RDO implementation, description and control information is stored within the RDO file, while raster images are stored separately, usually in a separate folder, as TIFF files. The RDO file dictates which bitmap images will be used on each page of a document, and where they will be placed.
Features and disadvantages
This approach has advantages and disadvantages over the monolithic approach used by PDF:
The disadvantages of RDO are that it is a largely proprietary format, and that the multi-file approach makes file management and orphan control more of an issue: one cannot tell from a computer's file system whether all the files required for a document to print are present and correct.
In RDO's favor, the multi-file approach allows a networked device to load the small RDO file and then request the larger bitmap files only when necessary: This allows a full job specification to be loaded and installed over a network almost immediately, with the larger bitmap files only having to be transferred as and when needed, allowing more flexibility for managing network traffic loading.
The TIFF file format is highly portable, and Xerox's MakeReady software, supplied with its XDOD systems, readily imports and exports PostScript files; however, the Xerox "on demand" systems typically require a document library to be stored as RDO / TIFF files, and most non-Xerox applications will not read RDO structures directly.
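A hypothetical sketch of such orphan control follows; how the list of referenced TIFFs is extracted from the proprietary RDO structure is assumed, not shown, and all file names are invented for illustration.

```python
# Check a multi-file RDO job for missing (orphaned) raster images.
from pathlib import Path

def missing_images(referenced_tiffs, image_folder):
    """Return names of referenced TIFF files absent from the image folder."""
    present = {p.name for p in Path(image_folder).glob('*.tif*')}
    return [name for name in referenced_tiffs if name not in present]

# Illustrative only; a real job would read these names out of the .RDO file.
print(missing_images(['page001.tif', 'page002.tif'], 'images'))
```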
See also
Xerox
DocuTech
Print on demand
Open Document Architecture
Tag Image File Format
Portable Document Format
References
"Document encoding formats for Phoenix: an example of on-demand publishing" - Summary Report prepared by South Bank University
Oya Y. Rieger and Anne R. Kenney "Risk Management of Digital Information Case Study for Image File Format"
Xerox
Page description languages
Digital press
Computer file formats
RDO | Raster Document Object | [
"Technology"
] | 491 | [
"Computing stubs",
"Computer science",
"Computer science stubs"
] |
11,723,786 | https://en.wikipedia.org/wiki/PG%201159-035 | PG 1159-035 is the prototypical PG 1159 star after which the class of PG 1159 stars was named. It was discovered in the Palomar-Green survey of ultraviolet-excess stellar objects and, like the other PG 1159 stars, is in transition between being the central star of a planetary nebula and being a white dwarf.
The luminosity of PG 1159-035 was observed to vary in 1979, and it was given the variable star designation GW Vir in 1985. Variable PG 1159 stars may be called GW Vir stars, or the class may be split into DOV and PNNV stars. The variability of PG 1159-035, like that of other GW Vir stars, arises from non-radial gravity wave pulsations within the star itself. Its light curve has been observed intensively by the Whole Earth Telescope over a 264-hour period in March 1989, and over 100 of its vibrational modes have been found in the resulting vibrational spectrum, with periods ranging from 300 to 1,000 seconds.
References
Virgo (constellation)
Pulsating white dwarfs
Virginis, GW
TIC objects | PG 1159-035 | [
"Astronomy"
] | 238 | [
"Virgo (constellation)",
"Constellations"
] |
11,724,178 | https://en.wikipedia.org/wiki/1s%20Slater-type%20function | A normalized 1s Slater-type function is a function which is used in the descriptions of atoms and in a broader way in the description of atoms in molecules. It is particularly important as the accurate quantum theory description of the smallest free atom, hydrogen. It has the form
It is a particular case of a Slater-type orbital (STO) in which the principal quantum number n is 1. The parameter \zeta is called the Slater orbital exponent. Related sets of functions can be used to construct STO-nG basis sets which are used in quantum chemistry.
Applications for hydrogen-like atomic systems
A hydrogen-like atom or a hydrogenic atom is an atom with one electron. Except for the hydrogen atom itself (which is neutral), these atoms carry positive charge e(Z - 1), where Z is the atomic number of the atom. Because hydrogen-like atoms are two-particle systems with an interaction depending only on the distance between the two particles, their (non-relativistic) Schrödinger equation can be exactly solved in analytic form. The solutions are one-electron functions and are referred to as hydrogen-like atomic orbitals.
The electronic Hamiltonian (in atomic units) of a hydrogenic system is given by

\hat{H}_{e} = -\frac{1}{2}\nabla^{2} - \frac{Z}{r},

where Z is the nuclear charge of the hydrogenic atomic system. The 1s electron of a hydrogenic system can be accurately described by the corresponding Slater orbital:

\psi_{1s} = \left(\frac{\zeta^{3}}{\pi}\right)^{1/2} e^{-\zeta r},

where \zeta is the Slater exponent. This state, the ground state, is the only state that can be described by a Slater orbital. Slater orbitals have no radial nodes, while the excited states of the hydrogen atom have radial nodes.
Exact energy of a hydrogen-like atom
The energy of a hydrogenic system can be exactly calculated analytically as follows:

E = \frac{\langle \psi_{1s} | \hat{H}_{e} | \psi_{1s} \rangle}{\langle \psi_{1s} | \psi_{1s} \rangle}

Using the expression for the Slater orbital, the integrals can be exactly solved, giving

E = \frac{\zeta^{2}}{2} - Z\zeta

The optimum value for \zeta is obtained by setting the derivative of the energy with respect to \zeta equal to zero:

\frac{dE}{d\zeta} = \zeta - Z = 0, \quad \text{so} \quad \zeta = Z

Thus

E = -\frac{Z^{2}}{2}
Non-relativistic energy
The following energy values are thus calculated by using the expressions for energy and for the Slater exponent.
Hydrogen (H): Z = 1 and \zeta = 1:
E = −0.5 Eh = −13.60569850 eV = −313.75450000 kcal/mol
Gold cation Au(78+): Z = 79 and \zeta = 79:
E = −3120.5 Eh = −84913.16433850 eV = −1958141.8345 kcal/mol
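As an illustration of these expressions, here is a minimal Python sketch; the unit-conversion constants are standard CODATA-style values rather than values taken from this article, so the last digits may differ slightly from those quoted above:

```python
# Non-relativistic 1s energy of a hydrogenic system, E = zeta^2/2 - Z*zeta,
# minimized at zeta = Z, giving E = -Z^2/2 (in hartrees, Eh).
EH_TO_EV = 27.211386245988      # 1 hartree in electronvolts (assumed CODATA value)
EH_TO_KCAL_MOL = 627.509474     # 1 hartree in kcal/mol (assumed conversion)

def e_1s(Z, zeta=None):
    """Energy expectation value of a 1s Slater orbital with exponent zeta."""
    if zeta is None:
        zeta = Z                # variationally optimal exponent
    return 0.5 * zeta**2 - Z * zeta

for name, Z in [("H", 1), ("Au(78+)", 79)]:
    e = e_1s(Z)
    print(f"{name}: {e} Eh = {e * EH_TO_EV:.2f} eV = {e * EH_TO_KCAL_MOL:.1f} kcal/mol")
```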
Relativistic energy of Hydrogenic atomic systems
Hydrogenic atomic systems are suitable models to demonstrate the relativistic effects in atomic systems in a simple way. The energy expectation value can be calculated by using Slater orbitals with or without considering the relativistic correction for the Slater exponent \zeta. The relativistically corrected Slater exponent is given as

\zeta_{\mathrm{rel}} = \frac{Z}{\sqrt{1 - Z^{2}/c^{2}}}

The relativistic energy of an electron in the 1s orbital of a hydrogenic atomic system is obtained by solving the Dirac equation:

E_{\mathrm{rel}} = -c^{2}\left(1 - \sqrt{1 - \frac{Z^{2}}{c^{2}}}\right)
The following table illustrates the relativistic corrections in energy, and it can be seen how the relativistic correction scales with the atomic number of the system.
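A sketch of this comparison in Python, evaluating the two energy expressions side by side (atomic units, with the speed of light c = 1/α taken as an assumed CODATA-style value):

```python
import math

C_AU = 137.035999084             # speed of light in atomic units (1/alpha), assumed value

def e_nonrel(Z):
    """Non-relativistic 1s energy, -Z^2/2, in hartrees."""
    return -0.5 * Z**2

def e_rel(Z):
    """Dirac 1s energy (rest mass subtracted), in hartrees."""
    return -C_AU**2 * (1.0 - math.sqrt(1.0 - (Z / C_AU)**2))

# The relativistic lowering grows rapidly with Z.
for Z in (1, 20, 79):
    print(Z, e_nonrel(Z), round(e_rel(Z), 4), round(e_rel(Z) - e_nonrel(Z), 4))
```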
References
Atoms
Quantum models | 1s Slater-type function | [
"Physics"
] | 627 | [
"Quantum models",
"Atoms",
"Quantum mechanics",
"Matter"
] |
11,724,245 | https://en.wikipedia.org/wiki/Hydrogen-like%20atom | A hydrogen-like atom (or hydrogenic atom) is any atom or ion with a single valence electron. These atoms are isoelectronic with hydrogen. Examples of hydrogen-like atoms include, but are not limited to, hydrogen itself, all alkali metals such as Rb and Cs, singly ionized alkaline earth metals such as Ca+ and Sr+ and other ions such as He+, Li2+, and Be3+ and isotopes of any of the above. A hydrogen-like atom includes a positively charged core consisting of the atomic nucleus and any core electrons as well as a single valence electron. Because helium is common in the universe, the spectroscopy of singly ionized helium is important in EUV astronomy, for example, of DO white dwarf stars.
The non-relativistic Schrödinger equation and relativistic Dirac equation for the hydrogen atom can be solved analytically, owing to the simplicity of the two-particle physical system. The one-electron wave function solutions are referred to as hydrogen-like atomic orbitals. Hydrogen-like atoms are of importance because their corresponding orbitals bear similarity to the hydrogen atomic orbitals.
Other systems may also be referred to as "hydrogen-like atoms", such as muonium (an electron orbiting an antimuon), positronium (an electron and a positron), certain exotic atoms (formed with other particles), or Rydberg atoms (in which one electron is in such a high energy state that it sees the rest of the atom effectively as a point charge).
Schrödinger solution
In the solution to the Schrödinger equation, which is non-relativistic, hydrogen-like atomic orbitals are eigenfunctions of the one-electron angular momentum operator L and its z component Lz. A hydrogen-like atomic orbital is uniquely identified by the values of the principal quantum number n, the angular momentum quantum number l, and the magnetic quantum number m. The energy eigenvalues do not depend on l or m, but solely on n. To these must be added the two-valued spin quantum number ms = ±1/2, setting the stage for the Aufbau principle. This principle restricts the allowed values of the four quantum numbers in electron configurations of many-electron atoms. In hydrogen-like atoms all degenerate orbitals of fixed n and l, with m and s varying between certain values (see below), form an atomic shell.
The Schrödinger equation of atoms or ions with more than one electron has not been solved analytically, because of the computational difficulty imposed by the Coulomb interaction between the electrons. Numerical methods must be applied in order to obtain (approximate) wavefunctions or other properties from quantum mechanical calculations. Due to the spherical symmetry (of the Hamiltonian), the total angular momentum J of an atom is a conserved quantity. Many numerical procedures start from products of atomic orbitals that are eigenfunctions of the one-electron operators L and Lz. The radial parts of these atomic orbitals are sometimes numerical tables or are sometimes Slater orbitals. By angular momentum coupling many-electron eigenfunctions of J2 (and possibly S2) are constructed.
In quantum chemical calculations hydrogen-like atomic orbitals cannot serve as an expansion basis, because they are not complete. The non-square-integrable continuum (E > 0) states must be included to obtain a complete set, i.e., to span all of one-electron Hilbert space.
In the simplest model, the atomic orbitals of hydrogen-like atoms/ions are solutions to the Schrödinger equation in a spherically symmetric potential. In this case, the potential term is the potential given by Coulomb's law:

V(r) = -\frac{1}{4\pi\varepsilon_{0}}\frac{Ze^{2}}{r}
where
ε0 is the permittivity of the vacuum,
Z is the atomic number (number of protons in the nucleus),
e is the elementary charge (charge of an electron),
r is the distance of the electron from the nucleus.
After writing the wave function as a product of functions:

\psi(r,\theta,\phi) = R(r)\,Y_{lm}(\theta,\phi)

(in spherical coordinates), where Y_{lm} are spherical harmonics, we arrive at the following Schrödinger equation:

-\frac{\hbar^{2}}{2\mu}\left[\frac{1}{r^{2}}\frac{d}{dr}\left(r^{2}\frac{dR(r)}{dr}\right) - \frac{l(l+1)}{r^{2}}R(r)\right] + V(r)\,R(r) = E\,R(r),

where \mu is, approximately, the mass of the electron (more accurately, it is the reduced mass of the system consisting of the electron and the nucleus), and \hbar is the reduced Planck constant.
Different values of l give solutions with different angular momentum, where l (a non-negative integer) is the quantum number of the orbital angular momentum. The magnetic quantum number m (satisfying -l \le m \le l) is the (quantized) projection of the orbital angular momentum on the z-axis. See here for the steps leading to the solution of this equation.
Non-relativistic wavefunction and energy
In addition to l and m, a third integer n > 0 emerges from the boundary conditions placed on R. The functions R and Y that solve the equations above depend on the values of these integers, called quantum numbers. It is customary to subscript the wave functions with the values of the quantum numbers they depend on. The final expression for the normalized wave function is:

\psi_{nlm}(r,\theta,\phi) = R_{nl}(r)\,Y_{lm}(\theta,\phi)

R_{nl}(r) = \sqrt{\left(\frac{2Z}{n a_{\mu}}\right)^{3} \frac{(n-l-1)!}{2n\,(n+l)!}}\; e^{-Zr/(n a_{\mu})} \left(\frac{2Zr}{n a_{\mu}}\right)^{l} L_{n-l-1}^{(2l+1)}\!\left(\frac{2Zr}{n a_{\mu}}\right)

where:

L_{n-l-1}^{(2l+1)} are the generalized Laguerre polynomials.

a_{\mu} = \frac{\hbar}{\mu c \alpha} = \frac{m_{e}}{\mu}\,a_{0}, where \alpha is the fine-structure constant. Here, \mu is the reduced mass of the nucleus-electron system, that is, \mu = \frac{m_{N} m_{e}}{m_{N} + m_{e}}, where m_{N} is the mass of the nucleus. Typically, the nucleus is much more massive than the electron, so \mu \approx m_{e} (but in positronium, for instance, \mu = m_{e}/2). a_{0} is the Bohr radius.

The function Y_{lm}(\theta,\phi) is a spherical harmonic.

The parity due to the angular wave function is (-1)^{l}.
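As a numerical check on the radial functions above, here is a short Python sketch using the standard textbook normalization (distances measured in units of a_μ; SciPy's genlaguerre supplies the generalized Laguerre polynomials, and the normalization integral should come out to 1):

```python
import math
import numpy as np
from scipy.special import genlaguerre
from scipy.integrate import quad

def R_nl(n, l, r, Z=1.0):
    """Radial wave function R_nl(r) of a hydrogen-like atom, r in units of a_mu."""
    rho = 2.0 * Z * r / n
    norm = math.sqrt((2.0 * Z / n)**3 * math.factorial(n - l - 1)
                     / (2.0 * n * math.factorial(n + l)))
    return norm * np.exp(-rho / 2.0) * rho**l * genlaguerre(n - l - 1, 2 * l + 1)(rho)

# Normalization check: the integral of R^2 r^2 dr over [0, inf) should equal 1.
for n, l in [(1, 0), (2, 1), (3, 1)]:
    val, _ = quad(lambda r: R_nl(n, l, r)**2 * r**2, 0, np.inf)
    print(n, l, round(val, 10))
```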
Quantum numbers
The quantum numbers n, l and m are integers and can have the following values:

n = 1, 2, 3, ...
l = 0, 1, 2, ..., n − 1
m = −l, −l + 1, ..., l − 1, l

For a group-theoretical interpretation of these quantum numbers, see this article. Among other things, this article gives group-theoretical reasons why l < n and −l ≤ m ≤ l.
Angular momentum
Each atomic orbital is associated with an angular momentum L. It is a vector operator, and the eigenvalues of its square L2 ≡ Lx2 + Ly2 + Lz2 are given by:

\hat{L}^{2} Y_{lm} = \hbar^{2} l(l+1)\, Y_{lm}

The projection of this vector onto an arbitrary direction is quantized. If the arbitrary direction is called z, the quantization is given by:

\hat{L}_{z} Y_{lm} = \hbar m\, Y_{lm}
where m is restricted as described above. Note that L2 and Lz commute and have a common eigenstate, which is in accordance with Heisenberg's uncertainty principle. Since Lx and Ly do not commute with Lz, it is not possible to find a state that is an eigenstate of all three components simultaneously. Hence the values of the x and y components are not sharp, but are given by a probability function of finite width. The fact that the x and y components are not well-determined, implies that the direction of the angular momentum vector is not well determined either, although its component along the z-axis is sharp.
These relations do not give the total angular momentum of the electron. For that, electron spin must be included.
This quantization of angular momentum closely parallels that proposed by Niels Bohr (see Bohr model) in 1913, with no knowledge of wavefunctions.
Including spin–orbit interaction
In a real atom, the spin of a moving electron can interact with the electric field of the nucleus through relativistic effects, a phenomenon known as spin–orbit interaction. When one takes this coupling into account, the spin and the orbital angular momentum are no longer conserved, which can be pictured by the electron precessing. Therefore, one has to replace the quantum numbers l, m and the projection of the spin ms by quantum numbers that represent the total angular momentum (including spin), j and mj, as well as the quantum number of parity.
See the next section on the Dirac equation for a solution that includes the coupling.
Solution to Dirac equation
In 1928 in England Paul Dirac found an equation that was fully compatible with special relativity. The equation was solved for hydrogen-like atoms the same year (assuming a simple Coulomb potential around a point charge) by the German Walter Gordon. Instead of a single (possibly complex) function as in the Schrödinger equation, one must find four complex functions that make up a bispinor. The first and second functions (or components of the spinor) correspond (in the usual basis) to spin "up" and spin "down" states, as do the third and fourth components.
The terms "spin up" and "spin down" are relative to a chosen direction, conventionally the z direction. An electron may be in a superposition of spin up and spin down, which corresponds to the spin axis pointing in some other direction. The spin state may depend on location.
An electron in the vicinity of a nucleus necessarily has non-zero amplitudes for the third and fourth components. Far from the nucleus these may be small, but near the nucleus they become large.
The eigenfunctions of the Hamiltonian, which means functions with a definite energy (and which therefore do not evolve except for a phase shift), have energies characterized not by the quantum number n only (as for the Schrödinger equation), but by n and a quantum number j, the total angular momentum quantum number. The quantum number j determines the sum of the squares of the three angular momenta to be j(j+1) (times ħ2, see Planck constant). These angular momenta include both orbital angular momentum (having to do with the angular dependence of ψ) and spin angular momentum (having to do with the spin state). The splitting of the energies of states of the same principal quantum number n due to differences in j is called fine structure. The total angular momentum quantum number j ranges from 1/2 to n−1/2.
The orbitals for a given state can be written using two radial functions and two angle functions. The radial functions depend on both the principal quantum number n and an integer k, defined as:

k = −ℓ − 1 if j = ℓ + 1/2
k = ℓ if j = ℓ − 1/2

where ℓ is the azimuthal quantum number that ranges from 0 to n−1. The angle functions depend on k and on a quantum number m which ranges from −j to j by steps of 1. The states are labeled using the letters S, P, D, F et cetera to stand for states with ℓ equal to 0, 1, 2, 3 et cetera (see azimuthal quantum number), with a subscript giving j. For instance, the states for n=4 are given in the following table (these would be prefaced by n, for example 4S1/2):
These can be additionally labeled with a subscript giving m. There are 2n2 states with principal quantum number n, 4j+2 of them with any allowed j except the highest (j=n−1/2) for which there are only 2j+1. Since the orbitals having given values of n and j have the same energy according to the Dirac equation, they form a basis for the space of functions having that energy.
The energy, as a function of n and |k| (equal to j+1/2), is:

E = \frac{m c^{2}}{\sqrt{1 + \left(\dfrac{Z\alpha}{n - |k| + \sqrt{k^{2} - Z^{2}\alpha^{2}}}\right)^{2}}}

(The energy of course depends on the zero-point used.) Note that if Z were able to be more than 137 (higher than any known element) then we would have a negative value inside the square root for the S1/2 and P1/2 orbitals, which means they would not exist. The non-relativistic Schrödinger energies are recovered by expanding this expression to lowest order in Zα. The accuracy of the energy difference between the lowest two hydrogen states calculated from the Schrödinger solution is about 9 ppm (90 μeV too low, out of around 10 eV), whereas the accuracy of the Dirac equation for the same energy difference is about 3 ppm (too high). The Schrödinger solution always puts the states at slightly higher energies than the more accurate Dirac equation. The Dirac equation gives some levels of hydrogen quite accurately (for instance the 4P1/2 state is given an energy only marginally too high), others less so (for instance, the 2S1/2 level is somewhat too low). The modifications of the energy due to using the Dirac equation rather than the Schrödinger solution are of the order of α2, and for this reason α is called the fine-structure constant.
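A minimal Python sketch of this energy formula, comparing it with the non-relativistic result for the n = 2 shell of hydrogen (the physical constants are assumed CODATA-style values, not taken from this article). Note that the Dirac energies of 2S1/2 and 2P1/2 coincide, while 2P3/2 lies slightly higher, which is exactly the fine structure discussed above:

```python
import math

ALPHA = 7.2973525693e-3          # fine-structure constant (assumed CODATA value)
MC2_EV = 510998.95               # electron rest energy in eV (assumed CODATA value)

def dirac_energy(n, j, Z=1):
    """Bound-state energy (rest mass subtracted) from the formula above, in eV."""
    k = j + 0.5                  # |k| = j + 1/2
    gamma = math.sqrt(k**2 - (Z * ALPHA)**2)
    e_total = MC2_EV / math.sqrt(1 + (Z * ALPHA / (n - k + gamma))**2)
    return e_total - MC2_EV

def schroedinger_energy(n, Z=1):
    """Non-relativistic energy, -Z^2 alpha^2 mc^2 / (2 n^2), in eV."""
    return -0.5 * (Z * ALPHA)**2 * MC2_EV / n**2

print("Schrödinger n=2:", schroedinger_energy(2))
for label, n, j in [("2S1/2", 2, 0.5), ("2P1/2", 2, 0.5), ("2P3/2", 2, 1.5)]:
    print(label, dirac_energy(n, j))
```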
The solution to the Dirac equation for quantum numbers n, k, and m, is:

\Psi = \begin{pmatrix} g(r)\,\Omega_{k,m} \\ i f(r)\,\Omega_{-k,m} \end{pmatrix}

where the Ωs are columns of the two spherical harmonic functions shown to the right, g(r) and f(r) are the two radial functions, and Y_{\ell}^{m}(\theta,\phi) signifies a spherical harmonic function:

Y_{\ell}^{m}(\theta,\phi) = \sqrt{\frac{2\ell+1}{4\pi}\,\frac{(\ell-m)!}{(\ell+m)!}}\; P_{\ell}^{m}(\cos\theta)\, e^{im\phi}

in which P_{\ell}^{m} is an associated Legendre polynomial. (Note that the definition of Ω may involve a spherical harmonic that doesn't exist, but the coefficient on it will be zero.)
Here is the behavior of some of these angular functions. The normalization factor is left out to simplify the expressions.
From these we see that in the S1/2 orbital (k = −1), the top two components of Ψ have zero orbital angular momentum like Schrödinger S orbitals, but the bottom two components are orbitals like the Schrödinger P orbitals. In the P1/2 solution (k = 1), the situation is reversed. In both cases, the spin of each component compensates for its orbital angular momentum around the z axis to give the right value for the total angular momentum around the z axis.
The two Ω spinors obey the relationship:
To write the functions g(r) and f(r), let us define a scaled radius ρ:

\rho = 2Cr,

with

C = \frac{\sqrt{m^{2}c^{4} - E^{2}}}{\hbar c},

where E is the energy given above. We also define γ as:

\gamma = \sqrt{k^{2} - Z^{2}\alpha^{2}}
When k = −n (which corresponds to the highest j possible for a given n, such as 1S1/2, 2P3/2, 3D5/2, ...), then g(r) and f(r) take the closed forms

g(r) = A\,r^{\gamma-1}e^{-Cr}
f(r) = -A\,\frac{Z\alpha}{\gamma + n}\,r^{\gamma-1}e^{-Cr}

where A is a normalization constant involving the gamma function.
Notice that because of the factor Zα, f(r) is small compared to g(r). Also notice that in this case, the energy is given by

E = \sqrt{1 - \frac{Z^{2}\alpha^{2}}{n^{2}}}\; mc^{2}

and the radial decay constant C by

C = \frac{Z\alpha}{n}\,\frac{mc}{\hbar}
In the general case (when k is not −n), g(r) and f(r) are based on two generalized Laguerre polynomials of order n − |k| − 1 and n − |k|, with A now a different normalization constant.
Again f is small compared to g (except at very small r) because when k is positive the first terms dominate, and Zα is big compared to γ − k, whereas when k is negative the second terms dominate and Zα is small compared to γ − k. Note that the dominant term is quite similar to the corresponding Schrödinger solution – the upper index on the Laguerre polynomial is slightly less (2γ+1 or 2γ−1 rather than 2ℓ+1, which is the nearest integer), as is the power of ρ (γ or γ−1 instead of ℓ, the nearest integer). The exponential decay is slightly faster than in the Schrödinger solution.
The normalization factor makes the integral over all space of the square of the absolute value equal to 1.
1S orbital
Here is the 1S1/2 orbital, spin up, without normalization:
Note that γ is a little less than 1, so the top function is similar to an exponentially decreasing function of r, except that at very small r it theoretically goes to infinity. But the value of r^{γ−1} only surpasses 10 at extremely small values of r (much less than the radius of a proton), unless Z is very large.
The 1S1/2 orbital, spin down, without normalization, comes out as:
We can mix these in order to obtain orbitals with the spin oriented in some other direction, such as:
which corresponds to the spin and angular momentum axis pointing in the x direction. Adding i times the "down" spin to the "up" spin gives an orbital oriented in the y direction.
2P1/2 and 2S1/2 orbitals
To give another example, the 2P1/2 orbital, spin up, is proportional to:
(Remember that ρ = 2Cr; C is about half what it is for the 1S orbital, but γ is still the same.)
Notice that when ρ is small compared to α (or r is small compared to α/(2C)) the "S" type orbital dominates (the third component of the bispinor).
For the 2S1/2 spin up orbital, we have:
Now the first component is S-like and there is a radius near ρ = 2 where it goes to zero, whereas the bottom two-component part is P-like.
Negative-energy solutions
In addition to bound states, in which the energy is less than that of an electron infinitely separated from the nucleus, there are solutions to the Dirac equation at higher energy, corresponding to an unbound electron interacting with the nucleus. These solutions are not normalizable, but solutions can be found which tend toward zero as r goes to infinity (which is not possible when |E| < mc², except at the above-mentioned bound-state values of E). There are similar solutions with E < −mc². These negative-energy solutions are just like positive-energy solutions having the opposite energy, but for a case in which the nucleus repels the electron instead of attracting it, except that the solutions for the top two components switch places with those for the bottom two.
Negative-energy solutions to Dirac's equation exist even in the absence of a Coulomb force exerted by a nucleus. Dirac hypothesized that we can consider almost all of these states to be already filled. If one of these negative-energy states is not filled, this manifests itself as though there is an electron which is repelled by a positively-charged nucleus. This prompted Dirac to hypothesize the existence of positively-charged electrons, and his prediction was confirmed with the discovery of the positron.
Beyond Gordon's solution to the Dirac equation
The Dirac equation with a simple Coulomb potential generated by a point-like non-magnetic nucleus was not the last word, and its predictions differ from experimental results as mentioned earlier. More accurate results include the Lamb shift (radiative corrections arising from quantum electrodynamics) and hyperfine structure.
See also
Rydberg atom
Positronium
Exotic atom
Two-electron atom
Hydrogen molecular ion
Notes
References
Tipler, Paul & Ralph Llewellyn (2003). Modern Physics (4th ed.). New York: W. H. Freeman and Company.
Atoms
Quantum mechanics
Hydrogen | Hydrogen-like atom | [
"Physics"
] | 3,794 | [
"Quantum mechanics",
"Theoretical physics",
"Atoms",
"Matter"
] |
11,724,459 | https://en.wikipedia.org/wiki/Bilirubin%20diglucuronide | Bilirubin di-glucuronide is a conjugated form of bilirubin formed in bilirubin metabolism. The hydrophilic character of bilirubin diglucuronide enables it to be water-soluble. It is pumped across the hepatic canalicular membrane into the bile by the transporter MRP2.
See also
Bilirubin mono-glucuronide
References
Metabolism | Bilirubin diglucuronide | [
"Chemistry",
"Biology"
] | 92 | [
"Biotechnology stubs",
"Biochemistry stubs",
"Cellular processes",
"Biochemistry",
"Metabolism"
] |
11,724,589 | https://en.wikipedia.org/wiki/Iota%20Draconis%20b | Iota Draconis b, formally named Hypatia (pronounced or ), is an exoplanet orbiting the K-type giant star Iota Draconis about 101.2 light-years (31 parsecs, or nearly km) from Earth in the constellation Draco. The exoplanet was found by using the radial velocity method, from radial-velocity measurements via observation of Doppler shifts in the spectrum of the planet's parent star. It was the first planet discovered orbiting a giant star.
Physical characteristics
Mass
Iota Draconis b is a "super-Jupiter", a planet with a mass larger than those of the gas giants Jupiter and Saturn. It has an estimated minimum mass of around 11.82 MJ (Jupiter masses).
In 2021, astrometric observations revealed the true mass of Iota Draconis b to be 16.4 MJ.
Host star
The planet orbits a (K-type) giant star named Edasich (designated Iota Draconis). The star has exhausted the hydrogen supply in its core and is currently fusing helium. The star has a mass of 1.82 M☉ and a radius of around 12 R☉. It has a surface temperature of 4545 K and is around 800 million years old based on its evolution. Although much younger than the Sun, the higher mass of this star correlates to a faster evolution, leading to the host star having already departed from the main sequence. When on the main sequence, Edasich was probably a class A star with a surface temperature between 7,400 and 10,000 K. In comparison, the Sun is about 4.6 billion years old and has a surface temperature of 5778 K.
The star's apparent magnitude, a measure of how bright it appears from Earth, is 3.31. Therefore, Edasich can be seen with the naked eye.
Orbit
Iota Draconis b orbits its star, which has nearly 55 times the Sun's luminosity (55 L☉), every 511 days at an average distance of 1.275 AU (compared to Mars' orbital distance from the Sun, which is 1.52 AU). It has a very eccentric orbit, with an eccentricity of 0.7124.
Discovery
Discovered in 2002 during a radial velocity study of K-class giant stars, its eccentric orbit aided its detection, as giant stars have pulsations which can mimic the presence of a planet.
Name
Following its discovery the planet was designated Iota Draconis b. In July 2014, the International Astronomical Union launched NameExoWorlds, a process for giving proper names to certain exoplanets and their host stars. The process involved public nomination and voting for the new names. In December 2015, the IAU announced that the winning name for this planet was Hypatia. The winning name was submitted by Hypatia, a student society of the Physics Faculty of the Universidad Complutense de Madrid, Spain. Hypatia was a famous Greek astronomer, mathematician, and philosopher.
References
External links
SolStation: Edasich/Iota Draconis
Draco (constellation)
Giant planets
Exoplanets discovered in 2002
Exoplanets detected by radial velocity
Exoplanets detected by astrometry
Exoplanets with proper names | Iota Draconis b | [
"Astronomy"
] | 669 | [
"Constellations",
"Draco (constellation)"
] |
11,724,761 | https://en.wikipedia.org/wiki/Trophic%20level | The trophic level of an organism is the position it occupies in a food web. Within a food web, a food chain is a succession of organisms that eat other organisms and may, in turn, be eaten themselves. The trophic level of an organism is the number of steps it is from the start of the chain. A food web starts at trophic level 1 with primary producers such as plants, can move to herbivores at level 2, carnivores at level 3 or higher, and typically finish with apex predators at level 4 or 5. The path along the chain can form either a one-way flow or a part of a wider food "web". Ecological communities with higher biodiversity form more complex trophic paths.
The word trophic derives from the Greek τροφή (trophē) referring to food or nourishment.
History
The concept of trophic level was developed by Raymond Lindeman (1942), based on the terminology of August Thienemann (1926): "producers", "consumers", and "reducers" (modified to "decomposers" by Lindeman).
Overview
The three basic ways in which organisms get food are as producers, consumers, and decomposers.
Producers (autotrophs) are typically plants or algae. Plants and algae do not usually eat other organisms, but pull nutrients from the soil or the ocean and manufacture their own food using photosynthesis. For this reason, they are called primary producers. In this way, it is energy from the sun that usually powers the base of the food chain. An exception occurs in deep-sea hydrothermal ecosystems, where there is no sunlight. Here primary producers manufacture food through a process called chemosynthesis.
Consumers (heterotrophs) are species that cannot manufacture their own food and need to consume other organisms. Animals that eat primary producers (like plants) are called herbivores. Animals that eat other animals are called carnivores, and animals that eat both plants and other animals are called omnivores.
Decomposers (detritivores) break down dead plant and animal material and wastes and release it again as energy and nutrients into the ecosystem for recycling. Decomposers, such as bacteria and fungi (mushrooms), feed on waste and dead matter, converting it into inorganic chemicals that can be recycled as mineral nutrients for plants to use again.
Trophic levels can be represented by numbers, starting at level 1 with plants. Further trophic levels are numbered subsequently according to how far the organism is along the food chain.
Level 1 Plants and algae make their own food and are called producers.
Level 2 Herbivores eat plants and are called primary consumers.
Level 3 Carnivores that eat herbivores are called secondary consumers.
Level 4 Carnivores that eat other carnivores are called tertiary consumers.
Apex predator By definition, healthy adult apex predators have no predators (with members of their own species a possible exception) and are at the highest numbered level of their food web.
In real-world ecosystems, there is more than one food chain for most organisms, since most organisms eat more than one kind of food or are eaten by more than one type of predator. A diagram that sets out the intricate network of intersecting and overlapping food chains for an ecosystem is called its food web. Decomposers are often left off food webs, but if included, they mark the end of a food chain. Thus food chains start with primary producers and end with decay and decomposers. Since decomposers recycle nutrients, leaving them so they can be reused by primary producers, they are sometimes regarded as occupying their own trophic level.
The trophic level of a species may vary if it has a choice of diet. Virtually all plants and phytoplankton are purely phototrophic and are at exactly level 1.0. Many worms are at around 2.1; insects 2.2; jellyfish 3.0; birds 3.6. A 2013 study estimates the average trophic level of human beings at 2.21, similar to pigs or anchovies. This is only an average, and plainly both modern and ancient human eating habits are complex and vary greatly. For example, a traditional Inuit living on a diet consisting primarily of seals would have a trophic level of nearly 5.
Biomass transfer efficiency
In general, each trophic level relates to the one below it by absorbing some of the energy it consumes, and in this way can be regarded as resting on, or supported by, the next lower trophic level. Food chains can be diagrammed to illustrate the amount of energy that moves from one feeding level to the next in a food chain. This is called an energy pyramid. The energy transferred between levels can also be thought of as approximating to a transfer in biomass, so energy pyramids can also be viewed as biomass pyramids, picturing the amount of biomass that results at higher levels from biomass consumed at lower levels. However, when primary producers grow rapidly and are consumed rapidly, the biomass at any one moment may be low; for example, phytoplankton (producer) biomass can be low compared to the zooplankton (consumer) biomass in the same area of ocean.
The efficiency with which energy or biomass is transferred from one trophic level to the next is called the ecological efficiency. Consumers at each level convert on average only about 10% of the chemical energy in their food to their own organic tissue (the ten-percent law). For this reason, food chains rarely extend for more than 5 or 6 levels. At the lowest trophic level (the bottom of the food chain), plants convert about 1% of the sunlight they receive into chemical energy. It follows from this that the total energy originally present in the incident sunlight that is finally embodied in a tertiary consumer is about 0.001% (three successive ~10% transfers applied to the plants' 1%: 0.01 × 0.1³ = 10⁻⁵).
Evolution
Both the number of trophic levels and the complexity of relationships between them evolve as life diversifies through time, the exception being intermittent mass extinction events.
Fractional trophic levels
Food webs largely define ecosystems, and the trophic levels define the position of organisms within the webs. But these trophic levels are not always simple integers, because organisms often feed at more than one trophic level. For example, some carnivores also eat plants, and some plants are carnivores. A large carnivore may eat both smaller carnivores and herbivores; the bobcat eats rabbits, but the mountain lion eats both bobcats and rabbits. Animals can also eat each other; the bullfrog eats crayfish and crayfish eat young bullfrogs. The feeding habits of a juvenile animal, and, as a consequence, its trophic level, can change as it grows up.
The fisheries scientist Daniel Pauly sets the values of trophic levels to one in plants and detritus, two in herbivores and detritivores (primary consumers), three in secondary consumers, and so on. The definition of the trophic level, TL, for any consumer species is:

TL_{i} = 1 + \sum_{j} (TL_{j} \cdot DC_{ij})

where TL_{j} is the fractional trophic level of the prey j, and DC_{ij} represents the fraction of j in the diet of i. That is, the consumer trophic level is one plus the weighted average of how much different trophic levels contribute to its food.
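A minimal Python sketch of this definition, solving the fixed-point equation by iteration; the four-species diet matrix is hypothetical, chosen only to illustrate the weighted average:

```python
import numpy as np

def trophic_levels(diet, n_iter=100):
    """Fractional trophic levels: TL_i = 1 + sum_j diet[i, j] * TL_j.

    diet[i, j] is the fraction of prey j in the diet of consumer i;
    each row sums to at most 1 (primary producers have all-zero rows).
    Solved by fixed-point iteration starting from TL = 1 everywhere.
    """
    tl = np.ones(diet.shape[0])
    for _ in range(n_iter):
        tl = 1.0 + diet @ tl
    return tl

# Hypothetical 4-species web: plant, herbivore, omnivore, top predator.
diet = np.array([
    [0.0, 0.0, 0.0, 0.0],   # plant: primary producer
    [1.0, 0.0, 0.0, 0.0],   # herbivore eats only plants
    [0.5, 0.5, 0.0, 0.0],   # omnivore: half plants, half herbivores
    [0.0, 0.3, 0.7, 0.0],   # predator eats herbivores and omnivores
])
print(trophic_levels(diet))  # -> [1.0, 2.0, 2.5, 3.35]
```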
In the case of marine ecosystems, the trophic level of most fish and other marine consumers takes a value between 2.0 and 5.0. The upper value, 5.0, is unusual, even for large fish, though it occurs in apex predators of marine mammals, such as polar bears and orcas.
In addition to observational studies of animal behavior, and quantification of animal stomach contents, trophic level can be quantified through stable isotope analysis of animal tissues such as muscle, skin, hair, bone collagen. This is because there is a consistent increase in the nitrogen isotopic composition at each trophic level caused by fractionations that occur with the synthesis of biomolecules; the magnitude of this increase in nitrogen isotopic composition is approximately 3–4‰.
Mean trophic level
In fisheries, the mean trophic level for the fisheries catch across an entire area or ecosystem is calculated for year y as:

TL_{y} = \frac{\sum_{i} (TL_{i} \cdot Y_{iy})}{\sum_{i} Y_{iy}}

where Y_{iy} is the annual catch of the species or group i in year y, and TL_{i} is the trophic level for species i as defined above.
Fish at higher trophic levels usually have a higher economic value, which can result in overfishing at the higher trophic levels. Earlier reports found precipitous declines in mean trophic level of fisheries catch, in a process known as fishing down the food web. However, more recent work finds no relation between economic value and trophic level; and that mean trophic levels in catches, surveys and stock assessments have not in fact declined, suggesting that fishing down the food web is not a global phenomenon. However, Pauly et al. note that trophic levels peaked at 3.4 in 1970 in the northwest and west-central Atlantic, followed by a subsequent decline to 2.9 in 1994. They report a shift away from long-lived, piscivorous, high-trophic-level bottom fishes, such as cod and haddock, to short-lived, planktivorous, low-trophic-level invertebrates (e.g., shrimp) and small, pelagic fish (e.g., herring). This shift from high-trophic-level fishes to low-trophic-level invertebrates and fishes is a response to changes in the relative abundance of the preferred catch. They consider that this is part of the global fishery collapse, which finds an echo in the overfished Mediterranean Sea.
Humans have a mean trophic level of about 2.21, about the same as a pig or an anchovy.
FiB index
Since biomass transfer efficiencies are only about 10%, it follows that the rate of biological production is much greater at lower trophic levels than it is at higher levels. Fisheries catch, at least to begin with, will tend to increase as the trophic level declines. At this point the fisheries will target species lower in the food web. In 2000, this led Pauly and others to construct a "Fisheries in Balance" index, usually called the FiB index. The FiB index is defined, for any year y, by

FiB_{y} = \log\left(\frac{Y_{y} / (TE)^{TL_{y}}}{Y_{0} / (TE)^{TL_{0}}}\right)

where Y_{y} is the catch at year y, TL_{y} is the mean trophic level of the catch at year y, Y_{0} is the catch, and TL_{0} the mean trophic level of the catch, at the start of the series being analyzed, and TE is the transfer efficiency of biomass or energy between trophic levels.
The FiB index is stable (zero) over periods of time when changes in trophic levels are matched by appropriate changes in the catch in the opposite direction. The index increases if catches increase for any reason, e.g. higher fish biomass, or geographic expansion, and decreases when increasing catches fail to compensate for a declining mean trophic level. Such decreases explain the "backward-bending" plots of trophic level versus catch originally observed by Pauly and others in 1998.
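For concreteness, here is a small Python sketch of the mean-trophic-level and FiB calculations above; the species, catch numbers, and the choice of a base-10 logarithm are illustrative assumptions, not values from the literature:

```python
import math

def mean_trophic_level(catch, tl):
    """Catch-weighted mean trophic level for one year."""
    return sum(c * t for c, t in zip(catch, tl)) / sum(catch)

def fib_index(catch_y, tl_y, catch_0, tl_0, te=0.1):
    """Fisheries-in-Balance index for year y relative to the baseline year."""
    return math.log10((catch_y / te**tl_y) / (catch_0 / te**tl_0))

tl = [3.35, 2.5, 2.0]                        # predator, omnivore, herbivore
catch_0, catch_1 = [10, 20, 30], [5, 25, 60]  # baseline year vs. later year

tl_0 = mean_trophic_level(catch_0, tl)
tl_1 = mean_trophic_level(catch_1, tl)       # lower: fishing down the web
print(fib_index(sum(catch_1), tl_1, sum(catch_0), tl_0))
# Close to zero: the larger catch roughly offsets the trophic-level decline.
```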
Tritrophic and other interactions
One aspect of trophic levels is called tritrophic interaction. Ecologists often restrict their research to two trophic levels as a way of simplifying the analysis; however, this can be misleading if tritrophic interactions (such as plant–herbivore–predator) are not easily understood by simply adding pairwise interactions (plant-herbivore plus herbivore–predator, for example). Significant interactions can occur between the first trophic level (plant) and the third trophic level (a predator) in determining herbivore population growth, for example. Simple genetic changes may yield morphological variants in plants that then differ in their resistance to herbivores because of the effects of the plant architecture on enemies of the herbivore. Plants can also develop defenses against herbivores such as chemical defenses.
See also
Cascade effect
Energy flow (ecology)
Marine trophic level
Mesopredator release hypothesis
Trophic cascade
Trophic dynamics – Food web
Trophic state index – applied to lakes
References
External links
Trophic levels BBC. Last updated March 2004.
Fisheries science
Food chains
Ecology | Trophic level | [
"Biology"
] | 2,571 | [
"Ecology"
] |
8,598,906 | https://en.wikipedia.org/wiki/Pharmaceutical%20formulation | Pharmaceutical formulation, in pharmaceutics, is the process in which different chemical substances, including the active drug, are combined to produce a final medicinal product. The word formulation is often used in a way that includes dosage form.
Stages and timeline
Formulation studies involve developing a preparation of the drug which is both stable and acceptable to the patients. For orally administered drugs, this usually involves incorporating the drug into a tablet or a capsule. It is important to make the distinction that a tablet contains a variety of other potentially inert substances apart from the drug itself, and studies have to be carried out to ensure that the encapsulated drug is compatible with these other substances in a way that does not cause harm, whether direct or indirect.
Preformulation involves the characterization of a drug's physical, chemical, and mechanical properties in order to choose what other ingredients (excipients) should be used in the preparation. In dealing with protein pre-formulation, the important aspect is to understand the solution behavior of a given protein under a variety of stress conditions such as freeze/thaw, temperature, shear stress among others to identify mechanisms of degradation and therefore its mitigation.
Formulation studies then consider such factors as particle size, polymorphism, pH, and solubility, as all of these can influence bioavailability and hence the activity of a drug. The drug must be combined with inactive ingredients by a method that ensures that the quantity of drug present is consistent in each dosage unit e.g. each tablet. The dosage should have a uniform appearance, with an acceptable taste, tablet hardness, and capsule disintegration.
It is unlikely that formulation studies will be complete by the time clinical trials commence. This means that simple preparations are developed initially for use in phase I clinical trials. These typically consist of hand-filled capsules containing a small amount of the drug and a diluent. Proof of the long-term stability of these formulations is not required, as they will be used (tested) in a matter of days. Consideration has to be given to what is known as "drug loading" - the ratio of the active drug to the total contents of the dose. A low drug load may cause homogeneity problems. A high drug load may pose flow problems or require large capsules if the compound has a low bulk density.
By the time phase III clinical trials are reached, the formulation of the drug should have been developed to be close to the preparation that will ultimately be used in the market. A knowledge of stability is essential by this stage, and conditions must have been developed to ensure that the drug is stable in the preparation. If the drug proves unstable, it will invalidate the results from clinical trials since it would be impossible to know what the administered dose actually was. Stability studies are carried out to test whether temperature, humidity, oxidation, or photolysis (ultraviolet light or visible light) have any effect, and the preparation is analysed to see if any degradation products have been formed.
Container closure
Formulated drugs are stored in container closure systems for extended periods of time. These include blisters, bottles, vials, ampules, syringes, and cartridges. The containers can be made from a variety of materials including glass, plastic, and metal. The drug may be stored as a solid, liquid, or gas.
It is important to check whether there are any undesired interactions between the preparation and the container. For instance, if a plastic container is used, tests are carried out to see whether any of the ingredients become adsorbed onto the plastic, and whether any plasticizer, lubricants, pigments, or stabilizers leach out of the plastic into the preparation. Even the adhesives for the container label need to be tested, to ensure they do not leach through the plastic container into the preparation.
Formulation types
The drug form varies by the route of administration; for oral use, for example, it may be a capsule, tablet, or pill.
Enteral formulations
Oral drugs are normally taken as tablets or capsules.
The drug (active substance) itself needs to be soluble in aqueous solution at a controlled rate. Such factors as particle size and crystal form can significantly affect dissolution. Fast dissolution is not always ideal. For example, slow dissolution rates can prolong the duration of action or avoid initial high plasma levels. Treatment of active ingredient by special ways such as spherical crystallization can have some advantages for drug formulation.
Tablet
A tablet is usually a compressed preparation that contains:
5-10% of the drug (active substance);
80% of fillers, disintegrants, lubricants, glidants, and binders; and
10% of compounds which ensure easy disintegration, disaggregation, and dissolution of the tablet in the stomach or the intestine.
The dissolution time can be modified for a rapid effect or for sustained release.
Special coatings can make the tablet resistant to the stomach acids such that it only disintegrates in the duodenum, jejunum and colon as a result of enzyme action or alkaline pH.
Pills can be coated with sugar, varnish, or wax to disguise the taste. Pharmaceutical ingredients such as APIs can also be coated with a ResonantAcoustic mixer for controlled release and taste-masking.
Capsule
A capsule is a gelatinous envelope enclosing the active substance. Capsules can be designed to remain intact for some hours after ingestion in order to delay absorption. They may also contain a mixture of slow and fast release particles to produce rapid and sustained absorption in the same dose.
Sustained release
There are a number of methods by which tablets and capsules can be modified in order to allow for sustained release of the active compound as it moves through the digestive tract. One of the most common methods is to embed the active ingredient in an insoluble porous matrix, such that the dissolving drug must make its way out of the matrix before it can be absorbed. In other sustained release formulations the matrix swells to form a gel through which the drug exits.
Another method by which sustained release is achieved is through an osmotic controlled-release oral delivery system, where the active compound is encased in a water-permeable membrane with a laser drilled hole at one end. As water passes through the membrane the drug is pushed out through the hole and into the digestive tract where it can be absorbed.
Parenteral formulations
These are also called injectable formulations and are used with intravenous, subcutaneous, intramuscular, and intra-articular administration. The drug is stored in liquid or if unstable, lyophilized form.
Many parenteral formulations are unstable at higher temperatures and require storage at refrigerated or sometimes frozen conditions. The logistics process of delivering these drugs to the patient is called the cold chain. The cold chain can interfere with delivery of drugs, especially vaccines, to communities where electricity is unpredictable or nonexistent. NGOs like the Gates Foundation are actively working to find solutions. These may include lyophilized formulations which are easier to stabilize at room temperature.
Most protein formulations are parenteral due to the fragile nature of the molecule which would be destroyed by enteric administration. Proteins have tertiary and quaternary structures that can be degraded or cause aggregation at room temperature. This can impact the safety and efficacy of the medicine.
Liquid
Liquid drugs are stored in vials, IV bags, ampoules, cartridges, and prefilled syringes.
As with solid formulations, liquid formulations combine the drug product with a variety of compounds to ensure a stable active medication following storage. These include solubilizers, stabilizers, buffers, tonicity modifiers, bulking agents, viscosity enhancers/reducers, surfactants, chelating agents, and adjuvants.
If concentrated by evaporation, the drug may be diluted before administration. For IV administration, the drug may be transferred from a vial to an IV bag and mixed with other materials.
Lyophilized
Lyophilized drugs are stored in vials, cartridges, dual chamber syringes, and prefilled mixing systems.
Lyophilization, or freeze drying, is a process that removes water from a liquid drug creating a solid powder, or cake. The lyophilized product is stable for extended periods of time and could allow storage at higher temperatures. In protein formulations, stabilizers are added to replace the water and preserve the structure of the molecule.
Before administration, a lyophilized drug is reconstituted as a liquid before being administered. This is done by combining a liquid diluent with the freeze-dried powder, mixing, then injecting. Reconstitution usually requires a reconstitution and delivery system to ensure that the drug is correctly mixed and administered.
Topical formulations
Cutaneous
Options for topical formulation include:
Cream – Emulsion of oil and water in approximately equal proportions. Penetrates stratum corneum outer layers of skin well.
Ointment – Combines oil (80%) and water (20%). Effective barrier against moisture loss.
Gel – Liquefies upon contact with the skin.
Paste – Combines three agents – oil, water, and powder; an ointment in which a powder is suspended.
Powder – A finely subdivided solid substance.
See also
Pesticide formulation
Drug development
Drug delivery
Drug design
Drug discovery
Galenic formulation
References
External links
Comparison Table of Pharmaceutical Dosage Forms
FDA database for Inactive Ingredient Search for Approved Drug Products
Medicinal chemistry | Pharmaceutical formulation | [
"Chemistry",
"Biology"
] | 1,972 | [
"Biochemistry",
"nan",
"Medicinal chemistry"
] |
8,599,124 | https://en.wikipedia.org/wiki/Nonconforming%20use | Nonconforming use in urban planning the use of land that was authorised at the time the use was created but is no longer allowed due to changes made to the zoning restrictions after that time. Secondary suites are commonly permitted as a non-conforming use in the zoning district they are located in because the suite was developed prior to the zoning ordinance coming into effect.
Discontinuance
Intent
The landowner may explicitly intend to discontinue the use by agreeing to end the use.
Implied intent exists if the landowner fails to exercise the nonconforming use. If the landowner discontinues the nonconforming use for a specified period (commonly 21 years in many jurisdictions, but shorter in many others), then the nonconforming use will be terminated and the parcel will become subject to the zoning requirements of the area in which it is located. Such a discontinuance of the use implies the intent to abandon the use.
Partial destruction
In some jurisdictions, if the structure exercising the nonconforming use is destroyed beyond a certain percentage (usually 50%), it cannot be rebuilt or repaired. Instead, the parcel becomes subject to the applicable zoning regulations, thereby ending the nonconforming use's reprieve from the zoning regulations. In other jurisdictions, statutes usually allow rebuilding within a limited period, and rebuilding is usually permitted if the structure is a dwelling.
Replacement of non-conforming mobile homes
Local governments may, and often do, prohibit the replacement of older non-conforming mobile homes with newer non-conforming mobile homes, because the replacement tends to perpetuate and extend the time the non-conforming use continues. See, e.g., Lincoln County, North Carolina Code §10.2.3; Linn County, Iowa Code art. 3, sec. 1, para. 6; Davenport, Florida City Code §5.01.04.
Amortization
Some states allow amortization of the nonconforming use, whereby the nonconforming use's immediate value is amortized over the course of a set period; once the value of the nonconforming use reaches zero, the nonconforming use ends. Normally such an ordinance is upheld unless it is arbitrary, discriminatory, or unreasonable, and it is usually limited to certain uses; outside of those uses it may be an unreasonable exercise of police power. A reasonable amortization scheme allows for a complete return on the investment.
Many property-rights advocacy groups are critical of amortization, because it forces property owners to cease a previously lawful use without providing them with any compensation. In some states, amortization of a nonconforming use is unconstitutional.
See also
Grandfather clause
Zoning
Zoning in the United States (land use)
Variance (land use)
Spot zoning
Special use permit
References
External links
Definition from Answers.com
Zoning | Nonconforming use | [
"Engineering"
] | 579 | [
"Construction",
"Zoning"
] |
8,599,335 | https://en.wikipedia.org/wiki/Gravitational%20biology | Gravitational biology is the study of the effects gravity has on living organisms. Throughout the history of the Earth life has evolved to survive changing conditions, such as changes in the climate and habitat. However, one constant factor in evolution since life first began on Earth is the force of gravity. As a consequence, all biological processes are accustomed to the ever-present force of gravity and even small variations in this force can have significant impact on the health and function and the system of organisms.
Gravity and life on Earth
The force of gravity on the surface of the Earth, normally denoted g, has remained constant in both direction and magnitude since the formation of the planet. As a result, both plant and animal life have evolved to rely upon and cope with it in various ways. For example, humans employ internal models in motor planning that account for the effects of gravity on gross and fine motor skills.
Plant use of gravity
Plant tropisms are directional movements of a plant with respect to a directional stimulus. One such tropism is gravitropism, or the growth or movement of a plant with respect to gravity. Plant roots grow towards the pull of gravity and away from sunlight, and shoots and stems grow against the pull of gravity and towards sunlight.
Animal struggles with gravity
Gravity has had an effect on the development of animal life since the first single-celled organism.
The size of single biological cells is inversely proportional to the strength of the gravitational field exerted on the cell. That is, in stronger gravitational fields the size of cells decreases, and in weaker gravitational fields the size of cells increases. Gravity is thus a limiting factor in the growth of individual cells.
Cells which were naturally larger than the size that gravity alone would allow for had to develop means to protect against internal sedimentation. Several of these methods are based upon protoplasmic motion, thin and elongated shape of the cell body, increased cytoplasmic viscosity, and a reduced range of specific gravity of cell components relative to the ground-plasma.
The effects of gravity on multicellular organisms is considerably more drastic. During the period when animals first evolved to survive on land some method of directed locomotion and thus a form of inner skeleton or outer skeleton would have been required to cope with the increase in the apparent force of gravity due to the weakened upward force of buoyancy. Prior to this point, most lifeforms were small and had a worm- or jellyfish-like appearance, and without this evolutionary step would not have been able to maintain their form or move on land.
In larger terrestrial vertebrates gravitational forces influence musculoskeletal systems, fluid distribution, and hydrodynamics of the circulation.
See also
Astrobiology
Cell biology
Space exploration
References
Gravity
Astrobiology | Gravitational biology | [
"Astronomy",
"Biology"
] | 557 | [
"Origin of life",
"Speculative evolution",
"Astrobiology",
"Biological hypotheses",
"Astronomical sub-disciplines"
] |
8,599,392 | https://en.wikipedia.org/wiki/Gamebike | A gamebike is an interactive fitness bike that requires the user to exercise in order to play their video games. The user must pedal the bike in order for the character to accelerate and must turn the handlebars to steer. The game bike allows users to control the character in their game while getting exercise.
History
Game Bike is the name of an interactive fitness device first invented and patented by Edward H. (Ted) Parks, M.D. in 2000. Dr. Parks sold the rights to his patent to Cateye Co Ltd, a Japanese company with expertise in electronic bicycle accessories, such as bike lights and speedometers. Cateye's initial embodiment of Parks' design used a traditional bicycle attached to what they referred to as their GB100 system. The front tire was placed into a turn-style platform that was used to read direction. Sensors were placed on the rear wheel, which was mounted in a bicycle trainer to measure the speed. Cateye Co Ltd. first started production of the Game Bike in the GB100 form.
The project was then handled by a group in New Jersey that redesigned the product to be a single package incorporated into a stand-alone indoor exercise bike. The GB200 was born. An immediate need for a commercial version was soon covered by the introduction of the popular GB-300 Game Bike.
Late in 2008, Cateye Co Ltd. stopped all international distribution of the fitness line. Game Bike production was then taken over by Source Distributors Inc. of Dallas, Texas. Source Distributors produced the bike and made modifications to the unit to improve the controller serviceability. The Game Bike is a popular product within the school and YMCA markets, and thousands of bikes have been sold since the start of 2003.
Game Bike is now owned by Hudson Fitness LLC. Game Bike is currently available and Hudson Fitness LLC continues supporting the Game Bike service.
Compatible titles
Compatible titles include:
Any speed-based video game for the PS2, PS3, GameCube, or Xbox
References
External links
GameBike Proves There is More to Healthy Gaming than Just DDR at GamePolitics.com
Exercise equipment
Fitness games
Video game accessories
PlayStation (console) accessories
PlayStation 2 accessories
GameCube accessories
Xbox (console) accessories | Gamebike | [
"Technology"
] | 445 | [
"Video game accessories",
"Components"
] |
8,599,665 | https://en.wikipedia.org/wiki/List%20of%20therapeutic%20monoclonal%20antibodies | Therapeutic, diagnostic and preventive monoclonal antibodies are clones of a single parent cell. When used as drugs, the International Nonproprietary Names (INNs) end in -mab. The remaining syllables of the INNs, as well as the column Source, are explained in Nomenclature of monoclonal antibodies.
The abbreviations in the column Type are as follows:
mab: whole monoclonal antibody
Fab: fragment, antigen-binding (one arm)
F(ab')2: fragment, antigen-binding, including hinge region (both arms)
Fab': fragment, antigen-binding, including hinge region (one arm)
Variable fragments:
scFv: single-chain variable fragment
di-scFv: dimeric single-chain variable fragment
sdAb: single-domain antibody
BsAb: bispecific monoclonal antibody:
3funct: trifunctional antibody
BiTE: bi-specific T-cell engager
This list of over 500 monoclonal antibodies includes approved and investigational drugs as well as drugs that have been withdrawn from market; consequently, the column Use does not necessarily indicate clinical usage. See the list of FDA-approved therapeutic monoclonal antibodies in the monoclonal antibody therapy page.
References
Monoclonal Antibodies
"Chemistry",
"Biology"
] | 270 | [
"Immunology",
"Drug-related lists"
] |
8,599,683 | https://en.wikipedia.org/wiki/Procyanidin | Procyanidins are members of the proanthocyanidin (or condensed tannins) class of flavonoids. They are oligomeric compounds, formed from catechin and epicatechin molecules. They yield cyanidin when depolymerized under oxidative conditions.
See the box below entitled "Types of procyanidins" for links to articles on the various types.
Distribution in plants
Procyanidins, including the less bioactive and less bioavailable polymers (four or more catechin units), represent a group of condensed flavan-3-ols that can be found in many plants, most notably apples, maritime pine bark, cinnamon, aronia fruit, cocoa beans, grape seed, grape skin, and red wines of Vitis vinifera (the common grape). However, bilberry, cranberry, black currant, green tea, black tea, and other plants also contain these flavonoids. Procyanidins can also be isolated from Quercus petraea and Q. robur heartwood (wine barrel oaks). Açaí oil, obtained from the fruit of the açaí palm (Euterpe oleracea), is rich in numerous procyanidin oligomers.
Apples contain on average per serving about eight times the amount of procyanidin found in wine, with some of the highest amounts found in the Red Delicious and Granny Smith varieties.
The seed testas of field beans (Vicia faba) contain procyanidins that affect the digestibility in piglets and could have an inhibitory activity on enzymes. Cistus salviifolius also contains oligomeric procyanidins.
Analysis
Condensed tannins can be characterised by a number of techniques including depolymerisation, asymmetric flow field flow fractionation or small-angle X-ray scattering. DMACA is a dye used for localization of procyanidin compounds in plant histology. The use of the reagent results in blue staining. It can also be used to titrate procyanidins. Total phenols (or antioxidant effect) can be measured using the Folin-Ciocalteu reaction. Results are typically expressed as gallic acid equivalents (GAE).
Procyanidins from field beans (Vicia faba) or barley have been estimated using the vanillin-HCl method, which produces a red color in the presence of catechins or proanthocyanidins.
Procyanidins can be titrated using the Procyanidolic Index (also called the Bates-Smith Assay), a testing method that measures the change in color when the product is mixed with certain chemicals; the greater the color change, the higher the procyanidolic oligomer (PCO) content. However, the Procyanidolic Index is a relative value that can measure well over 100. Unfortunately, a Procyanidolic Index of 95 was erroneously taken by some to mean 95% PCO and began appearing on the labels of finished products. All current methods of analysis suggest that the actual PCO content of these products is much lower than 95%.
An improved colorimetric test, called the Porter Assay or butanol-HCl-iron method, is the most common PCO assay currently in use. The unit of measurement of the Porter Assay is the PVU (Porter Value Unit). The Porter Assay is a chemical test that helps determine the potency of procyanidin-containing compounds, such as grape seed extract. It is an acid hydrolysis, which splits larger chain units (dimers and trimers) into single-unit monomers and oxidizes them. This leads to a colour change, which can be measured using a spectrophotometer: the greater the absorbance at a certain wavelength of light, the greater the potency. Ranges for grape seed extract run from 25 PVU for low-grade material to over 300 for premium grape seed extracts.
Gel permeation chromatography (GPC) analysis allows monomers to be separated from larger PCO molecules.
Monomers of procyanidins can be characterized by HPLC analysis. Condensed tannins can undergo acid-catalyzed cleavage in the presence of a nucleophile like phloroglucinol (reaction called phloroglucinolysis), thioglycolic acid (thioglycolysis), benzyl mercaptan or cysteamine (processes called thiolysis) leading to the formation of oligomers that can be further analyzed.
Phloroglucinolysis can be used for instance for procyanidins characterisation in wine or in the grape seed and skin tissues.
Thioglycolysis can be used to study procyanidins or the oxidation of condensed tannins. It is also used for lignin quantitation. Reaction on condensed tannins from Douglas fir bark produces epicatechin and catechin thioglycolates.
Condensed tannins from Lithocarpus glaber leaves have been analysed through acid-catalyzed degradation in the presence of cysteamine.
Research
Procyanidin content in dietary supplements has not been well documented. Pycnogenol is a dietary supplement derived from extracts from maritime pine bark that contains 70% procyanidins, and is marketed with claims it can treat many conditions. The medical evidence is insufficient to support its use for the treatment of seven different chronic disorders.
See also
A type proanthocyanidin
B type proanthocyanidin
Procyanidin C1
Procyanidin C2
Tannin
Polyphenol
Phenolic compounds in wine
References
External links
"Pycnogenol: MedlinePlus Supplements"
Food chemistry
Flavonoid antioxidants | Procyanidin | [
"Chemistry",
"Biology"
] | 1,259 | [
"Biochemistry",
"Food chemistry",
"nan"
] |
8,599,781 | https://en.wikipedia.org/wiki/ReLINE%20Software | ReLINE Software was a German game development company founded by Uwe Grabosch and Holger Gehrmann in Hannover in 1987.
The company first acted as a developer for Softgold, Rainbow Arts, Golden Games, Magic Bytes, micro-partner, and Robtek; the company later became significantly more independent and co-published games with Magic Bytes.
Titles from the eighties include: Operation Hongkong, Drum Studio, Extensor, Hollywood Poker, Space Port, Amegas, Crystal Hammer, Hollywood Poker Pro, Black Gold (also known as Oil Imperium), Dyter-07, and Window Wizard (also known as Window Willy).
Games of the early nineties include Legend of Faerghail, Fate: Gates of Dawn and Centerbase.
Holger Gehrmann and Olaf Patzenhauer continued reLINE Software as a software label in 1993. The games Biing! and Biing! 2 originated from this time. However, the planned development of Oil Imperium 2 was cancelled.
ReLINE Software closed in 2004. In February 2008, Holger Gehrmann fell to his death from a seven-story office building months before his 40th birthday. Olaf Patzenhauer died some time between late 2011 and mid-2012.
References
reLINE at The Hall of Light (HOL)
Defunct video game companies of Germany
Video game companies established in 1987
1987 establishments in West Germany
German companies established in 1987 | ReLINE Software | [
"Technology"
] | 295 | [
"Computing stubs"
] |
8,599,808 | https://en.wikipedia.org/wiki/Surface%20brightness%20fluctuation | Surface brightness fluctuation (SBF) is a secondary distance indicator used to estimate distances to galaxies. It is useful out to about 100 Mpc (megaparsecs). The method measures the variance in a galaxy's light distribution arising from fluctuations in the number and luminosity of individual stars per resolution element.
The SBF technique uses the fact that galaxies are made up of a finite number of stars. The number of stars in any small patch of a galaxy will vary from point to point, creating a noise-like fluctuation in its surface brightness. While the stars present in a galaxy cover an enormous range of luminosity, the SBF can be characterized by an average brightness. As a result of the averaging, a galaxy twice as far away appears twice as smooth. Older elliptical galaxies have fairly consistent stellar populations, so the fluctuation signal closely approximates a standard candle. In practice, corrections are required to account for variations in age or metallicity from galaxy to galaxy. Calibration of the method is made empirically from Cepheids or theoretically from stellar population models.
The SBF pattern is measured from the power spectrum of the residuals left behind after a smooth model of the galaxy has been subtracted from a deep galaxy image. In the Fourier domain, the SBF signal follows the power spectrum of the point spread function, and the amplitude of the spectrum gives the luminosity of the fluctuation star. Because the technique depends on a precise understanding of the image structure of the galaxy, extraneous sources such as globular clusters and background galaxies must be excluded. Corrections for interstellar dust absorption must also be applied. In practice this means that SBF works best for elliptical galaxies or the bulges of S0 galaxies, and less well for spiral galaxies, as they generally have complex morphologies and extensive dust features.
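In outline, the fit just described models the residual power spectrum as a scaled PSF power spectrum plus a white-noise floor. The sketch below is a minimal, synthetic illustration of that step, assuming the galaxy model has already been subtracted and contaminating sources masked; the array size, PSF width, and noise level are arbitrary placeholders (real analyses also use azimuthally averaged spectra).

```python
import numpy as np

# Minimal, synthetic illustration of the SBF power-spectrum fit:
#   P(k) = P0 * E(k) + P1
# where E(k) is the expectation power spectrum (here simply the PSF power
# spectrum), P0 carries the stellar fluctuation amplitude, and P1 is the
# white-noise floor.

rng = np.random.default_rng(0)
n = 128
r = np.hypot(*(np.indices((n, n)) - n // 2))
psf = np.exp(-0.5 * (r / 2.0) ** 2)        # Gaussian stand-in for the PSF
psf /= psf.sum()
psf_ft = np.fft.fft2(np.fft.ifftshift(psf))

# Fake "residual" image: PSF-convolved stellar fluctuations plus white noise.
residual = np.fft.ifft2(np.fft.fft2(rng.normal(size=(n, n))) * psf_ft).real
residual += 0.05 * rng.normal(size=(n, n))

p_res = np.abs(np.fft.fft2(residual)) ** 2   # residual power spectrum
e_k = np.abs(psf_ft) ** 2                    # PSF power spectrum

# Linear least squares for (P0, P1) in P(k) = P0 * E(k) + P1.
design = np.column_stack([e_k.ravel(), np.ones(e_k.size)])
(p0, p1), *_ = np.linalg.lstsq(design, p_res.ravel(), rcond=None)
print(f"fluctuation amplitude P0 ~ {p0:.1f}, noise floor P1 ~ {p1:.2f}")
```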
SBF is calibrated using the Cepheid period-luminosity (P-L) relation, based on measurements of SBF magnitudes in the bulges of nearby spiral galaxies whose distances have been measured from Cepheid variables.
SBF is an indicator that uses stars in the old stellar populations (Population II).
References
Standard candles | Surface brightness fluctuation | [
"Physics"
] | 448 | [
"Standard candles",
"Astrophysics"
] |
8,599,932 | https://en.wikipedia.org/wiki/Video%20wall | A video wall is a special multi-monitor setup that consists of multiple computer monitors, video projectors, or television sets tiled together contiguously or overlapped in order to form one large screen. Typical display technologies include LCD panels, Direct View LED arrays, blended projection screens, Laser Phosphor Displays, and rear projection cubes. Jumbotron technology was also previously used. Diamond Vision was historically similar to Jumbotron in that they both used cathode-ray tube (CRT) technology, but with slight differences between the two. Early Diamond Vision displays used separate flood gun CRTs, one per subpixel. Later Diamond Vision displays and all Jumbotrons used field-replaceable modules containing several flood gun CRTs each, one per subpixel, that had common connections shared across all CRTs in a module; the module was connected through a single weather-sealed connector.
Screens specifically designed for use in video walls usually have narrow bezels in order to minimize the gap between active display areas, and are built with long-term serviceability in mind. Such screens often contain the hardware necessary to stack similar screens together, along with connections to daisy chain power, video, and command signals between screens. A command signal may, for example, power all screens in the video wall on or off, or calibrate the brightness of a single screen after bulb replacement (in Projection-based screens).
Reasons for using a video wall instead of a single large screen can include the ability to customize tile layouts, greater screen area per unit cost, and greater pixel density per unit cost, due to the economics of manufacturing single screens which are unusual in shape, size, or resolution.
Video walls are sometimes found in control rooms, stadiums, and other large public venues. Examples include the video wall in Oakland International Airport's baggage claim, where patrons are expected to observe the display at long distances, and the 100-screen video wall at McCarran International Airport, which serves as an advertising platform for the 40 million passengers passing through the airport annually. Video walls can also benefit smaller venues when patrons may view the screens both up close and at a distance, respectively necessitating both high pixel density and large size. For example, the 100-inch video wall located in the main lobby of the Lafayette Library and Learning Center has enough size for the distant passerby to view photos while also providing the nearby observer enough resolution to read about upcoming events.
Simple video walls can be driven from multi-monitor video cards; however, more complex arrangements may require specialized video processors, specifically designed to manage and drive large video walls. Software-based video wall technology that uses ordinary PCs, displays, and networking equipment can also be used for video wall deployments.
The largest video wall as of 2013 was located at the backstretch of the Charlotte Motor Speedway motorsport track. Developed by Panasonic, it measures 200 by 80 feet (61 by 24 m) and uses LED technology. The Texas Motor Speedway installed an even larger screen in 2014, measuring 218 by 125 feet (66 by 38 m).
Video walls are not limited to a single purpose but are now being used in dozens of different applications.
Controllers
A video wall controller (sometimes called a "processor") is a device that splits a single image into parts to be displayed on individual screens. Video wall controllers can be divided into two groups:
Hardware-based controllers.
Software-based PC & video-card controllers.
Hardware-based controllers are electronic devices built for a specific purpose. They are usually built on an array of video-processing chipsets and do not have an operating system. The advantages of using a hardware video wall controller are high performance and reliability; disadvantages include high cost and a lack of flexibility.
The simplest example of a video wall controller is a single-input, multiple-output scaler. It accepts one video input and splits the image into parts corresponding to the displays in the video wall.
Most professional video wall displays also have a built-in controller (sometimes called an integrated video matrix processor or splitter). This matrix splitter allows the image from a single video input to be "stretched" across all the displays in the video wall (typically arranged in a linear matrix, e.g., 2x2, 4x4, etc.). These types of displays typically have a loop-through output (usually DVI) that allows installers to daisy-chain all displays and feed them with the same input. Setup is typically done via the remote control and the on-screen display. It is a fairly simple method of building a video wall, but it has some disadvantages. First of all, it is impossible to use the full pixel resolution of the video wall, because the resolution cannot exceed that of the input signal. It is also not possible to display multiple inputs at the same time.
A software-based controller is a computer running an operating system (e.g., Windows, Linux, macOS) on a PC or server equipped with special multiple-output graphics cards and, optionally, video capture input cards. These video wall controllers are often built on industrial-grade chassis due to the reliability requirements of control rooms and situational centers. Though this approach is typically more expensive, the advantage of a software-based video wall controller over the hardware splitter is that it can launch applications such as maps, VoIP clients (to display IP cameras), SCADA clients, and digital signage software that can directly utilize the full resolution of the video wall. That is why software-based controllers are widely used in control rooms and high-end digital signage. The performance of the software controller depends on both the quality of the graphics cards and the management software. A number of multi-head (multiple-output) graphics cards are commercially available. Most general-purpose multi-output cards manufactured by AMD (Eyefinity technology) and Nvidia (Mosaic technology) support up to 6-12 genlocked outputs. General-purpose cards also lack optimizations for displaying multiple video streams from capture cards. To achieve a larger number of displays or high video-input performance, specialized graphics cards are needed (e.g., from Datapath Limited, Matrox Graphics, or Jupiter Systems).
Video wall controllers typically support bezel correction (outside frame of monitor) to correct for any bezel with LED displays or overlap the images to blend edges with projectors.
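As a concrete picture of what such a controller does, the sketch below splits one source frame across a 2x2 wall with bezel compensation, discarding the pixels that would fall "behind" the bezels so content lines up across panels. It uses the Pillow imaging library; all panel and bezel dimensions are illustrative assumptions, and the synthetic frame stands in for a real source image.

```python
from PIL import Image

# Minimal sketch: split one source frame across a ROWS x COLS video wall,
# with bezel correction. Bezel correction treats the physical gap between
# panels as if it were covered by (discarded) image pixels, so straight
# lines stay straight across the wall. All dimensions are illustrative.

ROWS, COLS = 2, 2
PANEL_W, PANEL_H = 1920, 1080   # visible pixels per panel
BEZEL_X, BEZEL_Y = 32, 18       # bezel gap expressed in source pixels

def split_for_wall(frame: Image.Image) -> list[list[Image.Image]]:
    # The virtual canvas includes the pixels "lost" to the bezels.
    canvas_w = COLS * PANEL_W + (COLS - 1) * BEZEL_X
    canvas_h = ROWS * PANEL_H + (ROWS - 1) * BEZEL_Y
    frame = frame.resize((canvas_w, canvas_h))
    tiles = []
    for r in range(ROWS):
        row = []
        for c in range(COLS):
            left = c * (PANEL_W + BEZEL_X)
            top = r * (PANEL_H + BEZEL_Y)
            row.append(frame.crop((left, top, left + PANEL_W, top + PANEL_H)))
        tiles.append(row)
    return tiles

frame = Image.new("RGB", (3840, 2160), "steelblue")  # stand-in source frame
tiles = split_for_wall(frame)
tiles[0][0].save("tile_r0_c0.png")  # one output per physical display
```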
Matrix, grid and artistic layouts
The integrated video wall scalers are often limited to matrix grid layouts (e.g., 2x2, 3x3, 4x4, etc.) of identical displays. Here the aspect ratio remains the same, but the source image is scaled across the number of displays in the matrix. More advanced controllers enable grid layouts of any configuration (e.g., 1x5, 2x8, etc.) where the aspect ratio of the video wall can be very different from that of the individual displays. Others enable displays to be placed anywhere within the canvas, but are limited to portrait or landscape orientation. The most advanced video wall controllers enable full artistic control of the displays, allowing a heterogeneous mix of displays as well as 360° rotation of any individual display within the video wall canvas.
Multiple simultaneous sources
Advanced video wall controllers can output multiple sources to groups of displays within the video wall and change these zones at will, even during live playback.
The more basic scalers can only output a single source to the entire video wall.
Network video wall
Some video wall controllers can reside in the server room and communicate with their "graphics cards" over the network. This configuration offers advantages in terms of flexibility. Often this is achieved via a traditional video wall controller (with multiple graphics cards) in the server room with a "sender" device attached to each graphics output and a "receiver" attached to each display. These sender/receiver devices are either via Cat5e/Cat6 cable extension or via a more flexible and powerful "video over IP" that can be routed through traditional network switches. Even more advanced is a pure network video wall where the server does not require any video cards and communicates directly over the network with the receiver devices.
Windows-based network video walls are the most common on the market and generally allow much broader functionality.
A network configuration allows video walls to be synchronized with individual digital signs. This means that video walls of different sizes and configurations, as well as individual digital displays, can all show the same content at the same time, referred to as 'mirroring'.
Transparent video walls
Transparent video walls combine transparent LCD screens with a video wall controller to display video and still images on a large transparent surface. Transparent displays are available from a variety of companies and are common in retail and other environments that want to add digital signage to their window displays or in store promotions. Bezel-less transparent displays can be combined using certain video wall controllers to turn the individual displays into a video wall to cover a significantly larger surface.
Rendering clusters
Jason Leigh and others at the Electronic Visualization Laboratory, University of Illinois, Chicago, developed SAGE, the Scalable Adaptive Graphics Environment, allowing the seamless display of various networked applications over a large display wall (LDW) system. Different visualization applications such as 3D rendering, remote desktop, video streams, and 2D maps, stream their rendered pixels to a virtual high-resolution frame buffer on the LDW. Using a high-bandwidth network, remote visualization applications can feed the streams of the data into SAGE. The user interface of SAGE, which works as a separate display node, allows users to relocate and resize the visualization stream in a form of window, which can be found in a conventional graphical user interface. Depending on the location and size of the visualization stream window on the LDW, SAGE reroutes the stream to respective display nodes.
Chromium is an OpenGL system for interactive rendering on graphics clusters. By providing a modified OpenGL library, Chromium can run OpenGL-based applications on an LDW with minimal or no changes. One clear advantage of Chromium is that it utilizes each node of the rendering cluster to achieve high-resolution visualization over an LDW. Chromium streams OpenGL commands from the 'app' node to the other display nodes of an LDW. The modified OpenGL library handles transferring OpenGL commands to the necessary nodes based on their viewport and tile coordinates.
David Hughes and others from SGI developed Media Fusion, an architecture designed to exploit the potential of scalable shared memory and manage multiple visual streams of pixel data into 3D environments. It provides a data management solution and interaction in immersive visualization environments. Its focus is streaming pixels across a heterogeneous network, the Visual Area Network (VAN), similar to SAGE. However, it is designed for a small number of large displays. Since it relies on a relatively small resolution for the display, pixel data can be streamed under the fundamental limit of the network bandwidth. The system displays high-resolution still images, HD videos, live HD video streams, and PC applications. Multiple feeds can be displayed on the wall simultaneously, and users can reposition and resize each feed in much the same way they move and resize windows on a PC desktop. Each feed can be scaled up instantly for viewing on several monitors or the entire wall, at the user's discretion.
See also
Multi-image
Multi-monitor
Video sculpture
VESA
References
Multi-monitor
Electronic display devices
Video hardware
User interfaces | Video wall | [
"Technology",
"Engineering"
] | 2,333 | [
"User interfaces",
"Electronic engineering",
"Interfaces",
"Video hardware"
] |
8,602,424 | https://en.wikipedia.org/wiki/Bruce%20Tate | Bruce A. Tate is an American author on the topic of the Java, Ruby, and Elixir programming languages and other computer software. He is also the CTO of icanmakeitbetter.com and the editor of Elixir books for the Pragmatic Bookshelf.
Works
Designing Elixir Systems with OTP
Adopting Elixir
Better, Faster, Lighter Java
Beyond Java: A Glimpse at the Future of Programming Languages
Bitter EJB, a co-authored critical analysis of Java EJBs
Bitter Java, a critical analysis of Java
Deploying Rails Applications
From Java to Ruby: Things Every Manager Should Know
Programming Phoenix
Rails: Up and Running
Seven Languages in Seven Weeks
Seven More Languages in Seven Weeks
References
External links
Review of Tate's book Beyond Java
Weblog
Amazon book review and listings
Tursiops electronicus: Stimulated tutoring of a language-trained dolphin, an interview in the free Prolog chapter of Seven Languages in Seven Weeks.
American non-fiction writers
Living people
Year of birth missing (living people) | Bruce Tate | [
"Technology"
] | 211 | [
"Computing stubs",
"Computer specialist stubs"
] |
8,603,211 | https://en.wikipedia.org/wiki/Plasma-enhanced%20chemical%20vapor%20deposition | Plasma-enhanced chemical vapor deposition (PECVD) is a chemical vapor deposition process used to deposit thin films from a gas state (vapor) to a solid state on a substrate. Chemical reactions are involved in the process, which occur after creation of a plasma of the reacting gases. The plasma is generally created by a radio-frequency (RF) alternating current (AC) or direct current (DC) discharge between two electrodes, the space between which is filled with the reacting gases.
Discharges for processes
A plasma is any gas in which a significant percentage of the atoms or molecules are ionized. Fractional ionization in plasmas used for deposition and related materials processing varies from about 10⁻⁴ in typical capacitive discharges to as high as 5–10% in high-density inductive plasmas. Processing plasmas are typically operated at pressures of a few millitorr to a few torr, although arc discharges and inductive plasmas can be ignited at atmospheric pressure. Plasmas with low fractional ionization are of great interest for materials processing because electrons are so light, compared to atoms and molecules, that energy exchange between the electrons and neutral gas is very inefficient. Therefore, the electrons can be maintained at very high equivalent temperatures – tens of thousands of kelvins, equivalent to several electronvolts of average energy – while the neutral atoms remain at the ambient temperature. These energetic electrons can induce many processes that would otherwise be very improbable at low temperatures, such as dissociation of precursor molecules and the creation of large quantities of free radicals.
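The equivalence quoted above (several electronvolts corresponding to tens of thousands of kelvins) follows directly from T = E/k_B; a quick numerical check:

```python
# Quick check of the electron-temperature claim: converting a mean energy in
# electronvolts to an equivalent temperature via T = E / k_B.
E_CHARGE = 1.602176634e-19   # J per eV (exact, 2019 SI)
K_BOLTZMANN = 1.380649e-23   # J/K (exact, 2019 SI)

for energy_ev in (1.0, 2.0, 5.0):
    temp_k = energy_ev * E_CHARGE / K_BOLTZMANN
    print(f"{energy_ev:.0f} eV  ->  {temp_k:,.0f} K")
# 1 eV is ~11,600 K, so "several eV" is indeed tens of thousands of kelvins.
```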
The second benefit of deposition within a discharge arises from the fact that electrons are more mobile than ions. As a consequence, the plasma is normally more positive than any object it is in contact with, as otherwise, a large flux of electrons would flow from the plasma to the object. The difference in voltage between the plasma and the objects in its contacts normally occurs across a thin sheath region. Ionized atoms or molecules that diffuse to the edge of the sheath region feel an electrostatic force and are accelerated towards the neighboring surface. Thus, all surfaces exposed to the plasma receive energetic ion bombardment. The potential across the sheath surrounding an electrically isolated object (the floating potential) is typically only 10–20 V, but much higher sheath potentials are achievable by adjustments in reactor geometry and configuration. Thus, films can be exposed to energetic ion bombardment during deposition. This bombardment can lead to increases in the density of the film, and help remove contaminants, improving the film's electrical and mechanical properties. When a high-density plasma is used, the ion density can be high enough that significant sputtering of the deposited film occurs; this sputtering can be employed to help planarize the film and fill trenches or holes.
Reactor types
A simple DC discharge can be readily created at a few torr between two conductive electrodes, and may be suitable for deposition of conductive materials. However, insulating films will quickly extinguish this discharge as they are deposited. It is more common to excite a capacitive discharge by applying an AC or RF signal between an electrode and the conductive walls of a reactor chamber, or between two cylindrical conductive electrodes facing one another. The latter configuration is known as a parallel plate reactor. Frequencies of a few tens of Hz to a few thousand Hz will produce time-varying plasmas that are repeatedly initiated and extinguished; frequencies of tens of kilohertz to tens of megahertz result in reasonably time-independent discharges.
Excitation frequencies in the low-frequency (LF) range, usually around 100 kHz, require several hundred volts to sustain the discharge. These large voltages lead to high-energy ion bombardment of surfaces. High-frequency plasmas are often excited at the standard 13.56 MHz frequency widely available for industrial use; at high frequencies, the displacement current from sheath movement and scattering from the sheath assist in ionization, and thus lower voltages are sufficient to achieve higher plasma densities. Thus one can adjust the chemistry and ion bombardment in the deposition by changing the frequency of excitation, or by using a mixture of low- and high-frequency signals in a dual-frequency reactor. Excitation power of tens to hundreds of watts is typical for an electrode with a diameter of 200 to 300 mm.
Capacitive plasmas are usually very lightly ionized, resulting in limited dissociation of precursors and low deposition rates. Much denser plasmas can be created using inductive discharges, in which an inductive coil excited with a high-frequency signal induces an electric field within the discharge, accelerating electrons in the plasma itself rather than just at the sheath edge. Electron cyclotron resonance reactors and helicon wave antennas have also been used to create high-density discharges. Excitation powers of 10 kW or more are often used in modern reactors.
High-density plasmas can also be generated by a DC discharge in an electron-rich environment, obtained by thermionic emission from heated filaments. The voltages required by the arc discharge are of the order of a few tens of volts, resulting in low-energy ions. The high-density, low-energy plasma is exploited for epitaxial deposition at high rates in low-energy plasma-enhanced chemical vapor deposition reactors.
Origins
Working at Standard Telecommunication Laboratories (STL), Harlow, Essex, R. C. G. Swann discovered that an RF discharge promoted the deposition of silicon compounds onto the quartz glass vessel wall. Several internal STL publications were followed in 1964 by French, British and US patent applications. An article was published in the August 1965 volume of Solid State Electronics.
Swann attended to his original prototype glow discharge equipment in the laboratory at STL, Harlow, Essex, in the 1960s. The equipment represented a breakthrough in the deposition of thin films of amorphous silicon, silicon nitride, and silicon dioxide at temperatures significantly lower than those required by pyrolytic chemistry.
Film examples and applications
Plasma deposition is often used in semiconductor manufacturing to deposit films conformally (covering sidewalls) and onto wafers containing metal layers or other temperature-sensitive structures. PECVD also yields some of the fastest deposition rates while maintaining film quality (such as roughness, defects/voids), as compared with sputter deposition and thermal/electron-beam evaporation, often at the expense of uniformity.
Silicon dioxide deposition
Silicon dioxide can be deposited using a combination of silicon precursor gases like dichlorosilane or silane and oxygen precursors, such as oxygen and nitrous oxide, typically at pressures from a few millitorr to a few torr. Plasma-deposited silicon nitride, formed from silane and ammonia or nitrogen, is also widely used, although a pure nitride cannot be deposited in this fashion. Plasma nitrides always contain a large amount of hydrogen, which can be bonded to silicon (Si-H) or nitrogen (Si-NH); this hydrogen has an important influence on IR and UV absorption, stability, mechanical stress, and electrical conductivity. Silicon nitride is often used as a surface and bulk passivating layer for commercial multicrystalline silicon photovoltaic cells.
Silicon dioxide can also be deposited from a tetraethylorthosilicate (TEOS) silicon precursor in an oxygen or oxygen-argon plasma. These films can be contaminated with significant carbon and hydrogen as silanol, and can be unstable in air. Pressures of a few torr and small electrode spacings, and/or dual frequency deposition, are helpful to achieve high deposition rates with good film stability.
High-density plasma deposition of silicon dioxide from silane and oxygen/argon has been widely used to create a nearly hydrogen-free film with good conformality over complex surfaces, the latter resulting from intense ion bombardment and consequent sputtering of the deposited molecules from vertical onto horizontal surfaces.
See also
Low-energy plasma-enhanced chemical vapor deposition
References
Chemical vapor deposition
Plasma processing
Semiconductor device fabrication
Thin film deposition | Plasma-enhanced chemical vapor deposition | [
"Chemistry",
"Materials_science",
"Mathematics"
] | 1,665 | [
"Microtechnology",
"Thin film deposition",
"Coatings",
"Thin films",
"Semiconductor device fabrication",
"Chemical vapor deposition",
"Planes (geometry)",
"Solid state engineering"
] |
8,603,602 | https://en.wikipedia.org/wiki/Kabura-ya | is a type of Japanese arrow used by the samurai class of feudal Japan. Kabura-ya were arrows which whistled when shot and were used in ritual archery exchanges before formal medieval battles.
Like a wind instrument, the sound was created by a specially carved or perforated bulb of deer horn or wood attached to the tip. In English, these are often called "whistling-bulb arrows", "messenger arrows", or "signal arrows". Kabura literally translates to "turnip", and thus the Japanese term technically means 'turnip[-shaped] arrows'. The Chinese xiangjian (sometimes pronounced and written mingdi) was quite similar and, until the end of the Warlord Era, was commonly used by bandits to announce a gang's approach.
In Shinto, the sound made by the kabura-ya arrow in mid-flight is thought to ward off evil influences. Hence, it is used in Shinto rites to purify locations such as shrine grounds and parks. Other sacred bows similarly used in Shinto rituals are the hama yumi and the azusa yumi.
Use
In battle, particularly around the time of the Heian period, kabura-ya would be shot before the fighting began, to alert the enemy. The whistling sound was also believed to chase away evil spirits and to alert friendly kami to lend their support. It was not uncommon for archery exchanges to be performed for quite some time; in the 1183 battle of Kurikara, for example, fifteen arrows were shot by each side, then thirty, then fifty, then one hundred, before the hundred samurai on each side actually engaged one another in battle. It was also not uncommon for messages to be tied to these arrows, which could be shot into fortresses, battle camps, or the like. This practice of the formal archery exchange likely died out gradually following the end of the Heian period, as war became less and less ritualized.
The arrows would also be sold at Shintō shrines as good luck charms, particularly around New Year's Day; simply carrying a kabura-ya, like a Hama Ya, is meant to serve as a ward against evil spirits.
See also
Kasagake
Carnyx
Dacian draco
References
Sources
Frederic, Louis (2002). "Japan Encyclopedia." Cambridge, Massachusetts: Harvard University Press.
Billingsley, Phil (1988). "Bandits in Republican China" Stanford University Press
External links
Archery Gallery
Samurai weapons and equipment
Archery equipment of Japan
Arrow types
Amulets
Talismans
Shinto in Japan
Exorcism in Shinto
Shinto religious objects
Magic items
Religious objects | Kabura-ya | [
"Physics"
] | 539 | [
"Magic items",
"Religious objects",
"Physical objects",
"Matter"
] |
8,603,644 | https://en.wikipedia.org/wiki/Tosher | A tosher is someone who scavenges in the sewers, a sewer-hunter, especially in London during the Victorian era. The word tosher was also used to describe the thieves who stripped valuable copper from the hulls of ships moored along the Thames. The related slang term "tosh" referred to valuables thus collected. Both "tosher" and "tosh" are of unknown origin.
In fiction
In the 1960 film The Day They Robbed the Bank of England, which is set in 1901, a tosher is recruited to help break into the bank through the old sewer system.
A tosher in Victorian London is the profession of the title character in Dodger, a 2012 novel by Terry Pratchett.
The protagonist of Nick Harkaway’s 2012 novel Angelmaker describes the London sewers and backstreets as “Tosher’s Beat”.
The character Murky John is a tosher in Year of the Rabbit, series 1, episode 2.
See also
Mudlark, someone who scavenges in river mud.
List of obsolete occupations
References
External links
Toshers in fiction : "Joe Rat", a novel by Mark Barratt
Toshers and mudlarks in "The Horrid Jobs Quiz"
"Dodger" a novel by Terry Pratchett
Obsolete occupations
Waste collection
Sewerage | Tosher | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 273 | [
"Sewerage",
"Environmental engineering",
"Water pollution"
] |
8,603,776 | https://en.wikipedia.org/wiki/Digital%20Classicist | The Digital Classicist is a community of those interested in the application of digital humanities to the field of classics and to ancient world studies more generally. The project claims the twin aims of bringing together scholars and students with an interest in computing and the ancient world, and disseminating advice and experience to the classics discipline at large. The Digital Classicist was founded in 2005 as a collaborative project based at King's College London and the University of Kentucky, with editors and advisors from the classics discipline at large.
Activities of The Digital Classicist
Membership
Many notable Classicists and Digital Humanists are on the advisory board of the Digital Classicist, including Richard Beacham (of the King's Visualisation Lab), Alan Bowman (professor of Ancient History at University of Oxford), Gregory Crane (of the Perseus Project), Bernard Frischer (of the Virtual World Heritage Laboratory), Michael Fulford (professor of Archaeology and pro-vice-chancellor at University of Reading), Willard McCarty (winner of the Lyman Award and professor of Humanities Computing at Department of Digital Humanities), James O'Donnell (provost of Georgetown University), Silvio Panciera (of University of Rome La Sapienza), and Boris Rankov (professor of Ancient History at Royal Holloway, University of London). A former member was the late Ross Scaife (Stoa Consortium and University of Kentucky).
Blog
The Digital Classicist community have taken an active role in posting news to the long-standing blog of the Stoa Consortium, which concerns itself with both classical and digital humanities topics. Particular areas of focus are the open source and Creative Commons movements, and various communities of scholars with digital interests.
Discussion list
The Digital Classicist discussion list is hosted by JISCmail in the UK. Most list traffic consists of announcements and calls, with occasional flurries of more involved discussion.
Wiki
The main website of the Digital Classicist is a gateway containing links to the Digital Classicist wiki and other resources, including listings for seminars and conference panels. The seminar programmes include abstracts, slides (in PDF), audio (in MP3) and, from 2013, video recordings.
The project wiki contains lists of digital classics projects, software tools that have been made available for classicists, and a FAQ that solicits collaborative community advice on a range of topics from simple questions about, e.g., Greek fonts and Unicode, word-processing and printing issues, to more advanced Humanities Computing questions and project management advice. The wiki is hosted on the servers of the Centre for Computing in the Humanities at King's College London.
DCLP
The Digital Corpus of Literary Papyri (DCLP) is an online library that offers information about and transcriptions of Greek and Latin literary and subliterary papyri preserved on papyri, ceramic sherds (ostraka), wooden tablets, and other portable media. DCLP is a joint project of the Institute for Papyrology at the University of Heidelberg and of the Institute for the Study of the Ancient World at New York University.
Seminars and Publications
The members of the Digital Classicist community also report quite heavily on any conference and seminar activity that they take to reflect well on the project as a whole. Among the events cited are a series of summer seminars that have run each year since 2006 at the Institute of Classical Studies in London, panels at the Classical Association Annual Conference in Birmingham (2007), Glasgow (2009), and Durham (2011), and the Digital Resources in the Humanities and Arts conference in September 2008. The Project was also among the sponsors of the Open Source Critical Editions workshop in 2006.
In 2008 the Digital Medievalist published a collaborative issue of Digital Classicist articles in memory of Ross Scaife. A collection of papers from the 2007 seminar series and conference panels has been published by Ashgate: Digital Research in the Study of Classical Antiquity (Bodard and Mahony, eds, 2010). More recent papers have been collected in a Bulletin of the Institute of Classical Studies supplement: Mahony and Dunn (eds), The Digital Classicist 2013, BICS Supplement 122 (London: Institute of Classical Studies, 2013).
See also
Digital classics
EpiDoc
Thesaurus Linguae Graecae
References
External links
The Stoa Consortium
The Digital Classicist wiki
Digitalclassicist discussion list (@JISCmail)
Centre for Computing in the Humanities , King's College London
UCL Centre for Digital Humanities, University College London
Digital Medievalist 4 a special joint issue with Digital Classicist
Diotima: Women and Gender in the Ancient World
Projects established in 2005
King's College London
University of Kentucky
Research projects
Digital humanities
Educational projects
MediaWiki websites
Wiki communities
Computing in classical studies
Digital humanities projects | Digital Classicist | [
"Technology"
] | 965 | [
"Digital humanities",
"Computing and society",
"Computing in classical studies"
] |
8,604,538 | https://en.wikipedia.org/wiki/Grand%20600-cell | In geometry, the grand 600-cell or grand polytetrahedron is a regular star 4-polytope with Schläfli symbol {3, 3, 5/2}. It is one of 10 regular Schläfli-Hess polytopes. It is the only one with 600 cells.
It is one of four regular star 4-polytopes discovered by Ludwig Schläfli. It was named by John Horton Conway, extending the naming system by Arthur Cayley for the Kepler-Poinsot solids.
The grand 600-cell can be seen as the four-dimensional analogue of the great icosahedron (which in turn is analogous to the pentagram); these are the only regular n-dimensional star polytopes derived by performing stellation operations on the pentagonal polytope with simplex faces. It can be constructed analogously to the pentagram, its two-dimensional analogue, by extending the (n−1)-dimensional simplex facets of the core n-dimensional polytope (tetrahedra for the grand 600-cell, equilateral triangles for the great icosahedron, and line segments for the pentagram) until the figure regains regular faces.
The grand 600-cell is also dual to the great grand stellated 120-cell, mirroring the great icosahedron's duality with the great stellated dodecahedron (which in turn is also analogous to the pentagram); all of these are the final stellations of the n-dimensional "dodecahedral-type" pentagonal polytope.
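This dual pairing can be read off the Schläfli symbols, since taking the dual of a regular polytope reverses its symbol; a one-line check (a standard fact, stated here for reference):

```latex
% Duality reverses the Schläfli symbol: the dual of {p, q, r} is {r, q, p}.
% For the grand 600-cell:
\{3,\,3,\,\tfrac{5}{2}\}^{\ast} \;=\; \{\tfrac{5}{2},\,3,\,3\}
\quad\text{(the great grand stellated 120-cell).}
```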
Related polytopes
It has the same edge arrangement as the great stellated 120-cell, and grand stellated 120-cell, and same face arrangement as the great icosahedral 120-cell.
See also
List of regular polytopes
Convex regular 4-polytope
Kepler-Poinsot solids - regular star polyhedron
Star polygon - regular star polygons
References
Edmund Hess (1883), Einleitung in die Lehre von der Kugelteilung mit besonderer Berücksichtigung ihrer Anwendung auf die Theorie der Gleichflächigen und der gleicheckigen Polyeder.
H. S. M. Coxeter, Regular Polytopes, 3rd ed., Dover Publications, 1973.
John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 26, Regular Star-polytopes, pp. 404–408)
External links
Regular polychora
Discussion on names
Reguläre Polytope
The Regular Star Polychora
The Great 600-cell, a Zome Model
Regular 4-polytopes | Grand 600-cell | [
"Mathematics"
] | 576 | [
"Geometry",
"Geometry stubs"
] |
8,604,577 | https://en.wikipedia.org/wiki/Small%20stellated%20120-cell | In geometry, the small stellated 120-cell or stellated polydodecahedron is a regular star 4-polytope with Schläfli symbol {5/2,5,3}. It is one of 10 regular Schläfli-Hess polytopes.
Related polytopes
It has the same edge arrangement as the great grand 120-cell, and also shares its 120 vertices with the 600-cell and eight other regular star 4-polytopes. It may also be seen as the first stellation of the 120-cell. In this sense it could be seen as analogous to the three-dimensional small stellated dodecahedron, which is the first stellation of the dodecahedron. Indeed, the small stellated 120-cell is dual to the icosahedral 120-cell, which could be taken as a 4D analogue of the great dodecahedron, dual of the small stellated dodecahedron.
The edges of the small stellated 120-cell are τ² times as long as those of the 120-cell core inside the 4-polytope (where τ is the golden ratio).
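The numerical value of the factor τ² follows from the golden ratio's defining identity, shown here for reference (assuming τ denotes the golden ratio, as is standard in this context):

```latex
% tau denotes the golden ratio; tau^2 = tau + 1 gives the edge-length ratio.
\tau = \frac{1 + \sqrt{5}}{2} \approx 1.618,
\qquad
\tau^{2} = \tau + 1 = \frac{3 + \sqrt{5}}{2} \approx 2.618.
```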
See also
List of regular polytopes
Convex regular 4-polytope - set of convex regular 4-polytopes
Kepler-Poinsot solids - regular star polyhedron
Star polygon - regular star polygons
References
Edmund Hess (1883), Einleitung in die Lehre von der Kugelteilung mit besonderer Berücksichtigung ihrer Anwendung auf die Theorie der Gleichflächigen und der gleicheckigen Polyeder.
H. S. M. Coxeter, Regular Polytopes, 3rd ed., Dover Publications, 1973.
John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 26, Regular Star-polytopes, pp. 404–408)
External links
Regular polychora
Discussion on names
Reguläre Polytope
The Regular Star Polychora
Zome Model of the Final Stellation of the 120-cell
The First Stellation of the 120-cell, A Zome Model
Regular 4-polytopes | Small stellated 120-cell | [
"Mathematics"
] | 452 | [
"Geometry",
"Geometry stubs"
] |
8,604,585 | https://en.wikipedia.org/wiki/Icosahedral%20120-cell | In geometry, the icosahedral 120-cell, polyicosahedron, faceted 600-cell or icosaplex is a regular star 4-polytope with Schläfli symbol {3,5,5/2}. It is one of 10 regular Schläfli-Hess polytopes.
It is constructed from five icosahedra around each edge, meeting in a pentagrammic figure. The vertex figure is a great dodecahedron.
Related polytopes
It has the same edge arrangement as the 600-cell, grand 120-cell and great 120-cell, and shares its vertices with all other Schläfli–Hess 4-polytopes except the great grand stellated 120-cell (another stellation of the 120-cell).
As a faceted 600-cell, replacing the simplicial cells of the 600-cell with icosahedral pentagonal polytope cells, it could be seen as a four-dimensional analogue of the great dodecahedron, which replaces the triangular faces of the icosahedron with pentagonal faces. Indeed, the icosahedral 120-cell is dual to the small stellated 120-cell, which could be taken as a 4D analogue of the small stellated dodecahedron, dual of the great dodecahedron.
See also
List of regular polytopes
Convex regular 4-polytope
Kepler-Poinsot solids - regular star polyhedron
Star polygon - regular star polygons
References
Edmund Hess (1883), Einleitung in die Lehre von der Kugelteilung mit besonderer Berücksichtigung ihrer Anwendung auf die Theorie der Gleichflächigen und der gleicheckigen Polyeder.
H. S. M. Coxeter, Regular Polytopes, 3rd ed., Dover Publications, 1973.
John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 26, Regular Star-polytopes, pp. 404–408)
External links
Regular polychora
Discussion on names
Reguläre Polytope
The Regular Star Polychora
Regular 4-polytopes | Icosahedral 120-cell | [
"Mathematics"
] | 458 | [
"Geometry",
"Geometry stubs"
] |
8,604,599 | https://en.wikipedia.org/wiki/Grand%20120-cell | In geometry, the grand 120-cell or grand polydodecahedron is a regular star 4-polytope with Schläfli symbol {5,3,5/2}. It is one of 10 regular Schläfli-Hess polytopes.
It is one of four regular star 4-polytopes discovered by Ludwig Schläfli. It was named by John Horton Conway, extending the naming system by Arthur Cayley for the Kepler-Poinsot solids.
Related polytopes
It has the same edge arrangement as the 600-cell, icosahedral 120-cell and the same face arrangement as the great 120-cell.
It could be seen as another 4D analogue of the three-dimensional great dodecahedron due to being a pentagonal polytope with enlarged facets.
See also
List of regular polytopes
Convex regular 4-polytope
Kepler-Poinsot solids - regular star polyhedron
Star polygon - regular star polygons
References
Edmund Hess (1883), Einleitung in die Lehre von der Kugelteilung mit besonderer Berücksichtigung ihrer Anwendung auf die Theorie der Gleichflächigen und der gleicheckigen Polyeder.
H. S. M. Coxeter, Regular Polytopes, 3rd ed., Dover Publications, 1973.
John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 26, Regular Star-polytopes, pp. 404–408)
External links
Regular polychora
Discussion on names
Reguläre Polytope
The Regular Star Polychora
Regular 4-polytopes | Grand 120-cell | [
"Mathematics"
] | 358 | [
"Geometry",
"Geometry stubs"
] |
8,604,610 | https://en.wikipedia.org/wiki/Great%20stellated%20120-cell | In geometry, the great stellated 120-cell or great stellated polydodecahedron is a regular star 4-polytope with Schläfli symbol {5/2,3,5}. It is one of 10 regular Schläfli-Hess polytopes.
It is one of four regular star 4-polytopes discovered by Ludwig Schläfli. It was named by John Horton Conway, extending the naming system by Arthur Cayley for the Kepler-Poinsot solids.
Related polytopes
It has the same edge arrangement as the grand 600-cell, icosahedral 120-cell, and the same face arrangement as the grand stellated 120-cell.
See also
List of regular polytopes
Convex regular 4-polytope
Kepler-Poinsot solids - regular star polyhedron
Star polygon - regular star polygons
References
Edmund Hess (1883), Einleitung in die Lehre von der Kugelteilung mit besonderer Berücksichtigung ihrer Anwendung auf die Theorie der Gleichflächigen und der gleicheckigen Polyeder.
H. S. M. Coxeter, Regular Polytopes, 3rd ed., Dover Publications, 1973.
John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 26, Regular Star-polytopes, pp. 404–408)
External links
Regular polychora
Discussion on names
Reguläre Polytope
The Regular Star Polychora
Paper model of 3D cross-section of Great Stellated 120-cell created using nets generated by Stella4D software
Regular 4-polytopes | Great stellated 120-cell | [
"Mathematics"
] | 357 | [
"Geometry",
"Geometry stubs"
] |
8,604,627 | https://en.wikipedia.org/wiki/Great%20grand%20120-cell | In geometry, the great grand 120-cell or great grand polydodecahedron is a regular star 4-polytope with Schläfli symbol {5,5/2,3}. It is one of 10 regular Schläfli-Hess polytopes.
Related polytopes
It has the same edge arrangement as the small stellated 120-cell.
See also
List of regular polytopes
Convex regular 4-polytope
Kepler-Poinsot polyhedron – regular star polyhedron
Star polygon – regular star polygons
References
Edmund Hess (1883), Einleitung in die Lehre von der Kugelteilung mit besonderer Berücksichtigung ihrer Anwendung auf die Theorie der Gleichflächigen und der gleicheckigen Polyeder.
H. S. M. Coxeter, Regular Polytopes, 3rd ed., Dover Publications, 1973.
John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 26, Regular Star-polytopes, pp. 404–408)
External links
Regular polychora
Discussion on names
Reguläre Polytope
The Regular Star Polychora
Regular 4-polytopes | Great grand 120-cell | [
"Mathematics"
] | 268 | [
"Geometry",
"Geometry stubs"
] |
8,604,644 | https://en.wikipedia.org/wiki/Great%20icosahedral%20120-cell | In geometry, the great icosahedral 120-cell, great polyicosahedron or great faceted 600-cell is a regular star 4-polytope with Schläfli symbol {3,5/2,5}. It is one of 10 regular Schläfli-Hess polytopes.
Related polytopes
It has the same edge arrangement as the great stellated 120-cell, and grand stellated 120-cell, and face arrangement of the grand 600-cell.
See also
List of regular polytopes
Convex regular 4-polytope
Kepler-Poinsot solids - regular star polyhedron
Star polygon - regular star polygons
References
Edmund Hess (1883), Einleitung in die Lehre von der Kugelteilung mit besonderer Berücksichtigung ihrer Anwendung auf die Theorie der Gleichflächigen und der gleicheckigen Polyeder.
H. S. M. Coxeter, Regular Polytopes, 3rd ed., Dover Publications, 1973.
John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 26, Regular Star-polytopes, pp. 404–408)
External links
Regular polychora
Discussion on names
Reguläre Polytope
The Regular Star Polychora
Regular 4-polytopes | Great icosahedral 120-cell | [
"Mathematics"
] | 292 | [
"Geometry",
"Geometry stubs"
] |
8,604,660 | https://en.wikipedia.org/wiki/Great%20120-cell | In geometry, the great 120-cell or great polydodecahedron is a regular star 4-polytope with Schläfli symbol {5,5/2,5}. It is one of 10 regular Schläfli-Hess polytopes. It is one of the two such polytopes that is self-dual.
Related polytopes
It has the same edge arrangement as the 600-cell, icosahedral 120-cell as well as the same face arrangement as the grand 120-cell.
Due to its self-duality, it does not have a good three-dimensional analogue, but (like all other star polyhedra and polychora) is analogous to the two-dimensional pentagram.
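The self-duality can likewise be read off the Schläfli symbol: duality reverses the symbol, and {5,5/2,5} is palindromic, so the polytope is its own dual:

```latex
% Duality reverses the Schläfli symbol; a palindromic symbol is self-dual:
\{5,\,\tfrac{5}{2},\,5\}^{\ast} = \{5,\,\tfrac{5}{2},\,5\}.
```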
See also
List of regular polytopes
Convex regular 4-polytope
Kepler-Poinsot solids - regular star polyhedron
Star polygon - regular star polygons
References
Edmund Hess (1883), Einleitung in die Lehre von der Kugelteilung mit besonderer Berücksichtigung ihrer Anwendung auf die Theorie der Gleichflächigen und der gleicheckigen Polyeder.
H. S. M. Coxeter, Regular Polytopes, 3rd ed., Dover Publications, 1973.
John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 26, Regular Star-polytopes, pp. 404–408)
External links
Regular polychora
Discussion on names
Reguläre Polytope
The Regular Star Polychora
Regular 4-polytopes | Great 120-cell | [
"Mathematics"
] | 339 | [
"Geometry",
"Geometry stubs"
] |
8,604,676 | https://en.wikipedia.org/wiki/Grand%20stellated%20120-cell | In geometry, the grand stellated 120-cell or grand stellated polydodecahedron is a regular star 4-polytope with Schläfli symbol {5/2,5,5/2}. It is one of 10 regular Schläfli-Hess polytopes.
It is also one of two such polytopes that is self-dual.
Related polytopes
It has the same edge arrangement as the grand 600-cell, icosahedral 120-cell, and the same face arrangement as the great stellated 120-cell.
Due to its self-duality, it does not have a good three-dimensional analogue, but (like all other star polyhedra and polychora) is analogous to the two-dimensional pentagram.
See also
List of regular polytopes
Convex regular 4-polytope
Kepler-Poinsot solids - regular star polyhedron
Star polygon - regular star polygons
References
Edmund Hess (1883), Einleitung in die Lehre von der Kugelteilung mit besonderer Berücksichtigung ihrer Anwendung auf die Theorie der Gleichflächigen und der gleicheckigen Polyeder.
H. S. M. Coxeter, Regular Polytopes, 3rd ed., Dover Publications, 1973.
John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 26, Regular Star-polytopes, pp. 404–408)
External links
Regular polychora
Discussion on names
Reguläre Polytope
The Regular Star Polychora
Regular 4-polytopes | Grand stellated 120-cell | [
"Mathematics"
] | 349 | [
"Geometry",
"Geometry stubs"
] |
8,604,844 | https://en.wikipedia.org/wiki/Lubiprostone | Lubiprostone, sold under the brand name Amitiza among others, is a medication used in the management of chronic idiopathic constipation, predominantly irritable bowel syndrome-associated constipation in women and opioid-induced constipation. The drug is owned by Mallinckrodt and is marketed by Takeda Pharmaceutical Company.
The drug was developed by Sucampo Pharmaceuticals and approved by the Food and Drug Administration (FDA) in 2006. It was recommended for use in the UK by the National Institute for Health and Care Excellence (NICE) in July 2014. Health Canada approved the drug in 2015. The FDA approved lubiprostone in 2008 to treat irritable bowel syndrome with constipation (IBS-C), and in 2013 for the treatment of opioid-induced constipation in adults with chronic noncancer pain. It is available as a generic medication.
Medical uses
Lubiprostone is a laxative used for the treatment of constipation, specifically:
Chronic idiopathic constipation (difficult or infrequent passage of stools that lasts for 3 months or longer and is not caused by diet, disease, or drugs).
Constipation caused by certain opioid (narcotic) pain medications in people with chronic (ongoing), noncancer pain, or in patients with long-lasting pain caused by a previous cancer or its treatment who do not need weekly increases in opioid dosage.
The effectiveness of lubiprostone has not been established in patients who are taking a diphenylheptane opioid (e.g., methadone).
Irritable bowel syndrome with constipation (IBS-C; a condition that causes stomach pain or cramps, bloating, and infrequent or difficult passage of stools) in women who are at least 18 years of age.
Lubiprostone has not been studied in children. Research is under way to determine its safety and efficacy in postoperative bowel dysfunction.
It comes in a liquid-filled capsule and is available only with a doctor's prescription. A missed dose should be taken as soon as possible, unless it is almost time for the next dose, in which case it should be skipped and the regular dosing schedule resumed.
Adverse effects
In clinical trials, the most common adverse event was nausea (31%). Other adverse events (≥5% of patients) included diarrhea (13%), headache (13%), abdominal distension (5%), abdominal pain (5%), flatulence (6%), sinusitis (5%), vomiting (5%), and fecal incontinence (1%).
The FDA lists the following:
For subjects with chronic idiopathic constipation taking Amitiza:
Nausea: ~29% (4% of cases were severe, and 9% of patients discontinued treatment due to nausea; the rate of nausea was lower among male (8%) and elderly (19%) patients, and no patients in the clinical studies were hospitalized due to nausea)
Diarrhea: ~12% (2% were severe, and 2% of patients discontinued treatment due to diarrhea)
Several less common adverse reactions (<1%).
For opioid-induced constipation:
Nausea: ~ 11%; 1% severe nausea and 2% discontinued treatment due to nausea.
Diarrhea: ~ 8%; 2% severe diarrhea and 1% of patients discontinued treatment due to diarrhea.
Less common adverse reactions (<1%): fecal incontinence, blood potassium decreased.
For subjects with irritable bowel syndrome with constipation:
Nausea: ~ 8%; 1% severe nausea and 1% discontinued treatment due to nausea.
Diarrhea: ~ 7%; <1% of patients had severe diarrhea and <1% of patients discontinued treatment due to diarrhea.
Less common adverse reactions: <1%
A 2018 pooled analysis of three phase III, randomized, double-blind, placebo-controlled studies of usage for opioid-induced constipation found that the numbers of patients reporting adverse effects were similar in the lubiprostone and placebo treatment groups for all opioid classes (P ≥ 0.125); however, gastrointestinal adverse effects were reported more frequently by those receiving lubiprostone in 2 of the 3 opioid groups. The most commonly reported treatment-emergent adverse events (TEAEs) in the lubiprostone treatment groups were nausea (13.4%–18.1%), diarrhea (1.2%–13.9%), and abdominal pain (4.7%–5.6%). In the population overall, the likelihood of experiencing the first episode of any of these three TEAEs was greatest in the first week of treatment and decreased thereafter.
According to Medscape, the most common (>10%) were: Nausea, Diarrhea (7-12%), Headache (2-11%). Less common side effects (1-10%) included: Abdominal pain (4-8%), Abdominal distension (3-6%), Flatulence (4-6%), Vomiting (3%), Loose stools (3%), Edema (1-3%), Abdominal discomfort (1-3%), Dizziness (3%), Chest discomfort/pain (2%), Dyspnea (2%), Dyspepsia (2%), Fatigue (2%), Dry mouth (1%).
Contraindications
Known or suspected mechanical GI obstruction.
Known hypersensitivity to lubiprostone or any ingredient in the formulation.
The effects on pregnancy have not been studied in humans, but testing in guinea pigs resulted in fetal loss.
Lubiprostone is contraindicated in people exhibiting chronic diarrhea, bowel obstruction, or diarrhea-predominant irritable bowel syndrome.
Mechanism of action
Lubiprostone is a bicyclic fatty acid derived from prostaglandin E1 that acts by specifically activating ClC-2 chloride channels on the apical aspect of gastrointestinal epithelial cells, producing a chloride-rich fluid secretion. These secretions soften the stool, increase motility, and promote spontaneous bowel movements.
Pharmacokinetics
Unlike many laxative products, lubiprostone does not show signs of drug tolerance, chemical dependency, or altered serum electrolyte concentration.
Minimal distribution of the drug occurs beyond the immediate gastrointestinal tissues. Lubiprostone is rapidly metabolized by reduction/oxidation, mediated by carbonyl reductase. There is no metabolic involvement of the hepatic cytochrome P450 system. The measurable metabolite, M3, exists in very low levels in plasma and makes up less than 10% of the total administered dose.
Data indicate that metabolism occurs locally in the stomach and jejunum.
Society and culture
Economics
The cost to the NHS was £29.68 per 24 mcg 28-cap pack as of April 2017.
Brand names
Lubiprostone is available in the United States, Japan, Switzerland, India, Bangladesh, the United Kingdom, and Canada.
In Bangladesh and India, lubiprostone is sold under the brand name Lubigut by Ziska Pharmaceuticals, Lubilax by Beacon Pharmaceuticals, and under the brand name Lubowel by Sun Pharmaceutical.
References
Drugs acting on the gastrointestinal system and metabolism
Fatty acids
Laxatives
Organofluorides
Lactols
Ketones
Oxygen heterocycles
Heterocyclic compounds with 2 rings
Drugs developed by Takeda Pharmaceutical Company | Lubiprostone | [
"Chemistry"
] | 1,652 | [
"Ketones",
"Lactols",
"Functional groups"
] |
8,605,007 | https://en.wikipedia.org/wiki/Entropic%20security | Entropic security is a security definition used in the field of cryptography. Modern encryption schemes are generally required to protect communications even when the attacker has substantial information about the messages being encrypted. For example, even if an attacker knows that an intercepted ciphertext encrypts either the message "Attack" or the message "Retreat", a semantically secure encryption scheme will prevent the attacker from learning which of the two messages is encrypted. However, definitions such as semantic security are too strong to achieve with certain specialized encryption schemes. Entropic security is a weaker definition that can be used in the special case where an attacker has very little information about the messages being encrypted.
It is well known that certain types of encryption algorithm cannot satisfy definitions such as semantic security: for example, deterministic encryption algorithms can never be semantically secure. Entropic security definitions relax these definitions to cases where the message space has substantial entropy (from an adversary's point of view). Under this definition it is possible to prove security of deterministic encryption.
Note that in practice entropically-secure encryption algorithms are only "secure" provided that the message distribution possesses high entropy from any reasonable adversary's perspective. This is an unrealistic assumption for a general encryption scheme, since one cannot assume that all likely users will encrypt high-entropy messages. For these schemes, stronger definitions (such as semantic security or indistinguishability under adaptive chosen ciphertext attack) are appropriate. However, there are special cases in which it is reasonable to require high entropy messages. For example, encryption schemes that encrypt only secret key material (e.g., key encapsulation or Key Wrap schemes) can be considered under an entropic security definition. A practical application of this result is the use of deterministic encryption algorithms for secure encryption of secret key material.
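To make the trade-off concrete, here is a toy deterministic, SIV-style cipher in Python (an illustrative construction of our own, not a standardized scheme; all names and parameters are invented). Because encryption is deterministic, equal plaintexts yield equal ciphertexts, so the scheme leaks message equality; when the messages are uniformly random keys, that leakage reveals essentially nothing, which is the intuition behind entropic security:

```python
import hmac
import hashlib
import os

def _keystream(key: bytes, iv: bytes, length: int) -> bytes:
    # Expand (key, iv) into a pseudorandom keystream via HMAC-SHA256 in counter mode.
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(key, iv + counter.to_bytes(4, "big"), hashlib.sha256).digest()
        counter += 1
    return out[:length]

def encrypt(k1: bytes, k2: bytes, msg: bytes) -> bytes:
    # SIV-style: the "IV" is a deterministic MAC of the message, so the whole
    # ciphertext is a deterministic function of (keys, message).
    iv = hmac.new(k1, msg, hashlib.sha256).digest()[:16]
    ks = _keystream(k2, iv, len(msg))
    return iv + bytes(m ^ s for m, s in zip(msg, ks))

def decrypt(k1: bytes, k2: bytes, ct: bytes) -> bytes:
    iv, body = ct[:16], ct[16:]
    ks = _keystream(k2, iv, len(body))
    msg = bytes(c ^ s for c, s in zip(body, ks))
    # Recomputing the IV authenticates the ciphertext.
    if not hmac.compare_digest(iv, hmac.new(k1, msg, hashlib.sha256).digest()[:16]):
        raise ValueError("invalid ciphertext")
    return msg

k1, k2 = os.urandom(32), os.urandom(32)
# Low-entropy messages: determinism leaks equality of plaintexts.
assert encrypt(k1, k2, b"Attack") == encrypt(k1, k2, b"Attack")
# High-entropy messages (e.g., random session keys): equality leakage is vacuous.
session_key = os.urandom(32)
assert decrypt(k1, k2, encrypt(k1, k2, session_key)) == session_key
```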
Russell and Wang formalized a definition of entropic security for encryption. Their definition resembles the semantic security definition when message spaces have a highly entropic distribution. In one formalization, the definition implies that an adversary given the ciphertext will be unable to compute any predicate of the message with (substantially) greater probability than an adversary who does not possess the ciphertext. Dodis and Smith later proposed alternate definitions and showed that they are equivalent.
References
A. Russell and Y. Wang. How to fool an unbounded adversary with a short key. Appeared at Advances in Cryptology – Eurocrypt 2002.
Y. Dodis and A. Smith. Entropic Security and the encryption of high-entropy messages. Appeared at the Theory of Cryptography Conference (TCC) 2005.
Cryptography | Entropic security | [
"Mathematics",
"Engineering"
] | 556 | [
"Applied mathematics",
"Cryptography",
"Cybersecurity engineering"
] |
8,605,453 | https://en.wikipedia.org/wiki/Solar%20neutrino%20unit | The solar neutrino unit (SNU) is a unit of Solar neutrino flux widely used in neutrino astronomy and radiochemical neutrino experiments. It is equal to the neutrino flux producing 10−36 captures per target atom per second. It is convenient given the very low event rates in radiochemical experiments. Typical rate is expected to be from tens SNU to hundred SNU.
There are two ways of detecting solar neutrinos: radiochemical and real-time experiments. The principle of radiochemical experiments is a capture reaction of the form

$\nu_e + {}^{A}_{Z}\mathrm{X} \rightarrow {}^{A}_{Z+1}\mathrm{Y} + e^{-}$.

The decay of the daughter nucleus is used in the detection. The production rate $R$ of the daughter nucleus is given by

$R = \Phi \sigma N$,

where
$\Phi$ is the solar neutrino flux,
$\sigma$ is the cross section for the radiochemical reaction, and
$N$ is the number of target atoms.
With a typical neutrino flux of 10¹⁰ cm⁻² s⁻¹ and a typical interaction cross section of about 10⁻⁴⁵ cm², about 10³⁰ target atoms are required to produce one event per day. Taking into account that 1 mole is equal to 6.022×10²³ atoms, this number corresponds to kilotons of the target substances, whereas present neutrino detectors operate at much lower quantities of those.
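As a quick back-of-the-envelope check using the round numbers above (a sketch, not a calculation from the references), the following Python snippet converts the typical flux and cross section into an SNU rate and into the number of target atoms needed for one capture per day:

```python
SECONDS_PER_DAY = 86_400

flux = 1e10      # typical solar neutrino flux, cm^-2 s^-1
sigma = 1e-45    # typical capture cross section, cm^2

# Capture rate per target atom, and the same rate expressed in SNU
rate_per_atom = flux * sigma              # captures / (atom * s) -> 1e-35
rate_in_snu = rate_per_atom / 1e-36       # 1 SNU = 1e-36 captures / (atom * s)
print(f"{rate_in_snu:.0f} SNU")           # -> 10 SNU

# Number of target atoms needed to see one capture per day
atoms_needed = (1 / SECONDS_PER_DAY) / rate_per_atom
print(f"{atoms_needed:.1e} atoms")        # -> ~1.2e+30 atoms
```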
See also
Neutrino
Neutrino detector
Mole (unit)
Solar neutrino
Terrestrial Neutrino Units (TNU)
Links
References
Units of frequency | Solar neutrino unit | [
"Mathematics"
] | 277 | [
"Quantity",
"Units of frequency",
"Units of measurement"
] |
8,605,573 | https://en.wikipedia.org/wiki/Comparison%20of%20movie%20cameras | This article summarized the comparison of movie cameras.
35 mm
The 35 mm film gauge has long been the most common gauge in professional usage, and thus enjoys the greatest number of cameras currently available for professional usage. The modern era of 35 mm cameras dates to 1972, when Arri's Arriflex 35BL and Panavision's original Panaflex models emerged as the first self-blimped, lightweight cameras. Another distinguishing characteristic of modern cameras is the adoption of stronger lens mount seatings secured with a breech lock – namely the Arri PL and PV mount, both of which were designs descended from the BNCR mount of Mitchell cameras.
General
Camera model – specific camera body models and variants, usually officially authorized
Camera line – either the body family (similar bodies) or system family (complementary design)
Manufacturer – company of origin
Introduced – first year of known usage
Weight – usually just the body, but may include accessories as mentioned
MOS/sync – Sync-sound cameras are able to both maintain a constant speed (usually crystal lock) and run quietly enough not to be heard by the sound recordist. MOS cameras do not meet either one or both of these requirements, and are usually used either for applications where camera noise is not a concern, or non-standard camera speeds are required. A camera is also deemed MOS if it cannot hold a constant speed, regardless of its noise levels.
Noise level – measured noise made by the camera, dB(A), with film and at a given distance from the film plane, usually one meter. MOS cameras do not have a measured noise level since they are not intended to be used with recorded sound and thus are much louder.
A limited number of cameras prior to the modern period are listed due to their prevalence in special applications.
Lens and gate aperture
Lens mount – the type of mount required for using the camera. Certain lenses may not be able to be used with particular cameras if the mounts are incompatible. The lens mount must be shifted to be centered to accommodate the Super 16 format from standard 16.
Aperture size – the size of the aperture of the gate.
Aperture plate – is the gate removable for inspection and what accessories may it have?
Lens interface – electronic information system located in the lens mount to communicate lens data to the camera and accessories.
Ground glass – interchangeable ground glasses allow for the viewfinder to display whichever aspect ratio is being framed for.
Frameline glow – can the camera make the framelines glow for easier viewing in low-light conditions?
Shutter
Reflex – is the shutter a reflex mirror design?
Design – rotary disc shutters have two common designs: a "half-moon" disc of 180°, or a "butterfly" disc with two segments (e.g., 90° each) opposite each other, which spins at half speed.
Location – where the shutter is centered
Adjustment – how the shutter angle can be adjusted. Most manual designs can only be adjusted when the camera is not running, often with the lens removed. All electronic shutters allow adjustment at all times, including when the camera is running.
Angles – shutter angles available and in what increments or stops, if not continuous. (The exposure-time relation implied by shutter angle and frame rate is sketched after this list.)
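One practical consequence of shutter angle worth noting (a standard relation, added here for illustration rather than taken from the comparison tables): the exposure time per frame is the shutter angle's fraction of a full revolution divided by the frame rate.

```python
def exposure_time(shutter_angle_deg: float, fps: float) -> float:
    """Exposure time per frame for a rotary disc shutter."""
    return (shutter_angle_deg / 360.0) / fps

# A 180-degree shutter at 24 frame/s gives the classic 1/48 s exposure.
print(exposure_time(180.0, 24.0))   # -> 0.02083... (1/48 s)
```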
Movement
Movement type – design of the movement mechanism
Pulldown claws – number of claws which engage the film perforations to transport the film while the shutter is closed. Some claws may have more than one pin in order to engage multiple perfs at a time.
Registration pins – number of pins which engage the film perforations during exposure in order to ensure consistent image stability from frame to frame.
Frame rate (forward) – range of speeds in frame/s (frames per second) and smallest increments of change allowed. Accessories noted where required for certain speeds.
Frame rate (reverse) – range of speeds in frame/s (frames per second) and smallest increments of change allowed. Accessories noted where required for certain speeds.
Motor – type of motor, voltage, crystal-controlled speeds (Xtal)
Pulldown – negative pulldown options available
Pitch control? – does the camera allow for adjustment of the pulldown claw to optimize camera noise and avoid perforation damage?
Viewfinder
16 mm
16 mm film occupies a rather curious position within filmmaking – with a wide range encompassing virtually every field – amateur home movies, student films, experimental films, television work, commercials, music videos, corporate films, industrial research, medical applications, and lower budget features. Its robust image quality in relation to its size allows for a much more versatile, accessible, and affordable usage in many fields where neither 35 mm nor Super 8 would be well-suited. Despite current challenges from the burgeoning digital video market, the consistent improvement of cameras, lenses, and film stocks has enabled the Super 16 format to flourish recently, with many labs reporting increased usage. The modern era of 16 mm cameras is concurrent with that of 35 mm, for the same reasons as 35 mm as well as an additional change: the creation of the Super 16 format by Rune Ericsson in 1971. The format expanded the usable film negative horizontally, which required a larger film gate and necessitated either specialized conversion of machined parts or purchase of new cameras designed with Super 16 gates. Since the format took more than a decade to slowly standardize, competition from both high- and low-end video cameras has decimated the demand for 16 mm cameras for most non-professional usage. Therefore, there are relatively few Super 16 cameras, although most are considered professional-grade.
General
A limited number of cameras prior to this period are listed due to their prevalence in special applications.
See also
Comparison of digital SLRs
References
Arri official site
Aaton official site
Arri. Arri: A Picture Chronicle, 2001
Burum, Stephen (editor). American Cinematographer Manual, 9th edition. ASC Press, 2004
Diaz-Amador, Jorge. "Cinema Technic: Camera History/Info", 2001–06
Hummel, Rob (editor). American Cinematographer Manual, 8th edition. ASC Press, 2001.
Kaczek, Frédéric G. "The History of Fritz Gabriel Bauer's Moviecam". PDF file formerly available at www.moviecam.com, 2002
Moviecam History (official site)
Panavision History (official site)
Movie cameras
Movie cameras | Comparison of movie cameras | [
"Technology"
] | 1,270 | [
"nan"
] |
8,605,679 | https://en.wikipedia.org/wiki/Pipe%20dope | Pipe dope is any thread lubricant, thread sealing compound, or anaerobic chemical sealant that is used to make a pipe thread joint leakproof and pressure tight. It is also referred to as "thread compound" or "pipe thread sealant." Although common pipe threads are tapered and therefore will achieve an interference fit during proper assembly, machining and finishing variances usually result in a fit that does not result in 100 percent contact between the mating components. The application of pipe dope prior to assembly will fill the minute voids between the threads, thus making the joint pressure tight. Pipe dope also acts as a lubricant and helps prevent seizing of the mating parts, which can later cause difficulty during disassembly.
Composition
A material safety data sheet reports that the "Permatex" 51D pipe joint compound contains kaolin clay, vegetable oil, rosin, ethanol, etc. The ingredients are designed to:
fill minute spaces between mating pipe fittings (kaolin), and
serve as a lubricant as the fittings are forced together (vegetable oil).
Various types of pipe dope formulation exist, the appropriate type being determined by the application, e.g., pneumatic, hydraulic, caustic, etc., as well as the expected pressure. Improper selection of the type of pipe dope may result in leakage despite the best assembly practices.
As recently as the 1950s, toxic lead oxide mixed with spar varnish was used as a dope for drinking water pipes. Litharge (a form of lead oxide) mixed with glycerine was also used in some applications. Litharge was also the recommended pipe dope for fuel piping.
Pipe dopes used in oil drilling frequently include powdered graphite, lead and zinc, and sometimes copper suspended in grease. In the past, nickel, talc, silica, calcium carbonate and clay have been included. Heavy metals make up to 25% of the volume of the dope. Silicone is included to improve application to wet or cold surfaces.
Plastic pipes
Petroleum-based pipe dope is not intended for use on threaded PVC, CPVC or ABS pipe and fittings since it will deteriorate the plastic. Builders in the US are expected to use thread compounds that meet ASTM F2331 - Standard Test Method for Determining Chemical Compatibility of Thread Sealants with Thermoplastic Threaded Pipe and Fittings Materials or thread seal tape on PVC, CPVC and ABS threads.
References
Plumbing
Lubricants | Pipe dope | [
"Engineering"
] | 525 | [
"Construction",
"Plumbing"
] |
8,605,825 | https://en.wikipedia.org/wiki/Bulbocapnine | Bulbocapnine is an alkaloid found in Corydalis (notably the European species C. cava) and Dicentra, genera of the plant family Fumariaceae which have caused (notably the American species Corydalis caseana) the fatal poisoning of sheep and cattle. It has been shown to act as an acetylcholinesterase inhibitor, and inhibits biosynthesis of dopamine via inhibition of the enzyme tyrosine hydroxylase. Like apomorphine, it is reported to be an inhibitor of amyloid beta protein (Aβ) fiber formation, whose presence is a hallmark of Alzheimer's disease (AD). Bulbocapnine is thus a potential therapeutic under the amyloid hypothesis. According to the Dorlands Medical Dictionary, it "inhibits the reflex and motor activities of striated muscle. It has been used in the treatment of muscular tremors and vestibular nystagmus".
A psychiatrist at Tulane University named Robert Heath carried out experiments on prisoners at the Louisiana State Penitentiary using bulbocapnine to induce stupor. This work at Tulane inspired, and was continued parallel to, experiments carried out at the behest of the Central Intelligence Agency. The bulbocapnine work Heath conducted for the government was one component of a large investigation into the potential of psychoactive compounds as aids to interrogation.
Effects
It can induce catalepsy featuring the curious symptom of waxy flexibility; the state produced by the drug has been compared to akinetic mutism.
In popular culture
In literature
The author William S. Burroughs references the drug in his book Naked Lunch (1959), in which the fictional Dr. Benway uses it to induce obedience in torture victims.
In television
The drug's use to treat Mayor Kane's father-in-law and predecessor is a plot point in season 2 of the TV series Boss, e.g., in episodes s2.e8 ("Consequences"; October 5, 2012) and s2.e9 ("Church"; October 12, 2012).
References
Acetylcholinesterase inhibitors
Aporphine alkaloids
Alkaloids found in Papaveraceae
Phenol ethers
Nitrogen heterocycles
Tyrosine hydroxylase inhibitors
Benzodioxoles
Heterocyclic compounds with 5 rings
Plant toxins
Methoxy compounds
Toxicology | Bulbocapnine | [
"Chemistry",
"Environmental_science"
] | 491 | [
"Toxicology",
"Chemical ecology",
"Plant toxins"
] |
8,606,325 | https://en.wikipedia.org/wiki/Kutta%E2%80%93Joukowski%20theorem | The Kutta–Joukowski theorem is a fundamental theorem in aerodynamics used for the calculation of lift of an airfoil (and any two-dimensional body including circular cylinders) translating in a uniform fluid at a constant speed so large that the flow seen in the body-fixed frame is steady and unseparated. The theorem relates the lift generated by an airfoil to the speed of the airfoil through the fluid, the density of the fluid and the circulation around the airfoil. The circulation is defined as the line integral around a closed loop enclosing the airfoil of the component of the velocity of the fluid tangent to the loop. It is named after Martin Kutta and Nikolai Zhukovsky (or Joukowski) who first developed its key ideas in the early 20th century. Kutta–Joukowski theorem is an inviscid theory, but it is a good approximation for real viscous flow in typical aerodynamic applications.
Kutta–Joukowski theorem relates lift to circulation much like the Magnus effect relates side force (called Magnus force) to rotation. However, the circulation here is not induced by rotation of the airfoil. The fluid flow in the presence of the airfoil can be considered to be the superposition of a translational flow and a rotating flow. This rotating flow is induced by the effects of camber, angle of attack and the sharp trailing edge of the airfoil. It should not be confused with a vortex like a tornado encircling the airfoil. At a large distance from the airfoil, the rotating flow may be regarded as induced by a line vortex (with the rotating line perpendicular to the two-dimensional plane). In the derivation of the Kutta–Joukowski theorem the airfoil is usually mapped onto a circular cylinder. In many textbooks, the theorem is proved for a circular cylinder and the Joukowski airfoil, but it holds true for general airfoils.
Lift force formula
The theorem applies to two-dimensional inviscid flow around an airfoil section (or any shape of infinite span). The lift per unit span $L'$ of the airfoil is given by

$$L' = \rho_\infty V_\infty \Gamma, \qquad (1)$$

where $\rho_\infty$ and $V_\infty$ are the fluid density and the fluid velocity far upstream of the airfoil, and $\Gamma$ is the circulation defined as the line integral

$$\Gamma = \oint_C V \cos\theta \, \mathrm{d}s$$

around a closed contour $C$ enclosing the airfoil and followed in the negative (clockwise) direction. As explained below, this path must be in a region of potential flow and not in the boundary layer of the cylinder. The integrand $V \cos\theta$ is the component of the local fluid velocity in the direction tangent to the curve $C$, and $\mathrm{d}s$ is an infinitesimal length on the curve $C$. Equation (1) is a form of the Kutta–Joukowski theorem.
Kuethe and Schetzer state the Kutta–Joukowski theorem as follows:
The force per unit length acting on a right cylinder of any cross section whatsoever is equal to $\rho_\infty V_\infty \Gamma$ and is perpendicular to the direction of $V_\infty$.
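The theorem is easy to verify numerically for the textbook case of a circular cylinder in potential flow (a sketch with arbitrary parameter values, not taken from the references): integrate the Bernoulli surface pressure around the cylinder and compare the resulting lift with $\rho_\infty V_\infty \Gamma$.

```python
import numpy as np

rho, U, a = 1.225, 10.0, 0.5   # fluid density (kg/m^3), free stream (m/s), radius (m)
gamma = 4.0                    # circulation (m^2/s), measured clockwise as in Eq. (1)

n = 100_000
theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)

# Surface speed for potential flow past a circular cylinder with circulation
q = 2.0 * U * np.sin(theta) + gamma / (2.0 * np.pi * a)

# Bernoulli gives the surface pressure relative to the free stream
p = 0.5 * rho * (U**2 - q**2)

# Lift per unit span: y-component of the integrated pressure force,
# F_y = -(integral of p * n_y ds) with n_y = sin(theta) and ds = a dtheta
lift = -np.sum(p * np.sin(theta)) * a * (2.0 * np.pi / n)

print(f"integrated lift: {lift:10.4f} N/m")          # -> 49.0000 N/m
print(f"rho * U * Gamma: {rho * U * gamma:10.4f} N/m")  # -> 49.0000 N/m
```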
Circulation and the Kutta condition
A lift-producing airfoil either has camber or operates at a positive angle of attack, the angle between the chord line and the fluid flow far upstream of the airfoil. Moreover, the airfoil must have a sharp trailing edge.
Any real fluid is viscous, which implies that the fluid velocity vanishes on the airfoil. Prandtl showed that for large Reynolds number, defined as $\mathrm{Re} = \rho_\infty V_\infty c / \mu_\infty$ (with $c$ the chord length and $\mu_\infty$ the dynamic viscosity of the upstream fluid), and small angle of attack, the flow around a thin airfoil is composed of a narrow viscous region called the boundary layer near the body and an inviscid flow region outside. In applying the Kutta–Joukowski theorem, the loop must be chosen outside this boundary layer. (For example, the circulation calculated using the loop corresponding to the surface of the airfoil would be zero for a viscous fluid.)
The sharp trailing edge requirement corresponds physically to a flow in which the fluid moving along the lower and upper surfaces of the airfoil meet smoothly, with no fluid moving around the trailing edge of the airfoil. This is known as the Kutta condition.
Kutta and Joukowski showed that for computing the pressure and lift of a thin airfoil for flow at large Reynolds number and small angle of attack, the flow can be assumed inviscid in the entire region outside the airfoil provided the Kutta condition is imposed. This is known as the potential flow theory and works remarkably well in practice.
Derivation
Two derivations are presented below. The first is a heuristic argument, based on physical insight. The second is a formal and technical one, requiring basic vector analysis and complex analysis.
Heuristic argument
For a heuristic argument, consider a thin airfoil of chord $c$ and infinite span, moving through air of density $\rho$. Let the airfoil be inclined to the oncoming flow to produce an air speed $V + v$ on one side of the airfoil, and an air speed $V - v$ on the other side. The circulation is then

$$\Gamma = (V + v)c - (V - v)c = 2vc.$$

The difference in pressure $\Delta P$ between the two sides of the airfoil can be found by applying Bernoulli's equation:

$$\Delta P = \frac{\rho}{2}(V + v)^2 - \frac{\rho}{2}(V - v)^2 = 2\rho V v,$$

so the downward force on the air, per unit span, is

$$c\,\Delta P = 2\rho V v c = \rho V \Gamma,$$

and the upward force (lift) on the airfoil is $\rho V \Gamma$.
A differential version of this theorem applies on each element of the plate and is the basis of thin-airfoil theory.
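Plugging illustrative numbers into this heuristic (a hypothetical airfoil; all values below are invented for the arithmetic):

```python
rho = 1.225   # air density, kg/m^3
V = 50.0      # free-stream speed, m/s
v = 2.0       # speed perturbation on each side of the airfoil, m/s
c = 1.5       # chord, m

circulation = 2 * v * c                  # Gamma = 2 v c      -> 6.0 m^2/s
lift_per_span = rho * V * circulation    # L' = rho V Gamma   -> 367.5 N/m
print(circulation, lift_per_span)
```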
Formal derivation
Lift forces for more complex situations
The lift predicted by the Kutta-Joukowski theorem within the framework of inviscid potential flow theory is quite accurate, even for real viscous flow, provided the flow is steady and unseparated.
In deriving the Kutta–Joukowski theorem, the assumption of irrotational flow was used. When there are free vortices outside of the body, as may be the case for a large number of unsteady flows, the flow is rotational. When the flow is rotational, more complicated theories should be used to derive the lift forces. Below are several important examples.
Impulsively started flow at small angle of attack
For an impulsively started flow such as obtained by suddenly accelerating an airfoil or setting an angle of attack, there is a vortex sheet continuously shed at the trailing edge and the lift force is unsteady or time-dependent. For small angle of attack starting flow, the vortex sheet follows a planar path, and the curve of the lift coefficient as function of time is given by the Wagner function. In this case the initial lift is one half of the final lift given by the Kutta–Joukowski formula. The lift attains 90% of its steady state value when the wing has traveled a distance of about seven chord lengths.
Impulsively started flow at large angle of attack
When the angle of attack is high enough, the trailing edge vortex sheet is initially in a spiral shape and the lift is singular (infinitely large) at the initial time. The lift drops for a very short time period before the usually assumed monotonically increasing lift curve is reached.
Starting flow at large angle of attack for wings with sharp leading edges
If, as for a flat plate, the leading edge is also sharp, then vortices also shed at the leading edge and the role of leading edge vortices is two-fold: 1) they are lift increasing when they are still close to the leading edge, so that they elevate the Wagner lift curve, and 2) they are detrimental to lift when they are convected to the trailing edge, inducing a new trailing edge vortex spiral moving in the lift decreasing direction. For this type of flow a vortex force line (VFL) map can be used to understand the effect of the different vortices in a variety of situations (including more situations than starting flow) and may be used to improve vortex control to enhance or reduce the lift. The vortex force line map is a two dimensional map on which vortex force lines are displayed. For a vortex at any point in the flow, its lift contribution is proportional to its speed, its circulation and the cosine of the angle between the streamline and the vortex force line. Hence the vortex force line map clearly shows whether a given vortex is lift producing or lift detrimental.
Lagally theorem
When a (mass) source is fixed outside the body, a force correction due to this source can be expressed as the product of the strength of the outside source and the induced velocity at this source by all causes except this source. This is known as the Lagally theorem. For two-dimensional inviscid flow, the classical Kutta–Joukowski theorem predicts zero drag. When, however, there is a vortex outside the body, there is a vortex-induced drag, in a form similar to the induced lift.
Generalized Lagally theorem
For free vortices and other bodies outside one body without bound vorticity and without vortex production, a generalized Lagally theorem holds, with which the forces are expressed as the products of strength of inner singularities (image vortices, sources and doublets inside each body) and the induced velocity at these singularities by all causes except those inside this body. The contribution due to each inner singularity sums up to give the total force. The motion of outside singularities also contributes to forces, and the force component due to this contribution is proportional to the speed of the singularity.
Individual force of each body for multiple-body rotational flow
When, in addition to multiple free vortices and multiple bodies, there are bound vortices and vortex production on the body surface, the generalized Lagally theorem still holds, but a force due to vortex production exists. This vortex production force is proportional to the vortex production rate and the distance between the vortex pair in production. With this approach, an explicit and algebraic force formula, taking into account all causes (inner singularities, outside vortices and bodies, motion of all singularities and bodies, and vortex production), holds individually for each body, with the role of other bodies represented by additional singularities. Hence a force decomposition according to bodies is possible.
General three-dimensional viscous flow
For general three-dimensional, viscous and unsteady flow, force formulas are expressed in integral forms. The volume integration of certain flow quantities, such as vorticity moments, is related to forces. Various forms of integral approach are now available for unbounded domain and for artificially truncated domain. The Kutta Joukowski theorem can be recovered from these approaches when applied to a two-dimensional airfoil and when the flow is steady and unseparated.
Lifting line theory for wings, wing-tip vortices and induced drag
A wing has a finite span, and the circulation at any section of the wing varies with the spanwise direction. This variation is compensated by the release of streamwise vortices, called trailing vortices, due to conservation of vorticity (the Kelvin circulation theorem). These streamwise vortices merge into two counter-rotating strong spirals separated by a distance close to the wingspan, and their cores may be visible if relative humidity is high. Treating the trailing vortices as a series of semi-infinite straight-line vortices leads to the well-known lifting line theory. By this theory, the wing has a lift force smaller than that predicted by a purely two-dimensional theory using the Kutta–Joukowski theorem. This is due to the upstream effects of the trailing vortices' added downwash on the angle of attack of the wing. This reduces the wing's effective angle of attack, decreasing the amount of lift produced at a given angle of attack and requiring a higher angle of attack to recover this lost lift. At this new higher angle of attack, drag has also increased. Induced drag effectively reduces the slope of the lift curve of a 2-D airfoil and increases the angle of attack required for a given lift coefficient (while also decreasing the maximum lift coefficient attainable).
See also
Horseshoe vortex
References
Bibliography
Milne-Thomson, L.M. (1973) Theoretical Aerodynamics, Dover Publications Inc, New York
Aircraft aerodynamics
Eponymous theorems of physics
Fluid dynamics
Physics theorems
Aircraft wing design | Kutta–Joukowski theorem | [
"Physics",
"Chemistry",
"Engineering"
] | 2,449 | [
"Equations of physics",
"Chemical engineering",
"Eponymous theorems of physics",
"Piping",
"Physics theorems",
"Fluid dynamics"
] |
8,606,648 | https://en.wikipedia.org/wiki/Pickett%20CCC%20Memorial%20State%20Park | Pickett Civilian Conservation Corps Memorial State Park (also known simply as Pickett State Park or Pickett CCC Memorial State Park) is a Tennessee state park in the upper Cumberland Mountains. It is located in Pickett County, northeast of the city of Jamestown, and is adjacent to the Big South Fork National River and Recreation Area. The park is located on of wilderness including caves, natural bridges, and other rock formations. About are managed by the Tennessee Department of Environment and Conservation as a state park, and the remainder of the property is managed by the Tennessee Division of Forestry as a state forest.
The park was developed by the Civilian Conservation Corps (CCC) between 1934 and 1942 on land donated to the State of Tennessee in 1933 by the Stearns Coal and Lumber Company. CCC crews built hiking trails, a recreation lodge, a ranger station, five rustic cabins, and a lake known as Arch Lake. Locally quarried sandstone was used in constructing most of the buildings. The original park facilities are listed on the National Register of Historic Places. Pickett's land area has increased over time as a result of additional land donations and acquisitions, and additional park facilities were built beginning in the 1950s.
In 2015, Pickett State Park was designated a Dark Sky Park by the International Dark-Sky Association. Such areas are generally free of artificial light pollution, making them optimal places for stargazing.
The park offers boating, camping, lodging, hiking and many other activities.
Facilities
The park's facilities include 32 campsites and 20 rental cabins. The campsites come with electric and water hookups and have both picnic tables and grills. The campground has a modern bathhouse and a dump station. It is open year-round on a first-come, first-served basis, with a maximum stay limit of two weeks. Of the 20 cabins, there are four types. The first is a rustic CCC cabin that accommodates four people, the second is a Deluxe Cabin that will accommodate up to six people, the third is a Chalet cabin that will accommodate two people, and the last is a Villa cabin that can accommodate up to eight people. Each cabin comes with the following amenities:
Full Modern Bathrooms
Kitchen Appliances
Cooking Utensils
Linens and Towels
Fireplaces
All cabins are available by reservation (up to one year in advance) year round, excepting two rustic units. One deluxe cabin is designated as a pet cabin.
Attractions
At Hazard Cave, the park is home to a species of glow worm discovered in 1975 that is found in only a few places in the United States, another being the Big South Fork National Recreation Area. The glow worms are in fact insect larvae of the fungus gnat (Orfelia fultoni). In the dark confines of the Hazard Cave rock house, these glow worms emit blue, glowing light on the cave walls and the surrounding vegetation. The larvae can be seen throughout the year, but are at their best and brightest in the early weeks of June.
References
External links
Pickett State Park official website
Pickett State Rustic Park, Tennessee Encyclopedia of History and Culture
State parks of Tennessee
Protected areas of Pickett County, Tennessee
National Register of Historic Places in Pickett County, Tennessee
Civilian Conservation Corps in Tennessee
Historic districts on the National Register of Historic Places in Tennessee
Places with bioluminescence | Pickett CCC Memorial State Park | [
"Chemistry",
"Biology"
] | 675 | [
"Places with bioluminescence",
"Bioluminescence"
] |
8,607,369 | https://en.wikipedia.org/wiki/Overmedication | Overmedication describes the excessive use of over-the-counter or prescription medicines for a person. Overmedication can have harmful effects, such as non-adherence or interactions with multiple prescription drugs.
Over-the-counter medication overuse
Over-the-counter (OTC) medications are generally first-line therapies that people may choose to treat common acute illnesses, such as fevers, colds, allergies, headaches, or other pain. Many of these medications can be bought in retail pharmacies or grocery stores without a prescription. OTC medication overuse is most prevalent in adolescents and young adults. This overuse is common due to the relatively low cost, widespread availability, low perceived dangers, and internet culture associated with OTC medications. OTC medications may be combination formulations that contain multiple drugs. These combination formulations are often used with other substances, which complicates treatment for these types of overdoses. Furthermore, the easy access to information online can sometimes lead to self-diagnosis and self-medication, contributing to the potential for misuse and overuse.
Acetaminophen
Overuse of acetaminophen is the leading cause of liver failure in the Western world. The maximum daily limit of acetaminophen is 4 grams per day for someone with a healthy liver; exceeding it can cause severe liver toxicity, liver failure, kidney failure, or even death. People who have poor liver function or chronic alcohol use disorder should limit or avoid acetaminophen to prevent these harms. Additionally, acetaminophen is an ingredient in many combination medications, increasing the risk of unintentional overdose. Consumers should read labels carefully and consult healthcare providers to ensure they are not consuming excessive doses. In cases of suspected overdose, immediate medical attention is needed to mitigate potentially life-threatening consequences.
Codeine
Codeine is an opioid and shares similarities to other opioid overuse. Many OTC medications for cough have formulations that contain codeine, which people may seek to overuse. The common effects of codeine include miosis, respiratory depression, CNS depression, and decreased bowel motility. Despite the risk of death, dependence is another significant issue related to codeine overuse. Tolerance can cause users to use more opioid, leading to dependence, especially with chronic daily use of codeine. Additionally, the misuse of codeine-containing cough syrups has become a public health concern, as it can serve as a gateway to stronger opioids. Education about the risks and signs of opioid addiction can play a role in prevention and early intervention.
Dextromethorphan
Dextromethorphan, also shortened to DXM, acts on the NMDA receptor and serotonin receptors, which is believed to produce its psychoactive effects at high doses. As with codeine, DXM comes primarily in formulations that contain other OTC medications, and it is uncommon to find DXM on its own. Moreover, people who use DXM tend to use it concomitantly with other substances such as alcohol, hallucinogens, sedative drugs, and opioids. DXM has dose-dependent psychoactive effects, with lower doses leading to restlessness and euphoria and higher doses causing hallucinations, delusional beliefs, paranoia, perceptual distortions, ataxia, and out-of-body experiences.
Diphenhydramine
Diphenhydramine is typically used for allergy relief, although it may be used to alleviate sleeping problems, anxiety, and overall restlessness. Effects may include euphoria, hallucinations, or psychosis. The anticholinergic activity of diphenhydramine may lead to tachycardia, dry mouth, blurred vision, mydriasis, depression, and urinary retention.
In 2020, purposeful overmedication with Benadryl (diphenhydramine) was a concern due to use of social media by teenagers in the United States, with the FDA issuing a public warning about the possibility of seizures, hallucinations, breathing difficulty or loss of consciousness.
Pseudoephedrine
Pseudoephedrine, ephedrine or phenylpropanolamine can be overused with the intent for weight loss or improving athletic performance, possibly causing insomnia, diminished sense of fatigue, euphoria, and psychotic behavior. The habitual use of the medication has led to dependence, with symptoms of restlessness, dysphoria, and distorted perceptions on withdrawal.
Elderly
Seniors (65 years old and up) are particularly susceptible to overmedication. Seniors are disproportionately affected not only by adverse drug events, but also by drug interactions and more hospital admissions.
The term for individuals taking five or more medications is polypharmacy, which commonly occurs in elderly people, increasing their risk of overmedication. Medical providers are generally hesitant to prescribe polypharmacy in the elderly due to the risk of harmful drug interactions. Concerns with polypharmacy and elderly groups are reduced medication adherence, increased fall risk, cognitive function impairment, and adverse drug reaction. Almost 75% of clinic visits result in people obtaining a written prescription.
More careful prescribing practices could increase medication adherence in elderly people. Single-pill combination formulations make it easier for a person to monitor medications.
Overprescription
Opioids
Opioids are used for pain management acutely or prescribed after a surgical procedure. While opioids aid in short- and long-term pain management, overprescription or constant opioid-exposure increases the risk for addiction. There is a rise within healthcare systems to manage prescription of opioids. Children prescribed opioids may become susceptible to the harms of addiction.
Reducing or withdrawing prescribed opioids for people with chronic non-cancer pain, whether by dose reduction or by stopping prescriptions, may be effective, but standards are lacking for managing withdrawal symptoms and for deprescribing co-prescribed sedatives. Studies found a significant increase in opioid surplus disposal when individuals were provided with the necessary education or disposal kits.
Antibiotics
As antibiotics inhibit bacterial infections, they are a commonly prescribed medication. Overuse of these medications over the years has contributed to reduced efficacy against certain bacteria due to antimicrobial resistance, a global medical concern. Antibiotic overprescription is a potential problem in acute care, primary hospitals, and dental offices.
Antibiotic-resistant bacterial infections are increasing. A systematic review of admitted COVID-19 patients who were prescribed antibiotics showed that 80% of the admitted people were given antibiotics upon admission without confirmed bacterial coinfections.
Physicians prescribe antibiotics for non-indicated diagnoses, such as viral infections, possibly contributing to more antibiotic-resistant infections, greater adverse drug events, more drug-drug interactions, and deaths. Dentists may prescribe antibiotics for non-indicated conditions that could otherwise be treated with other interventions, according to clinical guidelines.
References
Drugs
Medical error
Unnecessary health care | Overmedication | [
"Chemistry"
] | 1,481 | [
"Pharmacology",
"Chemicals in medicine",
"Drugs",
"Products of chemical industry"
] |
39 | https://en.wikipedia.org/wiki/Albedo | Albedo ( ; ) is the fraction of sunlight that is diffusely reflected by a body. It is measured on a scale from 0 (corresponding to a black body that absorbs all incident radiation) to 1 (corresponding to a body that reflects all incident radiation). Surface albedo is defined as the ratio of radiosity Je to the irradiance Ee (flux per unit area) received by a surface. The proportion reflected is not only determined by properties of the surface itself, but also by the spectral and angular distribution of solar radiation reaching the Earth's surface. These factors vary with atmospheric composition, geographic location, and time (see position of the Sun).
While directional-hemispherical reflectance factor is calculated for a single angle of incidence (i.e., for a given position of the Sun), albedo is the directional integration of reflectance over all solar angles in a given period. The temporal resolution may range from seconds (as obtained from flux measurements) to daily, monthly, or annual averages.
Unless given for a specific wavelength (spectral albedo), albedo refers to the entire spectrum of solar radiation. Due to measurement constraints, it is often given for the spectrum in which most solar energy reaches the surface (between 0.3 and 3 μm). This spectrum includes visible light (0.4–0.7 μm), which explains why surfaces with a low albedo appear dark (e.g., trees absorb most radiation), whereas surfaces with a high albedo appear bright (e.g., snow reflects most radiation).
Ice–albedo feedback is a positive feedback climate process where a change in the area of ice caps, glaciers, and sea ice alters the albedo and surface temperature of a planet. Ice is very reflective, therefore it reflects far more solar energy back to space than the other types of land area or open water. Ice–albedo feedback plays an important role in global climate change. Albedo is an important concept in climate science.
Terrestrial albedo
Any albedo in visible light falls within a range of about 0.9 for fresh snow to about 0.04 for charcoal, one of the darkest substances. Deeply shadowed cavities can achieve an effective albedo approaching the zero of a black body. When seen from a distance, the ocean surface has a low albedo, as do most forests, whereas desert areas have some of the highest albedos among landforms. Most land areas are in an albedo range of 0.1 to 0.4. The average albedo of Earth is about 0.3. This is far higher than for the ocean primarily because of the contribution of clouds.
Earth's surface albedo is regularly estimated via Earth observation satellite sensors such as NASA's MODIS instruments on board the Terra and Aqua satellites, and the CERES instrument on the Suomi NPP and JPSS. As the amount of reflected radiation is only measured for a single direction by satellite, not all directions, a mathematical model is used to translate a sample set of satellite reflectance measurements into estimates of directional-hemispherical reflectance and bi-hemispherical reflectance. These calculations are based on the bidirectional reflectance distribution function (BRDF), which describes how the reflectance of a given surface depends on the view angle of the observer and the solar angle. BRDF can facilitate translations of observations of reflectance into albedo.
Earth's average surface temperature, due to its albedo and the greenhouse effect, is currently about 15 °C. If Earth were frozen entirely (and hence more reflective), the average temperature of the planet would drop well below freezing. If only the continental land masses became covered by glaciers, the mean temperature of the planet would also drop substantially. In contrast, if the entire Earth were covered by water – a so-called ocean planet – the average temperature on the planet would rise.
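The contribution of albedo alone can be checked with a zero-dimensional energy-balance estimate (a standard textbook sketch rather than a calculation from this article): absorbed sunlight $S(1-\alpha)/4$ balances thermal emission $\sigma T^4$, giving an effective temperature of about 255 K for Earth's albedo of ~0.3; the roughly 33 K gap to the observed mean surface temperature is the greenhouse effect.

```python
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0              # solar constant at Earth, W m^-2

def effective_temperature(albedo: float, solar_constant: float = S0) -> float:
    """Equilibrium temperature of a rapidly rotating, greenhouse-free planet."""
    absorbed = solar_constant * (1.0 - albedo) / 4.0   # averaged over the sphere
    return (absorbed / SIGMA) ** 0.25

# Earth's Bond albedo of ~0.3 gives ~255 K; the observed mean surface
# temperature is ~288 K, the difference being due to the greenhouse effect.
print(f"{effective_temperature(0.3):.0f} K")   # -> 255 K
```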
In 2021, scientists reported that Earth dimmed by ~0.5% over two decades (1998–2017), as measured by earthshine using modern photometric techniques. This dimming may have been partly caused by climate change, and it may in turn contribute to further global warming. However, the link to climate change has not been explored to date and it is unclear whether or not this represents an ongoing trend.
White-sky, black-sky, and blue-sky albedo
For land surfaces, it has been shown that the albedo at a particular solar zenith angle $\theta_i$ can be approximated by the proportionate sum of two terms:
the directional-hemispherical reflectance at that solar zenith angle, $\bar{\alpha}(\theta_i)$, sometimes referred to as black-sky albedo, and
the bi-hemispherical reflectance, $\bar{\bar{\alpha}}$, sometimes referred to as white-sky albedo.
With $1 - D$ being the proportion of direct radiation from a given solar angle, and $D$ being the proportion of diffuse illumination, the actual albedo $\alpha$ (also called blue-sky albedo) can then be given as:

$$\alpha = (1 - D)\,\bar{\alpha}(\theta_i) + D\,\bar{\bar{\alpha}}.$$
This formula is important because it allows the albedo to be calculated for any given illumination conditions from a knowledge of the intrinsic properties of the surface.
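A direct transcription of the blend into code (a minimal sketch; the surface values in the example are arbitrary illustrations, not measurements):

```python
def blue_sky_albedo(black_sky: float, white_sky: float, diffuse_fraction: float) -> float:
    """Blend black-sky and white-sky albedo by the diffuse fraction D."""
    if not 0.0 <= diffuse_fraction <= 1.0:
        raise ValueError("diffuse fraction must lie in [0, 1]")
    return (1.0 - diffuse_fraction) * black_sky + diffuse_fraction * white_sky

# Example: a surface with black-sky albedo 0.20 at the current solar zenith
# angle and white-sky albedo 0.25, under 30% diffuse illumination.
print(blue_sky_albedo(0.20, 0.25, 0.30))   # -> 0.215
```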
Changes to albedo due to human activities
Human activities (e.g., deforestation, farming, and urbanization) change the albedo of various areas around the globe. Human impacts to "the physical properties of the land surface can perturb the climate by altering the Earth’s radiative energy balance" even on a small scale or when undetected by satellites.
Urbanization generally decreases albedo (commonly being 0.01–0.02 lower than adjacent croplands), which contributes to global warming. Deliberately increasing albedo in urban areas can mitigate the urban heat island effect. An estimate in 2022 found that on a global scale, "an albedo increase of 0.1 in worldwide urban areas would result in a cooling effect that is equivalent to absorbing ~44 Gt of CO2 emissions."
Intentionally enhancing the albedo of the Earth's surface, along with its daytime thermal emittance, has been proposed as a solar radiation management strategy to mitigate energy crises and global warming known as passive daytime radiative cooling (PDRC). Efforts toward widespread implementation of PDRCs may focus on maximizing the albedo of surfaces from very low to high values, so long as a thermal emittance of at least 90% can be achieved.
The tens of thousands of hectares of greenhouses in Almería, Spain form a large expanse of whitened plastic roofs. A 2008 study found that this anthropogenic change lowered the local surface area temperature of the high-albedo area, although changes were localized. A follow-up study found that "CO2-eq. emissions associated to changes in surface albedo are a consequence of land transformation" and can reduce surface temperature increases associated with climate change.
Examples of terrestrial albedo effects
Illumination
Albedo is not directly dependent on the illumination because changing the amount of incoming light proportionally changes the amount of reflected light, except in circumstances where a change in illumination induces a change in the Earth's surface at that location (e.g. through melting of reflective ice). However, albedo and illumination both vary by latitude. Albedo is highest near the poles and lowest in the subtropics, with a local maximum in the tropics.
Insolation effects
The intensity of albedo temperature effects depends on the amount of albedo and the level of local insolation (solar irradiance); high albedo areas in the Arctic and Antarctic regions are cold due to low insolation, whereas areas such as the Sahara Desert, which also have a relatively high albedo, will be hotter due to high insolation. Tropical and sub-tropical rainforest areas have low albedo, and are much hotter than their temperate forest counterparts, which have lower insolation. Because insolation plays such a big role in the heating and cooling effects of albedo, high insolation areas like the tropics will tend to show a more pronounced fluctuation in local temperature when local albedo changes.
Arctic regions notably release more heat back into space than they absorb, effectively cooling the Earth. This has been a concern because arctic ice and snow have been melting at higher rates due to higher temperatures, creating darker regions of open water or bare ground that reflect less heat back into space. This feedback loop results in a reduced albedo effect.
Climate and weather
Albedo affects climate by determining how much radiation a planet absorbs. The uneven heating of Earth from albedo variations between land, ice, or ocean surfaces can drive weather.
The response of the climate system to an initial forcing is modified by feedbacks: increased by "self-reinforcing" or "positive" feedbacks and reduced by "balancing" or "negative" feedbacks. The main reinforcing feedbacks are the water-vapour feedback, the ice–albedo feedback, and the net effect of clouds.
Albedo–temperature feedback
When an area's albedo changes due to snowfall, a snow–temperature feedback results. A layer of snowfall increases local albedo, reflecting away sunlight, leading to local cooling. In principle, if no outside temperature change affects this area (e.g., a warm air mass), the raised albedo and lower temperature would maintain the current snow and invite further snowfall, deepening the snow–temperature feedback. However, because local weather is dynamic due to the change of seasons, eventually warm air masses and a more direct angle of sunlight (higher insolation) cause melting. When the melted area reveals surfaces with lower albedo, such as grass, soil, or ocean, the effect is reversed: the darkening surface lowers albedo, increasing local temperatures, which induces more melting and thus reducing the albedo further, resulting in still more heating.
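The self-reinforcing character of this feedback can be illustrated with a toy model (an illustrative construction with made-up numbers, not a model from the literature): surface albedo is made to rise as temperature falls below freezing, and the resulting albedo feeds back into the energy balance. Depending on the starting temperature, the system settles into a warm, low-albedo state or a cold, high-albedo state:

```python
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0              # solar constant, W m^-2
GREENHOUSE = 33.0        # crude constant offset (K) standing in for the greenhouse effect

def albedo(temp_k: float) -> float:
    """Toy albedo: icy (0.6) below 255 K, warm (0.3) above 285 K, linear ramp between."""
    if temp_k <= 255.0:
        return 0.6
    if temp_k >= 285.0:
        return 0.3
    return 0.6 - 0.3 * (temp_k - 255.0) / 30.0

def equilibrium(temp_k: float, iterations: int = 200) -> float:
    """Fixed-point iteration of the coupled albedo-temperature system."""
    for _ in range(iterations):
        absorbed = S0 * (1.0 - albedo(temp_k)) / 4.0
        temp_k = (absorbed / SIGMA) ** 0.25 + GREENHOUSE
    return temp_k

# Different starting points settle into different equilibria, which is
# why the feedback is described as "self-reinforcing".
print(f"warm start: {equilibrium(300.0):.1f} K")   # -> ~287.6 K
print(f"cold start: {equilibrium(230.0):.1f} K")   # -> ~254.3 K
```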
Snow
Snow albedo is highly variable, ranging from as high as 0.9 for freshly fallen snow, to about 0.4 for melting snow, and as low as 0.2 for dirty snow. Over Antarctica, snow albedo averages a little more than 0.8. If a marginally snow-covered area warms, snow tends to melt, lowering the albedo, and hence leading to more snowmelt because more radiation is being absorbed by the snowpack (referred to as the ice–albedo positive feedback).
In Switzerland, citizens have been protecting their glaciers with large white tarpaulins to slow down the ice melt. These large white sheets help reflect the sun's rays and deflect heat. Although this method is very expensive, it has been shown to work, reducing snow and ice melt by 60%.
Just as fresh snow has a higher albedo than does dirty snow, the albedo of snow-covered sea ice is far higher than that of sea water. Sea water absorbs more solar radiation than would the same surface covered with reflective snow. When sea ice melts, either due to a rise in sea temperature or in response to increased solar radiation from above, the snow-covered surface is reduced, and more surface of sea water is exposed, so the rate of energy absorption increases. The extra absorbed energy heats the sea water, which in turn increases the rate at which sea ice melts. As with the preceding example of snowmelt, the process of melting of sea ice is thus another example of a positive feedback. Both positive feedback loops have long been recognized as important for global warming.
Cryoconite, powdery windblown dust containing soot, sometimes reduces albedo on glaciers and ice sheets.
The dynamical nature of albedo in response to positive feedback, together with the effects of small errors in the measurement of albedo, can lead to large errors in energy estimates. Because of this, in order to reduce the error of energy estimates, it is important to measure the albedo of snow-covered areas through remote sensing techniques rather than applying a single value for albedo over broad regions.
Small-scale effects
Albedo works on a smaller scale, too. In sunlight, dark clothes absorb more heat and light-coloured clothes reflect it better, thus allowing some control over body temperature by exploiting the albedo effect of the colour of external clothing.
Solar photovoltaic effects
Albedo can affect the electrical energy output of solar photovoltaic devices. For example, the effects of a spectrally responsive albedo are illustrated by the differences between the spectrally weighted albedo of solar photovoltaic technologies based on hydrogenated amorphous silicon (a-Si:H) and crystalline silicon (c-Si), compared to traditional spectrally integrated albedo predictions. Research showed impacts of over 10% for vertically (90°) mounted systems, but such effects were substantially lower for systems with lower surface tilts. Spectral albedo strongly affects the performance of bifacial solar cells, where rear-surface performance gains of over 20% have been observed for c-Si cells installed above healthy vegetation. An analysis of the bias due to the specular reflectivity of 22 commonly occurring surface materials (both human-made and natural) provided effective albedo values for simulating the performance of seven photovoltaic materials mounted on three common photovoltaic system topologies: industrial (solar farms), commercial flat rooftops, and residential pitched-roof applications.
Trees
Forests generally have a low albedo because the majority of the ultraviolet and visible spectrum is absorbed through photosynthesis. For this reason, the greater heat absorption by trees could offset some of the carbon benefits of afforestation (or offset the negative climate impacts of deforestation). In other words: The climate change mitigation effect of carbon sequestration by forests is partially counterbalanced in that reforestation can decrease the reflection of sunlight (albedo).
In the case of evergreen forests with seasonal snow cover, albedo reduction may be significant enough for deforestation to cause a net cooling effect. Trees also impact climate in extremely complicated ways through evapotranspiration. The water vapor causes cooling on the land surface, causes heating where it condenses, acts as strong greenhouse gas, and can increase albedo when it condenses into clouds. Scientists generally treat evapotranspiration as a net cooling impact, and the net climate impact of albedo and evapotranspiration changes from deforestation depends greatly on local climate.
Mid-to-high-latitude forests have a much lower albedo during snow seasons than flat ground, thus contributing to warming. Modeling that compares the effects of albedo differences between forests and grasslands suggests that expanding the land area of forests in temperate zones offers only a temporary mitigation benefit.
In seasonally snow-covered zones, winter albedos of treeless areas are 10% to 50% higher than nearby forested areas because snow does not cover the trees as readily. Deciduous trees have an albedo value of about 0.15 to 0.18 whereas coniferous trees have a value of about 0.09 to 0.15. Variation in summer albedo across both forest types is associated with maximum rates of photosynthesis because plants with high growth capacity display a greater fraction of their foliage for direct interception of incoming radiation in the upper canopy. The result is that wavelengths of light not used in photosynthesis are more likely to be reflected back to space rather than being absorbed by other surfaces lower in the canopy.
Studies by the Hadley Centre have investigated the relative (generally warming) effect of albedo change and (cooling) effect of carbon sequestration on planting forests. They found that new forests in tropical and midlatitude areas tended to cool; new forests in high latitudes (e.g., Siberia) were neutral or perhaps warming.
Research in 2023, drawing from 176 flux stations globally, revealed a climate trade-off: increased carbon uptake from afforestation results in reduced albedo. Initially, this reduction may lead to moderate global warming over a span of approximately 20 years, but it is expected to transition into significant cooling thereafter.
Water
Water reflects light very differently from typical terrestrial materials. The reflectivity of a water surface is calculated using the Fresnel equations.
At the scale of the wavelength of light, even wavy water is always smooth, so light is reflected in a locally specular manner (not diffusely). The glint of light off water is a commonplace effect of this. At small angles of incident light, waviness results in reduced reflectivity because of the steepness of the reflectivity-vs.-incident-angle curve and a locally increased average incident angle.
Although the reflectivity of water is very low at low and medium angles of incident light, it becomes very high at high angles of incident light such as those that occur on the illuminated side of Earth near the terminator (early morning, late afternoon, and near the poles). However, as mentioned above, waviness causes an appreciable reduction. Because light specularly reflected from water does not usually reach the viewer, water is usually considered to have a very low albedo in spite of its high reflectivity at high angles of incident light.
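The angular behaviour described here follows directly from the Fresnel equations. Below is a minimal sketch for a smooth air-water interface, assuming an unpolarized source and a visible-light refractive index of about 1.33 for water; real water surfaces (waves, foam, suspended matter) will deviate from it.

```python
import numpy as np

# Unpolarized Fresnel reflectance of a smooth air-water interface.
def fresnel_reflectance(theta_i_deg, n1=1.0, n2=1.33):
    ti = np.radians(theta_i_deg)                            # angle of incidence
    tt = np.arcsin(np.clip(n1 * np.sin(ti) / n2, -1, 1))    # Snell's law
    rs = ((n1 * np.cos(ti) - n2 * np.cos(tt)) /
          (n1 * np.cos(ti) + n2 * np.cos(tt))) ** 2         # s-polarized
    rp = ((n1 * np.cos(tt) - n2 * np.cos(ti)) /
          (n1 * np.cos(tt) + n2 * np.cos(ti))) ** 2         # p-polarized
    return 0.5 * (rs + rp)                                  # unpolarized average

for angle in (0, 30, 60, 80, 89):
    print(f"{angle:2d} deg: R = {fresnel_reflectance(angle):.3f}")
# Near-normal incidence gives R of roughly 0.02, while grazing incidence
# approaches 1, matching the high reflectivity near the terminator.
```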
Note that white caps on waves look white (and have a high albedo) because the water is foamed up, so there are many superimposed bubble surfaces which reflect, adding up their reflectivities. Fresh 'black' sea ice exhibits Fresnel reflection; snow cover on top of sea ice increases the albedo to 0.9.
Clouds
Cloud albedo has substantial influence over atmospheric temperatures. Different types of clouds exhibit different reflectivity, theoretically ranging in albedo from a minimum of near 0 to a maximum approaching 0.8. "On any given day, about half of Earth is covered by clouds, which reflect more sunlight than land and water. Clouds keep Earth cool by reflecting sunlight, but they can also serve as blankets to trap warmth."
Albedo and climate in some areas are affected by artificial clouds, such as those created by the contrails of heavy commercial airliner traffic. A study following the burning of the Kuwaiti oil fields during the Iraqi occupation showed that temperatures under the smoke plumes of the burning oil fires were markedly colder than temperatures several miles away under clear skies.
Aerosol effects
Aerosols (very fine particles/droplets in the atmosphere) have both direct and indirect effects on Earth's radiative balance. The direct (albedo) effect is generally to cool the planet; the indirect effect (the particles act as cloud condensation nuclei and thereby change cloud properties) is less certain.
Black carbon
Another albedo-related effect on the climate is from black carbon particles. The size of this effect is difficult to quantify: the Intergovernmental Panel on Climate Change estimates that the global mean radiative forcing for black carbon aerosols from fossil fuels is +0.2 W m⁻², with a range of +0.1 to +0.4 W m⁻². Black carbon is a bigger cause of the melting of the polar ice cap in the Arctic than carbon dioxide due to its effect on the albedo.
Astronomical albedo
In astronomy, the term albedo can be defined in several different ways, depending upon the application and the wavelength of electromagnetic radiation involved.
Optical or visual albedo
The albedos of planets, satellites and minor planets such as asteroids can be used to infer much about their properties. The study of albedos, their dependence on wavelength, lighting angle ("phase angle"), and variation in time composes a major part of the astronomical field of photometry. For small and far objects that cannot be resolved by telescopes, much of what we know comes from the study of their albedos. For example, the absolute albedo can indicate the surface ice content of outer Solar System objects, the variation of albedo with phase angle gives information about regolith properties, whereas unusually high radar albedo is indicative of high metal content in asteroids.
Enceladus, a moon of Saturn, has one of the highest known optical albedos of any body in the Solar System, with an albedo of 0.99. Another notable high-albedo body is Eris, with an albedo of 0.96. Many small objects in the outer Solar System and asteroid belt have low albedos down to about 0.05. A typical comet nucleus has an albedo of 0.04. Such a dark surface is thought to be indicative of a primitive and heavily space weathered surface containing some organic compounds.
The overall albedo of the Moon is measured to be around 0.14, but it is strongly directional and non-Lambertian, displaying also a strong opposition effect. Although such reflectance properties are different from those of any terrestrial terrains, they are typical of the regolith surfaces of airless Solar System bodies.
Two common optical albedos that are used in astronomy are the (V-band) geometric albedo (measuring brightness when illumination comes from directly behind the observer) and the Bond albedo (measuring total proportion of electromagnetic energy reflected). Their values can differ significantly, which is a common source of confusion.
In detailed studies, the directional reflectance properties of astronomical bodies are often expressed in terms of the five Hapke parameters which semi-empirically describe the variation of albedo with phase angle, including a characterization of the opposition effect of regolith surfaces. One of these five parameters is yet another type of albedo called the single-scattering albedo. It is used to define scattering of electromagnetic waves on small particles. It depends on properties of the material (refractive index), the size of the particle, and the wavelength of the incoming radiation.
An important relationship between an object's astronomical (geometric) albedo, absolute magnitude and diameter is given by:

$A = \left( \frac{1329 \times 10^{-H/5}}{D} \right)^{2}$

where $A$ is the astronomical albedo, $D$ is the diameter in kilometers, and $H$ is the absolute magnitude (the constant 1329 carries units of kilometers).
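As a rough numerical illustration, the relation can be rearranged to give diameter from absolute magnitude and albedo. The helper function and example values below are arbitrary choices for illustration, not data from any survey.

```python
import math

# D = 1329 km * 10**(-H/5) / sqrt(A), rearranged from the relation above.
def diameter_km(H, albedo):
    return 1329.0 * 10 ** (-H / 5.0) / math.sqrt(albedo)

# Example: an asteroid with absolute magnitude H = 15.
for A in (0.05, 0.25):  # dark vs. moderately bright surface
    print(f"A = {A:.2f}: D ~ {diameter_km(15, A):.1f} km")
# The same brightness can correspond to a large dark body or a smaller
# bright one, which is why albedo matters when estimating sizes.
```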
Radar albedo
In planetary radar astronomy, a microwave (or radar) pulse is transmitted toward a planetary target (e.g. Moon, asteroid, etc.) and the echo from the target is measured. In most instances, the transmitted pulse is circularly polarized and the received pulse is measured in the same sense of polarization as the transmitted pulse (SC) and the opposite sense (OC). The echo power is measured in terms of radar cross-section, $\sigma_{OC}$, $\sigma_{SC}$, or $\sigma_{T}$ (total power, SC + OC), and is equal to the cross-sectional area of a metallic sphere (a perfect reflector) at the same distance as the target that would return the same echo power.
Those components of the received echo that return from first-surface reflections (as from a smooth or mirror-like surface) are dominated by the OC component as there is a reversal in polarization upon reflection. If the surface is rough at the wavelength scale or there is significant penetration into the regolith, there will be a significant SC component in the echo caused by multiple scattering.
For most objects in the solar system, the OC echo dominates and the most commonly reported radar albedo parameter is the (normalized) OC radar albedo (often shortened to radar albedo):

$\hat{\sigma}_{OC} = \frac{\sigma_{OC}}{\pi r^{2}}$

where the denominator is the effective cross-sectional area of the target object with mean radius $r$. A smooth metallic sphere would have $\hat{\sigma}_{OC} = 1$.
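A minimal sketch of this normalization follows; the cross-section and radius values are made-up numbers chosen only to show the arithmetic.

```python
import math

# Normalized OC radar albedo: measured OC radar cross-section divided by
# the target's geometric cross-section pi * r**2 (r = mean radius).
def oc_radar_albedo(sigma_oc_km2, mean_radius_km):
    return sigma_oc_km2 / (math.pi * mean_radius_km ** 2)

# Illustrative (made-up) numbers: an asteroid of mean radius 5 km
# returning an OC radar cross-section of 8 km^2.
print(f"OC radar albedo: {oc_radar_albedo(8.0, 5.0):.2f}")  # ~0.10
```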
Radar albedos of Solar System objects
The values reported for the Moon, Mercury, Mars, Venus, and Comet P/2005 JQ5 are derived from the total (OC+SC) radar albedo reported in those references.
Relationship to surface bulk density
In the event that most of the echo is from first-surface reflections, the OC radar albedo is a first-order approximation of the Fresnel reflection coefficient (also known as the reflectivity) and can be used to estimate the bulk density of a planetary surface to a depth of a meter or so (a few radar wavelengths, the radar wavelength typically being at the decimeter scale) using empirical relationships between reflectivity and bulk density.
History
The term albedo was introduced into optics by Johann Heinrich Lambert in his 1760 work Photometria.
See also
Bio-geoengineering
Cool roof
Daisyworld
Emissivity
Exitance
Global dimming
Ice–albedo feedback
Irradiance
Kirchhoff's law of thermal radiation
Opposition surge
Polar see-saw
Radar astronomy
Solar radiation management
References
External links
Albedo Project
Albedo – Encyclopedia of Earth
NASA MODIS BRDF/albedo product site
Ocean surface albedo look-up-table
Surface albedo derived from Meteosat observations
A discussion of Lunar albedos
reflectivity of metals (chart)
Land surface effects on climate
Climate change feedbacks
Climate forcing
Climatology
Electromagnetic radiation
Meteorological quantities
Radiometry
Scattering, absorption and radiative transfer (optics)
Radiation
1760s neologisms | Albedo | [
"Physics",
"Chemistry",
"Mathematics",
"Engineering"
] | 5,083 | [
"Transport phenomena",
"Physical phenomena",
" absorption and radiative transfer (optics)",
"Telecommunications engineering",
"Physical quantities",
"Electromagnetic radiation",
"Quantity",
"Meteorological quantities",
"Waves",
"Scattering",
"Radiation",
"Radiometry"
] |
334 | https://en.wikipedia.org/wiki/International%20Atomic%20Time | International Atomic Time (abbreviated TAI, from its French name ) is a high-precision atomic coordinate time standard based on the notional passage of proper time on Earth's geoid. TAI is a weighted average of the time kept by over 450 atomic clocks in over 80 national laboratories worldwide. It is a continuous scale of time, without leap seconds, and it is the principal realisation of Terrestrial Time (with a fixed offset of epoch). It is the basis for Coordinated Universal Time (UTC), which is used for civil timekeeping all over the Earth's surface and which has leap seconds.
UTC deviates from TAI by a number of whole seconds. Since 1 January 2017, immediately after the most recent leap second was put into effect, UTC has been exactly 37 seconds behind TAI. The 37 seconds result from the initial difference of 10 seconds at the start of 1972, plus 27 leap seconds in UTC since 1972. In 2022, the General Conference on Weights and Measures decided to abandon the leap second by or before 2035, at which point the difference between TAI and UTC will remain fixed.
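As a minimal sketch, the post-2017 relationship can be applied as a constant offset. The hypothetical helper below is valid only for instants after 1 January 2017; a general converter would need the full leap-second table.

```python
from datetime import datetime, timedelta

# TAI - UTC has been a constant 37 s since the leap second at the end of
# 2016; earlier instants require the complete leap-second history.
TAI_MINUS_UTC = timedelta(seconds=37)

def utc_from_tai(tai: datetime) -> datetime:
    """Convert a TAI timestamp to UTC; valid only after 1 January 2017."""
    return tai - TAI_MINUS_UTC

tai = datetime(2024, 6, 1, 12, 0, 37)
print(utc_from_tai(tai))  # 2024-06-01 12:00:00
```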
TAI may be reported using traditional means of specifying days, carried over from non-uniform time standards based on the rotation of the Earth. Specifically, both Julian days and the Gregorian calendar are used. TAI in this form was synchronised with Universal Time at the beginning of 1958, and the two have drifted apart ever since, due primarily to the slowing rotation of the Earth.
Operation
TAI is a weighted average of the time kept by over 450 atomic clocks in over 80 national laboratories worldwide. The majority of the clocks involved are caesium clocks; the International System of Units (SI) definition of the second is based on caesium. The clocks are compared using GPS signals and two-way satellite time and frequency transfer. Due to the signal averaging, TAI is an order of magnitude more stable than its best constituent clock.
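Why averaging many clocks helps can be sketched with a toy model: for independent white-noise errors, the spread of the mean of N clocks shrinks by roughly the square root of N, and with N around 450 that factor is about 21, an order of magnitude. This ignores the BIPM's actual performance-based weighting and the correlated noise of real clocks, so treat it only as an order-of-magnitude illustration.

```python
import numpy as np

# Toy ensemble: 450 clocks with independent, unit-variance white-noise
# errors, sampled 10,000 times.
rng = np.random.default_rng(0)
N, samples = 450, 10_000
clock_errors = rng.normal(0.0, 1.0, size=(samples, N))

single = clock_errors[:, 0].std()          # spread of one clock
ensemble = clock_errors.mean(axis=1).std() # spread of the ensemble mean

print(f"single clock spread: {single:.3f}")
print(f"ensemble spread:     {ensemble:.3f}  (~1/sqrt({N}) = {N**-0.5:.3f})")
```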
The participating institutions each broadcast, in real time, a frequency signal with timecodes, which is their estimate of TAI. Time codes are usually published in the form of UTC, which differs from TAI by a well-known integer number of seconds. These time scales are denoted in the form UTC(NPL) in the UTC form, where NPL here identifies the National Physical Laboratory, UK. The TAI form may be denoted TAI(NPL). The latter is not to be confused with TA(NPL), which denotes an independent atomic time scale, not synchronised to TAI or to anything else.
The clocks at different institutions are regularly compared against each other. The International Bureau of Weights and Measures (BIPM, France) combines these measurements to retrospectively calculate the weighted average that forms the most stable time scale possible. This combined time scale is published monthly in "Circular T", and is the canonical TAI. This time scale is expressed in the form of tables of differences UTC − UTC(k) (equal to TAI − TAI(k)) for each participating institution k. The same circular also gives tables of TAI − TA(k), for the various unsynchronised atomic time scales.
Errors in publication may be corrected by issuing a revision of the faulty Circular T or by errata in a subsequent Circular T. Aside from this, once published in Circular T, the TAI scale is not revised. In hindsight, it is possible to discover errors in TAI and to make better estimates of the true proper time scale. Since the published circulars are definitive, better estimates do not create another version of TAI; it is instead considered to be creating a better realisation of Terrestrial Time (TT).
History
Early atomic time scales consisted of quartz clocks with frequencies calibrated by a single atomic clock; the atomic clocks were not operated continuously. Atomic timekeeping services started experimentally in 1955, using the first caesium atomic clock at the National Physical Laboratory, UK (NPL). It was used as a basis for calibrating the quartz clocks at the Royal Greenwich Observatory and to establish a time scale, called Greenwich Atomic (GA). The United States Naval Observatory began the A.1 scale on 13 September 1956, using an Atomichron commercial atomic clock, followed by the NBS-A scale at the National Bureau of Standards, Boulder, Colorado on 9 October 1957.
The International Time Bureau (BIH) began a time scale, Tm or AM, in July 1955, using both local caesium clocks and comparisons to distant clocks using the phase of VLF radio signals. The BIH scale, A.1, and NBS-A were defined by an epoch at the beginning of 1958. The procedures used by the BIH evolved, and the name for the time scale changed: A3 in 1964 and TA(BIH) in 1969.
The SI second was defined in terms of the caesium atom in 1967. From 1971 to 1975 the General Conference on Weights and Measures and the International Committee for Weights and Measures made a series of decisions that designated the BIPM time scale International Atomic Time (TAI).
In the 1970s, it became clear that the clocks participating in TAI were ticking at different rates due to gravitational time dilation, and the combined TAI scale, therefore, corresponded to an average of the altitudes of the various clocks. Starting from the Julian Date 2443144.5 (1 January 1977 00:00:00 TAI), corrections were applied to the output of all participating clocks, so that TAI would correspond to proper time at the geoid (mean sea level). Because the clocks were, on average, well above sea level, this meant that TAI slowed by about one part in a trillion. The former uncorrected time scale continues to be published under the name EAL (Échelle Atomique Libre, meaning Free Atomic Scale).
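The altitude dependence behind this correction can be illustrated with the first-order gravitational frequency shift g·h/c². The altitudes below are illustrative choices, not the actual distribution of the participating clocks.

```python
# Fractional frequency shift of a clock at height h above the geoid,
# to first order: df/f ~ g * h / c**2. Illustrative values only.
g = 9.81           # m/s^2, surface gravity (approximate)
c = 299_792_458    # m/s, speed of light

for h in (1000.0, 1650.0):  # e.g. roughly the altitude of Boulder, Colorado
    shift = g * h / c ** 2
    print(f"h = {h:6.0f} m: df/f ~ {shift:.2e}")
# Each kilometre of altitude contributes roughly 1e-13 to a clock's rate;
# this altitude-dependent rate difference is what the 1977 geoid
# correction removed from TAI.
```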
The instant that the gravitational correction started to be applied serves as the epoch for Barycentric Coordinate Time (TCB), Geocentric Coordinate Time (TCG), and Terrestrial Time (TT), which represent three fundamental time scales in the solar system. All three of these time scales were defined to read JD 2443144.5003725 (1 January 1977 00:00:32.184) exactly at that instant. TAI was henceforth a realisation of TT, with the equation TT(TAI) = TAI + 32.184 s.
The continued existence of TAI was questioned in a 2007 letter from the BIPM to the ITU-R which stated, "In the case of a redefinition of UTC without leap seconds, the CCTF would consider discussing the possibility of suppressing TAI, as it would remain parallel to the continuous UTC."
Relation to UTC
Unlike TAI, UTC is a discontinuous time scale. It is occasionally adjusted by leap seconds. Between these adjustments, it is composed of segments that are mapped to atomic time by a constant offset. From its beginning in 1961 through December 1971, the adjustments were made regularly in fractional leap seconds so that UTC approximated UT2. Afterwards, these adjustments were made only in whole seconds to approximate UT1. This was a compromise arrangement in order to enable a publicly broadcast time scale. The less frequent whole-second adjustments meant that the time scale would be more stable and easier to synchronize internationally. The fact that it continues to approximate UT1 means that tasks such as navigation which require a source of Universal Time continue to be well served by the public broadcast of UTC.
See also
Clock synchronization
Time and frequency transfer
Notes
References
Footnotes
Bibliography
External links
BIPM technical services: Time Metrology
Time and Frequency Section - National Physical Laboratory, UK
IERS website
NIST Web Clock FAQs
History of time scales
NIST-F1 Cesium Fountain Atomic Clock
Japan Standard Time Project, NICT, Japan
Standard of time definition: UTC, GPS, LORAN and TAI
Time scales | International Atomic Time | [
"Physics",
"Astronomy"
] | 1,601 | [
"Physical quantities",
"Time",
"Astronomical coordinate systems",
"Spacetime",
"Time scales"
] |
336 | https://en.wikipedia.org/wiki/Altruism | Altruism is the concern for the well-being of others, independently of personal benefit or reciprocity.
The word altruism was popularised (and possibly coined) by the French philosopher Auguste Comte in French, as altruisme, as an antonym of egoism. He derived it from the Italian altrui, which in turn was derived from the Latin alteri, meaning "other people" or "somebody else". Altruism may be considered a synonym of selflessness, the opposite of self-centeredness.
Altruism is an important moral value in many cultures and religions. It can expand beyond care for humans to include other sentient beings and future generations.
Altruism, as observed in populations of organisms, is when an individual performs an action at a cost to itself (in terms of e.g. pleasure and quality of life, time, probability of survival or reproduction) that benefits, directly or indirectly, another individual, without the expectation of reciprocity or compensation for that action.
The theory of psychological egoism suggests that no act of sharing, helping, or sacrificing can be "truly" altruistic, as the actor may receive an intrinsic reward in the form of personal gratification. The validity of this argument depends on whether such intrinsic rewards qualify as "benefits".
The term altruism can also refer to an ethical doctrine that claims that individuals are morally obliged to benefit others. Used in this sense, it is usually contrasted with egoism, which claims individuals are morally obligated to serve themselves first.
Effective altruism is the use of evidence and reason to determine the most effective ways to benefit others.
The notion of altruism
The concept of altruism has a history in philosophical and ethical thought. The term was coined in the 19th century by the founding sociologist and philosopher of science Auguste Comte, and has become a major topic for psychologists (especially evolutionary psychology researchers), evolutionary biologists, and ethologists. Whilst ideas about altruism from one field can affect the other fields, the different methods and focuses of these fields always lead to different perspectives on altruism. In simple terms, altruism is caring about the welfare of other people and acting to help them, above oneself.
Cross-cultural perspectives on altruism
Cross-cultural perspectives on altruism show that how we view and experience helping others depends heavily on where we come from. In individualistic cultures, like many Western countries, acts of altruism often bring personal joy and satisfaction, as they align with values that emphasize individual achievement and self-fulfillment. On the other hand, in collectivist cultures, common in many Eastern societies, altruism is often seen as a responsibility to the group rather than a personal choice. This difference means that people in collectivist cultures might not feel the same personal happiness from helping others, as the act is more about fulfilling social obligations. Ultimately, these variations highlight how deeply cultural norms shape the way we approach and experience altruism.
Scientific viewpoints
Anthropology
Marcel Mauss's essay The Gift contains a passage called "Note on alms", which describes the evolution of the notion of alms (and by extension of altruism) from the notion of sacrifice.
Evolutionary explanations
In ethology (the scientific study of animal behaviour), and more generally in the study of social evolution, altruism refers to behavior by an individual that increases the fitness of another individual while decreasing the fitness of the actor. In evolutionary psychology this term may be applied to a wide range of human behaviors such as charity, emergency aid, help to coalition partners, tipping, courtship gifts, production of public goods, and environmentalism.
The need for an explanation of altruistic behavior that is compatible with evolutionary origins has driven the development of new theories. Two related strands of research on altruism have emerged from traditional evolutionary analyses and evolutionary game theory: a mathematical model and analysis of behavioral strategies.
Some of the proposed mechanisms are:
Kin selection. That animals and humans are more altruistic towards close kin than to distant kin and non-kin has been confirmed in numerous studies across many different cultures. Even subtle cues indicating kinship may unconsciously increase altruistic behavior. One kinship cue is facial resemblance. One study found that slightly altering photographs to resemble the faces of study participants more closely increased the trust the participants expressed regarding depicted persons. Another cue is having the same family name, especially if rare, which has been found to increase helpful behavior. Another study found more cooperative behavior, the greater the number of perceived kin in a group. Using kinship terms in political speeches increased audience agreement with the speaker in one study. This effect was powerful for firstborns, who are typically close to their families.
Vested interests. People are likely to suffer if their friends, allies and those from similar social ingroups suffer or disappear. Helping such group members may, therefore, also benefit the altruist. Making ingroup membership more noticeable increases cooperativeness. Extreme self-sacrifice towards the ingroup may be adaptive if a hostile outgroup threatens the entire ingroup.
Reciprocal altruism. See also Reciprocity (evolution).
Direct reciprocity. Research shows that it can be beneficial to help others if there is a chance that they will reciprocate the help. The effective tit-for-tat strategy is one game-theoretic example (a minimal simulation of it appears after this list of mechanisms). Many people seem to follow a similar strategy by cooperating if and only if others cooperate in return.
One consequence is that people are more cooperative with one another if they are more likely to interact again in the future. People tend to be less cooperative if they perceive that the frequency of helpers in the population is lower. They tend to help less if they see non-cooperativeness by others, and this effect tends to be stronger than the opposite effect of seeing cooperative behaviors. Simply changing the cooperative framing of a proposal may increase cooperativeness, such as calling it a "Community Game" instead of a "Wall Street Game".
A tendency towards reciprocity implies that people feel obligated to respond if someone helps them. This has been used by charities that give small gifts to potential donors hoping to induce reciprocity. Another method is to announce publicly that someone has given a large donation. The tendency to reciprocate can even generalize, so people become more helpful toward others after being helped. On the other hand, people will avoid or even retaliate against those perceived not to be cooperating. People sometimes mistakenly fail to help when they intended to, or their helping may not be noticed, which may cause unintended conflicts. As such, it may be an optimal strategy to be slightly forgiving of and have a slightly generous interpretation of non-cooperation.
People are more likely to cooperate on a task if they can communicate with one another first. This may be due to better cooperativeness assessments or promises exchange. They are more cooperative if they can gradually build trust instead of being asked to give extensive help immediately. Direct reciprocity and cooperation in a group can be increased by changing the focus and incentives from intra-group competition to larger-scale competitions, such as between groups or against the general population. Thus, giving grades and promotions based only on an individual's performance relative to a small local group, as is common, may reduce cooperative behaviors in the group.
Indirect reciprocity. Because people avoid poor reciprocators and cheaters, a person's reputation is important. A person esteemed for their reciprocity is more likely to receive assistance, even from individuals they have not directly interacted with before.
Strong reciprocity. This form of reciprocity is expressed by people who invest more resources in cooperation and punishment than what is deemed optimal based on established theories of altruism.
Pseudo-reciprocity. An organism behaves altruistically; the recipient does not reciprocate, but has an increased chance of acting in a way that is selfish but that also, as a byproduct, benefits the altruist.
Costly signaling and the handicap principle. Altruism, by diverting resources from the altruist, can act as an "honest signal" of available resources and the skills to acquire them. This may signal to others that the altruist is a valuable potential partner. It may also signal interactive and cooperative intentions, since someone who does not expect to interact further in the future gains nothing from such costly signaling. While it's uncertain if costly signaling can predict long-term cooperative traits, people tend to trust helpers more. Costly signaling loses its value when everyone shares identical traits, resources, and cooperative intentions, but it gains significance as population variability in these aspects increases.
Hunters who share meat display a costly signal of ability. The research found that good hunters have higher reproductive success and more adulterous relations even if they receive no more of the hunted meat than anyone else. Similarly, holding large feasts and giving large donations are ways of demonstrating one's resources. Heroic risk-taking has also been interpreted as a costly signal of ability.
Both indirect reciprocity and costly signaling depend on reputation value and tend to make similar predictions. One is that people will be more helpful when they know that their helping behavior will be communicated to people they will interact with later, publicly announced, discussed, or observed by someone else. This has been documented in many studies. The effect is sensitive to subtle cues, such as people being more helpful when there were stylized eyespots instead of a logo on a computer screen. Weak reputational cues such as eyespots may become unimportant if there are stronger cues present and may lose their effect with continued exposure unless reinforced with real reputational effects. Public displays such as public weeping for dead celebrities and participation in demonstrations may be influenced by a desire to be seen as generous. People who know that they are publicly monitored sometimes even wastefully donate the money they know is not needed by the recipient because of reputational concerns.
Typically, women find altruistic men to be attractive partners. When women look for a long-term partner, altruism may be a trait they prefer as it may indicate that the prospective partner is also willing to share resources with her and her children. Men perform charitable acts in the early stages of a romantic relationship or simply when in the presence of an attractive woman. While both sexes state that kindness is the most preferable trait in a partner, there is some evidence that men place less value on this than women and that women may not be more altruistic in the presence of an attractive man. Men may even avoid altruistic women in short-term relationships, which may be because they expect less success.
People may compete for the social benefit of a burnished reputation, which may cause competitive altruism. On the other hand, in some experiments, a proportion of people do not seem to care about reputation and do not help more, even if this is conspicuous. This may be due to reasons such as psychopathy or that they are so attractive that they need not be seen as altruistic. The reputational benefits of altruism occur in the future compared to the immediate costs of altruism. While humans and other organisms generally place less value on future costs/benefits as compared to those in the present, some have shorter time horizons than others, and these people tend to be less cooperative.
Explicit extrinsic rewards and punishments have sometimes been found to have a counterintuitively inverse effect on behaviors when compared to intrinsic rewards. This may be because such extrinsic incentives may replace (partially or in whole) intrinsic and reputational incentives, motivating the person to focus on obtaining the extrinsic rewards, which may make the thus-incentivized behaviors less desirable. People prefer altruism in others when it appears to be due to a personality characteristic rather than overt reputational concerns; simply pointing out that there are reputational benefits of action may reduce them. This may be used as a derogatory tactic against altruists ("you're just virtue signalling"), especially by those who are non-cooperators. A counterargument is that doing good due to reputational concerns is better than doing no good.
Group selection. It has controversially been argued by some evolutionary scientists such as David Sloan Wilson that natural selection can act at the level of non-kin groups to produce adaptations that benefit a non-kin group, even if these adaptations are detrimental at the individual level. Thus, while altruistic persons may under some circumstances be outcompeted by less altruistic persons at the individual level, according to group selection theory, the opposite may occur at the group level where groups consisting of the more altruistic persons may outcompete groups consisting of the less altruistic persons. Such altruism may only extend to ingroup members while directing prejudice and antagonism against outgroup members (see also in-group favoritism). Many other evolutionary scientists have criticized group selection theory.
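As referenced under direct reciprocity above, a minimal simulation of tit for tat in the iterated prisoner's dilemma is sketched below. The payoff values are the conventional textbook ones (T=5, R=3, P=1, S=0), not figures from any of the studies cited.

```python
# Iterated prisoner's dilemma: tit for tat (cooperate first, then copy the
# partner's previous move) against itself and against an always-defector.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_b)  # each strategy sees the other's moves
        move_b = strategy_b(hist_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a += pa
        score_b += pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (300, 300): sustained cooperation
print(play(tit_for_tat, always_defect))  # (99, 104): exploited only once
```

Against itself, tit for tat sustains mutual cooperation indefinitely; against a pure defector it is exploited only in the first round, which is why conditional cooperation can be beneficial when repeat interaction is likely.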
Such explanations do not imply that humans consciously calculate how to increase their inclusive fitness when doing altruistic acts. Instead, evolution has shaped psychological mechanisms, such as emotions, that promote certain altruistic behaviors.
The benefits for the altruist may be increased, and the costs reduced, by being more altruistic towards certain groups. Research has found that people are more altruistic to kin than to non-kin, to friends than to strangers, to the attractive than to the unattractive, to non-competitors than to competitors, and to members of in-groups than to members of out-groups.
The study of altruism was the initial impetus behind George R. Price's development of the Price equation, a mathematical equation used to study genetic evolution. An interesting example of altruism is found in the cellular slime moulds, such as Dictyostelium mucoroides. These protists live as individual amoebae until starved, at which point they aggregate and form a multicellular fruiting body in which some cells sacrifice themselves to promote the survival of other cells in the fruiting body.
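The Price equation partitions the change in the population mean of a trait into a selection (covariance) term and a transmission term: $\bar{w}\,\Delta\bar{z} = \operatorname{Cov}(w_i, z_i) + \mathrm{E}(w_i\,\Delta z_i)$. A toy numerical check follows, assuming equal-sized groups and made-up trait frequencies and fitnesses.

```python
import numpy as np

# Numerical check of the Price equation:
#   delta_zbar = (Cov(w, z) + E(w * delta_z)) / w_mean
# z = frequency of an altruistic trait per group, w = group fitness.
z = np.array([0.2, 0.5, 0.8])   # trait frequency per group (toy values)
w = np.array([1.0, 1.2, 1.5])   # fitness of each group (toy values)
dz = np.zeros_like(z)           # assume no within-group change

w_mean = w.mean()
cov_term = np.mean(w * z) - w_mean * z.mean()  # Cov(w, z), equal group sizes
transmission_term = np.mean(w * dz)

delta_zbar = (cov_term + transmission_term) / w_mean
zbar_after = np.sum(w * z) / np.sum(w)         # offspring-weighted mean

print(f"Price equation: {delta_zbar:+.4f}")
print(f"direct:         {zbar_after - z.mean():+.4f}")  # should match
```

Because fitness here covaries positively with the trait, the trait's mean frequency rises; with a negative covariance (altruism that is individually costly), the selection term alone would drive it down.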
Selective investment theory proposes that close social bonds, and associated emotional, cognitive, and neurohormonal mechanisms, evolved to facilitate long-term, high-cost altruism between those closely depending on one another for survival and reproductive success.
Such cooperative behaviors have sometimes been seen as arguments for left-wing politics, for example, by the Russian zoologist and anarchist Peter Kropotkin in his 1902 book Mutual Aid: A Factor of Evolution and moral philosopher Peter Singer in his book A Darwinian Left.
Neurobiology
Jorge Moll and Jordan Grafman, neuroscientists at the National Institutes of Health and LABS-D'Or Hospital Network, provided the first evidence for the neural bases of altruistic giving in normal healthy volunteers, using functional magnetic resonance imaging. In their research, they showed that both pure monetary rewards and charitable donations activated the mesolimbic reward pathway, a primitive part of the brain that usually responds to food and sex. However, when volunteers generously placed the interests of others before their own by making charitable donations, another brain circuit was also selectively activated: the subgenual cortex/septal region. These structures are related to social attachment and bonding in other species. The experiment suggested that altruism is not a higher moral faculty overpowering innate selfish desires, but a fundamental, ingrained, and enjoyable trait in the brain. One brain region, the subgenual anterior cingulate cortex/basal forebrain, contributes to learning altruistic behavior, especially in people with a propensity for empathy.
Bill Harbaugh, a University of Oregon economist, in an fMRI scanner test conducted with his psychologist colleague Dr. Ulrich Mayr, reached the same conclusions as Jorge Moll and Jordan Grafman about giving to charity, although they were able to divide the study group into two groups: "egoists" and "altruists". One of their discoveries was that, though rarely, even some of the considered "egoists" sometimes gave more than expected because that would help others, leading to the conclusion that there are other factors in charity, such as a person's environment and values.
A recent meta-analysis of fMRI studies conducted by Shawn Rhoads, Jo Cutler, and Abigail Marsh analyzed the results of prior studies of generosity in which participants could freely choose to give or not give resources to someone else. The results of this study confirmed that altruism is supported by distinct mechanisms from giving motivated by reciprocity or by fairness. This study also confirmed that the right ventral striatum is recruited during altruistic giving, as well as the ventromedial prefrontal cortex, bilateral anterior cingulate cortex, and bilateral anterior insula, which are regions previously implicated in empathy.
Abigail Marsh has conducted studies of real-world altruists that have also identified an important role for the amygdala in human altruism. In real-world altruists, such as people who have donated kidneys to strangers, the amygdala is larger than in typical adults. Altruists' amygdalas are also more responsive than those of typical adults to the sight of others' distress, which is thought to reflect an empathic response to distress. This structure may also be involved in altruistic choices due to its role in encoding the value of outcomes for others. This is consistent with the findings of research in non-human animals, which has identified neurons within the amygdala that specifically encode the value of others' outcomes, activity in which appears to drive altruistic choices in monkeys.
Psychology
The International Encyclopedia of the Social Sciences defines psychological altruism as "a motivational state to increase another's welfare". Psychological altruism is contrasted with psychological egoism, which refers to the motivation to increase one's own welfare. In keeping with this, research on real-world altruists (including altruistic kidney donors, bone marrow donors, humanitarian aid workers, and heroic rescuers) finds that these altruists are primarily distinguished from other adults by unselfish traits and decision-making patterns. This suggests that human altruism reflects a genuinely high valuation of others' outcomes.
There has been some debate on whether humans are capable of psychological altruism. Some definitions specify a self-sacrificial nature to altruism and a lack of external rewards for altruistic behaviors. However, because altruism ultimately benefits the self in many cases, the selflessness of altruistic acts is difficult to prove. The social exchange theory postulates that altruism only exists when the benefits outweigh the costs to the self.
Daniel Batson, a psychologist, examined this question and argued against the social exchange theory. He identified four significant motives: to ultimately benefit the self (egoism), to ultimately benefit the other person (altruism), to benefit a group (collectivism), or to uphold a moral principle (principlism). Altruism that ultimately serves selfish gains is thus differentiated from selfless altruism, but the general conclusion has been that empathy-induced altruism can be genuinely selfless. The empathy-altruism hypothesis states that psychological altruism exists and is evoked by the empathic desire to help someone suffering. Feelings of empathic concern are contrasted with personal distress, which compels people to reduce their own unpleasant emotions and increase their positive ones by helping someone in need. On the egoistic interpretation, empathy is not selfless, since altruism works either as a way to avoid those negative, unpleasant feelings and have positive, pleasant feelings when triggered by others' need for help, or as a way to gain social reward or avoid social punishment by helping. People with empathic concern help others in distress even when exposure to the situation could be easily avoided, whereas those lacking in empathic concern avoid helping unless it is difficult or impossible to avoid exposure to another's suffering.
Helping behavior is seen in humans from about two years old when a toddler can understand subtle emotional cues.
In psychological research on altruism, studies often observe altruism as demonstrated through prosocial behaviors such as helping, comforting, sharing, cooperation, philanthropy, and community service. People are most likely to help if they recognize that a person is in need and feel personal responsibility for reducing the person's distress. The number of bystanders witnessing pain or suffering affects the likelihood of helping (the Bystander effect). More significant numbers of bystanders decrease individual feelings of responsibility. However, a witness with a high level of empathic concern is likely to assume personal responsibility entirely regardless of the number of bystanders.
Many studies have observed the effects of volunteerism (as a form of altruism) on happiness and health and have consistently found that those who exhibit volunteerism also have better current and future health and well-being. In a study of older adults, those who volunteered had higher life satisfaction and will to live, and less depression, anxiety, and somatization. Volunteerism and helping behavior have not only been shown to improve mental health but physical health and longevity as well, attributable to the activity and social integration it encourages. One study examined the physical health of mothers who volunteered over 30 years and found that 52% of those who did not belong to a volunteer organization experienced a major illness while only 36% of those who did volunteer experienced one. A study on adults aged 55 and older found that during the four-year study period, people who volunteered for two or more organizations had a 63% lower likelihood of dying. After controlling for prior health status, it was determined that volunteerism accounted for a 44% reduction in mortality. Merely being aware of kindness in oneself and others is also associated with greater well-being. A study that asked participants to count each act of kindness they performed for one week significantly enhanced their subjective happiness. Happier people are kinder and more grateful, kinder people are happier and more grateful and more grateful people are happier and kinder, the study suggests.
While research supports the idea that altruistic acts bring about happiness, it has also been found to work in the opposite direction—that happier people are also kinder. The relationship between altruistic behavior and happiness is bidirectional. Studies found that generosity increases linearly from sad to happy affective states.
Feeling over-taxed by the needs of others has negative effects on health and happiness. For example, one study on volunteerism found that feeling overwhelmed by others' demands had an even stronger negative effect on mental health than helping had a positive one (although positive effects were still significant).
Older adults have been found to exhibit higher levels of altruism.
Genetics and environment
Both genetics and environment have been implicated in influencing pro-social or altruistic behavior. Candidate genes include OXTR (polymorphisms in the oxytocin receptor), CD38, COMT, DRD4, DRD5, IGF2, AVPR1A and GABRB2. It is theorized that some of these genes influence altruistic behavior by modulating levels of neurotransmitters such as serotonin and dopamine.
According to Christopher Boehm, altruistic behaviour evolved as a way of surviving within a group.
Sociology
"Sociologists have long been concerned with how to build the good society". The structure of our societies and how individuals come to exhibit charitable, philanthropic, and other pro-social, altruistic actions for the common good is a commonly researched topic within the field. The American Sociology Association (ASA) acknowledges public sociology saying, "The intrinsic scientific, policy, and public relevance of this field of investigation in helping to construct 'good societies' is unquestionable". This type of sociology seeks contributions that aid popular and theoretical understandings of what motivates altruism and how it is organized, and promotes an altruistic focus in order to benefit the world and people it studies.
How altruism is framed, organized, carried out, and what motivates it at the group level is an area of focus that sociologists investigate in order to contribute back to the groups it studies and "build the good society". The motivation of altruism is also the focus of study; for example, one study links the occurrence of moral outrage to altruistic compensation of victims. Studies show that generosity in laboratory and in online experiments is contagious – people imitate the generosity they observe in others.
Religious viewpoints
Most, if not all, of the world's religions promote altruism as a very important moral value. Religions such as Buddhism, Christianity, Hinduism, Islam, Jainism, Judaism, and Sikhism place particular emphasis on altruistic morality.
Buddhism
Altruism figures prominently in Buddhism. Love and compassion are components of all forms of Buddhism, and are focused on all beings equally: love is the wish that all beings be happy, and compassion is the wish that all beings be free from suffering. "Many illnesses can be cured by the one medicine of love and compassion. These qualities are the ultimate source of human happiness, and the need for them lies at the very core of our being" (Dalai Lama).
The notion of altruism is modified in such a world-view, since the belief is that such a practice promotes the practitioner's own happiness: "The more we care for the happiness of others, the greater our own sense of well-being becomes" (Dalai Lama).
In Buddhism, a person's actions cause karma, which consists of consequences proportional to the moral implications of their actions. Deeds considered to be bad are punished, while those considered to be good are rewarded.
Jainism
The fundamental principles of Jainism revolve around altruism, not only for other humans but for all sentient beings. Jainism preaches ahimsa: to live and let live, not harming sentient beings, i.e. uncompromising reverence for all life. The first Tirthankara, Rishabhdev, introduced the concept of altruism for all living beings, from extending knowledge and experience to others to donation, giving oneself up for others, non-violence, and compassion for all living things.
The principle of nonviolence seeks to minimize karmas which limit the capabilities of the soul. Jainism views every soul as worthy of respect because it has the potential to become Paramatma (God in Jainism). Because all living beings possess a soul, great care and awareness is essential in one's actions. Jainism emphasizes the equality of all life, advocating harmlessness towards all, whether the creatures are great or small. This policy extends even to microscopic organisms. Jainism acknowledges that every person has different capabilities and capacities to practice and therefore accepts different levels of compliance for ascetics and householders.
Christianity
Thomas Aquinas interprets the biblical phrase "You should love your neighbour as yourself" as meaning that love for ourselves is the exemplar of love for others. Considering that "the love with which a man loves himself is the form and root of friendship", he quotes Aristotle that "the origin of friendly relations with others lies in our relations to ourselves". Aquinas concluded that though we are not bound to love others more than ourselves, we naturally seek the common good, the good of the whole, more than any private good, the good of a part. However, he thought we should love God more than ourselves and our neighbours, and more than our bodily life, since the ultimate purpose of loving our neighbour is to share in eternal beatitude: a more desirable thing than bodily well-being. In coining the word "altruism", as stated above, Comte was probably opposing this Thomistic doctrine, which is present in some theological schools within Catholicism. The aim and focus of Christian life is a life that glorifies God, while obeying Christ's command to treat others equally, caring for them and understanding that eternity in heaven is what Jesus' Resurrection at Calvary was all about.
Many biblical authors draw a strong connection between love of others and love of God. John 1:4 states that for one to love God one must love his fellow man, and that hatred of one's fellow man is the same as hatred of God. Thomas Jay Oord has argued in several books that altruism is but one possible form of love. An altruistic action is not always a loving action. Oord defines altruism as acting for the other's good, and he agrees with feminists who note that sometimes love requires acting for one's own good when the other's demands undermine overall well-being.
German philosopher Max Scheler distinguishes two ways in which the strong can help the weak. One way is a sincere expression of Christian love, "motivated by a powerful feeling of security, strength, and inner salvation, of the invincible fullness of one's own life and existence". Another way is merely "one of the many modern substitutes for love,... nothing but the urge to turn away from oneself and to lose oneself in other people's business". At its worst, Scheler says, "love for the small, the poor, the weak, and the oppressed is really disguised hatred, repressed envy, an impulse to detract, etc., directed against the opposite phenomena: wealth, strength, power, largesse."
Islam
In the Arabic language, "īthār" (إيثار) means "preferring others to oneself".
On the topic of donating blood to non-Muslims (a controversial topic within the faith), the Shia religious professor, Fadhil al-Milani has provided theological evidence that makes it positively justifiable. In fact, he considers it a form of religious sacrifice and ithar (altruism).
For Sufis, 'iythar means devotion to others through complete forgetfulness of one's own concerns, where concern for others is deemed a demand made by God on the human body, considered to be the property of God alone. The importance of 'iythar lies in sacrifice for the sake of the greater good; Islam considers those practicing 'iythar as abiding by the highest degree of nobility.
This is similar to the notion of chivalry. A constant concern for God results in a careful attitude towards people, animals, and other things in this world.
Judaism
Judaism defines altruism as the desired goal of creation. Rabbi Abraham Isaac Kook stated that love is the most important attribute in humanity. Love is defined as bestowal, or giving, which is the intention of altruism. This can be altruism towards humanity that leads to altruism towards the creator or God. Kabbalah defines God as the force of giving in existence. Rabbi Moshe Chaim Luzzatto focused on the "purpose of creation" and how the will of God was to bring creation into perfection and adhesion with this force of giving.
Modern Kabbalah developed by Rabbi Yehuda Ashlag, in his writings about the future generation, focuses on how society could achieve an altruistic social framework. Ashlag proposed that such a framework is the purpose of creation, and everything that happens is to raise humanity to the level of altruism, love for one another. Ashlag focused on society and its relation to divinity.
Sikhism
Altruism is essential to the Sikh religion. The central faith in Sikhism is that the greatest deed anyone can do is to imbibe and live the godly qualities such as love, affection, sacrifice, patience, harmony, and truthfulness. Seva, or selfless service to the community for its own sake, is an important concept in Sikhism.
The fifth Guru, Guru Arjun, sacrificed his life to uphold "22 carats of pure truth, the greatest gift to humanity", according to the Guru Granth Sahib. The ninth Guru, Tegh Bahadur, sacrificed his life to protect weak and defenseless people against atrocity.
In the late seventeenth century, Guru Gobind Singh (the tenth Guru in Sikhism) was at war with the Mughal rulers to protect the people of different faiths when a fellow Sikh, Bhai Kanhaiya, tended to the troops of the enemy. He gave water to both friends and foes who were wounded on the battlefield. Some of the enemy began to fight again, and some Sikh warriors were annoyed by Bhai Kanhaiya as he was helping their enemy. Sikh soldiers brought Bhai Kanhaiya before Guru Gobind Singh and complained of his action, which they considered counterproductive to their struggle on the battlefield. "What were you doing, and why?" asked the Guru. "I was giving water to the wounded because I saw your face in all of them", replied Bhai Kanhaiya. The Guru responded, "Then you should also give them ointment to heal their wounds. You were practicing what you were coached in the house of the Guru."
Under the tutelage of the Guru, Bhai Kanhaiya subsequently founded a volunteer corps for altruism, which is still engaged today in doing good to others and in training new recruits for this service.
Hinduism
In Hinduism, selflessness (atmatyaga), love (prema), kindness (daya), and forgiveness (kshama) are considered the highest acts of humanity, or "manushyattva". Giving alms to beggars or poor people is considered a divine act, or "punya", and Hindus believe it will free their souls from guilt, or "paapa", and lead them to heaven, or "swarga", in the afterlife. Altruism is also the central act of various Hindu myths and religious poems and songs. Mass donation of clothes to poor people, and blood donation camps or mass food donations for poor people, are common in various Hindu religious ceremonies.
The Bhagavad Gita supports the doctrine of karma yoga (achieving oneness with God through action) and Nishkama Karma or action without expectation or desire for personal gain which can be said to encompass altruism. Altruistic acts are generally celebrated and well received in Hindu literature and are central to Hindu morality.
Philosophy
There is a wide range of philosophical views on humans' obligations or motivations to act altruistically. Proponents of ethical altruism maintain that individuals are morally obligated to act altruistically. The opposing view is ethical egoism, which maintains that moral agents should always act in their own self-interest. Both ethical altruism and ethical egoism contrast with utilitarianism, which maintains that each agent should act so as to maximise overall well-being, giving equal weight to their own interests and those of others.
A related concept in descriptive ethics is psychological egoism, the thesis that humans always act in their own self-interest and that true altruism is impossible. Rational egoism is the view that rationality consists in acting in one's self-interest (without specifying how this affects one's moral obligations).
Effective altruism
Effective altruism is a philosophy and social movement that uses evidence and reasoning to determine the most effective ways to benefit others. Effective altruism encourages individuals to consider all causes and actions and to act in the way that brings about the greatest positive impact, based upon their values. It is the broad, evidence-based, and cause-neutral approach that distinguishes effective altruism from traditional altruism or charity. Effective altruism is part of the larger movement towards evidence-based practices.
While a substantial proportion of effective altruists have focused on the nonprofit sector, the philosophy of effective altruism applies more broadly to prioritizing the scientific projects, companies, and policy initiatives which can be estimated to save lives, help people, or otherwise have the biggest benefit. People associated with the movement include philosopher Peter Singer, Facebook co-founder Dustin Moskovitz, Cari Tuna, Oxford-based researchers William MacAskill and Toby Ord, and professional poker player Liv Boeree.
Extreme altruism
Pathological altruism
Pathological altruism is altruism taken to an unhealthy extreme, such that it either harms the altruistic person or the person's well-intentioned actions cause more harm than good.
The term "pathological altruism" was popularised by the book Pathological Altruism.
Examples include depression and burnout seen in healthcare professionals, an unhealthy focus on others to the detriment of one's own needs, animal hoarding, and ineffective philanthropic and social programs that ultimately worsen the situations they are meant to aid.
Extreme altruism, also known as costly altruism, extraordinary altruism, or heroic behaviour (to be distinguished from heroism), refers to selfless acts directed at a stranger that significantly exceed normal altruistic behaviour, often involving risks or great cost to the altruists themselves. Since acts of extreme altruism are often directed towards strangers, many commonly accepted models of simple altruism appear inadequate in explaining this phenomenon.
One of the initial concepts was introduced by Wilson in 1976, who referred to it as "hard-core" altruism. This form is characterised by impulsive actions directed towards others, typically a stranger, and lacking incentives for reward. Since then, several papers have mentioned the possibility of such altruism.
In the 21st century, progress in the field slowed because of ethical guidelines that restrict exposing research participants to costly or risky decisions (see the Declaration of Helsinki). Consequently, much research has been based on living organ donations and the actions of Carnegie Hero Medal recipients: actions which involve high risk and high cost, and which occur infrequently. A typical example of extreme altruism would be non-directed kidney donation, in which a living person donates one of their kidneys to a stranger without any benefit or knowledge of the recipient.
However, current research can only be carried out on the small population that meets the requirements of extreme altruism. Most of this research also relies on self-report, which can introduce self-report biases. Because of these limitations, the gap between high-stakes and normal altruism remains poorly understood.
Characteristics of extreme altruists
Norms
In 1970, Schwartz hypothesised that extreme altruism is positively related to a person's moral norms and is not influenced by the cost associated with the action. This hypothesis was supported in the same study examining bone marrow donors. Schwartz discovered that individuals with strong personal norms and those who attribute more responsibility to themselves are more inclined to participate in bone marrow donation. Similar findings were observed in a 1986 study by Piliavin and Libby focusing on blood donors. These studies suggest that personal norms lead to the activation of moral norms, leading individuals to feel compelled to help others.
Enhanced Fear Recognition
Abigail Marsh has described psychopaths as the "opposite" group of people to extreme altruists and has conducted several studies comparing these two groups of individuals. Utilising techniques such as brain imaging and behavioural experiments, Marsh's team observed that kidney donors tend to have larger amygdalae and exhibit better abilities in recognizing fearful expressions compared to psychopathic individuals. Furthermore, an improved ability to recognize fear has been associated with an increase in prosocial behaviours, including greater charitable contributions.
Fast Decisions when Performing Acts of Extreme Altruism
Rand and Epstein explored the behaviour of 51 Carnegie Hero Medal recipients, showing that extreme altruistic behaviours often stem from System 1 of dual process theory, which produces rapid, intuitive responses. Additionally, a separate study by Carlson et al. indicated that such prosocial behaviours are prevalent in emergencies, where immediate action is required.
This discovery has led to ethical debates, particularly around living organ donation, where laws differ by country. As observed in extreme altruists, these decisions are made intuitively, which may reflect insufficient deliberation. Critics question whether such rapid decisions encompass a thorough cost-benefit analysis, and whether it is appropriate to expose donors to such risk.
Social discounting
One finding suggests that extreme altruists exhibit lower levels of social discounting than others, meaning that they place a higher value on the welfare of strangers than a typical person does.
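To make this concrete, social discounting is commonly modelled as a hyperbolic function of social distance (a minimal sketch assuming the standard hyperbolic form used in the social-discounting literature, e.g. by Jones and Rachlin; it is not a formula reported in the studies above):

```latex
v = \frac{V}{1 + kN}
```

Here V is the undiscounted value a person assigns to a benefit for someone else, N is that person's social distance (for example, 1 for a close friend and 100 for a stranger), and k is the individual's social discount rate. A lower k means the welfare of socially distant strangers is discounted less, which is the pattern attributed to extreme altruists.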
Low Socioeconomic Status
An analysis of 676 Carnegie Hero Award recipients and another study of 243 rescuing acts reveal that a significant proportion of rescuers come from lower socioeconomic backgrounds. Johnson attributes this distribution to high-risk occupations being more prevalent among lower socioeconomic groups. Another hypothesis, proposed by Lyons, is that individuals from these groups may perceive that they have less to lose when engaging in high-risk extreme altruistic behaviours.
Possible explanations
Evolutionary theories such as kin selection, reciprocity, vested interest, and punishment either contradict or do not fully explain extreme altruism. As a result, considerable research has sought a separate explanation for the behaviour.
Costly Signalling Theory for Extreme Behaviours
Research suggests that males are more likely to engage in heroic and risk-taking behaviours because females show a preference for such traits. Extreme altruistic behaviours could act as an unconscious "signal" showcasing power and ability superior to those of ordinary individuals. When an extreme altruist survives a high-risk situation, they send an "honest signal" of quality. Three qualities hypothesised to be exhibited by extreme altruists, which could be interpreted as "signals", are: (1) traits that are difficult to fake, (2) a willingness to help, and (3) generous behaviour.
Empathy-Altruism Hypothesis
The empathy-altruism hypothesis appears to align with the concept of extreme altruism without contradiction. The hypothesis is supported by brain-imaging research indicating that extreme altruists demonstrate higher levels of empathic concern; this concern triggers activation in specific brain regions, urging the individual to engage in heroic behaviour.
Mistakes and Outliers
While most altruistic behaviours offer some form of benefit, extreme altruism may sometimes result from a mistake in which the victim does not reciprocate. Given the impulsiveness characteristic of extreme altruists, some researchers suggest that these individuals misjudge the cost-benefit analysis. Alternatively, extreme altruism might be a rare variant of altruism whose practitioners lie towards the ends of a normal distribution. In the US, the annual per-capita prevalence is less than 0.00005% (that is, fewer than one person in two million per year), which shows the rarity of such behaviours.
Digital altruism
Digital altruism is the notion that some are willing to freely share information based on the principle of reciprocity and in the belief that in the end, everyone benefits from sharing information via the Internet.
There are three types of digital altruism: (1) "everyday digital altruism", involving expedience, ease, moral engagement, and conformity; (2) "creative digital altruism", involving creativity, heightened moral engagement, and cooperation; and (3) "co-creative digital altruism", involving creativity, moral engagement, and meta-cooperative efforts.
See also
Further reading
Cappelen, Alexander W.; Enke, Benjamin; Tungodden, Bertil (2025). "Universalism: Global Evidence". American Economic Review. 115 (1): 43–76.
Notes
References
External links
Auguste Comte
Defence mechanisms
Interpersonal relationships
Moral psychology
Morality
Philanthropy
Social philosophy | Altruism | [
"Biology"
] | 8,955 | [
"Behavior",
"Altruism",
"Philanthropy",
"Interpersonal relationships",
"Human behavior"
] |
569 | https://en.wikipedia.org/wiki/Anthropology | Anthropology is the scientific study of humanity, concerned with human behavior, human biology, cultures, societies, and linguistics, in both the present and past, including archaic humans. Social anthropology studies patterns of behavior, while cultural anthropology studies cultural meaning, including norms and values. The term sociocultural anthropology is commonly used today. Linguistic anthropology studies how language influences social life. Biological or physical anthropology studies the biological development of humans.
Archaeology, often termed the "anthropology of the past," studies human activity through investigation of physical evidence. It is considered a branch of anthropology in North America and Asia, while in Europe, archaeology is viewed as a discipline in its own right or grouped under other related disciplines, such as history and palaeontology.
Etymology
The abstract noun anthropology is first attested in reference to history. Its present use first appeared in Renaissance Germany in the works of Magnus Hundt and Otto Casmann. Their Neo-Latin anthropologia derived from the combining forms of the Greek words ánthrōpos ("human") and lógos ("study"). Its adjectival form appeared in the works of Aristotle. It began to be used in English, possibly via French anthropologie, by the early 18th century.
Origin and development of the term
Through the 19th century
In 1647, the Bartholins, early scholars of the University of Copenhagen, defined anthropology as follows:
Sporadic use of the term for some of the subject matter occurred subsequently, such as the use by Étienne Serres in 1839 to describe the natural history, or paleontology, of man, based on comparative anatomy, and the creation of a chair in anthropology and ethnography in 1850 at the French National Museum of Natural History by Jean Louis Armand de Quatrefages de Bréau. Various short-lived organizations of anthropologists had already been formed. The Société Ethnologique de Paris, the first to use the term ethnology, was formed in 1839 and focused on methodically studying human races. After the death of its founder, William Frédéric Edwards, in 1842, it gradually declined in activity until it eventually dissolved in 1862.
Meanwhile, the Ethnological Society of New York, currently the American Ethnological Society, was founded on its model in 1842, as well as the Ethnological Society of London in 1843, a break-away group of the Aborigines' Protection Society. These anthropologists of the times were liberal, anti-slavery, and pro-human-rights activists. They maintained international connections.
Anthropology and many other current fields are the intellectual results of the comparative methods developed in the earlier 19th century. Theorists in diverse fields such as anatomy, linguistics, and ethnology, started making feature-by-feature comparisons of their subject matters, and were beginning to suspect that similarities between animals, languages, and folkways were the result of processes or laws unknown to them then. For them, the publication of Charles Darwin's On the Origin of Species was the epiphany of everything they had begun to suspect. Darwin himself arrived at his conclusions through comparison of species he had seen in agronomy and in the wild.
Darwin and Wallace unveiled evolution in the late 1850s. There was an immediate rush to bring it into the social sciences. Paul Broca in Paris was in the process of breaking away from the Société de biologie to form the first of the explicitly anthropological societies, the Société d'Anthropologie de Paris, meeting for the first time in Paris in 1859. When he read Darwin, he became an immediate convert to Transformisme, as the French called evolutionism. His definition now became "the study of the human group, considered as a whole, in its details, and in relation to the rest of nature".
Broca, being what today would be called a neurosurgeon, had taken an interest in the pathology of speech. He wanted to localize the difference between man and the other animals, which appeared to reside in speech. He discovered the speech center of the human brain, today called Broca's area after him. His interest was mainly in Biological anthropology, but a German philosopher specializing in psychology, Theodor Waitz, took up the theme of general and social anthropology in his six-volume work, entitled Die Anthropologie der Naturvölker, 1859–1864. The title was soon translated as "The Anthropology of Primitive Peoples". The last two volumes were published posthumously.
Waitz defined anthropology as "the science of the nature of man". Following Broca's lead, Waitz points out that anthropology is a new field, which would gather material from other fields, but would differ from them in the use of comparative anatomy, physiology, and psychology to differentiate man from "the animals nearest to him". He stresses that the data of comparison must be empirical, gathered by experimentation. The history of civilization, as well as ethnology, are to be brought into the comparison. It is to be presumed fundamentally that the species, man, is a unity, and that "the same laws of thought are applicable to all men".
Waitz was influential among British ethnologists. In 1863, the explorer Richard Francis Burton and the speech therapist James Hunt broke away from the Ethnological Society of London to form the Anthropological Society of London, which henceforward would follow the path of the new anthropology rather than just ethnology. It was the 2nd society dedicated to general anthropology in existence. Representatives from the French Société were present, though not Broca. In his keynote address, printed in the first volume of its new publication, The Anthropological Review, Hunt stressed the work of Waitz, adopting his definitions as a standard. Among the first associates were the young Edward Burnett Tylor, inventor of cultural anthropology, and his brother Alfred Tylor, a geologist. Previously Edward had referred to himself as an ethnologist; subsequently, an anthropologist.
Similar organizations in other countries followed: the Anthropological Society of Madrid (1865), the American Anthropological Association (1902), the Anthropological Society of Vienna (1870), the Italian Society of Anthropology and Ethnology (1871), and many others subsequently. The majority of these were evolutionists. One notable exception was the Berlin Society for Anthropology, Ethnology, and Prehistory (1869), founded by Rudolph Virchow, known for his vituperative attacks on the evolutionists. Not religious himself, he insisted that Darwin's conclusions lacked empirical foundation.
During the last three decades of the 19th century, a proliferation of anthropological societies and associations occurred, most independent, most publishing their own journals, and all international in membership and association. The major theorists belonged to these organizations. They supported the gradual osmosis of anthropology curricula into the major institutions of higher learning. By 1898, 48 educational institutions in 13 countries had some curriculum in anthropology. None of the 75 faculty members were under a department named anthropology.
20th and 21st centuries
Anthropology as a specialized field of academic study developed much through the end of the 19th century. Then it rapidly expanded beginning in the early 20th century to the point where many of the world's higher educational institutions typically included anthropology departments. Thousands of anthropology departments have come into existence, and anthropology has also diversified from a few major subdivisions to dozens more. Practical anthropology, the use of anthropological knowledge and technique to solve specific problems, has arrived; for example, the presence of buried victims might stimulate the use of a forensic archaeologist to recreate the final scene. The organization has also reached a global level. For example, the World Council of Anthropological Associations (WCAA), "a network of national, regional and international associations that aims to promote worldwide communication and cooperation in anthropology", currently contains members from about three dozen nations.
Since the work of Franz Boas and Bronisław Malinowski in the late 19th and early 20th centuries, social anthropology in Great Britain and cultural anthropology in the US have been distinguished from other social sciences by their emphasis on cross-cultural comparisons, long-term in-depth examination of context, and the importance they place on participant-observation or experiential immersion in the area of research. Cultural anthropology, in particular, has emphasized cultural relativism, holism, and the use of findings to frame cultural critiques. This has been particularly prominent in the United States, from Boas' arguments against 19th-century racial ideology, through Margaret Mead's advocacy for gender equality and sexual liberation, to current criticisms of post-colonial oppression and promotion of multiculturalism. Ethnography is one of its primary research designs as well as the text that is generated from anthropological fieldwork.
In Great Britain and the Commonwealth countries, the British tradition of social anthropology tends to dominate. In the United States, anthropology has traditionally been divided into the four field approach developed by Franz Boas in the early 20th century: biological or physical anthropology; social, cultural, or sociocultural anthropology; archaeological anthropology; and linguistic anthropology. These fields frequently overlap but tend to use different methodologies and techniques.
European countries with overseas colonies tended to practice more ethnology (a term coined and defined by Adam F. Kollár in 1783). It is sometimes referred to as sociocultural anthropology in the parts of the world that were influenced by the European tradition.
Fields
Anthropology is a global discipline involving humanities, social sciences and natural sciences. Anthropology builds upon knowledge from natural sciences, including the discoveries about the origin and evolution of Homo sapiens, human physical traits, human behavior, the variations among different groups of humans, how the evolutionary past of Homo sapiens has influenced its social organization and culture, and from social sciences, including the organization of human social and cultural relations, institutions, social conflicts, etc. Early anthropology originated in Classical Greece and Persia and studied and tried to understand observable cultural diversity. As such, anthropology has been central in the development of several new (late 20th century) interdisciplinary fields such as cognitive science, global studies, and various ethnic studies.
According to Clifford Geertz,
Sociocultural anthropology has been heavily influenced by structuralist and postmodern theories, as well as a shift toward the analysis of modern societies. During the 1970s and 1990s, there was an epistemological shift away from the positivist traditions that had largely informed the discipline. During this shift, enduring questions about the nature and production of knowledge came to occupy a central place in cultural and social anthropology. In contrast, archaeology and biological anthropology remained largely positivist. Due to this difference in epistemology, the four sub-fields of anthropology have lacked cohesion over the last several decades.
Sociocultural
Sociocultural anthropology draws together the principal axes of cultural anthropology and social anthropology. Cultural anthropology is the comparative study of the manifold ways in which people make sense of the world around them, while social anthropology is the study of the relationships among individuals and groups. Cultural anthropology is more related to philosophy, literature and the arts (how one's culture affects the experience for self and group, contributing to a more complete understanding of the people's knowledge, customs, and institutions), while social anthropology is more related to sociology and history. In that, it helps develop an understanding of social structures, typically of others and other populations (such as minorities, subgroups, dissidents, etc.). There is no hard-and-fast distinction between them, and these categories overlap to a considerable degree.
Inquiry in sociocultural anthropology is guided in part by cultural relativism, the attempt to understand other societies in terms of their own cultural symbols and values. Accepting other cultures in their own terms moderates reductionism in cross-cultural comparison. This project is often accommodated in the field of ethnography. Ethnography can refer to both a methodology and the product of ethnographic research, i.e. an ethnographic monograph. As a methodology, ethnography is based upon long-term fieldwork within a community or other research site. Participant observation is one of the foundational methods of social and cultural anthropology. Ethnology involves the systematic comparison of different cultures. The process of participant-observation can be especially helpful to understanding a culture from an emic (conceptual, vs. etic, or technical) point of view.
The study of kinship and social organization is a central focus of sociocultural anthropology, as kinship is a human universal. Sociocultural anthropology also covers economic and political organization, law and conflict resolution, patterns of consumption and exchange, material culture, technology, infrastructure, gender relations, ethnicity, childrearing and socialization, religion, myth, symbols, values, etiquette, worldview, sports, music, nutrition, recreation, games, food, festivals, and language (which is also the object of study in linguistic anthropology).
Comparison across cultures is a key element of method in sociocultural anthropology, including the industrialized (and de-industrialized) West. The Standard Cross-Cultural Sample (SCCS) includes 186 such cultures.
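As an illustration of how such comparison is operationalised in practice, the sketch below computes a rank correlation between two coded variables across the societies of a cross-cultural sample. This is a minimal sketch: the file name "sccs.csv" and the column names are hypothetical placeholders, not part of the SCCS itself.

```python
import pandas as pd

# One row per society, with ordinal codes for each variable
# (hypothetical file; the SCCS proper covers 186 societies).
sccs = pd.read_csv("sccs.csv")

# Hypothetical ordinal codes: dependence on agriculture and
# degree of social stratification, one value per society.
subset = sccs[["agriculture_dependence", "social_stratification"]].dropna()

# Spearman rank correlation is a common choice for ordinal
# cross-cultural codes, as it assumes no linear scale.
rho = subset.corr(method="spearman").iloc[0, 1]
print(f"Spearman rho across {len(subset)} societies: {rho:.2f}")
```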
Biological
Biological anthropology and physical anthropology are synonymous terms to describe anthropological research focused on the study of humans and non-human primates in their biological, evolutionary, and demographic dimensions. It examines the biological and social factors that have affected the evolution of humans and other primates, and that generate, maintain or change contemporary genetic and physiological variation.
Archaeological
Archaeology is the study of the human past through its material remains. Artifacts, faunal remains, and human altered landscapes are evidence of the cultural and material lives of past societies. Archaeologists examine material remains in order to deduce patterns of past human behavior and cultural practices. Ethnoarchaeology is a type of archaeology that studies the practices and material remains of living human groups in order to gain a better understanding of the evidence left behind by past human groups, who are presumed to have lived in similar ways.
Linguistic
Linguistic anthropology (not to be confused with anthropological linguistics) seeks to understand the processes of human communications, verbal and non-verbal, variation in language across time and space, the social uses of language, and the relationship between language and culture. It is the branch of anthropology that brings linguistic methods to bear on anthropological problems, linking the analysis of linguistic forms and processes to the interpretation of sociocultural processes. Linguistic anthropologists often draw on related fields including sociolinguistics, pragmatics, cognitive linguistics, semiotics, discourse analysis, and narrative analysis.
Ethnography
Ethnography is a method of analysing social or cultural interaction. It often involves participant observation, though an ethnographer may also draw on texts written by participants in social interactions. Ethnography views first-hand experience and social context as important.
Tim Ingold distinguishes ethnography from anthropology, arguing that anthropology tries to construct general theories of human experience, applicable in general and novel settings, while ethnography concerns itself with fidelity. He argues that anthropologists must make their writing consistent with their understanding of literature and other theory, but notes that ethnography may be of use to anthropologists, and that the fields inform one another.
Key topics by field: sociocultural
Art, media, music, dance and film
Art
One of the central problems in the anthropology of art concerns the universality of 'art' as a cultural phenomenon. Several anthropologists have noted that the Western categories of 'painting', 'sculpture', or 'literature', conceived as independent artistic activities, do not exist, or exist in a significantly different form, in most non-Western contexts. To surmount this difficulty, anthropologists of art have focused on formal features in objects which, without exclusively being 'artistic', have certain evident 'aesthetic' qualities. Boas' Primitive Art, Claude Lévi-Strauss' The Way of the Masks (1982) or Geertz's 'Art as Cultural System' (1983) are some examples in this trend to transform the anthropology of 'art' into an anthropology of culturally specific 'aesthetics'.
Media
Media anthropology (also known as the anthropology of media or mass media) emphasizes ethnographic studies as a means of understanding producers, audiences, and other cultural and social aspects of mass media. The types of ethnographic contexts explored range from contexts of media production (e.g., ethnographies of newsrooms in newspapers, journalists in the field, film production) to contexts of media reception, following audiences in their everyday responses to media. Other types include cyber anthropology, a relatively new area of internet research, as well as ethnographies of other areas of research which happen to involve media, such as development work, social movements, or health education. This is in addition to many classic ethnographic contexts, where media such as radio, the press, new media, and television have started to make their presences felt since the early 1990s.
Music
Ethnomusicology is an academic field encompassing various approaches to the study of music (broadly defined), that emphasize its cultural, social, material, cognitive, biological, and other dimensions or contexts instead of or in addition to its isolated sound component or any particular repertoire.
Ethnomusicology can be used in a wide variety of fields, such as teaching, politics, and cultural anthropology, among others. While the origins of ethnomusicology date back to the 18th and 19th centuries, it was formally termed "ethnomusicology" by the Dutch scholar Jaap Kunst. Later, the influence of study in this area spawned the creation of the periodical Ethnomusicology and the Society for Ethnomusicology.
Visual
Visual anthropology is concerned, in part, with the study and production of ethnographic photography, film and, since the mid-1990s, new media. While the term is sometimes used interchangeably with ethnographic film, visual anthropology also encompasses the anthropological study of visual representation, including areas such as performance, museums, art, and the production and reception of mass media. Visual representations from all cultures, such as sandpaintings, tattoos, sculptures and reliefs, cave paintings, scrimshaw, jewelry, hieroglyphs, paintings, and photographs are included in the focus of visual anthropology.
Economic, political economic, applied and development
Economic
Economic anthropology attempts to explain human economic behavior in its widest historic, geographic and cultural scope. It has a complex relationship with the discipline of economics, of which it is highly critical. Its origins as a sub-field of anthropology begin with the Polish-British founder of anthropology, Bronisław Malinowski, and his French compatriot, Marcel Mauss, on the nature of gift-giving exchange (or reciprocity) as an alternative to market exchange. Economic anthropology remains, for the most part, focused upon exchange. The school of thought derived from Marx and known as Political Economy focuses, in contrast, on production. Economic anthropologists have abandoned the primitivist niche they were relegated to by economists, and have now turned to examine corporations, banks, and the global financial system from an anthropological perspective.
Political economy
Political economy in anthropology is the application of the theories and methods of historical materialism to the traditional concerns of anthropology, including, but not limited to, non-capitalist societies. Political economy introduced questions of history and colonialism to ahistorical anthropological theories of social structure and culture. Three main areas of interest rapidly developed. The first of these areas was concerned with the "pre-capitalist" societies that were subject to evolutionary "tribal" stereotypes. Sahlins' work on hunter-gatherers as the "original affluent society" did much to dissipate that image. The second area was concerned with the vast majority of the world's population at the time, the peasantry, many of whom were involved in complex revolutionary wars such as in Vietnam. The third area was on colonialism, imperialism, and the creation of the capitalist world-system. More recently, these political economists have more directly addressed issues of industrial (and post-industrial) capitalism around the world.
Applied
Applied anthropology refers to the application of the method and theory of anthropology to the analysis and solution of practical problems. It is a "complex of related, research-based, instrumental methods which produce change or stability in specific cultural systems through the provision of data, initiation of direct action, and/or the formulation of policy". Applied anthropology is the practical side of anthropological research; it includes researcher involvement and activism within the participating community. It is closely related to development anthropology (distinct from the more critical anthropology of development).
Development
Anthropology of development tends to view development from a critical perspective. The kinds of issues addressed, and the implications of the approach, involve asking why, if a key development goal is to alleviate poverty, poverty is increasing; why there is such a gap between plans and outcomes; why those working in development are so willing to disregard history and the lessons it might offer; and why development is so externally driven rather than having an internal basis. In short, why does so much planned development fail?
Kinship, feminism, gender and sexuality
Kinship
Kinship can refer to the study of the patterns of social relationships in one or more human cultures, or to those patterns of social relationships themselves. Over its history, anthropology has developed a number of related concepts and terms, such as "descent", "descent groups", "lineages", "affines", "cognates", and even "fictive kinship". Broadly, kinship patterns may be considered to include people related by descent (one's social relations during development) as well as relatives by marriage. Within kinship, a distinction is drawn between two kinds of family: a biological family, the people one shares DNA with, a relation called consanguinity or "blood ties"; and a chosen family, whose members a person selects. In some cases, people are closer to their chosen family than to their biological family.
Feminist
Feminist anthropology is a four field approach to anthropology (archeological, biological, cultural, linguistic) that seeks to reduce male bias in research findings, anthropological hiring practices, and the scholarly production of knowledge. Anthropology engages often with feminists from non-Western traditions, whose perspectives and experiences can differ from those of white feminists of Europe, America, and elsewhere. From the perspective of the Western world, historically such 'peripheral' perspectives have been ignored, observed only from an outsider perspective, and regarded as less-valid or less-important than knowledge from the Western world. Exploring and addressing that double bias against women from marginalized racial or ethnic groups is of particular interest in intersectional feminist anthropology.
Feminist anthropologists have stated that their publications have contributed to anthropology, along the way correcting systemic biases that began with the "patriarchal origins of anthropology (and academia)". They note that from 1891 to 1930, more than 85% of doctorates in anthropology went to males; more than 81% of recipients were under 35, and only 7.2% were over 40, reflecting an age gap in the pursuit of anthropology by first-wave feminists until later in life. This correction of systemic bias may include mainstream feminist theory, history, linguistics, archaeology, and anthropology. Feminist anthropologists are often concerned with the construction of gender across societies. Gender constructs are of particular interest when studying sexism.
According to St. Clair Drake, Vera Mae Green was, until "[w]ell into the 1960s", the only African American female anthropologist who was also a Caribbeanist. She studied ethnic and family relations in the Caribbean as well as the United States, and thereby tried to improve the way black life, experiences, and culture were studied. However, Zora Neale Hurston, although often primarily considered to be a literary author, was trained in anthropology by Franz Boas, and published Tell my Horse about her "anthropological observations" of voodoo in the Caribbean (1938).
Feminist anthropology is inclusive of the anthropology of birth as a specialization, which is the anthropological study of pregnancy and childbirth within cultures and societies.
Medical, nutritional, psychological, cognitive and transpersonal
Medical
Medical anthropology is an interdisciplinary field which studies "human health and disease, health care systems, and biocultural adaptation". William Caudell is believed to have been among the first to delineate the field of medical anthropology. Currently, research in medical anthropology is one of the main growth areas in the field of anthropology as a whole. It focuses on the following six basic fields:
The development of systems of medical knowledge and medical care
The patient-physician relationship
The integration of alternative medical systems in culturally diverse environments
The interaction of social, environmental and biological factors which influence health and illness both in the individual and the community as a whole
The critical analysis of interaction between psychiatric services and migrant populations ("critical ethnopsychiatry": Beneduce 2004, 2007)
The impact of biomedicine and biomedical technologies in non-Western settings
Other subjects that have become central to medical anthropology worldwide are violence and social suffering (Farmer, 1999, 2003; Beneduce, 2010) as well as other issues that involve physical and psychological harm and suffering that are not a result of illness. On the other hand, there are fields that intersect with medical anthropology in terms of research methodology and theoretical production, such as cultural psychiatry and transcultural psychiatry or ethnopsychiatry.
Nutritional
Nutritional anthropology is a synthetic concept that deals with the interplay between economic systems, nutritional status and food security, and how changes in the former affect the latter. If economic and environmental changes in a community affect access to food, food security, and dietary health, then this interplay between culture and biology is in turn connected to broader historical and economic trends associated with globalization. Nutritional status affects overall health status, work performance potential, and the overall potential for economic development (either in terms of human development or traditional western models) for any given group of people.
Psychological
Psychological anthropology is an interdisciplinary subfield of anthropology that studies the interaction of cultural and mental processes. This subfield tends to focus on ways in which humans' development and enculturation within a particular cultural group – with its own history, language, practices, and conceptual categories – shape processes of human cognition, emotion, perception, motivation, and mental health. It also examines how the understanding of cognition, emotion, motivation, and similar psychological processes inform or constrain our models of cultural and social processes.
Cognitive
Cognitive anthropology seeks to explain patterns of shared knowledge, cultural innovation, and transmission over time and space using the methods and theories of the cognitive sciences (especially experimental psychology and evolutionary biology) often through close collaboration with historians, ethnographers, archaeologists, linguists, musicologists and other specialists engaged in the description and interpretation of cultural forms. Cognitive anthropology is concerned with what people from different groups know and how that implicit knowledge changes the way people perceive and relate to the world around them.
Transpersonal
Transpersonal anthropology studies the relationship between altered states of consciousness and culture. As with transpersonal psychology, the field is much concerned with altered states of consciousness (ASC) and transpersonal experience. However, the field differs from mainstream transpersonal psychology in taking more cognizance of cross-cultural issues – for instance, the roles of myth, ritual, diet, and text in evoking and interpreting extraordinary experiences.
Political and legal
Political
Political anthropology concerns the structure of political systems, looked at from the basis of the structure of societies. Political anthropology developed as a discipline concerned primarily with politics in stateless societies; a new development that started in the 1960s is still unfolding: anthropologists began increasingly to study more "complex" social settings in which the presence of states, bureaucracies and markets entered both ethnographic accounts and analysis of local phenomena. The turn towards complex societies meant that political themes were taken up at two main levels. Firstly, anthropologists continued to study political organization and political phenomena that lay outside the state-regulated sphere (as in patron-client relations or tribal political organization). Secondly, anthropologists slowly started to develop a disciplinary concern with states and their institutions (and with the relationship between formal and informal political institutions). An anthropology of the state developed, and it is a thriving field today. Geertz's comparative work on "Negara", the Balinese state, is an early, famous example.
Legal
Legal anthropology or anthropology of law specializes in "the cross-cultural study of social ordering". Earlier legal anthropological research often focused more narrowly on conflict management, crime, sanctions, or formal regulation. More recent applications include issues such as human rights, legal pluralism, and political uprisings.
Public
Public anthropology was created by Robert Borofsky, a professor at Hawaii Pacific University, to "demonstrate the ability of anthropology and anthropologists to effectively address problems beyond the discipline – illuminating larger social issues of our times as well as encouraging broad, public conversations about them with the explicit goal of fostering social change".
Nature, science, and technology
Cyborg
Cyborg anthropology originated as a sub-focus group within the American Anthropological Association's annual meeting in 1993. The sub-group was very closely related to science and technology studies (STS) and the Society for the Social Studies of Science. Donna Haraway's 1985 Cyborg Manifesto could be considered the founding document of cyborg anthropology, as it first explored the philosophical and sociological ramifications of the term. Cyborg anthropology studies humankind and its relations with the technological systems it has built, specifically modern technological systems that have reflexively shaped notions of what it means to be human.
Digital
Digital anthropology is the study of the relationship between humans and digital-era technology and extends to various areas where anthropology and technology intersect. It is sometimes grouped with sociocultural anthropology, and sometimes considered part of material culture. The field is new, and thus has a variety of names with a variety of emphases. These include techno-anthropology, digital ethnography, cyberanthropology, and virtual anthropology.
Ecological
Ecological anthropology is defined as the "study of cultural adaptations to environments". The sub-field is also defined as "the study of relationships between a population of humans and their biophysical environment". The focus of its research concerns "how cultural beliefs and practices helped human populations adapt to their environments, and how their environments change across space and time". The contemporary perspective of environmental anthropology, and arguably at least the backdrop, if not the focus, of most ethnographies and cultural fieldwork today, is political ecology. Many characterize this new perspective as more informed with culture, politics and power, globalization, localized issues, 21st-century anthropology, and more. The focus and data interpretation are often used in arguments for or against policy, or in its creation, and to prevent corporate exploitation and damage of land. Often, the observer has become an active part of the struggle, either directly (organizing, participation) or indirectly (articles, documentaries, books, ethnographies). Such is the case with environmental justice advocate Melissa Checker and her relationship with the people of Hyde Park.
Environment
Social sciences like anthropology can provide interdisciplinary approaches to the environment. Professor Kay Milton, Director of the Anthropology research network in the School of History and Anthropology, describes anthropology as distinctive, its most distinguishing feature being its interest in non-industrial indigenous and traditional societies. Anthropological theory is distinct because of the consistent presence of the concept of culture, not as an exclusive topic but as a central position in the study, and because of a deep concern with the human condition. Milton describes three trends that are causing a fundamental shift in what characterizes anthropology: dissatisfaction with the cultural relativist perspective; reaction against Cartesian dualisms which obstruct progress in theory (the nature-culture divide); and increased attention to globalization (transcending the barriers of time and space).
Environmental discourse appears to be characterized by a high degree of globalization. (A troubling problem is the borrowing of non-indigenous practices and the creation of standards, concepts, philosophies and practices in western countries.) Anthropology and environmental discourse now occupy a distinct position in anthropology as a discipline. Knowledge about the diversity of human cultures can be important in addressing environmental problems; anthropology is now, in part, a study of human ecology. Human activity is the most important agent in creating environmental change, a topic commonly studied in human ecology, which can claim a central place in how environmental problems are examined and addressed. Anthropology also contributes to environmental discourse through theory and analysis, and through the refinement of definitions to become more neutral or universal. Environmentalism typically refers to a concern that the environment should be protected, particularly from the harmful effects of human activities, and it can be expressed in many ways. Anthropologists can open the doors of environmentalism by looking beyond industrial society: understanding the opposition between industrial and non-industrial relationships; knowing what ecosystem people and biosphere people are and are affected by; distinguishing dependent and independent variables; and attending to "primitive" ecological wisdom, diverse environments, resource management, diverse cultural traditions, and the recognition that environmentalism is a part of culture.
Historical
Ethnohistory is the study of ethnographic cultures and indigenous customs by examining historical records. It is also the study of the history of various ethnic groups that may or may not exist today. Ethnohistory uses both historical and ethnographic data as its foundation. Its historical methods and materials go beyond the standard use of documents and manuscripts. Practitioners recognize the utility of such source material as maps, music, paintings, photography, folklore, oral tradition, site exploration, archaeological materials, museum collections, enduring customs, language, and place names.
Religion
The anthropology of religion involves the study of religious institutions in relation to other social institutions, and the comparison of religious beliefs and practices across cultures. Modern anthropology assumes that there is complete continuity between magical thinking and religion, and that every religion is a cultural product, created by the human community that worships it.
Urban
Urban anthropology is concerned with issues of urbanization, poverty, and neoliberalism. Ulf Hannerz quotes a 1960s remark that traditional anthropologists were "a notoriously agoraphobic lot, anti-urban by definition". Various social processes in the Western World as well as in the "Third World" (the latter being the habitual focus of attention of anthropologists) brought the attention of "specialists in 'other cultures'" closer to their homes. There are two main approaches to urban anthropology: examining the types of cities, or examining the social issues within cities. These two approaches overlap and depend on each other. By defining different types of cities, one would use social factors as well as economic and political factors to categorize the cities. By looking directly at the different social issues, one would also be studying how they affect the dynamics of the city.
Key topics by field: archaeological and biological
Anthrozoology
Anthrozoology (also known as "human–animal studies") is the study of interactions between humans and other animals. It is an interdisciplinary field that overlaps with a number of other disciplines, including anthropology, ethology, medicine, psychology, veterinary medicine and zoology. A major focus of anthrozoologic research is the quantifying of the positive effects of human-animal relationships on either party and the study of their interactions. It includes scholars from a diverse range of fields, including anthropology, sociology, biology, and philosophy.
Biocultural
Biocultural anthropology is the scientific exploration of the relationships between human biology and culture. Physical anthropologists throughout the first half of the 20th century viewed this relationship from a racial perspective; that is, from the assumption that typological human biological differences lead to cultural differences. After World War II the emphasis began to shift toward an effort to explore the role culture plays in shaping human biology.
Evolutionary
Evolutionary anthropology is the interdisciplinary study of the evolution of human physiology and human behaviour, and of the relation between hominins and non-hominin primates. Rooted in both natural science and social science, it combines the study of human development with socioeconomic factors. Evolutionary anthropology is concerned with both the biological and the cultural evolution of humans, past and present. Based on a scientific approach, it brings together fields such as archaeology, behavioral ecology, psychology, primatology, and genetics. It is a dynamic and interdisciplinary field, drawing on many lines of evidence to understand the human experience, past and present.
Forensic
Forensic anthropology is the application of the science of physical anthropology and human osteology in a legal setting, most often in criminal cases where the victim's remains are in the advanced stages of decomposition. A forensic anthropologist can assist in the identification of deceased individuals whose remains are decomposed, burned, mutilated or otherwise unrecognizable. The adjective "forensic" refers to the application of this subfield of science to a court of law.
Palaeoanthropology
Paleoanthropology combines the disciplines of paleontology and physical anthropology. It is the study of ancient humans, as found in fossil hominid evidence such as petrified bones and footprints. Genetics and morphology of specimens are crucially important to this field. Markers on specimens, such as enamel fractures and dental decay on teeth, can also give insight into the behaviour and diet of past populations.
Organizations
Contemporary anthropology is an established science with academic departments at most universities and colleges. The single largest organization of anthropologists is the American Anthropological Association (AAA), which was founded in 1903. Its members are anthropologists from around the globe.
In 1989, a group of European and American scholars in the field of anthropology established the European Association of Social Anthropologists (EASA) which serves as a major professional organization for anthropologists working in Europe. The EASA seeks to advance the status of anthropology in Europe and to increase visibility of marginalized anthropological traditions and thereby contribute to the project of a global anthropology or world anthropology.
Hundreds of other organizations exist in the various sub-fields of anthropology, sometimes divided up by nation or region, and many anthropologists work with collaborators in other disciplines, such as geology, physics, zoology, paleontology, anatomy, music theory, art history, sociology and so on, belonging to professional societies in those disciplines as well.
List of major organizations
American Anthropological Association
American Ethnological Society
Asociación de Antropólogos Iberoamericanos en Red, AIBR
Anthropological Society of London
Center for World Indigenous Studies
Ethnological Society of London
European Association of Social Anthropologists
Max Planck Institute for Evolutionary Anthropology
Network of Concerned Anthropologists
N.N. Miklukho-Maklai Institute of Ethnology and Anthropology
Royal Anthropological Institute of Great Britain and Ireland
Society for Anthropological Sciences
Society for Applied Anthropology
USC Center for Visual Anthropology
Ethics
As the field has matured it has debated and arrived at ethical principles aimed at protecting both the subjects of anthropological research as well as the researchers themselves, and professional societies have generated codes of ethics.
Anthropologists, like other researchers (especially historians and scientists engaged in field research), have over time assisted state policies and projects, especially colonialism.
Some commentators have contended:
That the discipline grew out of colonialism, perhaps was in league with it, and derives some of its key notions from it, consciously or not. (See, for example, Gough, Pels and Salemink, but cf. Lewis 2004).
That ethnographic work is often ahistorical, writing about people as if they were "out of time" in an "ethnographic present" (Johannes Fabian, Time and Its Other).
In his article "The Misrepresentation of Anthropology and Its Consequences," Herbert S. Lewis critiqued older anthropological works that presented other cultures as if they were strange and unusual. While the findings of those researchers should not be discarded, the field should learn from its mistakes.
Cultural relativism
As part of their quest for scientific objectivity, present-day anthropologists typically urge cultural relativism, which has an influence on all the sub-fields of anthropology. This is the notion that cultures should not be judged by another's values or viewpoints, but be examined dispassionately on their own terms. There should be no notions, in good anthropology, of one culture being better or worse than another culture.
Ethical commitments in anthropology include noticing and documenting genocide, infanticide, racism, sexism, mutilation (including circumcision and subincision), and torture. Topics like racism, slavery, and human sacrifice attract anthropological attention and theories ranging from nutritional deficiencies, to genes, to acculturation, to colonialism, have been proposed to explain their origins and continued recurrences.
To illustrate the depth of an anthropological approach, one can take just one of these topics, such as racism, and find thousands of anthropological references, stretching across all the major and minor sub-fields.
Military involvement
Anthropologists' involvement with the U.S. government, in particular, has caused bitter controversy within the discipline. Franz Boas publicly objected to US participation in World War I, and after the war he published a brief exposé and condemnation of the participation of several American archaeologists in espionage in Mexico under their cover as scientists.
But by the 1940s, many of Boas' anthropologist contemporaries were active in the allied war effort against the Axis Powers (Nazi Germany, Fascist Italy, and Imperial Japan). Many served in the armed forces, while others worked in intelligence (for example, Office of Strategic Services and the Office of War Information). At the same time, David H. Price's work on American anthropology during the Cold War provides detailed accounts of the pursuit and dismissal of several anthropologists from their jobs for communist sympathies.
Attempts to accuse anthropologists of complicity with the CIA and government intelligence activities during the Vietnam War years have turned up little. Many anthropologists (students and teachers) were active in the antiwar movement. Numerous resolutions condemning the war in all its aspects were passed overwhelmingly at the annual meetings of the American Anthropological Association (AAA).
Professional anthropological bodies often object to the use of anthropology for the benefit of the state. Their codes of ethics or statements may proscribe anthropologists from giving secret briefings. The Association of Social Anthropologists of the UK and Commonwealth (ASA) has called certain scholarship ethically dangerous. The "Principles of Professional Responsibility" issued by the American Anthropological Association and amended through November 1986 stated that "in relation with their own government and with host governments ... no secret research, no secret reports or debriefings of any kind should be agreed to or given." The current "Principles of Professional Responsibility" does not make explicit mention of ethics surrounding state interactions.
Anthropologists, along with other social scientists, were working with the US military as part of the US Army's strategy in Afghanistan. The Christian Science Monitor reports that "Counterinsurgency efforts focus on better grasping and meeting local needs" in Afghanistan, under the Human Terrain System (HTS) program; in addition, HTS teams are working with the US military in Iraq. In 2009, the American Anthropological Association's Commission on the Engagement of Anthropology with the US Security and Intelligence Communities (CEAUSSIC) released its final report concluding, in part, that:
Post-World War II developments
Before WWII, British 'social anthropology' and American 'cultural anthropology' were still distinct traditions. After the war, enough British and American anthropologists borrowed ideas and methodological approaches from one another that some began to speak of them collectively as 'sociocultural' anthropology.
Basic trends
There are several characteristics that tend to unite anthropological work. One of the central characteristics is that anthropology tends to provide a comparatively more holistic account of phenomena and tends to be highly empirical. The quest for holism leads most anthropologists to study a particular place, problem or phenomenon in detail, using a variety of methods, over a more extensive period than normal in many parts of academia.
In the 1990s and 2000s, calls for clarification of what constitutes a culture, of how an observer knows where his or her own culture ends and another begins, and other crucial topics in writing anthropology were heard. These dynamic relationships, between what can be observed on the ground, as opposed to what can be observed by compiling many local observations remain fundamental in any kind of anthropology, whether cultural, biological, linguistic or archaeological.
Biological anthropologists are interested in both human variation and in the possibility of human universals (behaviors, ideas or concepts shared by virtually all human cultures). They use many different methods of study, but modern population genetics, participant observation and other techniques often take anthropologists "into the field," which means traveling to a community in its own setting, to do something called "fieldwork." On the biological or physical side, human measurements, genetic samples, nutritional data may be gathered and published as articles or monographs.
Along with dividing up their project by theoretical emphasis, anthropologists typically divide the world up into relevant time periods and geographic regions. Human time on Earth is divided up into relevant cultural traditions based on material, such as the Paleolithic and the Neolithic, of particular use in archaeology. Further cultural subdivisions according to tool types, such as Olduwan or Mousterian or Levalloisian help archaeologists and other anthropologists in understanding major trends in the human past. Anthropologists and geographers share approaches to culture regions as well, since mapping cultures is central to both sciences. By making comparisons across cultural traditions (time-based) and cultural regions (space-based), anthropologists have developed various kinds of comparative method, a central part of their science.
Commonalities between fields
Because anthropology developed from so many different enterprises (see History of anthropology), including but not limited to fossil-hunting, exploring, documentary film-making, paleontology, primatology, antiquity dealings and curatorship, philology, etymology, genetics, regional analysis, ethnology, history, philosophy, and religious studies, it is difficult to characterize the entire field in a brief article, although attempts to write histories of the entire field have been made.
Some authors argue that anthropology originated and developed as the study of "other cultures", both in terms of time (past societies) and space (non-European/non-Western societies). For example, Ulf Hannerz, in the introduction to his seminal Exploring the City: Inquiries Toward an Urban Anthropology, a classic of urban anthropology, mentions that the "Third World" had habitually received most of the attention; anthropologists who traditionally specialized in "other cultures" looked for them far away and started to look "across the tracks" only in the late 1960s.
Now there exist many works focusing on peoples and topics very close to the author's "home". It is also argued that other fields of study, like history and sociology, by contrast focus disproportionately on the West.
In France, the study of Western societies has been traditionally left to sociologists, but this is increasingly changing, starting in the 1970s from scholars like Isac Chiva and journals like Terrain ("fieldwork") and developing with the center founded by Marc Augé (Le Centre d'anthropologie des mondes contemporains, the Anthropological Research Center of Contemporary Societies).
Since the 1980s it has become common for social and cultural anthropologists to set ethnographic research in the North Atlantic region, frequently examining the connections between locations rather than limiting research to a single locale. There has also been a related shift toward broadening the focus beyond the daily life of ordinary people; increasingly, research is set in settings such as scientific laboratories, social movements, governmental and nongovernmental organizations and businesses.
See also
Christian anthropology, a sub-field of theology
Philosophical anthropology, a sub-field of philosophy
Lists
Notes
References
Works cited
Further reading
Dictionaries and encyclopedias
Fieldnotes and memoirs
Histories
Textbooks and key theoretical works
External links
Open Encyclopedia of Anthropology.
Anthropological Index Online (AIO)
Behavioural sciences
Humans | Anthropology | ["Biology"] | 9,763 | ["Behavioural sciences", "Behavior"] |
573 | https://en.wikipedia.org/wiki/Alchemy | Alchemy (from the Arabic word al-kīmiyā) is an ancient branch of natural philosophy, a philosophical and protoscientific tradition that was historically practised in China, India, the Muslim world, and Europe. In its Western form, alchemy is first attested in a number of pseudepigraphical texts written in Greco-Roman Egypt during the first few centuries AD. Greek-speaking alchemists often referred to their craft as "the Art" (τέχνη) or "Knowledge" (ἐπιστήμη), and it was often characterised as mystic (μυστική), sacred (ἱερά), or divine (θεία).
Alchemists attempted to purify, mature, and perfect certain materials. Common aims were chrysopoeia, the transmutation of "base metals" (e.g., lead) into "noble metals" (particularly gold); the creation of an elixir of immortality; and the creation of panaceas able to cure any disease. The perfection of the human body and soul was thought to result from the alchemical magnum opus ("Great Work"). The concept of creating the philosophers' stone was variously connected with all of these projects.
Islamic and European alchemists developed a basic set of laboratory techniques, theories, and terms, some of which are still in use today. They did not abandon the Ancient Greek philosophical idea that everything is composed of four elements, and they tended to guard their work in secrecy, often making use of cyphers and cryptic symbolism. In Europe, the 12th-century translations of medieval Islamic works on science and the rediscovery of Aristotelian philosophy gave birth to a flourishing tradition of Latin alchemy. This late medieval tradition of alchemy would go on to play a significant role in the development of early modern science (particularly chemistry and medicine).
Modern discussions of alchemy are generally split into an examination of its exoteric practical applications and its esoteric spiritual aspects, despite criticisms by scholars such as Eric J. Holmyard and Marie-Louise von Franz that they should be understood as complementary. The former is pursued by historians of the physical sciences, who examine the subject in terms of early chemistry, medicine, and charlatanism, and the philosophical and religious contexts in which these events occurred. The latter interests historians of esotericism, psychologists, and some philosophers and spiritualists. The subject has also made an ongoing impact on literature and the arts.
Etymology
The word alchemy comes from Old French alquemie, alkimie, used in Medieval Latin as alchymia. This name was itself adopted from the Arabic word al-kīmiyā. The Arabic in turn was a borrowing of the Late Greek term khēmeía, also spelled khumeia and khēmía, with al- being the Arabic definite article 'the'. Together this association can be interpreted as 'the process of transmutation by which to fuse or reunite with the divine or original form'. Several etymologies have been proposed for the Greek term. The first was proposed by Zosimos of Panopolis (3rd–4th centuries), who derived it from the name of a book, the Khemeu. Hermann Diels argued in 1914 that it rather derived from χύμα, used to describe metallic objects formed by casting.
Others trace its roots to an Egyptian name written in hieroglyphs as 𓆎𓅓𓏏𓊖, meaning 'black earth', which refers to the fertile and auriferous soil of the Nile valley, as opposed to red desert sand. According to the Egyptologist Wallis Budge, the Arabic word al-kīmiyāʾ actually means "the Egyptian [science]", borrowing from the Coptic word for "Egypt" (or its equivalent in the Mediaeval Bohairic dialect of Coptic). This Coptic word derives from a Demotic form, itself from an ancient Egyptian one. The ancient Egyptian word referred to both the country and the colour "black" (Egypt was the "Black Land", by contrast with the "Red Land", the surrounding desert).
History
Alchemy encompasses several philosophical traditions spanning some four millennia and three continents. These traditions' general penchant for cryptic and symbolic language makes it hard to trace their mutual influences and genetic relationships. One can distinguish at least three major strands, which appear to be mostly independent, at least in their earlier stages: Chinese alchemy, centered in China; Indian alchemy, centered on the Indian subcontinent; and Western alchemy, which occurred around the Mediterranean and whose center shifted over the millennia from Greco-Roman Egypt to the Islamic world, and finally medieval Europe. Chinese alchemy was closely connected to Taoism, and Indian alchemy to the Dharmic faiths. Western alchemy, in contrast, developed its philosophical system largely independently of the various Western religions, though it was influenced by them. It is still an open question whether these three strands share a common origin, or to what extent they influenced each other.
Hellenistic Egypt
The start of Western alchemy may generally be traced to ancient and Hellenistic Egypt, where the city of Alexandria was a center of alchemical knowledge, and retained its pre-eminence through most of the Greek and Roman periods. Following the work of André-Jean Festugière, modern scholars see alchemical practice in the Roman Empire as originating from the Egyptian goldsmith's art, Greek philosophy and different religious traditions. Tracing the origins of the alchemical art in Egypt is complicated by the pseudepigraphic nature of texts from the Greek alchemical corpus. The treatises of Zosimos of Panopolis, the earliest historically attested author (fl. c. 300 AD), can help in situating the other authors. Zosimus based his work on that of older alchemical authors, such as Mary the Jewess, Pseudo-Democritus, and Agathodaimon, but very little is known about any of these authors. The most complete of their works, The Four Books of Pseudo-Democritus, were probably written in the first century AD.
Recent scholarship tends to emphasize the testimony of Zosimus, who traced the alchemical arts back to Egyptian metallurgical and ceremonial practices. It has also been argued that early alchemical writers borrowed the vocabulary of the Greek philosophical schools but did not implement any of their doctrines in a systematic way. In his Final Abstinence (also known as the "Final Count"), Zosimos explains that the ancient practice of "tinctures" (the technical Greek name for the alchemical arts) had been taken over by certain "demons" who taught the art only to those who offered them sacrifices. Since Zosimos also called the demons "the guardians of places" and those who offered them sacrifices "priests", it is fairly clear that he was referring to the gods of Egypt and their priests. While critical of the kind of alchemy he associated with the Egyptian priests and their followers, Zosimos nonetheless saw the tradition's recent past as rooted in the rites of the Egyptian temples.
Mythology
Zosimos of Panopolis asserted that alchemy dated back to Pharaonic Egypt where it was the domain of the priestly class, though there is little to no evidence for his assertion. Alchemical writers used Classical figures from Greek, Roman, and Egyptian mythology to illuminate their works and allegorize alchemical transmutation. These included the pantheon of gods related to the Classical planets, Isis, Osiris, Jason, and many others.
The central figure in the mythology of alchemy is Hermes Trismegistus (or Thrice-Great Hermes). His name is derived from the god Thoth and his Greek counterpart Hermes. Hermes and his caduceus, or serpent-staff, were among alchemy's principal symbols. According to Clement of Alexandria, he wrote what were called the "forty-two books of Hermes", covering all fields of knowledge. The Hermetica of Thrice-Great Hermes is generally understood to form the basis for Western alchemical philosophy and practice, called the hermetic philosophy by its early practitioners. These writings were collected in the first centuries of the common era.
Technology
The dawn of Western alchemy is sometimes associated with that of metallurgy, extending back to 3500 BC. Many writings were lost when the Roman emperor Diocletian ordered the burning of alchemical books after suppressing a revolt in Alexandria (AD 292). Few original Egyptian documents on alchemy have survived, most notable among them the Stockholm papyrus and the Leyden papyrus X. Dating from AD 250–300, they contained recipes for dyeing and making artificial gemstones, cleaning and fabricating pearls, and manufacturing of imitation gold and silver. These writings lack the mystical, philosophical elements of alchemy, but do contain the works of Bolus of Mendes (or Pseudo-Democritus), which aligned these recipes with theoretical knowledge of astrology and the classical elements. Between the time of Bolus and Zosimos, the change took place that transformed this metallurgy into a Hermetic art.
Philosophy
Alexandria acted as a melting pot for philosophies of Pythagoreanism, Platonism, Stoicism and Gnosticism which formed the origin of alchemy's character. An important example of alchemy's roots in Greek philosophy, originated by Empedocles and developed by Aristotle, was that all things in the universe were formed from only four elements: earth, air, water, and fire. According to Aristotle, each element had a sphere to which it belonged and to which it would return if left undisturbed. The four elements of the Greeks were mostly qualitative aspects of matter, not quantitative, as our modern elements are; "...True alchemy never regarded earth, air, water, and fire as corporeal or chemical substances in the present-day sense of the word. The four elements are simply the primary, and most general, qualities by means of which the amorphous and purely quantitative substance of all bodies first reveals itself in differentiated form." Later alchemists extensively developed the mystical aspects of this concept.
Alchemy coexisted alongside emerging Christianity. Lactantius believed Hermes Trismegistus had prophesied its birth. St Augustine later affirmed this in the 4th and 5th centuries, but also condemned Trismegistus for idolatry. Examples of Pagan, Christian, and Jewish alchemists can be found during this period.
Most of the Greco-Roman alchemists preceding Zosimos are known only by pseudonyms, such as Moses, Isis, Cleopatra, Democritus, and Ostanes. Other authors, such as Komarios and Chymes, are known only through fragments of text. After AD 400, Greek alchemical writers occupied themselves solely with commenting on the works of these predecessors. By the middle of the 7th century alchemy was almost an entirely mystical discipline. It was at that time that Khalid Ibn Yazid sparked its migration from Alexandria to the Islamic world, facilitating the translation and preservation of Greek alchemical texts in the 8th and 9th centuries.
Byzantium
Greek alchemy was preserved in medieval Byzantine manuscripts after the fall of Egypt, and yet historians have only relatively recently begun to pay attention to the study and development of Greek alchemy in the Byzantine period.
India
The Vedas, texts of the 2nd millennium BC, describe a connection between eternal life and gold. Considerable knowledge of metallurgy is exhibited in the third-century AD text Arthashastra, which gives ingredients of explosives (agniyoga); salts extracted from fertile soils and plant remains (yavakshara), such as saltpetre/nitre; perfume making (different qualities of perfume are mentioned); and granulated (refined) sugar. Buddhist texts from the 2nd to 5th centuries mention the transmutation of base metals to gold. According to some scholars Greek alchemy may have influenced Indian alchemy, but there is no hard evidence to back this claim.
The 11th-century Persian chemist and physician Abū Rayhān Bīrūnī, who visited Gujarat as part of the court of Mahmud of Ghazni, reported that the Indians had a science similar to alchemy, known to them as rasāyana.
The goals of alchemy in India included the creation of a divine body (Sanskrit divya-deham) and immortality while still embodied (Sanskrit jīvan-mukti). Sanskrit alchemical texts include much material on the manipulation of mercury and sulphur, which are homologized with the semen of the god Śiva and the menstrual blood of the goddess Devī.
Some early alchemical writings seem to have their origins in the Kaula tantric schools associated with the teachings of the personality of Matsyendranath. Other early writings are found in the Jaina medical treatise Kalyāṇakārakam of Ugrāditya, written in South India in the early 9th century.
Two famous early Indian alchemical authors were Nāgārjuna Siddha and Nityanātha Siddha. Nāgārjuna Siddha was a Buddhist monk. His book, Rasendramangalam, is an example of Indian alchemy and medicine. Nityanātha Siddha wrote Rasaratnākara, also a highly influential work. In Sanskrit, rasa translates to "mercury", and Nāgārjuna Siddha was said to have developed a method of converting mercury into gold.
A landmark of modern scholarship on Indian alchemy is The Alchemical Body by David Gordon White.
A modern bibliography on Indian alchemical studies has been written by White.
The contents of 39 Sanskrit alchemical treatises have been analysed in detail in G. Jan Meulenbeld's History of Indian Medical Literature. The discussion of these works in HIML gives a summary of the contents of each work, their special features, and where possible the evidence concerning their dating. Chapter 13 of HIML, Various works on rasaśāstra and ratnaśāstra (or Various works on alchemy and gems), gives brief details of a further 655 treatises. In some cases Meulenbeld gives notes on the contents and authorship of these works; in other cases references are made only to the unpublished manuscripts of these titles.
A great deal remains to be discovered about Indian alchemical literature. The content of the Sanskrit alchemical corpus has not yet (2014) been adequately integrated into the wider general history of alchemy.
Islamic world
After the fall of the Roman Empire, the focus of alchemical development moved to the Islamic World. Much more is known about Islamic alchemy because it was better documented: indeed, most of the earlier writings that have come down through the years were preserved as Arabic translations. The word alchemy itself was derived from the Arabic word al-kīmiyā (الكيمياء). The early Islamic world was a melting pot for alchemy. Platonic and Aristotelian thought, which had already been somewhat appropriated into hermetical science, continued to be assimilated during the late 7th and early 8th centuries through Syriac translations and scholarship.
In the late ninth and early tenth centuries, the Arabic works attributed to Jābir ibn Hayyān (Latinized as "Geber" or "Geberus") introduced a new approach to alchemy, described at length by Paul Kraus in his standard reference work on Jabir.
Islamic philosophers also made great contributions to alchemical hermeticism. The most influential author in this regard was arguably Jabir. Jabir's ultimate goal was Takwin, the artificial creation of life in the alchemical laboratory, up to, and including, human life. He analysed each Aristotelian element in terms of four basic qualities of hotness, coldness, dryness, and moistness. According to Jabir, in each metal two of these qualities were interior and two were exterior. For example, lead was externally cold and dry, while gold was hot and moist. Thus, Jabir theorized, by rearranging the qualities of one metal, a different metal would result. By this reasoning, the search for the philosopher's stone was introduced to Western alchemy. Jabir developed an elaborate numerology whereby the root letters of a substance's name in Arabic, when treated with various transformations, held correspondences to the element's physical properties.
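Jabir's quality scheme lends itself to a small computational sketch. The Python below encodes only what the paragraph above states: each metal carries two exterior and two interior qualities drawn from hot, cold, dry, and moist, and transmutation is imagined as a rearrangement of those qualities. Treating transmutation as a simple swap of the exterior and interior pairs, and restricting the table to lead and gold, are illustrative assumptions, not Jabir's actual procedure.

```python
# Toy model of Jabir's quality scheme. Only the lead and gold
# assignments come from the text above; the swap rule is illustrative.
METALS = {
    # name: (exterior qualities, interior qualities)
    "lead": (("cold", "dry"), ("hot", "moist")),
    "gold": (("hot", "moist"), ("cold", "dry")),
}

def rearranged(metal):
    """Swap a metal's exterior and interior quality pairs."""
    exterior, interior = METALS[metal]
    return (interior, exterior)

def find_metal(signature):
    """Return the metal (if any) carrying the given quality signature."""
    for name, qualities in METALS.items():
        if qualities == signature:
            return name
    return None

# In this toy model, rearranging lead's qualities yields gold's signature.
print(find_metal(rearranged("lead")))  # -> gold
```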
The elemental system used in medieval alchemy also originated with Jabir. His original system consisted of seven elements, which included the five classical elements (aether, air, earth, fire, and water) in addition to two chemical elements representing the metals: sulphur, "the stone which burns", which characterized the principle of combustibility, and mercury, which contained the idealized principle of metallic properties. Shortly thereafter, this evolved into eight elements, with the Arabic concept of the three metallic principles: sulphur giving flammability or combustion, mercury giving volatility and stability, and salt giving solidity. The atomic theory of corpuscularianism, where all physical bodies possess an inner and outer layer of minute particles or corpuscles, also has its origins in the work of Jabir.
From the 9th to 14th centuries, alchemical theories faced criticism from a variety of practical Muslim chemists, including Alkindus, Abū al-Rayhān al-Bīrūnī, Avicenna and Ibn Khaldun. In particular, they wrote refutations against the idea of the transmutation of metals.
From the 14th century onwards, many materials and practices originally belonging to Indian alchemy (Rasayana) were assimilated in the Persian texts written by Muslim scholars.
East Asia
Researchers have found evidence that Chinese alchemists and philosophers discovered complex mathematical phenomena that were shared with Arab alchemists during the medieval period. Discovered in China before the common era, the "magic square of three" was propagated to followers of Abū Mūsā Jābir ibn Ḥayyān at some point over the following several hundred years. Other commonalities shared between the two alchemical schools of thought include discrete naming for ingredients and heavy influence from the natural elements. The Silk Road provided a clear path for the exchange of goods, ideas, ingredients, religion, and many other aspects of life with which alchemy is intertwined.
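The "magic square of three" is the 3×3 arrangement of the digits 1 through 9 in which every row, column, and main diagonal sums to the same constant, 15. A quick check in Python, using the classic Lo Shu arrangement (any rotation or reflection of it works equally well):

```python
# The 3x3 magic square in its classic Lo Shu arrangement.
square = [
    [4, 9, 2],
    [3, 5, 7],
    [8, 1, 6],
]

rows = [sum(row) for row in square]
cols = [sum(col) for col in zip(*square)]
diagonals = [
    sum(square[i][i] for i in range(3)),       # main diagonal
    sum(square[i][2 - i] for i in range(3)),   # anti-diagonal
]

# Every line sums to the magic constant 15.
assert all(total == 15 for total in rows + cols + diagonals)
print(rows, cols, diagonals)
```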
Whereas European alchemy eventually centered on the transmutation of base metals into noble metals, Chinese alchemy had a more obvious connection to medicine. The philosopher's stone of European alchemists can be compared to the Grand Elixir of Immortality sought by Chinese alchemists. In the hermetic view, these two goals were not unconnected, and the philosopher's stone was often equated with the universal panacea; therefore, the two traditions may have had more in common than initially appears.
As early as 317 AD, Ge Hong documented the use of metals, minerals, and elixirs in early Chinese medicine. Hong identified three ancient Chinese documents, titled Scripture of Great Clarity, Scripture of the Nine Elixirs, and Scripture of the Golden Liquor, as texts containing fundamental alchemical information. He also described alchemy, along with meditation, as the only spiritual practices that could allow one to gain immortality or to transcend. In his work Inner Chapters of the Book of the Master Who Embraces Spontaneous Nature (317 AD), Hong argued that alchemical solutions such as elixirs were preferable to traditional medicinal treatment due to the spiritual protection they could provide. In the centuries following Ge Hong's death, the emphasis placed on alchemy as a spiritual practice among Chinese Daoists was reduced. In 499 AD, Tao Hongjing refuted Hong's statement that alchemy is as important a spiritual practice as Shangqing meditation. While Hongjing did not deny the power of alchemical elixirs to grant immortality or provide divine protection, he ultimately found the Scripture of the Nine Elixirs to be ambiguous and spiritually unfulfilling, aiming to implement more accessible practising techniques.
In the early 700s, Neidan (also known as internal alchemy) was adopted by Daoists as a new form of alchemy. Neidan emphasized appeasing the inner gods that inhabit the human body by practising alchemy with compounds found in the body, rather than the mixing of natural resources that was emphasized in early Dao alchemy. For example, saliva was often considered nourishment for the inner gods and did not require any conscious alchemical reaction to produce. The inner gods were not thought of as physical presences occupying each person, but rather a collection of deities that are each said to represent and protect a specific body part or region. Although those who practised Neidan prioritized meditation over external alchemical strategies, many of the same elixirs and constituents from previous Daoist alchemical schools of thought continued to be utilized in tandem with meditation. Eternal life remained a consideration for Neidan alchemists, as it was believed that one would become immortal if an inner god were to be immortalized within them through spiritual fulfilment.
Black powder may have been an important invention of Chinese alchemists. It is said that the Chinese invented gunpowder while trying to find a potion for eternal life. Described in 9th-century texts and used in fireworks in China by the 10th century, it was used in cannons by 1290. From China, the use of gunpowder spread to Japan, the Mongols, the Muslim world, and Europe. Gunpowder was used by the Mongols against the Hungarians in 1241, and in Europe by the 14th century.
Chinese alchemy was closely connected to Taoist forms of traditional Chinese medicine, such as acupuncture and moxibustion. In the early Song dynasty, followers of this Taoist idea (chiefly the elite and upper class) would ingest mercuric sulfide, which, though tolerable in low doses, led many to their deaths. Because this death was thought to bring freedom and access to the Taoist heavens, the ensuing fatalities encouraged people to eschew this method of alchemy in favour of external practices (Tai Chi Chuan, the mastering of qi, etc.). Chinese alchemy was introduced to the West by Obed Simon Johnson.
Medieval Europe
The introduction of alchemy to Latin Europe may be dated to 11 February 1144, with the completion of Robert of Chester's translation of the Liber de compositione alchemiae ("Book on the Composition of Alchemy") from an Arabic work attributed to Khalid ibn Yazid. Although European craftsmen and technicians had long existed, Robert notes in his preface that alchemy (here still referring to the elixir rather than to the art itself) was unknown in Latin Europe at the time of his writing. The translation of Arabic texts concerning numerous disciplines including alchemy flourished in 12th-century Toledo, Spain, through contributors like Gerard of Cremona and Adelard of Bath. Translations of the time included the Turba Philosophorum, and the works of Avicenna and Muhammad ibn Zakariya al-Razi. These brought with them many new words to the European vocabulary for which there was no previous Latin equivalent; alcohol, carboy, elixir, and athanor are examples.
Meanwhile, theologian contemporaries of the translators made strides towards the reconciliation of faith and experimental rationalism, thereby priming Europe for the influx of alchemical thought. The 11th-century St Anselm put forth the opinion that faith and rationalism were compatible and encouraged rationalism in a Christian context. In the early 12th century, Peter Abelard followed Anselm's work, laying down the foundation for acceptance of Aristotelian thought before the first works of Aristotle had reached the West. In the early 13th century, Robert Grosseteste used Abelard's methods of analysis and added the use of observation, experimentation, and conclusions when conducting scientific investigations. Grosseteste also did much work to reconcile Platonic and Aristotelian thinking.
Through much of the 12th and 13th centuries, alchemical knowledge in Europe remained centered on translations, and new Latin contributions were not made. The efforts of the translators were succeeded by those of the encyclopaedists. In the 13th century, Albertus Magnus and Roger Bacon were the most notable of these, their work summarizing and explaining the newly imported alchemical knowledge in Aristotelian terms. Albertus Magnus, a Dominican friar, is known to have written works such as the Book of Minerals, in which he observed and commented on the operations and theories of alchemical authorities like Hermes Trismegistus, pseudo-Democritus, and unnamed alchemists of his time. Albertus critically compared these to the writings of Aristotle and Avicenna, where they concerned the transmutation of metals. From the time shortly after his death through to the 15th century, more than 28 alchemical tracts were misattributed to him, a common practice giving rise to his reputation as an accomplished alchemist. Likewise, alchemical texts have been attributed to Albert's student Thomas Aquinas.
Roger Bacon, a Franciscan friar who wrote on a wide variety of topics including optics, comparative linguistics, and medicine, composed his Great Work (Opus Majus) for Pope Clement IV as part of a project towards rebuilding the medieval university curriculum to include the new learning of his time. While alchemy was not more important to him than other sciences and he did not produce allegorical works on the topic, he did consider it and astrology to be important parts of both natural philosophy and theology, and his contributions advanced alchemy's connections to soteriology and Christian theology. Bacon's writings integrated morality, salvation, alchemy, and the prolongation of life. His correspondence with Clement highlighted this, noting the importance of alchemy to the papacy. Like the Greeks before him, Bacon acknowledged the division of alchemy into practical and theoretical spheres. He noted that the theoretical lay outside the scope of Aristotle, the natural philosophers, and all Latin writers of his time. The practical confirmed the theoretical, and Bacon advocated its uses in natural science and medicine. In later European legend, he became an archmage. In particular, along with Albertus Magnus, he was credited with the forging of a brazen head capable of answering its owner's questions.
Soon after Bacon, the influential work of Pseudo-Geber (sometimes identified as Paul of Taranto) appeared. His Summa Perfectionis remained a staple summary of alchemical practice and theory through the medieval and renaissance periods. It was notable for its inclusion of practical chemical operations alongside sulphur-mercury theory, and the unusual clarity with which they were described. By the end of the 13th century, alchemy had developed into a fairly structured system of belief. Adepts believed in the macrocosm-microcosm theories of Hermes, that is to say, they believed that processes that affect minerals and other substances could have an effect on the human body (for example, if one could learn the secret of purifying gold, one could use the technique to purify the human soul). They believed in the four elements and the four qualities as described above, and they had a strong tradition of cloaking their written ideas in a labyrinth of coded jargon set with traps to mislead the uninitiated. Finally, the alchemists practised their art: they actively experimented with chemicals and made observations and theories about how the universe operated. Their entire philosophy revolved around their belief that man's soul was divided within himself after the fall of Adam. By purifying the two parts of man's soul, man could be reunited with God.
In the 14th century, alchemy became more accessible to Europeans outside the confines of Latin-speaking churchmen and scholars. Alchemical discourse shifted from scholarly philosophical debate to an exposed social commentary on the alchemists themselves. Dante, Piers Plowman, and Chaucer all painted unflattering pictures of alchemists as thieves and liars. Pope John XXII's 1317 edict, Spondent quas non exhibent, forbade the false promises of transmutation made by pseudo-alchemists. The Roman Catholic Inquisitor General Nicholas Eymerich's Directorium Inquisitorum, written in 1376, associated alchemy with the performance of demonic rituals, which Eymerich differentiated from magic performed in accordance with scripture. This did not, however, lead to any change in the Inquisition's monitoring or prosecution of alchemists. In 1404, Henry IV of England banned the practice of multiplying metals by the passing of the Act Against Multipliers (5 Hen. 4. c. 4), although it was possible to buy a licence to attempt to make gold alchemically, and a number were granted by Henry VI and Edward IV. These critiques and regulations centered more around pseudo-alchemical charlatanism than the actual study of alchemy, which continued with an increasingly Christian tone. The 14th century saw the Christian imagery of death and resurrection employed in the alchemical texts of Petrus Bonus, John of Rupescissa, and in works written in the name of Raymond Lull and Arnold of Villanova.
Nicolas Flamel became so well known as an alchemist that he attracted many pseudepigraphic imitators. Although the historical Flamel existed, the writings and legends assigned to him only appeared in 1612.
A common idea in European alchemy in the medieval era was a metaphysical "Homeric chain of wise men that link[ed] heaven and earth" that included ancient pagan philosophers and other important historical figures.
Renaissance and early modern Europe
During the Renaissance, Hermetic and Platonic foundations were restored to European alchemy. The dawn of medical, pharmaceutical, occult, and entrepreneurial branches of alchemy followed.
In the late 15th century, Marsilio Ficino translated the Corpus Hermeticum and the works of Plato into Latin. These had previously been unavailable to Europeans, who now for the first time had a full picture of the alchemical theory that Bacon had declared absent. Renaissance Humanism and Renaissance Neoplatonism guided alchemists away from physics to refocus on mankind as the alchemical vessel.
Esoteric systems developed that blended alchemy into a broader occult Hermeticism, fusing it with magic, astrology, and Christian cabala. A key figure in this development was the German Heinrich Cornelius Agrippa (1486–1535), who received his Hermetic education in Italy in the schools of the humanists. In his De Occulta Philosophia, he attempted to merge Kabbalah, Hermeticism, and alchemy. He was instrumental in spreading this new blend of Hermeticism outside the borders of Italy.
Paracelsus (Philippus Aureolus Theophrastus Bombastus von Hohenheim, 1493–1541) cast alchemy into a new form, rejecting some of Agrippa's occultism and moving away from chrysopoeia. Paracelsus pioneered the use of chemicals and minerals in medicine and wrote, "Many have said of Alchemy, that it is for the making of gold and silver. For me such is not the aim, but to consider only what virtue and power may lie in medicines."
His hermetical views were that sickness and health in the body relied on the harmony of man the microcosm and Nature the macrocosm. He took an approach different from those before him, using this analogy not in the manner of soul-purification but in the manner that humans must have certain balances of minerals in their bodies, and that certain illnesses of the body had chemical remedies that could cure them. Iatrochemistry refers to the pharmaceutical applications of alchemy championed by Paracelsus.
John Dee (13 July 1527 – December 1608) followed Agrippa's occult tradition. Although better known for angel summoning, divination, and his role as astrologer, cryptographer, and consultant to Queen Elizabeth I, Dee's alchemical Monas Hieroglyphica, written in 1564, was his most popular and influential work. His writing portrayed alchemy as a sort of terrestrial astronomy in line with the Hermetic axiom "as above, so below". During the 17th century, a short-lived "supernatural" interpretation of alchemy became popular, including support by fellows of the Royal Society: Robert Boyle and Elias Ashmole. Proponents of the supernatural interpretation of alchemy believed that the philosopher's stone might be used to summon and communicate with angels.
Entrepreneurial opportunities were common for the alchemists of Renaissance Europe. Alchemists were contracted by the elite for practical purposes related to mining, medical services, and the production of chemicals, medicines, metals, and gemstones. Rudolf II, Holy Roman Emperor, in the late 16th century, famously received and sponsored various alchemists at his court in Prague, including Dee and his associate Edward Kelley. King James IV of Scotland, Julius, Duke of Brunswick-Lüneburg, Henry V, Duke of Brunswick-Lüneburg, Augustus, Elector of Saxony, Julius Echter von Mespelbrunn, and Maurice, Landgrave of Hesse-Kassel all contracted alchemists. John's son Arthur Dee worked as a court physician to Michael I of Russia and Charles I of England but also compiled the alchemical book Fasciculus Chemicus.
Although most of these appointments were legitimate, the trend of pseudo-alchemical fraud continued through the Renaissance. Betrüger would use sleight of hand or claims of secret knowledge to make money or secure patronage. Legitimate mystical and medical alchemists such as Michael Maier and Heinrich Khunrath wrote about fraudulent transmutations, distinguishing themselves from the con artists. False alchemists were sometimes prosecuted for fraud.
The terms "chemia" and "alchemia" were used as synonyms in the early modern period, and the differences between alchemy, chemistry and small-scale assaying and metallurgy were not as neat as in the present day. There were important overlaps between practitioners, and trying to classify them into alchemists, chemists and craftsmen is anachronistic. For example, Tycho Brahe (1546–1601), an alchemist better known for his astronomical and astrological investigations, had a laboratory built at his Uraniborg observatory/research institute. Michael Sendivogius (Michał Sędziwój, 1566–1636), a Polish alchemist, philosopher, medical doctor and pioneer of chemistry, wrote mystical works but is also credited with distilling oxygen in a lab sometime around 1600. Sendivogius taught his technique to Cornelius Drebbel who, in 1621, applied this in a submarine. Isaac Newton devoted considerably more of his writing to the study of alchemy (see Isaac Newton's occult studies) than he did to either optics or physics. Other early modern alchemists who were eminent in their other studies include Robert Boyle and Jan Baptist van Helmont. Their Hermeticism complemented rather than precluded their practical achievements in medicine and science.
Later modern period
The decline of European alchemy was brought about by the rise of modern science with its emphasis on rigorous quantitative experimentation and its disdain for "ancient wisdom". Although the seeds of these events were planted as early as the 17th century, alchemy still flourished for some two hundred years, and in fact may have reached its peak in the 18th century. As late as 1781 James Price claimed to have produced a powder that could transmute mercury into silver or gold. Early modern European alchemy continued to exhibit a diversity of theories, practices, and purposes: "Scholastic and anti-Aristotelian, Paracelsian and anti-Paracelsian, Hermetic, Neoplatonic, mechanistic, vitalistic, and more—plus virtually every combination and compromise thereof."
Robert Boyle (1627–1691) pioneered the scientific method in chemical investigations. He assumed nothing in his experiments and compiled every piece of relevant data. Boyle would note the place in which the experiment was carried out, the wind characteristics, the position of the Sun and Moon, and the barometer reading, all just in case they proved to be relevant. This approach eventually led to the founding of modern chemistry in the 18th and 19th centuries, based on revolutionary discoveries and ideas of Lavoisier and John Dalton.
Beginning around 1720, a rigid distinction began to be drawn for the first time between "alchemy" and "chemistry". By the 1740s, "alchemy" was restricted to the realm of gold making, leading to the popular belief that alchemists were charlatans, and the tradition itself nothing more than a fraud. In order to protect the developing science of modern chemistry from the negative censure to which alchemy was being subjected, academic writers during the 18th-century scientific Enlightenment attempted to divorce and separate the "new" chemistry from the "old" practices of alchemy. This move was mostly successful, and its consequences continued into the 19th, 20th and 21st centuries.
During the occult revival of the early 19th century, alchemy received new attention as an occult science. The esoteric or occultist school that arose during the 19th century held the view that the substances and operations mentioned in alchemical literature are to be interpreted in a spiritual sense, rather than as a practical tradition or protoscience. This interpretation claimed that the obscure language of the alchemical texts, which 19th-century practitioners were not always able to decipher, was an allegorical guise for spiritual, moral or mystical processes.
Two seminal figures during this period were Mary Anne Atwood and Ethan Allen Hitchcock, who independently published similar works regarding spiritual alchemy. Both rebuffed the growing successes of chemistry, developing a completely esoteric view of alchemy. Atwood wrote: "No modern art or chemistry, notwithstanding all its surreptitious claims, has any thing in common with Alchemy." Atwood's work influenced subsequent authors of the occult revival including Eliphas Levi, Arthur Edward Waite, and Rudolf Steiner. Hitchcock, in his Remarks Upon Alchymists (1855), attempted to make a case for his spiritual interpretation with his claim that the alchemists wrote about a spiritual discipline under a materialistic guise in order to avoid accusations of blasphemy from the church and state. In 1845, Baron Carl Reichenbach published his studies on Odic force, a concept with some similarities to alchemy, but his research did not enter the mainstream of scientific discussion.
In 1946, Louis Cattiaux published the Message Retrouvé, a work that was at once philosophical, mystical and highly influenced by alchemy. In his lineage, many researchers, including Emmanuel and Charles d'Hooghvorst, are updating alchemical studies in France and Belgium.
Women
Several women appear in the earliest history of alchemy. Michael Maier names four women who were able to make the philosophers' stone: Mary the Jewess, Cleopatra the Alchemist, Medera, and Taphnutia. Zosimos' sister Theosebia (later known as Euthica the Arab) and Isis the Prophetess also played roles in early alchemical texts.
The first alchemist whose name we know was Mary the Jewess. Early sources claim that Mary (or Maria) devised a number of improvements to alchemical equipment and tools as well as novel techniques in chemistry. Her best known advances were in heating and distillation processes. The laboratory water-bath, known eponymously (especially in France) as the bain-marie, is said to have been invented or at least improved by her. Essentially a double-boiler, it was (and is) used in chemistry for processes that require gentle heating. The tribikos (a modified distillation apparatus) and the kerotakis (a more intricate apparatus used especially for sublimations) are two other advancements in the process of distillation that are credited to her. Although we have no writing from Mary herself, she is known from the early-fourth-century writings of Zosimos of Panopolis. After the Greco-Roman period, women's names appear less frequently in alchemical literature.
Towards the end of the Middle Ages and beginning of the Renaissance, due to the emergence of print, women were able to access the alchemical knowledge from texts of the preceding centuries. Caterina Sforza, the Countess of Forlì and Lady of Imola, is one of the few confirmed female alchemists after Mary the Jewess. As she owned an apothecary, she would practice science and conduct experiments in her botanic gardens and laboratories. Being knowledgeable in alchemy and pharmacology, she recorded all of her alchemical ventures in a manuscript named Experimenti ('Experiments'). The manuscript contained more than four hundred recipes covering alchemy as well as cosmetics and medicine. One of these recipes was for the water of talc. Talc, which makes up talcum powder, is a mineral which, when combined with water and distilled, was said to produce a solution yielding many benefits, including turning silver to gold and rejuvenation. Mixed with white wine and drunk, its powder form was said to counteract poison and to protect from any sickness or plague. Other recipes were for making hair dyes, lotions, and lip colours. There was also information on how to treat a variety of ailments from fevers and coughs to epilepsy and cancer. In addition, there were instructions on producing the quintessence (or aether), an elixir which was believed to be able to heal all sicknesses, defend against diseases, and perpetuate youthfulness. She also wrote about creating the illustrious philosophers' stone.
Some women known for their interest in alchemy were Catherine de' Medici, the Queen of France, and Marie de' Medici, the following Queen of France, who carried out experiments in her personal laboratory. Also, Isabella d'Este, the Marchioness of Mantua, made perfumes herself to serve as gifts. Due to the proliferation in alchemical literature of pseudepigrapha and anonymous works, however, it is difficult to know which of the alchemists were actually women. This contributed to a broader pattern in which male authors credited prominent noblewomen for beauty products with the purpose of appealing to a female audience. For example, in one such "Gallant Recipe-Book", the distillation of lemons and roses was attributed to Elisabetta Gonzaga, the Duchess of Urbino. In the same book, Isabella d'Aragona, the daughter of Alfonso II of Naples, is credited with recipes involving alum and mercury. Ippolita Maria Sforza is even referred to in an anonymous manuscript about a hand lotion created with rose powder and crushed bones.
As the sixteenth century went on, scientific culture flourished and people began collecting "secrets". During this period "secrets" referred to experiments, and the most coveted ones were not those which were bizarre, but the ones which had been proven to yield the desired outcome. In this period, the only book of secrets ascribed to a woman was The Secrets of Signora Isabella Cortese. This book contained information on how to turn base metals into gold, medicine, and cosmetics. However, it is rumoured that a man, Girolamo Ruscelli, was the real author and only used a female voice to attract female readers.
In the nineteenth century, Mary Anne Atwood's A Suggestive Inquiry into the Hermetic Mystery (1850) marked the return of women during the occult revival.
Modern historical research
The history of alchemy has become a recognized subject of academic study. As the language of the alchemists is analysed, historians are becoming more aware of the connections between that discipline and other facets of Western cultural history, such as the evolution of science and philosophy, the sociology and psychology of the intellectual communities, kabbalism, spiritualism, Rosicrucianism, and other mystic movements. Institutions involved in this research include The Chymistry of Isaac Newton project at Indiana University, the University of Exeter Centre for the Study of Esotericism (EXESESO), the European Society for the Study of Western Esotericism (ESSWE), and the University of Amsterdam's Sub-department for the History of Hermetic Philosophy and Related Currents. A large collection of books on alchemy is kept in the Bibliotheca Philosophica Hermetica in Amsterdam.
Journals which publish regularly on the topic of alchemy include Ambix, published by the Society for the History of Alchemy and Chemistry, and Isis, published by the History of Science Society.
Core concepts
Western alchemical theory corresponds to the worldview of late antiquity in which it was born. Concepts were imported from Neoplatonism and earlier Greek cosmology. As such, the classical elements appear in alchemical writings, as do the seven classical planets and the corresponding seven metals of antiquity. Similarly, the gods of the Roman pantheon who are associated with these luminaries are discussed in alchemical literature. The concepts of prima materia and anima mundi are central to the theory of the philosopher's stone.
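The pairing of the seven classical planets with the seven metals of antiquity recurs throughout alchemical literature; the lookup table below shows the conventional correspondence (individual texts vary in detail, so treat this as the common default rather than a universal rule):

```python
# Conventional planet-to-metal correspondences of Western alchemy.
PLANET_METALS = {
    "Sun": "gold",
    "Moon": "silver",
    "Mercury": "quicksilver (mercury)",
    "Venus": "copper",
    "Mars": "iron",
    "Jupiter": "tin",
    "Saturn": "lead",
}

for planet, metal in PLANET_METALS.items():
    print(f"{planet:8} -> {metal}")
```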
Magnum opus
The Great Work of Alchemy is often described as a series of four stages represented by colours:
nigredo, a blackening or melanosis
albedo, a whitening or leucosis
citrinitas, a yellowing or xanthosis
rubedo, a reddening, purpling, or iosis
Modernity
Due to the complexity and obscurity of alchemical literature, and the 18th-century diffusion of remaining alchemical practitioners into the area of chemistry, the general understanding of alchemy in the 19th and 20th centuries was influenced by several distinct and radically different interpretations. Those focusing on the exoteric, such as historians of science Lawrence M. Principe and William R. Newman, have interpreted the 'Decknamen' (or code words) of alchemy as physical substances. These scholars have reconstructed physicochemical experiments that they say are described in medieval and early modern texts. At the opposite end of the spectrum, focusing on the esoteric, scholars, such as Florin George Călian and Anna Marie Roos, who question the reading of Principe and Newman, interpret these same Decknamen as spiritual, religious, or psychological concepts.
New interpretations of alchemy are still perpetuated, sometimes incorporating concepts from New Age or radical environmentalist movements. Groups like the Rosicrucians and Freemasons have a continued interest in alchemy and its symbolism. Since the Victorian revival of alchemy, "occultists reinterpreted alchemy as a spiritual practice, involving the self-transformation of the practitioner and only incidentally or not at all the transformation of laboratory substances", which has contributed to a merger of magic and alchemy in popular thought.
Esoteric interpretations of historical texts
In the eyes of a variety of modern esoteric and Neo-Hermetic practitioners, alchemy is primarily spiritual. In this interpretation, transmutation of lead into gold is presented as an analogy for personal transmutation, purification, and perfection.
According to this view, early alchemists such as Zosimos of Panopolis highlighted the spiritual nature of the alchemical quest, symbolic of a religious regeneration of the human soul. This approach is held to have continued in the Middle Ages, as metaphysical aspects, substances, physical states, and material processes are supposed to have been used as metaphors for spiritual entities, spiritual states, and, ultimately, transformation. In this sense, the literal meanings of 'Alchemical Formulas' hid a spiritual philosophy. In the Neo-Hermeticist interpretation, both the transmutation of common metals into gold and the universal panacea are held to symbolize evolution from an imperfect, diseased, corruptible, and ephemeral state toward a perfect, healthy, incorruptible, and everlasting state, so the philosopher's stone then represented a mystic key that would make this evolution possible. Applied to the alchemist, the twin goal symbolized their evolution from ignorance to enlightenment, and the stone represented a hidden spiritual truth or power that would lead to that goal. In texts that are believed to have been written according to this view, the cryptic alchemical symbols, diagrams, and textual imagery of late alchemical works are supposed to contain multiple layers of meanings, allegories, and references to other equally cryptic works, which must be laboriously decoded to discover their true meaning.
In his 1766 Alchemical Catechism, Théodore Henri de Tschudi suggested that the usage of the metals was symbolic.
Psychology
Alchemical symbolism has been important in analytical psychology and was revived and popularized from near extinction by the Swiss psychologist Carl Gustav Jung. Jung was initially confounded and at odds with alchemy and its images, but after being given a copy of The Secret of the Golden Flower, a Chinese alchemical text translated by his friend Richard Wilhelm, he discovered a direct correlation or parallel between the symbolic images in the alchemical drawings and the inner, symbolic images coming up in his patients' dreams, visions, or fantasies. He observed these alchemical images occurring during the psychic process of transformation, a process that Jung called "individuation". Specifically, he regarded the conjuring up of images of gold or Lapis as symbolic expressions of the origin and goal of this "process of individuation". Together with his alchemical mystica soror (mystical sister), the Jungian Swiss analyst Marie-Louise von Franz, Jung began collecting old alchemical texts, compiled a lexicon of key phrases with cross-references, and pored over them. The volumes of work he wrote shed new light onto understanding the art of transubstantiation and renewed alchemy's popularity as a symbolic process of coming into wholeness as a human being where opposites are brought into contact and inner and outer, spirit and matter are reunited in the hieros gamos, or divine marriage. His writings are influential in general psychology, but especially to those who have an interest in understanding the importance of dreams, symbols, and the unconscious archetypal forces (archetypes) that comprise all psychic life.
Both von Franz and Jung have contributed significantly to the subject and work of alchemy and its continued presence in psychology as well as contemporary culture. Among the volumes Jung wrote on alchemy, his magnum opus is Volume 14 of his Collected Works, Mysterium Coniunctionis.
Literature
Alchemy has had a long-standing relationship with art, seen both in alchemical texts and in mainstream entertainment. Literary alchemy appears throughout the history of English literature from Shakespeare to J. K. Rowling, and also in the popular Japanese manga Fullmetal Alchemist. Here, characters or plot structure follow an alchemical magnum opus. In the 14th century, Chaucer began a trend of alchemical satire that can still be seen in recent fantasy works like those of the late Sir Terry Pratchett. Another literary work taking inspiration from the alchemical tradition is the 1988 novel The Alchemist by Brazilian writer Paulo Coelho.
Visual artists have had a similar relationship with alchemy. While some used it as a source of satire, others worked with the alchemists themselves or integrated alchemical thought or symbols in their work. Music was also present in the works of alchemists and continues to influence popular performers. In the last hundred years, alchemists have been portrayed in a magical and spagyric role in fantasy fiction, film, television, novels, comics and video games.
Science
One goal of alchemy, the transmutation of base substances into gold, is now known to be impossible by means of traditional chemistry, but possible by other physical means. Although not financially worthwhile, gold was synthesized in particle accelerators as early as 1941.
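As a concrete illustration of such a physical route, consider neutron capture on mercury. The chain below is a worked example of the nuclear bookkeeping, not necessarily the specific reactions of the 1941 work (which reportedly bombarded mercury with neutrons and produced radioactive gold isotopes): mercury-196 captures a neutron, and the resulting mercury-197 decays by electron capture to stable gold-197.

```latex
% Illustrative transmutation chain: neutron capture on mercury-196,
% followed by electron capture to stable gold-197.
\[
{}^{196}_{80}\mathrm{Hg} + n \longrightarrow {}^{197}_{80}\mathrm{Hg} + \gamma,
\qquad
{}^{197}_{80}\mathrm{Hg} + e^{-} \longrightarrow {}^{197}_{79}\mathrm{Au} + \nu_{e}.
\]
```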
See also
Alchemical symbol
Chemistry
Corentin Louis Kervran § Biological transmutation
Cupellation
Historicism
History of chemistry
List of alchemical substances
List of alchemists
List of obsolete occupations
Nuclear transmutation
Outline of alchemy
Porta Alchemica
Renaissance magic
Spagyric
Superseded theories in science
Synthesis of precious metals
Thaumaturgy
Western esotericism
Notes
References
Citations
Sources used
Bibliography
Greco-Egyptian alchemy
Texts
Marcellin Berthelot and Charles-Émile Ruelle (eds.), Collection des anciens alchimistes grecs (CAAG), 3 vols., 1887–1888. Vol. 1: https://gallica.bnf.fr/ark:/12148/bpt6k96492923, Vol. 2: https://gallica.bnf.fr/ark:/12148/bpt6k9680734p, Vol. 3: https://gallica.bnf.fr/ark:/12148/bpt6k9634942s.
André-Jean Festugière, La Révélation d'Hermès Trismégiste, Paris, Les Belles Lettres, 2014 (OCLC 897235256).
Les alchimistes grecs, t. 1: Papyrus de Leyde – Papyrus de Stockholm – Recettes, Paris, Les Belles Lettres, 1981.
Otto Lagercrantz (ed.), Papyrus Graecus Holmiensis (P. Holm.): Recepte für Silber, Steine und Purpur, Uppsala, A.B. Akademiska Bokhandeln, 1913.
Michèle Mertens (ed.), Les alchimistes grecs, t. 4.1: Zosime de Panopolis. Mémoires authentiques, Paris, Les Belles Lettres, 1995.
Andrée Collinet (ed.), Les alchimistes grecs, t. 10: L'Anonyme de Zuretti ou l'Art sacré et divin de la chrysopée par un anonyme, Paris, Les Belles Lettres, 2000.
Andrée Collinet (ed.), Les alchimistes grecs, t. 11: Recettes alchimiques (Par. Gr. 2419; Holkhamicus 109) – Cosmas le Hiéromoine – Chrysopée, Paris, Les Belles Lettres, 2000.
Matteo Martelli (ed.), The Four Books of Pseudo-Democritus, Maney Publishing, 2014.
Studies
Dylan M. Burns, " μίξεώς τινι τέχνῃ κρείττονι : Alchemical Metaphor in the Paraphrase of Shem (NHC VII,1) ", Aries 15 (2015), p. 79–106.
Alberto Camplani, " Procedimenti magico-alchemici e discorso filosofico ermetico " in Giuliana Lanata (ed.), Il Tardoantico alle soglie del Duemila, ETS, 2000, p. 73–98.
Alberto Camplani and Marco Zambon, " Il sacrificio come problema in alcune correnti filosofice di età imperiale ", Annali di storia dell'esegesi 19 (2002), p. 59–99.
Régine Charron and Louis Painchaud, "'God is a Dyer': The Background and Significance of a Puzzling Motif in the Coptic Gospel According to Philip (CG II, 3)", Le Muséon 114 (2001), p. 41–50.
Régine Charron, " The Apocryphon of John (NHC II,1) and the Greco-Egyptian Alchemical Literature ", Vigiliae Christinae 59 (2005), p. 438-456.
Philippe Derchain, "L'Atelier des Orfèvres à Dendara et les origines de l'alchimie," Chronique d'Égypte, vol. 65, no 130, 1990, p. 219–242.
Korshi Dosoo, " A History of the Theban Magical Library ", Bulletin of the American Society of Papyrologists 53 (2016), p. 251–274.
Olivier Dufault, Early Greek Alchemy, Patronage and Innovation in Late Antiquity, California Classical Studies, 2019.
Sergio Knipe, " Sacrifice and self-transformation in the alchemical writings of Zosimus of Panopolis ", in Christopher Kelly, Richard Flower, Michael Stuart Williams (eds.), Unclassical Traditions. Volume II: Perspectives from East and West in Late Antiquity, Cambridge University Press, 2011, p. 59–69.
Kyle A. Fraser, " Zosimos of Panopolis and the Book of Enoch: Alchemy as Forbidden Knowledge ", Aries 4.2 (2004), p. 125–147.
Kyle A. Fraser, " Baptized in Gnosis: The Spiritual Alchemy of Zosimos of Panopolis ", Dionysius 25 (2007), p. 33–54.
Kyle A. Fraser, " Distilling Nature's Secrets: The Sacred Art of Alchemy ", in John Scarborough and Paul Keyser (eds.), Oxford Handbook of Science and Medicine in the Classical World, Oxford University Press, 2018, p. 721–742. 2018. .
Shannon Grimes, Becoming Gold: Zosimos of Panopolis and the Alchemical Arts in Roman Egypt, Auckland, Rubedo Press, 2018.
Paul T. Keyser, " Greco-Roman Alchemy and Coins of Imitation Silver ", American Journal of Numismatics 7–8 (1995–1996), p. 209–234.
Paul Keyser, " The Longue Durée of Alchemy ", in John Scarborough and Paul Keyser (eds.), Oxford Handbook of Science and Medicine in the Classical World, Oxford University Press, 2018, p. 409–430.
Jean Letrouit, "Chronologie des alchimistes grecs," in Didier Kahn and Sylvain Matton, Alchimie: art, histoire et mythes, SEHA-Archè, 1995, p. 11–93.
Jack Lindsay, The Origins of Alchemy in Greco-Roman Egypt, Barnes & Noble, 1970.
Paul Magdalino and Maria Mavroudi (eds.), The Occult Sciences in Byzantium, La Pomme d'or, 2006.
Matteo Martelli, " Alchemy, Medicine and Religion: Zosimus of Panopolis and the Egyptian Priests ", Religion in the Roman Empire 3.2 (2017), p. 202–220.
Daniel Stolzenberg, " Unpropitious Tinctures: Alchemy, Astrology & Gnosis According to Zosimos of Panopolis ", Archives internationales d'histoire des sciences 49 (1999), p. 3–31.
Cristina Viano, " Byzantine Alchemy, or the Era of Systematization ", in John Scarborough and Paul Keyser (eds.), Oxford Handbook of Science and Medicine in the Classical World, Oxford University Press, 2018, p. 943–964.
Early modern
Lawrence Principe and William Newman, Alchemy Tried in the Fire: Starkey, Boyle, and the Fate of Helmontian Chymistry, University of Chicago Press, 2002.
External links
SHAC: Society for the History of Alchemy and Chemistry
ESSWE: European Society for the Study of Western Esotericism
Association for the Study of Esotericism
Eastern esotericism
Western esotericism
Natural philosophy
History of science | Alchemy | [
"Technology"
] | 12,672 | [
"History of science",
"History of science and technology"
] |
580 | https://en.wikipedia.org/wiki/Astronomer | An astronomer is a scientist in the field of astronomy who focuses on a specific question or field outside the scope of Earth. Astronomers observe astronomical objects, such as stars, planets, moons, comets and galaxies – in either observational (by analyzing the data) or theoretical astronomy. Examples of topics or fields astronomers study include planetary science, solar astronomy, the origin or evolution of stars, or the formation of galaxies. A related but distinct subject is physical cosmology, which studies the Universe as a whole.
Types
Astronomers typically fall into one of two main types: observational and theoretical. Observational astronomers make direct observations of celestial objects and analyze the data. In contrast, theoretical astronomers create and investigate models of things that cannot be observed. Because it takes millions to billions of years for a system of stars or a galaxy to complete a life cycle, astronomers must observe snapshots of different systems at unique points in their evolution to determine how they form, evolve, and die. They use this data to create models or simulations to theorize how different celestial objects work.
Further subcategories under these two main branches of astronomy include planetary astronomy, astrobiology, stellar astronomy, astrometry, galactic astronomy, extragalactic astronomy, or physical cosmology. Astronomers can also specialize in certain specialties of observational astronomy, such as infrared astronomy, neutrino astronomy, x-ray astronomy, and gravitational-wave astronomy.
Academic
History
Historically, astronomy was more concerned with the classification and description of phenomena in the sky, while astrophysics attempted to explain these phenomena and the differences between them using physical laws. Today, that distinction has mostly disappeared and the terms "astronomer" and "astrophysicist" are interchangeable. Professional astronomers are highly educated individuals who typically have a PhD in physics or astronomy and are employed by research institutions or universities. They spend the majority of their time working on research, although they quite often have other duties such as teaching, building instruments, or aiding in the operation of an observatory.
The American Astronomical Society, which is the major organization of professional astronomers in North America, has approximately 8,200 members (as of 2024). This number includes scientists from other fields such as physics, geology, and engineering, whose research interests are closely related to astronomy. The International Astronomical Union comprises about 12,700 members from 92 countries who are involved in astronomical research at the PhD level and beyond (as of 2024).
Contrary to the classical image of an old astronomer peering through a telescope through the dark hours of the night, it is far more common to use a charge-coupled device (CCD) camera to record a long, deep exposure, allowing a more sensitive image to be created because the light is added over time. Before CCDs, photographic plates were a common method of observation. Modern astronomers spend relatively little time at telescopes, usually just a few weeks per year. Analysis of observed phenomena, along with making predictions as to the causes of what they observe, takes the majority of observational astronomers' time.
Activities and graduate degree training
Astronomers who serve as faculty spend much of their time teaching undergraduate and graduate classes. Most universities also have outreach programs, including public telescope time and sometimes planetariums, as a public service to encourage interest in the field.
Those who become astronomers usually have a broad background in physics, mathematics, sciences, and computing in high school. Taking courses that teach how to research, write, and present papers is part of the higher education of an astronomer, and most astronomers attain both a Master's degree and eventually a PhD degree in astronomy, physics or astrophysics.
PhD training typically involves 5–6 years of study, including completion of upper-level courses in the core sciences, a competency examination, experience with teaching undergraduates and participating in outreach programs, work on research projects under the student's supervising professor, completion of a PhD thesis, and passing a final oral exam. Throughout the PhD training, a successful student is financially supported with a stipend.
Amateur astronomers
While there is a relatively low number of professional astronomers, the field is popular among amateurs. Most cities have amateur astronomy clubs that meet on a regular basis and often host star parties. The Astronomical Society of the Pacific is the largest general astronomical society in the world, comprising both professional and amateur astronomers as well as educators from 70 different nations.
As with any hobby, most people who practice amateur astronomy may devote a few hours a month to stargazing and reading the latest developments in research. However, amateurs span the range from so-called "armchair astronomers" to the highly ambitious people who own science-grade telescopes and instruments with which they are able to make their own discoveries, create astrophotographs, and assist professional astronomers in research.
See also
List of astronomers
List of women astronomers
List of Muslim astronomers
List of French astronomers
List of Hungarian astronomers
List of Russian astronomers and astrophysicists
List of Slovenian astronomers
References
Sources
External links
American Astronomical Society
European Astronomical Society
International Astronomical Union
Astronomical Society of the Pacific
Space's astronomy news
Astronomy
Science occupations | Astronomer | [
"Astronomy"
] | 1,022 | [
"Astronomers",
"nan",
"People associated with astronomy"
] |
586 | https://en.wikipedia.org/wiki/ASCII | ASCII ( ), an acronym for American Standard Code for Information Interchange, is a character encoding standard for electronic communication. ASCII codes represent text in computers, telecommunications equipment, and other devices. ASCII has just 128 code points, of which only 95 are , which severely limit its scope. The set of available punctuation had significant impact on the syntax of computer languages and text markup. ASCII hugely influenced the design of character sets used by modern computers, including Unicode which has over a million code points, but the first 128 of these are the same as ASCII.
The Internet Assigned Numbers Authority (IANA) prefers the name US-ASCII for this character encoding.
ASCII is one of the IEEE milestones.
Overview
ASCII was developed in part from telegraph code. Its first commercial use was in the Teletype Model 33 and the Teletype Model 35 as a seven-bit teleprinter code promoted by Bell data services. Work on the ASCII standard began in May 1961, with the first meeting of the American Standards Association's (ASA) (now the American National Standards Institute or ANSI) X3.2 subcommittee. The first edition of the standard was published in 1963, underwent a major revision during 1967, and experienced its most recent update during 1986. Compared to earlier telegraph codes, the proposed Bell code and ASCII were both ordered for more convenient sorting (i.e., alphabetization) of lists and added features for devices other than teleprinters.
The use of ASCII format for Network Interchange was described in 1969. That document was formally elevated to an Internet Standard in 2015.
Originally based on the (modern) English alphabet, ASCII encodes 128 specified characters into seven-bit integers as shown by the ASCII chart in this article. Ninety-five of the encoded characters are printable: these include the digits 0 to 9, lowercase letters a to z, uppercase letters A to Z, and punctuation symbols. In addition, the original ASCII specification included 33 non-printing control codes which originated with Teletype machines; most of these are now obsolete, although a few are still commonly used, such as the carriage return, line feed, and tab codes.
For example, lowercase i would be represented in the ASCII encoding by binary 1101001 = hexadecimal 69 (i is the ninth letter) = decimal 105.
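These correspondences are easy to check in any language; a minimal Python sketch (illustrative only, not part of the standard):

```python
# Illustrative check: Python's ord()/chr() use the same code points ASCII assigns.
c = "i"
code = ord(c)
print(bin(code))   # 0b1101001 -- the seven-bit pattern 1101001
print(hex(code))   # 0x69
print(chr(0x69))   # 'i'
```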
Despite being an American standard, ASCII does not have a code point for the cent (¢). It also does not support English terms with diacritical marks such as résumé and jalapeño, or proper nouns with diacritical marks such as Beyoncé (although on certain devices characters could be combined with punctuation such as the tilde (~) and backtick (`) to approximate such characters).
History
The American Standard Code for Information Interchange (ASCII) was developed under the auspices of a committee of the American Standards Association (ASA), called the X3 committee, by its X3.2 (later X3L2) subcommittee, and later by that subcommittee's X3.2.4 working group (now INCITS). The ASA later became the United States of America Standards Institute (USASI) and ultimately became the American National Standards Institute (ANSI).
With the other special characters and control codes filled in, ASCII was published as ASA X3.4-1963, leaving 28 code positions without any assigned meaning, reserved for future standardization, and one unassigned control code. There was some debate at the time whether there should be more control characters rather than the lowercase alphabet. The indecision did not last long: during May 1963 the CCITT Working Party on the New Telegraph Alphabet proposed to assign lowercase characters to sticks 6 and 7, and International Organization for Standardization TC 97 SC 2 voted during October to incorporate the change into its draft standard. The X3.2.4 task group voted its approval for the change to ASCII at its May 1963 meeting. Locating the lowercase letters in sticks 6 and 7 caused the characters to differ in bit pattern from the upper case by a single bit, which simplified case-insensitive character matching and the construction of keyboards and printers.
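The single-bit distance between the cases is easy to see in the bit patterns; a short Python sketch (assuming nothing beyond standard ASCII):

```python
# 'A' is 0x41 and 'a' is 0x61: each pair differs only in bit 5 (0x20),
# so ASCII case can be folded with a single bitwise operation.
for ch in "AaZz":
    print(ch, format(ord(ch), "07b"))

def ascii_to_upper(ch: str) -> str:
    """Clear bit 5 to fold an ASCII lowercase letter to uppercase."""
    code = ord(ch)
    if 0x61 <= code <= 0x7A:  # 'a'..'z'
        code &= ~0x20
    return chr(code)

print(ascii_to_upper("q"))  # 'Q'
```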
The X3 committee made other changes, including other new characters (the brace and vertical bar characters), renaming some control characters (SOM became start of header (SOH)) and moving or removing others (RU was removed). ASCII was subsequently updated as USAS X3.4-1967, then USAS X3.4-1968, ANSI X3.4-1977, and finally, ANSI X3.4-1986.
Revisions
ASA X3.4-1963
ASA X3.4-1965 (approved, but not published, nevertheless used by IBM 2260 & 2265 Display Stations and IBM 2848 Display Control)
USAS X3.4-1967
USAS X3.4-1968
ANSI X3.4-1977
ANSI X3.4-1986
ANSI X3.4-1986 (R1992)
ANSI X3.4-1986 (R1997)
ANSI INCITS 4-1986 (R2002)
ANSI INCITS 4-1986 (R2007)
INCITS 4-1986 (R2012)
INCITS 4-1986 (R2017)
INCITS 4-1986 (R2022)
In the X3.15 standard, the X3 committee also addressed how ASCII should be transmitted (least significant bit first) and recorded on perforated tape. They proposed a 9-track standard for magnetic tape and attempted to deal with some punched card formats.
Design considerations
Bit width
The X3.2 subcommittee designed ASCII based on the earlier teleprinter encoding systems. Like other character encodings, ASCII specifies a correspondence between digital bit patterns and character symbols (i.e. graphemes and control characters). This allows digital devices to communicate with each other and to process, store, and communicate character-oriented information such as written language. Before ASCII was developed, the encodings in use included 26 alphabetic characters, 10 numerical digits, and from 11 to 25 special graphic symbols. To include all these, and control characters compatible with the Comité Consultatif International Téléphonique et Télégraphique (CCITT) International Telegraph Alphabet No. 2 (ITA2) standard of 1932, FIELDATA (1956), and early EBCDIC (1963), more than 64 codes were required for ASCII.
ITA2 was in turn based on Baudot code, the 5-bit telegraph code Émile Baudot invented in 1870 and patented in 1874.
The committee debated the possibility of a shift function (like in ITA2), which would allow more than 64 codes to be represented by a six-bit code. In a shifted code, some character codes determine choices between options for the following character codes. It allows compact encoding, but is less reliable for data transmission, as an error in transmitting the shift code typically makes a long part of the transmission unreadable. The standards committee decided against shifting, and so ASCII required at least a seven-bit code.
The committee considered an eight-bit code, since eight bits (octets) would allow two four-bit patterns to efficiently encode two digits with binary-coded decimal. However, it would require all data transmission to send eight bits when seven could suffice. The committee voted to use a seven-bit code to minimize costs associated with data transmission. Since perforated tape at the time could record eight bits in one position, it also allowed for a parity bit for error checking if desired. Eight-bit machines (with octets as the native data type) that did not use parity checking typically set the eighth bit to 0.
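As a sketch of how the spare eighth bit could carry parity, here is a hypothetical even-parity helper in Python (an illustration, not a historical implementation):

```python
def with_even_parity(code7: int) -> int:
    """Return an 8-bit value whose top bit makes the total count of 1s even."""
    assert 0 <= code7 < 128, "expects a 7-bit ASCII code"
    parity = bin(code7).count("1") & 1  # 1 if the 7-bit code has an odd number of 1s
    return code7 | (parity << 7)

# ord('i') = 0b1101001 already has an even number of 1s, so bit 7 stays 0:
print(format(with_even_parity(ord("i")), "08b"))  # 01101001
```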
Internal organization
The code itself was patterned so that most control codes were together and all graphic codes were together, for ease of identification. The first two so-called ASCII sticks (32 positions) were reserved for control characters. The "space" character had to come before graphics to make sorting easier, so it became position 20hex; for the same reason, many special signs commonly used as separators were placed before digits. The committee decided it was important to support uppercase 64-character alphabets, and chose to pattern ASCII so it could be reduced easily to a usable 64-character set of graphic codes, as was done in the DEC SIXBIT code (1963). Lowercase letters were therefore not interleaved with uppercase. To keep options available for lowercase letters and other graphics, the special and numeric codes were arranged before the letters, and the letter A was placed in position 41hex to match the draft of the corresponding British standard. The digits 0–9 are prefixed with 011, but the remaining 4 bits correspond to their respective values in binary, making conversion with binary-coded decimal straightforward (for example, 5 is encoded as 0110101, where 5 is 0101 in binary).
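This layout means a digit character's numeric value can be read off its low four bits; a brief Python illustration:

```python
# Digits occupy 0x30..0x39 ("011" plus the digit's value), so masking the
# low nibble (equivalently, subtracting 0x30) recovers the numeric value.
for d in "0579":
    code = ord(d)
    print(d, format(code, "07b"), code & 0x0F)
```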
Many of the non-alphanumeric characters were positioned to correspond to their shifted position on typewriters; an important subtlety is that these were based on mechanical typewriters, not electric typewriters. Mechanical typewriters followed the de facto standard set by the Remington No. 2 (1878), the first typewriter with a shift key, and the shifted values of 23456789- were "#$%_&'(); early typewriters omitted 0 and 1, using O (capital letter o) and l (lowercase letter L) instead, but 1! and 0) pairs became standard once 0 and 1 became common. Thus, in ASCII !"#$% were placed in the second stick, positions 1–5, corresponding to the digits 1–5 in the adjacent stick. The parentheses could not correspond to 9 and 0, however, because the place corresponding to 0 was taken by the space character. This was accommodated by removing _ (underscore) from 6 and shifting the remaining characters, which corresponded to many European typewriters that placed the parentheses with 8 and 9. This discrepancy from typewriters led to bit-paired keyboards, notably the Teletype Model 33, which used the left-shifted layout corresponding to ASCII, differently from traditional mechanical typewriters.
Electric typewriters, notably the IBM Selectric (1961), used a somewhat different layout that has become de facto standard on computers following the IBM PC (1981), especially Model M (1984) and thus shift values for symbols on modern keyboards do not correspond as closely to the ASCII table as earlier keyboards did. The /? pair also dates to the No. 2, and the ,< .> pairs were used on some keyboards (others, including the No. 2, did not shift , (comma) or . (full stop) so they could be used in uppercase without unshifting). However, ASCII split the ;: pair (dating to No. 2), and rearranged mathematical symbols (varied conventions, commonly -* =+) to :* ;+ -=.
Some then-common typewriter characters were not included, notably ½ ¼ ¢, while ^ ` ~ were included as diacritics for international use, and < > for mathematical use, together with the simple line characters \ | (in addition to common /). The @ symbol was not used in continental Europe and the committee expected it would be replaced by an accented À in the French variation, so the @ was placed in position 40hex, right before the letter A.
The control codes felt essential for data transmission were the start of message (SOM), end of address (EOA), end of message (EOM), end of transmission (EOT), "who are you?" (WRU), "are you?" (RU), a reserved device control (DC0), synchronous idle (SYNC), and acknowledge (ACK). These were positioned to maximize the Hamming distance between their bit patterns.
Character order
ASCII-code order is also called ASCIIbetical order. Collation of data is sometimes done in this order rather than "standard" alphabetical order (collating sequence). The main deviations in ASCII order are:
All uppercase come before lowercase letters; for example, "Z" precedes "a"
Digits and many punctuation marks come before letters
An intermediate order converts uppercase letters to lowercase before comparing ASCII values.
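The difference between the two orders shows up in any sorted list; a brief Python sketch:

```python
# ASCIIbetical order puts every uppercase letter before any lowercase one;
# folding case before comparing gives the "intermediate" order.
words = ["banana", "Apple", "cherry", "Zebra"]
print(sorted(words))                 # ['Apple', 'Zebra', 'banana', 'cherry']
print(sorted(words, key=str.lower))  # ['Apple', 'banana', 'cherry', 'Zebra']
```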
Character set
Character groups
Control characters
ASCII reserves the first 32 code points (numbers 0–31 decimal) and the last one (number 127 decimal) for control characters. These are codes intended to control peripheral devices (such as printers), or to provide meta-information about data streams, such as those stored on magnetic tape. Despite their name, these code points do not represent printable characters (i.e. they are not characters at all, but signals). For debugging purposes, "placeholder" symbols (such as those given in ISO 2047 and its predecessors) are assigned to them.
For example, character 0x0A represents the "line feed" function (which causes a printer to advance its paper), and character 8 represents "backspace". RFC 2822 refers to control characters that do not include carriage return, line feed or white space as non-whitespace control characters. Except for the control characters that prescribe elementary line-oriented formatting, ASCII does not define any mechanism for describing the structure or appearance of text within a document. Other schemes, such as markup languages, address page and document layout and formatting.
The original ASCII standard used only short descriptive phrases for each control character. The ambiguity this caused was sometimes intentional, for example where a character would be used slightly differently on a terminal link than on a data stream, and sometimes accidental, for example the standard is unclear about the meaning of "delete".
Probably the most influential single device affecting the interpretation of these characters was the Teletype Model 33 ASR, which was a printing terminal with an available paper tape reader/punch option. Paper tape was a very popular medium for long-term program storage until the 1980s, less costly and in some ways less fragile than magnetic tape. In particular, the Teletype Model 33 machine assignments for codes 17 (control-Q, DC1, also known as XON), 19 (control-S, DC3, also known as XOFF), and 127 (delete) became de facto standards. The Model 33 was also notable for taking the description of control-G (code 7, BEL, meaning audibly alert the operator) literally, as the unit contained an actual bell which it rang when it received a BEL character. Because the keytop for the O key also showed a left-arrow symbol (from ASCII-1963, which had this character instead of underscore), a noncompliant use of code 15 (control-O, shift in) interpreted as "delete previous character" was also adopted by many early timesharing systems but eventually became neglected.
When a Teletype 33 ASR equipped with the automatic paper tape reader received a control-S (XOFF, an abbreviation for transmit off), it caused the tape reader to stop; receiving control-Q (XON, transmit on) caused the tape reader to resume. This so-called flow control technique became adopted by several early computer operating systems as a "handshaking" signal warning a sender to stop transmission because of impending buffer overflow; it persists to this day in many systems as a manual output control technique. On some systems, control-S retains its meaning, but control-Q is replaced by a second control-S to resume output.
The 33 ASR also could be configured to employ control-R (DC2) and control-T (DC4) to start and stop the tape punch; on some units equipped with this function, the corresponding control character lettering on the keycap above the letter was TAPE and TAPE (the latter distinguished with an overline) respectively.
Delete vs backspace
The Teletype could not move its typehead backwards, so it did not have a key on its keyboard to send a BS (backspace). Instead, there was a key marked "Rubout" that sent code 127 (DEL). The purpose of this key was to erase mistakes in a manually-input paper tape: the operator had to push a button on the tape punch to back it up, then type the rubout, which punched all holes and replaced the mistake with a character that was intended to be ignored. Teletypes were commonly used with the less-expensive computers from Digital Equipment Corporation (DEC); these systems had to use what keys were available, and thus the DEL character was assigned to erase the previous character. Because of this, DEC video terminals (by default) sent the DEL character for the key marked "Backspace" while the separate key marked "Delete" sent an escape sequence; many other competing terminals sent a BS character for the backspace key.
The early Unix tty drivers, unlike some modern implementations, allowed only one character to be set to erase the previous character in canonical input processing (where a very simple line editor is available); this could be set to BS or DEL, but not both, resulting in recurring situations of ambiguity where users had to decide depending on what terminal they were using (shells that allow line editing, such as ksh, bash, and zsh, understand both). The assumption that no key sent a BS character allowed Ctrl+H to be used for other purposes, such as the "help" prefix command in GNU Emacs.
Escape
Many more of the control characters have been assigned meanings quite different from their original ones. The "escape" character (ESC, code 27), for example, was intended originally to allow sending of other control characters as literals instead of invoking their meaning, an "escape sequence". This is the same meaning of "escape" encountered in URL encodings, C language strings, and other systems where certain characters have a reserved meaning. Over time this interpretation has been co-opted and has eventually been changed.
In modern usage, an ESC sent to the terminal usually indicates the start of a command sequence, which can be used to address the cursor, scroll a region, set/query various terminal properties, and more. They are usually in the form of a so-called "ANSI escape code" (often starting with a "Control Sequence Introducer", "CSI", "ESC [") from ECMA-48 (1972) and its successors. Some escape sequences do not have introducers, like the "Reset to Initial State", "RIS" command "ESC c".
In contrast, an ESC read from the terminal is most often used as an out-of-band character used to terminate an operation or special mode, as in the TECO and vi text editors. In graphical user interface (GUI) and windowing systems, ESC generally causes an application to abort its current operation or to exit (terminate) altogether.
End of line
The inherent ambiguity of many control characters, combined with their historical usage, created problems when transferring "plain text" files between systems. The best example of this is the newline problem on various operating systems. Teletype machines required that a line of text be terminated with both "carriage return" (which moves the printhead to the beginning of the line) and "line feed" (which advances the paper one line without moving the printhead). The name "carriage return" comes from the fact that on a manual typewriter the carriage holding the paper moves while the typebars that strike the ribbon remain stationary. The entire carriage had to be pushed (returned) to the right in order to position the paper for the next line.
DEC operating systems (OS/8, RT-11, RSX-11, RSTS, TOPS-10, etc.) used both characters to mark the end of a line so that the console device (originally Teletype machines) would work. By the time so-called "glass TTYs" (later called CRTs or "dumb terminals") came along, the convention was so well established that backward compatibility necessitated continuing to follow it. When Gary Kildall created CP/M, he was inspired by some of the command line interface conventions used in DEC's RT-11 operating system.
Until the introduction of PC DOS in 1981, IBM had no influence in this because their 1970s operating systems used EBCDIC encoding instead of ASCII, and they were oriented toward punch-card input and line printer output on which the concept of "carriage return" was meaningless. IBM's PC DOS (also marketed as MS-DOS by Microsoft) inherited the convention by virtue of being loosely based on CP/M, and Windows in turn inherited it from MS-DOS.
Requiring two characters to mark the end of a line introduces unnecessary complexity and ambiguity as to how to interpret each character when encountered by itself. To simplify matters, plain text data streams, including files, on Multics used line feed (LF) alone as a line terminator. The tty driver would handle the LF to CRLF conversion on output so files can be directly printed to terminal, and NL (newline) is often used to refer to CRLF in UNIX documents. Unix and Unix-like systems, and Amiga systems, adopted this convention from Multics. On the other hand, the original Macintosh OS, Apple DOS, and ProDOS used carriage return (CR) alone as a line terminator; however, since Apple later replaced these obsolete operating systems with their Unix-based macOS (formerly named OS X) operating system, they now use line feed (LF) as well. The Radio Shack TRS-80 also used a lone CR to terminate lines.
Computers attached to the ARPANET included machines running operating systems such as TOPS-10 and TENEX using CR-LF line endings; machines running operating systems such as Multics using LF line endings; and machines running operating systems such as OS/360 that represented lines as a character count followed by the characters of the line and which used EBCDIC rather than ASCII encoding. The Telnet protocol defined an ASCII "Network Virtual Terminal" (NVT), so that connections between hosts with different line-ending conventions and character sets could be supported by transmitting a standard text format over the network. Telnet used ASCII along with CR-LF line endings, and software using other conventions would translate between the local conventions and the NVT. The File Transfer Protocol adopted the Telnet protocol, including use of the Network Virtual Terminal, for use when transmitting commands and transferring data in the default ASCII mode. This adds complexity to implementations of those protocols, and to other network protocols, such as those used for E-mail and the World Wide Web, on systems not using the NVT's CR-LF line-ending convention.
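Software that moves text between such systems still has to reconcile the conventions; a minimal Python normalizer, one common approach among several:

```python
def normalize_newlines(text: str) -> str:
    """Convert CR-LF (DOS/Windows) and lone CR (classic Mac OS) endings to LF."""
    return text.replace("\r\n", "\n").replace("\r", "\n")

sample = "dos line\r\nold mac line\runix line\n"
print(normalize_newlines(sample).splitlines())  # three cleanly separated lines
```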
End of file/stream
The PDP-6 monitor, and its PDP-10 successor TOPS-10, used control-Z (SUB) as an end-of-file indication for input from a terminal. Some operating systems such as CP/M tracked file length only in units of disk blocks, and used control-Z to mark the end of the actual text in the file. For these reasons, EOF, or end-of-file, was used colloquially and conventionally as a three-letter acronym for control-Z instead of SUBstitute. The end-of-text character (ETX), also known as control-C, was inappropriate for a variety of reasons, while using control-Z as the control character to end a file is analogous to the letter Z's position at the end of the alphabet, and serves as a very convenient mnemonic aid. A historically common and still prevalent convention uses the ETX character convention to interrupt and halt a program via an input data stream, usually from a keyboard.
The Unix terminal driver uses the end-of-transmission character (EOT), also known as control-D, to indicate the end of a data stream.
In the C programming language, and in Unix conventions, the null character is used to terminate text strings; such null-terminated strings can be known in abbreviation as ASCIZ or ASCIIZ, where here Z stands for "zero".
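Reading an ASCIZ string out of a raw buffer amounts to scanning up to the first zero byte; a small Python sketch:

```python
# Everything after the first NUL byte is ignored, as a C string reader would do.
buf = b"hello\x00trailing bytes beyond the terminator"
s = buf.split(b"\x00", 1)[0].decode("ascii")
print(s)  # 'hello'
```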
Table of codes
Control code table
Other representations might be used by specialist equipment, for example ISO 2047 graphics or hexadecimal numbers.
Printable character table
At the time of adoption, the codes 20hex to 7Ehex would cause the printing of a visible character (a glyph), and thus were designated "printable characters". These codes represent letters, digits, punctuation marks, and a few miscellaneous symbols. There are 95 printable characters in total.
The empty space between words, as produced by the space bar of a keyboard, is character code 20hex. Since the space character occupies visible space in printed text, it is considered a "printable character", even though it is unique in having no visible glyph. It is listed in the printable character table, as per the ASCII standard, instead of in the control character table.
Code 7Fhex corresponds to the non-printable "delete" (DEL) control character and is listed in the control character table.
Earlier versions of ASCII used the up arrow instead of the caret (5Ehex) and the left arrow instead of the underscore (5Fhex).
Usage
ASCII was first used commercially during 1963 as a seven-bit teleprinter code for American Telephone & Telegraph's TWX (TeletypeWriter eXchange) network. TWX originally used the earlier five-bit ITA2, which was also used by the competing Telex teleprinter system. Bob Bemer introduced features such as the escape sequence. His British colleague Hugh McGregor Ross helped to popularize this work according to Bemer, "so much so that the code that was to become ASCII was first called the Bemer–Ross Code in Europe". Because of his extensive work on ASCII, Bemer has been called "the father of ASCII".
On March 11, 1968, US President Lyndon B. Johnson mandated that all computers purchased by the United States Federal Government support ASCII, stating:
I have also approved recommendations of the Secretary of Commerce [Luther H. Hodges] regarding standards for recording the Standard Code for Information Interchange on magnetic tapes and paper tapes when they are used in computer operations.
All computers and related equipment configurations brought into the Federal Government inventory on and after July 1, 1969, must have the capability to use the Standard Code for Information Interchange and the formats prescribed by the magnetic tape and paper tape standards when these media are used.
ASCII was the most common character encoding on the World Wide Web until December 2007, when UTF-8 encoding surpassed it; UTF-8 is backward compatible with ASCII.
Variants and derivations
As computer technology spread throughout the world, different standards bodies and corporations developed many variations of ASCII to facilitate the expression of non-English languages that used Roman-based alphabets. One could class some of these variations as "ASCII extensions", although some misuse that term to represent all variants, including those that do not preserve ASCII's character-map in the 7-bit range. Furthermore, the ASCII extensions have also been mislabelled as ASCII.
7-bit codes
From early in its development, ASCII was intended to be just one of several national variants of an international character code standard.
Other international standards bodies have ratified character encodings such as ISO 646 (1967) that are identical or nearly identical to ASCII, with extensions for characters outside the English alphabet and symbols used outside the United States, such as the symbol for the United Kingdom's pound sterling (£); e.g. with code page 1104. Almost every country needed an adapted version of ASCII, since ASCII suited the needs of only the US and a few other countries. For example, Canada had its own version that supported French characters.
Many other countries developed variants of ASCII to include non-English letters (e.g. é, ñ, ß, Ł), currency symbols (e.g. £, ¥), etc. See also YUSCII (Yugoslavia).
It would share most characters in common, but assign other locally useful characters to several code points reserved for "national use". However, the four years that elapsed between the publication of ASCII-1963 and ISO's first acceptance of an international recommendation during 1967 caused ASCII's choices for the national use characters to seem to be de facto standards for the world, causing confusion and incompatibility once other countries did begin to make their own assignments to these code points.
ISO/IEC 646, like ASCII, is a 7-bit character set. It does not make any additional codes available, so the same code points encoded different characters in different countries. Escape codes were defined to indicate which national variant applied to a piece of text, but they were rarely used, so it was often impossible to know what variant to work with and, therefore, which character a code represented, and in general, text-processing systems could cope with only one variant anyway.
Because the bracket and brace characters of ASCII were assigned to "national use" code points that were used for accented letters in other national variants of ISO/IEC 646, a German, French, or Swedish, etc. programmer using their national variant of ISO/IEC 646, rather than ASCII, had to write, and thus read, something such as
ä aÄiÜ = 'Ön'; ü
instead of
{ a[i] = '\n'; }
C trigraphs were created to solve this problem for ANSI C, although their late introduction and inconsistent implementation in compilers limited their use. Many programmers kept their computers on ASCII, so plain-text in Swedish, German etc. (for example, in e-mail or Usenet) contained "{, }" and similar variants in the middle of words, something those programmers got used to. For example, a Swedish programmer mailing another programmer asking if they should go for lunch, could get "N{ jag har sm|rg}sar" as the answer, which should be "Nä jag har smörgåsar" meaning "No I've got sandwiches".
In Japan and Korea, a variation of ASCII is still in use, in which the backslash (5C hex) is rendered as ¥ (a Yen sign, in Japan) or ₩ (a Won sign, in Korea). This means that, for example, the file path C:\Users\Smith is shown as C:¥Users¥Smith (in Japan) or C:₩Users₩Smith (in Korea).
In Europe, teletext character sets, which are variants of ASCII, are used for broadcast TV subtitles, defined by World System Teletext and broadcast using the DVB-TXT standard for embedding teletext into DVB transmissions. In the case that the subtitles were initially authored for teletext and converted, the derived subtitle formats are constrained to the same character sets.
8-bit codes
Eventually, as 8-, 16-, and 32-bit (and later 64-bit) computers began to replace 12-, 18-, and 36-bit computers as the norm, it became common to use an 8-bit byte to store each character in memory, providing an opportunity for extended, 8-bit relatives of ASCII. In most cases these developed as true extensions of ASCII, leaving the original character-mapping intact, but adding additional character definitions after the first 128 (i.e., 7-bit) characters. ASCII itself remained a seven-bit code: the term "extended ASCII" has no official status.
For some countries, 8-bit extensions of ASCII were developed that included support for characters used in local languages; for example, ISCII for India and VISCII for Vietnam. Kaypro CP/M computers used the "upper" 128 characters for the Greek alphabet.
Even for markets where it was not necessary to add many characters to support additional languages, manufacturers of early home computer systems often developed their own 8-bit extensions of ASCII to include additional characters, such as box-drawing characters, semigraphics, and video game sprites. Often, these additions also replaced control characters (index 0 to 31, as well as index 127) with even more platform-specific extensions. In other cases, the extra bit was used for some other purpose, such as toggling inverse video; this approach was used by ATASCII, an extension of ASCII developed by Atari.
Most ASCII extensions are based on ASCII-1967 (the current standard), but some extensions are instead based on the earlier ASCII-1963. For example, PETSCII, which was developed by Commodore International for their 8-bit systems, is based on ASCII-1963. Likewise, many Sharp MZ character sets are based on ASCII-1963.
IBM defined code page 437 for the IBM PC, replacing the control characters with graphic symbols such as smiley faces, and mapping additional graphic characters to the upper 128 positions. Digital Equipment Corporation developed the Multinational Character Set (DEC-MCS) for use in the popular VT220 terminal as one of the first extensions designed more for international languages than for block graphics. Apple defined Mac OS Roman for the Macintosh and Adobe defined the PostScript Standard Encoding for PostScript; both sets contained "international" letters, typographic symbols and punctuation marks instead of graphics, more like modern character sets.
The ISO/IEC 8859 standard (derived from the DEC-MCS) provided a standard that most systems copied (or at least were based on, when not copied exactly). A popular further extension designed by Microsoft, Windows-1252 (often mislabeled as ISO-8859-1), added the typographic punctuation marks needed for traditional text printing. ISO-8859-1, Windows-1252, and the original 7-bit ASCII were the most common character encoding methods on the World Wide Web until 2008, when UTF-8 overtook them.
ISO/IEC 4873 introduced 32 additional control codes defined in the 80–9F hexadecimal range, as part of extending the 7-bit ASCII encoding to become an 8-bit system.
Unicode
Unicode and the ISO/IEC 10646 Universal Character Set (UCS) have a much wider array of characters and their various encoding forms have begun to supplant ISO/IEC 8859 and ASCII rapidly in many environments. While ASCII is limited to 128 characters, Unicode and the UCS support more characters by separating the concepts of unique identification (using natural numbers called code points) and encoding (to 8-, 16-, or 32-bit binary formats, called UTF-8, UTF-16, and UTF-32, respectively).
ASCII was incorporated into the Unicode (1991) character set as the first 128 symbols, so the 7-bit ASCII characters have the same numeric codes in both sets. This allows UTF-8 to be backward compatible with 7-bit ASCII, as a UTF-8 file containing only ASCII characters is identical to an ASCII file containing the same sequence of characters. Even more importantly, forward compatibility is ensured as software that recognizes only 7-bit ASCII characters as special and does not alter bytes with the highest bit set (as is often done to support 8-bit ASCII extensions such as ISO-8859-1) will preserve UTF-8 data unchanged.
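This compatibility is directly observable: a pure-ASCII string encodes to byte-for-byte identical output under both encodings. A short Python check:

```python
text = "plain ASCII text"
assert text.encode("ascii") == text.encode("utf-8")  # identical byte sequences

# Characters outside ASCII need multi-byte UTF-8 sequences with the high bit set:
print("é".encode("utf-8"))  # b'\xc3\xa9'
```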
See also
3568 ASCII – an asteroid named after the character encoding
Basic Latin (Unicode block) – ASCII as a subset of Unicode
HTML decimal character rendering
Jargon File – a glossary of computer programmer slang which includes a list of common slang names for ASCII characters
List of computer character sets
List of Unicode characters
Notes
References
Further reading
External links
Computer-related introductions in 1963
Character sets
Character encoding
Latin-script representations
Presentation layer protocols
American National Standards Institute standards | ASCII | [
"Technology"
] | 7,646 | [
"American National Standards Institute standards",
"Computer standards",
"Natural language and computing",
"Character encoding"
] |
612 | https://en.wikipedia.org/wiki/Arithmetic%20mean | In mathematics and statistics, the arithmetic mean ( ), arithmetic average, or just the mean or average (when the context is clear) is the sum of a collection of numbers divided by the count of numbers in the collection. The collection is often a set of results from an experiment, an observational study, or a survey. The term "arithmetic mean" is preferred in some mathematics and statistics contexts because it helps distinguish it from other types of means, such as geometric and harmonic.
In addition to mathematics and statistics, the arithmetic mean is frequently used in economics, anthropology, history, and almost every academic field to some extent. For example, per capita income is the arithmetic average income of a nation's population.
While the arithmetic mean is often used to report central tendencies, it is not a robust statistic: it is greatly influenced by outliers (values much larger or smaller than most others). For skewed distributions, such as the distribution of income for which a few people's incomes are substantially higher than most people's, the arithmetic mean may not coincide with one's notion of "middle". In that case, robust statistics, such as the median, may provide a better description of central tendency.
Definition
The arithmetic mean of a set of observed data is equal to the sum of the numerical values of each observation, divided by the total number of observations. Symbolically, for a data set consisting of the values $x_1, x_2, \ldots, x_n$, the arithmetic mean is defined by the formula: $\bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i = \frac{x_1 + x_2 + \cdots + x_n}{n}$
(For an explanation of the summation operator, see summation.)
In simpler terms, the formula for the arithmetic mean is: $\text{mean} = \frac{\text{sum of the values}}{\text{number of values}}$
For example, if the monthly salaries of three employees are 2500, 2700 and 2400, then the arithmetic mean is: $\frac{2500 + 2700 + 2400}{3} = \frac{7600}{3} \approx 2533.33$
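The same computation in a few lines of Python, using the illustrative salary figures above:

```python
from statistics import mean

salaries = [2500, 2700, 2400]         # illustrative figures only
print(sum(salaries) / len(salaries))  # 2533.33... straight from the definition
print(mean(salaries))                 # same value via the standard library
```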
If the data set is a statistical population (i.e., consists of every possible observation and not just a subset of them), then the mean of that population is called the population mean and denoted by the Greek letter $\mu$. If the data set is a statistical sample (a subset of the population), it is called the sample mean (which for a data set $X$ is denoted as $\bar{X}$).
The arithmetic mean can be similarly defined for vectors in multiple dimensions, not only scalar values; this is often referred to as a centroid. More generally, because the arithmetic mean is a convex combination (meaning its coefficients sum to $1$), it can be defined on a convex space, not only a vector space.
History
The statistician Churchill Eisenhart, a senior research fellow at the U.S. National Bureau of Standards, traced the history of the arithmetic mean in detail. In the modern age it started to be used as a way of combining various observations that should be identical but were not, such as estimates of the direction of magnetic north.
In 1635 the mathematician Henry Gellibrand described as “meane” the midpoint of a lowest and highest number, not quite the arithmetic mean. In 1668, a person known as “DB” was quoted in the Transactions of the Royal Society describing “taking the mean” of five values.
Motivating properties
The arithmetic mean has several properties that make it interesting, especially as a measure of central tendency. These include:
If numbers $x_1, \ldots, x_n$ have mean $\bar{x}$, then $(x_1 - \bar{x}) + \cdots + (x_n - \bar{x}) = 0$. Since $x_i - \bar{x}$ is the distance from a given number to the mean, one way to interpret this property is by saying that the numbers to the left of the mean are balanced by the numbers to the right. The mean is the only number for which the residuals (deviations from the estimate) sum to zero. This can also be interpreted as saying that the mean is translationally invariant in the sense that for any real number $a$, $\overline{x + a} = \bar{x} + a$.
If it is required to use a single number as a "typical" value for a set of known numbers $x_1, \ldots, x_n$, then the arithmetic mean of the numbers does this best since it minimizes the sum of squared deviations from the typical value: the sum of $(x_i - \bar{x})^2$. The sample mean is also the best single predictor because it has the lowest root mean squared error. If the arithmetic mean of a population of numbers is desired, then the estimate of it that is unbiased is the arithmetic mean of a sample drawn from the population.
The arithmetic mean is independent of scale of the units of measurement, in the sense that $\operatorname{avg}(c \cdot x_1, \ldots, c \cdot x_n) = c \cdot \operatorname{avg}(x_1, \ldots, x_n)$ for any scale factor $c$. So, for example, calculating a mean of liters and then converting to gallons is the same as converting to gallons first and then calculating the mean. This is also called first order homogeneity.
Additional properties
The arithmetic mean of a sample is always between the largest and smallest values in that sample.
The arithmetic mean of any amount of equal-sized number groups together is the arithmetic mean of the arithmetic means of each group.
Contrast with median
The arithmetic mean may be contrasted with the median. The median is defined such that no more than half the values are larger, and no more than half are smaller than it. If elements in the data increase arithmetically when placed in some order, then the median and arithmetic average are equal. For example, consider the data sample $\{1, 2, 3, 4\}$. The mean is $2.5$, as is the median. However, when we consider a sample that cannot be arranged to increase arithmetically, such as $\{1, 2, 4, 8, 16\}$, the median and arithmetic average can differ significantly. In this case, the arithmetic average is $6.2$, while the median is $4$. The average value can vary considerably from most values in the sample and can be larger or smaller than most.
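A quick Python comparison using the sample values above:

```python
from statistics import mean, median

arithmetic_progression = [1, 2, 3, 4]
skewed = [1, 2, 4, 8, 16]
print(mean(arithmetic_progression), median(arithmetic_progression))  # 2.5 2.5
print(mean(skewed), median(skewed))                                  # 6.2 4
```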
There are applications of this phenomenon in many fields. For example, since the 1980s, the median income in the United States has increased more slowly than the arithmetic average of income.
Generalizations
Weighted average
A weighted average, or weighted mean, is an average in which some data points count more heavily than others in that they are given more weight in the calculation. For example, the arithmetic mean of $3$ and $5$ is $\frac{3+5}{2} = 4$, or equivalently $3 \cdot \frac{1}{2} + 5 \cdot \frac{1}{2} = 4$. In contrast, a weighted mean in which the first number receives, for example, twice as much weight as the second (perhaps because it is assumed to appear twice as often in the general population from which these numbers were sampled) would be calculated as $3 \cdot \frac{2}{3} + 5 \cdot \frac{1}{3} = \frac{11}{3}$. Here the weights, which necessarily sum to one, are $\frac{2}{3}$ and $\frac{1}{3}$, the former being twice the latter. The arithmetic mean (sometimes called the "unweighted average" or "equally weighted average") can be interpreted as a special case of a weighted average in which all weights are equal to the same number ($\frac{1}{2}$ in the above example, and $\frac{1}{n}$ in a situation with $n$ numbers being averaged).
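A short Python sketch of the weighted mean with the weights used above:

```python
values = [3, 5]
weights = [2 / 3, 1 / 3]           # weights must sum to one
weighted = sum(v * w for v, w in zip(values, weights))
print(weighted)                    # 3.666... (= 11/3)
print(sum(values) / len(values))   # unweighted mean: 4.0
```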
Continuous probability distributions
If a numerical property, and any sample of data from it, can take on any value from a continuous range instead of, for example, just integers, then the probability of a number falling into some range of possible values can be described by integrating a continuous probability distribution across this range, even when the naive probability for a sample number taking one certain value from infinitely many is zero. In this context, the analog of a weighted average, in which there are infinitely many possibilities for the precise value of the variable in each range, is called the mean of the probability distribution. The most widely encountered probability distribution is called the normal distribution; it has the property that all measures of its central tendency, including not just the mean but also the median mentioned above and the mode (the three Ms), are equal. This equality does not hold for other probability distributions, as illustrated for the log-normal distribution here.
Angles
Particular care is needed when using cyclic data, such as phases or angles. Taking the arithmetic mean of 1° and 359° yields a result of 180°.
This is incorrect for two reasons:
Firstly, angle measurements are only defined up to an additive constant of 360° ( or , if measuring in radians). Thus, these could easily be called 1° and -1°, or 361° and 719°, since each one of them produces a different average.
Secondly, in this situation, 0° (or 360°) is geometrically a better average value: there is lower dispersion about it (the points are both 1° from it and 179° from 180°, the putative average).
In general application, such an oversight will lead to the average value artificially moving towards the middle of the numerical range. A solution to this problem is to use the optimization formulation (that is, define the mean as the central point: the point about which one has the lowest dispersion) and redefine the difference as a modular distance (i.e., the distance on the circle: so the modular distance between 1° and 359° is 2°, not 358°).
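One standard remedy along these lines is the circular mean, which averages the angles as unit vectors; a compact Python sketch:

```python
import math

def circular_mean_deg(angles):
    """Average angles by summing their unit vectors and taking the resulting direction."""
    s = sum(math.sin(math.radians(a)) for a in angles)
    c = sum(math.cos(math.radians(a)) for a in angles)
    return math.degrees(math.atan2(s, c))

print(circular_mean_deg([1, 359]))  # ~0.0 (up to rounding), not the naive 180
```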
Symbols and encoding
The arithmetic mean is often denoted by a bar (vinculum or macron), as in $\bar{x}$.
Some software (text processors, web browsers) may not display the "x̄" symbol correctly. For example, the HTML symbol "x̄" combines two codes — the base letter "x" plus a code for the line above ( ̄ or ¯).
In some document formats (such as PDF), the symbol may be replaced by a "¢" (cent) symbol when copied to a text processor such as Microsoft Word.
See also
Fréchet mean
Generalized mean
Inequality of arithmetic and geometric means
Sample mean and covariance
Standard deviation
Standard error of the mean
Summary statistics
Notes
References
Further reading
External links
Calculations and comparisons between arithmetic mean and geometric mean of two numbers
Calculate the arithmetic mean of a series of numbers on fxSolver
Means | Arithmetic mean | [
"Physics",
"Mathematics"
] | 1,903 | [
"Means",
"Mathematical analysis",
"Point (geometry)",
"Geometric centers",
"Symmetry"
] |
621 | https://en.wikipedia.org/wiki/Amphibian | Amphibians are ectothermic, anamniotic, four-limbed vertebrate animals that constitute the class Amphibia. In its broadest sense, it is a paraphyletic group encompassing all tetrapods excluding the amniotes (tetrapods with an amniotic membrane, such as modern reptiles, birds and mammals). All extant (living) amphibians belong to the monophyletic subclass Lissamphibia, with three living orders: Anura (frogs and toads), Urodela (salamanders), and Gymnophiona (caecilians). Evolved to be mostly semiaquatic, amphibians have adapted to inhabit a wide variety of habitats, with most species living in freshwater, wetland or terrestrial ecosystems (such as riparian woodland, fossorial and even arboreal habitats). Their life cycle typically starts out as aquatic larvae with gills known as tadpoles, but some species have developed behavioural adaptations to bypass this.
Young amphibians generally undergo metamorphosis from an aquatic larval form with gills to an air-breathing adult form with lungs. Amphibians use their skin as a secondary respiratory interface and some small terrestrial salamanders and frogs lack lungs and rely entirely on their skin. They are superficially similar to reptiles like lizards, but unlike reptiles and other amniotes, require access to water bodies to breed. With their complex reproductive needs and permeable skins, amphibians are often ecological indicators to habitat conditions; in recent decades there has been a dramatic decline in amphibian populations for many species around the globe.
The earliest amphibians evolved in the Devonian period from tetrapodomorph sarcopterygians (lobe-finned fish with articulated limb-like fins) that evolved primitive lungs, which were helpful in adapting to dry land. They diversified and became ecologically dominant during the Carboniferous and Permian periods, but were later displaced in terrestrial environments by early reptiles and basal synapsids (predecessors of mammals). The origin of modern lissamphibians, which first appeared during the Early Triassic, around 250 million years ago, has long been contentious. The most popular hypothesis is that they likely originated from temnospondyls, the most diverse group of prehistoric amphibians, during the Permian period. Another hypothesis is that they emerged from lepospondyls. A fourth group of lissamphibians, the Albanerpetontidae, became extinct around 2 million years ago.
The number of known amphibian species is approximately 8,000, of which nearly 90% are frogs. The smallest amphibian (and vertebrate) in the world is a frog from New Guinea (Paedophryne amauensis) with a length of just . The largest living amphibian is the South China giant salamander (Andrias sligoi), but this is dwarfed by prehistoric temnospondyls such as Mastodonsaurus which could reach up to in length. The study of amphibians is called batrachology, while the study of both reptiles and amphibians is called herpetology.
Classification
The word amphibian is derived from the Ancient Greek term ἀμφίβιος (amphíbios), which means 'both kinds of life', with ἀμφί meaning 'of both kinds' and βίος meaning 'life'. The term was initially used as a general adjective for animals that could live on land or in water, including seals and otters. Traditionally, the class Amphibia includes all tetrapod vertebrates that are not amniotes. Amphibia in its widest sense (sensu lato) was divided into three subclasses, two of which are extinct:
Subclass Lepospondyli† (A potentially polyphyletic Late Paleozoic group of small forms, likely more closely related to amniotes than Lissamphibia)
Subclass Temnospondyli† (diverse Late Paleozoic and early Mesozoic grade, some of which were large predators)
Subclass Lissamphibia (all modern amphibians, including frogs, toads, salamanders, newts and caecilians)
Salientia (frogs, toads and relatives): Early Triassic to present—7,360 current species in 53 families. Modern (crown group) salientians are described via the name Anura.
Caudata (salamanders, newts and relatives): Late Triassic to present—764 current species in 9 families. Modern (crown group) caudatans are described via the name Urodela.
Gymnophiona (caecilians and relatives): Late Triassic to present—215 current species in 10 families. The name Apoda is also sometimes used for caecilians.
Allocaudata† (Albanerpetontidae) Middle Jurassic – Early Pleistocene
These three subclasses do not include all extinct amphibians. Other extinct amphibian groups include Embolomeri (Late Paleozoic large aquatic predators), Seymouriamorpha (semiaquatic to terrestrial Permian forms related to amniotes), among others. Names such as Tetrapoda and Stegocephalia encompass the entirety of amphibian-grade tetrapods, while Reptiliomorpha or Anthracosauria are variably used to describe extinct amphibians more closely related to amniotes than to lissamphibians.
The actual number of species in each group depends on the taxonomic classification followed. The two most common systems are the classification adopted by the website AmphibiaWeb, University of California, Berkeley, and the classification by herpetologist Darrel Frost and the American Museum of Natural History, available as the online reference database "Amphibian Species of the World". The numbers of species cited above follows Frost and the total number of known (living) amphibian species as of March 31, 2019, is exactly 8,000, of which nearly 90% are frogs.
With the phylogenetic classification, the taxon Labyrinthodontia has been discarded as it is a paraphyletic group without unique defining features apart from shared primitive characteristics. Classification varies according to the preferred phylogeny of the author and whether they use a stem-based or a node-based classification. Traditionally, amphibians as a class are defined as all tetrapods with a larval stage, while the group that includes the common ancestors of all living amphibians (frogs, salamanders and caecilians) and all their descendants is called Lissamphibia. The phylogeny of Paleozoic amphibians is uncertain, and Lissamphibia may possibly fall within extinct groups, like the Temnospondyli (traditionally placed in the subclass Labyrinthodontia) or the Lepospondyli, and in some analyses even in the amniotes. This means that advocates of phylogenetic nomenclature have removed a large number of basal Devonian and Carboniferous amphibian-type tetrapod groups that were formerly placed in Amphibia in Linnaean taxonomy, and included them elsewhere under cladistic taxonomy. If the common ancestor of amphibians and amniotes is included in Amphibia, it becomes a paraphyletic group.
All modern amphibians are included in the subclass Lissamphibia, which is usually considered a clade, a group of species that have evolved from a common ancestor. The three modern orders are Anura (the frogs), Caudata (or Urodela, the salamanders), and Gymnophiona (or Apoda, the caecilians). It has been suggested that salamanders arose separately from a temnospondyl-like ancestor, and even that caecilians are the sister group of the advanced reptiliomorph amphibians, and thus of amniotes. Although the fossils of several older proto-frogs with primitive characteristics are known, the oldest "true frog", with hopping adaptations is Prosalirus bitis, from the Early Jurassic Kayenta Formation of Arizona. It is anatomically very similar to modern frogs. The oldest known caecilians are Funcusvermis gilmorei (from the Late Triassic) and Eocaecilia micropodia (from the Early Jurassic), both from Arizona. The earliest salamander is Beiyanerpeton jianpingensis from the Late Jurassic of northeastern China.
Authorities disagree as to whether Salientia is a superorder that includes the order Anura, or whether Anura is a sub-order of the order Salientia. The Lissamphibia are traditionally divided into three orders, but an extinct salamander-like family, the Albanerpetontidae, is now considered part of Lissamphibia alongside the superorder Salientia. Furthermore, Salientia includes the order Anura plus Triassic proto-frogs such as Triadobatrachus.
Evolutionary history
The first major groups of amphibians developed in the Devonian period, around 370 million years ago, from lobe-finned fish which were similar to the modern coelacanth and lungfish. These ancient lobe-finned fish had evolved multi-jointed leg-like fins with digits that enabled them to crawl along the sea bottom. Some fish had developed primitive lungs that helped them breathe air when the stagnant pools of the Devonian swamps were low in oxygen. They could also use their strong fins to hoist themselves out of the water and onto dry land if circumstances so required. Eventually, their bony fins would evolve into limbs and they would become the ancestors of all tetrapods, including modern amphibians, reptiles, birds, and mammals. Despite being able to crawl on land, many of these prehistoric tetrapodomorph fish still spent most of their time in the water. They had started to develop lungs, but still breathed predominantly with gills.
Many examples of species showing transitional features have been discovered. Ichthyostega was one of the first primitive amphibians, with nostrils and more efficient lungs. It had four sturdy limbs, a neck, a tail with fins and a skull very similar to that of the lobe-finned fish, Eusthenopteron. Amphibians evolved adaptations that allowed them to stay out of the water for longer periods. Their lungs improved and their skeletons became heavier and stronger, better able to support the weight of their bodies on land. They developed "hands" and "feet" with five or more digits; the skin became more capable of retaining body fluids and resisting desiccation. The fish's hyomandibula bone in the hyoid region behind the gills diminished in size and became the stapes of the amphibian ear, an adaptation necessary for hearing on dry land. An affinity between the amphibians and the teleost fish is the multi-folded structure of the teeth and the paired supra-occipital bones at the back of the head, neither of these features being found elsewhere in the animal kingdom.
At the end of the Devonian period (360 million years ago), the seas, rivers and lakes were teeming with life while the land was the realm of early plants and devoid of vertebrates, though some, such as Ichthyostega, may have sometimes hauled themselves out of the water. It is thought they may have propelled themselves with their forelimbs, dragging their hindquarters in a similar manner to that used by the elephant seal. In the early Carboniferous (360 to 323 million years ago), the climate was relatively wet and warm. Extensive swamps developed with mosses, ferns, horsetails and calamites. Air-breathing arthropods evolved and invaded the land where they provided food for the carnivorous amphibians that began to adapt to the terrestrial environment. There were no other tetrapods on the land and the amphibians were at the top of the food chain, with some occupying ecological positions currently held by crocodiles. Though equipped with limbs and the ability to breathe air, most still had a long tapering body and strong tail. Some were top land predators, sometimes reaching several metres in length, preying on the large insects of the period and the many types of fish in the water. They still needed to return to water to lay their shell-less eggs, and even most modern amphibians have a fully aquatic larval stage with gills like their fish ancestors. It was the development of the amniotic egg, which prevents the developing embryo from drying out, that enabled the reptiles to reproduce on land and which led to their dominance in the period that followed.
After the Carboniferous rainforest collapse amphibian dominance gave way to reptiles, and amphibians were further devastated by the Permian–Triassic extinction event. During the Triassic Period (252 to 201 million years ago), the reptiles continued to out-compete the amphibians, leading to a reduction in both the amphibians' size and their importance in the biosphere. According to the fossil record, Lissamphibia, which includes all modern amphibians and is the only surviving lineage, may have branched off from the extinct groups Temnospondyli and Lepospondyli at some period between the Late Carboniferous and the Early Triassic. The relative scarcity of fossil evidence precludes precise dating, but the most recent molecular study, based on multilocus sequence typing, suggests a Late Carboniferous/Early Permian origin for extant amphibians.
The origins and evolutionary relationships between the three main groups of amphibians are a matter of debate. A 2005 molecular phylogeny, based on rDNA analysis, suggests that salamanders and caecilians are more closely related to each other than they are to frogs. It also appears that the divergence of the three groups took place in the Paleozoic or early Mesozoic (around 250 million years ago), before the breakup of the supercontinent Pangaea and soon after their divergence from the lobe-finned fish. The briefness of this period, and the swiftness with which radiation took place, would help account for the relative scarcity of primitive amphibian fossils. There are large gaps in the fossil record, but the discovery of the dissorophoid temnospondyl Gerobatrachus from the Early Permian of Texas in 2008 provided a missing link with many of the characteristics of modern frogs. Molecular analysis suggests that the frog–salamander divergence took place considerably earlier than the palaeontological evidence indicates. One study suggested that the last common ancestor of all modern amphibians lived about 315 million years ago, and that stereospondyl temnospondyls are the closest relatives to the caecilians. However, most studies support a single monophyletic origin of all modern amphibians within the dissorophoid temnospondyls.
As they evolved from lunged fish, amphibians had to make certain adaptations for living on land, including the need to develop new means of locomotion. In the water, the sideways thrusts of their tails had propelled them forward, but on land, quite different mechanisms were required. Their vertebral columns, limbs, limb girdles and musculature needed to be strong enough to raise them off the ground for locomotion and feeding. Terrestrial adults discarded their lateral line systems and adapted their sensory systems to receive stimuli via the medium of the air. They needed to develop new methods to regulate their body heat to cope with fluctuations in ambient temperature. They developed behaviours suitable for reproduction in a terrestrial environment. Their skins were exposed to harmful ultraviolet rays that had previously been absorbed by the water. The skin changed to become more protective and prevent excessive water loss.
Characteristics
The superclass Tetrapoda is divided into four classes of vertebrate animals with four limbs. Reptiles, birds and mammals are amniotes, the eggs of which are either laid or carried by the female and are surrounded by several membranes, some of which are impervious. Lacking these membranes, amphibians require water bodies for reproduction, although some species have developed various strategies for protecting or bypassing the vulnerable aquatic larval stage. They are not found in the sea, with the exception of one or two frogs that live in brackish water in mangrove swamps; Anderson's salamander, meanwhile, occurs in brackish or salt water lakes. On land, amphibians are restricted to moist habitats because of the need to keep their skin damp.
Modern amphibians have a simplified anatomy compared to their ancestors due to paedomorphosis, caused by two evolutionary trends: miniaturization and an unusually large genome, which result in a slower growth and development rate compared to other vertebrates. Another reason for their small size is associated with their rapid metamorphosis, which seems to have evolved only in the ancestors of Lissamphibia; in all other known lines the development was much more gradual. Because a remodeling of the feeding apparatus means they do not eat during metamorphosis, the metamorphosis has to proceed faster the smaller the individual is, so it happens at an early stage when the larvae are still small. (The largest species of salamanders do not go through a metamorphosis.) Amphibians that lay eggs on land often go through the whole metamorphosis inside the egg. An anamniotic terrestrial egg is less than 1 cm in diameter due to diffusion problems, a size which puts a limit on the amount of posthatching growth.
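The diffusion limit can be illustrated with a simple surface-to-volume argument (a back-of-the-envelope sketch rather than a figure from the herpetological literature): oxygen enters an egg through its surface, which scales with the square of the radius, while it is consumed throughout the embryo's volume, which scales with the cube. For a roughly spherical egg of radius r,

supply / demand ∝ 4πr² / ((4⁄3)πr³) = 3 / r

so the supply-to-demand ratio falls as the egg grows, and beyond roughly a centimetre diffusion alone can no longer deliver enough oxygen to the embryo.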
The smallest amphibian (and vertebrate) in the world is a microhylid frog from New Guinea (Paedophryne amauensis) first discovered in 2012. It has an average length of 7.7 mm (0.30 in) and is part of a genus that contains four of the world's ten smallest frog species. The largest living amphibian is the South China giant salamander (Andrias sligoi), but this is a great deal smaller than the largest amphibian that ever existed—the extinct Prionosuchus, a crocodile-like temnospondyl dating to 270 million years ago from the middle Permian of Brazil. The largest frog is the African Goliath frog (Conraua goliath), which can reach 32 cm (13 in) and weigh 3 kg (6.6 lb).
Amphibians are ectothermic (cold-blooded) vertebrates that do not maintain their body temperature through internal physiological processes. Their metabolic rate is low and as a result, their food and energy requirements are limited. In the adult state, they have tear ducts and movable eyelids, and most species have ears that can detect airborne or ground vibrations. They have muscular tongues, which in many species can be protruded. Modern amphibians have fully ossified vertebrae with articular processes. Their ribs are usually short and may be fused to the vertebrae. Their skulls are mostly broad and short, and are often incompletely ossified. Their skin contains little keratin and lacks scales, apart from a few fish-like scales in certain caecilians. The skin contains many mucous glands and in some species, poison glands (a type of granular gland). The hearts of amphibians have three chambers, two atria and one ventricle. They have a urinary bladder and nitrogenous waste products are excreted primarily as urea. Most amphibians lay their eggs in water and have aquatic larvae that undergo metamorphosis to become terrestrial adults. Amphibians breathe by means of a pump action in which air is first drawn into the buccopharyngeal region through the nostrils. These are then closed and the air is forced into the lungs by contraction of the throat. They supplement this with gas exchange through the skin.
Anura
The order Anura (from the Ancient Greek a(n)- meaning "without" and oura meaning "tail") comprises the frogs and toads. They usually have long hind limbs that fold underneath them, shorter forelimbs, webbed toes with no claws, no tails, large eyes and glandular moist skin. Members of this order with smooth skins are commonly referred to as frogs, while those with warty skins are known as toads. The difference is not a formal one taxonomically and there are numerous exceptions to this rule. Members of the family Bufonidae are known as the "true toads". Frogs range in size from the 32 cm (13 in) Goliath frog (Conraua goliath) of West Africa to the 7.7 mm (0.30 in) Paedophryne amauensis, first described in Papua New Guinea in 2012, which is also the smallest known vertebrate. Although most species are associated with water and damp habitats, some are specialised to live in trees or in deserts. They are found worldwide except for polar areas.
Anura is divided into three suborders that are broadly accepted by the scientific community, but the relationships between some families remain unclear. Future molecular studies should provide further insights into their evolutionary relationships. The suborder Archaeobatrachia contains four families of primitive frogs. These are Ascaphidae, Bombinatoridae, Discoglossidae and Leiopelmatidae which have few derived features and are probably paraphyletic with regard to other frog lineages. The six families in the more evolutionarily advanced suborder Mesobatrachia are the fossorial Megophryidae, Pelobatidae, Pelodytidae, Scaphiopodidae and Rhinophrynidae and the obligatorily aquatic Pipidae. These have certain characteristics that are intermediate between the two other suborders. Neobatrachia is by far the largest suborder and includes the remaining families of modern frogs, including most common species. Approximately 96% of the over 5,000 extant species of frog are neobatrachians.
Caudata
The order Caudata (from the Latin cauda meaning "tail") consists of the salamanders—elongated, low-slung animals that mostly resemble lizards in form. This is a symplesiomorphic trait and they are no more closely related to lizards than they are to mammals. Salamanders lack claws, have scale-free skins, either smooth or covered with tubercles, and tails that are usually flattened from side to side and often finned. They range in size from the Chinese giant salamander (Andrias davidianus), which has been reported to grow to a length of 1.8 m (5 ft 11 in), to the diminutive Thorius pennatulus from Mexico, which seldom exceeds 2 cm (0.8 in) in length. Salamanders have a mostly Laurasian distribution, being present in much of the Holarctic region of the northern hemisphere. The family Plethodontidae is also found in Central America and South America north of the Amazon basin; South America was apparently invaded from Central America by about the start of the Miocene, 23 million years ago. Urodela is a name sometimes used for all the extant species of salamanders. Members of several salamander families have become paedomorphic and either fail to complete their metamorphosis or retain some larval characteristics as adults. Most salamanders are under 15 cm (6 in) long. They may be terrestrial or aquatic and many spend part of the year in each habitat. When on land, they mostly spend the day hidden under stones or logs or in dense vegetation, emerging in the evening and night to forage for worms, insects and other invertebrates.
The suborder Cryptobranchoidea contains the primitive salamanders. A number of fossil cryptobranchids have been found, but there are only three living species, the Chinese giant salamander (Andrias davidianus), the Japanese giant salamander (Andrias japonicus) and the hellbender (Cryptobranchus alleganiensis) from North America. These large amphibians retain several larval characteristics in their adult state; gill slits are present and the eyes are unlidded. A unique feature is their ability to feed by suction, depressing either the left side of their lower jaw or the right. The males excavate nests, persuade females to lay their egg strings inside them, and guard them. As well as breathing with lungs, they respire through the many folds in their thin skin, which has capillaries close to the surface.
The suborder Salamandroidea contains the advanced salamanders. They differ from the cryptobranchids by having fused prearticular bones in the lower jaw, and by using internal fertilisation. In salamandrids, the male deposits a bundle of sperm, the spermatophore, and the female picks it up and inserts it into her cloaca where the sperm is stored until the eggs are laid. The largest family in this group is Plethodontidae, the lungless salamanders, which includes 60% of all salamander species. The family Salamandridae includes the true salamanders and the name "newt" is given to members of its subfamily Pleurodelinae.
The third suborder, Sirenoidea, contains the four species of sirens, which are in a single family, Sirenidae. Members of this suborder are eel-like aquatic salamanders with much reduced forelimbs and no hind limbs. Some of their features are primitive while others are derived. Fertilisation is likely to be external as sirenids lack the cloacal glands used by male salamandrids to produce spermatophores and the females lack spermathecae for sperm storage. Despite this, the eggs are laid singly, a behaviour not conducive to external fertilisation.
Gymnophiona
The order Gymnophiona (from the Greek gymnos meaning "naked" and ophis meaning "serpent") or Apoda comprises the caecilians. These are long, cylindrical, limbless animals with a snake- or worm-like form. The adults vary in length from 8 to 75 centimetres (3 to 30 inches) with the exception of Thomson's caecilian (Caecilia thompsoni), which can reach 150 cm (59 in). A caecilian's skin has a large number of transverse folds and in some species contains tiny embedded dermal scales. It has rudimentary eyes covered in skin, which are probably limited to discerning differences in light intensity. It also has a pair of short tentacles near the eye that can be extended and which have tactile and olfactory functions. Most caecilians live underground in burrows in damp soil, in rotten wood and under plant debris, but some are aquatic. Most species lay their eggs underground and when the larvae hatch, they make their way to adjacent bodies of water. Others brood their eggs and the larvae undergo metamorphosis before the eggs hatch. A few species give birth to live young, nourishing them with glandular secretions while they are in the oviduct. Caecilians have a mostly Gondwanan distribution, being found in tropical regions of Africa, Asia and Central and South America.
Anatomy and physiology
Skin
The integumentary structure contains some typical characteristics common to terrestrial vertebrates, such as the presence of highly cornified outer layers, renewed periodically through a moulting process controlled by the pituitary and thyroid glands. Local thickenings (often called warts) are common, such as those found on toads. The outside of the skin is shed periodically mostly in one piece, in contrast to mammals and birds where it is shed in flakes. Amphibians often eat the sloughed skin. Caecilians are unique among amphibians in having mineralized dermal scales embedded in the dermis between the furrows in the skin. The similarity of these to the scales of bony fish is largely superficial. Lizards and some frogs have somewhat similar osteoderms forming bony deposits in the dermis, but this is an example of convergent evolution with similar structures having arisen independently in diverse vertebrate lineages.
Amphibian skin is permeable to water. Gas exchange can take place through the skin (cutaneous respiration) and this allows adult amphibians to respire without rising to the surface of water and to hibernate at the bottom of ponds. To compensate for their thin and delicate skin, amphibians have evolved mucous glands, principally on their heads, backs and tails. The secretions produced by these help keep the skin moist. In addition, most species of amphibian have granular glands that secrete distasteful or poisonous substances. Some amphibian toxins can be lethal to humans while others have little effect. The main poison-producing glands, the parotoids, produce the neurotoxin bufotoxin and are located behind the ears of toads, along the backs of frogs, behind the eyes of salamanders and on the upper surface of caecilians.
The skin colour of amphibians is produced by three layers of pigment cells called chromatophores. These three cell layers consist of the melanophores (occupying the deepest layer), the guanophores (forming an intermediate layer and containing many granules, producing a blue-green colour) and the lipophores (yellow, the most superficial layer). The colour change displayed by many species is initiated by hormones secreted by the pituitary gland. Unlike bony fish, there is no direct control of the pigment cells by the nervous system, and this results in the colour change taking place more slowly than happens in fish. A vividly coloured skin usually indicates that the species is toxic and is a warning sign to predators.
Skeletal system and locomotion
Amphibians have a skeletal system that is structurally homologous to other tetrapods, though with a number of variations. They all have four limbs except for the legless caecilians and a few species of salamander with reduced or no limbs. The bones are hollow and lightweight. The musculoskeletal system is strong to enable it to support the head and body. The bones are fully ossified and the vertebrae interlock with each other by means of overlapping processes. The pectoral girdle is supported by muscle, and the well-developed pelvic girdle is attached to the backbone by a pair of sacral ribs. The ilium slopes forward and the body is held closer to the ground than is the case in mammals.
In most amphibians, there are four digits on the fore foot and five on the hind foot, but no claws on either. Some salamanders have fewer digits and the amphiumas are eel-like in appearance with tiny, stubby legs. The sirens are aquatic salamanders with stumpy forelimbs and no hind limbs. The caecilians are limbless. They burrow in the manner of earthworms with zones of muscle contractions moving along the body. On the surface of the ground or in water they move by undulating their body from side to side.
In frogs, the hind legs are larger than the fore legs, especially so in those species that principally move by jumping or swimming. In the walkers and runners the hind limbs are not so large, and the burrowers mostly have short limbs and broad bodies. The feet have adaptations for the way of life, with webbing between the toes for swimming, broad adhesive toe pads for climbing, and keratinised tubercles on the hind feet for digging (frogs usually dig backwards into the soil). In most salamanders, the limbs are short and more or less the same length and project at right angles from the body. Locomotion on land is by walking and the tail often swings from side to side or is used as a prop, particularly when climbing. In their normal gait, only one leg is advanced at a time in the manner adopted by their ancestors, the lobe-finned fish. Some salamanders in the genus Aneides and certain plethodontids climb trees and have long limbs, large toepads and prehensile tails. In aquatic salamanders and in frog tadpoles, the tail has dorsal and ventral fins and is moved from side to side as a means of propulsion. Adult frogs do not have tails and caecilians have only very short ones.
Salamanders use their tails in defence and some are prepared to jettison them to save their lives in a process known as autotomy. Certain species in the Plethodontidae have a weak zone at the base of the tail and use this strategy readily. The tail often continues to twitch after separation which may distract the attacker and allow the salamander to escape. Both tails and limbs can be regenerated. Adult frogs are unable to regrow limbs but tadpoles can do so.
Circulatory system
Amphibians have a juvenile stage and an adult stage, and the circulatory systems of the two are distinct. In the juvenile (or tadpole) stage, the circulation is similar to that of a fish; the two-chambered heart pumps the blood through the gills where it is oxygenated, and is spread around the body and back to the heart in a single loop. In the adult stage, amphibians (especially frogs) lose their gills and develop lungs. They have a heart that consists of a single ventricle and two atria. When the ventricle starts contracting, deoxygenated blood is pumped through the pulmonary artery to the lungs. Continued contraction then pumps oxygenated blood around the rest of the body. Mixing of the two bloodstreams is minimized by the anatomy of the chambers.
Nervous and sensory systems
The nervous system is basically the same as in other vertebrates, with a central brain, a spinal cord, and nerves throughout the body. The amphibian brain is relatively simple but broadly the same structurally as in reptiles, birds and mammals. Their brains are elongated, except in caecilians, and contain the usual motor and sensory areas of tetrapods. The pineal body, known to regulate sleep patterns in humans, is thought to produce the hormones involved in hibernation and aestivation in amphibians.
Tadpoles retain the lateral line system of their ancestral fishes, but this is lost in terrestrial adult amphibians. Many aquatic salamanders and some caecilians possess electroreceptors called ampullary organs (completely absent in anurans), that allow them to locate objects around them when submerged in water. The ears are well developed in frogs. There is no external ear, but the large circular eardrum lies on the surface of the head just behind the eye. This vibrates and sound is transmitted through a single bone, the stapes, to the inner ear. Only high-frequency sounds like mating calls are heard in this way, but low-frequency noises can be detected through another mechanism. There is a patch of specialized hair cells, called the papilla amphibiorum, in the inner ear capable of detecting deeper sounds. Another feature, unique to frogs and salamanders, is the columella-operculum complex adjoining the auditory capsule which is involved in the transmission of both airborne and seismic signals. The ears of salamanders and caecilians are less highly developed than those of frogs as they do not normally communicate with each other through the medium of sound.
The eyes of tadpoles lack lids, but at metamorphosis, the cornea becomes more dome-shaped, the lens becomes flatter, and eyelids and associated glands and ducts develop. The adult eyes are an improvement on invertebrate eyes and were a first step in the development of more advanced vertebrate eyes. They allow colour vision and depth of focus. In the retinas are green rods, which are receptive to a wide range of wavelengths.
Digestive and excretory systems
Many amphibians catch their prey by flicking out an elongated tongue with a sticky tip and drawing it back into the mouth before seizing the item with their jaws. Some use inertial feeding to help them swallow the prey, repeatedly thrusting their head forward sharply causing the food to move backwards in their mouth by inertia. Most amphibians swallow their prey whole without much chewing so they possess voluminous stomachs. The short oesophagus is lined with cilia that help to move the food to the stomach and mucus produced by glands in the mouth and pharynx eases its passage. The enzyme chitinase produced in the stomach helps digest the chitinous cuticle of arthropod prey.
Amphibians possess a pancreas, liver and gall bladder. The liver is usually large with two lobes. Its size is determined by its function as a glycogen and fat storage unit, and may change with the seasons as these reserves are built or used up. Adipose tissue is another important means of storing energy and this occurs in the abdomen (in internal structures called fat bodies), under the skin and, in some salamanders, in the tail.
There are two kidneys located dorsally, near the roof of the body cavity. Their job is to filter the blood of metabolic waste and transport the urine via ureters to the urinary bladder where it is stored before being passed out periodically through the cloacal vent. Larvae and most aquatic adult amphibians excrete the nitrogen as ammonia in large quantities of dilute urine, while terrestrial species, with a greater need to conserve water, excrete the less toxic product urea. Some tree frogs with limited access to water excrete most of their metabolic waste as uric acid.
Urinary bladder
Most aquatic and semi-aquatic amphibians have a membranous skin which allows them to absorb water directly through it. Some semi-aquatic animals also have a similarly permeable bladder membrane. As a result, they tend to have high rates of urine production to offset this high water intake, and have urine which is low in dissolved salts. The urinary bladder assists such animals to retain salts. Some aquatic amphibians, such as Xenopus, do not reabsorb water, to prevent excessive water influx. For land-dwelling amphibians, dehydration results in reduced urine output.
The amphibian bladder is usually highly distensible and among some land-dwelling species of frogs and salamanders may account for between 20% and 50% of their total body weight. Urine flows from the kidneys through the ureters into the bladder and is periodically released from the bladder to the cloaca.
Respiratory system
The lungs in amphibians are primitive compared to those of amniotes, possessing few internal septa and large alveoli, and consequently having a comparatively slow diffusion rate for oxygen entering the blood. Ventilation is accomplished by buccal pumping. Most amphibians, however, are able to exchange gases with the water or air via their skin. To enable sufficient cutaneous respiration, the surface of their highly vascularised skin must remain moist to allow the oxygen to diffuse at a sufficiently high rate. Because oxygen concentration in the water increases at both low temperatures and high flow rates, aquatic amphibians in these situations can rely primarily on cutaneous respiration, as in the Titicaca water frog and the hellbender salamander. In air, where oxygen is more concentrated, some small species can rely solely on cutaneous gas exchange, most famously the plethodontid salamanders, which have neither lungs nor gills. Many aquatic salamanders and all tadpoles have gills in their larval stage, with some (such as the axolotl) retaining gills as aquatic adults.
Reproduction
For the purpose of reproduction, most amphibians require fresh water although some lay their eggs on land and have developed various means of keeping them moist. A few (e.g. Fejervarya raja) can inhabit brackish water, but there are no true marine amphibians. There are reports, however, of particular amphibian populations unexpectedly invading marine waters. Such was the case with the Black Sea invasion of the natural hybrid Pelophylax esculentus reported in 2010.
Several hundred frog species in adaptive radiations (e.g., Eleutherodactylus, the Pacific Platymantis, the Australo-Papuan microhylids, and many other tropical frogs), however, do not need any water for breeding in the wild. They reproduce via direct development, an ecological and evolutionary adaptation that has allowed them to be completely independent from free-standing water. Almost all of these frogs live in wet tropical rainforests and their eggs hatch directly into miniature versions of the adult, passing through the tadpole stage within the egg. Reproductive success of many amphibians is dependent not only on the quantity of rainfall, but the seasonal timing.
In the tropics, many amphibians breed continuously or at any time of year. In temperate regions, breeding is mostly seasonal, usually in the spring, and is triggered by increasing day length, rising temperatures or rainfall. Experiments have shown the importance of temperature, but the trigger event, especially in arid regions, is often a storm. In anurans, males usually arrive at the breeding sites before females and the vocal chorus they produce may stimulate ovulation in females and the endocrine activity of males that are not yet reproductively active.
In caecilians, fertilisation is internal, the male extruding an intromittent organ, the phallodeum, and inserting it into the female cloaca. The paired Müllerian glands inside the male cloaca secrete a fluid which resembles that produced by mammalian prostate glands and which may transport and nourish the sperm. Fertilisation probably takes place in the oviduct.
The majority of salamanders also engage in internal fertilisation. In most of these, the male deposits a spermatophore, a small packet of sperm on top of a gelatinous cone, on the substrate either on land or in the water. The female takes up the sperm packet by grasping it with the lips of the cloaca and pushing it into the vent. The spermatozoa move to the spermatheca in the roof of the cloaca where they remain until ovulation, which may be many months later. Courtship rituals and methods of transfer of the spermatophore vary between species. In some, the spermatophore may be placed directly into the female cloaca while in others, the female may be guided to the spermatophore or restrained with an embrace called amplexus. Certain primitive salamanders in the families Sirenidae, Hynobiidae and Cryptobranchidae practise external fertilisation in a similar manner to frogs, with the female laying the eggs in water and the male releasing sperm onto the egg mass.
With a few exceptions, frogs use external fertilisation. The male grasps the female tightly with his forelimbs either behind the arms or in front of the back legs, or in the case of Epipedobates tricolor, around the neck. They remain in amplexus with their cloacae positioned close together while the female lays the eggs and the male covers them with sperm. Roughened nuptial pads on the male's hands aid in retaining grip. Often the male collects and retains the egg mass, forming a sort of basket with the hind feet. An exception is the granular poison frog (Oophaga granulifera) where the male and female place their cloacae in close proximity while facing in opposite directions and then release eggs and sperm simultaneously. The tailed frog (Ascaphus truei) exhibits internal fertilisation. The "tail" is only possessed by the male and is an extension of the cloaca and used to inseminate the female. This frog lives in fast-flowing streams and internal fertilisation prevents the sperm from being washed away before fertilisation occurs. The sperm may be retained in storage tubes attached to the oviduct until the following spring.
Most frogs can be classified as either prolonged or explosive breeders. Typically, prolonged breeders congregate at a breeding site, the males usually arriving first, calling and setting up territories. Other satellite males remain quietly nearby, waiting for their opportunity to take over a territory. The females arrive sporadically, mate selection takes place and eggs are laid. The females depart and territories may change hands. More females appear and in due course, the breeding season comes to an end. Explosive breeders on the other hand are found where temporary pools appear in dry regions after rainfall. These frogs are typically fossorial species that emerge after heavy rains and congregate at a breeding site. They are attracted there by the calling of the first male to find a suitable place, perhaps a pool that forms in the same place each rainy season. The assembled frogs may call in unison and frenzied activity ensues, the males scrambling to mate with the usually smaller number of females.
In salamanders and newts, males compete directly for the attention of females, using elaborate courtship displays to hold a female's attention long enough for her to choose to mate with them. Some species store sperm through long breeding seasons, as the extra time may allow for interactions with rival sperm.
Unisexual reproduction
Unisexual female mole salamanders (genus Ambystoma) are common in the Great Lakes region of North America. These salamanders are the oldest known unisexual vertebrate lineage, having emerged about 5 million years ago. Genome exchange can sometimes occur between the unisexual female Ambystoma and males from sympatric sexual species.
Life cycle
Most amphibians go through metamorphosis, a process of significant morphological change after birth. In typical amphibian development, eggs are laid in water and larvae are adapted to an aquatic lifestyle. Frogs, toads and salamanders all hatch from the egg as larvae with external gills. Metamorphosis in amphibians is regulated by thyroxine concentration in the blood, which stimulates metamorphosis, and prolactin, which counteracts thyroxine's effect. Specific events are dependent on threshold values for different tissues. Because most embryonic development is outside the parental body, it is subject to many adaptations due to specific environmental circumstances. For this reason tadpoles can have horny ridges instead of teeth, whisker-like skin extensions or fins. They also make use of a sensory lateral line organ similar to that of fish. After metamorphosis, these organs become redundant and will be reabsorbed by controlled cell death, called apoptosis. The variety of adaptations to specific environmental circumstances among amphibians is wide, with many discoveries still being made.
Eggs
In the egg, the embryo is suspended in perivitelline fluid and surrounded by semi-permeable gelatinous capsules, with the yolk mass providing nutrients. As the larvae hatch, the capsules are dissolved by enzymes secreted from a gland at the tip of the snout. The eggs of some salamanders and frogs contain unicellular green algae. These penetrate the jelly envelope after the eggs are laid and may increase the supply of oxygen to the embryo through photosynthesis. They seem to both speed up the development of the larvae and reduce mortality. In the wood frog (Rana sylvatica), the interior of the globular egg cluster has been found to be warmer than its surroundings, which is an advantage in its cool northern habitat.
The eggs may be deposited singly, in clusters or in long strands. Sites for laying eggs include water, mud, burrows, debris and on plants or under logs or stones. The greenhouse frog (Eleutherodactylus planirostris) lays eggs in small groups in the soil where they develop in about two weeks directly into juvenile frogs without an intervening larval stage. The tungara frog (Physalaemus pustulosus) builds a floating nest from foam to protect its eggs. First a raft is built, then eggs are laid in the centre, and finally a foam cap is overlaid. The foam has anti-microbial properties. It contains no detergents but is created by whipping up proteins and lectins secreted by the female.
Larvae
The eggs of amphibians are typically laid in water and hatch into free-living larvae that complete their development in water and later transform into either aquatic or terrestrial adults. In many species of frog and in most lungless salamanders (Plethodontidae), direct development takes place, the larvae growing within the eggs and emerging as miniature adults. Many caecilians and some other amphibians lay their eggs on land, and the newly hatched larvae wriggle or are transported to water bodies. Some caecilians, the alpine salamander (Salamandra atra) and some of the African live-bearing toads (Nectophrynoides spp.) are viviparous. Their larvae feed on glandular secretions and develop within the female's oviduct, often for long periods. Other amphibians, but not caecilians, are ovoviviparous. The eggs are retained in or on the parent's body, but the larvae subsist on the yolks of their eggs and receive no nourishment from the adult. The larvae emerge at varying stages of their growth, either before or after metamorphosis, according to their species. The toad genus Nectophrynoides exhibits all of these developmental patterns among its dozen or so members. Amphibian larvae are known as tadpoles. They have thick, rounded bodies with powerful muscular tails.
Frogs
Unlike in other amphibians, frog tadpoles do not resemble adults. The free-living larvae are normally fully aquatic, but the tadpoles of some species (such as Nannophrys ceylonensis) are semi-terrestrial and live among wet rocks. Tadpoles have cartilaginous skeletons, gills for respiration (external gills at first, internal gills later), lateral line systems and large tails that they use for swimming. Newly hatched tadpoles soon develop gill pouches that cover the gills. These internal gills and their operculum are not homologous with those of fish, and are found only in tadpoles, as both salamanders and caecilians have external gills only. Combined with buccal pumping, the internal gills have allowed tadpoles to adopt a filter-feeding lifestyle, even though several species have since evolved other types of feeding strategies. The lungs develop early and are used as accessory breathing organs, the tadpoles rising to the water surface to gulp air. Some species complete their development inside the egg and hatch directly into small frogs. These larvae do not have gills but instead have specialised areas of skin through which respiration takes place. While tadpoles do not have true teeth, in most species the jaws have long, parallel rows of small keratinized structures called keradonts surrounded by a horny beak. Front legs are formed under the gill sac and hind legs become visible a few days later.
Iodine and T4 (thyroxine) trigger the dramatic apoptosis (programmed cell death) of the cells of the larval gills, tail and fins, and also stimulate the remodelling of the nervous system, transforming the aquatic, vegetarian tadpole into a terrestrial, carnivorous frog with better neurological, visuospatial, olfactory and cognitive abilities for hunting.
In fact, tadpoles developing in ponds and streams are typically herbivorous. Pond tadpoles tend to have deep bodies, large caudal fins and small mouths; they swim in the quiet waters feeding on growing or loose fragments of vegetation. Stream dwellers mostly have larger mouths, shallow bodies and caudal fins; they attach themselves to plants and stones and feed on the surface films of algae and bacteria. They also feed on diatoms, filtered from the water through the gills, and stir up the sediment at the bottom of the pond, ingesting edible fragments. They have a relatively long, spiral-shaped gut to enable them to digest this diet. Some species are carnivorous at the tadpole stage, eating insects, smaller tadpoles and fish. Young of the Cuban tree frog (Osteopilus septentrionalis) can occasionally be cannibalistic, the younger tadpoles attacking a larger, more developed tadpole when it is undergoing metamorphosis.
At metamorphosis, rapid changes in the body take place as the lifestyle of the frog changes completely. The spiral-shaped mouth with horny tooth ridges is reabsorbed together with the spiral gut. The animal develops a large jaw, and its gills disappear along with its gill sac. Eyes and legs grow quickly, and a tongue is formed. There are associated changes in the neural networks such as development of stereoscopic vision and loss of the lateral line system. All this can happen in about a day. A few days later, the tail is reabsorbed, due to the higher thyroxine concentration required for this to take place.
Salamanders
At hatching, a typical salamander larva has eyes without lids, teeth in both upper and lower jaws, three pairs of feathery external gills, and a long tail with dorsal and ventral fins. The forelimbs may be partially developed and the hind limbs are rudimentary in pond-living species, but may be rather more developed in species that reproduce in moving water. Pond-type larvae often have a pair of balancers, rod-like structures on either side of the head that may prevent the gills from becoming clogged up with sediment. Some salamanders have larvae that never fully develop into the adult form, a condition known as neoteny; despite retaining larval features, such animals are still able to breed. Neoteny occurs when the animal's growth rate is very low and is usually linked to adverse conditions, such as low water temperatures that may change the response of the tissues to the hormone thyroxine, as well as a lack of food. There are fifteen species of obligate neotenic salamanders, including species of Necturus, Proteus and Amphiuma, and many examples of facultative ones, such as the northwestern salamander (Ambystoma gracile) and the tiger salamander (A. tigrinum), that adopt this strategy under appropriate environmental circumstances.
Lungless salamanders in the family Plethodontidae are terrestrial and lay a small number of unpigmented eggs in a cluster among damp leaf litter. Each egg has a large yolk sac and the larva feeds on this while it develops inside the egg, emerging fully formed as a juvenile salamander. The female salamander often broods the eggs. In the genus Ensatina, the female has been observed to coil around them and press her throat area against them, effectively massaging them with a mucous secretion.
In newts and salamanders, metamorphosis is less dramatic than in frogs. This is because the larvae are already carnivorous and continue to feed as predators when they are adults so few changes are needed to their digestive systems. Their lungs are functional early, but the larvae do not make as much use of them as do tadpoles. Their gills are never covered by gill sacs and are reabsorbed just before the animals leave the water. Other changes include the reduction in size or loss of tail fins, the closure of gill slits, thickening of the skin, the development of eyelids, and certain changes in dentition and tongue structure. Salamanders are at their most vulnerable at metamorphosis as swimming speeds are reduced and transforming tails are encumbrances on land. Adult salamanders often have an aquatic phase in spring and summer, and a land phase in winter. For adaptation to a water phase, prolactin is the required hormone, and for adaptation to the land phase, thyroxine. External gills do not return in subsequent aquatic phases because these are completely absorbed upon leaving the water for the first time.
Caecilians
Most terrestrial caecilians that lay eggs do so in burrows or moist places on land near bodies of water. The development of the young of Ichthyophis glutinosus, a species from Sri Lanka, has been much studied. The eel-like larvae hatch out of the eggs and make their way to water. They have three pairs of external red feathery gills, a blunt head with two rudimentary eyes, a lateral line system and a short tail with fins. They swim by undulating their body from side to side. They are mostly active at night, soon lose their gills and make sorties onto land. Metamorphosis is gradual. By the age of about ten months they have developed a pointed head with sensory tentacles near the mouth and lost their eyes, lateral line systems and tails. The skin thickens, embedded scales develop and the body divides into segments. By this time, the caecilian has constructed a burrow and is living on land.
In the majority of species of caecilians, the young are produced by viviparity. Typhlonectes compressicauda, a species from South America, is typical of these. Up to nine larvae can develop in the oviduct at any one time. They are elongated and have paired sac-like gills, small eyes and specialised scraping teeth. At first, they feed on the yolks of the eggs, but as this source of nourishment declines they begin to rasp at the ciliated epithelial cells that line the oviduct. This stimulates the secretion of fluids rich in lipids and mucoproteins on which they feed along with scrapings from the oviduct wall. They may increase their length sixfold and be two-fifths as long as their mother before being born. By this time they have undergone metamorphosis, lost their eyes and gills, developed a thicker skin and mouth tentacles, and reabsorbed their teeth. A permanent set of teeth grows through soon after birth.
Gills are only necessary during embryonic development; in species that give birth, the offspring are born after the gills have degenerated. In egg-laying caecilians, the gills are either reabsorbed before hatching or, in species that hatch with gill remnants still present, are short-lived and leave behind only a gill slit. In species with scales under their skin, the scales do not form until metamorphosis.
The ringed caecilian (Siphonops annulatus) has developed a unique adaptation for the purposes of reproduction. The progeny feed on a skin layer that is specially developed by the adult in a phenomenon known as maternal dermatophagy. The brood feed as a batch for about seven minutes at intervals of approximately three days which gives the skin an opportunity to regenerate. Meanwhile, they have been observed to ingest fluid exuded from the maternal cloaca.
Parental care
The care of offspring among amphibians has been little studied but, in general, the larger the number of eggs in a batch, the less likely it is that any degree of parental care takes place. Nevertheless, it is estimated that in up to 20% of amphibian species, one or both adults play some role in the care of the young. Those species that breed in smaller water bodies or other specialised habitats tend to have complex patterns of behaviour in the care of their young.
Many woodland salamanders lay clutches of eggs under dead logs or stones on land. The black mountain salamander (Desmognathus welteri) does this, the mother brooding the eggs and guarding them from predation as the embryos feed on the yolks of their eggs. When fully developed, they break their way out of the egg capsules and disperse as juvenile salamanders. The male hellbender, a primitive salamander, excavates an underwater nest and encourages females to lay there. The male then guards the site for the two or three months before the eggs hatch, using body undulations to fan the eggs and increase their supply of oxygen.
The male Colostethus subpunctatus, a tiny frog, protects the egg cluster which is hidden under a stone or log. When the eggs hatch, the male transports the tadpoles on his back, stuck there by a mucous secretion, to a temporary pool where he dips himself into the water and the tadpoles drop off. The male midwife toad (Alytes obstetricans) winds egg strings round his thighs and carries the eggs around for up to eight weeks. He keeps them moist and when they are ready to hatch, he visits a pond or ditch and releases the tadpoles. The female gastric-brooding frog (Rheobatrachus spp.) reared larvae in her stomach after swallowing either the eggs or hatchlings; however, this stage was never observed before the species became extinct. The tadpoles secrete a hormone that inhibits digestion in the mother whilst they develop by consuming their very large yolk supply. The pouched frog (Assa darlingtoni) lays eggs on the ground. When they hatch, the male carries the tadpoles around in brood pouches on his hind legs. The aquatic Surinam toad (Pipa pipa) raises its young in pores on its back where they remain until metamorphosis. The granular poison frog (Oophaga granulifera) is typical of a number of tree frogs in the poison dart frog family Dendrobatidae. Its eggs are laid on the forest floor and when they hatch, the tadpoles are carried one by one on the back of an adult to a suitable water-filled crevice such as the axil of a leaf or the rosette of a bromeliad. The female visits the nursery sites regularly and deposits unfertilised eggs in the water and these are consumed by the tadpoles.
Genetics and genomics
Amphibians are notable among vertebrates for their diversity of chromosomes and genomes. The karyotypes (chromosome complements) have been determined for at least 1,193 (14.5%) of the ≈8,200 known species, including 963 anurans, 209 salamanders, and 21 caecilians. Generally, the karyotypes of diploid amphibians are characterized by 20–26 bi-armed chromosomes. Amphibians also have very large genomes compared to other vertebrate taxa, and correspondingly large variation in genome size (C-value: picograms of DNA in haploid nuclei). The genome sizes range from 0.95 to 11.5 pg in frogs, from 13.89 to 120.56 pg in salamanders, and from 2.94 to 11.78 pg in caecilians.
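These C-values translate into sequence lengths via the commonly used approximation that 1 pg of DNA corresponds to about 0.978 gigabases (Gb); the figures below are an illustrative calculation based on that conversion factor, not measurements from the karyotype surveys themselves:

0.95 pg × 0.978 Gb/pg ≈ 0.93 Gb (smallest frog genome)
120.56 pg × 0.978 Gb/pg ≈ 118 Gb (largest salamander genome)

The largest amphibian genomes are thus nearly 40 times the size of the human genome (about 3 Gb).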
The large genome sizes have long hampered whole-genome sequencing of amphibians, although a number of genomes have been published recently. The 1.7 Gb draft genome of Xenopus tropicalis, reported in 2010, was the first for an amphibian. Compared to the genomes of some salamanders, this frog genome is tiny. For instance, the genome of the Mexican axolotl turned out to be 32 Gb, which is more than 10 times larger than the human genome (3 Gb).
Feeding and diet
With a few exceptions, adult amphibians are predators, feeding on virtually anything that moves that they can swallow. The diet mostly consists of small prey that do not move too fast such as beetles, caterpillars, earthworms and spiders. The sirens (Siren spp.) often ingest aquatic plant material with the invertebrates on which they feed and a Brazilian tree frog (Xenohyla truncata) includes a large quantity of fruit in its diet. The Mexican burrowing toad (Rhinophrynus dorsalis) has a specially adapted tongue for picking up ants and termites. It projects it with the tip foremost whereas other frogs flick out the rear part first, their tongues being hinged at the front.
Food is mostly selected by sight, even in conditions of dim light. Movement of the prey triggers a feeding response. Frogs have been caught on fish hooks baited with red flannel and green frogs (Rana clamitans) have been found with stomachs full of elm seeds that they had seen floating past. Toads, salamanders and caecilians also use smell to detect prey. This response is mostly secondary because salamanders have been observed to remain stationary near odoriferous prey but only feed if it moves. Cave-dwelling amphibians normally hunt by smell. Some salamanders seem to have learned to recognize immobile prey when it has no smell, even in complete darkness.
Amphibians usually swallow food whole but may chew it lightly first to subdue it. They typically have small hinged pedicellate teeth, a feature unique to amphibians. The base and crown of these are composed of dentine separated by an uncalcified layer and they are replaced at intervals. Salamanders, caecilians and some frogs have one or two rows of teeth in both jaws, but some frogs (Rana spp.) lack teeth in the lower jaw, and toads (Bufo spp.) have no teeth. In many amphibians there are also vomerine teeth attached to a facial bone in the roof of the mouth.
The tiger salamander (Ambystoma tigrinum) is typical of the frogs and salamanders that hide under cover ready to ambush unwary invertebrates. Other amphibians, such as the Bufo spp. toads, actively search for prey, while the Argentine horned frog (Ceratophrys ornata) lures inquisitive prey closer by raising its hind feet over its back and vibrating its yellow toes. Among leaf litter frogs in Panama, frogs that actively hunt prey have narrow mouths and are slim, often brightly coloured and toxic, while ambushers have wide mouths and are broad and well-camouflaged. Caecilians do not flick their tongues, but catch their prey by grabbing it with their slightly backward-pointing teeth. The struggles of the prey and further jaw movements work it inwards and the caecilian usually retreats into its burrow. The subdued prey is gulped down whole.
When they are newly hatched, frog larvae feed on the yolk of the egg. When this is exhausted some move on to feed on bacteria, algal crusts, detritus and raspings from submerged plants. Water is drawn in through their mouths, which are usually at the bottom of their heads, and passes through branchial food traps between their mouths and their gills where fine particles are trapped in mucus and filtered out. Others have specialised mouthparts consisting of a horny beak edged by several rows of labial teeth. They scrape and bite food of many kinds as well as stirring up the bottom sediment, filtering out larger particles with the papillae around their mouths. Some, such as the spadefoot toads, have strong biting jaws and are carnivorous or even cannibalistic.
Vocalization
The calls made by caecilians and salamanders are limited to occasional soft squeaks, grunts or hisses and have not been much studied. A clicking sound sometimes produced by caecilians may be a means of orientation, as in bats, or a form of communication. Most salamanders are considered voiceless, but the California giant salamander (Dicamptodon ensatus) has vocal cords and can produce a rattling or barking sound. Some species of salamander emit a quiet squeak or yelp if attacked.
Frogs are much more vocal, especially during the breeding season when they use their voices to attract mates. The presence of a particular species in an area may be more easily discerned by its characteristic call than by a fleeting glimpse of the animal itself. In most species, the sound is produced by expelling air from the lungs over the vocal cords into one or more air sacs in the throat or at the corner of the mouth. This may distend like a balloon and acts as a resonator, helping to transfer the sound to the atmosphere, or the water at times when the animal is submerged. The main vocalisation is the male's loud advertisement call which seeks to both encourage a female to approach and discourage other males from intruding on its territory. This call is modified to a quieter courtship call on the approach of a female or to a more aggressive version if a male intruder draws near. Calling carries the risk of attracting predators and involves the expenditure of much energy. Other calls include those given by a female in response to the advertisement call and a release call given by a male or female during unwanted attempts at amplexus. When a frog is attacked, a distress or fright call is emitted, often resembling a scream. The usually nocturnal Cuban tree frog (Osteopilus septentrionalis) produces a rain call when there is rainfall during daylight hours.
Territorial behaviour
Little is known of the territorial behaviour of caecilians, but some frogs and salamanders defend home ranges. These are usually feeding, breeding or sheltering sites. Males normally exhibit such behaviour though in some species, females and even juveniles are also involved. Although in many frog species, females are larger than males, this is not the case in most species where males are actively involved in territorial defence. Some of these have specific adaptations such as enlarged teeth for biting or spines on the chest, arms or thumbs.
In salamanders, defence of a territory involves adopting an aggressive posture and, if necessary, attacking the intruder. This may involve snapping, chasing and sometimes biting, occasionally causing the loss of a tail. The behaviour of red-backed salamanders (Plethodon cinereus) has been much studied: 91% of marked individuals that were later recaptured were within a metre (yard) of their original daytime retreat under a log or rock, and a similar proportion, when experimentally displaced, found their way back to their home base. The salamanders left odour marks around their territories, which were sometimes inhabited by a male and female pair. These marks deterred the intrusion of others and delineated the boundaries between neighbouring areas. Much of their behaviour seemed stereotyped and did not involve any actual contact between individuals. An aggressive posture involved raising the body off the ground and glaring at the opponent, who often turned away submissively. If the intruder persisted, a biting lunge was usually launched at either the tail region or the naso-labial grooves. Damage to either of these areas can reduce the fitness of the rival, either because of the need to regenerate tissue or because it impairs its ability to detect food.
In frogs, male territorial behaviour is often observed at breeding locations; calling is both an announcement of ownership of part of this resource and an advertisement call to potential mates. In general, a deeper voice represents a heavier and more powerful individual, and this may be sufficient to prevent intrusion by smaller males. Much energy is used in the vocalization and it takes a toll on the territory holder who may be displaced by a fitter rival if he tires. There is a tendency for males to tolerate the holders of neighbouring territories while vigorously attacking unknown intruders. Holders of territories have a "home advantage" and usually come off better in an encounter between two similar-sized frogs. If threats are insufficient, chest to chest tussles may take place. Fighting methods include pushing and shoving, deflating the opponent's vocal sac, seizing him by the head, jumping on his back, biting, chasing, splashing, and ducking him under the water.
Defence mechanisms
Amphibians have soft bodies with thin skins, and lack claws, defensive armour, or spines. Nevertheless, they have evolved various defence mechanisms to keep themselves alive. The first line of defence in salamanders and frogs is the mucous secretion that they produce. This keeps their skin moist and makes them slippery and difficult to grip. The secretion is often sticky and distasteful or toxic. Snakes have been observed yawning and gaping when trying to swallow African clawed frogs (Xenopus laevis), which gives the frogs an opportunity to escape. Caecilians have been little studied in this respect, but the Cayenne caecilian (Typhlonectes compressicauda) produces toxic mucus that has killed predatory fish in a feeding experiment in Brazil. In some salamanders, the skin is poisonous. The rough-skinned newt (Taricha granulosa) from North America and other members of its genus contain the neurotoxin tetrodotoxin (TTX), the most toxic non-protein substance known and almost identical to that produced by pufferfish. Handling the newts does not cause harm, but ingestion of even the most minute amounts of the skin is deadly. In feeding trials, fish, frogs, reptiles, birds and mammals were all found to be susceptible. The only predators with some tolerance to the poison are certain populations of common garter snake (Thamnophis sirtalis).
In locations where the snakes and newts co-exist, the snakes have developed immunity through genetic changes and feed on the amphibians with impunity. Coevolution occurs, with the newt increasing its toxic capabilities at the same rate as the snake further develops its immunity. Some frogs and toads are toxic, the main poison glands being at the side of the neck and under the warts on the back. These regions are presented to the attacking animal, and their secretions may be foul-tasting or cause various physical or neurological symptoms. Altogether, over 200 toxins have been isolated from the limited number of amphibian species that have been investigated.
Poisonous species often use bright colouring to warn potential predators of their toxicity. These warning colours tend to be red or yellow combined with black, with the fire salamander (Salamandra salamandra) being an example. Once a predator has sampled one of these, it is likely to remember the colouration next time it encounters a similar animal. In some species, such as the fire-bellied toad (Bombina spp.), the warning colouration is on the belly and these animals adopt a defensive pose when attacked, exhibiting their bright colours to the predator. The frog Allobates zaparo is not poisonous, but mimics the appearance of other toxic species in its locality, a strategy that may deceive predators.
Many amphibians are nocturnal and hide during the day, thereby avoiding diurnal predators that hunt by sight. Other amphibians use camouflage to avoid being detected. They have various colourings such as mottled browns, greys and olives to blend into the background. Some salamanders adopt defensive poses when faced by a potential predator such as the North American northern short-tailed shrew (Blarina brevicauda). Their bodies writhe and they raise and lash their tails which makes it difficult for the predator to avoid contact with their poison-producing granular glands. A few salamanders will autotomise their tails when attacked, sacrificing this part of their anatomy to enable them to escape. The tail may have a constriction at its base to allow it to be easily detached. The tail is regenerated later, but the energy cost to the animal of replacing it is significant.
Some frogs and toads inflate themselves to make themselves look large and fierce, and some spadefoot toads (Pelobates spp.) scream and leap towards the attacker. Giant salamanders of the genus Andrias, as well as ceratophryine and Pyxicephalus frogs, possess sharp teeth and are capable of drawing blood with a defensive bite. The blackbelly salamander (Desmognathus quadramaculatus) can bite an attacking common garter snake (Thamnophis sirtalis) two or three times its size on the head and often manages to escape.
Cognition
In amphibians, there is evidence of habituation, associative learning through both classical and instrumental learning, and discrimination abilities. Amphibians are widely considered to be sentient, able to feel emotions such as anxiety and fear.
In one experiment, when offered live fruit flies (Drosophila virilis), salamanders chose the larger of 1 vs 2 and 2 vs 3. Frogs can distinguish between low numbers (1 vs 2, 2 vs 3, but not 3 vs 4) and large numbers (3 vs 6, 4 vs 8, but not 4 vs 6) of prey. This is irrespective of other characteristics, i.e. surface area, volume, weight and movement, although discrimination among large numbers may be based on surface area.
Conservation
Dramatic declines in amphibian populations, including population crashes and mass localized extinctions, have been noted since the late 1980s in locations all over the world, and amphibian declines are thus perceived to be one of the most critical threats to global biodiversity. In 2004, the International Union for Conservation of Nature (IUCN) reported that the extinction rates of birds, mammals, and amphibians were at minimum 48 times greater than natural extinction rates—possibly up to 1,024 times higher. In 2006, there were believed to be 4,035 species of amphibians that depended on water at some stage during their life cycle. Of these, 1,356 (33.6%) were considered to be threatened, and this figure is likely to be an underestimate because it excludes 1,427 species for which there was insufficient data to assess their status. A number of causes are believed to be involved, including habitat destruction and modification, over-exploitation, pollution, introduced species, global warming, endocrine-disrupting pollutants, destruction of the ozone layer (ultraviolet radiation has been shown to be especially damaging to the skin, eyes, and eggs of amphibians), and diseases like chytridiomycosis. However, many of the causes of amphibian declines are still poorly understood and are a topic of ongoing discussion.
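As a sanity check on the threat-assessment figures, a minimal sketch (the numbers are taken from the text above, not from an authoritative dataset; the variable names are ours):

```python
# Arithmetic behind the 2006 amphibian threat assessment quoted above.
water_dependent = 4035   # species depending on water at some life stage
threatened = 1356        # species considered threatened
data_deficient = 1427    # species with insufficient data to assess

share = 100 * threatened / water_dependent
print(f"{share:.1f}% of assessed species threatened")  # -> 33.6%
# The figure is likely an underestimate: the data-deficient species
# below are excluded from the threatened count entirely.
print(f"{data_deficient} species could not be assessed")
```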
Food webs and predation
Any decline in amphibian numbers will affect the patterns of predation. The loss of carnivorous species near the top of the food chain will upset the delicate ecosystem balance and may cause dramatic increases in opportunistic species.
Predators that feed on amphibians are affected by their decline. The western terrestrial garter snake (Thamnophis elegans) in California is largely aquatic and depends heavily on two species of frog that are decreasing in numbers, the Yosemite toad (Bufo canorus) and the mountain yellow-legged frog (Rana muscosa), putting the snake's future at risk. If the snake were to become scarce, this would affect birds of prey and other predators that feed on it. Meanwhile, in the ponds and lakes, fewer frogs means fewer tadpoles. These normally play an important role in controlling the growth of algae and also forage on detritus that accumulates as sediment on the bottom. A reduction in the number of tadpoles may lead to an overgrowth of algae, resulting in depletion of oxygen in the water when the algae later die and decompose. Aquatic invertebrates and fish might then die and there would be unpredictable ecological consequences.
Pollution and pesticides
The decline in amphibian and reptile populations has led to an awareness of the effects of pesticides on these animals. The argument that amphibians and reptiles are more susceptible to chemical contamination than other terrestrial or aquatic vertebrates was, until recently, not supported by research. Amphibians and reptiles have complex life cycles, live in a range of climatic and ecological zones, and are vulnerable to chemical exposure. Certain pesticides, such as organophosphates and carbamates, act via cholinesterase inhibition (neonicotinoids, by contrast, act directly on acetylcholine receptors). Cholinesterase is an enzyme that catalyses the hydrolysis of acetylcholine, an excitatory neurotransmitter that is abundant in the nervous system. Acetylcholinesterase (AChE) inhibitors are either reversible or irreversible; carbamates are safer than organophosphorus insecticides, which are more likely to cause cholinergic poisoning. Exposure to an AChE-inhibiting pesticide may disrupt neural function, and these inhibitory effects can build up, impairing motor performance and activities such as food consumption.
Conservation and protection strategies
The Amphibian Specialist Group of the IUCN is spearheading efforts to implement a comprehensive global strategy for amphibian conservation. Amphibian Ark is an organization that was formed to implement the ex-situ conservation recommendations of this plan, and they have been working with zoos and aquaria around the world, encouraging them to create assurance colonies of threatened amphibians. One such project is the Panama Amphibian Rescue and Conservation Project that built on existing conservation efforts in Panama to create a country-wide response to the threat of chytridiomycosis.
Another measure would be to stop the exploitation of frogs for human consumption. In the Middle East, a growing appetite for frog legs, and the consequent harvesting of frogs for food, has been linked to an increase in mosquito numbers, with direct consequences for human health.
See also
Amphibian and reptile tunnel
Amphibious fish
Cultural depictions of amphibians
List of amphibians
List of amphibian genera
List of threatened reptiles and amphibians of the United States
Softshell turtle – A taxonomic family of turtles, a number of whose genera are able to "breathe" underwater through rhythmic movements of the mouth cavity, which contains numerous processes copiously supplied with blood that act similarly to the gill filaments of fish.
Annulated sea snake – A species of venomous sea snake that is able to breathe underwater with the help of an extensive vascular network across the top of its head, which absorbs oxygen from the surrounding water.
Cutaneous respiration
References
Cited texts
Further reading
Duellman, William E., Berg, Barbara (1962), Type Specimens of Amphibians and Reptiles in the Museum of Natural History, the University of Kansas
External links
Amphibians – AnimalSpot.net
ArchéoZooThèque : Amphibians skeletons drawings : available in vector, image and PDF formats
Amphibian Specialist Group
Amphibian Ark
AmphibiaWeb
Global Amphibian Assessment
Amphibian vocalisations on Archival Sound Recordings
Amphibious organisms
Extant Late Devonian first appearances
Taxa named by John Edward Gray | Amphibian | [
"Biology"
] | 17,218 | [
"Animals",
"Amphibians"
] |
633 | https://en.wikipedia.org/wiki/Algae | Algae (singular: alga) is an informal term for any of a large and diverse group of photosynthetic eukaryotes, which include species from multiple distinct clades. Such organisms range from unicellular microalgae such as Chlorella, Prototheca and the diatoms, to multicellular macroalgae such as the giant kelp, a large brown alga which may grow up to 50 m in length. Most algae are aquatic organisms and lack many of the distinct cell and tissue types, such as stomata, xylem and phloem, that are found in land plants. The largest and most complex marine algae are called seaweeds. In contrast, the most complex freshwater forms are the Charophyta, a division of green algae which includes, for example, Spirogyra and stoneworts. Algae that are carried passively by water are plankton, specifically phytoplankton.
Algae constitute a polyphyletic group since they do not include a common ancestor, and although their chlorophyll-bearing plastids seem to have a single origin (from symbiogenesis with cyanobacteria), they were acquired in different ways. Green algae are a prominent example of algae that have primary chloroplasts derived from endosymbiotic cyanobacteria. Diatoms and brown algae are examples of algae with secondary chloroplasts derived from endosymbiotic red algae, which they acquired via phagocytosis. Algae exhibit a wide range of reproductive strategies, from simple asexual cell division to complex forms of sexual reproduction via spores.
Algae lack the various structures that characterize plants (which evolved from freshwater green algae), such as the phyllids (leaf-like structures) and rhizoids of bryophytes (non-vascular plants), and the roots, leaves and other xylemic/phloemic organs found in tracheophytes (vascular plants). Most algae are autotrophic, although some are mixotrophic, deriving energy both from photosynthesis and uptake of organic carbon either by osmotrophy, myzotrophy or phagotrophy. Some unicellular species of green algae, many golden algae, euglenids, dinoflagellates, and other algae have become heterotrophs (also called colorless or apochlorotic algae), sometimes parasitic, relying entirely on external energy sources and have limited or no photosynthetic apparatus. Some other heterotrophic organisms, such as the apicomplexans, are also derived from cells whose ancestors possessed chlorophyllic plastids, but are not traditionally considered as algae. Algae have photosynthetic machinery ultimately derived from cyanobacteria that produce oxygen as a byproduct of splitting water molecules, unlike other organisms that conduct anoxygenic photosynthesis such as purple and green sulfur bacteria. Fossilized filamentous algae from the Vindhya basin have been dated to 1.6 to 1.7 billion years ago.
Because of the wide range of algae types, they have many different industrial and traditional applications in human society. Traditional seaweed farming practices have existed for thousands of years and have strong traditions in East Asian food cultures. More modern algaculture applications extend these food traditions to other uses, including cattle feed, using algae for bioremediation or pollution control, transforming sunlight into algae fuels or other chemicals used in industrial processes, and medical and scientific applications. A 2020 review found that these applications of algae could play an important role in carbon sequestration to mitigate climate change while providing lucrative value-added products for global economies.
Etymology and study
The singular alga is the Latin word for 'seaweed' and retains that meaning in English. The etymology is obscure. Although some speculate that it is related to Latin algēre, 'be cold', no reason is known to associate seaweed with temperature. A more likely source is a word meaning 'binding, entwining'.
The Ancient Greek word for 'seaweed' was φῦκος (phŷkos), which could mean either the seaweed (probably red algae) or a red dye derived from it. The Latinization, fucus, meant primarily the cosmetic rouge. The etymology is uncertain, but a strong candidate has long been some word related to the Biblical word for 'paint' (if not that word itself), a cosmetic eye-shadow used by the ancient Egyptians and other inhabitants of the eastern Mediterranean. It could be any color: black, red, green, or blue.
The study of algae is most commonly called phycology; the term algology is falling out of use.
Classifications
One definition of algae is that they "have chlorophyll as their primary photosynthetic pigment and lack a sterile covering of cells around their reproductive cells". On the other hand, the colorless Prototheca under Chlorophyta are all devoid of any chlorophyll. Although cyanobacteria are often referred to as "blue-green algae", most authorities exclude all prokaryotes, including cyanobacteria, from the definition of algae.
The algae contain chloroplasts that are similar in structure to cyanobacteria. Chloroplasts contain circular DNA like that in cyanobacteria and are interpreted as representing reduced endosymbiotic cyanobacteria. However, the exact origin of the chloroplasts differs among separate lineages of algae, reflecting their acquisition during different endosymbiotic events. Many of these groups contain some members that are no longer photosynthetic. Some retain plastids, but not chloroplasts, while others have lost plastids entirely.
Phylogeny based on plastid rather than nucleocytoplasmic genealogy.
Linnaeus, in Species Plantarum (1753), the starting point for modern botanical nomenclature, recognized 14 genera of algae, of which only four are currently considered among algae. In Systema Naturae, Linnaeus described the genera Volvox and Corallina, and a species of Acetabularia (as Madrepora), among the animals.
In 1768, Samuel Gottlieb Gmelin (1744–1774) published the Historia Fucorum, the first work dedicated to marine algae and the first book on marine biology to use the then new binomial nomenclature of Linnaeus. It included elaborate illustrations of seaweed and marine algae on folded leaves.
W. H. Harvey (1811–1866) and Lamouroux (1813) were the first to divide macroscopic algae into four divisions based on their pigmentation. This is the first use of a biochemical criterion in plant systematics. Harvey's four divisions are: red algae (Rhodospermae), brown algae (Melanospermae), green algae (Chlorospermae), and Diatomaceae.
At this time, microscopic algae were discovered and reported by a different group of workers (e.g., O. F. Müller and Ehrenberg) studying the Infusoria (microscopic organisms). Unlike macroalgae, which were clearly viewed as plants, microalgae were frequently considered animals because they are often motile. Even the nonmotile (coccoid) microalgae were sometimes merely seen as stages of the lifecycle of plants, macroalgae, or animals.
Although used as a taxonomic category in some pre-Darwinian classifications, e.g., Linnaeus (1753), de Jussieu (1789), Lamouroux (1813), Harvey (1836), Horaninow (1843), Agassiz (1859), Wilson & Cassin (1864), in later classifications the "algae" are seen as an artificial, polyphyletic group.
Throughout the 20th century, most classifications treated the following groups as divisions or classes of algae: cyanophytes, rhodophytes, chrysophytes, xanthophytes, bacillariophytes, phaeophytes, pyrrhophytes (cryptophytes and dinophytes), euglenophytes, and chlorophytes. Later, many new groups were discovered (e.g., Bolidophyceae), and others were splintered from older groups: charophytes and glaucophytes (from chlorophytes), many heterokontophytes (e.g., synurophytes from chrysophytes, or eustigmatophytes from xanthophytes), haptophytes (from chrysophytes), and chlorarachniophytes (from xanthophytes).
With the abandonment of plant-animal dichotomous classification, most groups of algae (sometimes all) were included in Protista, later also abandoned in favour of Eukaryota. However, as a legacy of the older plant life scheme, some groups that were also treated as protozoans in the past still have duplicated classifications (see ambiregnal protists).
Some parasitic algae (e.g., the green algae Prototheca and Helicosporidium, parasites of metazoans, or Cephaleuros, parasites of plants) were originally classified as fungi, sporozoans, or protistans of incertae sedis, while others (e.g., the green algae Phyllosiphon and Rhodochytrium, parasites of plants, or the red algae Pterocladiophila and Gelidiocolax mammillatus, parasites of other red algae, or the dinoflagellates Oodinium, parasites of fish) had their relationship with algae conjectured early. In other cases, some groups were originally characterized as parasitic algae (e.g., Chlorochytrium), but later were seen as endophytic algae. Some filamentous bacteria (e.g., Beggiatoa) were originally seen as algae. Furthermore, groups like the apicomplexans are also parasites derived from ancestors that possessed plastids, but are not included in any group traditionally seen as algae.
Evolution
Algae are polyphyletic, so their origin cannot be traced back to a single hypothetical common ancestor. They are thought to have come into existence when photosynthetic coccoid cyanobacteria were phagocytized by a unicellular heterotrophic eukaryote (a protist), giving rise to double-membraned primary plastids. Such symbiogenic events (primary symbiogenesis) are believed to have occurred more than 1.5 billion years ago, during the Calymmian period early in the Boring Billion, but the key events are difficult to track because of the vast span of time involved. Primary symbiogenesis gave rise to three divisions of archaeplastids, namely the Viridiplantae (green algae and later plants), Rhodophyta (red algae) and Glaucophyta ("grey algae"), whose plastids further spread into other protist lineages through eukaryote-eukaryote predation, engulfment and subsequent endosymbioses (secondary and tertiary symbiogenesis). This process of serial cell "capture" and "enslavement" explains the diversity of photosynthetic eukaryotes.
Recent genomic and phylogenomic approaches have significantly clarified plastid genome evolution, the horizontal movement of endosymbiont genes to the "host" nuclear genome, and plastid spread throughout the eukaryotic tree of life.
Relationship to land plants
Fossils of isolated spores suggest that land plants may have existed as early as 475 million years ago (mya), during the Late Cambrian/Early Ordovician, having evolved from sessile shallow-freshwater charophyte algae much like Chara, which were likely stranded ashore when riverine or lacustrine water levels dropped during dry seasons. These charophyte algae probably already possessed filamentous thalli and holdfasts that superficially resembled plant stems and roots, and probably had an isomorphic alternation of generations. They perhaps evolved some 850 mya, and might date back as far as 1 Gya, during the late phase of the Boring Billion.
Morphology
A range of algal morphologies is exhibited, and convergence of features in unrelated groups is common. The only groups to exhibit three-dimensional multicellular thalli are the reds and browns, and some chlorophytes. Apical growth is constrained to subsets of these groups: the florideophyte reds, various browns, and the charophytes. The form of charophytes is quite different from those of reds and browns, because they have distinct nodes, separated by internode 'stems'; whorls of branches reminiscent of the horsetails occur at the nodes. Conceptacles are another polyphyletic trait; they appear in the coralline algae and the Hildenbrandiales, as well as the browns.
Most of the simpler algae are unicellular flagellates or amoeboids, but colonial and nonmotile forms have developed independently among several of the groups. Some of the more common organizational levels, more than one of which may occur in the lifecycle of a species, are
Colonial: small, regular groups of motile cells
Capsoid: individual non-motile cells embedded in mucilage
Coccoid: individual non-motile cells with cell walls
Palmelloid: nonmotile cells embedded in mucilage
Filamentous: a string of connected nonmotile cells, sometimes branching
Parenchymatous: cells forming a thallus with partial differentiation of tissues
In three lines, even higher levels of organization have been reached, with full tissue differentiation. These are the brown algae,—some of which may reach 50 m in length (kelps)—the red algae, and the green algae. The most complex forms are found among the charophyte algae (see Charales and Charophyta), in a lineage that eventually led to the higher land plants. The innovation that defines these nonalgal plants is the presence of female reproductive organs with protective cell layers that protect the zygote and developing embryo. Hence, the land plants are referred to as the Embryophytes.
Turfs
The term algal turf is commonly used but poorly defined. Algal turfs are thick, carpet-like beds of seaweed that retain sediment and compete with foundation species like corals and kelps, and they are usually less than 15 cm tall. Such a turf may consist of one or more species, and will generally cover an area in the order of a square metre or more. Some common characteristics are listed:
Algae that form aggregations that have been described as turfs include diatoms, cyanobacteria, chlorophytes, phaeophytes and rhodophytes. Turfs are often composed of numerous species at a wide range of spatial scales, but monospecific turfs are frequently reported.
Turfs can be morphologically highly variable over geographic scales and even within species on local scales and can be difficult to identify in terms of the constituent species.
Turfs have been defined as short algae, but this has been used to describe height ranges from less than 0.5 cm to more than 10 cm. In some regions, the descriptions approached heights which might be described as canopies (20 to 30 cm).
Physiology
Many algae, particularly species of the Characeae, have served as model experimental organisms to understand the mechanisms of the water permeability of membranes, osmoregulation, salt tolerance, cytoplasmic streaming, and the generation of action potentials. Plant hormones are found not only in higher plants, but in algae, too.
Symbiotic algae
Some species of algae form symbiotic relationships with other organisms. In these symbioses, the algae supply photosynthates (organic substances) to the host organism, which in turn provides protection to the algal cells. The host organism derives some or all of its energy requirements from the algae. Examples are:
Lichens
Lichens are defined by the International Association for Lichenology to be "an association of a fungus and a photosynthetic symbiont resulting in a stable vegetative body having a specific structure". The fungi, or mycobionts, are mainly from the Ascomycota with a few from the Basidiomycota. In nature, they do not occur separate from lichens. It is unknown when they began to associate. One or more mycobiont associates with the same phycobiont species, from the green algae, except that alternatively, the mycobiont may associate with a species of cyanobacteria (hence "photobiont" is the more accurate term). A photobiont may be associated with many different mycobionts or may live independently; accordingly, lichens are named and classified as fungal species. The association is termed a morphogenesis because the lichen has a form and capabilities not possessed by the symbiont species alone (they can be experimentally isolated). The photobiont possibly triggers otherwise latent genes in the mycobiont.
Trentepohlia is an example of a common green alga genus worldwide that can grow on its own or be lichenised. Lichens thus share some of the habitat, and often a similar appearance, with specialized species of algae (aerophytes) that grow on exposed surfaces such as tree trunks and rocks and sometimes discolor them.
Coral reefs
Coral reefs are accumulated from the calcareous exoskeletons of marine invertebrates of the order Scleractinia (stony corals). These animals metabolize sugar and oxygen to obtain energy for their cell-building processes, including secretion of the exoskeleton, with water and carbon dioxide as byproducts. Dinoflagellates (algal protists) are often endosymbionts in the cells of the coral-forming marine invertebrates, where they accelerate host-cell metabolism by generating sugar and oxygen immediately available through photosynthesis using incident light and the carbon dioxide produced by the host. Reef-building stony corals (hermatypic corals) require endosymbiotic algae from the genus Symbiodinium to be in a healthy condition. The loss of Symbiodinium from the host is known as coral bleaching, a condition which leads to the deterioration of a reef.
Sea sponges
Endosymbiotic green algae live close to the surface of some sponges, for example, breadcrumb sponges (Halichondria panicea). The alga is thus protected from predators; the sponge is provided with oxygen and sugars, which can account for 50 to 80% of sponge growth in some species.
Life cycle
Rhodophyta, Chlorophyta, and Heterokontophyta, the three main algal divisions, have life cycles which show considerable variation and complexity. In general, an asexual phase exists where the seaweed's cells are diploid, a sexual phase where the cells are haploid, followed by fusion of the male and female gametes. Asexual reproduction permits efficient population increases, but less variation is possible. Commonly, in sexual reproduction of unicellular and colonial algae, two specialized, sexually compatible, haploid gametes make physical contact and fuse to form a zygote. To ensure a successful mating, the development and release of gametes is highly synchronized and regulated; pheromones may play a key role in these processes. Sexual reproduction allows for more variation and provides the benefit of efficient recombinational repair of DNA damages during meiosis, a key stage of the sexual cycle. However, sexual reproduction is more costly than asexual reproduction. Meiosis has been shown to occur in many different species of algae.
Numbers
The Algal Collection of the US National Herbarium (located in the National Museum of Natural History) consists of approximately 320,500 dried specimens, which, although not exhaustive (no exhaustive collection exists), gives an idea of the order of magnitude of the number of algal species (that number remains unknown). Estimates vary widely. For example, according to one standard textbook, in the British Isles, the UK Biodiversity Steering Group Report estimated there to be 20,000 algal species in the UK. Another checklist reports only about 5,000 species. Regarding the difference of about 15,000 species, the text concludes: "It will require many detailed field surveys before it is possible to provide a reliable estimate of the total number of species ..."
Regional and group estimates have been made, as well:
5,000–5,500 species of red algae worldwide
"some 1,300 in Australian Seas"
400 seaweed species for the western coastline of South Africa, and 212 species from the coast of KwaZulu-Natal. Some of these are duplicates, as the range extends across both coasts, and the total recorded is probably about 500 species. Most of these are listed in List of seaweeds of South Africa. These exclude phytoplankton and crustose corallines.
669 marine species from California (US)
642 in the check-list of Britain and Ireland
and so on, but lacking any scientific basis or reliable sources, these numbers have no more credibility than the British ones mentioned above. Most estimates also omit microscopic algae, such as phytoplankton.
The most recent estimate suggests 72,500 algal species worldwide.
Distribution
The distribution of algal species has been fairly well studied since the founding of phytogeography in the mid-19th century. Algae spread mainly by the dispersal of spores analogously to the dispersal of cryptogamic plants by spores. Spores can be found in a variety of environments: fresh and marine waters, air, soil, and in or on other organisms. Whether a spore is to grow into an adult organism depends on the species and the environmental conditions where the spore lands.
The spores of freshwater algae are dispersed mainly by running water and wind, as well as by living carriers. However, not all bodies of water can carry all species of algae, as the chemical composition of certain water bodies limits the algae that can survive within them. Marine spores are often spread by ocean currents. Ocean water presents many vastly different habitats based on temperature and nutrient availability, resulting in phytogeographic zones, regions, and provinces.
To some degree, the distribution of algae is subject to floristic discontinuities caused by geographical features, such as Antarctica, long distances of ocean or general land masses. It is, therefore, possible to identify species occurring by locality, such as "Pacific algae" or "North Sea algae". When they occur out of their localities, hypothesizing a transport mechanism is usually possible, such as the hulls of ships. For example, Ulva reticulata and U. fasciata travelled from the mainland to Hawaii in this manner.
Mapping is possible for select species only: "there are many valid examples of confined distribution patterns." For example, Clathromorphum is an arctic genus and is not mapped far south of there. However, scientists regard the overall data as insufficient due to the "difficulties of undertaking such studies."
Ecology
Algae are prominent in bodies of water, common in terrestrial environments, and are found in unusual environments, such as on snow and ice. Seaweeds grow mostly in shallow marine waters; however, some, such as Navicula pennata, have been recorded at considerable depth. A type of algae, Ancylonema nordenskioeldii, was found in Greenland in areas known as the 'Dark Zone', where it increased the rate of melting of the ice sheet. The same alga was found in the Italian Alps after pink ice appeared on parts of the Presena glacier.
The various sorts of algae play significant roles in aquatic ecology. Microscopic forms that live suspended in the water column (phytoplankton) provide the food base for most marine food chains. In very high densities (algal blooms), these algae may discolor the water and outcompete, poison, or asphyxiate other life forms.
Algae can be used as indicator organisms to monitor pollution in various aquatic systems. In many cases, algal metabolism is sensitive to various pollutants. Due to this, the species composition of algal populations may shift in the presence of chemical pollutants. To detect these changes, algae can be sampled from the environment and maintained in laboratories with relative ease.
On the basis of their habitat, algae can be categorized as: aquatic (planktonic, benthic, marine, freshwater, lentic, lotic), terrestrial, aerial (subaerial), lithophytic, halophytic (or euryhaline), psammon, thermophilic, cryophilic, epibiont (epiphytic, epizoic), endosymbiont (endophytic, endozoic), parasitic, calcifilic or lichenic (phycobiont).
Cultural associations
In classical Chinese, the word 藻 is used both for "algae" and (in the modest tradition of the imperial scholars) for "literary talent". The third island in Kunming Lake beside the Summer Palace in Beijing is known as the Zaojian Tang Dao (藻鑒堂島), which thus simultaneously means "Island of the Algae-Viewing Hall" and "Island of the Hall for Reflecting on Literary Talent".
Cultivation
Seaweed farming
Bioreactors
Uses
Agar
Agar, a gelatinous substance derived from red algae, has a number of commercial uses. It is a good medium on which to grow bacteria and fungi, as most microorganisms cannot digest agar.
Alginates
Alginic acid, or alginate, is extracted from brown algae. Its uses range from gelling agents in food, to medical dressings. Alginic acid also has been used in the field of biotechnology as a biocompatible medium for cell encapsulation and cell immobilization. Molecular cuisine is also a user of the substance for its gelling properties, by which it becomes a delivery vehicle for flavours.
Between 100,000 and 170,000 wet tons of Macrocystis are harvested annually in Mexico for alginate extraction and abalone feed.
Energy source
To be competitive and independent of fluctuating support from (local) policy in the long run, biofuels should equal or beat the cost of fossil fuels. Here, algae-based fuels hold great promise, owing to their potential to produce more biomass per unit area per year than any other form of biomass. The break-even point for algae-based biofuels is estimated to occur by 2025.
Fertilizer
For centuries, seaweed has been used as a fertilizer; George Owen of Henllys, writing in the 16th century, referred to the use of drift weed in South Wales.
Today, algae are used by humans in many ways; for example, as fertilizers, soil conditioners, and livestock feed. Aquatic and microscopic species are cultured in clear tanks or ponds and are either harvested or used to treat effluents pumped through the ponds. Algaculture on a large scale is an important type of aquaculture in some places. Maerl is commonly used as a soil conditioner.
As food
Algae are used as foods in many countries: China consumes more than 70 species, including fat choy, a cyanobacterium considered a vegetable; Japan, over 20 species, such as nori and aonori; Ireland, dulse; Chile, cochayuyo. Laver is used to make laverbread in Wales, and green laver is eaten in Korea.
Three forms of algae used as food:
Chlorella: This form of alga is found in freshwater and contains photosynthetic pigments in its chloroplast.
Klamath AFA: A subspecies of Aphanizomenon flos-aquae found wild in many bodies of water worldwide but harvested only from Upper Klamath Lake, Oregon.
Spirulina: Known otherwise as a cyanobacterium (a prokaryote or a "blue-green alga")
The oils from some algae have high levels of unsaturated fatty acids. Some varieties of algae favored by vegetarianism and veganism contain the long-chain, essential omega-3 fatty acids, docosahexaenoic acid (DHA) and eicosapentaenoic acid (EPA). Fish oil contains the omega-3 fatty acids, but the original source is algae (microalgae in particular), which are eaten by marine life such as copepods and are passed up the food chain.
Pollution control
Sewage can be treated with algae, reducing the use of large amounts of toxic chemicals that would otherwise be needed.
Algae can be used to capture fertilizers in runoff from farms. When subsequently harvested, the enriched algae can be used as fertilizer.
Aquaria and ponds can be filtered using algae, which absorb nutrients from the water in a device called an algae scrubber, also known as an algae turf scrubber.
Agricultural Research Service scientists found that 60–90% of nitrogen runoff and 70–100% of phosphorus runoff can be captured from manure effluents using a horizontal algae scrubber, also called an algal turf scrubber (ATS). Scientists developed the ATS, which consists of shallow, 100-foot raceways of nylon netting where algae colonies can form, and studied its efficacy for three years. They found that algae can readily be used to reduce the nutrient runoff from agricultural fields and increase the quality of water flowing into rivers, streams, and oceans. Researchers collected and dried the nutrient-rich algae from the ATS and studied its potential as an organic fertilizer. They found that cucumber and corn seedlings grew just as well using ATS organic fertilizer as they did with commercial fertilizers. Algae scrubbers, using bubbling upflow or vertical waterfall versions, are now also being used to filter aquaria and ponds.
Polymers
Various polymers can be created from algae, which can be especially useful in the creation of bioplastics. These include hybrid plastics, cellulose-based plastics, poly-lactic acid, and bio-polyethylene. Several companies have begun to produce algae polymers commercially, including for use in flip-flops and in surf boards.
Bioremediation
The alga Stichococcus bacillaris has been seen to colonize silicone resins used at archaeological sites, biodegrading the synthetic substance.
Pigments
The natural pigments (carotenoids and chlorophylls) produced by algae can be used as alternatives to chemical dyes and coloring agents.
The presence of some individual algal pigments, together with specific pigment concentration ratios, are taxon-specific: analysis of their concentrations with various analytical methods, particularly high-performance liquid chromatography, can therefore offer deep insight into the taxonomic composition and relative abundance of natural algae populations in sea water samples.
Stabilizing substances
Carrageenan, from the red alga Chondrus crispus, is used as a stabilizer in milk products.
See also
AlgaeBase
AlgaePARC
Eutrophication
Iron fertilization
Marimo algae
Microbiofuels
Microphyte
Photobioreactor
Phycotechnology
Plants
Toxoid – anatoxin
References
Bibliography
General
Regional
Britain and Ireland
Australia
New Zealand
Europe
Arctic
Greenland
Faroe Islands
Canary Islands
Morocco
South Africa
North America
External links
AlgaeBase – a database of all algal names, including images, nomenclature, taxonomy, distribution, bibliography, uses, extracts
Endosymbiotic events
Polyphyletic groups
Common names of organisms | Algae | [
"Biology"
] | 6,739 | [
"Symbiosis",
"Algae",
"Common names of organisms",
"Endosymbiotic events",
"Biological nomenclature",
"Polyphyletic groups",
"Phylogenetics"
] |
639 | https://en.wikipedia.org/wiki/Alkane | In organic chemistry, an alkane, or paraffin (a historical trivial name that also has other meanings), is an acyclic saturated hydrocarbon. In other words, an alkane consists of hydrogen and carbon atoms arranged in a tree structure in which all the carbon–carbon bonds are single. Alkanes have the general chemical formula CnH2n+2. The alkanes range in complexity from the simplest case of methane (CH4), where n = 1 (sometimes called the parent molecule), to arbitrarily large and complex molecules, like pentacontane (C50H102) or 6-ethyl-2-methyl-5-(1-methylethyl)octane, an isomer of tetradecane (C14H30).
The International Union of Pure and Applied Chemistry (IUPAC) defines alkanes as "acyclic branched or unbranched hydrocarbons having the general formula CnH2n+2, and therefore consisting entirely of hydrogen atoms and saturated carbon atoms". However, some sources use the term to denote any saturated hydrocarbon, including those that are either monocyclic (i.e. the cycloalkanes) or polycyclic, despite them having a distinct general formula (e.g. cycloalkanes are CnH2n).
In an alkane, each carbon atom is sp3-hybridized with 4 sigma bonds (either C–C or C–H), and each hydrogen atom is joined to one of the carbon atoms (in a C–H bond). The longest series of linked carbon atoms in a molecule is known as its carbon skeleton or carbon backbone. The number of carbon atoms may be considered as the size of the alkane.
One group of the higher alkanes are waxes, solids at standard ambient temperature and pressure (SATP), for which the number of carbon atoms in the carbon backbone is greater than about 17.
With their repeated –CH2– units, the alkanes constitute a homologous series of organic compounds in which the members differ in molecular mass by multiples of 14.03 u (the total mass of each such methylene-bridge unit, which comprises a single carbon atom of mass 12.01 u and two hydrogen atoms of mass ~1.01 u each).
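A minimal sketch (using standard atomic masses, not taken from this article) showing how the CnH2n+2 formula produces the 14.03 u spacing of the series:

```python
# Molecular mass of the alkane CnH2n+2 from standard atomic masses.
C_MASS, H_MASS = 12.011, 1.008  # unified atomic mass units (u)

def alkane_mass(n: int) -> float:
    """Molecular mass of CnH2n+2 in u."""
    return n * C_MASS + (2 * n + 2) * H_MASS

for n in range(1, 5):
    print(f"C{n}H{2 * n + 2}: {alkane_mass(n):.2f} u")
# Consecutive members differ by one CH2 unit:
print(f"spacing: {alkane_mass(2) - alkane_mass(1):.2f} u")  # -> 14.03 u
```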
Methane is produced by methanogenic bacteria and some long-chain alkanes function as pheromones in certain animal species or as protective waxes in plants and fungi. Nevertheless, most alkanes do not have much biological activity. They can be viewed as molecular trees upon which can be hung the more active/reactive functional groups of biological molecules.
The alkanes have two main commercial sources: petroleum (crude oil) and natural gas.
An alkyl group is an alkane-based molecular fragment that bears one open valence for bonding. They are generally abbreviated with the symbol for any organyl group, R, although Alk is sometimes used to specifically symbolize an alkyl group (as opposed to an alkenyl group or aryl group).
Structure and classification
Ordinarily the C–C single bond distance is 1.54 Å (154 pm).
Saturated hydrocarbons can be linear, branched, or cyclic. The third group is sometimes called cycloalkanes. Very complicated structures are possible by combining linear, branched, and cyclic alkanes.
Isomerism
Alkanes with more than three carbon atoms can be arranged in various ways, forming structural isomers. The simplest isomer of an alkane is the one in which the carbon atoms are arranged in a single chain with no branches. This isomer is sometimes called the n-isomer (n for "normal", although it is not necessarily the most common). However, the chain of carbon atoms may also be branched at one or more points. The number of possible isomers increases rapidly with the number of carbon atoms; the counts below can be reproduced with the sketch that follows the list. For example, for acyclic alkanes:
C1: methane only
C2: ethane only
C3: propane only
C4: 2 isomers: butane and isobutane
C5: 3 isomers: pentane, isopentane, and neopentane
C6: 5 isomers: hexane, 2-methylpentane, 3-methylpentane, 2,2-dimethylbutane, and 2,3-dimethylbutane
C7: 9 isomers: heptane, 2-methylhexane, 3-methylhexane, 2,2-dimethylpentane, 2,3-dimethylpentane, 2,4-dimethylpentane, 3,3-dimethylpentane, 3-ethylpentane, 2,2,3-trimethylbutane
C8: 18 isomers: octane, 2-methylheptane, 3-methylheptane, 4-methylheptane, 2,2-dimethylhexane, 2,3-dimethylhexane, 2,4-dimethylhexane, 2,5-dimethylhexane, 3,3-dimethylhexane, 3,4-dimethylhexane, 3-ethylhexane, 2,2,3-trimethylpentane, 2,2,4-trimethylpentane, 2,3,3-trimethylpentane, 2,3,4-trimethylpentane, 3-ethyl-2-methylpentane, 3-ethyl-3-methylpentane, 2,2,3,3-tetramethylbutane
C9: 35 isomers
C10: 75 isomers
C12: 355 isomers
C32: 27,711,253,769 isomers
C60: 22,158,734,535,770,411,074,184 isomers, many of which are not stable
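These counts correspond to the number of unlabeled trees whose vertices (the carbon atoms) have degree at most four, and they can be reproduced with Pólya's cycle indices together with Otter's dissimilarity formula. The following is a sketch of one standard way to compute them, not the only possible algorithm; all function and variable names are ours:

```python
# Count constitutional isomers of the acyclic alkanes CnH2n+2: unlabeled
# trees with maximum vertex degree 4 (stereoisomers are not distinguished).
def alkane_isomers(nmax):
    # a[n] = number of alkyl groups with n carbons: rooted trees in which
    # every node has at most 3 children; a[0] = 1 stands for a hydrogen.
    a = [0] * (nmax + 1)
    a[0] = 1
    for n in range(1, nmax + 1):
        m = n - 1  # carbons shared among the root's three substituents
        # Cycle index of S3: (p1^3 + 3*p1*p2 + 2*p3) / 6
        s1 = sum(a[i] * a[j] * a[m - i - j]
                 for i in range(m + 1) for j in range(m - i + 1))
        s2 = sum(a[i] * a[(m - i) // 2]
                 for i in range(m + 1) if (m - i) % 2 == 0)
        s3 = a[m // 3] if m % 3 == 0 else 0
        a[n] = (s1 + 3 * s2 + 2 * s3) // 6

    counts = {}
    for n in range(1, nmax + 1):
        m = n - 1  # carbons around one distinguished carbon of degree <= 4
        # Cycle index of S4: (p1^4 + 6*p1^2*p2 + 3*p2^2 + 8*p1*p3 + 6*p4)/24
        t1 = sum(a[i] * a[j] * a[k] * a[m - i - j - k]
                 for i in range(m + 1) for j in range(m - i + 1)
                 for k in range(m - i - j + 1))
        t2 = sum(a[i] * a[j] * a[(m - i - j) // 2]
                 for i in range(m + 1) for j in range(m - i + 1)
                 if (m - i - j) % 2 == 0)
        t3 = (sum(a[i] * a[m // 2 - i] for i in range(m // 2 + 1))
              if m % 2 == 0 else 0)
        t4 = sum(a[i] * a[(m - i) // 3]
                 for i in range(m + 1) if (m - i) % 3 == 0)
        t5 = a[m // 4] if m % 4 == 0 else 0
        rooted = (t1 + 6 * t2 + 3 * t3 + 8 * t4 + 6 * t5) // 24
        # Otter's formula: subtract the edge-rooted trees (unordered pairs
        # of alkyl groups joined by a C-C bond) to count each free tree once.
        pairs = sum(a[i] * a[n - i] for i in range(1, n))
        sym = a[n // 2] if n % 2 == 0 else 0
        counts[n] = rooted - (pairs - sym) // 2
    return counts

print(alkane_isomers(10))
# -> {1: 1, 2: 1, 3: 1, 4: 2, 5: 3, 6: 5, 7: 9, 8: 18, 9: 35, 10: 75}
```

Running alkane_isomers(10) reproduces the sequence 1, 1, 1, 2, 3, 5, 9, 18, 35, 75 given above; the dynamic-programming approach extends to the larger counts in the list, although arbitrary-precision integers are needed well before C60.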
Branched alkanes can be chiral. For example, 3-methylhexane and its higher homologues are chiral due to their stereogenic center at carbon atom number 3. The above list only includes differences of connectivity, not stereochemistry. In addition to the alkane isomers, the chain of carbon atoms may form one or more rings. Such compounds are called cycloalkanes, and are also excluded from the above list because changing the number of rings changes the molecular formula. For example, cyclobutane and methylcyclopropane are isomers of each other (C4H8), but are not isomers of butane (C4H10).
Branched alkanes are more thermodynamically stable than their linear (or less branched) isomers. For example, the highly branched 2,2,3,3-tetramethylbutane is about 1.9 kcal/mol more stable than its linear isomer, n-octane.
Nomenclature
The IUPAC nomenclature (systematic way of naming compounds) for alkanes is based on identifying hydrocarbon chains. Unbranched, saturated hydrocarbon chains are named systematically with a Greek numerical prefix denoting the number of carbons and the suffix "-ane".
In 1866, August Wilhelm von Hofmann suggested systematizing nomenclature by using the whole sequence of vowels a, e, i, o and u to create suffixes -ane, -ene, -ine (or -yne), -one, -une, for the hydrocarbons CnH2n+2, CnH2n, CnH2n−2, CnH2n−4, CnH2n−6. In modern nomenclature, the first three specifically name hydrocarbons with single, double and triple bonds; while "-one" now represents a ketone.
Linear alkanes
Straight-chain alkanes are sometimes indicated by the prefix "n-" (for "normal") where a non-linear isomer exists. Although this prefix is not strictly necessary and is not part of the IUPAC naming system, the usage is still common in cases where one wishes to emphasize or distinguish between the straight-chain and branched-chain isomers, e.g., "n-butane" rather than simply "butane" to differentiate it from isobutane. Alternative names for this group used in the petroleum industry are linear paraffins or n-paraffins.
The first eight members of the series (in terms of number of carbon atoms) are named as follows:
methane CH4 – one carbon and four hydrogens
ethane C2H6 – two carbons and six hydrogens
propane C3H8 – three carbons and eight hydrogens
butane C4H10 – four carbons and ten hydrogens
pentane C5H12 – five carbons and twelve hydrogens
hexane C6H14 – six carbons and fourteen hydrogens
heptane C7H16 – seven carbons and sixteen hydrogens
octane C8H18 – eight carbons and eighteen hydrogens
The first four names were derived from methanol, ether, propionic acid and butyric acid. Alkanes with five or more carbon atoms are named by adding the suffix -ane to the appropriate numerical multiplier prefix with elision of any terminal vowel (-a or -o) from the basic numerical term. Hence, pentane, C5H12; hexane, C6H14; heptane, C7H16; octane, C8H18; etc. The numeral prefix is generally Greek; however, alkanes with a carbon atom count ending in nine, for example nonane, use the Latin prefix non-.
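A small sketch pairing the systematic names with their molecular formulas (the names themselves are standard IUPAC; the helper function and its name are ours):

```python
# First twelve straight-chain alkane names with their formulas CnH2n+2.
NAMES = ["methane", "ethane", "propane", "butane", "pentane", "hexane",
         "heptane", "octane", "nonane", "decane", "undecane", "dodecane"]

def linear_alkane(n: int) -> str:
    """Name and formula of the unbranched alkane with n carbon atoms."""
    return f"{NAMES[n - 1]} (C{n}H{2 * n + 2})"

print(linear_alkane(7))  # -> heptane (C7H16)
print(linear_alkane(9))  # -> nonane (C9H20), with the Latin prefix "non-"
```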
Branched alkanes
Simple branched alkanes often have a common name using a prefix to distinguish them from linear alkanes, for example n-pentane, isopentane, and neopentane.
IUPAC naming conventions can be used to produce a systematic name.
The key steps in the naming of more complicated branched alkanes are as follows:
Identify the longest continuous chain of carbon atoms
Name this longest root chain using standard naming rules
Name each side chain by changing the suffix of the name of the alkane from "-ane" to "-yl"
Number the longest continuous chain in order to give the lowest possible numbers for the side-chains
Number and name the side chains before the name of the root chain
If there are multiple side chains of the same type, use prefixes such as "di-" and "tri-" to indicate it as such, and number each one.
Add side chain names in alphabetical (disregarding "di-" etc. prefixes) order in front of the name of the root chain
Saturated cyclic hydrocarbons
Though technically distinct from the alkanes, this class of hydrocarbons is referred to by some as the "cyclic alkanes." As their description implies, they contain one or more rings.
Simple cycloalkanes have a prefix "cyclo-" to distinguish them from alkanes. Cycloalkanes are named as per their acyclic counterparts with respect to the number of carbon atoms in their backbones, e.g., cyclopentane (C5H10) is a cycloalkane with 5 carbon atoms just like pentane (C5H12), but they are joined up in a five-membered ring. In a similar manner, propane and cyclopropane, butane and cyclobutane, etc.
Substituted cycloalkanes are named similarly to substituted alkanes – the cycloalkane ring is stated, and the substituents are according to their position on the ring, with the numbering decided by the Cahn–Ingold–Prelog priority rules.
Trivial/common names
The trivial (non-systematic) name for alkanes is 'paraffins'. Together, alkanes are known as the 'paraffin series'. Trivial names for compounds are usually historical artifacts. They were coined before the development of systematic names, and have been retained due to familiar usage in industry. Cycloalkanes are also called naphthenes.
Branched-chain alkanes are called isoparaffins. "Paraffin" is a general term and often does not distinguish between pure compounds and mixtures of isomers, i.e., compounds of the same chemical formula, e.g., pentane and isopentane.
In IUPAC
The following trivial names are retained in the IUPAC system:
isobutane for 2-methylpropane
isopentane for 2-methylbutane
neopentane for 2,2-dimethylpropane.
Non-IUPAC
Some non-IUPAC trivial names are occasionally used:
cetane, for hexadecane
cerane, for hexacosane
Physical properties
All alkanes are colorless. Alkanes with the lowest molecular weights are gases, those of intermediate molecular weight are liquids, and the heaviest are waxy solids.
Table of alkanes
Boiling point
Alkanes experience intermolecular van der Waals forces. The cumulative effect of these intermolecular forces determines the boiling points of alkanes: stronger forces mean higher boiling points.
Two factors influence the strength of the van der Waals forces:
the number of electrons surrounding the molecule, which increases with the alkane's molecular weight
the surface area of the molecule
Under standard conditions, alkanes from CH4 to C4H10 are gaseous; from C5H12 to C17H36 they are liquids; and from C18H38 onwards they are solids. As the boiling point of alkanes is primarily determined by molecular weight, it is not a surprise that the boiling point has an almost linear relationship with the size (molecular weight) of the molecule. As a rule of thumb, the boiling point rises 20–30 °C for each carbon added to the chain; this rule applies to other homologous series.
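To illustrate the rule of thumb, the per-carbon increments can be read off a few normal boiling points; the values below are approximate standard reference figures, not taken from this article:

```python
# Normal boiling points of small n-alkanes (approximate textbook values,
# degrees Celsius), showing the roughly 20-30 degree rise per carbon.
bp = {4: 0, 5: 36, 6: 69, 7: 98, 8: 126}  # n-butane ... n-octane

for n in range(5, 9):
    print(f"C{n - 1} -> C{n}: +{bp[n] - bp[n - 1]} degrees C")
# -> +36, +33, +29, +28: the increment shrinks slowly along the series.
```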
A straight-chain alkane will have a boiling point higher than a branched-chain alkane due to the greater surface area in contact, and thus greater van der Waals forces, between adjacent molecules. For example, compare isobutane (2-methylpropane) and n-butane (butane), which boil at −12 and 0 °C, and 2,2-dimethylbutane and 2,3-dimethylbutane which boil at 50 and 58 °C, respectively.
On the other hand, cycloalkanes tend to have higher boiling points than their linear counterparts due to the locked conformations of the molecules, which give a plane of intermolecular contact.
Melting points
The melting points of the alkanes follow a similar trend to boiling points for the same reason as outlined above. That is, (all other things being equal) the larger the molecule the higher the melting point. However, alkanes' melting points follow a more complex pattern, due to variations in the properties of their solid crystals.
One difference in crystal structure is that even-numbered alkanes (from hexane onwards) tend to form denser-packed crystals than their odd-numbered neighbors. This gives them a greater enthalpy of fusion (the amount of energy required to melt them), raising their melting point. A second difference in crystal structure is that even-numbered alkanes (from octane onwards) tend to form more rotationally ordered crystals than their odd-numbered neighbors. This gives them a greater entropy of fusion (the increase in disorder from the solid to the liquid state), lowering their melting point.
While these effects operate in opposing directions, the first effect tends to be slightly stronger, leading even-numbered alkanes to have slightly higher melting points than the average of their odd-numbered neighbors.
This trend does not apply to methane, which has an unusually high melting point, higher than both ethane and propane. This is because it has a very low entropy of fusion, attributable to its high molecular symmetry and the rotational disorder in solid methane near its melting point (Methane I).
The melting points of branched-chain alkanes can be either higher or lower than those of the corresponding straight-chain alkanes, again depending on these two factors. More symmetric alkanes tend towards higher melting points, due to enthalpic effects when they form ordered crystals, and entropic effects when they form disordered crystals (e.g. neopentane).
Conductivity and solubility
Alkanes do not conduct electricity in any way, nor are they substantially polarized by an electric field. For this reason, they do not form hydrogen bonds and are insoluble in polar solvents such as water. Since the hydrogen bonds between individual water molecules are aligned away from an alkane molecule, the coexistence of an alkane and water leads to an increase in molecular order (a reduction in entropy). As there is no significant bonding between water molecules and alkane molecules, the second law of thermodynamics suggests that this reduction in entropy should be minimized by minimizing the contact between alkane and water: Alkanes are said to be hydrophobic as they are insoluble in water.
Their solubility in nonpolar solvents is relatively high, a property that is called lipophilicity. Alkanes are, for example, miscible in all proportions among themselves.
The density of the alkanes usually increases with the number of carbon atoms but remains less than that of water. Hence, alkanes form the upper layer in an alkane–water mixture.
Molecular geometry
The molecular structure of the alkanes directly affects their physical and chemical characteristics. It is derived from the electron configuration of carbon, which has four valence electrons. The carbon atoms in alkanes are described as sp3 hybrids; that is to say that, to a good approximation, the valence electrons are in orbitals directed towards the corners of a tetrahedron, which are derived from the combination of the 2s orbital and the three 2p orbitals. Geometrically, the angle between the bonds is cos−1(−1/3) ≈ 109.47°. This is exact for the case of methane, while larger alkanes containing a combination of C–H and C–C bonds generally have bonds that are within several degrees of this idealized value.
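The quoted angle is simply arccos(−1/3), which can be verified with a one-line Python check:

import math
print(math.degrees(math.acos(-1.0 / 3.0)))  # 109.47122... degrees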
Bond lengths and bond angles
An alkane has only C–H and C–C single bonds. The former result from the overlap of an sp3 orbital of carbon with the 1s orbital of a hydrogen; the latter by the overlap of two sp3 orbitals on adjacent carbon atoms. The bond lengths amount to 1.09 × 10−10 m for a C–H bond and 1.54 × 10−10 m for a C–C bond.
The spatial arrangement of the bonds is similar to that of the four sp3 orbitals—they are tetrahedrally arranged, with an angle of 109.47° between them. Structural formulae that represent the bonds as being at right angles to one another, while both common and useful, do not accurately depict the geometry.
Conformation
The spatial arrangement of the C–C and C–H bonds, as described by the torsion angles of the molecule, is known as its conformation. In ethane, the simplest case for studying the conformation of alkanes, there is nearly free rotation about the carbon–carbon single bond. Two limiting conformations are important: the eclipsed conformation and the staggered conformation. The staggered conformation is 12.6 kJ/mol (3.0 kcal/mol) lower in energy (more stable) than the eclipsed conformation (the least stable). In highly branched alkanes, the bond angle may differ from the optimal value (109.5°) to accommodate bulky groups. Such distortions introduce a tension in the molecule, known as steric hindrance or strain. Strain substantially increases reactivity.
Spectroscopic properties
Spectroscopic signatures for alkanes are obtainable by the major characterization techniques.
Infrared spectroscopy
The C–H stretching mode gives strong absorptions between 2850 and 2960 cm−1, while the weaker C–C stretching mode absorbs between 800 and 1300 cm−1. The carbon–hydrogen bending modes depend on the nature of the group: methyl groups show bands at 1450 cm−1 and 1375 cm−1, while methylene groups show bands at 1465 cm−1 and 1450 cm−1. Carbon chains with more than four carbon atoms show a weak absorption at around 725 cm−1.
NMR spectroscopy
The proton resonances of alkanes are usually found at δH = 0.5–1.5. The carbon-13 resonances depend on the number of hydrogen atoms attached to the carbon: δC = 8–30 (primary, methyl, –CH3), 15–55 (secondary, methylene, –CH2–), 20–60 (tertiary, methine, C–H), and quaternary carbons. The carbon-13 resonance of quaternary carbon atoms is characteristically weak, due to the lack of nuclear Overhauser effect and the long relaxation time, and can be missed in weak samples, or in samples that have not been run for a sufficiently long time.
Mass spectrometry
Since alkanes have high ionization energies, their electron impact mass spectra show weak currents for their molecular ions. The fragmentation pattern can be difficult to interpret, but in the case of branched-chain alkanes, the carbon chain is preferentially cleaved at tertiary or quaternary carbons due to the relative stability of the resulting free radicals. The mass spectra of straight-chain alkanes are illustrated by that of dodecane: the fragment resulting from the loss of a single methyl group (M − 15) is absent; other fragments are more intense than the molecular ion and are spaced by intervals of 14 mass units, corresponding to the sequential loss of CH2 groups.
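The 14-mass-unit spacing corresponds to the CnH2n+1+ alkyl-cation fragment series. The following Python sketch, with an invented function name and nominal (integer) atomic masses, is illustrative only.

C, H = 12, 1  # nominal atomic masses

def alkyl_cation_mz(n):
    return n * C + (2 * n + 1) * H  # m/z of the CnH(2n+1)+ fragment

print([alkyl_cation_mz(n) for n in range(2, 11)])
# [29, 43, 57, 71, 85, 99, 113, 127, 141] -- constant spacing of 14
# (dodecane itself, C12H26, has a nominal mass of 170)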
Chemical properties
Alkanes are only weakly reactive with most chemical compounds. They react only with the strongest of electrophilic reagents, by virtue of their strong C–H bonds (~100 kcal/mol) and C–C bonds (~90 kcal/mol). They are also relatively unreactive toward free radicals. This inertness is the source of the term paraffins (with the meaning here of "lacking affinity"). In crude oil the alkane molecules have remained chemically unchanged for millions of years.
Acid-base behavior
The acid dissociation constant (pKa) values of all alkanes are estimated to range from 50 to 70, depending on the extrapolation method, hence they are extremely weak acids that are practically inert to bases (see: carbon acids). They are also extremely weak bases, undergoing no observable protonation in pure sulfuric acid (H0 ~ −12), although superacids that are at least millions of times stronger have been known to protonate them to give hypercoordinate alkanium ions (see: methanium ion). Thus, a mixture of antimony pentafluoride (SbF5) and fluorosulfonic acid (HSO3F), called magic acid, can protonate alkanes.
Reactions with oxygen (combustion reaction)
All alkanes react with oxygen in a combustion reaction, although they become increasingly difficult to ignite as the number of carbon atoms increases. The general equation for complete combustion is:
CnH2n+2 + (n + (n+1)/2) O2 → (n + 1) H2O + n CO2
or CnH2n+2 + ((3n+1)/2) O2 → (n + 1) H2O + n CO2
In the absence of sufficient oxygen, carbon monoxide or even soot can be formed, as shown below:
CnH2n+2 + (n + 1/2) O2 → (n + 1) H2O + n CO
CnH2n+2 + ((n + 1)/2) O2 → (n + 1) H2O + n C
For example, methane:
2 CH4 + 3 O2 → 4 H2O + 2 CO
CH4 + O2 → 2 H2O + C
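These coefficients follow from balancing hydrogen first and then oxygen. A small Python sketch, with an invented function name, reproduces them for any n using exact fractions:

from fractions import Fraction

def combustion_coefficients(n, carbon_product="CO2"):
    """Return (O2, H2O, carbon-product) coefficients for one CnH2n+2."""
    h2o = n + 1                             # all hydrogen ends up as water
    o_per_c = {"CO2": 2, "CO": 1, "C": 0}[carbon_product]
    o2 = Fraction(h2o + n * o_per_c, 2)     # balance oxygen atoms
    return o2, h2o, n

for product in ("CO2", "CO", "C"):
    o2, h2o, c = combustion_coefficients(1, product)  # methane, n = 1
    print("CH4 + %s O2 -> %s H2O + %s %s" % (o2, h2o, c, product))
# CH4 + 2 O2 -> 2 H2O + 1 CO2
# CH4 + 3/2 O2 -> 2 H2O + 1 CO
# CH4 + 1 O2 -> 2 H2O + 1 C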
See the alkane heat of formation table for detailed data.
The standard enthalpy change of combustion, ΔcH⊖, for alkanes increases by about 650 kJ/mol per CH2 group. Branched-chain alkanes have lower values of ΔcH⊖ than straight-chain alkanes of the same number of carbon atoms, and so can be seen to be somewhat more stable.
Biodegradation
Some organisms are capable of metabolizing alkanes. The methane monooxygenases convert methane to methanol. For higher alkanes, cytochrome P450 enzymes convert alkanes to alcohols, which are then susceptible to further degradation.
Free radical reactions
Free radicals, molecules with unpaired electrons, play a large role in most reactions of alkanes. Free radical halogenation reactions occur with halogens, leading to the production of haloalkanes. The hydrogen atoms of the alkane are progressively replaced by halogen atoms. The reaction of alkanes and fluorine is highly exothermic and can lead to an explosion. These reactions are an important industrial route to halogenated hydrocarbons. There are three steps:
Initiation: halogen radicals form by homolysis. Usually, energy in the form of heat or light is required.
Chain reaction or propagation then takes place: the halogen radical abstracts a hydrogen atom from the alkane to give an alkyl radical, which reacts further.
Chain termination, in which the radicals recombine.
Experiments have shown that all halogenation produces a mixture of all possible isomers, indicating that all hydrogen atoms are susceptible to reaction. The mixture produced, however, is not statistical: secondary and tertiary hydrogen atoms are preferentially replaced due to the greater stability of secondary and tertiary free radicals. An example can be seen in the monobromination of propane:
In the Reed reaction, sulfur dioxide and chlorine convert hydrocarbons to sulfonyl chlorides under the influence of light.
Under some conditions, alkanes will undergo nitration.
C-H activation
Certain transition metal complexes promote non-radical reactions with alkanes, in so-called C–H bond activation reactions.
Cracking
Cracking breaks larger molecules into smaller ones. This reaction requires heat and catalysts. The thermal cracking process follows a homolytic mechanism with formation of free radicals. The catalytic cracking process involves the presence of acid catalysts (usually solid acids such as silica-alumina and zeolites), which promote a heterolytic (asymmetric) breakage of bonds yielding pairs of ions of opposite charges, usually a carbocation and a hydride anion. Carbon-localized free radicals and cations are both highly unstable and undergo processes of chain rearrangement, C–C scission in position beta (i.e., cracking) and intra- and intermolecular hydrogen transfer or hydride transfer. In both types of processes, the corresponding reactive intermediates (radicals, ions) are permanently regenerated, and thus they proceed by a self-propagating chain mechanism. The chain of reactions is eventually terminated by radical or ion recombination.
Isomerization and reformation
Dragan and his colleague were the first to report isomerization in alkanes. Isomerization and reformation are processes in which straight-chain alkanes are heated in the presence of a platinum catalyst. In isomerization, the alkanes become branched-chain isomers; the molecule loses no carbon or hydrogen atoms, keeping the same molecular weight. In reformation, the alkanes become cycloalkanes or aromatic hydrocarbons, giving off hydrogen as a by-product. Both of these processes raise the octane number of the substance. Butane is the most common alkane that is put under the process of isomerization, as it makes many branched alkanes with high octane numbers.
Other reactions
In steam reforming, alkanes react with steam in the presence of a nickel catalyst to give hydrogen and carbon monoxide.
Occurrence
Occurrence of alkanes in the Universe
Alkanes form a small portion of the atmospheres of the outer gas planets such as Jupiter (0.1% methane, 2 ppm ethane), Saturn (0.2% methane, 5 ppm ethane), Uranus (1.99% methane, 2.5 ppm ethane) and Neptune (1.5% methane, 1.5 ppm ethane). Titan (1.6% methane), a satellite of Saturn, was examined by the Huygens probe, which indicated that Titan's atmosphere periodically rains liquid methane onto the moon's surface. Also on Titan, the Cassini mission has imaged seasonal methane/ethane lakes near its polar regions. Methane and ethane have also been detected in the tail of the comet Hyakutake. Chemical analysis showed that the abundances of ethane and methane were roughly equal, which is thought to imply that its ices formed in interstellar space, away from the Sun, whose heat would otherwise have evaporated these volatile molecules. Alkanes have also been detected in meteorites such as carbonaceous chondrites.
Occurrence of alkanes on Earth
Traces of methane gas (about 0.0002% or 1745 ppb) occur in the Earth's atmosphere, produced primarily by methanogenic microorganisms, such as Archaea in the gut of ruminants.
The most important commercial sources for alkanes are natural gas and oil. Natural gas contains primarily methane and ethane, with some propane and butane; oil is a mixture of liquid alkanes and other hydrocarbons. These hydrocarbons were formed when marine animals and plants (zooplankton and phytoplankton) died and sank to the bottom of ancient seas, were covered with sediments in an anoxic environment, and were converted over many millions of years at high temperatures and high pressures to their current form. Natural gas resulted thereby, for example, from the following reaction:
C6H12O6 → 3 CH4 + 3 CO2
These hydrocarbon deposits, collected in porous rocks trapped beneath impermeable cap rocks, comprise commercial oil fields. They have formed over millions of years and once exhausted cannot be readily replaced. The depletion of these hydrocarbon reserves is the basis for what is known as the energy crisis.
Alkanes have a low solubility in water, so the content in the oceans is negligible; however, at high pressures and low temperatures (such as at the bottom of the oceans), methane can co-crystallize with water to form a solid methane clathrate (methane hydrate). Although this cannot be commercially exploited at the present time, the amount of combustible energy of the known methane clathrate fields exceeds the energy content of all the natural gas and oil deposits put together. Methane extracted from methane clathrate is, therefore, a candidate for future fuels.
Biological occurrence
Aside from petroleum and natural gas, alkanes occur significantly in nature only as methane, which is produced by some archaea by the process of methanogenesis. These organisms are found in the gut of termites and cows. The methane is produced from carbon dioxide or other organic compounds. Energy is released by the oxidation of hydrogen:
CO2 + 4 H2 → CH4 + 2 H2O
It is probable that our current deposits of natural gas were formed in a similar way.
Certain types of bacteria can metabolize alkanes: they prefer even-numbered carbon chains as they are easier to degrade than odd-numbered chains.
Alkanes play a negligible role in higher organisms, with rare exception.
Some yeasts, e.g., Candida tropicalis, Pichia sp., Rhodotorula sp., can use alkanes as a source of carbon or energy. The fungus Amorphotheca resinae prefers the longer-chain alkanes in aviation fuel, and can cause serious problems for aircraft in tropical regions.
In plants, the solid long-chain alkanes are found in the plant cuticle and epicuticular wax of many species, but are only rarely major constituents. They protect the plant against water loss, prevent the leaching of important minerals by the rain, and protect against bacteria, fungi, and harmful insects. The carbon chains in plant alkanes are usually odd-numbered, between 27 and 33 carbon atoms in length, and are made by the plants by decarboxylation of even-numbered fatty acids. The exact composition of the layer of wax is not only species-dependent but also changes with the season and such environmental factors as lighting conditions, temperature or humidity.
The Jeffrey pine is noted for producing exceptionally high levels of n-heptane in its resin, for which reason its distillate was designated as the zero point for one octane rating. Floral scents have also long been known to contain volatile alkane components, and n-nonane is a significant component in the scent of some roses. Emission of gaseous and volatile alkanes such as ethane, pentane, and hexane by plants has also been documented at low levels, though they are not generally considered to be a major component of biogenic air pollution.
Edible vegetable oils also typically contain small fractions of biogenic alkanes with a wide spectrum of carbon numbers, mainly 8 to 35, usually peaking in the low to upper 20s, with concentrations up to dozens of milligrams per kilogram (parts per million by weight) and sometimes over a hundred for the total alkane fraction.
Alkanes are found in animal products, although they are less important than unsaturated hydrocarbons. One example is shark liver oil, which is approximately 14% pristane (2,6,10,14-tetramethylpentadecane, C19H40). They are important as pheromones, chemical messenger materials, on which insects depend for communication. In some species, e.g. the support beetle Xylotrechus colonus, pentacosane (C25H52), 3-methylpentacosane (C26H54) and 9-methylpentacosane (C26H54) are transferred by body contact. With others like the tsetse fly Glossina morsitans morsitans, the pheromone contains the four alkanes 2-methylheptadecane (C18H38), 17,21-dimethylheptatriacontane (C39H80), 15,19-dimethylheptatriacontane (C39H80) and 15,19,23-trimethylheptatriacontane (C40H82), and acts by smell over longer distances. Waggle-dancing honey bees produce and release two alkanes, tricosane and pentacosane.
Ecological relations
One example, in which both plant and animal alkanes play a role, is the ecological relationship between the sand bee (Andrena nigroaenea) and the early spider orchid (Ophrys sphegodes); the latter is dependent for pollination on the former. Sand bees use pheromones in order to identify a mate; in the case of A. nigroaenea, the females emit a mixture of tricosane (C23H48), pentacosane (C25H52) and heptacosane (C27H56) in the ratio 3:3:1, and males are attracted by specifically this odor. The orchid takes advantage of this mating arrangement to get the male bee to collect and disseminate its pollen; parts of its flower not only resemble the appearance of sand bees but also produce large quantities of the three alkanes in the same ratio as female sand bees. As a result, numerous males are lured to the blooms and attempt to copulate with their imaginary partner; although this endeavor is not crowned with success for the bee, it allows the orchid to transfer its pollen, which will be dispersed after the departure of the frustrated male to other blooms.
Production
Petroleum refining
The most important sources of alkanes are natural gas and crude oil. Alkanes are separated in an oil refinery by fractional distillation. Unsaturated hydrocarbons are converted to alkanes by hydrogenation:
RCH=CH2 + H2 → RCH2CH3 (R = alkyl)
Another route to alkanes is hydrogenolysis, which entails cleavage of C–heteroatom bonds using hydrogen. In industry, the main substrates are organonitrogen and organosulfur impurities, i.e. the heteroatoms are N and S. The specific processes are called hydrodenitrification and hydrodesulfurization.
Hydrogenolysis can be applied to the conversion of virtually any functional group into hydrocarbons. Substrates include haloalkanes, alcohols, aldehydes, ketones, carboxylic acids, etc. Both hydrogenolysis and hydrogenation are practiced in refineries. These conversions can also be effected by using lithium aluminium hydride, the Clemmensen reduction, and other specialized routes.
Coal
Coal is a more traditional precursor to alkanes. A wide range of coal-conversion technologies have been intensively practiced for centuries. Simply heating coal gives alkanes, leaving behind coke. Relevant technologies include the Bergius process and coal liquefaction. Partial combustion of coal and related solid organic compounds generates carbon monoxide, which can be hydrogenated using the Fischer–Tropsch process. This technology allows the synthesis of liquid hydrocarbons, including alkanes, and is used to produce substitutes for petroleum distillates.
Laboratory preparation
Rarely is there any interest in the synthesis of alkanes, since they are usually commercially available and less valued than virtually any precursor. The best-known method is hydrogenation of alkenes. Many C–X bonds can be converted to C–H bonds using lithium aluminium hydride, the Clemmensen reduction, and other specialized routes. Hydrolysis of alkyl Grignard reagents and alkyllithium compounds gives alkanes.
Applications
Fuels
The dominant use of alkanes is as fuels. Propane and butane, easily liquefied gases, are commonly known as liquefied petroleum gas (LPG). From pentane to octane the alkanes are highly volatile liquids. They are used as fuels in internal combustion engines, as they vaporize easily on entry into the combustion chamber without forming droplets, which would impair the uniformity of the combustion. Branched-chain alkanes are preferred as they are much less prone to premature ignition, which causes knocking, than their straight-chain homologues. This propensity to premature ignition is measured by the octane rating of the fuel, where 2,2,4-trimethylpentane (isooctane) has an arbitrary value of 100, and heptane has a value of zero. Apart from their use as fuels, the middle alkanes are also good solvents for nonpolar substances. Alkanes from nonane to, for instance, hexadecane (an alkane with sixteen carbon atoms) are liquids of higher viscosity, less and less suitable for use in gasoline. They form instead the major part of diesel and aviation fuel. Diesel fuels are characterized by their cetane number, cetane being an old name for hexadecane. However, the higher melting points of these alkanes can cause problems at low temperatures and in polar regions, where the fuel becomes too thick to flow correctly.
Precursors to chemicals
By the process of cracking, alkanes can be converted to alkenes. Simple alkenes are precursors to polymers, such as polyethylene and polypropylene. When the cracking is taken to extremes, alkanes can be converted to carbon black, which is a significant tire component.
Chlorination of methane gives the chloromethanes, which are used as solvents and as building blocks for complex compounds. Similarly, treatment of methane with sulfur gives carbon disulfide. Still other chemicals are prepared by reaction with sulfur trioxide and nitric oxide.
Other
Some light hydrocarbons are used as aerosol sprays.
Alkanes from hexadecane upwards form the most important components of fuel oil and lubricating oil. In the latter function, they work at the same time as anti-corrosive agents, as their hydrophobic nature means that water cannot reach the metal surface. Many solid alkanes find use as paraffin wax, for example, in candles. This should not be confused however with true wax, which consists primarily of esters.
Alkanes with a chain length of approximately 35 or more carbon atoms are found in bitumen, used, for example, in road surfacing. However, the higher alkanes have little value and are usually split into lower alkanes by cracking.
Hazards
Alkanes are highly flammable, but they have low toxicities. Methane "is toxicologically virtually inert." Alkanes can act as asphyxiants and narcotics.
See also
Alkene
Alkyne
Cycloalkane
Higher alkanes
Aliphatic compound
Notes
References
Further reading
Virtual Textbook of Organic Chemistry
Visualizations of the low-temperature crystal structures of alkanes (methane to nonane)
Hydrocarbons | Alkane | [
"Chemistry"
] | 8,738 | [
"Organic compounds",
"Hydrocarbons",
"Alkanes"
] |
655 | https://en.wikipedia.org/wiki/Abacus | An abacus (plural: abaci or abacuses), also called a counting frame, is a hand-operated calculating tool which was used from ancient times in the ancient Near East, Europe, China, and Russia, until the adoption of the Hindu–Arabic numeral system. An abacus consists of a two-dimensional array of slidable beads (or similar objects). In their earliest designs, the beads could be loose on a flat surface or sliding in grooves. Later the beads were made to slide on rods and built into a frame, allowing faster manipulation.
Each rod typically represents one digit of a multi-digit number laid out using a positional numeral system such as base ten (though some cultures used different numerical bases). Roman and East Asian abacuses use a system resembling bi-quinary coded decimal, with a top deck (containing one or two beads) representing fives and a bottom deck (containing four or five beads) representing ones. Natural numbers are normally used, but some allow simple fractional components (e.g., the "ounce" fractions of the Roman abacus), and a decimal point can be imagined for fixed-point arithmetic.
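The bi-quinary split of each decimal digit into fives and ones can be sketched in a few lines of Python; the function below is illustrative and ignores the spare beads of particular designs.

def biquinary(number):
    """Per decimal digit, the (top-deck, bottom-deck) active bead counts."""
    return [(int(d) // 5, int(d) % 5) for d in str(number)]

print(biquinary(1987))  # [(0, 1), (1, 4), (1, 3), (1, 2)]

For example, the digit 7 is set as one five-bead on the top deck plus two one-beads on the bottom deck.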
Any particular abacus design supports multiple methods to perform calculations, including addition, subtraction, multiplication, division, and square and cube roots. The beads are first arranged to represent a number, then are manipulated to perform a mathematical operation with another number, and their final position can be read as the result (or can be used as the starting number for subsequent operations).
In the ancient world, abacuses were a practical calculating tool. Although calculators and computers are commonly used today instead of abacuses, abacuses remain in everyday use in some countries. The abacus has an advantage of not requiring a writing implement and paper (needed for algorism) or an electric power source. Merchants, traders, and clerks in some parts of Eastern Europe, Russia, China, and Africa use abacuses. The abacus remains in common use as a scoring system in non-electronic table games. Others may use an abacus due to visual impairment that prevents the use of a calculator. The abacus is still used to teach the fundamentals of mathematics to children in many countries such as Japan and China.
Etymology
The word abacus dates to at least 1387 AD, when a Middle English work borrowed the word from Latin that described a sandboard abacus. The Latin word is derived from an ancient Greek word meaning something without a base, and colloquially, any piece of rectangular material. Alternatively, without reference to ancient texts on etymology, it has been suggested that it means "a square tablet strewn with dust", or "drawing-board covered with dust (for the use of mathematics)" (the exact shape of the Latin perhaps reflects the genitive form of the Greek word). While the table strewn with dust definition is popular, some argue evidence is insufficient for that conclusion. Greek probably borrowed from a Northwest Semitic language like Phoenician, evidenced by a cognate with the Hebrew word ʾābāq, or "dust" (in the post-Biblical sense "sand used as a writing surface").
Both abacuses and abaci are used as plurals. The user of an abacus is called an abacist.
History
Mesopotamia
The Sumerian abacus appeared between 2700 and 2300 BC. It held a table of successive columns which delimited the successive orders of magnitude of their sexagesimal (base 60) number system.
Some scholars point to a character in Babylonian cuneiform that may have been derived from a representation of the abacus. It is the belief of Old Babylonian scholars, such as Ettore Carruccio, that Old Babylonians "seem to have used the abacus for the operations of addition and subtraction; however, this primitive device proved difficult to use for more complex calculations".
Egypt
Greek historian Herodotus mentioned the abacus in Ancient Egypt. He wrote that the Egyptians manipulated the pebbles from right to left, opposite in direction to the Greek left-to-right method. Archaeologists have found ancient disks of various sizes that are thought to have been used as counters. However, wall depictions of this instrument are yet to be discovered.
Persia
At around 600 BC, Persians first began to use the abacus, during the Achaemenid Empire. Under the Parthian, Sassanian, and Iranian empires, scholars concentrated on exchanging knowledge and inventions with the countries around them – India, China, and the Roman Empire – which is how the abacus may have been exported to other countries.
Greece
The earliest archaeological evidence for the use of the Greek abacus dates to the 5th century BC. Demosthenes (384–322 BC) complained that the need to use pebbles for calculations was too difficult. A play by Alexis from the 4th century BC mentions an abacus and pebbles for accounting, and both Diogenes and Polybius use the abacus as a metaphor for human behavior, stating "that men that sometimes stood for more and sometimes for less" like the pebbles on an abacus. The Greek abacus was a table of wood or marble, pre-set with small counters in wood or metal for mathematical calculations. This Greek abacus was used in Achaemenid Persia, the Etruscan civilization, Ancient Rome, and the Western Christian world until the French Revolution.
The Salamis Tablet, found on the Greek island Salamis in 1846 AD, dates to 300 BC, making it the oldest counting board discovered so far. It is a slab of white marble on which are 5 groups of markings. In the tablet's center is a set of 5 parallel lines equally divided by a vertical line, capped with a semicircle at the intersection of the bottom-most horizontal line and the single vertical line. Below these lines is a wide space with a horizontal crack dividing it. Below this crack is another group of eleven parallel lines, again divided into two sections by a line perpendicular to them, but with the semicircle at the top of the intersection; the third, sixth and ninth of these lines are marked with a cross where they intersect with the vertical line. Also from this time frame, the Darius Vase was unearthed in 1851. It was covered with pictures, including a "treasurer" holding a wax tablet in one hand while manipulating counters on a table with the other.
Rome
The normal method of calculation in ancient Rome, as in Greece, was by moving counters on a smooth table. Originally pebbles () were used. Marked lines indicated units, fives, tens, etc. as in the Roman numeral system.
Writing in the 1st century BC, Horace refers to the wax abacus, a board covered with a thin layer of black wax on which columns and figures were inscribed using a stylus.
One example of archaeological evidence of the Roman abacus, shown nearby in reconstruction, dates to the 1st century AD. It has eight long grooves containing up to five beads in each and eight shorter grooves having either one or no beads in each. The groove marked I indicates units, X tens, and so on up to millions. The beads in the shorter grooves denote fives (five units, five tens, etc.) resembling a bi-quinary coded decimal system related to the Roman numerals. The short grooves on the right may have been used for marking Roman "ounces" (i.e. fractions).
Medieval Europe
The Roman system of 'counter casting' was used widely in medieval Europe, and persisted in limited use into the nineteenth century. Wealthy abacists used decorative minted counters, called jetons.
Due to Pope Sylvester II's reintroduction of the abacus with modifications, it became widely used in Europe again during the 11th century. It used beads on wires, unlike the traditional Roman counting boards, which meant the abacus could be used much faster and was more easily moved.
China
The earliest known written documentation of the Chinese abacus dates to the 2nd century BC.
The Chinese abacus, also known as the suanpan (算盤/算盘, lit. "calculating tray"), comes in various lengths and widths, depending on the operator. It usually has more than seven rods. There are two beads on each rod in the upper deck and five beads each in the bottom one, to represent numbers in a bi-quinary coded decimal-like system. The beads are usually rounded and made of hardwood. The beads are counted by moving them up or down towards the beam; beads moved toward the beam are counted, while those moved away from it are not. One of the top beads is 5, while one of the bottom beads is 1. Each rod has a number under it, showing the place value. The suanpan can be reset to the starting position instantly by a quick movement along the horizontal axis to spin all the beads away from the horizontal beam at the center.
The prototype of the Chinese abacus appeared during the Han dynasty, and its beads were oval. In the Song dynasty and earlier, the 1:4 type or four-bead abacus was used, similar to the modern abacus (including in the shape of its beads) commonly known as the Japanese-style abacus.
In the early Ming dynasty, the abacus began to appear in a 1:5 ratio. The upper deck had one bead and the bottom had five beads. In the late Ming dynasty, the abacus styles appeared in a 2:5 ratio. The upper deck had two beads, and the bottom had five.
Various calculation techniques were devised for Suanpan enabling efficient calculations. Some schools teach students how to use it.
In the long scroll Along the River During the Qingming Festival painted by Zhang Zeduan during the Song dynasty (960–1297), a suanpan is clearly visible beside an account book and doctor's prescriptions on the counter of an apothecary's (Feibao).
The similarity of the Roman abacus to the Chinese one suggests that one could have inspired the other, given evidence of a trade relationship between the Roman Empire and China. However, no direct connection has been demonstrated, and the similarity of the abacuses may be coincidental, both ultimately arising from counting with five fingers per hand. Where the Roman model (like most modern Korean and Japanese) has 4 plus 1 bead per decimal place, the standard suanpan has 5 plus 2. Incidentally, this ancient Chinese calculation system 市用制 (Shì yòng zhì) allows use with a hexadecimal numeral system (or any base up to 18) which is used for traditional Chinese measures of weight [(jīn (斤) and liǎng (兩)]. (Instead of running on wires as in the Chinese, Korean, and Japanese models, the Roman model used grooves, presumably making arithmetic calculations much slower).
Another possible source of the suanpan is Chinese counting rods, which operated with a decimal system but lacked the concept of zero as a placeholder. The zero was probably introduced to the Chinese in the Tang dynasty (618–907) when travel in the Indian Ocean and the Middle East would have provided direct contact with India, allowing them to acquire the concept of zero and the decimal point from Indian merchants and mathematicians.
India
The Abhidharmakośabhāṣya of Vasubandhu (316–396), a Sanskrit work on Buddhist philosophy, says that the second-century CE philosopher Vasumitra said that "placing a wick (Sanskrit vartikā) on the number one (ekāṅka) means it is a one while placing the wick on the number hundred means it is called a hundred, and on the number one thousand means it is a thousand". It is unclear exactly what this arrangement may have been. Around the 5th century, Indian clerks were already finding new ways of recording the contents of the abacus. Hindu texts used the term śūnya (zero) to indicate the empty column on the abacus.
Japan
In Japan, the abacus is called soroban (lit. "counting tray"). It was imported from China in the 14th century. It was probably in use by the working class a century or more before the ruling class adopted it, as the class structure obstructed such changes. The 1:4 abacus, which removes the seldom-used second and fifth bead, became popular in the 1940s.
Today's Japanese abacus is a 1:4 type, four-bead abacus, introduced from China in the Muromachi era. It has one bead on the upper deck and four beads on the lower deck. The top bead is equal to five and each bottom bead to one, as in the Chinese or Korean abacus, so any decimal number can be expressed; hence the 1:4 design. The quotient method is generally used for division instead of the traditional division-table method, which keeps the digit manipulation for multiplication and division consistent. Later, Japan used a 3:5 abacus called 天三算盤, which is now in the Ize Rongji collection of Shansi Village in Yamagata City. Japan also used a 2:5 type abacus.
The four-bead abacus spread, and became common around the world. Improvements to the Japanese abacus arose in various places. In China, an abacus with an aluminium frame and plastic beads has been used. The file is next to the four beads, and pressing the "clearing" button puts the upper bead in the upper position, and the lower bead in the lower position.
The abacus is still manufactured in Japan, despite the proliferation, practicality, and affordability of pocket electronic calculators. The use of the soroban is still taught in Japanese primary schools as part of mathematics, primarily as an aid to faster mental calculation. Using visual imagery, one can complete a calculation as quickly as with a physical instrument.
Korea
The Chinese abacus migrated from China to Korea around 1400 AD. Koreans call it jupan (주판), supan (수판) or jusan (주산). The four-beads abacus (1:4) was introduced during the Goryeo Dynasty. The 5:1 abacus was introduced to Korea from China during the Ming Dynasty.
Native America
Some sources mention the use of an abacus called a nepohualtzintzin in ancient Aztec culture. This Mesoamerican abacus used a 5-digit base-20 system. The word Nepōhualtzintzin comes from Nahuatl, formed from the roots ne (personal), pōhual or pōhualli (the account), and tzintzin (small similar elements); its complete meaning was taken as "counting with small similar elements". Its use was taught from childhood in the Calmecac to the temalpouhqueh, students dedicated to taking the accounts of the skies.
The Nepōhualtzintzin was divided into two main parts separated by a bar or intermediate cord. In the left part were four beads; beads in the first row have unitary values (1, 2, 3, and 4). On the right side were three beads with values of 5, 10, and 15, respectively. To find the value of a bead in any upper row, it is enough to multiply the value of the corresponding bead in the first row by 20 for each row above it.
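Because each row multiplies the one below it by 20, reading a value off the device amounts to base-20 positional notation. The following Python sketch is illustrative only; the actual bead layout of the device is more elaborate.

def to_base20(n):
    """Return the base-20 digits of n, most significant first."""
    digits = []
    while n:
        digits.append(n % 20)
        n //= 20
    return digits[::-1] or [0]

print(to_base20(365))  # [18, 5], i.e. 18 * 20 + 5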
The device featured 13 rows with 7 beads each, 91 in total. This was a basic number for this culture, with a close relation to natural phenomena, the underworld, and the cycles of the heavens. One Nepōhualtzintzin (91) represented the number of days that a season of the year lasts, two Nepōhualtzintzin (182) the number of days of the corn's cycle from sowing to harvest, three Nepōhualtzintzin (273) the number of days of a baby's gestation, and four Nepōhualtzintzin (364) completed a cycle and approximated one year. When translated into modern computer arithmetic, the Nepōhualtzintzin could represent quantities from 10 up to 10 to the 18th power in floating point, which allowed large and small amounts to be calculated precisely, although rounding off was not provided for.
The rediscovery of the Nepōhualtzintzin was due to the Mexican engineer David Esparza Hidalgo, who in his travels throughout Mexico found diverse engravings and paintings of this instrument and reconstructed several of them in gold, jade, encrustations of shell, etc. Very old Nepōhualtzintzin are attributed to the Olmec culture, and some bracelets of Mayan origin, as well as a diversity of forms and materials in other cultures.
Sanchez wrote in Arithmetic in Maya that another base 5, base 4 abacus had been found in the Yucatán Peninsula that also computed calendar data. This was a finger abacus, on one hand, 0, 1, 2, 3, and 4 were used; and on the other hand 0, 1, 2, and 3 were used. Note the use of zero at the beginning and end of the two cycles.
The quipu of the Incas was a system of colored knotted cords used to record numerical data, like advanced tally sticks – but not used to perform calculations. Calculations were carried out using a yupana (Quechua for "counting tool"; see figure), which was still in use after the conquest of Peru. The working principle of a yupana is unknown, but in 2001 Italian mathematician De Pasquale proposed an explanation. By comparing the form of several yupanas, researchers found that calculations were based on the Fibonacci sequence 1, 1, 2, 3, 5 and powers of 10, 20, and 40 as place values for the different fields in the instrument. Using the Fibonacci sequence would keep the number of grains within any one field at a minimum.
Russia
The Russian abacus, the schoty, usually has a single slanted deck, with ten beads on each wire (except one wire with four beads for quarter-ruble fractions). This four-bead wire was introduced for quarter-kopeks, which were minted until 1916. The Russian abacus is used vertically, with each wire running horizontally. The wires are usually bowed upward in the center, to keep the beads pinned to either side. It is cleared when all the beads are moved to the right. During manipulation, beads are moved to the left. For easy viewing, the middle 2 beads on each wire (the 5th and 6th bead) usually are of a different color from the other eight. Likewise, the left bead of the thousands wire (and the million wire, if present) may have a different color.
The Russian abacus was in use in shops and markets throughout the former Soviet Union, and its usage was taught in most schools until the 1990s. Even the 1874 invention of the mechanical calculator, the Odhner arithmometer, did not replace it in Russia. According to Yakov Perelman, some businessmen attempting to import calculators into the Russian Empire were known to leave in despair after watching a skilled abacus operator. Likewise, the mass production of Felix arithmometers from 1924 onward did not significantly reduce abacus use in the Soviet Union. The Russian abacus began to lose popularity only after the mass production of domestic microcalculators in 1974.
The Russian abacus was brought to France around 1820 by mathematician Jean-Victor Poncelet, who had served in Napoleon's army and had been a prisoner of war in Russia. The abacus had fallen out of use in western Europe in the 16th century with the rise of decimal notation and algorismic methods. To Poncelet's French contemporaries, it was something new. Poncelet used it, not for any applied purpose, but as a teaching and demonstration aid. The Turks and the Armenian people used abacuses similar to the Russian schoty. It was named a coulba by the Turks and a choreb by the Armenians.
School abacus
Around the world, abacuses have been used in pre-schools and elementary schools as an aid in teaching the numeral system and arithmetic.
In Western countries, a bead frame similar to the Russian abacus but with straight wires and a vertical frame is common (see image).
The wireframe may be used either with positional notation like other abacuses (thus the 10-wire version may represent numbers up to 9,999,999,999), or each bead may represent one unit (e.g. 74 can be represented by shifting all beads on 7 wires and 4 beads on the 8th wire, so numbers up to 100 may be represented). In the bead frame shown, the gap between the 5th and 6th wire, corresponding to the color change between the 5th and the 6th bead on each wire, suggests the latter use. In teaching multiplication, e.g. 6 times 7 may be represented by shifting 7 beads on 6 wires.
The red-and-white abacus is used in contemporary primary schools for a wide range of number-related lessons. The twenty bead version, referred to by its Dutch name rekenrek ("calculating frame"), is often used, either on a string of beads or on a rigid framework.
Feynman vs the abacus
Physicist Richard Feynman was noted for facility in mathematical calculations. He wrote about an encounter in Brazil with a Japanese abacus expert, who challenged him to speed contests between Feynman's pen and paper, and the abacus. The abacus was much faster for addition, somewhat faster for multiplication, but Feynman was faster at division. When the abacus was used for more complex operations, i.e. cube roots, Feynman won easily. However, the number chosen at random was close to a number Feynman happened to know was an exact cube, allowing him to use approximate methods.
Neurological analysis
Learning how to calculate with the abacus may improve capacity for mental calculation. Abacus-based mental calculation (AMC), which was derived from the abacus, is the act of performing calculations, including addition, subtraction, multiplication, and division, in the mind by manipulating an imagined abacus. It is a high-level cognitive skill that runs calculations with an effective algorithm. People doing long-term AMC training show higher numerical memory capacity and experience more effectively connected neural pathways. They are able to retrieve memory to deal with complex processes. AMC involves both visuospatial and visuomotor processing that generate the visual abacus and move the imaginary beads. Since it only requires that the final position of beads be remembered, it takes less memory and less computation time.
Renaissance abacuses
Binary abacus
The binary abacus is used to explain how computers manipulate numbers. The abacus shows how numbers, letters, and signs can be stored in a binary system on a computer, or via ASCII. The device consists of a series of beads on parallel wires arranged in three separate rows. The beads represent a switch on the computer in either an "on" or "off" position.
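What the binary abacus demonstrates can be sketched directly in Python: a character's ASCII code determines a row of on/off bead positions. The mapping shown is an illustrative assumption, not a description of any particular device.

ch = "A"
bits = format(ord(ch), "08b")  # ASCII code as 8 binary digits
print(ch, "->", bits)                                # A -> 01000001
print(["on" if b == "1" else "off" for b in bits])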
Visually impaired users
An adapted abacus, invented by Tim Cranmer, and called a Cranmer abacus is commonly used by visually impaired users. A piece of soft fabric or rubber is placed behind the beads, keeping them in place while the users manipulate them. The device is then used to perform the mathematical functions of multiplication, division, addition, subtraction, square root, and cube root.
Although blind students have benefited from talking calculators, the abacus is often taught to these students in early grades. Blind students can also complete mathematical assignments using a braille-writer and Nemeth code (a type of braille code for mathematics) but large multiplication and long division problems are tedious. The abacus gives these students a tool to compute mathematical problems that equals the speed and mathematical knowledge required by their sighted peers using pencil and paper. Many blind people find this number machine a useful tool throughout life.
See also
Chinese Zhusuan
Chisanbop
Logical abacus
Napier's bones
Sand table
Slide rule
Notes
Footnotes
References
Further reading
External links
Tutorials
Min Multimedia
History
Curiosities
Abacus in Various Number Systems at cut-the-knot
Java applet of Chinese, Japanese and Russian abaci
An atomic-scale abacus
Examples of Abaci
Aztex Abacus
Indian Abacus
Abacus Course
Mathematical tools
Chinese mathematics
Egyptian mathematics
Greek mathematics
Indian mathematics
Japanese mathematics
Korean mathematics
Ancient Roman mathematics | Abacus | [
"Mathematics",
"Technology"
] | 5,121 | [
"Applied mathematics",
"Mathematical tools",
"History of computing",
"nan"
] |
656 | https://en.wikipedia.org/wiki/Acid | An acid is a molecule or ion capable of either donating a proton (i.e. hydrogen ion, H+), known as a Brønsted–Lowry acid, or forming a covalent bond with an electron pair, known as a Lewis acid.
The first category of acids are the proton donors, or Brønsted–Lowry acids. In the special case of aqueous solutions, proton donors form the hydronium ion H3O+ and are known as Arrhenius acids. Brønsted and Lowry generalized the Arrhenius theory to include non-aqueous solvents. A Brønsted or Arrhenius acid usually contains a hydrogen atom bonded to a chemical structure that is still energetically favorable after loss of H+.
Aqueous Arrhenius acids have characteristic properties that provide a practical description of an acid. Acids form aqueous solutions with a sour taste, can turn blue litmus red, and react with bases and certain metals (like calcium) to form salts. The word acid is derived from the Latin , meaning 'sour'. An aqueous solution of an acid has a pH less than 7 and is colloquially also referred to as "acid" (as in "dissolved in acid"), while the strict definition refers only to the solute. A lower pH means a higher acidity, and thus a higher concentration of positive hydrogen ions in the solution. Chemicals or substances having the property of an acid are said to be acidic.
Common aqueous acids include hydrochloric acid (a solution of hydrogen chloride that is found in gastric acid in the stomach and activates digestive enzymes), acetic acid (vinegar is a dilute aqueous solution of this liquid), sulfuric acid (used in car batteries), and citric acid (found in citrus fruits). As these examples show, acids (in the colloquial sense) can be solutions or pure substances, and can be derived from acids (in the strict sense) that are solids, liquids, or gases. Strong acids and some concentrated weak acids are corrosive, but there are exceptions such as carboranes and boric acid.
The second category of acids are Lewis acids, which form a covalent bond with an electron pair. An example is boron trifluoride (BF3), whose boron atom has a vacant orbital that can form a covalent bond by sharing a lone pair of electrons on an atom in a base, for example the nitrogen atom in ammonia (NH3). Lewis considered this as a generalization of the Brønsted definition, so that an acid is a chemical species that accepts electron pairs either directly or by releasing protons (H+) into the solution, which then accept electron pairs. Hydrogen chloride, acetic acid, and most other Brønsted–Lowry acids cannot form a covalent bond with an electron pair, however, and are therefore not Lewis acids. Conversely, many Lewis acids are not Arrhenius or Brønsted–Lowry acids. In modern terminology, an acid is implicitly a Brønsted acid and not a Lewis acid, since chemists almost always refer to a Lewis acid explicitly as such.
Definitions and concepts
Modern definitions are concerned with the fundamental chemical reactions common to all acids.
Most acids encountered in everyday life are aqueous solutions, or can be dissolved in water, so the Arrhenius and Brønsted–Lowry definitions are the most relevant.
The Brønsted–Lowry definition is the most widely used definition; unless otherwise specified, acid–base reactions are assumed to involve the transfer of a proton (H+) from an acid to a base.
Hydronium ions are acids according to all three definitions. Although alcohols and amines can be Brønsted–Lowry acids, they can also function as Lewis bases due to the lone pairs of electrons on their oxygen and nitrogen atoms.
Arrhenius acids
In 1884, Svante Arrhenius attributed the properties of acidity to hydrogen ions (H+), later described as protons or hydrons. An Arrhenius acid is a substance that, when added to water, increases the concentration of H+ ions in the water. Chemists often write H+(aq) and refer to the hydrogen ion when describing acid–base reactions but the free hydrogen nucleus, a proton, does not exist alone in water, it exists as the hydronium ion (H3O+) or other forms (H5O2+, H9O4+). Thus, an Arrhenius acid can also be described as a substance that increases the concentration of hydronium ions when added to water. Examples include molecular substances such as hydrogen chloride and acetic acid.
An Arrhenius base, on the other hand, is a substance that increases the concentration of hydroxide (OH−) ions when dissolved in water. This decreases the concentration of hydronium because the ions react to form H2O molecules:
H3O+(aq) + OH−(aq) ⇌ H2O(liq) + H2O(liq)
Due to this equilibrium, any increase in the concentration of hydronium is accompanied by a decrease in the concentration of hydroxide. Thus, an Arrhenius acid could also be said to be one that decreases hydroxide concentration, while an Arrhenius base increases it.
In an acidic solution, the concentration of hydronium ions is greater than 10−7 moles per liter. Since pH is defined as the negative logarithm of the concentration of hydronium ions, acidic solutions thus have a pH of less than 7.
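A minimal Python sketch of this definition, assuming an ideal dilute solution in which activity equals concentration:

import math

def pH(hydronium_molar):
    """pH as the negative base-10 log of [H3O+] in mol/L."""
    return -math.log10(hydronium_molar)

print(pH(1e-7))  # 7.0 -- the neutral point at 25 deg C
print(pH(1e-3))  # 3.0 -- more hydronium, lower pH, more acidic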
Brønsted–Lowry acids
While the Arrhenius concept is useful for describing many reactions, it is also quite limited in its scope. In 1923, chemists Johannes Nicolaus Brønsted and Thomas Martin Lowry independently recognized that acid–base reactions involve the transfer of a proton. A Brønsted–Lowry acid (or simply Brønsted acid) is a species that donates a proton to a Brønsted–Lowry base. Brønsted–Lowry acid–base theory has several advantages over Arrhenius theory. Consider the following reactions of acetic acid (CH3COOH), the organic acid that gives vinegar its characteristic taste:
Both theories easily describe the first reaction: CH3COOH acts as an Arrhenius acid because it acts as a source of H3O+ when dissolved in water, and it acts as a Brønsted acid by donating a proton to water. In the second example CH3COOH undergoes the same transformation, in this case donating a proton to ammonia (NH3), but does not relate to the Arrhenius definition of an acid because the reaction does not produce hydronium. Nevertheless, CH3COOH is both an Arrhenius and a Brønsted–Lowry acid.
Brønsted–Lowry theory can be used to describe reactions of molecular compounds in nonaqueous solution or the gas phase. Hydrogen chloride (HCl) and ammonia combine under several different conditions to form ammonium chloride, NH4Cl. In aqueous solution HCl behaves as hydrochloric acid and exists as hydronium and chloride ions. The following reactions illustrate the limitations of Arrhenius's definition:
H3O+(aq) + Cl−(aq) + NH3(aq) → Cl−(aq) + NH4+(aq) + H2O(liq)
HCl(benzene) + NH3(benzene) → NH4Cl(s)
HCl(g) + NH3(g) → NH4Cl(s)
As with the acetic acid reactions, both definitions work for the first example, where water is the solvent and hydronium ion is formed by the HCl solute. The next two reactions do not involve the formation of ions but are still proton-transfer reactions. In the second reaction hydrogen chloride and ammonia (dissolved in benzene) react to form solid ammonium chloride in a benzene solvent and in the third gaseous HCl and NH3 combine to form the solid.
Lewis acids
A third, only marginally related concept was proposed in 1923 by Gilbert N. Lewis, which includes reactions with acid–base characteristics that do not involve a proton transfer. A Lewis acid is a species that accepts a pair of electrons from another species; in other words, it is an electron pair acceptor. Brønsted acid–base reactions are proton transfer reactions while Lewis acid–base reactions are electron pair transfers. Many Lewis acids are not Brønsted–Lowry acids. Contrast how the following reactions are described in terms of acid–base chemistry:
In the first reaction a fluoride ion, F−, gives up an electron pair to boron trifluoride to form the product tetrafluoroborate. Fluoride "loses" a pair of valence electrons because the electrons shared in the B—F bond are located in the region of space between the two atomic nuclei and are therefore more distant from the fluoride nucleus than they are in the lone fluoride ion. BF3 is a Lewis acid because it accepts the electron pair from fluoride. This reaction cannot be described in terms of Brønsted theory because there is no proton transfer.
The second reaction can be described using either theory. A proton is transferred from an unspecified Brønsted acid to ammonia, a Brønsted base; alternatively, ammonia acts as a Lewis base and transfers a lone pair of electrons to form a bond with a hydrogen ion. The species that gains the electron pair is the Lewis acid; for example, the oxygen atom in H3O+ gains a pair of electrons when one of the H—O bonds is broken and the electrons shared in the bond become localized on oxygen.
Depending on the context, a Lewis acid may also be described as an oxidizer or an electrophile. Organic Brønsted acids, such as acetic, citric, or oxalic acid, are not Lewis acids. They dissociate in water to produce a Lewis acid, H+, but at the same time, they also yield an equal amount of a Lewis base (acetate, citrate, or oxalate, respectively, for the acids mentioned). This article deals mostly with Brønsted acids rather than Lewis acids.
Dissociation and equilibrium
Reactions of acids are often generalized in the form HA ⇌ H+ + A−, where HA represents the acid and A− is the conjugate base. This reaction is referred to as protolysis. The protonated form (HA) of an acid is also sometimes referred to as the free acid.
Acid–base conjugate pairs differ by one proton, and can be interconverted by the addition or removal of a proton (protonation and deprotonation, respectively). The acid can be the charged species and the conjugate base can be neutral, in which case the generalized reaction scheme could be written as HA+ ⇌ H+ + A. In solution there exists an equilibrium between the acid and its conjugate base. The equilibrium constant K is an expression of the equilibrium concentrations of the molecules or the ions in solution. Brackets indicate concentration, such that [H2O] means the concentration of H2O. The acid dissociation constant Ka is generally used in the context of acid–base reactions. The numerical value of Ka is equal to the product (multiplication) of the concentrations of the products divided by the concentration of the reactants, where the reactant is the acid (HA) and the products are the conjugate base and H+:
Ka = [A−][H+] / [HA]
The stronger of two acids will have a higher Ka than the weaker acid; the ratio of hydrogen ions to acid will be higher for the stronger acid, as the stronger acid has a greater tendency to lose its proton. Because the range of possible values for Ka spans many orders of magnitude, a more manageable constant, pKa, is more frequently used, where pKa = −log10 Ka. Stronger acids have a smaller pKa than weaker acids. Experimentally determined pKa values at 25 °C in aqueous solution are often quoted in textbooks and reference material.
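As a brief illustration of this relationship, the following Python sketch (a minimal example, using approximate handbook Ka values at 25 °C, quoted for illustration only) converts Ka to pKa and identifies the stronger acid:

```python
import math

def pka_from_ka(ka: float) -> float:
    """pKa is defined as the negative base-10 logarithm of Ka."""
    return -math.log10(ka)

# Approximate Ka values at 25 degrees C, quoted for illustration only.
acids = {"acetic acid": 1.8e-5, "hydrofluoric acid": 6.3e-4}
for name, ka in acids.items():
    print(f"{name}: Ka = {ka:.1e}, pKa = {pka_from_ka(ka):.2f}")
# The larger Ka (and smaller pKa) marks hydrofluoric acid as the stronger acid.
```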
Nomenclature
Arrhenius acids are named according to their anions. In the classical naming system, the ionic suffix is dropped and replaced with a new suffix, according to the following table. The prefix "hydro-" is used when the acid is made up of just hydrogen and one other element. For example, HCl has chloride as its anion, so the hydro- prefix is used, and the -ide suffix is replaced with -ic, giving the name hydrochloric acid.
Classical naming system:
anion ending -ate → -ic acid (e.g., chlorate gives chloric acid)
anion ending -ite → -ous acid (e.g., chlorite gives chlorous acid)
anion ending -ide → hydro- -ic acid (e.g., chloride gives hydrochloric acid)
anion prefix per- with -ate → per- -ic acid (e.g., perchlorate gives perchloric acid)
anion prefix hypo- with -ite → hypo- -ous acid (e.g., hypochlorite gives hypochlorous acid)
In the IUPAC naming system, "aqueous" is simply added to the name of the ionic compound. Thus, for hydrogen chloride, as an acid solution, the IUPAC name is aqueous hydrogen chloride.
Acid strength
The strength of an acid refers to its ability or tendency to lose a proton. A strong acid is one that completely dissociates in water; in other words, one mole of a strong acid HA dissolves in water yielding one mole of H+ and one mole of the conjugate base, A−, and none of the protonated acid HA. In contrast, a weak acid only partially dissociates and at equilibrium both the acid and the conjugate base are in solution. Examples of strong acids are hydrochloric acid (HCl), hydroiodic acid (HI), hydrobromic acid (HBr), perchloric acid (HClO4), nitric acid (HNO3) and sulfuric acid (H2SO4). In water each of these essentially ionizes 100%. The stronger an acid is, the more easily it loses a proton, H+. Two key factors that contribute to the ease of deprotonation are the polarity of the H—A bond and the size of atom A, which determines the strength of the H—A bond. Acid strengths are also often discussed in terms of the stability of the conjugate base.
Stronger acids have a larger acid dissociation constant, Ka, and a lower pKa than weaker acids.
Sulfonic acids, which are organic oxyacids, are a class of strong acids. A common example is toluenesulfonic acid (tosylic acid). Unlike sulfuric acid itself, sulfonic acids can be solids. In fact, polystyrene functionalized into polystyrene sulfonate is a solid strongly acidic plastic that is filterable.
Superacids are acids stronger than 100% sulfuric acid. Examples of superacids are fluoroantimonic acid, magic acid and perchloric acid. The strongest known acid is the helium hydride ion; its conjugate base, helium, has the lowest known proton affinity (177.8 kJ/mol). Superacids can permanently protonate water to give ionic, crystalline hydronium "salts". They can also quantitatively stabilize carbocations.
While Ka measures the strength of an acid compound, the strength of an aqueous acid solution is measured by pH, which is an indication of the concentration of hydronium in the solution. The pH of a simple solution of an acid compound in water is determined by the dilution of the compound and the compound's Ka.
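As a sketch of how pH follows from Ka and dilution, the Python snippet below solves the monoprotic equilibrium Ka = [H+]²/(c − [H+]) exactly via the quadratic formula. It neglects the autoionization of water, an assumption that is reasonable except for extremely dilute or extremely weak acids, and the numbers used are illustrative:

```python
import math

def weak_acid_ph(c: float, ka: float) -> float:
    """pH of a weak monoprotic acid HA at formal concentration c (mol/L).
    Solves x**2 + Ka*x - Ka*c = 0 for x = [H+], neglecting water's
    autoionization (valid when x is well above 1e-7 mol/L)."""
    x = (-ka + math.sqrt(ka * ka + 4.0 * ka * c)) / 2.0
    return -math.log10(x)

# 0.10 M acetic acid with Ka ~ 1.8e-5 gives pH ~ 2.88;
# the same concentration of a strong acid would give pH 1.0.
print(round(weak_acid_ph(0.10, 1.8e-5), 2))
```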
Lewis acid strength in non-aqueous solutions
Lewis acids have been classified in the ECW model and it has been shown that there is no one order of acid strengths. The relative acceptor strength of Lewis acids toward a series of bases, versus other Lewis acids, can be illustrated by C-B plots. It has been shown that to define the order of Lewis acid strength at least two properties must be considered. For Pearson's qualitative HSAB theory the two properties are hardness and strength while for Drago's quantitative ECW model the two properties are electrostatic and covalent.
Chemical characteristics
Monoprotic acids
Monoprotic acids, also known as monobasic acids, are those acids that are able to donate one proton per molecule during the process of dissociation (sometimes called ionization) as shown below (symbolized by HA):
HA(aq) ⇌ H+(aq) + A−(aq)        Ka
Common examples of monoprotic acids among the mineral acids include hydrochloric acid (HCl) and nitric acid (HNO3). For organic acids, on the other hand, the term mainly indicates the presence of one carboxylic acid group, and such acids are sometimes known as monocarboxylic acids. Examples among organic acids include formic acid (HCOOH), acetic acid (CH3COOH) and benzoic acid (C6H5COOH).
Polyprotic acids
Polyprotic acids, also known as polybasic acids, are able to donate more than one proton per acid molecule, in contrast to monoprotic acids that only donate one proton per molecule. Specific types of polyprotic acids have more specific names, such as diprotic (or dibasic) acid (two potential protons to donate), and triprotic (or tribasic) acid (three potential protons to donate). Some macromolecules such as proteins and nucleic acids can have a very large number of acidic protons.
A diprotic acid (here symbolized by H2A) can undergo one or two dissociations depending on the pH. Each dissociation has its own dissociation constant, Ka1 and Ka2.
H2A(aq) ⇌ H+(aq) + HA−(aq)        Ka1
HA−(aq) ⇌ H+(aq) + A2−(aq)        Ka2
The first dissociation constant is typically greater than the second (i.e., Ka1 > Ka2). For example, sulfuric acid (H2SO4) can donate one proton to form the bisulfate anion (HSO4−), for which Ka1 is very large; then it can donate a second proton to form the sulfate anion (SO4^2−), for which Ka2 is of intermediate strength. The large Ka1 for the first dissociation makes sulfuric a strong acid. In a similar manner, the weak, unstable carbonic acid can lose one proton to form the bicarbonate anion (HCO3−) and lose a second to form the carbonate anion (CO3^2−). Both Ka values are small, but Ka1 > Ka2.
A triprotic acid (H3A) can undergo one, two, or three dissociations and has three dissociation constants, where Ka1 > Ka2 > Ka3.
H3A(aq) ⇌ H+(aq) + H2A−(aq)       Ka1
H2A−(aq) ⇌ H+(aq) + HA2−(aq)      Ka2
HA2−(aq) ⇌ H+(aq) + A3−(aq)       Ka3
An inorganic example of a triprotic acid is orthophosphoric acid (H3PO4), usually just called phosphoric acid. All three protons can be successively lost to yield H2PO4−, then HPO4^2−, and finally PO4^3−, the orthophosphate ion, usually just called phosphate. Even though the positions of the three protons on the original phosphoric acid molecule are equivalent, the successive Ka values differ since it is energetically less favorable to lose a proton if the conjugate base is more negatively charged. An organic example of a triprotic acid is citric acid, which can successively lose three protons to finally form the citrate ion.
Although the subsequent loss of each hydrogen ion is less favorable, all of the conjugate bases are present in solution. The fractional concentration, α (alpha), for each species can be calculated. For example, a generic diprotic acid will generate 3 species in solution: H2A, HA−, and A2−. The fractional concentrations can be calculated as below when given either the pH (which can be converted to the [H+]) or the concentrations of the acid with all its conjugate bases:
α(H2A) = [H+]^2 / D
α(HA−) = K1[H+] / D
α(A2−) = K1K2 / D
where D = [H+]^2 + K1[H+] + K1K2.
A plot of these fractional concentrations against pH, for given K1 and K2, is known as a Bjerrum plot. A pattern is observed in the above equations and can be expanded to the general n-protic acid that has been deprotonated i times:
α(i) = [H+]^(n−i) (K0 K1 ⋯ Ki) / Σ over m = 0 to n of [H+]^(n−m) (K0 K1 ⋯ Km)
where K0 = 1 and the other K-terms are the dissociation constants for the acid.
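A minimal Python sketch of this speciation calculation for a diprotic acid follows; it evaluates the three α expressions given above across a pH range, which is exactly the data a Bjerrum plot displays. The carbonic acid constants used are approximate literature values, quoted for illustration:

```python
def diprotic_fractions(ph: float, ka1: float, ka2: float):
    """Fractional concentrations (alphas) of H2A, HA- and A2- at a given pH,
    using the common denominator D = [H+]^2 + Ka1*[H+] + Ka1*Ka2."""
    h = 10.0 ** (-ph)
    d = h * h + ka1 * h + ka1 * ka2
    return h * h / d, ka1 * h / d, ka1 * ka2 / d

# Approximate constants for carbonic acid: Ka1 ~ 4.3e-7, Ka2 ~ 4.8e-11.
for ph in (4, 6, 8, 10, 12):
    a0, a1, a2 = diprotic_fractions(ph, 4.3e-7, 4.8e-11)
    print(f"pH {ph:2d}: H2A {a0:.3f}  HA- {a1:.3f}  A2- {a2:.3f}")
```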
Neutralization
Neutralization is the reaction between an acid and a base, producing a salt and neutralized base; for example, hydrochloric acid and sodium hydroxide form sodium chloride and water:
HCl(aq) + NaOH(aq) → H2O(l) + NaCl(aq)
Neutralization is the basis of titration, where a pH indicator shows the equivalence point when the equivalent number of moles of a base have been added to an acid. It is often wrongly assumed that neutralization should result in a solution with pH 7.0; this is the case only when the acid and base have similar strengths.
Neutralization with a base weaker than the acid results in a weakly acidic salt. An example is the weakly acidic ammonium chloride, which is produced from the strong acid hydrogen chloride and the weak base ammonia. Conversely, neutralizing a weak acid with a strong base gives a weakly basic salt (e.g., sodium fluoride from hydrogen fluoride and sodium hydroxide).
Weak acid–weak base equilibrium
In order for a protonated acid to lose a proton, the pH of the system must rise above the pKa of the acid. The decreased concentration of H+ in that basic solution shifts the equilibrium towards the conjugate base form (the deprotonated form of the acid). In lower-pH (more acidic) solutions, there is a high enough H+ concentration in the solution to cause the acid to remain in its protonated form.
Solutions of weak acids and salts of their conjugate bases form buffer solutions.
Titration
To determine the concentration of an acid in an aqueous solution, an acid–base titration is commonly performed. A strong base solution of known concentration, usually NaOH or KOH, is added to neutralize the acid solution, and a pH indicator signals by its color change when the endpoint is reached. The titration curve of an acid titrated by a base has two axes, with the base volume on the x-axis and the solution's pH value on the y-axis. The pH of the solution always rises as the base is added to the solution.
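The arithmetic behind such a titration is simple mole bookkeeping. The Python sketch below recovers an unknown acid concentration from titration data, assuming the observed endpoint coincides with the equivalence point; the volumes and base concentration shown are hypothetical:

```python
def acid_concentration(c_base: float, v_base: float, v_acid: float,
                       n_protons: int = 1) -> float:
    """Acid concentration (mol/L) from titration data, assuming
    moles of OH- at the endpoint = n_protons * moles of acid.
    Both volumes must be in the same unit (e.g. mL)."""
    return c_base * v_base / (n_protons * v_acid)

# Hypothetical run: 25.0 mL of a monoprotic acid requires 18.4 mL
# of 0.100 M NaOH to reach the endpoint.
print(acid_concentration(0.100, 18.4, 25.0))  # ~0.0736 mol/L
```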
Example: Diprotic acid
For each diprotic acid titration curve, from left to right, there are two midpoints, two equivalence points, and two buffer regions.
Equivalence points
Due to the successive dissociation processes, there are two equivalence points in the titration curve of a diprotic acid. The first equivalence point occurs when all of the hydrogen ions from the first ionization have been titrated; in other words, the amount of OH− added equals the original amount of H2A at the first equivalence point. The second equivalence point occurs when all hydrogen ions are titrated, so the amount of OH− added equals twice the amount of H2A at this point. For a weak diprotic acid titrated by a strong base, the second equivalence point must occur at a pH above 7 because of the hydrolysis of the resulting salts in the solution. At either equivalence point, adding a drop of base causes the steepest rise of the pH value in the system.
Buffer regions and midpoints
A titration curve for a diprotic acid contains two midpoints where pH = pKa. Since there are two different Ka values, the first midpoint occurs at pH = pKa1 and the second one occurs at pH = pKa2. This follows from the Henderson–Hasselbalch equation, pH = pKa + log10([A−]/[HA]): at a midpoint half of the acid has been converted, so [A−] = [HA] and the logarithmic term vanishes. Each segment of the curve that contains a midpoint at its center is called a buffer region. Because a buffer region contains comparable amounts of the acid and its conjugate base, the solution can resist pH changes when base is added, until the next equivalence point.
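A short Python sketch of this buffering behaviour, based on the Henderson–Hasselbalch approximation and hypothetical concentrations, is given below; it shows the pH sitting at pKa at the midpoint and drifting only slowly as added base converts the acid into its conjugate base:

```python
import math

def buffer_ph(pka: float, conc_base: float, conc_acid: float) -> float:
    """Henderson-Hasselbalch approximation: pH = pKa + log10([A-]/[HA]).
    A reasonable estimate within the buffer region of a titration."""
    return pka + math.log10(conc_base / conc_acid)

# Hypothetical acetate buffer (pKa ~ 4.76):
print(buffer_ph(4.76, 0.050, 0.050))            # midpoint: pH = pKa = 4.76
print(round(buffer_ph(4.76, 0.060, 0.040), 2))  # after more base: ~4.94
```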
Applications of acids
In industry
Acids are fundamental reagents in almost all processes in modern industry. Sulfuric acid, a diprotic acid, is the most widely used acid in industry, and is also the most-produced industrial chemical in the world. It is mainly used in producing fertilizers, detergents, batteries and dyes, as well as in processing many products, for example to remove impurities. According to 2011 statistics, the annual world production of sulfuric acid was around 200 million tonnes. For example, phosphate minerals react with sulfuric acid to produce phosphoric acid for the production of phosphate fertilizers, and zinc is produced by dissolving zinc oxide in sulfuric acid, purifying the solution and electrowinning.
In the chemical industry, acids react in neutralization reactions to produce salts. For example, nitric acid reacts with ammonia to produce ammonium nitrate, a fertilizer. Additionally, carboxylic acids can be esterified with alcohols, to produce esters.
Acids are often used to remove rust and other corrosion from metals in a process known as pickling. They may be used as an electrolyte in a wet cell battery, such as sulfuric acid in a car battery.
In food
Tartaric acid is an important component of some commonly used foods like unripened mangoes and tamarind. Natural fruits and vegetables also contain acids. Citric acid is present in oranges, lemon and other citrus fruits. Oxalic acid is present in tomatoes, spinach, and especially in carambola and rhubarb; rhubarb leaves and unripe carambolas are toxic because of high concentrations of oxalic acid. Ascorbic acid (Vitamin C) is an essential vitamin for the human body and is present in such foods as amla (Indian gooseberry), lemon, citrus fruits, and guava.
Many acids can be found in various kinds of food as additives, as they alter their taste and serve as preservatives. Phosphoric acid, for example, is a component of cola drinks. Acetic acid is used in day-to-day life as vinegar. Citric acid is used as a preservative in sauces and pickles.
Carbonic acid is one of the most common acid additives that are widely added in soft drinks. During the manufacturing process, CO2 is usually pressurized to dissolve in these drinks to generate carbonic acid. Carbonic acid is very unstable and tends to decompose into water and CO2 at room temperature and pressure. Therefore, when bottles or cans of these kinds of soft drinks are opened, the soft drinks fizz and effervesce as CO2 bubbles come out.
Certain acids are used as drugs. Acetylsalicylic acid (Aspirin) is used as a pain killer and for bringing down fevers.
In human bodies
Acids play important roles in the human body. The hydrochloric acid present in the stomach aids digestion by breaking down large and complex food molecules. Amino acids are required for synthesis of proteins required for growth and repair of body tissues. Fatty acids are also required for growth and repair of body tissues. Nucleic acids are important for the manufacturing of DNA and RNA and transmitting of traits to offspring through genes. Carbonic acid is important for maintenance of pH equilibrium in the body.
Human bodies contain a variety of organic and inorganic compounds; among these, dicarboxylic acids play an essential role in many biological processes. Many of these acids are amino acids, which mainly serve as materials for the synthesis of proteins. Other weak acids serve as buffers with their conjugate bases to keep the body's pH from undergoing large-scale changes that would be harmful to cells. The remaining dicarboxylic acids also participate in the synthesis of various biologically important compounds in human bodies.
Acid catalysis
Acids are used as catalysts in industrial and organic chemistry; for example, sulfuric acid is used in very large quantities in the alkylation process to produce gasoline. Some acids, such as sulfuric, phosphoric, and hydrochloric acids, also effect dehydration and condensation reactions. In biochemistry, many enzymes employ acid catalysis.
Biological occurrence
Many biologically important molecules are acids. Nucleic acids, which contain acidic phosphate groups, include DNA and RNA. Nucleic acids contain the genetic code that determines many of an organism's characteristics, and is passed from parents to offspring. DNA contains the chemical blueprint for the synthesis of proteins, which are made up of amino acid subunits. Cell membranes contain fatty acid esters such as phospholipids.
An α-amino acid has a central carbon (the α or alpha carbon) that is covalently bonded to a carboxyl group (thus they are carboxylic acids), an amino group, a hydrogen atom and a variable group. The variable group, also called the R group or side chain, determines the identity and many of the properties of a specific amino acid. In glycine, the simplest amino acid, the R group is a hydrogen atom, but in all other amino acids it contains one or more carbon atoms bonded to hydrogens, and may contain other elements such as sulfur, oxygen or nitrogen. With the exception of glycine, naturally occurring amino acids are chiral and almost invariably occur in the L-configuration. Peptidoglycan, found in some bacterial cell walls, contains some D-amino acids. At physiological pH, typically around 7, free amino acids exist in a charged form, where the acidic carboxyl group (-COOH) loses a proton (-COO−) and the basic amine group (-NH2) gains a proton (-NH3+). The entire molecule has a net neutral charge and is a zwitterion, with the exception of amino acids with basic or acidic side chains. Aspartic acid, for example, possesses one protonated amine and two deprotonated carboxyl groups, for a net charge of −1 at physiological pH.
Fatty acids and fatty acid derivatives are another group of carboxylic acids that play a significant role in biology. These contain long hydrocarbon chains and a carboxylic acid group on one end. The cell membrane of nearly all organisms is primarily made up of a phospholipid bilayer, a micelle of hydrophobic fatty acid esters with polar, hydrophilic phosphate "head" groups. Membranes contain additional components, some of which can participate in acid–base reactions.
In humans and many other animals, hydrochloric acid is a part of the gastric acid secreted within the stomach to help hydrolyze proteins and polysaccharides, as well as converting the inactive pro-enzyme, pepsinogen into the enzyme, pepsin. Some organisms produce acids for defense; for example, ants produce formic acid.
Acid–base equilibrium plays a critical role in regulating mammalian breathing. Oxygen gas (O2) drives cellular respiration, the process by which animals release the chemical potential energy stored in food, producing carbon dioxide (CO2) as a byproduct. Oxygen and carbon dioxide are exchanged in the lungs, and the body responds to changing energy demands by adjusting the rate of ventilation. For example, during periods of exertion the body rapidly breaks down stored carbohydrates and fat, releasing CO2 into the blood stream. In aqueous solutions such as blood, CO2 exists in equilibrium with carbonic acid and bicarbonate ion:
CO2 + H2O ⇌ H2CO3 ⇌ H+ + HCO3−
It is the decrease in pH that signals the brain to breathe faster and deeper, expelling the excess CO2 and resupplying the cells with O2.
Cell membranes are generally impermeable to charged or large, polar molecules because of the lipophilic fatty acyl chains comprising their interior. Many biologically important molecules, including a number of pharmaceutical agents, are organic weak acids that can cross the membrane in their protonated, uncharged form but not in their charged form (i.e., as the conjugate base). For this reason the activity of many drugs can be enhanced or inhibited by the use of antacids or acidic foods. The charged form, however, is often more soluble in blood and cytosol, both aqueous environments. When the extracellular environment is more acidic than the neutral pH within the cell, certain acids will exist in their neutral form and will be membrane soluble, allowing them to cross the phospholipid bilayer. Acids that lose a proton at the intracellular pH will exist in their soluble, charged form and are thus able to diffuse through the cytosol to their target. Ibuprofen, aspirin and penicillin are examples of drugs that are weak acids.
Common acids
Mineral acids (inorganic acids)
Hydrogen halides and their solutions: hydrofluoric acid (HF), hydrochloric acid (HCl), hydrobromic acid (HBr), hydroiodic acid (HI)
Halogen oxoacids: hypochlorous acid (HClO), chlorous acid (HClO2), chloric acid (HClO3), perchloric acid (HClO4), and corresponding analogs for bromine and iodine
Hypofluorous acid (HFO), the only known oxoacid for fluorine.
Sulfuric acid (H2SO4)
Fluorosulfuric acid (HSO3F)
Nitric acid (HNO3)
Phosphoric acid (H3PO4)
Fluoroantimonic acid (HSbF6)
Fluoroboric acid (HBF4)
Hexafluorophosphoric acid (HPF6)
Chromic acid (H2CrO4)
Boric acid (H3BO3)
Sulfonic acids
A sulfonic acid has the general formula RS(=O)2–OH, where R is an organic radical.
Methanesulfonic acid (or mesylic acid, CH3SO3H)
Ethanesulfonic acid (or esylic acid, CH3CH2SO3H)
Benzenesulfonic acid (or besylic acid, C6H5SO3H)
p-Toluenesulfonic acid (or tosylic acid, CH3C6H4SO3H)
Trifluoromethanesulfonic acid (or triflic acid, CF3SO3H)
Polystyrene sulfonic acid (sulfonated polystyrene, [CH2CH(C6H4)SO3H]n)
Carboxylic acids
A carboxylic acid has the general formula R-C(O)OH, where R is an organic radical. The carboxyl group -C(O)OH contains a carbonyl group, C=O, and a hydroxyl group, O-H.
Acetic acid (CH3COOH)
Citric acid (C6H8O7)
Formic acid (HCOOH)
Gluconic acid (HOCH2-(CHOH)4-COOH)
Lactic acid (CH3-CHOH-COOH)
Oxalic acid (HOOC-COOH)
Tartaric acid (HOOC-CHOH-CHOH-COOH)
Halogenated carboxylic acids
Halogenation at alpha position increases acid strength, so that the following acids are all stronger than acetic acid.
Fluoroacetic acid
Trifluoroacetic acid
Chloroacetic acid
Dichloroacetic acid
Trichloroacetic acid
Vinylogous carboxylic acids
Normal carboxylic acids are the direct union of a carbonyl group and a hydroxyl group. In vinylogous carboxylic acids, a carbon-carbon double bond separates the carbonyl and hydroxyl groups.
Ascorbic acid
Nucleic acids
Deoxyribonucleic acid (DNA)
Ribonucleic acid (RNA)
References
Listing of strengths of common acids and bases
External links
Curtipot: Acid–Base equilibria diagrams, pH calculation and titration curves simulation and analysis – freeware
Acid–base chemistry | Acid | ["Chemistry"] | 7,465 | ["Equilibrium chemistry", "Acid–base chemistry", "Acids"] |
657 | https://en.wikipedia.org/wiki/Bitumen | Bitumen is an immensely viscous constituent of petroleum. Depending on its exact composition it can be a sticky, black liquid or an apparently solid mass that behaves as a liquid over very large time scales. In American English, the material is commonly referred to as asphalt. Whether found in natural deposits or refined from petroleum, the substance is classed as a pitch. Prior to the 20th century, the term asphaltum was in general use. The word derives from the Ancient Greek word ἄσφαλτος (ásphaltos), which referred to natural bitumen or pitch. The largest natural deposit of bitumen in the world is the Pitch Lake of southwest Trinidad, which is estimated to contain 10 million tons.
About 70% of annual bitumen production is destined for road construction, its primary use. In this application, bitumen is used to bind aggregate particles like gravel and forms a substance referred to as asphalt concrete, which is colloquially termed asphalt. Its other main uses lie in bituminous waterproofing products, such as roofing felt and roof sealant.
In material sciences and engineering, the terms asphalt and bitumen are often used interchangeably and refer both to natural and manufactured forms of the substance, although there is regional variation as to which term is most common. Worldwide, geologists tend to favor the term bitumen for the naturally occurring material. For the manufactured material, which is a refined residue from the distillation process of selected crude oils, bitumen is the prevalent term in much of the world; however, in American English, asphalt is more commonly used. To help avoid confusion, the terms "liquid asphalt", "asphalt binder", or "asphalt cement" are used in the U.S. to distinguish it from asphalt concrete. Colloquially, various forms of bitumen are sometimes referred to as "tar", as in the name of the La Brea Tar Pits.
Naturally occurring bitumen is sometimes specified by the term crude bitumen. Its viscosity is similar to that of cold molasses, while the material obtained from the fractional distillation of crude oil is sometimes referred to as "refined bitumen". The Canadian province of Alberta has most of the world's reserves of natural bitumen in the Athabasca oil sands, which cover an area larger than England.
Terminology
Etymology
The Latin word bitumen traces to the Proto-Indo-European root *gʷet-, "pitch".
The expression "bitumen" originated in the Sanskrit, where we find the words "jatu", meaning "pitch", and "jatu-krit", meaning "pitch creating", "pitch producing" (referring to coniferous or resinous trees). The Latin equivalent is claimed by some to be originally "gwitu-men" (pertaining to pitch), and by others, "pixtumens" (exuding or bubbling pitch), which was subsequently shortened to "bitumen", thence passing via French into English. From the same root is derived the Anglo Saxon word "cwidu" (Mastix), the German word "Kitt" (cement or mastic) and the old Norse word "kvada".
The word "ašphalt" is claimed to have been derived from the Accadian term "asphaltu" or "sphallo", meaning "to split". It was later adopted by the Homeric Greeks in the form of the adjective ἄσφαλἤς, ἐς signifying "firm", "stable", "secure", and the corresponding verb ἄσφαλίξω, ίσω meaning "to make firm or stable", "to secure".
The word "asphalt" is derived from the late Middle English, in turn from French asphalte, based on Late Latin asphalton, asphaltum, which is the latinisation of the Greek (ásphaltos, ásphalton), a word meaning "asphalt/bitumen/pitch", which perhaps derives from , "not, without", i.e. the alpha privative, and (sphallein), "to cause to fall, baffle, (in passive) err, (in passive) be balked of".
The first use of asphalt by the ancients was as a cement to secure or join various objects, and it thus seems likely that the name itself was expressive of this application. Specifically, Herodotus mentioned that bitumen was brought to Babylon to build its gigantic fortification wall.
From the Greek, the word passed into late Latin, and thence into French (asphalte) and English ("asphaltum" and "asphalt"). In French, the term asphalte is used for naturally occurring asphalt-soaked limestone deposits, and for specialised manufactured products with fewer voids or greater bitumen content than the "asphaltic concrete" used to pave roads.
Modern terminology
Bitumen mixed with clay was usually called "asphaltum", but the term is less commonly used today.
In American English, "asphalt" is equivalent to the British "bitumen". However, "asphalt" is also commonly used as a shortened form of "asphalt concrete" (therefore equivalent to the British "asphalt" or "tarmac").
In Canadian English, the word "bitumen" is used to refer to the vast Canadian deposits of extremely heavy crude oil, while "asphalt" is used for the oil refinery product. Diluted bitumen (diluted with naphtha to make it flow in pipelines) is known as "dilbit" in the Canadian petroleum industry, while bitumen "upgraded" to synthetic crude oil is known as "syncrude", and syncrude blended with bitumen is called "synbit".
"Bitumen" is still the preferred geological term for naturally occurring deposits of the solid or semi-solid form of petroleum. "Bituminous rock" is a form of sandstone impregnated with bitumen. The oil sands of Alberta, Canada are a similar material.
Neither "asphalt" nor "bitumen" should be confused with tar or coal tar. Tar is the thick liquid product of the dry distillation and pyrolysis of organic hydrocarbons primarily sourced from vegetation masses, whether fossilized as with coal, or freshly harvested. The majority of bitumen, on the other hand, was formed naturally when vast quantities of organic animal materials were deposited by water and buried hundreds of metres deep at the diagenetic point, where the disorganized fatty hydrocarbon molecules joined in long chains in the absence of oxygen. Bitumen occurs as a solid or highly viscous liquid. It may even be mixed in with coal deposits. Bitumen, and coal using the Bergius process, can be refined into petrols such as gasoline, and bitumen may be distilled into tar, not the other way around.
Composition
Normal composition
The components of bitumen include four main classes of compounds:
Naphthene aromatics (naphthalene), consisting of partially hydrogenated polycyclic aromatic compounds
Polar aromatics, consisting of high molecular weight phenols and carboxylic acids produced by partial oxidation of the material
Saturated hydrocarbons; the percentage of saturated compounds in asphalt correlates with its softening point
Asphaltenes, consisting of high molecular weight phenols and heterocyclic compounds
Bitumen typically contains, elementally, 80% by weight of carbon; 10% hydrogen; up to 6% sulfur; and, molecularly, between 5% and 25% by weight of asphaltenes dispersed in 65% to 90% maltenes. Most natural bitumens also contain organosulfur compounds; nickel and vanadium are found at less than 10 parts per million, as is typical of some petroleum. The substance is soluble in carbon disulfide. It is commonly modelled as a colloid, with asphaltenes as the dispersed phase and maltenes as the continuous phase. "It is almost impossible to separate and identify all the different molecules of bitumen, because the number of molecules with different chemical structure is extremely large".
Asphalt may be confused with coal tar, which is a visually similar black, thermoplastic material produced by the destructive distillation of coal. During the early and mid-20th century, when town gas was produced, coal tar was a readily available byproduct and extensively used as the binder for road aggregates. The addition of coal tar to macadam roads led to the word "tarmac", which is now used in common parlance to refer to road-making materials. However, since the 1970s, when natural gas succeeded town gas, bitumen has completely overtaken the use of coal tar in these applications. Other examples of this confusion include La Brea Tar Pits and the Canadian tar sands, both of which actually contain natural bitumen rather than tar. "Pitch" is another term sometimes informally used at times to refer to asphalt, as in Pitch Lake.
Additives, mixtures and contaminants
For economic and other reasons, bitumen is sometimes sold combined with other materials, often without being labeled as anything other than simply "bitumen".
Of particular note is the use of re-refined engine oil bottoms ("REOB" or "REOBs"), the residue of recycled automotive engine oil collected from the bottoms of re-refining vacuum distillation towers, in the manufacture of asphalt. REOB contains various elements and compounds found in recycled engine oil: additives to the original oil and materials accumulating from its circulation in the engine (typically iron and copper). Some research has indicated a correlation between this adulteration of bitumen and poorer-performing pavement.
Occurrence
The majority of bitumen used commercially is obtained from petroleum. Nonetheless, large amounts of bitumen occur in concentrated form in nature. Naturally occurring deposits of bitumen are formed from the remains of ancient, microscopic algae (diatoms) and other once-living things. These natural deposits of bitumen were formed during the Carboniferous period, when giant swamp forests dominated many parts of the Earth. They were deposited in the mud on the bottom of the ocean or lake where the organisms lived. Under the heat (above 50 °C) and pressure of burial deep in the earth, the remains were transformed into materials such as bitumen, kerogen, or petroleum.
Natural deposits of bitumen include lakes such as the Pitch Lake in Trinidad and Tobago and Lake Bermudez in Venezuela. Natural seeps occur in the La Brea Tar Pits and the McKittrick Tar Pits in California, as well as in the Dead Sea.
Bitumen also occurs in unconsolidated sandstones known as "oil sands" in Alberta, Canada, and the similar "tar sands" in Utah, US.
The Canadian province of Alberta has most of the world's reserves, in three huge deposits covering an area larger than England or New York state. These bituminous sands contain commercially established oil reserves large enough to give Canada the third largest oil reserves in the world. Although historically it was used without refining to pave roads, nearly all of the output is now used as raw material for oil refineries in Canada and the United States.
The world's largest deposit of natural bitumen, known as the Athabasca oil sands, is located in the McMurray Formation of Northern Alberta. This formation is from the early Cretaceous, and is composed of numerous lenses of oil-bearing sand with up to 20% oil. Isotopic studies show the oil deposits to be about 110 million years old. Two smaller but still very large formations occur in the Peace River oil sands and the Cold Lake oil sands, to the west and southeast of the Athabasca oil sands, respectively. Of the Alberta deposits, only parts of the Athabasca oil sands are shallow enough to be suitable for surface mining. The other 80% has to be produced by oil wells using enhanced oil recovery techniques like steam-assisted gravity drainage.
Much smaller heavy oil or bitumen deposits also occur in the Uinta Basin in Utah, US. The Tar Sand Triangle deposit, for example, is roughly 6% bitumen.
Bitumen may occur in hydrothermal veins. An example of this is within the Uinta Basin of Utah, in the US, where there is a swarm of laterally and vertically extensive veins composed of a solid hydrocarbon termed Gilsonite. These veins formed by the polymerization and solidification of hydrocarbons that were mobilized from the deeper oil shales of the Green River Formation during burial and diagenesis.
Bitumen is similar to the organic matter in carbonaceous meteorites. However, detailed studies have shown these materials to be distinct. The vast Alberta bitumen resources are considered to have started out as living material from marine plants and animals, mainly algae, that died millions of years ago when an ancient ocean covered Alberta. They were covered by mud, buried deeply over time, and gently cooked into oil by geothermal heat. Due to pressure from the rising of the Rocky Mountains in southwestern Alberta, 80 to 55 million years ago, the oil was driven northeast hundreds of kilometres and trapped into underground sand deposits left behind by ancient river beds and ocean beaches, thus forming the oil sands.
History
Paleolithic times
Bitumen use goes back to the Middle Paleolithic, where it was shaped into tool handles or used as an adhesive for attaching stone tools to hafts.
The earliest evidence of bitumen use was discovered when archeologists identified bitumen material on Levallois flint artefacts that date to about 71,000 years BP at the Umm el Tlel open-air site, located on the northern slope of the Qdeir Plateau in el Kowm Basin in Central Syria. Microscopic analyses found bituminous residue on two-thirds of the stone artefacts, suggesting that bitumen was an important and frequently-used component of tool making for people in that region at that time. Geochemical analyses of the asphaltic residues places its source to localized natural bitumen outcroppings in the Bichri Massif, about 40 km northeast of the Umm el Tlel archeological site.
A re-examination of artifacts uncovered in 1908 at Le Moustier rock shelters in France has identified Mousterian stone tools that were attached to grips made of ochre and bitumen. The grips were formulated with 55% ground goethite ochre and 45% cooked liquid bitumen to create a moldable putty that hardened into handles. Earlier, less-careful excavations at Le Moustier prevent conclusive identification of the archaeological culture and age, but the European Mousterian style of these tools suggests they are associated with Neanderthals during the late Middle Paleolithic into the early Upper Paleolithic between 60,000 and 35,000 years before present. It is the earliest evidence of multicomponent adhesive in Europe.
Ancient times
The use of natural bitumen for waterproofing and as an adhesive dates at least to the fifth millennium BC, with a crop storage basket discovered in Mehrgarh, of the Indus Valley civilization, lined with it. By the 3rd millennium BC refined rock asphalt was in use in the region, and was used to waterproof the Great Bath in Mohenjo-daro.
In the ancient Near East, the Sumerians used natural bitumen deposits for mortar between bricks and stones, to cement parts of carvings, such as eyes, into place, for ship caulking, and for waterproofing. The Greek historian Herodotus said hot bitumen was used as mortar in the walls of Babylon.
The Euphrates Tunnel beneath the river Euphrates at Babylon in the time of Queen Semiramis was reportedly constructed of burnt bricks covered with bitumen as a waterproofing agent.
Bitumen was used by ancient Egyptians to embalm mummies. The Persian word for asphalt is moom, which is related to the English word mummy. The Egyptians' primary source of bitumen was the Dead Sea, which the Romans knew as Palus Asphaltites (Asphalt Lake).
In approximately 40 AD, Dioscorides described the Dead Sea material as Judaicum bitumen, and noted other places in the region where it could be found. The Sidon bitumen is thought to refer to material found at Hasbeya in Lebanon. Pliny also refers to bitumen being found in Epirus. Bitumen was a valuable strategic resource. It was the object of the first known battle for a hydrocarbon deposit – between the Seleucids and the Nabateans in 312 BC.
In the ancient Far East, natural bitumen was slowly boiled to get rid of the higher fractions, leaving a thermoplastic material of higher molecular weight that, when layered on objects, became hard upon cooling. This was used to cover objects that needed waterproofing, such as scabbards and other items. Statuettes of household deities were also cast with this type of material in Japan, and probably also in China.
In North America, archaeological recovery has indicated that bitumen was sometimes used to adhere stone projectile points to wooden shafts. In Canada, aboriginal people used bitumen seeping out of the banks of the Athabasca and other rivers to waterproof birch bark canoes, and also heated it in smudge pots to ward off mosquitoes in the summer. Bitumen was also used to waterproof plank canoes used by indigenous peoples in pre-colonial southern California.
Continental Europe
In 1553, Pierre Belon described in his work Observations that pissasphalto, a mixture of pitch and bitumen, was used in the Republic of Ragusa (now Dubrovnik, Croatia) for tarring of ships.
An 1838 edition of Mechanics Magazine cites an early use of asphalt in France. A pamphlet dated 1621, by "a certain Monsieur d'Eyrinys, states that he had discovered the existence (of asphaltum) in large quantities in the vicinity of Neufchatel", and that he proposed to use it in a variety of ways – "principally in the construction of air-proof granaries, and in protecting, by means of the arches, the water-courses in the city of Paris from the intrusion of dirt and filth", which at that time made the water unusable. "He expatiates also on the excellence of this material for forming level and durable terraces" in palaces, "the notion of forming such terraces in the streets not one likely to cross the brain of a Parisian of that generation".
But the substance was generally neglected in France until the revolution of 1830. In the 1830s there was a surge of interest, and asphalt became widely used "for pavements, flat roofs, and the lining of cisterns, and in England, some use had been made of it for similar purposes". Its rise in Europe was "a sudden phenomenon", after natural deposits were found "in France at Osbann (Bas-Rhin), the Parc (Ain) and the Puy-de-la-Poix (Puy-de-Dôme)", although it could also be made artificially. One of the earliest uses in France was the laying of about 24,000 square yards of Seyssel asphalt at the Place de la Concorde in 1835.
United Kingdom
Among the earlier uses of bitumen in the United Kingdom was for etching. William Salmon's Polygraphice (1673) provides a recipe for varnish used in etching, consisting of three ounces of virgin wax, two ounces of mastic, and one ounce of asphaltum. By the fifth edition in 1685, he had included more asphaltum recipes from other sources.
The first British patent for the use of asphalt was "Cassell's patent asphalte or bitumen" in 1834. Then on 25 November 1837, Richard Tappin Claridge patented the use of Seyssel asphalt (patent #7849), for use in asphalte pavement, having seen it employed in France and Belgium when visiting with Frederick Walter Simms, who worked with him on the introduction of asphalt to Britain. Dr T. Lamb Phipson writes that his father, Samuel Ryland Phipson, a friend of Claridge, was also "instrumental in introducing the asphalte pavement (in 1836)".
Claridge obtained a patent in Scotland on 27 March 1838, and obtained a patent in Ireland on 23 April 1838. In 1851, extensions for the 1837 patent and for both 1838 patents were sought by the trustees of a company previously formed by Claridge. Claridge's Patent Asphalte Company, formed in 1838 for the purpose of introducing to Britain "Asphalte in its natural state from the mine at Pyrimont Seysell in France", laid one of the first asphalt pavements in Whitehall. Trials were made of the pavement in 1838 on the footway in Whitehall, the stable at Knightsbridge Barracks, "and subsequently on the space at the bottom of the steps leading from Waterloo Place to St. James Park". "The formation in 1838 of Claridge's Patent Asphalte Company (with a distinguished list of aristocratic patrons, and Marc and Isambard Brunel as, respectively, a trustee and consulting engineer), gave an enormous impetus to the development of a British asphalt industry". "By the end of 1838, at least two other companies, Robinson's and the Bastenne company, were in production", with asphalt being laid as paving at Brighton, Herne Bay, Canterbury, Kensington, the Strand, and a large floor area in Bunhill-row, while meantime Claridge's Whitehall paving "continue(d) in good order". The Bonnington Chemical Works manufactured asphalt using coal tar and by 1839 had installed it in Bonnington.
In 1838, there was a flurry of entrepreneurial activity involving bitumen, which had uses beyond paving. For example, bitumen could also be used for flooring, damp proofing in buildings, and for waterproofing of various types of pools and baths, both of which were also proliferating in the 19th century. One of the earliest surviving examples of its use can be seen at Highgate Cemetery where it was used in 1839 to seal the roof of the terrace catacombs. On the London stockmarket, there were various claims as to the exclusivity of bitumen quality from France, Germany and England. And numerous patents were granted in France, with similar numbers of patent applications being denied in England due to their similarity to each other. In England, "Claridge's was the type most used in the 1840s and 50s".
In 1914, Claridge's Company entered into a joint venture to produce tar-bound macadam, with materials manufactured through a subsidiary company called Clarmac Roads Ltd. Two products resulted, namely Clarmac, and Clarphalte, with the former being manufactured by Clarmac Roads and the latter by Claridge's Patent Asphalte Co., although Clarmac was more widely used. However, the First World War ruined the Clarmac Company, which entered into liquidation in 1915. The failure of Clarmac Roads Ltd had a flow-on effect to Claridge's Company, which was itself compulsorily wound up, ceasing operations in 1917, having invested a substantial amount of funds into the new venture, both at the outset and in a subsequent attempt to save the Clarmac Company.
Bitumen was thought in 19th century Britain to contain chemicals with medicinal properties. Extracts from bitumen were used to treat catarrh and some forms of asthma and as a remedy against worms, especially the tapeworm.
United States
The first use of bitumen in the New World was by aboriginal peoples. On the west coast, as early as the 13th century, the Tongva, Luiseño and Chumash peoples collected the naturally occurring bitumen that seeped to the surface above underlying petroleum deposits. All three groups used the substance as an adhesive. It is found on many different artifacts of tools and ceremonial items. For example, it was used on rattles to adhere gourds or turtle shells to rattle handles. It was also used in decorations. Small round shell beads were often set in asphaltum to provide decorations. It was used as a sealant on baskets to make them watertight for carrying water, possibly poisoning those who drank the water. Asphalt was used also to seal the planks on ocean-going canoes.
Asphalt was first used to pave streets in the 1870s. At first naturally occurring "bituminous rock" was used, such as at Ritchie Mines in Macfarlan in Ritchie County, West Virginia from 1852 to 1873. In 1876, asphalt-based paving was used to pave Pennsylvania Avenue in Washington DC, in time for the celebration of the national centennial.
In the horse-drawn era, US streets were mostly unpaved and covered with dirt or gravel. Especially where mud or trenching often made streets difficult to pass, pavements were sometimes made of diverse materials including wooden planks, cobble stones or other stone blocks, or bricks. Unpaved roads produced uneven wear and hazards for pedestrians. In the late 19th century with the rise of the popular bicycle, bicycle clubs were important in pushing for more general pavement of streets. Advocacy for pavement increased in the early 20th century with the rise of the automobile. Asphalt gradually became an ever more common method of paving. St. Charles Avenue in New Orleans was paved its whole length with asphalt by 1889.
In 1900, Manhattan alone had 130,000 horses, pulling streetcars, wagons, and carriages, and leaving their waste behind. They were not fast, and pedestrians could dodge and scramble their way across the crowded streets. Small towns continued to rely on dirt and gravel, but larger cities wanted much better streets. They looked to wood or granite blocks by the 1850s. In 1890, a third of Chicago's 2000 miles of streets were paved, chiefly with wooden blocks, which gave better traction than mud. Brick surfacing was a good compromise, but even better was asphalt paving, which was easy to install and to cut through to get at sewers. With London and Paris serving as models, Washington laid 400,000 square yards of asphalt paving by 1882; it became the model for Buffalo, Philadelphia and elsewhere. By the end of the century, American cities boasted 30 million square yards of asphalt paving, well ahead of brick. Traffic became faster and more dangerous, so electric traffic lights were installed. Electric trolleys (at 12 miles per hour) became the main transportation service for middle class shoppers and office workers until they bought automobiles after 1945 and commuted from more distant suburbs in privacy and comfort on asphalt highways.
Canada
Canada has the world's largest deposit of natural bitumen in the Athabasca oil sands, and Canadian First Nations along the Athabasca River had long used it to waterproof their canoes. In 1719, a Cree named Wa-Pa-Su brought a sample for trade to Henry Kelsey of the Hudson's Bay Company, who was the first recorded European to see it. However, it was not until 1787 that fur trader and explorer Alexander MacKenzie saw the Athabasca oil sands and said, "At about 24 miles from the fork (of the Athabasca and Clearwater Rivers) are some bituminous fountains into which a pole of 20 feet long may be inserted without the least resistance."
The value of the deposit was obvious from the start, but the means of extracting the bitumen was not. The nearest town, Fort McMurray, Alberta, was a small fur trading post, other markets were far away, and transportation costs were too high to ship the raw bituminous sand for paving. In 1915, Sidney Ells of the Federal Mines Branch experimented with separation techniques and used the product to pave 600 feet of road in Edmonton, Alberta. Other roads in Alberta were paved with material extracted from oil sands, but it was generally not economic. During the 1920s Dr. Karl A. Clark of the Alberta Research Council patented a hot water oil separation process and entrepreneur Robert C. Fitzsimmons built the Bitumount oil separation plant, which between 1925 and 1958 produced bitumen using Dr. Clark's method. Most of the bitumen was used for waterproofing roofs, but other uses included fuels, lubrication oils, printers' ink, medicines, rust- and acid-proof paints, fireproof roofing, street paving, patent leather, and fence post preservatives. Eventually Fitzsimmons ran out of money and the plant was taken over by the Alberta government. Today the Bitumount plant is a Provincial Historic Site.
Photography and art
Bitumen was used in early photographic technology. In 1826, or 1827, it was used by French scientist Joseph Nicéphore Niépce to make the oldest surviving photograph from nature. The bitumen was thinly coated onto a pewter plate which was then exposed in a camera. Exposure to light hardened the bitumen and made it insoluble, so that when it was subsequently rinsed with a solvent only the sufficiently light-struck areas remained. Many hours of exposure in the camera were required, making bitumen impractical for ordinary photography, but from the 1850s to the 1920s it was in common use as a photoresist in the production of printing plates for various photomechanical printing processes.
Bitumen was the nemesis of many artists during the 19th century. Although widely used for a time, it ultimately proved unstable for use in oil painting, especially when mixed with the most common diluents, such as linseed oil, varnish and turpentine. Unless thoroughly diluted, bitumen never fully solidifies and will in time corrupt the other pigments with which it comes into contact. The use of bitumen as a glaze to set in shadow or mixed with other colors to render a darker tone resulted in the eventual deterioration of many paintings, for instance those of Delacroix. Perhaps the most famous example of the destructiveness of bitumen is Théodore Géricault's Raft of the Medusa (1818–1819), where his use of bitumen caused the brilliant colors to degenerate into dark greens and blacks and the paint and canvas to buckle.
Modern use
Global use
The vast majority of refined bitumen is used in construction: primarily as a constituent of products used in paving and roofing applications. According to the requirements of the end use, bitumen is produced to specification. This is achieved either by refining or blending. It is estimated that the current world use of bitumen is approximately 102 million tonnes per year. Approximately 85% of all the bitumen produced is used as the binder in asphalt concrete for roads. It is also used in other paved areas such as airport runways, car parks and footways. Typically, the production of asphalt concrete involves mixing fine and coarse aggregates such as sand, gravel and crushed rock with asphalt, which acts as the binding agent. Other materials, such as recycled polymers (e.g., rubber tyres), may be added to the bitumen to modify its properties according to the application for which the bitumen is ultimately intended.
A further 10% of global bitumen production is used in roofing applications, where its waterproofing qualities are invaluable.
The remaining 5% of bitumen is used mainly for sealing and insulating purposes in a variety of building materials, such as pipe coatings, carpet tile backing and paint. Bitumen is applied in the construction and maintenance of many structures, systems, and components, such as:
Highways
Airport runways
Footways and pedestrian ways
Car parks
Racetracks
Tennis courts
Roofing
Damp proofing
Dams
Reservoir and pool linings
Soundproofing
Pipe coatings
Cable coatings
Paints
Building water proofing
Tile underlying waterproofing
Newspaper ink production
Rolled asphalt concrete
The largest use of bitumen is for making asphalt concrete for road surfaces; this accounts for approximately 85% of the bitumen consumed in the United States. There are about 4,000 asphalt concrete mixing plants in the US, and a similar number in Europe.
Asphalt concrete pavement mixes are typically composed of 5% bitumen (known as asphalt cement in the US) and 95% aggregates (stone, sand, and gravel). Due to its highly viscous nature, bitumen must be heated so it can be mixed with the aggregates at the asphalt mixing facility. The temperature required varies depending upon characteristics of the bitumen and the aggregates, but warm-mix asphalt technologies allow producers to reduce the temperature required.
The weight of an asphalt pavement depends upon the aggregate type, the bitumen, and the air void content. An average example in the United States is about 112 pounds per square yard, per inch of pavement thickness.
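As a rough worked example of this rule of thumb (a sketch only: the lane dimensions are hypothetical and actual unit weights vary with the mix design), the following Python snippet estimates the mass of asphalt concrete needed for a stretch of pavement:

```python
def pavement_tons(area_sq_yd: float, thickness_in: float,
                  unit_weight: float = 112.0) -> float:
    """Approximate pavement mass in US short tons, using the quoted
    rule of thumb of about 112 lb per square yard per inch of thickness."""
    return area_sq_yd * thickness_in * unit_weight / 2000.0

# Hypothetical lane: 1 mile long (1,760 yd) by 12 ft (4 yd) wide = 7,040 sq yd,
# paved 3 inches thick:
print(round(pavement_tons(7040, 3.0)))  # ~1,183 tons of mix, ~5% of it bitumen
```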
When maintenance is performed on asphalt pavements, such as milling to remove a worn or damaged surface, the removed material can be returned to a facility for processing into new pavement mixtures. The bitumen in the removed material can be reactivated and put back to use in new pavement mixes. With some 95% of paved roads being constructed of or surfaced with asphalt, a substantial amount of asphalt pavement material is reclaimed each year. According to industry surveys conducted annually by the Federal Highway Administration and the National Asphalt Pavement Association, more than 99% of the bitumen removed each year from road surfaces during widening and resurfacing projects is reused as part of new pavements, roadbeds, shoulders and embankments or stockpiled for future use.
Asphalt concrete paving is widely used in airports around the world. Due to the sturdiness and ability to be repaired quickly, it is widely used for runways.
Mastic asphalt
Mastic asphalt is a type of asphalt that differs from dense graded asphalt (asphalt concrete) in that it has a higher bitumen (binder) content, usually around 7–10% of the whole aggregate mix, as opposed to rolled asphalt concrete, which has only around 5% asphalt. This thermoplastic substance is widely used in the building industry for waterproofing flat roofs and tanking underground. Mastic asphalt is heated until fluid and is spread in layers to form a thin impervious barrier.
Bitumen emulsion
Bitumen emulsions are colloidal mixtures of bitumen and water. Due to the different surface tensions of the two liquids, stable emulsions cannot be created simply by mixing. Therefore, various emulsifiers and stabilizers are added. Emulsifiers are amphiphilic molecules that differ in the charge of their polar head group. They reduce the surface tension of the emulsion and thus prevent bitumen particles from fusing. The emulsifier charge defines the type of emulsion: anionic (negatively charged) and cationic (positively charged). The concentration of an emulsifier is a critical parameter affecting the size of the bitumen particles: higher concentrations lead to smaller bitumen particles. Thus, emulsifiers have a great impact on the stability, viscosity, breaking strength, and adhesion of the bitumen emulsion. The size of bitumen particles is usually between 0.1 μm and 50 μm, with a main fraction between 1 μm and 10 μm. Laser diffraction techniques can be used to determine the particle size distribution quickly and easily. Cationic emulsifiers primarily include long-chain amines such as imidazolines, amido-amines, and diamines, which acquire a positive charge when an acid is added. Anionic emulsifiers are often fatty acids extracted from lignin, tall oil, or tree resin saponified with bases such as NaOH, which creates a negative charge.
During storage of a bitumen emulsion, bitumen particles settle (sedimentation), agglomerate (flocculation), or fuse (coagulation), which gives the emulsion a certain inherent instability. How fast this happens depends on the formulation of the emulsion, but also on storage conditions such as temperature and humidity. When emulsified bitumen comes into contact with aggregates, the emulsifiers lose their effectiveness, the emulsion breaks down, and an adhering bitumen film forms; this is referred to as 'breaking'. The bitumen particles coagulate and separate from the water, which evaporates, almost instantly creating a continuous bitumen film. Not every asphalt emulsion breaks at the same rate on contact with aggregates. This allows a classification into rapid-setting (R), medium-setting (MS), and slow-setting (SS) emulsions, as well as application-specific optimization of the formulation across a wide field of applications (1). For example, slow-setting emulsions allow a longer processing time, which is particularly advantageous for fine aggregates (1).
Adhesion problems have been reported for anionic emulsions in contact with quartz-rich aggregates, so cationic emulsions, which achieve better adhesion, are used instead. The extensive range of bitumen emulsions is covered only insufficiently by standardization. DIN EN 13808, for cationic bitumen emulsions, has existed since July 2005; it describes a classification of bitumen emulsions based on letters and numbers, taking account of charge, viscosity, and the type of bitumen. The production process of bitumen emulsions is complex. Two methods are commonly used: the colloid mill method and the high internal phase ratio (HIPR) method. In the colloid mill method, bitumen and a water-emulsifier mixture are fed into a mill in which a rotor moves at high speed within a stator; the resulting shear forces generate bitumen particles between 5 μm and 10 μm coated with emulsifiers. The HIPR method is used to create smaller bitumen particles, monomodal and narrow particle size distributions, and very high bitumen concentrations. Here, a highly concentrated bitumen emulsion is produced first by moderate stirring and is diluted afterward. In contrast to the colloid mill method, the aqueous phase is introduced into hot bitumen, enabling very high bitumen concentrations (1).
T The "High Internal Phase Ratio (HIPR)" method is used for creating smaller bitumen particles, monomodal, narrow particle size distributions, and very high bitumen concentrations. Here, a highly concentrated bitumen emulsion is produced first by moderate stirring and diluted afterward. In contrast to the "Colloid-Mill" method, the aqueous phase is introduced into hot bitumen, enabling very high bitumen concentrations (1).he "High Internal Phase Ratio (HIPR)" method is used for creating smaller bitumen particles, monomodal, narrow particle size distributions, and very high bitumen concentrations. Here, a highly concentrated bitumen emulsion is produced first by moderate stirring and diluted afterward. In contrast to the "Colloid-Mill" method, the aqueous phase is introduced into hot bitumen, enabling very high bitumen concentrations (1).
Bitumen emulsions are used in a wide variety of applications in road construction and building protection, primarily in cold recycling mixtures, adhesive coatings, and surface treatments (1). Because their viscosity is lower than that of hot bitumen, processing requires less energy and carries significantly less risk of fire and burns. Chipseal involves spraying the road surface with bitumen emulsion followed by a layer of crushed rock, gravel, or crushed slag. Slurry seal is a mixture of bitumen emulsion and fine crushed aggregate that is spread on the surface of a road. Cold-mixed asphalt can also be made from bitumen emulsion to create pavements similar to hot-mixed asphalt, several inches in depth, and bitumen emulsions are also blended into recycled hot-mix asphalt to create low-cost pavements. Bitumen emulsion techniques are known to be useful for all classes of roads, and their use may also be possible in the following applications:
1. asphalts for heavily trafficked roads, based on the use of polymer-modified emulsions;
2. warm emulsion-based mixtures, to improve both maturation time and mechanical properties;
3. half-warm technology, in which aggregates are heated to up to 100 °C, producing mixtures with properties similar to those of hot asphalts;
4. high-performance surface dressing.
Synthetic crude oil
Synthetic crude oil, also known as syncrude, is the output from a bitumen upgrader facility used in connection with oil sand production in Canada. Bituminous sands are mined using enormous (100-ton capacity) power shovels and loaded into even larger (400-ton capacity) dump trucks for movement to an upgrading facility. The process used to extract the bitumen from the sand is a hot water process originally developed by Dr. Karl Clark of the University of Alberta during the 1920s. After extraction from the sand, the bitumen is fed into a bitumen upgrader, which converts it into a light crude oil equivalent. This synthetic substance is fluid enough to be transferred through conventional oil pipelines and can be fed into conventional oil refineries without any further treatment. By 2015, 75% of the synthetic crude oil produced by Canadian bitumen upgraders was exported to oil refineries in the United States.
In Alberta, five bitumen upgraders produce synthetic crude oil and a variety of other products: The Suncor Energy upgrader near Fort McMurray, Alberta produces synthetic crude oil plus diesel fuel; the Syncrude Canada, Canadian Natural Resources, and Nexen upgraders near Fort McMurray produce synthetic crude oil; and the Shell Scotford Upgrader near Edmonton produces synthetic crude oil plus an intermediate feedstock for the nearby Shell Oil Refinery. A sixth upgrader, under construction in 2015 near Redwater, Alberta, will upgrade half of its crude bitumen directly to diesel fuel, with the remainder of the output being sold as feedstock to nearby oil refineries and petrochemical plants.
Non-upgraded crude bitumen
Canadian bitumen does not differ substantially from oils such as Venezuelan extra-heavy and Mexican heavy oil in chemical composition, and the real difficulty is moving the extremely viscous bitumen through oil pipelines to the refinery. Many modern oil refineries are extremely sophisticated and can process non-upgraded bitumen directly into products such as gasoline, diesel fuel, and refined asphalt without any preprocessing. This is particularly common in areas such as the US Gulf coast, where refineries were designed to process Venezuelan and Mexican oil, and in areas such as the US Midwest, where refineries were rebuilt to process heavy oil as domestic light oil production declined. Given the choice, such heavy oil refineries usually prefer to buy bitumen rather than synthetic oil because the cost is lower, and in some cases because they prefer to produce more diesel fuel and less gasoline. By 2015, Canadian production and exports of non-upgraded bitumen exceeded those of synthetic crude oil, and about 65% of it was exported to the United States.
Because of the difficulty of moving crude bitumen through pipelines, non-upgraded bitumen is usually diluted with natural-gas condensate in a form called dilbit or with synthetic crude oil, called synbit. However, to meet international competition, much non-upgraded bitumen is now sold as a blend of multiple grades of bitumen, conventional crude oil, synthetic crude oil, and condensate in a standardized benchmark product such as Western Canadian Select. This sour, heavy crude oil blend is designed to have uniform refining characteristics to compete with internationally marketed heavy oils such as Mexican Mayan or Arabian Dubai Crude.
Radioactive waste encapsulation matrix
Bitumen was used, starting in the 1960s, as a hydrophobic matrix for encapsulating radioactive waste such as medium-activity salts (mainly soluble sodium nitrate and sodium sulfate) produced by the reprocessing of spent nuclear fuels, or radioactive sludges from sedimentation ponds. Bituminised radioactive waste containing highly radiotoxic alpha-emitting transuranic elements from nuclear reprocessing plants has been produced at industrial scale in France, Belgium and Japan, but this type of waste conditioning has been abandoned because of operational safety issues (risks of fire, as occurred in a bituminisation plant at Tokai Works in Japan) and long-term stability problems related to geological disposal in deep rock formations. One of the main problems is the swelling of bitumen exposed to radiation and to water. Bitumen swelling is first induced by radiation, because of hydrogen gas bubbles generated by alpha and gamma radiolysis. A second mechanism is swelling of the matrix when the encapsulated hygroscopic salts, exposed to water or moisture, start to rehydrate and dissolve. The high concentration of salt in the pore solution inside the bituminised matrix is then responsible for osmotic effects: water moves toward the concentrated salts, with the bitumen acting as a semi-permeable membrane, and this too causes the matrix to swell. The swelling pressure due to the osmotic effect under constant volume can be as high as 200 bar. If not properly managed, this high pressure can cause fractures in the near field of a disposal gallery of bituminised medium-level waste. Once the bituminised matrix has been altered by swelling, encapsulated radionuclides are easily leached on contact with groundwater and released into the geosphere. The high ionic strength of the concentrated saline solution also favours the migration of radionuclides in clay host rocks. The presence of chemically reactive nitrate can also affect the redox conditions prevailing in the host rock by establishing oxidizing conditions, preventing the reduction of redox-sensitive radionuclides. In their higher valences, radionuclides of elements such as selenium, technetium, uranium, neptunium and plutonium have a higher solubility and are often present in water as non-retarded anions. This makes the disposal of medium-level bituminised waste very challenging.
Different types of bitumen have been used: blown bitumen (partly oxidized with air oxygen at high temperature after distillation, and harder) and direct distillation bitumen (softer). Blown bitumens like Mexphalte, with a high content of saturated hydrocarbons, are more easily biodegraded by microorganisms than direct distillation bitumen, with a low content of saturated hydrocarbons and a high content of aromatic hydrocarbons.
Concrete encapsulation of radwaste is presently considered a safer alternative by the nuclear industry and the waste management organisations.
Other uses
Roofing shingles and roll roofing account for most of the remaining bitumen consumption. Other uses include cattle sprays, fence-post treatments, and waterproofing for fabrics. Bitumen is used to make Japan black, a lacquer known especially for its use on iron and steel, and some exterior paint suppliers use it in paints and marker inks to increase weather resistance and permanence and to darken the color. Bitumen is also used to seal some alkaline batteries during manufacturing and is commonly used as a ground in the etching process of intaglio printmaking.
Production
About 164 million tons of bitumen were produced in 2019. It is obtained as the "heavy" (i.e., difficult to distill) fraction of crude oil: material with a boiling point greater than around 500 °C is considered asphalt. Vacuum distillation separates it from the other components of crude oil (such as naphtha, gasoline and diesel). The resulting material is typically further treated to extract small but valuable amounts of lubricants and to adjust its properties to suit applications. In a de-asphalting unit, the crude bitumen is treated with either propane or butane in a supercritical phase to extract the lighter molecules, which are then separated. Further processing is possible by "blowing" the product, namely reacting it with oxygen; this step makes the product harder and more viscous.
Bitumen is typically stored and transported at elevated temperatures to keep it fluid. Sometimes diesel oil or kerosene is mixed in before shipping to retain liquidity; upon delivery, these lighter materials are separated out of the mixture. This mixture is often called "bitumen feedstock", or BFS. Some dump trucks route the hot engine exhaust through pipes in the dump body to keep the material warm. The backs of tippers carrying asphalt, as well as some handling equipment, are also commonly sprayed with a releasing agent before filling to aid release. Diesel oil is no longer used as a release agent due to environmental concerns.
Oil sands
Naturally occurring crude bitumen impregnated in sedimentary rock is the prime feedstock for petroleum production from "oil sands", currently under development in Alberta, Canada. Canada has most of the world's supply of natural bitumen, covering 140,000 square kilometres (an area larger than England), giving it the second-largest proven oil reserves in the world. The Athabasca oil sands are the largest bitumen deposit in Canada and the only one accessible to surface mining, although recent technological breakthroughs have allowed deeper deposits to be produced by in situ methods. Because of oil price increases after 2003, producing bitumen became highly profitable, but following the price decline after 2014 it became uneconomic to build new plants. Canadian crude bitumen production grew steadily through 2014 and was projected to continue rising through 2020; at recent production rates, Alberta's extractable crude bitumen is estimated to last about 200 years.
Alternatives and bioasphalt
Although uncompetitive economically, bitumen can be made from nonpetroleum-based renewable resources such as sugar, molasses and rice, corn and potato starches. Bitumen can also be made from waste material by fractional distillation of used motor oil, which is sometimes otherwise disposed of by burning or dumping into landfills. Use of motor oil may cause premature cracking in colder climates, resulting in roads that need to be repaved more frequently.
Nonpetroleum-based asphalt binders can be made light-colored. Lighter-colored roads absorb less heat from solar radiation, reducing their contribution to the urban heat island effect. Parking lots that use bitumen alternatives are called green parking lots.
Albanian deposits
Selenizza is a naturally occurring solid hydrocarbon bitumen found in native deposits in Selenice, in Albania, the only European asphalt mine still in use. The bitumen is found in the form of veins, filling cracks in a more or less horizontal direction. The bitumen content varies from 83% to 92% (soluble in carbon disulphide), with a penetration value near zero and a softening point (ring and ball) around 120 °C. The insoluble matter, consisting mainly of silica ore, ranges from 8% to 17%.
Albanian bitumen extraction has a long history and was practiced in an organized way by the Romans. After centuries of disuse, Albanian bitumen was mentioned again only in 1868, when the Frenchman Coquand published the first geological description of the deposits. In 1875, exploitation rights were granted to the Ottoman government, and in 1912 they were transferred to the Italian company Simsa. From 1945 the mine was exploited by the Albanian government, and since 2001 its management has passed to a French company, which organized the mining process for the manufacture of natural bitumen on an industrial scale.
Today the mine is predominantly exploited in an open pit quarry but several of the many underground mines (deep and extending over several km) still remain viable. Selenizza is produced primarily in granular form, after melting the bitumen pieces selected in the mine.
Selenizza is mainly used as an additive in the road construction sector. It is mixed with traditional bitumen to improve both the viscoelastic properties and the resistance to ageing. It may be blended with the hot bitumen in tanks, but its granular form allows it to be fed in the mixer or in the recycling ring of normal asphalt plants. Other typical applications include the production of mastic asphalts for sidewalks, bridges, car-parks and urban roads as well as drilling fluid additives for the oil and gas industry. Selenizza is available in powder or in granular material of various particle sizes and is packaged in sacks or in thermal fusible polyethylene bags.
A life-cycle assessment study comparing natural Selenizza with petroleum bitumen found that the environmental impact of Selenizza, in terms of carbon dioxide emissions, is about half that of road asphalt produced in oil refineries.
Recycling
Bitumen is a commonly recycled material in the construction industry. The two most common recycled materials that contain bitumen are reclaimed asphalt pavement (RAP) and reclaimed asphalt shingles (RAS). RAP is recycled at a greater rate than any other material in the United States, and typically contains approximately 5–6% bitumen binder. Asphalt shingles typically contain 20–40% bitumen binder.
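A minimal binder-balance sketch follows, assuming the RAP binder is fully available and reactivated (real mix designs apply correction factors for partial binder availability); the percentages are illustrative:

```python
# Binder balance for a recycled mix: how much virgin binder is needed
# once the binder already present in the RAP is credited. Simplified:
# assumes all RAP binder is fully reactivated.

def virgin_binder_pct(target_pct: float, rap_frac_pct: float,
                      rap_binder_pct: float) -> float:
    """Virgin binder required, as a percentage of total mix mass."""
    from_rap = rap_frac_pct / 100.0 * rap_binder_pct
    return target_pct - from_rap

# Mix with 20% RAP (5.5% binder content) and a 5.5% total binder target:
print(round(virgin_binder_pct(5.5, 20, 5.5), 2))  # 4.4 -> 4.4% virgin binder
```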
Bitumen naturally becomes stiffer over time due to oxidation, evaporation, exudation, and physical hardening. For this reason, recycled asphalt is typically combined with virgin asphalt, softening agents, and/or rejuvenating additives to restore its physical and chemical properties.
Economics
Although bitumen typically makes up only 4 to 5 percent (by weight) of the pavement mixture, as the pavement's binder it is also the most expensive component of the road-paving material.
During bitumen's early use in modern paving, oil refiners gave it away. Today, however, bitumen is a highly traded commodity, and its price increased substantially in the early 21st century. A U.S. government report states:
"In 2002, asphalt sold for approximately $160 per ton. By the end of 2006, the cost had doubled to approximately $320 per ton, and then it almost doubled again in 2012 to approximately $610 per ton."
The report indicates that an "average" 1-mile (1.6-kilometer)-long, four-lane highway would include "300 tons of asphalt," which, "in 2002 would have cost around $48,000. By 2006 this would have increased to $96,000 and by 2012 to $183,000... an increase of about $135,000 for every mile of highway in just 10 years."
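The report's arithmetic can be reproduced directly from the quoted figures:

```python
# Checking the quoted example: 300 tons of asphalt for a 1-mile,
# four-lane highway, at the report's price per ton for each year.
TONS = 300
for year, price in [(2002, 160), (2006, 320), (2012, 610)]:
    print(year, f"${TONS * price:,}")
# 2002 $48,000 / 2006 $96,000 / 2012 $183,000 -> +$135,000 over the decade
```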
The Middle East is a significant exporter of bitumen, particularly to India and China. According to the Argus Bitumen Report (2024/07/12), India is the largest importer, driven by extensive infrastructure projects. The report projects a CAGR of 4.5% for India's bitumen imports over the next five years, while China's imports are expected to grow at a CAGR of 3.8%. The current export price to India is approximately $350 per metric ton, and for China, it is around $360 per metric ton. The Middle East's strategic advantage in crude oil production underpins its capacity to meet these demands.
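For readers unfamiliar with the metric, a CAGR compounds multiplicatively; the sketch below shows how the quoted rates grow a volume over five years, using a placeholder baseline index rather than sourced import figures:

```python
# Compounding the report's CAGR figures (4.5% for India, 3.8% for China).
# The baseline of 100 is a hypothetical index, not sourced data.

def project(base_volume: float, cagr: float, years: int) -> float:
    """Volume after compounding at `cagr` (e.g. 0.045) for `years` years."""
    return base_volume * (1 + cagr) ** years

base = 100.0
print(f"India: {project(base, 0.045, 5):.1f}")  # ~124.6 after 5 years
print(f"China: {project(base, 0.038, 5):.1f}")  # ~120.5 after 5 years
```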
Health and safety
People can be exposed to bitumen in the workplace by breathing in fumes or by skin absorption. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit of 5 mg/m3 over a 15-minute period.
Bitumen is a largely inert material that must be heated or diluted to a point where it becomes workable for the production of materials for paving, roofing, and other applications. In examining the potential health hazards associated with bitumen, the International Agency for Research on Cancer (IARC) determined that it is the application parameters, predominantly temperature, that affect occupational exposure and the potential bioavailable carcinogenic hazard of bitumen emissions. In particular, temperatures greater than 199 °C (390 °F) were shown to produce a greater exposure risk than lower temperatures, such as those typically used in asphalt pavement mix production and placement. IARC has classified paving asphalt fumes in Group 2B (possibly carcinogenic to humans), a category reflecting inadequate evidence of carcinogenicity in humans.
In 2020, scientists reported that bitumen is a significant and largely overlooked source of air pollution in urban areas, especially during hot and sunny periods.
A bitumen-like substance found in the Himalayas and known as shilajit is sometimes used as an Ayurveda medicine, but is not in fact a tar, resin or bitumen.
See also
Asphalt plant
Asphaltene
Bioasphalt
Bitumen-based fuel
Bituminous coal
Bituminous rocks
Blacktop
Cariphalte
Duxit
Macadam
Oil sands
Pitch drop experiment
Pitch (resin)
Road surface
Tar
Tarmac
Sealcoat
Stamped asphalt
Notes
References
Sources
External links
Pavement Interactive – Asphalt
CSU Sacramento, The World Famous Asphalt Museum!
National Institute for Occupational Safety and Health – Asphalt Fumes
Scientific American, "Asphalt", 20 August 1881, pp.121
Amorphous solids
Building materials
Chemical mixtures
IARC Group 2B carcinogens
Pavements
Petroleum products
Road construction materials | Bitumen | [
"Physics",
"Chemistry",
"Engineering"
] | 11,862 | [
"Petroleum products",
"Building engineering",
"Unsolved problems in physics",
"Architecture",
"Construction",
"Petroleum",
"Materials",
"Chemical mixtures",
"nan",
"Asphalt",
"Amorphous solids",
"Matter",
"Building materials"
] |
659 | https://en.wikipedia.org/wiki/American%20National%20Standards%20Institute | The American National Standards Institute (ANSI) is a private nonprofit organization that oversees the development of voluntary consensus standards for products, services, processes, systems, and personnel in the United States. The organization also coordinates U.S. standards with international standards so that American products can be used worldwide.
ANSI accredits standards that are developed by representatives of other standards organizations, government agencies, consumer groups, companies, and others. These standards ensure that the characteristics and performance of products are consistent, that people use the same definitions and terms, and that products are tested the same way. ANSI also accredits organizations that carry out product or personnel certification in accordance with requirements defined in international standards.
The organization's headquarters are in Washington, D.C. ANSI's operations office is located in New York City. The ANSI annual operating budget is funded by the sale of publications, membership dues and fees, accreditation services, fee-based programs, and international standards programs.
Many ANSI regulations are incorporated by reference into United States federal statutes (i.e. by OSHA regulations referring to individual ANSI specifications). ANSI does not make these standards publicly available, and charges money for access to these documents; it further claims that it is copyright infringement for them to be provided to the public by others free of charge. These assertions have been the subject of criticism and litigation.
History
ANSI was formed in 1918, when five engineering societies and three government agencies founded the American Engineering Standards Committee (AESC). In 1928, the AESC became the American Standards Association (ASA). In 1966, the ASA was reorganized and became the United States of America Standards Institute (USASI). The present name was adopted in 1969.
Prior to 1918, these five founding engineering societies:
American Institute of Electrical Engineers (AIEE, now IEEE)
American Society of Mechanical Engineers (ASME)
American Society of Civil Engineers (ASCE)
American Institute of Mining Engineers (AIME, now American Institute of Mining, Metallurgical, and Petroleum Engineers)
American Society for Testing and Materials (now ASTM International)
had been members of the United Engineering Society (UES). At the behest of the AIEE, they invited the U.S. government Departments of War, Navy (combined in 1947 to become the Department of Defense or DOD) and Commerce to join in founding a national standards organization.
According to Adam Stanton, the first permanent secretary and head of staff in 1919, AESC started as an ambitious program and little else. Staff for the first year consisted of one executive, Clifford B. LePage, who was on loan from a founding member, ASME. An annual budget of $7,500 was provided by the founding bodies.
In 1931, the organization (renamed ASA in 1928) became affiliated with the U.S. National Committee of the International Electrotechnical Commission (IEC), which had been formed in 1904 to develop electrical and electronics standards.
Members
ANSI's members are government agencies, organizations, academic and international bodies, and individuals. In total, the Institute represents the interests of more than 270,000 companies and organizations and 30 million professionals worldwide.
ANSI's market-driven, decentralized approach has been criticized in comparison with more planned and organized international approaches to standardization. An underlying issue is the difficulty of balancing "the interests of both the nation's industrial and commercial sectors and the nation as a whole."
Process
Although ANSI itself does not develop standards, the Institute oversees the development and use of standards by accrediting the procedures of standards developing organizations. ANSI accreditation signifies that the procedures used by standards developing organizations meet the institute's requirements for openness, balance, consensus, and due process.
ANSI also designates specific standards as American National Standards, or ANS, when the Institute determines that the standards were developed in an environment that is equitable, accessible and responsive to the requirements of various stakeholders.
Voluntary consensus standards quicken the market acceptance of products while making clear how to improve the safety of those products for the protection of consumers. There are approximately 9,500 American National Standards that carry the ANSI designation.
The American National Standards process involves:
consensus by a group that is open to representatives from all interested parties
broad-based public review and comment on draft standards
consideration of and response to comments
incorporation of submitted changes that meet the same consensus requirements into a draft standard
availability of an appeal by any participant alleging that these principles were not respected during the standards-development process.
International activities
In addition to facilitating the formation of standards in the United States, ANSI promotes the use of U.S. standards internationally, advocates U.S. policy and technical positions in international and regional standards organizations, and encourages the adoption of international standards as national standards where appropriate.
The institute is the official U.S. representative to the two major international standards organizations, the International Organization for Standardization (ISO), as a founding member, and the International Electrotechnical Commission (IEC), via the U.S. National Committee (USNC). ANSI participates in almost the entire technical program of both the ISO and the IEC, and administers many key committees and subgroups. In many instances, U.S. standards are taken forward to ISO and IEC, through ANSI or the USNC, where they are adopted in whole or in part as international standards.
Adoption of ISO and IEC standards as American standards increased from 0.2% in 1986 to 15.5% in May 2012.
Standards panels
The Institute administers nine standards panels:
ANSI Homeland Defense and Security Standardization Collaborative (HDSSC)
ANSI Nanotechnology Standards Panel (ANSI-NSP)
ID Theft Prevention and ID Management Standards Panel (IDSP)
ANSI Energy Efficiency Standardization Coordination Collaborative (EESCC)
Nuclear Energy Standards Coordination Collaborative (NESCC)
Electric Vehicles Standards Panel (EVSP)
ANSI-NAM Network on Chemical Regulation
ANSI Biofuels Standards Coordination Panel
Healthcare Information Technology Standards Panel (HITSP)
Each of the panels works to identify, coordinate, and harmonize voluntary standards relevant to these areas.
In 2009, ANSI and the National Institute of Standards and Technology (NIST) formed the Nuclear Energy Standards Coordination Collaborative (NESCC). NESCC is a joint initiative to identify and respond to the current need for standards in the nuclear industry.
American national standards
The ASA (as for American Standards Association) photographic exposure system, originally defined in ASA Z38.2.1 (since 1943) and ASA PH2.5 (since 1954), together with the DIN system (DIN 4512 since 1934), became the basis for the ISO system (since 1974), currently used worldwide (ISO 6, ISO 2240, ISO 5800, ISO 12232).
A standard for the set of values used to represent characters in digital computers. The ANSI code standard extended the previously created ASCII seven bit code standard (ASA X3.4-1963), with additional codes for European alphabets (see also Extended Binary Coded Decimal Interchange Code or EBCDIC). In Microsoft Windows, the phrase "ANSI" refers to the Windows ANSI code pages (even though they are not ANSI standards). Most of these are fixed width, though some characters for ideographic languages are variable width. Since these characters are based on a draft of the ISO-8859 series, some of Microsoft's symbols are visually very similar to the ISO symbols, leading many to falsely assume that they are identical.
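The practical consequence of the Windows "ANSI" code pages differing from the ISO-8859 standards can be seen in bytes 0x80–0x9F, which Windows-1252 assigns to printable symbols while ISO-8859-1 reserves them as C1 control codes; a short demonstration using Python's standard codecs:

```python
# Decode the same bytes with the Windows "ANSI" code page 1252 and with
# ISO-8859-1 (latin-1) to show where the two character sets diverge.
for byte in (b"\x80", b"\x93", b"\x99"):
    print(byte.hex(),
          repr(byte.decode("cp1252")),   # euro sign, curly quote, (TM)
          repr(byte.decode("latin-1")))  # C1 control characters
# 80 '€' '\x80'
# 93 '“' '\x93'
# 99 '™' '\x99'
```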
The first computer programming language standard was "American Standard Fortran" (informally known as "FORTRAN 66"), approved in March 1966 and published as ASA X3.9-1966.
The programming language COBOL had ANSI standards in 1968, 1974, and 1985. The COBOL 2002 standard was issued by ISO.
The original standard implementation of the C programming language was standardized as ANSI X3.159-1989, becoming the well-known ANSI C.
The X3J13 committee was created in 1986 to formalize the ongoing consolidation of Common Lisp, culminating in 1994 with the publication of ANSI's first object-oriented programming standard.
A popular Unified Thread Standard for nuts and bolts is ANSI/ASME B1.1 which was defined in 1935, 1949, 1989, and 2003.
The ANSI-NSF International standards used for commercial kitchens, such as restaurants, cafeterias, delis, etc.
The ANSI/APSP (Association of Pool & Spa Professionals) standards used for pools, spas, hot tubs, barriers, and suction entrapment avoidance.
The ANSI/HI (Hydraulic Institute) standards used for pumps.
The ANSI standard for eye protection is Z87.1, which gives a specific impact-resistance rating to the eyewear. This standard is commonly used for shop glasses, shooting glasses, and many other examples of protective eyewear. While compliance with this standard is required by United States federal law, the standard is not made freely available by ANSI, which charges $65 for a PDF of it.
The ANSI paper sizes (ANSI/ASME Y14.1).
See also
Accredited Crane Operator Certification
ANSI ASC X9
ANSI ASC X12
ANSI C
Institute of Environmental Sciences and Technology (IEST)
Institute of Nuclear Materials Management (INMM)
ISO (to which ANSI is the official US representative)
National Information Standards Organization (NISO)
National Institute of Standards and Technology (NIST)
Open standards
References
External links
1918 establishments in the United States
501(c)(3) organizations
Charities based in Washington, D.C.
ISO member bodies
Organizations established in 1918
Technical specifications
Standards organizations in the United States
Occupational safety and health organizations | American National Standards Institute | [
"Technology"
] | 1,994 | [
"nan"
] |
664 | https://en.wikipedia.org/wiki/Astronaut | An astronaut (from the Ancient Greek ástron (ἄστρον), meaning 'star', and naútēs (ναύτης), meaning 'sailor') is a person trained, equipped, and deployed by a human spaceflight program to serve as a commander or crew member aboard a spacecraft. Although generally reserved for professional space travelers, the term is sometimes applied to anyone who travels into space, including scientists, politicians, journalists, and tourists.
"Astronaut" technically applies to all human space travelers regardless of nationality. However, astronauts fielded by Russia or the Soviet Union are typically known instead as cosmonauts (from the Russian "kosmos" (космос), meaning "space", also borrowed from Greek ). Comparatively recent developments in crewed spaceflight made by China have led to the rise of the term taikonaut (from the Mandarin "tàikōng" (), meaning "space"), although its use is somewhat informal and its origin is unclear. In China, the People's Liberation Army Astronaut Corps astronauts and their foreign counterparts are all officially called hángtiānyuán (, meaning "heaven navigator" or literally "heaven-sailing staff").
Since 1961, 600 astronauts have flown in space. Until 2002, astronauts were sponsored and trained exclusively by governments, either by the military or by civilian space agencies. With the suborbital flight of the privately funded SpaceShipOne in 2004, a new category of astronaut was created: the commercial astronaut.
Definition
The criteria for what constitutes human spaceflight vary, with some focus on the point where the atmosphere becomes so thin that centrifugal force, rather than aerodynamic force, carries a significant portion of the weight of the flight object. The Fédération Aéronautique Internationale (FAI) Sporting Code for astronautics recognizes only flights that exceed the Kármán line, at an altitude of 100 kilometres (62 mi). In the United States, professional, military, and commercial astronauts who travel above an altitude of 50 miles (80 km) are awarded astronaut wings.
To date, 552 people from 36 countries have reached 100 km or more in altitude, of whom 549 reached low Earth orbit or beyond.
Of these, 24 people have traveled beyond low Earth orbit, either to lunar orbit, the lunar surface, or, in one case, a loop around the Moon. Three of the 24—Jim Lovell, John Young and Eugene Cernan—did so twice.
Under the U.S. definition, 558 people qualify as having reached space, above 50 miles (80 km) in altitude. Of the eight X-15 pilots who exceeded 50 miles (80 km) in altitude, only one, Joseph A. Walker, exceeded 100 kilometers (about 62.1 miles), and he did so twice, becoming the first person in space twice. Space travelers have spent over 41,790 man-days (114.5 man-years) in space, including over 100 astronaut-days of spacewalks. The person with the longest cumulative time in space is Oleg Kononenko, who has spent over 1,100 days there. Peggy A. Whitson holds the record for the most time in space by a woman, at 675 days.
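These two boundary definitions can be expressed as a small classifier; a sketch, with the altitudes taken from the definitions above:

```python
# Classify an apogee against the FAI Karman line (100 km) and the
# U.S. 50-statute-mile boundary.
KARMAN_KM = 100.0
US_BOUNDARY_KM = 50 * 1.609344  # ~80.5 km

def spaceflight_status(altitude_km: float) -> str:
    if altitude_km >= KARMAN_KM:
        return "spaceflight under both FAI and U.S. definitions"
    if altitude_km >= US_BOUNDARY_KM:
        return "spaceflight under the U.S. definition only"
    return "not spaceflight under either definition"

print(spaceflight_status(106.0))  # a 106 km apogee: both definitions
print(spaceflight_status(95.0))   # U.S. definition only
```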
Terminology
In 1959, when both the United States and Soviet Union were planning, but had yet to launch humans into space, NASA Administrator T. Keith Glennan and his Deputy Administrator, Hugh Dryden, discussed whether spacecraft crew members should be called astronauts or cosmonauts. Dryden preferred "cosmonaut", on the grounds that flights would occur in and to the broader cosmos, while the "astro" prefix suggested flight specifically to the stars. Most NASA Space Task Group members preferred "astronaut", which survived by common usage as the preferred American term. When the Soviet Union launched the first man into space, Yuri Gagarin in 1961, they chose a term which anglicizes to "cosmonaut".
Astronaut
A professional space traveler is called an astronaut. The first known use of the term "astronaut" in the modern sense was by Neil R. Jones in his 1930 short story "The Death's Head Meteor". The word itself had been known earlier; for example, in Percy Greg's 1880 book Across the Zodiac, "astronaut" referred to a spacecraft. In Les Navigateurs de l'infini (1925) by J.-H. Rosny aîné, the word astronautique (astronautics) was used. The word may have been inspired by "aeronaut", an older term for an air traveler first applied in 1784 to balloonists. An early use of "astronaut" in a non-fiction publication is Eric Frank Russell's poem "The Astronaut", appearing in the November 1934 Bulletin of the British Interplanetary Society.
The first known formal use of the term astronautics in the scientific community was the establishment of the annual International Astronautical Congress in 1950, and the subsequent founding of the International Astronautical Federation the following year.
NASA applies the term astronaut to any crew member aboard NASA spacecraft bound for Earth orbit or beyond. NASA also uses the term as a title for those selected to join its Astronaut Corps. The European Space Agency similarly uses the term astronaut for members of its Astronaut Corps.
Cosmonaut
By convention, an astronaut employed by the Russian Federal Space Agency (or its predecessor, the Soviet space program) is called a cosmonaut in English texts. The word is an Anglicization of kosmonavt (космонавт). Other countries of the former Eastern Bloc use variations of the Russian kosmonavt, such as the Polish kosmonauta (although Poles also used astronauta, and the two words are considered synonyms).
Coinage of the term has been credited to Soviet aeronautics (or "cosmonautics") pioneer Mikhail Tikhonravov (1900–1974). The first cosmonaut was Soviet Air Force pilot Yuri Gagarin, also the first person in space. He was part of the first six Soviet citizens, with German Titov, Yevgeny Khrunov, Andriyan Nikolayev, Pavel Popovich, and Grigoriy Nelyubov, who were given the title of pilot-cosmonaut in January 1961. Valentina Tereshkova was the first female cosmonaut and the first and youngest woman to have flown in space with a solo mission on the Vostok 6 in 1963. On 14 March 1995, Norman Thagard became the first American to ride to space on board a Russian launch vehicle, and thus became the first "American cosmonaut".
Taikonaut
In Chinese, the term yǔhángyuán (宇航员, "cosmos navigating personnel") is used for astronauts and cosmonauts in general, while hángtiānyuán (航天员, "navigating celestial-heaven personnel") is used for Chinese astronauts. Here, hángtiān (航天, literally "heaven-navigating", or spaceflight) is strictly defined as the navigation of outer space within the local star system, i.e. the Solar System. The phrase tàikōngrén (太空人, "spaceman") is often used in Hong Kong and Taiwan.
The term taikonaut is used by some English-language news media organizations for professional space travelers from China. The word has featured in the Longman and Oxford English dictionaries, and the term became more common in 2003 when China sent its first astronaut Yang Liwei into space aboard the Shenzhou 5 spacecraft. This is the term used by Xinhua News Agency in the English version of the Chinese People's Daily since the advent of the Chinese space program. The origin of the term is unclear; as early as May 1998, Chiew Lee Yih from Malaysia used it in newsgroups.
Parastronaut
For its 2022 Astronaut Group, the European Space Agency envisioned recruiting an astronaut with a physical disability, a category it called "parastronauts", with the intention but not a guarantee of spaceflight. The categories of disability considered for the program were lower limb deficiency (whether through amputation or congenital), leg length difference, and short stature (less than 130 cm). On 23 November 2022, John McFall was selected to be the first ESA parastronaut.
Other terms
With the rise of space tourism, NASA and the Russian Federal Space Agency agreed to use the term "spaceflight participant" to distinguish those space travelers from professional astronauts on missions coordinated by those two agencies.
While no nation other than Russia (and previously the Soviet Union), the United States, and China has launched a crewed spacecraft, several other nations have sent people into space in cooperation with one of these countries, e.g. the Soviet-led Interkosmos program. Inspired partly by these missions, other synonyms for astronaut have entered occasional English usage. For example, the term spationaut (French spationaute) is sometimes used to describe French space travelers, from the Latin word spatium for "space"; the Malay term angkasawan (deriving from angkasa, meaning 'space') was used to describe participants in the Angkasawan program (note its similarity with the Indonesian term antariksawan). Plans of the Indian Space Research Organisation to launch its crewed Gaganyaan spacecraft have at times spurred public discussion of whether a term other than astronaut should be used for the crew members, with suggestions including vyomanaut (from the Sanskrit word meaning 'sky' or 'space') and gagannaut (from the Sanskrit word for 'sky'). In Finland, the NASA astronaut Timothy Kopra, a Finnish American, has sometimes been referred to as sisunautti, from the Finnish word sisu. Across Germanic languages, the word for "astronaut" typically translates to "space traveler", as with German Raumfahrer, Dutch ruimtevaarder, Swedish rymdfarare, and Norwegian romfarer.
As of 2021 in the United States, astronaut status is conferred on a person depending on the authorizing agency:
one who flies in a vehicle above 50 miles (80 km) for NASA or the military is considered an astronaut (with no qualifier)
one who flies in a vehicle to the International Space Station in a mission coordinated by NASA and Roscosmos is a spaceflight participant
one who flies above 50 miles (80 km) in a non-NASA vehicle as a crewmember and demonstrates activities during flight that are essential to public safety, or that contribute to human space flight safety, is considered a commercial astronaut by the Federal Aviation Administration
one who flies to the International Space Station as part of a "privately funded, dedicated commercial spaceflight on a commercial launch vehicle dedicated to the mission ... to conduct approved commercial and marketing activities on the space station (or in a commercial segment attached to the station)" is considered a private astronaut by NASA (as of 2020, nobody has yet qualified for this status)
a generally accepted but unofficial term for a paying non-crew passenger who flies above 50 miles (80 km) in a private non-NASA or military vehicle is space tourist (as of 2020, nobody had yet qualified for this status)
On July 20, 2021, the FAA issued an order redefining the eligibility criteria to be an astronaut in response to the private suborbital spaceflights of Jeff Bezos and Richard Branson. The new criteria state that one must have "[d]emonstrated activities during flight that were essential to public safety, or contributed to human space flight safety" to qualify as an astronaut. This new definition excludes Bezos and Branson.
Space travel milestones
The first human in space was Soviet Yuri Gagarin, who was launched on 12 April 1961, aboard Vostok 1 and orbited around the Earth for 108 minutes. The first woman in space was Soviet Valentina Tereshkova, who launched on 16 June 1963, aboard Vostok 6 and orbited Earth for almost three days.
Alan Shepard became the first American and second person in space on 5 May 1961, on a 15-minute sub-orbital flight aboard Freedom 7. The first American to orbit the Earth was John Glenn, aboard Friendship 7 on 20 February 1962. The first American woman in space was Sally Ride, during Space Shuttle Challenger's mission STS-7, on 18 June 1983. In 1992, Mae Jemison became the first African American woman to travel in space aboard STS-47.
Cosmonaut Alexei Leonov was the first person to conduct an extravehicular activity (EVA), (commonly called a "spacewalk"), on 18 March 1965, on the Soviet Union's Voskhod 2 mission. This was followed two and a half months later by astronaut Ed White who made the first American EVA on NASA's Gemini 4 mission.
The first crewed mission to orbit the Moon, Apollo 8, included American William Anders who was born in Hong Kong, making him the first Asian-born astronaut in 1968.
The Soviet Union, through its Intercosmos program, allowed people from other "socialist" (i.e. Warsaw Pact and other Soviet-allied) countries to fly on its missions, with the notable exceptions of France and Austria participating in Soyuz TM-7 and Soyuz TM-13, respectively. An example is Czechoslovak Vladimír Remek, the first cosmonaut from a country other than the Soviet Union or the United States, who flew to space in 1978 on a Soyuz-U rocket. Rakesh Sharma became the first Indian citizen to travel to space. He was launched aboard Soyuz T-11, on 2 April 1984.
On 23 July 1980, Pham Tuan of Vietnam became the first Asian in space when he flew aboard Soyuz 37. Also in 1980, Cuban Arnaldo Tamayo Méndez became the first person of Hispanic and black African descent to fly in space, and in 1983, Guion Bluford became the first African American to fly into space. In April 1985, Taylor Wang became the first ethnic Chinese person in space. The first person born in Africa to fly in space was Patrick Baudry (France), in 1985. In 1985, Saudi Arabian Prince Sultan Bin Salman Bin AbdulAziz Al-Saud became the first Arab Muslim astronaut in space. In 1988, Abdul Ahad Mohmand became the first Afghan to reach space, spending nine days aboard the Mir space station.
With the increase of seats on the Space Shuttle, the U.S. began taking international astronauts. In 1983, Ulf Merbold of West Germany became the first non-US citizen to fly in a US spacecraft. In 1984, Marc Garneau became the first of eight Canadian astronauts to fly in space (through 2010).
In 1985, Rodolfo Neri Vela became the first Mexican-born person in space. In 1991, Helen Sharman became the first Briton to fly in space.
In 2002, Mark Shuttleworth became the first citizen of an African country to fly in space, as a paying spaceflight participant. In 2003, Ilan Ramon became the first Israeli to fly in space, although he died during a re-entry accident.
On 15 October 2003, Yang Liwei became China's first astronaut on the Shenzhou 5 spacecraft.
On 30 May 2020, Doug Hurley and Bob Behnken became the first astronauts to launch to orbit on a private crewed spacecraft, Crew Dragon.
Age milestones
The youngest person to reach space is Oliver Daemen, who was 18 years and 11 months old when he made a suborbital spaceflight on Blue Origin NS-16. Daemen, who was a commercial passenger aboard the New Shepard, broke the record of Soviet cosmonaut Gherman Titov, who was 25 years old when he flew Vostok 2. Titov remains the youngest human to reach orbit; he rounded the planet 17 times. Titov was also the first person to suffer space sickness and the first person to sleep in space, twice. The oldest person to reach space is William Shatner, who was 90 years old when he made a suborbital spaceflight on Blue Origin NS-18. The oldest person to reach orbit is John Glenn, one of the Mercury 7, who was 77 when he flew on STS-95.
Duration and distance milestones
The longest time spent in space was by Russian Valeri Polyakov, who spent 438 days there.
As of 2006, the most spaceflights by an individual astronaut is seven, a record held by both Jerry L. Ross and Franklin Chang-Diaz. The farthest distance from Earth an astronaut has traveled was about 400,000 km, when Jim Lovell, Jack Swigert, and Fred Haise went around the Moon during the Apollo 13 emergency.
Civilian and non-government milestones
The first civilian in space was Valentina Tereshkova aboard Vostok 6 (she also became the first woman in space on that mission).
Tereshkova was only honorarily inducted into the USSR's Air Force, which did not accept female pilots at that time. A month later, Joseph Albert Walker became the first American civilian in space when his X-15 Flight 90 crossed the 100-kilometre line, qualifying him by the international definition of spaceflight. Walker had joined the US Army Air Force but was not a member during his flight.
The first people in space who had never been a member of any country's armed forces were both Konstantin Feoktistov and Boris Yegorov aboard Voskhod 1.
The first non-governmental space traveler was Byron K. Lichtenberg, a researcher from the Massachusetts Institute of Technology who flew on STS-9 in 1983. In December 1990, Toyohiro Akiyama became the first paying space traveler and the first journalist in space for Tokyo Broadcasting System, a visit to Mir as part of an estimated $12 million (USD) deal with a Japanese TV station, although at the time, the term used to refer to Akiyama was "Research Cosmonaut". Akiyama suffered severe space sickness during his mission, which affected his productivity.
The first self-funded space tourist was Dennis Tito on board the Russian spacecraft Soyuz TM-3 on 28 April 2001.
Self-funded travelers
The first person to fly on an entirely privately funded mission was Mike Melvill, piloting SpaceShipOne flight 15P on a suborbital journey, although he was a test pilot employed by Scaled Composites and not an actual paying space tourist. Jared Isaacman was the first person to self-fund a mission to orbit, commanding Inspiration4 in 2021. Nine others have paid Space Adventures to fly to the International Space Station:
Dennis Tito (American): 28 April – 6 May 2001
Mark Shuttleworth (South African): 25 April – 5 May 2002
Gregory Olsen (American): 1–11 October 2005
Anousheh Ansari (Iranian / American): 18–29 September 2006
Charles Simonyi (Hungarian / American): 7–21 April 2007, 26 March – 8 April 2009
Richard Garriott (British / American): 12–24 October 2008
Guy Laliberté (Canadian): 30 September 2009 – 11 October 2009
Yusaku Maezawa and Yozo Hirano (both Japanese): 8 – 24 December 2021
Training
The first NASA astronauts were selected for training in 1959. Early in the space program, military jet test piloting and engineering training were often cited as prerequisites for selection as an astronaut at NASA, although neither John Glenn nor Scott Carpenter (of the Mercury Seven) had a university degree, in engineering or any other discipline, at the time of their selection. Selection was initially limited to military pilots. The earliest astronauts for both the US and the USSR tended to be jet fighter pilots, and were often test pilots.
Once selected, NASA astronauts go through twenty months of training in a variety of areas, including training for extravehicular activity in a facility such as NASA's Neutral Buoyancy Laboratory. Astronauts-in-training (astronaut candidates) may also experience short periods of weightlessness (microgravity) in an aircraft called the "Vomit Comet," the nickname given to a pair of modified KC-135s (retired in 2000 and 2004, respectively, and replaced in 2005 with a C-9) which perform parabolic flights. Astronauts are also required to accumulate a number of flight hours in high-performance jet aircraft. This is mostly done in T-38 jet aircraft out of Ellington Field, due to its proximity to the Johnson Space Center. Ellington Field is also where the Shuttle Training Aircraft is maintained and developed, although most flights of the aircraft are conducted from Edwards Air Force Base.
Astronauts in training must learn how to control and fly the Space Shuttle; further, it is vital that they are familiar with the International Space Station so they know what they must do when they get there.
NASA candidacy requirements
The candidate must be a citizen of the United States.
The candidate must complete a master's degree in a STEM field, including engineering, biological science, physical science, computer science or mathematics.
The candidate must have at least two years of related professional experience obtained after degree completion or at least 1,000 hours pilot-in-command time on jet aircraft.
The candidate must be able to pass the NASA long-duration flight astronaut physical.
The candidate must also have skills in leadership, teamwork and communications.
The master's degree requirement can also be met by:
Two years of work toward a doctoral program in a related science, technology, engineering or math field.
A completed Doctor of Medicine or Doctor of Osteopathic Medicine degree.
Completion of a nationally recognized test pilot school program.
Mission Specialist Educator
Applicants must have a bachelor's degree with teaching experience, including work at the kindergarten through twelfth grade level. An advanced degree, such as a master's degree or a doctoral degree, is not required, but is strongly desired.
Mission Specialist Educators, or "Educator Astronauts", were first selected in 2004; as of 2007, there are three NASA Educator astronauts: Joseph M. Acaba, Richard R. Arnold, and Dorothy Metcalf-Lindenburger.
Barbara Morgan, selected as back-up teacher to Christa McAuliffe in 1985, is considered to be the first Educator astronaut by the media, but she trained as a mission specialist.
The Educator Astronaut program is a successor to the Teacher in Space program from the 1980s.
Health risks of space travel
Astronauts are susceptible to a variety of health risks including decompression sickness, barotrauma, immunodeficiencies, loss of bone and muscle, loss of eyesight, orthostatic intolerance, sleep disturbances, and radiation injury. A variety of large scale medical studies are being conducted in space via the National Space Biomedical Research Institute (NSBRI) to address these issues. Prominent among these is the Advanced Diagnostic Ultrasound in Microgravity Study in which astronauts (including former ISS commanders Leroy Chiao and Gennady Padalka) perform ultrasound scans under the guidance of remote experts to diagnose and potentially treat hundreds of medical conditions in space. This study's techniques are now being applied to cover professional and Olympic sports injuries as well as ultrasound performed by non-expert operators in medical and high school students. It is anticipated that remote guided ultrasound will have application on Earth in emergency and rural care situations, where access to a trained physician is often rare.
A 2006 Space Shuttle experiment found that Salmonella typhimurium, a bacterium that can cause food poisoning, became more virulent when cultivated in space. More recently, in 2017, bacteria were found to be more resistant to antibiotics and to thrive in the near-weightlessness of space. Microorganisms have been observed to survive the vacuum of outer space.
On 31 December 2012, a NASA-supported study reported that human spaceflight may harm the brain and accelerate the onset of Alzheimer's disease.
In October 2015, the NASA Office of Inspector General issued a health hazards report related to space exploration, including a human mission to Mars.
Over the last decade, flight surgeons and scientists at NASA have seen a pattern of vision problems in astronauts on long-duration space missions. The syndrome, known as visual impairment intracranial pressure (VIIP), has been reported in nearly two-thirds of space explorers after long periods spent aboard the International Space Station (ISS).
On 2 November 2017, scientists reported that significant changes in the position and structure of the brain have been found in astronauts who have taken trips in space, based on MRI studies. Astronauts who took longer space trips were associated with greater brain changes.
Being in space can be physiologically deconditioning on the body. It can affect the otolith organs and adaptive capabilities of the central nervous system. Zero gravity and cosmic rays can cause many implications for astronauts.
In October 2018, NASA-funded researchers found that lengthy journeys into outer space, including travel to the planet Mars, may substantially damage the gastrointestinal tissues of astronauts. The studies support earlier work that found such journeys could significantly damage the brains of astronauts, and age them prematurely.
Researchers in 2018 reported, after detecting the presence on the International Space Station (ISS) of five Enterobacter bugandensis bacterial strains, none pathogenic to humans, that microorganisms on ISS should be carefully monitored to continue assuring a medically healthy environment for astronauts.
A study by Russian scientists published in April 2019 stated that astronauts facing space radiation could experience temporary hindrance of their memory centers. While this does not affect their intellectual capabilities, it temporarily hinders the formation of new cells in the brain's memory centers. The study, conducted by the Moscow Institute of Physics and Technology (MIPT), reached this conclusion after observing that exposing mice to neutron and gamma radiation did not impact their intellectual capabilities.
A 2020 study conducted on the brains of eight male Russian cosmonauts after they returned from long stays aboard the International Space Station showed that long-duration spaceflight causes many physiological adaptations, including macro- and microstructural changes. While scientists still know little about the effects of spaceflight on brain structure, this study showed that space travel can lead to new motor skills (dexterity) but also slightly weaker vision, both of which could possibly be long-lasting. It was the first study to provide clear evidence of sensorimotor neuroplasticity, the brain's ability to change through growth and reorganization.
Food and drink
An astronaut on the International Space Station is allotted a planned mass of food per meal each day, a figure that includes the mass of the packaging.
Space Shuttle astronauts worked with nutritionists to select menus that appealed to their individual tastes. Five months before flight, menus were selected and analyzed for nutritional content by the shuttle dietician. Foods are tested to see how they will react in a reduced-gravity environment. Caloric requirements are determined using a basal energy expenditure (BEE) formula. On board the ISS, astronauts limit their water use to a small fraction of what the average American uses on Earth each day.
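The text does not specify which BEE formula NASA's dieticians use; a common one is the Harris-Benedict equation, sketched below as an illustration only:

```python
# Harris-Benedict basal energy expenditure (kcal/day) - one common BEE
# formula, shown here as an assumption; the exact variant NASA uses is
# not stated in the text above.

def bee_kcal(weight_kg: float, height_cm: float, age_yr: float,
             male: bool) -> float:
    """Harris-Benedict basal energy expenditure, kcal/day."""
    if male:
        return 66.5 + 13.75 * weight_kg + 5.003 * height_cm - 6.775 * age_yr
    return 655.1 + 9.563 * weight_kg + 1.850 * height_cm - 4.676 * age_yr

# A hypothetical 75 kg, 178 cm, 40-year-old male astronaut:
print(round(bee_kcal(75, 178, 40, male=True)))  # ~1717 kcal/day baseline
```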
Insignia
In Russia, cosmonauts are awarded Pilot-Cosmonaut of the Russian Federation upon completion of their missions, often accompanied with the award of Hero of the Russian Federation. This follows the practice established in the USSR where cosmonauts were usually awarded the title Hero of the Soviet Union.
At NASA, those who complete astronaut candidate training receive a silver lapel pin. Once they have flown in space, they receive a gold pin. U.S. astronauts who also have active-duty military status receive a special qualification badge, known as the Astronaut Badge, after participation on a spaceflight. The United States Air Force also presents an Astronaut Badge to its pilots who exceed 50 miles (80 km) in altitude.
Deaths
To date, eighteen astronauts (fourteen men and four women) have died during four space flights. By nationality, thirteen were American, four were Russian (Soviet Union), and one was Israeli.
To date, eleven people (all men) have died training for spaceflight: eight Americans and three Russians. Six were killed in crashes of training jet aircraft, one drowned during water recovery training, and four died in fires in pure oxygen environments.
Astronaut David Scott left a memorial consisting of a statuette titled Fallen Astronaut on the surface of the Moon during his 1971 Apollo 15 mission, along with a list of the names of eight of the astronauts and six cosmonauts known at the time to have died in service.
The Space Mirror Memorial, which stands on the grounds of the Kennedy Space Center Visitor Complex, is maintained by the Astronauts Memorial Foundation and commemorates the lives of the men and women who have died during spaceflight and during training in the space programs of the United States. In addition to twenty NASA career astronauts, the memorial includes the names of an X-15 test pilot, a U.S. Air Force officer who died while training for a then-classified military space program, and a civilian spaceflight participant.
See also
Explanatory notes
References
External links
NASA: How to become an astronaut 101
List of International partnership organizations
Encyclopedia Astronautica: Phantom cosmonauts
collectSPACE: Astronaut appearances calendar
spacefacts Spacefacts.de
Manned astronautics: facts and figures
Astronaut Candidate Brochure online
Science occupations
1959 introductions | Astronaut | [
"Biology"
] | 5,887 | [
"Astronauts",
"Space-flown life"
] |