Radiative cooling

https://en.wikipedia.org/wiki/Radiative%20cooling

In the study of heat transfer, radiative cooling is the process by which a body loses heat by thermal radiation. As Planck's law describes, every physical body spontaneously and continuously emits electromagnetic radiation.
Radiative cooling has been applied in various contexts throughout human history, including ice making in India and Iran, heat shields for spacecraft, and in architecture. In 2014, a scientific breakthrough in the use of photonic metamaterials made daytime radiative cooling possible. It has since been proposed, under the name passive daytime radiative cooling, as a strategy to mitigate local and global warming caused by greenhouse gas emissions.
Terrestrial radiative cooling
Mechanism
Infrared radiation can pass through dry, clear air in the wavelength range of 8–13 μm. Materials that can absorb energy and radiate it in those wavelengths exhibit a strong cooling effect. Materials that can also reflect 95% or more of sunlight in the 200 nanometres to 2.5 μm range can exhibit cooling even in direct sunlight.
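To put rough numbers on this mechanism, the sketch below estimates the net radiative cooling power of a surface exchanging heat with the sky via the Stefan–Boltzmann law. It is a gray-body simplification; the emissivity, surface temperature, and effective sky temperature are illustrative assumptions, not values from this article.

```python
# Net radiative cooling power of a surface exchanging heat with the sky,
# using the Stefan-Boltzmann law. All parameter values are illustrative
# assumptions, not figures from the article.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def net_cooling_power(t_surface_k, t_sky_k, emissivity=0.95, absorbed_solar=0.0):
    """Net power radiated per square metre (positive = the surface is cooling)."""
    emitted = emissivity * SIGMA * t_surface_k**4        # thermal emission from the surface
    absorbed_sky = emissivity * SIGMA * t_sky_k**4       # back-radiation absorbed from the sky
    return emitted - absorbed_sky - absorbed_solar

# Example: a surface at 300 K under a clear sky with an effective temperature of 270 K.
print(net_cooling_power(300.0, 270.0))   # roughly 150 W/m^2
```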
Earth's energy budget
The Earth-atmosphere system is radiatively cooled, emitting long-wave (infrared) radiation which balances the absorption of short-wave (visible light) energy from the sun.
Convective transport of heat, and evaporative transport of latent heat are both important in removing heat from the surface and distributing it in the atmosphere. Pure radiative transport is more important higher up in the atmosphere. Diurnal and geographical variation further complicate the picture.
The large-scale circulation of the Earth's atmosphere is driven by the difference in absorbed solar radiation per square meter, as the sun heats the Earth more in the Tropics, mostly because of geometrical factors. The atmospheric and oceanic circulation redistributes some of this energy as sensible heat and latent heat partly via the mean flow and partly via eddies, known as cyclones in the atmosphere. Thus the tropics radiate less to space than they would if there were no circulation, and the poles radiate more; however in absolute terms the tropics radiate more energy to space.
Nocturnal surface cooling
Radiative cooling is commonly experienced on cloudless nights, when heat is radiated into outer space from Earth's surface, or from the skin of a human observer. The effect is well-known among amateur astronomers.
The effect can be experienced by comparing skin temperature from looking straight up into a cloudless night sky for several seconds, to that after placing a sheet of paper between the face and the sky. Since outer space radiates at an effective temperature of only about 3 K (around −270 °C), and the sheet of paper radiates at about 300 K (around room temperature), the sheet of paper radiates more heat to the face than does the darkened cosmos. The effect is blunted by Earth's surrounding atmosphere, and particularly the water vapor it contains, so the apparent temperature of the sky is far warmer than outer space. The sheet does not block the cold, but instead reflects heat to the face and radiates the heat of the face that it just absorbed.
The same radiative cooling mechanism can cause frost or black ice to form on surfaces exposed to the clear night sky, even when the ambient temperature does not fall below freezing.
Kelvin's estimate of the Earth's age
The term radiative cooling is generally used for local processes, though the same principles apply to cooling over geological time, which was first used by Kelvin to estimate the age of the Earth (although his estimate ignored the substantial heat released by radioisotope decay, not known at the time, and the effects of convection in the mantle).
Astronomy
Radiative cooling is one of the few ways an object in space can give off energy. In particular, white dwarf stars are no longer generating energy by fusion or gravitational contraction, and have no solar wind. So the only way their temperature changes is by radiative cooling. This makes their temperature as a function of age very predictable, so by observing the temperature, astronomers can deduce the age of the star.
Applications
Climate change
Architecture
Cool roofs combine high solar reflectance with high infrared emittance, thereby simultaneously reducing heat gain from the sun and increasing heat removal through radiation. Radiative cooling thus offers potential for passive cooling for residential and commercial buildings. Traditional building surfaces, such as paint coatings, brick and concrete have high emittances of up to 0.96. They radiate heat into the sky to passively cool buildings at night. If made sufficiently reflective to sunlight, these materials can also achieve radiative cooling during the day.
The most common radiative coolers found on buildings are white cool-roof paint coatings, which have solar reflectances of up to 0.94, and thermal emittances of up to 0.96. The solar reflectance of the paints arises from optical scattering by the dielectric pigments embedded in the polymer paint resin, while the thermal emittance arises from the polymer resin. However, because typical white pigments like titanium dioxide and zinc oxide absorb ultraviolet radiation, the solar reflectances of paints based on such pigments do not exceed 0.95.
In 2014, researchers developed the first daytime radiative cooler using a multi-layer thermal photonic structure that selectively emits long wavelength infrared radiation into space, and can achieve 5 °C sub-ambient cooling under direct sunlight. Later researchers developed paintable porous polymer coatings, whose pores scatter sunlight to give solar reflectance of 0.96-0.99 and thermal emittance of 0.97. In experiments under direct sunlight, the coatings achieve 6 °C sub-ambient temperatures and cooling powers of 96 W/m2.
Other notable radiative cooling strategies include dielectric films on metal mirrors, and polymer or polymer composites on silver or aluminum films. Silvered polymer films with solar reflectances of 0.97 and thermal emittance of 0.96, which remain 11 °C cooler than commercial white paints under the mid-summer sun, were reported in 2015. Researchers explored designs with dielectric silicon dioxide or silicon carbide particles embedded in polymers that are translucent in the solar wavelengths and emissive in the infrared. In 2017, an example of this design with resonant polar silica microspheres randomly embedded in a polymeric matrix, was reported. The material is translucent to sunlight and has infrared emissivity of 0.93 in the infrared atmospheric transmission window. When backed with silver coating, the material achieved a midday radiative cooling power of 93 W/m2 under direct sunshine along with high-throughput, economical roll-to-roll manufacturing.
Heat shields
High emissivity coatings that facilitate radiative cooling may be used in reusable thermal protection systems (RTPS) in spacecraft and hypersonic aircraft. In such heat shields a high emissivity material, such as molybdenum disilicide (MoSi2), is applied on a thermally insulating ceramic substrate. In such heat shields high levels of total emissivity, typically in the range 0.8–0.9, need to be maintained across a range of high temperatures. Planck's law dictates that at higher temperatures the radiative emission peak shifts to lower wavelengths (higher frequencies), influencing material selection as a function of operating temperature. In addition to effective radiative cooling, radiative thermal protection systems should provide damage tolerance and may incorporate self-healing functions through the formation of a viscous glass at high temperatures.
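The peak-shift behaviour described above follows from Wien's displacement law, which is derived from Planck's law. The short sketch below evaluates it for a few temperatures; the temperatures are chosen only for illustration.

```python
# Wien's displacement law: the wavelength of peak thermal emission moves to
# shorter wavelengths as temperature rises. Temperatures below are illustrative.

WIEN_B = 2.898e-3  # Wien's displacement constant, m*K

def peak_wavelength_um(temperature_k):
    """Wavelength of peak blackbody emission, in micrometres."""
    return WIEN_B / temperature_k * 1e6

for t in (300, 1000, 2000):   # room temperature vs. heat-shield-class temperatures
    print(t, "K ->", round(peak_wavelength_um(t), 2), "um")
# 300 K -> ~9.7 um, 1000 K -> ~2.9 um, 2000 K -> ~1.4 um
```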
James Webb Space Telescope
The James Webb Space Telescope uses radiative cooling to reach its operation temperature of about 50 K. To do this, its large reflective sunshield blocks radiation from the Sun, Earth, and Moon. The telescope structure, kept permanently in shadow by the sunshield, then cools by radiation.
Nocturnal ice making in early India and Iran
Before the invention of artificial refrigeration technology, ice making by nocturnal cooling was common in both India and Iran.
In India, such apparatuses consisted of a shallow ceramic tray with a thin layer of water, placed outdoors with a clear exposure to the night sky. The bottom and sides were insulated with a thick layer of hay. On a clear night the water would lose heat by radiation upwards. Provided the air was calm and not too far above freezing, heat gain from the surrounding air by convection was low enough to allow the water to freeze.
In Iran, this involved making large flat ice pools, which consisted of a reflection pool of water built on a bed of highly insulative material surrounded by high walls. The high walls provided protection against convective warming, the insulative material of the pool walls would protect against conductive heating from the ground, the large flat plane of water would then permit evaporative and radiative cooling to take place.
Types
The three basic types of radiant cooling are direct, indirect, and fluorescent:
Direct radiant cooling - In a building designed to optimize direct radiative cooling, the building roof acts as a heat sink to absorb the daily internal loads. The roof acts as the best heat sink because it is the greatest surface exposed to the night sky. Radiative heat transfer with the night sky removes heat from the building roof, thus cooling the building structure. Roof ponds are an example of this strategy. The roof pond design became popular with the development of the Skytherm system designed by Harold Hay in 1977. There are various designs and configurations for the roof pond system but the concept is the same for all designs. The roof uses water, either plastic bags filled with water or an open pond, as the heat sink while a system of movable insulation panels regulates the mode of heating or cooling. During daytime in the summer, the water on the roof is protected from the solar radiation and ambient air temperature by movable insulation, which allows it to serve as a heat sink and absorb the heat generated inside through the ceiling. At night, the panels are retracted to allow nocturnal radiation between the roof pond and the night sky, thus removing the stored heat. In winter, the process is reversed so that the roof pond is allowed to absorb solar radiation during the day and release it during the night into the space below.
Indirect radiant cooling - A heat transfer fluid removes heat from the building structure through radiative heat transfer with the night sky. A common design for this strategy involves a plenum between the building roof and the radiator surface. Air is drawn into the building through the plenum, cooled by the radiator, and cools the mass of the building structure. During the day, the building mass acts as a heat sink.
Fluorescent radiant cooling - An object can be made fluorescent: it will then absorb light at some wavelengths, but radiate the energy away again at other, selected wavelengths. By selectively radiating heat in the infrared atmospheric window, a range of frequencies in which the atmosphere is unusually transparent, an object can effectively use outer space as a heat sink, and cool to well below ambient air temperature.
See also
Heat shield
Optical solar reflector, used for thermal control of spacecraft
Passive cooling
Radiative forcing
Stefan–Boltzmann law
Terrestrial albedo effect
Urban heat island
Urban thermal plume
References
Thermodynamics
Atmospheric radiation
General circulation model

https://en.wikipedia.org/wiki/General%20circulation%20model

A general circulation model (GCM) is a type of climate model. It employs a mathematical model of the general circulation of a planetary atmosphere or ocean. It uses the Navier–Stokes equations on a rotating sphere with thermodynamic terms for various energy sources (radiation, latent heat). These equations are the basis for computer programs used to simulate the Earth's atmosphere or oceans. Atmospheric and oceanic GCMs (AGCMs and OGCMs) are key components of global climate models, along with sea ice and land-surface components.
GCMs and global climate models are used for weather forecasting, understanding the climate, and forecasting climate change.
Atmospheric GCMs (AGCMs) model the atmosphere and impose sea surface temperatures as boundary conditions. Coupled atmosphere-ocean GCMs (AOGCMs, e.g. HadCM3, EdGCM, GFDL CM2.X, ARPEGE-Climat) combine the two models. The first general circulation climate model that combined both oceanic and atmospheric processes was developed in the late 1960s at the NOAA Geophysical Fluid Dynamics Laboratory. AOGCMs represent the pinnacle of complexity in climate models and internalise as many processes as possible. However, they are still under development and uncertainties remain. They may be coupled to models of other processes, such as the carbon cycle, so as to better model feedback effects. Such integrated multi-system models are sometimes referred to as either "earth system models" or "global climate models."
Versions designed for decade to century time scale climate applications were originally created by Syukuro Manabe and Kirk Bryan at the Geophysical Fluid Dynamics Laboratory (GFDL) in Princeton, New Jersey. These models are based on the integration of a variety of fluid dynamical, chemical and sometimes biological equations.
Terminology
The acronym GCM originally stood for General Circulation Model. Recently, a second meaning came into use, namely Global Climate Model. While these do not refer to the same thing, General Circulation Models are typically the tools used for modelling climate, and hence the two terms are sometimes used interchangeably. However, the term "global climate model" is ambiguous and may refer to an integrated framework that incorporates multiple components including a general circulation model, or may refer to the general class of climate models that use a variety of means to represent the climate mathematically.
Atmospheric and oceanic models
Atmospheric (AGCMs) and oceanic GCMs (OGCMs) can be coupled to form an atmosphere-ocean coupled general circulation model (CGCM or AOGCM). With the addition of submodels such as a sea ice model or a model for evapotranspiration over land, AOGCMs become the basis for a full climate model.
Structure
General Circulation Models (GCMs) discretise the equations for fluid motion and energy transfer and integrate these over time. Unlike simpler models, which make mixing assumptions, GCMs divide the atmosphere and/or oceans into grids of discrete "cells" that represent computational units. Processes internal to a cell, such as convection, that occur on scales too small to be resolved directly are parameterised at the cell level, while other functions govern the interface between cells.
Three-dimensional (more properly four-dimensional) GCMs apply discrete equations for fluid motion and integrate these forward in time. They contain parameterisations for processes such as convection that occur on scales too small to be resolved directly.
A simple general circulation model (SGCM) consists of a dynamic core that relates properties such as temperature to others such as pressure and velocity. Examples are programs that solve the primitive equations, given energy input and energy dissipation in the form of scale-dependent friction, so that atmospheric waves with the highest wavenumbers are most attenuated. Such models may be used to study atmospheric processes, but are not suitable for climate projections.
Atmospheric GCMs (AGCMs) model the atmosphere (and typically contain a land-surface model as well) using imposed sea surface temperatures (SSTs). They may include atmospheric chemistry.
AGCMs consist of a dynamical core which integrates the equations of fluid motion, typically for:
surface pressure
horizontal components of velocity in layers
temperature and water vapor in layers
radiation, split into solar/short wave and terrestrial/infrared/long wave
parameters for:
convection
land surface processes
albedo
hydrology
cloud cover
A GCM contains prognostic equations that are a function of time (typically winds, temperature, moisture, and surface pressure) together with diagnostic equations that are evaluated from them for a specific time period. As an example, pressure at any height can be diagnosed by applying the hydrostatic equation to the predicted surface pressure and the predicted values of temperature between the surface and the height of interest. Pressure is used to compute the pressure gradient force in the time-dependent equation for the winds.
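A minimal sketch of this diagnostic step is shown below: it integrates the hydrostatic equation upward layer by layer from a predicted surface pressure, assuming dry air. The layer thickness and the temperature profile are invented for illustration, not model output.

```python
# Diagnosing pressure aloft from a predicted surface pressure and predicted
# layer temperatures, by integrating the hydrostatic equation upward.
# The 500 m layers and linear temperature profile are illustrative assumptions.
import math

G = 9.81        # gravitational acceleration, m/s^2
R_DRY = 287.0   # specific gas constant for dry air, J kg^-1 K^-1

def pressure_profile(p_surface_pa, layer_temps_k, dz_m):
    """Pressure at the top of each layer of thickness dz_m, given layer-mean temperatures."""
    pressures = []
    p = p_surface_pa
    for t in layer_temps_k:
        p *= math.exp(-G * dz_m / (R_DRY * t))   # hydrostatic step through one layer
        pressures.append(p)
    return pressures

# Ten 500 m layers with a simple linearly decreasing temperature profile.
layer_temps = [288.0 - 3.25 * k for k in range(10)]
print(pressure_profile(101325.0, layer_temps, 500.0))
# roughly 95.5 kPa, 89.9 kPa, ... down to about 54 kPa at 5 km
```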
OGCMs model the ocean (with fluxes from the atmosphere imposed) and may contain a sea ice model. For example, the standard resolution of HadOM3 is 1.25 degrees in latitude and longitude, with 20 vertical levels, leading to approximately 1,500,000 variables.
AOGCMs (e.g. HadCM3, GFDL CM2.X) combine the two submodels. They remove the need to specify fluxes across the interface of the ocean surface. These models are the basis for model predictions of future climate, such as are discussed by the IPCC. AOGCMs internalise as many processes as possible. They have been used to provide predictions at a regional scale. While the simpler models are generally susceptible to analysis and their results are easier to understand, AOGCMs may be nearly as hard to analyse as the climate itself.
Grid
The fluid equations for AGCMs are made discrete using either the finite difference method or the spectral method. For finite differences, a grid is imposed on the atmosphere. The simplest grid uses constant angular grid spacing (i.e., a latitude / longitude grid). However, non-rectangular grids (e.g., icosahedral) and grids of variable resolution are more often used. The LMDz model can be arranged to give high resolution over any given section of the planet. HadGEM1 (and other ocean models) use an ocean grid with higher resolution in the tropics to help resolve processes believed to be important for the El Niño Southern Oscillation (ENSO). Spectral models generally use a Gaussian grid, because of the mathematics of transformation between spectral and grid-point space. Typical AGCM resolutions are between 1 and 5 degrees in latitude or longitude: HadCM3, for example, uses 3.75 in longitude and 2.5 degrees in latitude, giving a grid of 96 by 73 points (96 x 72 for some variables); and has 19 vertical levels. This results in approximately 500,000 "basic" variables, since each grid point has four variables (u,v, T, Q), though a full count would give more (clouds; soil levels). HadGEM1 uses a grid of 1.875 degrees in longitude and 1.25 in latitude in the atmosphere; HiGEM, a high-resolution variant, uses 1.25 x 0.83 degrees respectively. These resolutions are lower than is typically used for weather forecasting. Ocean resolutions tend to be higher, for example HadCM3 has 6 ocean grid points per atmospheric grid point in the horizontal.
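The "approximately 500,000" figure can be reproduced from the quoted grid dimensions with simple arithmetic; the snippet below uses only the numbers already given in this paragraph.

```python
# Rough "basic variable" count for an atmospheric grid like HadCM3's, using the
# figures quoted in the text: 96 x 73 points, 19 levels, and four prognostic
# variables (u, v, T, Q) per grid point.
lon_points, lat_points, levels, variables_per_point = 96, 73, 19, 4
basic_variables = lon_points * lat_points * levels * variables_per_point
print(basic_variables)  # 532,608 -- approximately 500,000, as stated
```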
For a standard finite difference model, uniform gridlines converge towards the poles. This would lead to computational instabilities (see CFL condition) and so the model variables must be filtered along lines of latitude close to the poles. Ocean models suffer from this problem too, unless a rotated grid is used in which the North Pole is shifted onto a nearby landmass. Spectral models do not suffer from this problem. Some experiments use geodesic grids and icosahedral grids, which (being more uniform) do not have pole-problems. Another approach to solving the grid spacing problem is to deform a Cartesian cube such that it covers the surface of a sphere.
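The pole problem can be made concrete with a CFL-style estimate: the east-west grid spacing shrinks with the cosine of latitude, so the largest stable advective time step shrinks with it. In the sketch below, the wind speed and the 3.75-degree longitudinal spacing are illustrative assumptions.

```python
# CFL-style estimate of the largest stable advective time step on a
# latitude/longitude grid: east-west spacing shrinks as cos(latitude), so the
# permissible time step collapses near the poles unless the fields are
# filtered or a different grid is used. Numbers are illustrative.
import math

EARTH_RADIUS_M = 6.371e6

def max_timestep_s(dlon_deg, latitude_deg, wind_speed_ms=100.0):
    """Largest advective time step allowed by dx/u at a given latitude."""
    dx = EARTH_RADIUS_M * math.radians(dlon_deg) * math.cos(math.radians(latitude_deg))
    return dx / wind_speed_ms

for lat in (0, 60, 85, 89):
    print(lat, "deg ->", round(max_timestep_s(3.75, lat)), "s")
# the step allowed at 89 degrees latitude is nearly 60 times smaller than at the equator
```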
Flux buffering
Some early versions of AOGCMs required an ad hoc process of "flux correction" to achieve a stable climate. This resulted from separately prepared ocean and atmospheric models that each used an implicit flux from the other component different from the flux that component could actually produce. Such a model failed to match observations. However, if the fluxes were 'corrected', the factors that led to these unrealistic fluxes might be unrecognised, which could affect model sensitivity. As a result, the vast majority of models used in the current round of IPCC reports do not use them. The model improvements that now make flux corrections unnecessary include improved ocean physics, improved resolution in both atmosphere and ocean, and more physically consistent coupling between atmosphere and ocean submodels. Improved models now maintain stable, multi-century simulations of surface climate that are considered to be of sufficient quality to allow their use for climate projections.
Convection
Moist convection releases latent heat and is important to the Earth's energy budget. Convection occurs on too small a scale to be resolved by climate models, and hence it must be handled via parameters. This has been done since the 1950s. Akio Arakawa did much of the early work, and variants of his scheme are still used, although a variety of different schemes are now in use. Clouds are also typically handled with a parameterisation, for a similar reason: they occur on scales too small to be resolved directly. Limited understanding of clouds has limited the success of this strategy, but this is not due to some inherent shortcoming of the method.
Software
Most models include software to diagnose a wide range of variables for comparison with observations or study of atmospheric processes. An example is the 2-metre temperature, which is the standard height for near-surface observations of air temperature. This temperature is not directly predicted from the model but is deduced from surface and lowest-model-layer temperatures. Other software is used for creating plots and animations.
Projections
Coupled AOGCMs use transient climate simulations to project/predict climate changes under various scenarios. These can be idealised scenarios (most commonly, CO2 emissions increasing at 1%/yr) or based on recent history (usually the "IS92a" or more recently the SRES scenarios). Which scenarios are most realistic remains uncertain.
The 2001 IPCC Third Assessment Report Figure 9.3 shows the global mean response of 19 different coupled models to an idealised experiment in which emissions increased at 1% per year. Figure 9.5 shows the response of a smaller number of models to more recent trends. For the 7 climate models shown there, the temperature change to 2100 varies from 2 to 4.5 °C with a median of about 3 °C.
Future scenarios do not include unknown events, for example volcanic eruptions or changes in solar forcing. These effects are believed to be small in comparison to greenhouse gas (GHG) forcing in the long term, but large volcanic eruptions, for example, can exert a substantial temporary cooling effect.
Human GHG emissions are a model input, although it is possible to include an economic/technological submodel to provide these as well. Atmospheric GHG levels are usually supplied as an input, though it is possible to include a carbon cycle model that reflects vegetation and oceanic processes to calculate such levels.
Emissions scenarios
For the six SRES marker scenarios, IPCC (2007:7–8) gave a "best estimate" of global mean temperature increase (2090–2099 relative to the period 1980–1999) of 1.8 °C to 4.0 °C. Over the same time period, the "likely" range (greater than 66% probability, based on expert judgement) for these scenarios was for a global mean temperature increase of 1.1 to 6.4 °C.
In 2008 a study made climate projections using several emission scenarios. In a scenario where global emissions start to decrease by 2010 and then decline at a sustained rate of 3% per year, the likely global average temperature increase was predicted to be 1.7 °C above pre-industrial levels by 2050, rising to around 2 °C by 2100. In a projection designed to simulate a future where no efforts are made to reduce global emissions, the likely rise in global average temperature was predicted to be 5.5 °C by 2100. A rise as high as 7 °C was thought possible, although less likely.
Another no-reduction scenario resulted in a median warming over land (2090–99 relative to the period 1980–99) of 5.1 °C. Under the same emissions scenario but with a different model, the predicted median warming was 4.1 °C.
Model accuracy
AOGCMs internalise as many processes as are sufficiently understood. However, they are still under development and significant uncertainties remain. They may be coupled to models of other processes in Earth system models, such as the carbon cycle, so as to better model feedbacks. Most recent simulations show "plausible" agreement with the measured temperature anomalies over the past 150 years, when driven by observed changes in greenhouse gases and aerosols. Agreement improves by including both natural and anthropogenic forcings.
Imperfect models may nevertheless produce useful results. GCMs are capable of reproducing the general features of the observed global temperature over the past century.
A debate over how to reconcile climate model predictions that upper-air (tropospheric) warming should be greater than surface warming with observations, some of which appeared to show otherwise, was resolved in favour of the models following data revisions.
Cloud effects are a significant area of uncertainty in climate models. Clouds have competing effects on climate. They cool the surface by reflecting sunlight into space; they warm it by increasing the amount of infrared radiation transmitted from the atmosphere to the surface. In the 2001 IPCC report possible changes in cloud cover were highlighted as a major uncertainty in predicting climate.
Climate researchers around the world use climate models to understand the climate system. Thousands of papers have been published about model-based studies. Part of this research is to improve the models.
In 2000, a comparison between measurements and dozens of GCM simulations of ENSO-driven tropical precipitation, water vapor, temperature, and outgoing longwave radiation found similarity between measurements and simulation of most factors. However the simulated change in precipitation was about one-fourth less than what was observed. Errors in simulated precipitation imply errors in other processes, such as errors in the evaporation rate that provides moisture to create precipitation. The other possibility is that the satellite-based measurements are in error. Either indicates progress is required in order to monitor and predict such changes.
The precise magnitude of future changes in climate is still uncertain; for the end of the 21st century (2071 to 2100), for SRES scenario A2, the change in global average SAT from AOGCMs compared with 1961 to 1990 is +3.0 °C (5.4 °F) and the range is +1.3 to +4.5 °C (+2.3 to 8.1 °F).
The IPCC's Fifth Assessment Report asserted "very high confidence that models reproduce the general features of the global-scale annual mean surface temperature increase over the historical period". However, the report also observed that the rate of warming over the period 1998–2012 was lower than that predicted by 111 out of 114 Coupled Model Intercomparison Project climate models.
Relation to weather forecasting
The global climate models used for climate projections are similar in structure to (and often share computer code with) numerical models for weather prediction, but are nonetheless logically distinct.
Most weather forecasting is done on the basis of interpreting numerical model results. Since forecasts typically cover a few days or a week and sea surface temperatures change relatively slowly, such models do not usually contain an ocean model but rely on imposed SSTs. They also require accurate initial conditions to begin the forecast; typically these are taken from the output of a previous forecast, blended with observations. Weather predictions are required at higher temporal resolutions than climate projections, often sub-hourly compared to monthly or yearly averages for climate. However, because weather forecasts only cover around 10 days, the models can also be run at higher vertical and horizontal resolutions than climate models. Currently the ECMWF global forecast model runs at a considerably finer grid spacing than typical climate model runs. Often local models are run using global model results for boundary conditions, to achieve higher local resolution: for example, the Met Office runs a high-resolution mesoscale model covering the UK, and various agencies in the US employ models such as the NGM and NAM models. Like most global numerical weather prediction models such as the GFS, global climate models are often spectral models instead of grid models. Spectral models are often used for global models because some computations in modeling can be performed faster, thus reducing run times.
Computations
Climate models use quantitative methods to simulate the interactions of the atmosphere, oceans, land surface and ice.
All climate models take account of incoming energy as short wave electromagnetic radiation, chiefly visible and short-wave (near) infrared, as well as outgoing energy as long wave (far) infrared electromagnetic radiation from the earth. Any imbalance results in a change in temperature.
The most talked-about models of recent years relate temperature to emissions of greenhouse gases. These models project an upward trend in the surface temperature record, as well as a more rapid increase in temperature at higher altitudes.
Three (or more properly, four since time is also considered) dimensional GCMs discretise the equations for fluid motion and energy transfer and integrate these over time. They also contain parametrisations for processes such as convection that occur on scales too small to be resolved directly.
Atmospheric GCMs (AGCMs) model the atmosphere and impose sea surface temperatures as boundary conditions. Coupled atmosphere-ocean GCMs (AOGCMs, e.g. HadCM3, EdGCM, GFDL CM2.X, ARPEGE-Climat) combine the two models.
Models range in complexity:
A simple radiant heat transfer model treats the earth as a single point and averages outgoing energy (a minimal sketch of such a model appears after this list)
This can be expanded vertically (radiative-convective models), or horizontally
Finally, (coupled) atmosphere–ocean–sea ice global climate models discretise and solve the full equations for mass and energy transfer and radiant exchange.
Box models treat flows across and within ocean basins.
Other submodels can be interlinked, such as land use, allowing researchers to predict the interaction between climate and ecosystems.
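As a minimal illustration of the simplest rung of this hierarchy, the sketch below is a zero-dimensional energy balance model: the Earth is treated as a single point that absorbs averaged solar radiation and emits as a gray body. The solar constant, albedo, and effective emissivity are standard textbook values, not figures taken from this article.

```python
# Zero-dimensional energy balance model: the Earth as a single point that
# absorbs averaged solar radiation and emits as a gray body. Parameter values
# are standard textbook numbers, not taken from this article.
SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
SOLAR_CONSTANT = 1361.0   # W/m^2 at the top of the atmosphere
ALBEDO = 0.30             # fraction of sunlight reflected back to space
EMISSIVITY = 0.61         # effective emissivity crudely standing in for the greenhouse effect

absorbed = SOLAR_CONSTANT * (1 - ALBEDO) / 4            # averaged over the whole sphere
equilibrium_t = (absorbed / (EMISSIVITY * SIGMA)) ** 0.25
print(round(equilibrium_t, 1), "K")  # ~288 K, close to the observed mean surface temperature
```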
Comparison with other climate models
Earth-system models of intermediate complexity (EMICs)
The Climber-3 model uses a 2.5-dimensional statistical-dynamical model with 7.5° × 22.5° resolution and a time step of half a day. Its oceanic submodel is MOM-3 (Modular Ocean Model) with a 3.75° × 3.75° grid and 24 vertical levels.
Radiative-convective models (RCM)
One-dimensional, radiative-convective models were used to verify basic climate assumptions in the 1980s and 1990s.
Earth system models
GCMs can form part of Earth system models, e.g. by coupling ice sheet models for the dynamics of the Greenland and Antarctic ice sheets, and one or more chemical transport models (CTMs) for species important to climate. Thus a carbon chemistry transport model may allow a GCM to better predict anthropogenic changes in carbon dioxide concentrations. In addition, this approach allows accounting for inter-system feedback: e.g. chemistry-climate models allow the effects of climate change on the ozone hole to be studied.
History
In 1956, Norman Phillips developed a mathematical model that could realistically depict monthly and seasonal patterns in the troposphere. It became the first successful climate model. Following Phillips's work, several groups began working to create GCMs. The first to combine both oceanic and atmospheric processes was developed in the late 1960s at the NOAA Geophysical Fluid Dynamics Laboratory. By the early 1980s, the United States' National Center for Atmospheric Research had developed the Community Atmosphere Model; this model has been continuously refined. In 1996, efforts began to model soil and vegetation types. Later the Hadley Centre for Climate Prediction and Research's HadCM3 model coupled ocean-atmosphere elements. The role of gravity waves was added in the mid-1980s. Gravity waves are required to simulate regional and global scale circulations accurately.
See also
Atmospheric Model Intercomparison Project (AMIP)
Atmospheric Radiation Measurement (ARM) (in the US)
Earth Simulator
Global Environmental Multiscale Model
Ice-sheet model
Intermediate General Circulation Model
NCAR
Prognostic variable
Charney Report
References
Further reading
External links
IPCC AR5, Evaluation of Climate Models
GFDL's Flexible Modeling System containing code for the climate models
Program for climate model diagnosis and intercomparison (PCMDI/CMIP)
National Operational Model Archive and Distribution System (NOMADS)
Hadley Centre for Climate Prediction and Research model info
NCAR/UCAR Community Climate System Model (CESM)
Climate prediction, community modeling
NASA/GISS, primary research GCM model
EDGCM/NASA: Educational Global Climate Modeling
NOAA/GFDL
MAOAM: Martian Atmosphere Observation and Modeling / MPI & MIPT
Numerical climate and weather models
Climate forcing
Computational science
Climate change
Efficient-market hypothesis

https://en.wikipedia.org/wiki/Efficient-market%20hypothesis

The efficient-market hypothesis (EMH) is a hypothesis in financial economics that states that asset prices reflect all available information. A direct implication is that it is impossible to "beat the market" consistently on a risk-adjusted basis since market prices should only react to new information.
Because the EMH is formulated in terms of risk adjustment, it only makes testable predictions when coupled with a particular model of risk. As a result, research in financial economics since at least the 1990s has focused on market anomalies, that is, deviations from specific models of risk.
The idea that financial market returns are difficult to predict goes back to Bachelier, Mandelbrot, and Samuelson, but is closely associated with Eugene Fama, in part due to his influential 1970 review of the theoretical and empirical research. The EMH provides the basic logic for modern risk-based theories of asset prices, and frameworks such as consumption-based asset pricing and intermediary asset pricing can be thought of as the combination of a model of risk with the EMH.
Many decades of empirical research on return predictability have found mixed evidence. Research in the 1950s and 1960s often found a lack of predictability (e.g. Ball and Brown 1968; Fama, Fisher, Jensen, and Roll 1969), yet the 1980s-2000s saw an explosion of discovered return predictors (e.g. Rosenberg, Reid, and Lanstein 1985; Campbell and Shiller 1988; Jegadeesh and Titman 1993). Since the 2010s, studies have often found that return predictability has become more elusive, as predictability fails to work out-of-sample (Goyal and Welch 2008), or has been weakened by advances in trading technology and investor learning (Chordia, Subrahmanyam, and Tong 2014; McLean and Pontiff 2016; Martineau 2021).
Theoretical background
Suppose that a piece of information about the value of a stock (say, about a future merger) is widely available to investors. If the price of the stock does not already reflect that information, then investors can trade on it, thereby moving the price until the information is no longer useful for trading.
Note that this thought experiment does not necessarily imply that stock prices are unpredictable. For example, suppose that the piece of information in question says that a financial crisis is likely to come soon. Investors typically do not like to hold stocks during a financial crisis, and thus investors may sell stocks until the price drops enough so that the expected return compensates for this risk.
How efficient markets are (and are not) linked to the random walk theory can be described through the fundamental theorem of asset pricing. This theorem provides mathematical predictions regarding the price of a stock, assuming that there is no arbitrage, that is, assuming that there is no risk-free way to trade profitably. Formally, if arbitrage is impossible, then the theorem predicts that the price of a stock is the discounted value of its future price and dividend:
$P_t = E_t[M_{t+1}(P_{t+1} + D_{t+1})]$

where $E_t$ is the expected value given information at time $t$, $M_{t+1}$ is the stochastic discount factor, and $D_{t+1}$ is the dividend the stock pays next period.
Note that this equation does not generally imply a random walk. However, if we assume the stochastic discount factor is constant and the time interval is short enough so that no dividend is being paid, we have
$P_t = M\,E_t[P_{t+1}]$.
Taking logs and assuming that the Jensen's inequality term is negligible, we have
$\log P_t = \log M + E_t[\log P_{t+1}]$

which implies that the log of stock prices follows a random walk (with a drift).
Although the concept of an efficient market is similar to the assumption that stock prices follow

$E_t[P_{t+1}] = P_t$,

that is, a martingale, the EMH does not always assume that stocks follow a martingale.
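The random-walk-with-drift implication can be illustrated numerically. The simulation below draws log-price increments with a constant drift and checks that past returns carry essentially no information about future returns; the drift, volatility, and sample size are arbitrary assumptions.

```python
# Simulating log prices as a random walk with drift, the form implied above,
# and checking that lagged returns are (nearly) uncorrelated with future ones.
# Drift, volatility, and sample size are arbitrary assumptions.
import random

random.seed(0)
mu, sigma, n = 0.0003, 0.01, 100_000     # per-period drift and volatility

returns = [mu + sigma * random.gauss(0, 1) for _ in range(n)]

def lag1_autocorrelation(xs):
    """Sample lag-1 autocorrelation of a sequence."""
    mean = sum(xs) / len(xs)
    cov = sum((xs[i] - mean) * (xs[i + 1] - mean) for i in range(len(xs) - 1))
    var = sum((x - mean) ** 2 for x in xs)
    return cov / var

print(lag1_autocorrelation(returns))  # close to 0: past returns do not predict future ones
```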
Empirical studies
Research by Alfred Cowles in the 1930s and 1940s suggested that professional investors were in general unable to outperform the market. During the 1930s-1950s empirical studies focused on time-series properties, and found that US stock prices and related financial series followed a random walk model in the short-term. While there is some predictability over the long-term, the extent to which this is due to rational time-varying risk premia as opposed to behavioral reasons is a subject of debate. In their seminal paper, Fama, Fisher, Jensen, and Roll (1969) propose the event study methodology and show that stock prices on average react before a stock split, but have no movement afterwards.
Weak, semi-strong, and strong-form tests
In Fama's influential 1970 review paper, he categorized empirical tests of efficiency into "weak-form", "semi-strong-form", and "strong-form" tests.
These categories of tests refer to the information set used in the statement "prices reflect all available information." Weak-form tests study the information contained in historical prices. Semi-strong form tests study information (beyond historical prices) which is publicly available. Strong-form tests regard private information.
Historical background
Benoit Mandelbrot claimed the efficient markets theory was first proposed by the French mathematician Louis Bachelier in 1900 in his PhD thesis "The Theory of Speculation" describing how prices of commodities and stocks varied in markets. It has been speculated that Bachelier drew ideas from the random walk model of Jules Regnault, but Bachelier did not cite him, and Bachelier's thesis is now considered pioneering in the field of financial mathematics. It is commonly thought that Bachelier's work gained little attention and was forgotten for decades until it was rediscovered in the 1950s by Leonard Savage, and then became more popular after Bachelier's thesis was translated into English in 1964. But the work was never forgotten in the mathematical community, as Bachelier published a book in 1912 detailing his ideas, which was cited by mathematicians including Joseph L. Doob, William Feller and Andrey Kolmogorov. The book continued to be cited, but then starting in the 1960s the original thesis by Bachelier began to be cited more than his book when economists started citing Bachelier's work.
The concept of market efficiency had been anticipated at the beginning of the century in the dissertation submitted by Bachelier (1900) to the Sorbonne for his PhD in mathematics. In his opening paragraph, Bachelier recognizes that "past, present and even discounted future events are reflected in market price, but often show no apparent relation to price changes".
The efficient markets theory was not popular until the 1960s when the advent of computers made it possible to compare calculations and prices of hundreds of stocks more quickly and effortlessly. In 1945, F.A. Hayek argued in his article The Use of Knowledge in Society that markets were the most effective way of aggregating the pieces of information dispersed among individuals within a society. Given the ability to profit from private information, self-interested traders are motivated to acquire and act on their private information. In doing so, traders contribute to more and more efficient market prices. In the competitive limit, market prices reflect all available information and prices can only move in response to news. Thus there is a very close link between EMH and the random walk hypothesis.
Early theories posited that predicting stock prices is unfeasible, as they depend on fresh information or news rather than existing or historical prices. Therefore, stock prices are thought to fluctuate randomly, and their predictability is believed to be no better than a 50% accuracy rate.
The efficient-market hypothesis emerged as a prominent theory in the mid-1960s. Paul Samuelson had begun to circulate Bachelier's work among economists. In 1964 Bachelier's dissertation along with the empirical studies mentioned above were published in an anthology edited by Paul Cootner. In 1965, Eugene Fama published his dissertation arguing for the random walk hypothesis. Also, Samuelson published a proof showing that if the market is efficient, prices will exhibit random-walk behavior. This is often cited in support of the efficient-market theory, by the method of affirming the consequent, however in that same paper, Samuelson warns against such backward reasoning, saying "From a nonempirical base of axioms you never get empirical results." In 1970, Fama published a review of both the theory and the evidence for the hypothesis. The paper extended and refined the theory, included the definitions for three forms of financial market efficiency: weak, semi-strong and strong (see above).
Criticism
Investors, including the likes of Warren Buffett, George Soros, and researchers have disputed the efficient-market hypothesis both empirically and theoretically. Behavioral economists attribute the imperfections in financial markets to a combination of cognitive biases such as overconfidence, overreaction, representative bias, information bias, and various other predictable human errors in reasoning and information processing. These have been researched by psychologists such as Daniel Kahneman, Amos Tversky and Paul Slovic and economist Richard Thaler.
Empirical evidence has been mixed, but has generally not supported strong forms of the efficient-market hypothesis. According to Dreman and Berry, in a 1995 paper, low P/E (price-to-earnings) stocks have greater returns. In an earlier paper, Dreman also refuted the assertion by Ray Ball that these higher returns could be attributed to higher beta leading to a failure to correctly risk-adjust returns; Dreman's research had been accepted by efficient market theorists as explaining the anomaly in neat accordance with modern portfolio theory.
Behavioral psychology
Behavioral psychology approaches to stock market trading are among some of the alternatives to EMH (investment strategies such as momentum trading seek to exploit exactly such inefficiencies). However, Daniel Kahneman, Nobel Laureate and co-founder of the behavioral finance programme, announced his skepticism of investors beating the market: "They're just not going to do it. It's just not going to happen." Indeed, defenders of EMH maintain that behavioral finance strengthens the case for EMH in that it highlights biases in individuals and committees and not competitive markets. For example, one prominent finding in behavioral finance is that individuals employ hyperbolic discounting. It is demonstrably true that bonds, mortgages, annuities and other similar obligations subject to competitive market forces do not exhibit such discounting. Any manifestation of hyperbolic discounting in the pricing of these obligations would invite arbitrage, thereby quickly eliminating any vestige of individual biases. Similarly, diversification, derivative securities and other hedging strategies assuage if not eliminate potential mispricings from the severe risk-intolerance (loss aversion) of individuals underscored by behavioral finance. On the other hand, economists, behavioral psychologists and mutual fund managers are drawn from the human population and are therefore subject to the biases that behavioralists showcase. By contrast, the price signals in markets are far less subject to individual biases highlighted by the Behavioral Finance programme. Richard Thaler has started a fund based on his research on cognitive biases. In a 2008 report he identified complexity and herd behavior as central to the 2007–2008 financial crisis.
Further empirical work has highlighted the impact transaction costs have on the concept of market efficiency, with much evidence suggesting that any anomalies pertaining to market inefficiencies are the result of a cost benefit analysis made by those willing to incur the cost of acquiring the valuable information in order to trade on it. Additionally, the concept of liquidity is a critical component to capturing "inefficiencies" in tests for abnormal returns. Any test of this proposition faces the joint hypothesis problem, where it is impossible to ever test for market efficiency, since to do so requires the use of a measuring stick against which abnormal returns are compared —one cannot know if the market is efficient if one does not know if a model correctly stipulates the required rate of return. Consequently, a situation arises where either the asset pricing model is incorrect or the market is inefficient, but one has no way of knowing which is the case.
The performance of stock markets is correlated with the amount of sunshine in the city where the main exchange is located.
EMH anomalies and rejection of the Capital Asset Pricing Model (CAPM)
While event studies of stock splits are consistent with the EMH (Fama, Fisher, Jensen, and Roll, 1969), other empirical analyses have found problems with the efficient-market hypothesis. Early examples include the observation that small neglected stocks and stocks with high book-to-market (low price-to-book) ratios (value stocks) tended to achieve abnormally high returns relative to what could be explained by the CAPM. Further tests of portfolio efficiency by Gibbons, Ross and Shanken (1989) (GJR) led to rejections of the CAPM, although tests of efficiency inevitably run into the joint hypothesis problem (see Roll's critique).
Following GJR's results and mounting empirical evidence of EMH anomalies, academics began to move away from the CAPM towards risk factor models such as the Fama-French 3 factor model. These risk factor models are not properly founded on economic theory (whereas CAPM is founded on Modern Portfolio Theory), but rather, constructed with long-short portfolios in response to the observed empirical EMH anomalies. For instance, the "small-minus-big" (SMB) factor in the FF3 factor model is simply a portfolio that holds long positions on small stocks and short positions on large stocks to mimic the risks small stocks face. These risk factors are said to represent some aspect or dimension of undiversifiable systematic risk which should be compensated with higher expected returns. Additional popular risk factors include the "HML" value factor (Fama and French, 1993); "MOM" momentum factor (Carhart, 1997); "ILLIQ" liquidity factors (Amihud et al. 2002). See also Robert Haugen.
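To make the construction of such long-short factor portfolios concrete, here is a toy sketch of an SMB-style factor return. The tickers, market capitalisations, and returns are invented, and real implementations use breakpoints and value-weighting conventions (such as Fama and French's) that this sketch omits.

```python
# A toy "small-minus-big" (SMB) style factor: go long the smaller half of stocks,
# short the larger half, and take the difference of average returns.
# All tickers, market caps, and returns are invented for illustration; real SMB
# uses NYSE breakpoints and value weighting, which this sketch omits.

stocks = {
    # ticker: (market_cap_in_billions, monthly_return)
    "AAA": (0.5, 0.021), "BBB": (1.2, 0.015), "CCC": (2.0, 0.012),
    "DDD": (40.0, 0.008), "EEE": (120.0, 0.010), "FFF": (300.0, 0.006),
}

ranked = sorted(stocks.values(), key=lambda x: x[0])     # sort by market cap
half = len(ranked) // 2
small, big = ranked[:half], ranked[half:]

smb = sum(r for _, r in small) / len(small) - sum(r for _, r in big) / len(big)
print(round(smb, 4))  # positive when small stocks outperform large ones this period
```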
View of some journalists, economists, and investors
Several observers have argued closed end funds (CEFs) show evidence of market inefficiency. Unlike mutual funds or exchange traded funds, which can regularly redeem or create new shares and tend to trade very close to the net asset value (NAV) of the assets held within the fund, CEFs raise capital by issuing a fixed number of shares at inception and after inception are closed to new capital. CEFs often trade at a price that is at a substantial discount (below) to their NAV but can also trade at a premium (above NAV), implying that investors pay substantially more or less for the same securities when they are sold as CEFs than when they are sold in other contexts.
Economists Matthew Bishop and Michael Green claim that full acceptance of the hypothesis goes against the thinking of Adam Smith and John Maynard Keynes, who both believed irrational behavior had a real impact on the markets.
Economist John Quiggin has claimed that "Bitcoin is perhaps the finest example of a pure bubble", and that it provides a conclusive refutation of EMH. While other assets that have been used as currency (such as gold, tobacco) have value or utility independent of people's willingness to accept them as payment, Quiggin argues that "in the case of Bitcoin there is no source of value whatsoever" and thus Bitcoin should be priced at zero or worthless.
Tshilidzi Marwala surmised that artificial intelligence (AI) influences the applicability of the efficient market hypothesis in that the greater amount of AI-based market participants, the more efficient the markets become.
Warren Buffett has also argued against EMH, most notably in his 1984 presentation "The Superinvestors of Graham-and-Doddsville". He says preponderance of value investors among the world's money managers with the highest rates of performance rebuts the claim of EMH proponents that luck is the reason some investors appear more successful than others. Nonetheless, Buffett has recommended index funds that aim to track average market returns for most investors. Buffett's business partner Charlie Munger has stated the EMH is "obviously roughly correct", in that a hypothetical average investor will tend towards average results "and it's quite hard for anybody to [consistently] beat the market by significant margins". However, Munger also believes "extreme" commitment to the EMH is "bonkers", as the theory's originators were seduced by an "intellectually consistent theory that allowed them to do pretty mathematics [yet] the fundamentals did not properly tie to reality."
Burton Malkiel in his A Random Walk Down Wall Street (1973) argues that "the preponderance of statistical evidence" supports EMH, but admits there are enough "gremlins lurking about" in the data to prevent EMH from being conclusively proved.
In his book The Reformation in Economics, economist and financial analyst Philip Pilkington has argued that the EMH is actually a tautology masquerading as a theory. He argues that, taken at face value, the theory makes the banal claim that the average investor will not beat the market average—which is a tautology. When pressed on this point, Pilkington argues that EMH proponents will usually say that any actual investor will converge with the average investor given enough time and so no investor will beat the market average. But Pilkington points out that when proponents of the theory are presented with evidence that a small minority of investors do, in fact, beat the market over the long-run, these proponents then say that these investors were simply 'lucky'. Pilkington argues that introducing the idea that anyone who diverges from the theory is simply 'lucky' insulates the theory from falsification and so, drawing on the philosopher of science and critic of neoclassical economics Hans Albert, Pilkington argues that the theory falls back into being a tautology or a pseudoscientific construct.
Nobel Prize-winning economist Paul Samuelson argued that the stock market is "micro efficient" but not "macro efficient": the EMH is much better suited for individual stocks than it is for the aggregate stock market as a whole. Research based on regression and scatter diagrams, published in 2005, has strongly supported Samuelson's dictum.
Mathematician Andrew Odlyzko argued in a 2010 paper that the UK Railway Mania of the 1830s and '40s "provides a convincing demonstration of market inefficiency." When railroads were a new and innovative technology, there was widespread public interest in trading rail-related stocks and large amounts of capital were devoted to building more rail projects than could realistically be used for shipping or passengers. After the mania collapsed in the 1840s, many railroad stocks were worthless and many planned projects abandoned.
Peter Lynch, a mutual fund manager at Fidelity Investments who consistently more than doubled market averages while managing the Magellan Fund, has argued that the EMH is contradictory to the random walk hypothesis—though both concepts are widely taught in business schools without seeming awareness of a contradiction. If asset prices are rational and based on all available data as the efficient market hypothesis proposes, then fluctuations in asset price are not random. But if the random walk hypothesis is valid, then asset prices are not rational.
Joel Tillinghast, also a fund manager at Fidelity with a long history of outperforming a benchmark, has written that the core arguments of the EMH are "more true than not" and he accepts a "sloppy" version of the theory allowing for a margin of error. But he also contends the EMH is not completely accurate or accurate in all cases, given the recurrent existence of economic bubbles (when some assets are dramatically overpriced) and the fact that value investors (who focus on underpriced assets) have tended to outperform the broader market over long periods. Tillinghast also asserts that even staunch EMH proponents will admit weaknesses to the theory when assets are significantly over- or under-priced, such as double or half their value according to fundamental analysis.
In a 2012 book, investor Jack Schwager argues the EMH is "right for the wrong reasons". He agrees it is "very difficult" to consistently beat average market returns, but contends it's not due to how information is distributed more or less instantly to all market participants. Information may be distributed more or less instantly, but Schwager proposes information may not be interpreted or applied in the same way by different people and skill may play a factor in how information is used. Schwager argues markets are difficult to beat because of the unpredictable and sometimes irrational behavior of humans who buy and sell assets in the stock market. Schwager also cites several instances of mispricing that he contends are impossible according to a strict or strong interpretation of the EMH.
2007–2008 financial crisis
The 2007–2008 financial crisis led to renewed scrutiny and criticism of the hypothesis. Market strategist Jeremy Grantham said the EMH was responsible for the current financial crisis, claiming that belief in the hypothesis caused financial leaders to have a "chronic underestimation of the dangers of asset bubbles breaking". Financial journalist Roger Lowenstein said "The upside of the current Great Recession is that it could drive a stake through the heart of the academic nostrum known as the efficient-market hypothesis." Former Federal Reserve chairman Paul Volcker said "It should be clear that among the causes of the recent financial crisis was an unjustified faith in rational expectations, market efficiencies, and the techniques of modern finance." One financial analyst said "By 2007–2009, you had to be a fanatic to believe in the literal truth of the EMH."
At the International Organization of Securities Commissions annual conference, held in June 2009, the hypothesis took center stage. Martin Wolf, the chief economics commentator for the Financial Times, dismissed the hypothesis as being a useless way to examine how markets function in reality. Economist Paul McCulley said the hypothesis had not failed, but was "seriously flawed" in its neglect of human nature.
The financial crisis led economics scholar Richard Posner to back away from the hypothesis. Posner accused some of his Chicago School colleagues of being "asleep at the switch", saying that "the movement to deregulate the financial industry went too far by exaggerating the resilience—the self healing powers—of laissez-faire capitalism." Others, such as economist and Nobel laureate Eugene Fama, said that the hypothesis held up well during the crisis: "Stock prices typically decline prior to a recession and in a state of recession. This was a particularly severe recession. Prices started to decline in advance of when people recognized that it was a recession and then continued to decline. That was exactly what you would expect if markets are efficient." Despite this, Fama said that "poorly informed investors could theoretically lead the market astray" and that stock prices could become "somewhat irrational" as a result.
Efficient markets applied in securities class action litigation
The theory of efficient markets has been practically applied in the field of securities class action litigation. Efficient market theory, in conjunction with "fraud-on-the-market theory", has been used in securities class action litigation both to justify claims and as a mechanism for the calculation of damages. In the Supreme Court case Halliburton v. Erica P. John Fund, U.S. Supreme Court, No. 13-317, the use of efficient market theory in supporting securities class action litigation was affirmed. Supreme Court Justice Roberts wrote that "the court's ruling was consistent with the ruling in 'Basic' because it allows 'direct evidence when such evidence is available' instead of relying exclusively on the efficient markets theory."
See also
Adaptive market hypothesis
Dumb agent theory
Financial market efficiency
Grossman-Stiglitz Paradox
Index fund
Insider trading
Investment theory
Noisy market hypothesis
Perfect market
Transparency (market)
Random walk hypothesis
Notes
References
Further reading
Bogle, John (1994). Bogle on Mutual Funds: New Perspectives for the Intelligent Investor, Dell.
Lo, Andrew and MacKinlay, Craig (2001). A Non-random Walk Down Wall St. Princeton Paperbacks
Malkiel, Burton G. (1987). "efficient market hypothesis," The New Palgrave: A Dictionary of Economics, v. 2, pp. 120–23.
Malkiel, Burton G. (1996). A Random Walk Down Wall Street, W. W. Norton.
Pilkington, P (2017). The Reformation in Economics: A Deconstruction and Reconstruction of Economic Theory. Palgrave Macmillan.
Samuelson, Paul (1972). "Proof That Properly Anticipated Prices Fluctuate Randomly." Industrial Management Review, Vol. 6, No. 2, pp. 41–49. Reproduced as Chapter 198 in Samuelson, Collected Scientific Papers, Volume III, Cambridge, M.I.T. Press.
Sharpe, William F. "The Arithmetic of Active Management"
Martineau, Charles (2021). "Rest in Peace Post-Earnings Announcement Drift". Critical Finance Review, Forthcoming.
External links
e-m-h.org
"Earnings Quality and the Equity Risk Premium: A Benchmark Model" abstract from Contemporary Accounting Research
"The Persistence of Pricing Inefficiencies in the Stock Markets of the Eastern European EU Nations" abstract from Economic and Business Review
"As The Index Fund Moves from Heresy to Dogma . . . What More Do We Need To Know?" Remarks by John Bogle on the superior returns of passively managed index funds.
Proof That Properly Discounted Present Values of Assets Vibrate Randomly Paul Samuelson
Human Behavior and the Efficiency of the Financial System (1999) by Robert J. Shiller Handbook of Macroeconomics
1900 introductions
Behavioral finance | Efficient-market hypothesis | [
"Biology"
] | 5,257 | [
"Behavioral finance",
"Behavior",
"Human behavior"
] |
164,605 | https://en.wikipedia.org/wiki/Geopotential%20height | Geopotential height or geopotential altitude is a vertical coordinate referenced to Earth's mean sea level (assumed zero geopotential) that represents the work involved in lifting one unit of mass over one unit of length through a hypothetical space in which the acceleration of gravity is assumed constant.
In SI units, raising a one-kilogram parcel through a geopotential height difference of one meter corresponds, with the standard gravity value (9.80665 m/s2), to a work or potential energy difference of 9.80665 joules.
Geopotential height differs from geometric height (as given by a tape measure) because Earth's gravity is not constant, varying markedly with altitude and latitude; thus, a 1-m geopotential height difference implies a different vertical distance in physical space: "the unit-mass must be lifted higher at the equator than at the pole, if the same amount of work is to be performed".
It is a useful concept in meteorology, climatology, and oceanography; it also remains a historical convention in aeronautics as the altitude used for calibration of aircraft barometric altimeters.
Definition
Geopotential \Phi is the gravitational potential energy per unit mass at geometric elevation z:

\Phi(z) = \int_0^{z} g(\phi, z')\, dz'

where g(\phi, z) is the acceleration due to gravity, \phi is latitude, and z is the geometric elevation.

Geopotential height may be obtained from normalizing geopotential by the acceleration of gravity:

Z = \frac{\Phi(z)}{g_{0}}

where g_{0} = 9.80665 m/s2, the standard gravity at mean sea level. Expressed in differential form,

g_{0}\, dZ = g\, dz
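The definition above lends itself to a short numerical illustration. The sketch below integrates an assumed gravity model g(latitude, height) and divides by standard gravity to convert a geometric altitude into a geopotential height; the latitude formula, the free-air gradient, and the function names are assumptions of this example, not values given in the article.

```python
import numpy as np

G0 = 9.80665  # standard gravity (m/s2), as used in the text

def gravity(lat_deg, z):
    # Illustrative model of g(latitude, height): a surface latitude formula
    # plus a linear free-air decrease with height. Both are assumptions of
    # this sketch.
    lat = np.radians(lat_deg)
    g_surface = 9.780327 * (1.0 + 0.0053024 * np.sin(lat) ** 2)
    return g_surface - 3.086e-6 * z

def geopotential_height(z_top, lat_deg, n=2001):
    # Z = (1/g0) * integral of g(lat, z) dz from 0 to z_top (trapezoid rule)
    z = np.linspace(0.0, z_top, n)
    g = gravity(lat_deg, z)
    phi = float(np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(z)))  # geopotential, J/kg
    return phi / G0

# A 10 km geometric altitude at 45 deg latitude maps to slightly less than
# 10 km of geopotential height, because g decreases with altitude.
print(round(geopotential_height(10_000.0, 45.0), 1))
```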
Role in planetary fluids
Geopotential height plays an important role in atmospheric and oceanographic studies.
The differential form above may be substituted into the hydrostatic equation and ideal gas law in order to relate pressure to ambient temperature and geopotential height for measurement by barometric altimeters regardless of latitude or geometric elevation:

dP = -\rho g\, dz = -\frac{P}{R T}\, g_{0}\, dZ \qquad\Rightarrow\qquad \frac{dP}{P} = -\frac{g_{0}}{R T}\, dZ

where P and T are ambient pressure and temperature, respectively, as functions of geopotential height, and R is the specific gas constant. For the subsequent definite integral, the simplification obtained by assuming a constant value of gravitational acceleration (g_{0}) is the sole reason for defining the geopotential altitude.
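As a rough illustration of that integral, the following sketch integrates dZ = -(R T / g0) dP/P between two pressure levels under the simplifying assumption of a constant layer-mean temperature (the hypsometric relation); the dry-air gas constant and the numerical values are assumptions of the example.

```python
import math

G0 = 9.80665    # standard gravity (m/s2)
R_DRY = 287.05  # specific gas constant for dry air (J kg-1 K-1), an assumed value

def layer_thickness(p_bottom_pa, p_top_pa, layer_mean_temp_k):
    # Integrating dZ = -(R*T/g0) dP/P with T held at its layer mean gives
    # Z_top - Z_bottom = (R * T_mean / g0) * ln(p_bottom / p_top)
    return (R_DRY * layer_mean_temp_k / G0) * math.log(p_bottom_pa / p_top_pa)

# Geopotential thickness of the 1000-850 hPa layer for a 280 K layer mean
print(round(layer_thickness(1000e2, 850e2, 280.0)))  # about 1330 m
```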
Usage
Geophysical sciences such as meteorology often prefer to express the horizontal pressure gradient force as the gradient of geopotential along a constant-pressure surface, because then it has the properties of a conservative force. For example, the primitive equations that weather forecast models solve use hydrostatic pressure as a vertical coordinate, and express the slopes of those pressure surfaces in terms of geopotential height.
A plot of geopotential height for a single pressure level in the atmosphere shows the troughs and ridges (lows and highs) which are typically seen on upper air charts. The geopotential thickness between pressure levels – the difference of the 850 hPa and 1000 hPa geopotential heights, for example – is proportional to the mean virtual temperature in that layer. Geopotential height contours can be used to calculate the geostrophic wind, which blows parallel to the contours and is faster where the contours are more closely spaced.
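A minimal sketch of that calculation follows: it differentiates a gridded geopotential height field to obtain the geostrophic wind components u_g = -(g0/f) dZ/dy and v_g = +(g0/f) dZ/dx. The synthetic height field, grid spacing, and latitude below are invented purely for illustration.

```python
import numpy as np

G0 = 9.80665       # standard gravity (m/s2)
OMEGA = 7.292e-5   # Earth's rotation rate (rad/s)

def geostrophic_wind(z_field, dx, dy, lat_deg):
    # u_g = -(g0/f) dZ/dy, v_g = +(g0/f) dZ/dx, with f the Coriolis parameter
    # and z_field a 2-D geopotential height grid indexed as [y, x], in metres.
    f = 2.0 * OMEGA * np.sin(np.radians(lat_deg))
    dz_dy, dz_dx = np.gradient(z_field, dy, dx)
    return -(G0 / f) * dz_dy, (G0 / f) * dz_dx

# Synthetic 500 hPa height field that decreases toward the pole (increasing y)
x = np.linspace(0.0, 1.0e6, 50)
y = np.linspace(0.0, 1.0e6, 50)
Z = 5500.0 - 1e-4 * y[:, None] + 0.0 * x[None, :]
u, v = geostrophic_wind(Z, dx=x[1] - x[0], dy=y[1] - y[0], lat_deg=45.0)
print(round(float(u.mean()), 1), round(float(v.mean()), 1))  # ~9.5 m/s westerly, 0.0
```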
The United States National Weather Service defines geopotential height as:
See also
Atmospheric model
Above mean sea level
Dynamic height, a similar quantity used in geodesy, based on a slightly different gravity value
References
Further reading
Hofmann-Wellenhof, B. and Moritz, H. "Physical Geodesy", 2005.
Eskinazi, S. "Fluid Mechanics and Thermodynamics of our Environment", 1975.
External links
Atmospheric dynamics
Vertical position | Geopotential height | [
"Physics",
"Chemistry"
] | 736 | [
"Vertical position",
"Atmospheric dynamics",
"Physical quantities",
"Distance",
"Fluid dynamics"
] |
164,607 | https://en.wikipedia.org/wiki/Sensible%20heat | Sensible heat is heat exchanged by a body or thermodynamic system in which the exchange of heat changes the temperature of the body or system, and some macroscopic variables of the body or system, but leaves unchanged certain other macroscopic variables of the body or system, such as volume or pressure.
Usage
The term is used in contrast to a latent heat, which is the amount of heat exchanged that is hidden, meaning it occurs without change of temperature. For example, during a phase change such as the melting of ice, the temperature of the system containing the ice and the liquid is constant until all ice has melted. Latent and sensible heat are complementary terms.
The sensible heat of a thermodynamic process may be calculated as the product of the body's mass (m) with its specific heat capacity (c) and the change in temperature (\Delta T):

Q_{\text{sensible}} = m\, c\, \Delta T
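A one-line numerical sketch of this product follows; the specific heat used for liquid water is an assumed textbook value, not a figure taken from the article.

```python
def sensible_heat(mass_kg, specific_heat_j_per_kg_k, delta_t_k):
    # Q = m * c * dT: heat exchanged when only the temperature of the body changes
    return mass_kg * specific_heat_j_per_kg_k * delta_t_k

# Warming 2 kg of liquid water (c ~ 4186 J/(kg K), assumed) by 10 K
print(sensible_heat(2.0, 4186.0, 10.0))  # 83720.0 J
```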
Sensible heat and latent heat are not special forms of energy. Rather, they describe exchanges of heat under conditions specified in terms of their effect on a material or a thermodynamic system.
In the writings of the early scientists who provided the foundations of thermodynamics, sensible heat had a clear meaning in calorimetry. James Prescott Joule characterized it in 1847 as an energy that was indicated by the thermometer.
Both sensible and latent heats are observed in many processes while transporting energy in nature. Latent heat is associated with changes of state, measured at constant temperature, especially the phase changes of atmospheric water vapor, mostly vaporization and condensation, whereas sensible heat directly affects the temperature of the atmosphere.
In meteorology, the term 'sensible heat flux' means the conductive heat flux from the Earth's surface to the atmosphere. It is an important component of Earth's surface energy budget. Sensible heat flux is commonly measured with the eddy covariance method.
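A minimal sketch of an eddy-covariance estimate is shown below: the flux is the air density times the specific heat times the covariance of vertical-velocity and temperature fluctuations. The density, specific heat, and synthetic time series are assumptions of the example, not values from the article.

```python
import numpy as np

def sensible_heat_flux(w, temp, rho_air=1.2, cp=1005.0):
    # H = rho * cp * mean(w' T'), with primes denoting departures from the mean;
    # w in m/s and temp in K, sampled at the same instants. rho_air and cp are
    # assumed near-surface values for dry air.
    w_prime = w - np.mean(w)
    t_prime = temp - np.mean(temp)
    return rho_air * cp * np.mean(w_prime * t_prime)

# Synthetic, positively correlated fluctuations (warm air moving upward)
rng = np.random.default_rng(0)
w = 0.1 * rng.standard_normal(10_000)
temp = 290.0 + 5.0 * w + 0.2 * rng.standard_normal(10_000)
print(round(float(sensible_heat_flux(w, temp)), 1))  # upward flux of roughly 60 W/m2
```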
See also
Eddy covariance flux (eddy correlation, eddy flux)
Enthalpy
Thermodynamic databases for pure substances
References
Atmospheric thermodynamics
Thermodynamics | Sensible heat | [
"Physics",
"Chemistry",
"Mathematics"
] | 424 | [
"Thermodynamics",
"Dynamical systems"
] |
164,610 | https://en.wikipedia.org/wiki/Latent%20heat | Latent heat (also known as latent energy or heat of transformation) is energy released or absorbed, by a body or a thermodynamic system, during a constant-temperature process—usually a first-order phase transition, like melting or condensation.
Latent heat can be understood as hidden energy which is supplied or extracted to change the state of a substance without changing its temperature or pressure. This includes the latent heat of fusion (solid to liquid), the latent heat of vaporization (liquid to gas) and the latent heat of sublimation (solid to gas).
The term was introduced around 1762 by Scottish chemist Joseph Black. Black used the term in the context of calorimetry where a heat transfer caused a volume change in a body while its temperature was constant.
In contrast to latent heat, sensible heat is energy transferred as heat, with a resultant temperature change in a body.
Usage
The terms sensible heat and latent heat refer to energy transferred between a body and its surroundings, defined by the occurrence or non-occurrence of temperature change; they depend on the properties of the body. Sensible heat is sensed or felt in a process as a change in the body's temperature. Latent heat is energy transferred in a process without change of the body's temperature, for example, in a phase change (solid/liquid/gas).
Both sensible and latent heats are observed in many processes of transfer of energy in nature. Latent heat is associated with the change of phase of atmospheric or ocean water, vaporization, condensation, freezing or melting, whereas sensible heat is energy transferred that is evident in change of the temperature of the atmosphere or ocean, or ice, without those phase changes, though it is associated with changes of pressure and volume.
The original usage of the term, as introduced by Black, was applied to systems that were intentionally held at constant temperature. Such usage referred to latent heat of expansion and several other related latent heats. These latent heats are defined independently of the conceptual framework of thermodynamics.
When a body is heated at constant temperature by thermal radiation in a microwave field for example, it may expand by an amount described by its latent heat with respect to volume or latent heat of expansion, or increase its pressure by an amount described by its latent heat with respect to pressure.
Latent heat is energy released or absorbed by a body or a thermodynamic system during a constant-temperature process. Two common forms of latent heat are latent heat of fusion (melting) and latent heat of vaporization (boiling). These names describe the direction of energy flow when changing from one phase to the next: from solid to liquid, and liquid to gas.
In both cases the change is endothermic, meaning that the system absorbs energy. For example, when water evaporates, an input of energy is required for the water molecules to overcome the forces of attraction between them and make the transition from water to vapor.
If the vapor then condenses to a liquid on a surface, then the vapor's latent energy absorbed during evaporation is released as the liquid's sensible heat onto the surface.
The large value of the enthalpy of condensation of water vapor is the reason that steam is a far more effective heating medium than boiling water, and is more hazardous.
Meteorology
In meteorology, latent heat flux is the flux of energy from the Earth's surface to the atmosphere that is associated with evaporation or transpiration of water at the surface and subsequent condensation of water vapor in the troposphere. It is an important component of Earth's surface energy budget. Latent heat flux has been commonly measured with the Bowen ratio technique, or more recently since the mid-1900s by the eddy covariance method.
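The Bowen-ratio approach mentioned above can be sketched in a few lines: available energy (net radiation minus ground heat flux) is partitioned between sensible and latent heat flux using the ratio beta = H/LE. The numerical values below are purely illustrative assumptions.

```python
def latent_heat_flux_bowen(net_radiation_wm2, ground_heat_flux_wm2, bowen_ratio):
    # Surface energy balance Rn - G = H + LE combined with beta = H / LE
    # gives LE = (Rn - G) / (1 + beta).
    available = net_radiation_wm2 - ground_heat_flux_wm2
    return available / (1.0 + bowen_ratio)

# Illustrative midday values over a moist surface
le = latent_heat_flux_bowen(500.0, 50.0, 0.5)
h = 500.0 - 50.0 - le
print(le, h)  # 300.0 W/m2 latent, 150.0 W/m2 sensible
```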
History
Background
Evaporative cooling
In 1748, an account was published in The Edinburgh Physical and Literary Essays of an experiment by the Scottish physician and chemist William Cullen. Cullen had used an air pump to lower the pressure in a container with diethyl ether. No heat was withdrawn from the ether, yet the ether boiled, but its temperature decreased. And in 1758, on a warm day in Cambridge, England, Benjamin Franklin and fellow scientist John Hadley experimented by continually wetting the ball of a mercury thermometer with ether and using bellows to evaporate the ether. With each subsequent evaporation, the thermometer read a lower temperature, eventually reaching . Another thermometer showed that the room temperature was constant at . In his letter Cooling by Evaporation, Franklin noted that, "One may see the possibility of freezing a man to death on a warm summer's day."
Latent heat
The English word latent comes from Latin latēns, meaning lying hidden. The term latent heat was introduced into calorimetry around 1750 by Joseph Black, commissioned by producers of Scotch whisky in search of ideal quantities of fuel and water for their distilling process to study system changes, such as of volume and pressure, when the thermodynamic system was held at constant temperature in a thermal bath.
It was known that when the air temperature rises above freezing—air then becoming the obvious heat source—snow melts very slowly and the temperature of the melted snow is close to its freezing point. In 1757, Black started to investigate if heat, therefore, was required for the melting of a solid, independent of any rise in temperature. As far as Black knew, the general view at that time was that melting was inevitably accompanied by a small increase in temperature, and that no more heat was required than what the increase in temperature would require in itself. Soon, however, Black was able to show that much more heat was required during melting than could be explained by the increase in temperature alone. He was also able to show that heat is released by a liquid during its freezing; again, much more than could be explained by the decrease of its temperature alone.
Black would compare the change in temperature of two identical quantities of water, heated by identical means, one of which was, say, melted from ice, whereas the other was heated from merely cold liquid state. By comparing the resulting temperatures, he could conclude that, for instance, the temperature of the sample melted from ice was 140 °F lower than the other sample, thus melting the ice absorbed 140 "degrees of heat" that could not be measured by the thermometer, yet needed to be supplied, thus it was "latent" (hidden). Black also deduced that as much latent heat as was supplied into boiling the distillate (thus giving the quantity of fuel needed) also had to be absorbed to condense it again (thus giving the cooling water required).
Quantifying latent heat
In 1762, Black announced the following research and results to a society of professors at the University of Glasgow. Black had placed equal masses of ice at 32 °F (0 °C) and water at 33 °F (0.6 °C) respectively in two identical, well separated containers. The water and the ice were both evenly heated to 40 °F by the air in the room, which was at a constant 47 °F (8 °C). The water had therefore received 40 – 33 = 7 “degrees of heat”. The ice had been heated for 21 times longer and had therefore received 7 × 21 = 147 “degrees of heat”. The temperature of the ice had increased by 8 °F. The ice now stored, as it were, an additional 8 “degrees of heat” in a form which Black called sensible heat, manifested as temperature, which could be felt and measured. 147 – 8 = 139 “degrees of heat” were, so to speak, stored as latent heat, not manifesting itself. (In modern thermodynamics the idea of heat contained has been abandoned, so sensible heat and latent heat have been redefined. They do not reside anywhere.)
Black next showed that a water temperature of 176 °F was needed to melt an equal mass of ice until it was all 32 °F. So now 176 – 32 = 144 “degrees of heat” seemed to be needed to melt the ice. The modern value for the heat of fusion of ice would be 143 “degrees of heat” on the same scale (79.5 “degrees of heat Celsius”).
Finally Black increased the temperature of and vaporized respectively two equal masses of water through even heating. He showed that 830 “degrees of heat” was needed for the vaporization; again based on the time required. The modern value for the heat of vaporization of water would be 967 “degrees of heat” on the same scale.
James Prescott Joule
Later, James Prescott Joule characterised latent energy as the energy of interaction in a given configuration of particles, i.e. a form of potential energy, and the sensible heat as an energy that was indicated by the thermometer, relating the latter to thermal energy.
Specific latent heat
A specific latent heat (L) expresses the amount of energy in the form of heat (Q) required to completely effect a phase change of a unit of mass (m), usually 1 kg, of a substance as an intensive property:

L = \frac{Q}{m}
Intensive properties are material characteristics and are not dependent on the size or extent of the sample. Commonly quoted and tabulated in the literature are the specific latent heat of fusion and the specific latent heat of vaporization for many substances.
From this definition, the latent heat for a given mass of a substance (illustrated in the short numerical example following the list below) is calculated by

Q = m L
where:
Q is the amount of energy released or absorbed during the change of phase of the substance (in kJ or in BTU),
m is the mass of the substance (in kg or in lb), and
L is the specific latent heat for a particular substance (in kJ kg−1 or in BTU lb−1), either Lf for fusion, or Lv for vaporization.
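A minimal sketch of the Q = mL calculation, using commonly quoted round-number latent heats for water as assumed constants (they are not taken from the table mentioned in the following section):

```python
L_FUSION_WATER = 3.34e5         # J/kg, melting ice at 0 degC (assumed textbook value)
L_VAPORIZATION_WATER = 2.26e6   # J/kg, boiling water at 100 degC (assumed textbook value)

def latent_heat(mass_kg, specific_latent_heat_j_per_kg):
    # Q = m * L: energy absorbed or released during a constant-temperature phase change
    return mass_kg * specific_latent_heat_j_per_kg

print(latent_heat(0.5, L_FUSION_WATER))        # 167000.0 J to melt 0.5 kg of ice
print(latent_heat(0.5, L_VAPORIZATION_WATER))  # 1130000.0 J to vaporize 0.5 kg of water
```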
Table of specific latent heats
The following table shows the specific latent heats and change of phase temperatures (at standard pressure) of some common fluids and gases.
Specific latent heat for condensation of water in clouds
The specific latent heat of condensation of water in the temperature range from −25 °C to 40 °C is approximated by the following empirical cubic function:
where the temperature is taken to be the numerical value in °C.
For sublimation and deposition from and into ice, the specific latent heat is almost constant in the temperature range from −40 °C to 0 °C and can be approximated by the following empirical quadratic function:
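The numerical coefficients of these fits are not reproduced above, so the sketch below uses the values commonly quoted for these approximations; treat them as assumptions of the example rather than as the article's own figures.

```python
def latent_heat_condensation_water_kj_per_kg(temp_c):
    # Cubic fit for the specific latent heat of condensation of water,
    # roughly valid for -25 degC <= T <= 40 degC. Coefficients are commonly
    # quoted values assumed here, not taken from the text above.
    t = temp_c
    return 2500.8 - 2.36 * t + 0.0016 * t ** 2 - 0.00006 * t ** 3

def latent_heat_sublimation_ice_kj_per_kg(temp_c):
    # Quadratic fit for sublimation/deposition, roughly -40 degC to 0 degC.
    # Coefficients are likewise assumed.
    t = temp_c
    return 2834.1 - 0.29 * t - 0.004 * t ** 2

print(round(latent_heat_condensation_water_kj_per_kg(20.0), 1))  # ~2453.8 kJ/kg
print(round(latent_heat_sublimation_ice_kj_per_kg(-10.0), 1))    # ~2836.6 kJ/kg
```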
Variation with temperature (or pressure)
As the temperature (or pressure) rises to the critical point, the latent heat of vaporization falls to zero.
See also
Bowen ratio
Eddy covariance flux (eddy correlation, eddy flux)
Sublimation (physics)
Specific heat capacity
Enthalpy of fusion
Enthalpy of vaporization
Ton of refrigeration – the power required to freeze or melt 2000 lb of water in 24 hours
Notes
References
Thermochemistry
Atmospheric thermodynamics
Thermodynamics
Physical phenomena | Latent heat | [
"Physics",
"Chemistry",
"Mathematics"
] | 2,280 | [
"Thermochemistry",
"Physical phenomena",
"Thermodynamics",
"Dynamical systems"
] |
164,631 | https://en.wikipedia.org/wiki/Wavenumber%E2%80%93frequency%20diagram | A wavenumber–frequency diagram is a plot displaying the relationship between the wavenumber (spatial frequency) and the frequency (temporal frequency) of certain phenomena. Usually frequencies are placed on the vertical axis, while wavenumbers are placed on the horizontal axis.
In the atmospheric sciences, these plots are a common way to visualize atmospheric waves.
In the geosciences, especially seismic data analysis, these plots also called f–k plot, in which energy density within a given time interval is contoured on a frequency-versus-wavenumber basis. They are used to examine the direction and apparent velocity of seismic waves and in velocity filter design.
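A hedged sketch of how such an f–k plot can be computed is given below: a space–time wavefield is transformed with a two-dimensional FFT and the squared magnitude is contoured against temporal frequency and wavenumber. The array layout, sampling values, and synthetic plane wave are assumptions of the example.

```python
import numpy as np

def fk_spectrum(wavefield, dt, dx):
    # Frequency-wavenumber (f-k) power spectrum of a wavefield sampled as
    # wavefield[time_index, receiver_index]. Returns frequency (Hz),
    # wavenumber (cycles per metre) and the shifted power array.
    nt, nx = wavefield.shape
    power = np.abs(np.fft.fftshift(np.fft.fft2(wavefield))) ** 2
    freqs = np.fft.fftshift(np.fft.fftfreq(nt, d=dt))
    wavenumbers = np.fft.fftshift(np.fft.fftfreq(nx, d=dx))
    return freqs, wavenumbers, power

# Synthetic plane wave: 20 Hz, 0.01 cycles/m, i.e. an apparent velocity of 2000 m/s
t = np.arange(0.0, 1.0, 0.002)[:, None]    # 500 time samples
x = np.arange(0.0, 1000.0, 10.0)[None, :]  # 100 receivers spaced 10 m apart
wave = np.sin(2.0 * np.pi * (20.0 * t - 0.01 * x))
f, k, p = fk_spectrum(wave, dt=0.002, dx=10.0)
i, j = np.unravel_index(np.argmax(p), p.shape)
print(abs(f[i]), abs(k[j]))  # 20.0 0.01 -> energy concentrated along f = v*k
```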
Origins
In general, the relationship between wavelength \lambda, frequency \nu, and the phase velocity v_{p} of a sinusoidal wave is:

v_{p} = \lambda\, \nu

Using the wavenumber (k = 2\pi/\lambda) and angular frequency (\omega = 2\pi\nu) notation, the previous equation can be rewritten as

v_{p} = \frac{\omega}{k}

On the other hand, the group velocity is equal to the slope of the wavenumber–frequency diagram:

v_{g} = \frac{\partial \omega}{\partial k}
Analyzing such relationships in detail often yields information on the physical properties of the medium, such as density, composition, etc.
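The relations above can be checked numerically. The sketch below evaluates the phase velocity omega/k and estimates the group velocity d(omega)/dk with a central difference, using deep-water surface gravity waves (omega = sqrt(g*k)) purely as an assumed example dispersion relation.

```python
import math

def phase_and_group_velocity(omega_of_k, k, dk=1e-8):
    # v_phase = omega / k ; v_group = d(omega)/dk estimated by a central difference
    v_phase = omega_of_k(k) / k
    v_group = (omega_of_k(k + dk) - omega_of_k(k - dk)) / (2.0 * dk)
    return v_phase, v_group

# Example dispersion relation: deep-water surface gravity waves, omega = sqrt(g*k)
g = 9.81
omega = lambda k: math.sqrt(g * k)
vp, vg = phase_and_group_velocity(omega, k=0.1)
print(round(vp, 2), round(vg, 2))  # 9.9 4.95 -> group velocity is half the phase velocity
```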
See also
Dispersion relation
References
Wave mechanics
Diagrams | Wavenumber–frequency diagram | [
"Physics"
] | 237 | [
"Wave mechanics",
"Waves",
"Physical phenomena",
"Classical mechanics"
] |
164,633 | https://en.wikipedia.org/wiki/Atmospheric%20science | Atmospheric science is the study of the Earth's atmosphere and its various inner-working physical processes. Meteorology includes atmospheric chemistry and atmospheric physics with a major focus on weather forecasting. Climatology is the study of atmospheric changes (both long and short-term) that define average climates and their change over time as a result of climate variability. Aeronomy is the study of the upper layers of the atmosphere, where dissociation and ionization are important. Atmospheric science has been extended to the field of planetary science and the study of the atmospheres of the planets and natural satellites of the Solar System.
Experimental instruments used in atmospheric science include satellites, rocketsondes, radiosondes, weather balloons, radars, and lasers.
The term aerology (from Greek ἀήρ, aēr, "air"; and -λογία, -logia) is sometimes used as an alternative term for the study of Earth's atmosphere; in other definitions, aerology is restricted to the free atmosphere, the region above the planetary boundary layer.
Early pioneers in the field include Léon Teisserenc de Bort and Richard Assmann.
Atmospheric chemistry
Atmospheric chemistry is a branch of atmospheric science in which the chemistry of the Earth's atmosphere and that of other planets is studied. It is a multidisciplinary field of research and draws on environmental chemistry, physics, meteorology, computer modeling, oceanography, geology and volcanology and other disciplines. Research is increasingly connected with other areas of study such as climatology.
The composition and chemistry of the atmosphere is of importance for several reasons, but primarily because of the interactions between the atmosphere and living organisms. The composition of the Earth's atmosphere has been changed by human activity and some of these changes are harmful to human health, crops and ecosystems. Examples of problems which have been addressed by atmospheric chemistry include acid rain, photochemical smog and global warming. Atmospheric chemistry seeks to understand the causes of these problems, and by obtaining a theoretical understanding of them, allow possible solutions to be tested and the effects of changes in government policy evaluated.
Atmospheric dynamics
Atmospheric dynamics is the study of motion systems of meteorological importance, integrating observations at multiple locations and times and theories. Common topics studied include diverse phenomena such as thunderstorms, tornadoes, gravity waves, tropical cyclones, extratropical cyclones, jet streams, and global-scale circulations. The goal of dynamical studies is to explain the observed circulations on the basis of fundamental principles from physics. The objectives of such studies incorporate improving weather forecasting, developing methods for predicting seasonal and interannual climate fluctuations, and understanding the implications of human-induced perturbations (e.g., increased carbon dioxide concentrations or depletion of the ozone layer) on the global climate.
Atmospheric physics
Atmospheric physics is the application of physics to the study of the atmosphere. Atmospheric physicists attempt to model Earth's atmosphere and the atmospheres of the other planets using fluid flow equations, chemical models, radiation balancing, and energy transfer processes in the atmosphere and underlying oceans and land. In order to model weather systems, atmospheric physicists employ elements of scattering theory, wave propagation models, cloud physics, statistical mechanics and spatial statistics, each of which incorporate high levels of mathematics and physics. Atmospheric physics has close links to meteorology and climatology and also covers the design and construction of instruments for studying the atmosphere and the interpretation of the data they provide, including remote sensing instruments.
In the United Kingdom, atmospheric studies are underpinned by the Meteorological Office. Divisions of the U.S. National Oceanic and Atmospheric Administration (NOAA) oversee research projects and weather modeling involving atmospheric physics. The U.S. National Astronomy and Ionosphere Center also carries out studies of the high atmosphere.
The Earth's magnetic field and the solar wind interact with the atmosphere, creating the ionosphere, Van Allen radiation belts, telluric currents, and radiant energy.
Climatology
Climatology is a science that bases its more general knowledge on the more specialized disciplines of meteorology, oceanography, geology, and astronomy, which in turn are based on the basic sciences of physics, chemistry, and mathematics. In contrast to meteorology, which studies short-term weather systems lasting up to a few weeks, climatology studies the frequency and trends of those systems. It studies the periodicity of weather events over years to millennia, as well as changes in long-term average weather patterns, in relation to atmospheric conditions. Climatologists, those who practice climatology, study both the nature of climates – local, regional or global – and the natural or human-induced factors that cause climates to change. Climatology considers the past and tries to predict future climate change.
Phenomena of climatological interest include the atmospheric boundary layer, circulation patterns, heat transfer (radiative, convective and latent), interactions between the atmosphere and the oceans and land surface (particularly vegetation, land use and topography), and the chemical and physical composition of the atmosphere. Related disciplines include astrophysics, atmospheric physics, chemistry, ecology, physical geography, geology, geophysics, glaciology, hydrology, oceanography, and volcanology.
Aeronomy
Aeronomy is the scientific study of the upper atmosphere of the Earth — the atmospheric layers above the stratopause — and corresponding regions of the atmospheres of other planets, where the entire atmosphere may correspond to the Earth's upper atmosphere or a portion of it. A branch of both atmospheric chemistry and atmospheric physics, aeronomy contrasts with meteorology, which focuses on the layers of the atmosphere below the stratopause. In atmospheric regions studied by aeronomers, chemical dissociation and ionization are important phenomena.
Atmospheres on other celestial bodies
All of the Solar System's planets have atmospheres. This is because their gravity is strong enough to keep gaseous particles close to the surface. Larger gas giants are massive enough to keep large amounts of the light gases hydrogen and helium close by, while the smaller planets lose these gases into space. The composition of the Earth's atmosphere is different from the other planets because the various life processes that have transpired on the planet have introduced free molecular oxygen. Much of Mercury's atmosphere has been blasted away by the solar wind. The only moon that has retained a dense atmosphere is Titan. There is a thin atmosphere on Triton, and a trace of an atmosphere on the Moon.
Planetary atmospheres are affected by the varying degrees of energy received from either the Sun or their interiors, leading to the formation of dynamic weather systems such as hurricanes (on Earth), planet-wide dust storms (on Mars), an Earth-sized anticyclone on Jupiter (called the Great Red Spot), and holes in the atmosphere (on Neptune). At least one extrasolar planet, HD 189733 b, has been claimed to possess such a weather system, similar to the Great Red Spot but twice as large.
Hot Jupiters have been shown to be losing their atmospheres into space due to stellar radiation, much like the tails of comets. These planets may have vast differences in temperature between their day and night sides which produce supersonic winds, although the day and night sides of HD 189733b appear to have very similar temperatures, indicating that planet's atmosphere effectively redistributes the star's energy around the planet.
See also
Air pollution
References
External links
Atmospheric fluid dynamics applied to weather maps – Principles such as Advection, Deformation and Vorticity
National Center for Atmospheric Research (NCAR) Archives, documents the history of the atmospheric sciences
Fluid dynamics | Atmospheric science | [
"Chemistry",
"Engineering"
] | 1,549 | [
"Piping",
"Chemical engineering",
"Fluid dynamics"
] |
164,644 | https://en.wikipedia.org/wiki/Albert%20Ghiorso | Albert Ghiorso (July 15, 1915 – December 26, 2010) was an American nuclear scientist and co-discoverer of a record 12 chemical elements on the periodic table. His research career spanned six decades, from the early 1940s to the late 1990s.
Biography
Early life
Ghiorso was born in Vallejo, California on July 15, 1915, of Italian and Spanish ancestry. He grew up in Alameda, California. Living near the Oakland International Airport, he became interested in airplanes, aeronautics, and other technologies. After graduating from high school, he built radio circuitry and earned a reputation for establishing radio contacts at distances that outdid the military.
He received his BS in electrical engineering from the University of California, Berkeley in 1937. After graduation, he worked for Reginald Tibbets, a prominent amateur radio operator who operated a business supplying radiation detectors to the government. Ghiorso's ability to develop and produce these instruments, as well as a variety of electronic tasks, brought him into contact with the nuclear scientists at the University of California Radiation Laboratory at Berkeley, in particular Glenn Seaborg. During a job in which he was to install an intercom at the lab, he met two secretaries, one of whom, Helen Griggs, married Seaborg. The other, Wilma Belt, became Albert's wife of 60+ years.
Ghiorso was raised in a devout Christian family, but later left the religion and became an atheist. However, he still identified with Christian ethics.
Wartime research
In the early 1940s, Seaborg moved to Chicago to work on the Manhattan Project. He invited Ghiorso to join him, and for the next four years Ghiorso developed sensitive instruments for detecting the radiation associated with nuclear decay, including spontaneous fission. One of Ghiorso's breakthrough instruments was a 48-channel pulse height analyzer, which enabled him to identify the energy, and therefore the source, of the radiation. During this time they discovered two new elements (95, americium and 96, curium), although publication was withheld until after the war.
New elements
After the war, Seaborg and Ghiorso returned to Berkeley, where they and colleagues used the 60" Crocker cyclotron to produce elements of increasing atomic number by bombarding exotic targets with helium ions. In experiments during 1949–1950, they produced and identified elements 97 (berkelium) and 98 (californium). In 1953, in a collaboration with Argonne Lab, Ghiorso and collaborators sought and found elements 99 (einsteinium) and 100 (fermium), identified by their characteristic radiation in dust collected by airplanes from the first thermonuclear explosion (the Mike test). In 1955, the group used the cyclotron to produce 17 atoms of element 101 (mendelevium), the first new element to be discovered atom-by-atom. The recoil technique invented by Ghiorso was crucial to obtaining an identifiable signal from individual atoms of the new element.
In the mid-1950s it became clear that to extend the periodic chart any further, a new accelerator would be needed, and the Berkeley Heavy Ion Linear Accelerator (HILAC) was built, with Ghiorso in charge. That machine was used in the discovery of elements 102–106 (102, nobelium; 103, lawrencium; 104, rutherfordium; 105, dubnium and 106, seaborgium), each produced and identified on the basis of only a few atoms. The discovery of each successive element was made possible by the development of innovative techniques in robotic target handling, fast chemistry, efficient radiation detectors, and computer data processing. The 1972 upgrade of the HILAC to the superHILAC provided higher intensity ion beams, which was crucial to producing enough new atoms to enable detection of element 106.
With increasing atomic number, the experimental difficulties of producing and identifying a new element increase significantly. In the 1970s and 1980s, resources for new element research at Berkeley were diminishing, but the GSI laboratory at Darmstadt, Germany, under the leadership of Peter Armbruster and with considerable resources, was able to produce and identify elements 107–109 (107, bohrium; 108, hassium and 109, meitnerium). In the early 1990s, the Berkeley and Darmstadt groups made a collaborative attempt to create element 110. Experiments at Berkeley were unsuccessful, but eventually elements 110–112 (110, darmstadtium; 111, roentgenium and 112, copernicium) were identified at the Darmstadt laboratory. Subsequent work at the JINR laboratory at Dubna, led by Yuri Oganessian and a Russian-American team of scientists, was successful in identifying elements 113–118 (113, nihonium; 114, flerovium; 115, moscovium; 116, livermorium; 117, tennessine and 118, oganesson), thereby completing the Period 7 elements of the periodic table of the elements.
Inventions
Ghiorso invented numerous techniques and machines for isolating and identifying heavy elements atom-by-atom. He is generally credited with implementing the multichannel analyzer and the technique of recoil to isolate reaction products, although both of these were significant extensions of previously understood concepts. His concept for a new type of accelerator, the Omnitron, is acknowledged to have been a brilliant advance that probably would have enabled the Berkeley lab to discover numerous additional new elements, but the machine was never built, a victim of the evolving political landscape of the 1970s in the U.S. that de-emphasized basic nuclear research and greatly expanded research on environmental, health, and safety issues. Partially as a result of the failure to build the Omnitron, Ghiorso (together with colleagues Bob Main and others) conceived the joining of the HILAC and the Bevatron, which he called the Bevalac. This combination machine, an ungainly articulation across the steep slope at the Rad Lab, provided heavy ions at GeV energies, thereby enabling development of two new fields of research: "high-energy nuclear physics," meaning that the compound nucleus is sufficiently hot to exhibit collective dynamical effects, and heavy ion therapy, in which high-energy ions are used to irradiate tumors in cancer patients. Both of these fields have expanded into activities in many laboratories and clinics worldwide.
Later life
In his later years, Ghiorso continued research toward finding superheavy elements, fusion energy, and innovative electron beam sources. He was a non-participating co-author of the experiments in 1999 that gave evidence of elements 116 and 118, which later turned out to be a case of scientific fraud perpetrated by the first author, Victor Ninov. He also had brief research interests in the free quark experiment of William Fairbank of Stanford, in the discovery of element 43, and in the electron disk accelerator, among others.
Legacy
Albert Ghiorso is credited with having co-discovered the following elements:
Americium ca. 1945 (element 95)
Curium in 1944 (element 96)
Berkelium in 1949 (element 97)
Californium in 1950 (element 98)
Einsteinium in 1952 (element 99)
Fermium in 1953 (element 100)
Mendelevium in 1955 (element 101)
Nobelium in 1958–59 (element 102)
Lawrencium in 1961 (element 103)
Rutherfordium in 1969 (element 104)
Dubnium in 1970 (element 105)
Seaborgium in 1974 (element 106)
Ghiorso personally selected some of the names recommended by his group for the new elements. His original name for element 105 (hahnium) was changed by the International Union of Pure and Applied Chemistry (IUPAC) to dubnium, to recognize the contributions of the laboratory at Dubna, Russia, in the search for trans-fermium elements. His recommendation for element 106, seaborgium, was accepted only after extensive debate about naming an element after a living person. In 1999, evidence for two superheavy elements (element 116 and element 118) was published by a group in Berkeley. The discovery group intended to propose the name ghiorsium for element 118, but eventually the data were found to have been tampered with, and in 2002 the claims were withdrawn. Ghiorso's lifetime output comprised about 170 technical papers, most published in The Physical Review.
Ghiorso was famous among his colleagues for his endless stream of creative "doodles," which define an art form suggestive of fractals. He also developed a state-of-the-art camera for birdwatching, and was a constant supporter of environmental causes and organizations.
Several obituaries are available online, and a full-length biography is in preparation.
Notes
References
Images in the LBNL archives
1915 births
2010 deaths
American atheists
American electrical engineers
Discoverers of chemical elements
People involved with the periodic table
Manhattan Project people
People from Alameda, California
People from Vallejo, California
UC Berkeley College of Engineering alumni
Howard N. Potts Medal recipients
Fellows of the American Physical Society | Albert Ghiorso | [
"Chemistry"
] | 1,879 | [
"Periodic table",
"People involved with the periodic table"
] |
164,670 | https://en.wikipedia.org/wiki/Cantharidin | Cantharidin is an odorless, colorless fatty substance of the terpenoid class, which is secreted by many species of blister beetles. Its main current use in pharmacology is treating molluscum contagiosum and warts topically. It is a burn agent and poisonous in large doses, and has historically been used as an aphrodisiac (Spanish fly). In its natural form, cantharidin is secreted by the male blister beetle, and given to the female as a copulatory gift during mating. Afterwards, the female beetle covers her eggs with it as a defense against predators.
Poisoning from cantharidin is a significant veterinary concern, especially in horses, but it can also be poisonous to humans if taken internally (where the source is usually experimental self-exposure). Externally, cantharidin is a potent vesicant (blistering agent), exposure to which can cause severe chemical burns. Properly dosed and applied, the same properties have also been used therapeutically, for instance, for treatment of skin conditions, such as molluscum contagiosum infection of the skin.
Cantharidin is classified as an extremely hazardous substance in the United States, and is subject to strict reporting requirements by facilities that produce, store, or use it in significant quantities.
Chemistry
Structure and nomenclature
Cantharidin, from the Greek kantharis, for beetle, is an odorless, colorless natural product with solubility in various organic solvents, but only slight solubility in water. Its skeleton is tricyclic, formally, a tricyclo-[5.2.1.02,6]decane skeleton. Its functionalities include a carboxylic acid anhydride (−CO−O−CO−) substructure in one of its rings, as well as a bridging ether in its bicyclic ring system.
The complete mechanism of the biosynthesis of cantharidin is unknown. Its framework formally consists of two isoprene units. However, feeding studies indicate that the biosynthetic process is more complicated, and not a simple product of geranyl pyrophosphate or related ten-carbon parent structure, as the seeming monoterpene nature would suggest. Instead, there is a farnesol (15-carbon) precursor from which certain carbon segments are later excised.
Distribution and availability
The level of cantharidin in blister beetles can be quite variable. Among blister beetles of the genus Epicauta in Colorado, E. pennsylvanica contains about 0.2 mg, E. maculata contains 0.7 mg, and E. immaculata contains 4.8 mg per beetle; males also contain higher levels than females.
Males of Berberomeloe majalis have a higher level of cantharidin per beetle than females: 64.22 ± 51.28 mg/g (dry weight) versus 9.10 ± 12.64 mg/g (dry weight). Cantharidin content in haemolymph is also higher in males (80.9 ± 106.5 μg/g) than in females (20.0 ± 41.5 μg/g).
History
Aphrodisiac preparations
Preparations made from blister beetles (particularly "Spanish fly") have been used since ancient times as an aphrodisiac, possibly because their physical effects were perceived to mimic those of sexual arousal, and because they can cause prolonged erection or priapism in men. These preparations were known as cantharides, from the Greek word for "beetle".
Examples of such use found in historical sources include:
The ancient Roman historian Tacitus relates that a cantharid preparation was used by the empress Livia, wife of Augustus Caesar, to entice members of the imperial family or dinner guests to commit sexual indiscretions (thus, providing her information to hold over them).
The German emperor Henry IV (1050–1106) is said to have consumed cantharides.
The French surgeon Ambroise Paré (1510–1590) described a case in 1572 of a man suffering from "the most frightful satyriasis" after taking a potion composed of nettles and a cantharid extract. This is perhaps the same man of whom Paré relates that a courtesan sprinkled a cantharid powder on food she served to him, after which the man experienced "violent priapism" and anal bleeding, of which he later died. Paré also cites the case of a priest who died of hematuria after swallowing a dose of cantharides, which he intended to fortify his sex drive.
Cantharides were in widespread use among the upper classes in France in the 1600s, despite being a banned substance. Police searches in connection with a rash of poisonings around 1680 turned up many stashes of "bluish flies", which were known to be used in the preparation of aphrodisiac potions.
The French sorceress Catherine Monvoisin (known as "La Voisin," c. 1640–1680) was recorded in the 1670s as having prepared a love charm made from Spanish fly mixed with dried mole's blood and bat's blood.
Aphrodisiac sweets presumably laced with cantharides were circulated within libertine circles during the 1700s in France. They were multicolored tablets nicknamed "pastilles de Richelieu," after the Maréchal de Richelieu, a notorious libertine (not to be confused with his great-uncle, the Cardinal Richelieu) who procured sexual encounters for King Louis XV.
The French writer Donatien Alphonse François — notoriously known as the Marquis de Sade (1740–1814) — is said to have given aniseed-flavored pastilles laced with Spanish fly to two prostitutes at a pair of orgies in 1772, poisoning and nearly killing them. He was sentenced to death for that (and for the crime of sodomy), but was later reprieved on appeal.
Non-aphrodisiac uses
The Spanish clergyman Juan de Horozco y Covarrubias (es) (c. 1540–1610) reported the use of blister beetles as a poison as well as an aphrodisiac.
Preparations of dried blister beetles were at one time used as a treatment for smallpox. As late as 1892, Andrew Taylor Still, the founder of osteopathy, recommended inhaling a tincture of cantharidin as an effective preventative and treatment for smallpox, decrying vaccination.
Pharmaco-chemical isolation
Cantharidin was first isolated as a chemically pure substance in 1810 by Pierre Robiquet, a French chemist then living in Paris. Robiquet isolated cantharidin as the active ingredient in pharmacological preparations of Lytta vesicatoria, a.k.a. "Spanish fly", a species of blister beetle. This was one of the first historical instances of the identification and extraction of a simple active principle from a complex medicine.
Robiquet found cantharidin to be an odorless and colorless solid at room temperature. He demonstrated that it was the active principle responsible for the aggressively blistering properties of the coating of the eggs of the blister beetle, and additionally established that cantharidin had toxic properties comparable in degree to those of the most virulent poisons known in the 19th century, such as strychnine.
Other uses of the pharmacological isolate
Diluted solutions of cantharidin can be used as a topical medication to remove warts and tattoos, and to treat the small papules of molluscum contagiosum.
In Santería rituals, cantharides are used in incense.
Veterinary issues
Poisoning by Epicauta species from cantharidin is a significant veterinary concern, especially in horses; species infesting feedstocks depend on region—e.g., Epicauta pennsylvanica (black blister beetle) in the U.S. midwest; and E. occidentalis, temexia, and vittata species (striped blister beetles) in the U.S. southwest—where the concentrations of the agent in each can vary substantially. Beetles feed on weeds, and occasionally move into crop fields used to produce livestock feeds (e.g., alfalfa), where they are found to cluster and find their way into baled hay, e.g., a single flake (4–5 in. section) may have several hundred insects, or none at all. Horses are very sensitive to the cantharidin produced by beetle infestations: the lethal dose for horses is roughly 1 mg/kg of the horse's body weight. Horses may be accidentally poisoned when fed bales of fodder with blister beetles in them.
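As a feed-safety illustration of these figures, the arithmetic below estimates how many beetles of each species quoted earlier would contain enough cantharidin to reach the roughly 1 mg/kg threshold for a horse; the 500 kg body mass is an assumption of the sketch, and the per-beetle contents are the values given earlier in the article.

```python
def beetles_containing_dose(dose_mg_per_kg, animal_mass_kg, mg_cantharidin_per_beetle):
    # Number of beetles whose combined cantharidin content equals the stated dose.
    # Purely illustrative feed-safety arithmetic, not a toxicological model.
    return (dose_mg_per_kg * animal_mass_kg) / mg_cantharidin_per_beetle

# ~1 mg/kg threshold from the text, an assumed 500 kg horse, and the per-beetle
# contents quoted earlier for three Epicauta species
for species, mg in [("E. pennsylvanica", 0.2), ("E. maculata", 0.7), ("E. immaculata", 4.8)]:
    print(species, round(beetles_containing_dose(1.0, 500.0, mg)))
```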
Great bustards, a strongly polygynous bird species, are not immune to the toxicity of cantharidin; they become intoxicated after ingesting blister beetles. However, cantharidin has activity also against parasites that infect them. Great bustards may eat toxic blister beetles of the genus Meloe to increase the sexual arousal of males.
Human medical issues
General risks
As a blister agent, cantharidin has the potential to cause adverse effects when used medically; for this reason, it has been included in a list of "problem drugs" used by dermatologists and emergency personnel. However, this references unregulated sources of cantharidin. In July 2023, the US FDA approved a topical formulation of cantharidin (Ycanth) for the treatment of molluscum contagiosum.
When ingested by humans, the exact lethal dose is unknown, but fatal doses have been recorded between 10 mg and 65 mg. The median lethal dose appears to be around 1 mg/kg, but individuals have survived after consuming oral doses as high as 175 mg. Ingesting cantharidin can initially cause severe damage to the lining of the gastrointestinal and urinary tracts, and may also cause permanent renal damage. Symptoms of cantharidin poisoning include blood in the urine, abdominal pain, and (rarely) prolonged erections.
Risks of aphrodisiac use
The extreme toxicity of cantharidin makes any use as an aphrodisiac highly dangerous. As a result, it is illegal to sell (or use) cantharidin or preparations containing it without a prescription in many countries.
Research
Mechanism of action
Topical cantharidin is absorbed by the lipid membranes of epidermal cells, causing the release of serine proteases, enzymes that break the peptide bonds in proteins. This causes the disintegration of desmosomal plaques, cellular structures involved in cell-to-cell adhesion, leading to detachment of the tonofilaments that hold cells together. The process leads to the loss of cellular connections (acantholysis), and ultimately results in blistering of the skin. Lesions heal without scarring.
Pharmaceutical use
VP-102, an experimental drug-device combination that includes cantharidin delivered via a single-use applicator, is being studied for the treatment of molluscum contagiosum, common warts, and genital warts.
Bioactivities
Cantharidin appears to have some effect in the topical treatment of cutaneous leishmaniasis in animal models. In addition to topical medical applications, cantharidin and its analogues may have activity against cancer cells.
Laboratory studies with cultured tumor cells suggest that this activity may be the result of PP2A inhibition.
Notes
References
External links
Cantharidin : origin and synthesis at Lycée Faidherbe de Lille
Terpenes and terpenoids
Phosphatase inhibitors
Carboxylic anhydrides
Ethers
Blister agents
Insects in culture
Dermatoxins
Heterocyclic compounds with 3 rings
Oxygen heterocycles | Cantharidin | [
"Chemistry"
] | 2,543 | [
"Biomolecules by chemical classification",
"Natural products",
"Blister agents",
"Chemical weapons",
"Functional groups",
"Organic compounds",
"Ethers",
"Terpenes and terpenoids"
] |
164,897 | https://en.wikipedia.org/wiki/Three-domain%20system | The three-domain system is a taxonomic classification system that groups all cellular life into three domains, namely Archaea, Bacteria and Eukarya, introduced by Carl Woese, Otto Kandler and Mark Wheelis in 1990. The key difference from earlier classifications such as the two-empire system and the five-kingdom classification is the splitting of Archaea (previously named "archaebacteria") from Bacteria as completely different organisms.
The three-domain hypothesis is no longer considered valid, since it is now widely recognized that eukaryotes do not form a separate domain of life: they originated from a fusion of two lineages, one from within Archaea and one from within Bacteria.
Background
Woese argued, on the basis of differences in 16S rRNA genes, that bacteria, archaea, and eukaryotes each arose separately from an ancestor with poorly developed genetic machinery, often called a progenote. To reflect these primary lines of descent, he treated each as a domain, divided into several different kingdoms. Originally his split of the prokaryotes was into Eubacteria (now Bacteria) and Archaebacteria (now Archaea). Woese initially used the term "kingdom" to refer to the three primary phylogenetic groupings, and this nomenclature was widely used until the term "domain" was adopted in 1990.
Acceptance of the validity of Woese's phylogenetic classification was a slow process. Prominent biologists including Salvador Luria and Ernst Mayr objected to his division of the prokaryotes. Not all criticism of him was restricted to the scientific level. A decade of labor-intensive oligonucleotide cataloging left him with a reputation as "a crank", and Woese would go on to be dubbed "Microbiology's Scarred Revolutionary" by a news article printed in the journal Science in 1997. The growing amount of supporting data led the scientific community to accept the Archaea by the mid-1980s. Today, very few scientists still accept the concept of a unified Prokarya.
Classification
The three-domain system adds a level of classification (the domains) "above" the kingdoms present in the previously used five- or six-kingdom systems. This classification system recognizes the fundamental divide between the two prokaryotic groups, insofar as Archaea appear to be more closely related to eukaryotes than they are to other prokaryotes – bacteria-like organisms with no cell nucleus. The three-domain system sorts the previously known kingdoms into these three domains: Archaea, Bacteria, and Eukarya.
Domain Archaea
The Archaea are prokaryotic, with no nuclear membrane, but with biochemistry and RNA markers that are distinct from bacteria. The archaeans possess unique, ancient evolutionary history for which they are considered some of the oldest species of organisms on Earth, most notably their diverse, exotic metabolisms.
Some examples of archaeal organisms are:
methanogens – which produce the gas methane
halophiles – which live in very salty water
thermoacidophiles – which thrive in acidic high-temperature water
Domain Bacteria
The Bacteria are also prokaryotic; their domain consists of cells with bacterial rRNA, no nuclear membrane, and whose membranes possess primarily diacyl glycerol diester lipids. Traditionally classified as bacteria, many thrive in the same environments favored by humans, and were the first prokaryotes discovered; they were briefly called the Eubacteria or "true" bacteria when the Archaea were first recognized as a distinct clade.
Most known pathogenic prokaryotic organisms belong to bacteria (see for exceptions). For that reason, and because the Archaea are typically difficult to grow in laboratories, Bacteria are currently studied more extensively than Archaea.
Some examples of bacteria include:
"Cyanobacteria" – photosynthesizing bacteria that are related to the chloroplasts of eukaryotic plants and algae
Spirochaetota – Gram-negative bacteria that include those causing syphilis and Lyme disease
Actinomycetota – Gram-positive bacteria including Bifidobacterium animalis which is present in the human large intestine
Domain Eukarya
Eukaryota are organisms whose cells contain a membrane-bound nucleus. They include many large single-celled organisms and all known non-microscopic organisms. The domain contains, for example:
Holomycota – mushrooms and allies
Viridiplantae – green plants
Holozoa – animals and allies
Stramenopiles – includes brown algae
Amoebozoa – solitary and social amoebae
Discoba – includes euglenoids
Niches
Each of the three cell types tends to fit into recurring specialities or roles. Bacteria tend to be the most prolific reproducers, at least in moderate environments. Archaeans tend to adapt quickly to extreme environments, such as high temperatures, high acids, high sulfur, etc. This includes adapting to use a wide variety of food sources. Eukaryotes are the most flexible with regard to forming cooperative colonies, such as in multi-cellular organisms, including humans. In fact, the structure of a eukaryote is likely to have derived from a joining of different cell types, forming organelles.
Parakaryon myojinensis (incertae sedis) is a single-celled organism that may be a unique exception to these groupings: "This organism appears to be a life form distinct from prokaryotes and eukaryotes", with features of both.
Alternatives
Parts of the three-domain theory have been challenged by scientists including Ernst Mayr, Thomas Cavalier-Smith, and Radhey S. Gupta.
Recent work has proposed that Eukaryota may have actually branched off from the domain Archaea. According to Spang et al., Lokiarchaeota forms a monophyletic group with eukaryotes in phylogenomic analyses. The associated genomes also encode an expanded repertoire of eukaryotic signature proteins that are suggestive of sophisticated membrane remodelling capabilities. This work suggests a two-domain system as opposed to the three-domain system. Exactly how and when Archaea, Bacteria, and Eucarya developed and how they are related continues to be debated.
See also
Two-domain system
Neomura
Bacterial phyla
Eocyte hypothesis
Taxonomy
Two-empire system
References
Biological classification
High-level systems of taxonomy
Biology controversies | Three-domain system | [
"Biology"
] | 1,341 | [
"nan"
] |
164,901 | https://en.wikipedia.org/wiki/Post-translational%20modification | In molecular biology, post-translational modification (PTM) is the covalent process of changing proteins following protein biosynthesis. PTMs may involve enzymes or occur spontaneously. Proteins are created by ribosomes, which translate mRNA into polypeptide chains, which may then change to form the mature protein product. PTMs are important components in cell signalling, as for example when prohormones are converted to hormones.
Post-translational modifications can occur on the amino acid side chains or at the protein's C- or N- termini. They can expand the chemical set of the 22 amino acids by changing an existing functional group or adding a new one such as phosphate. Phosphorylation is highly effective for controlling the enzyme activity and is the most common change after translation. Many eukaryotic and prokaryotic proteins also have carbohydrate molecules attached to them in a process called glycosylation, which can promote protein folding and improve stability as well as serving regulatory functions. Attachment of lipid molecules, known as lipidation, often targets a protein or part of a protein attached to the cell membrane.
Other forms of post-translational modification consist of cleaving peptide bonds, as in processing a propeptide to a mature form or removing the initiator methionine residue. The formation of disulfide bonds from cysteine residues may also be referred to as a post-translational modification. For instance, the peptide hormone insulin is cut twice after disulfide bonds are formed, and a propeptide is removed from the middle of the chain; the resulting protein consists of two polypeptide chains connected by disulfide bonds.
Some types of post-translational modification are consequences of oxidative stress. Carbonylation is one example that targets the modified protein for degradation and can result in the formation of protein aggregates. Specific amino acid modifications can be used as biomarkers indicating oxidative damage.
Sites that often undergo post-translational modification are those that have a functional group that can serve as a nucleophile in the reaction: the hydroxyl groups of serine, threonine, and tyrosine; the amine forms of lysine, arginine, and histidine; the thiolate anion of cysteine; the carboxylates of aspartate and glutamate; and the N- and C-termini. In addition, although the amide of asparagine is a weak nucleophile, it can serve as an attachment point for glycans. Rarer modifications can occur at oxidized methionines and at some methylene groups in side chains.
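To make the distribution of such candidate sites concrete, the following Python sketch tallies residues that commonly serve as acceptors; the example sequence and the residue-to-modification mapping are illustrative assumptions, not a validated prediction tool.

```python
from collections import Counter

# Residues whose side chains commonly serve as attachment points for PTMs,
# paired with a typical modification. Illustrative only, not exhaustive.
COMMON_PTM_SITES = {
    "S": "phosphorylation / O-glycosylation",
    "T": "phosphorylation / O-glycosylation",
    "Y": "phosphorylation / sulfation",
    "K": "acetylation / ubiquitination / methylation",
    "R": "methylation / citrullination",
    "H": "phosphorylation (N-linked)",
    "C": "disulfide bond / palmitoylation / S-nitrosylation",
    "N": "N-linked glycosylation / deamidation",
    "D": "isoaspartate formation",
    "E": "gamma-carboxylation",
}

def candidate_ptm_sites(sequence: str) -> Counter:
    """Count residues in a protein sequence that are common PTM acceptors."""
    return Counter(aa for aa in sequence.upper() if aa in COMMON_PTM_SITES)

# Example with a made-up peptide sequence.
peptide = "MKTAYSSNLCREK"
for residue, count in candidate_ptm_sites(peptide).items():
    print(f"{residue}: {count} site(s), e.g. {COMMON_PTM_SITES[residue]}")
```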
Post-translational modification of proteins can be experimentally detected by a variety of techniques, including mass spectrometry, Eastern blotting, and Western blotting.
PTMs involving addition of functional groups
Addition by an enzyme in vivo
Hydrophobic groups for membrane localization
myristoylation (a type of acylation), attachment of myristate, a C14 saturated acid
palmitoylation (a type of acylation), attachment of palmitate, a C16 saturated acid
isoprenylation or prenylation, the addition of an isoprenoid group (e.g. farnesol and geranylgeraniol)
farnesylation
geranylgeranylation
glypiation, glycosylphosphatidylinositol (GPI) anchor formation via an amide bond to C-terminal tail
Cofactors for enhanced enzymatic activity
lipoylation (a type of acylation), attachment of a lipoate (C8) functional group
flavin moiety (FMN or FAD) may be covalently attached
heme C attachment via thioether bonds with cysteines
phosphopantetheinylation, the addition of a 4'-phosphopantetheinyl moiety from coenzyme A, as in fatty acid, polyketide, non-ribosomal peptide and leucine biosynthesis
retinylidene Schiff base formation
Modifications of translation factors
diphthamide formation (on a histidine found in eEF2)
ethanolamine phosphoglycerol attachment (on glutamate found in eEF1α)
hypusine formation (on conserved lysine of eIF5A (eukaryotic) and aIF5A (archaeal))
beta-Lysine addition on a conserved lysine of the elongation factor P (EFP) in most bacteria. EFP is a homolog to eIF5A (eukaryotic) and aIF5A (archaeal) (see above).
Smaller chemical groups
acylation, e.g. O-acylation (esters), N-acylation (amides), S-acylation (thioesters)
acetylation, the addition of an acetyl group, either at the N-terminus of the protein or at lysine residues. The reverse is called deacetylation.
formylation
alkylation, the addition of an alkyl group, e.g. methyl, ethyl
methylation, the addition of a methyl group, usually at lysine or arginine residues. The reverse is called demethylation.
amidation at C-terminus. Formed by oxidative dissociation of a C-terminal Gly residue.
amide bond formation
amino acid addition
arginylation, a tRNA-mediated addition
polyglutamylation, covalent linkage of glutamic acid residues to the N-terminus of tubulin and some other proteins. (See tubulin polyglutamylase)
polyglycylation, covalent linkage of one to more than 40 glycine residues to the tubulin C-terminal tail
butyrylation
gamma-carboxylation dependent on Vitamin K
glycosylation, the addition of a glycosyl group to either arginine, asparagine, cysteine, hydroxylysine, serine, threonine, tyrosine, or tryptophan resulting in a glycoprotein. Distinct from glycation, which is regarded as a nonenzymatic attachment of sugars.
O-GlcNAc, addition of N-acetylglucosamine to serine or threonine residues in a β-glycosidic linkage
polysialylation, addition of polysialic acid, PSA, to NCAM
malonylation
hydroxylation: addition of an oxygen atom to the side-chain of a Pro or Lys residue
iodination: addition of an iodine atom to the aromatic ring of a tyrosine residue (e.g. in thyroglobulin)
nucleotide addition such as ADP-ribosylation
phosphate ester (O-linked) or phosphoramidate (N-linked) formation
phosphorylation, the addition of a phosphate group, usually to serine, threonine, and tyrosine (O-linked), or histidine (N-linked)
adenylylation, the addition of an adenylyl moiety, usually to tyrosine (O-linked), or histidine and lysine (N-linked)
uridylylation, the addition of a uridylyl group (i.e. uridine monophosphate, UMP), usually to tyrosine
propionylation
pyroglutamate formation
S-glutathionylation
S-nitrosylation
S-sulfenylation (aka S-sulphenylation), reversible covalent addition of one oxygen atom to the thiol group of a cysteine residue
S-sulfinylation, normally irreversible covalent addition of two oxygen atoms to the thiol group of a cysteine residue
S-sulfonylation, normally irreversible covalent addition of three oxygen atoms to the thiol group of a cysteine residue, resulting in the formation of a cysteic acid residue
succinylation, the addition of a succinyl group to lysine
sulfation, the addition of a sulfate group to a tyrosine.
Non-enzymatic modifications in vivo
Examples of non-enzymatic PTMs are glycation, glycoxidation, nitrosylation, oxidation, succination, and lipoxidation.
glycation, the addition of a sugar molecule to a protein without the controlling action of an enzyme.
carbamylation, the addition of isocyanic acid to a protein's N-terminus or the side-chain of Lys.
carbonylation, the introduction of reactive carbonyl groups (aldehydes and ketones) into protein side chains, typically as a result of oxidative stress.
spontaneous isopeptide bond formation, as found in many surface proteins of Gram-positive bacteria.
Non-enzymatic additions in vitro
biotinylation: covalent attachment of a biotin moiety using a biotinylation reagent, typically for the purpose of labeling a protein.
carbamylation: the addition of isocyanic acid to a protein's N-terminus or the side-chain of Lys or Cys residues, typically resulting from exposure to urea solutions.
oxidation: addition of one or more oxygen atoms to a susceptible side-chain, principally of Met, Trp, His or Cys residues; also the formation of disulfide bonds between Cys residues.
pegylation: covalent attachment of polyethylene glycol (PEG) using a pegylation reagent, typically to the N-terminus or the side-chains of Lys residues. Pegylation is used to improve the efficacy of protein pharmaceuticals.
Conjugation with other proteins or peptides
ubiquitination, the covalent linkage to the protein ubiquitin.
SUMOylation, the covalent linkage to the SUMO protein (Small Ubiquitin-related MOdifier)
neddylation, the covalent linkage to the Nedd protein
ISGylation, the covalent linkage to the ISG15 protein (Interferon-Stimulated Gene 15)
pupylation, the covalent linkage to the prokaryotic ubiquitin-like protein
Chemical modification of amino acids
citrullination, or deimination, the conversion of arginine to citrulline
deamidation, the conversion of glutamine to glutamic acid or asparagine to aspartic acid
eliminylation, the conversion to an alkene by beta-elimination of phosphothreonine and phosphoserine, or dehydration of threonine and serine
Structural changes
disulfide bridges, the covalent linkage of two cysteine amino acids
lysine-cysteine bridges, the covalent linkage of one lysine and one or two cysteine residues via an oxygen atom (NOS and SONOS bridges)
proteolytic cleavage, cleavage of a protein at a peptide bond
isoaspartate formation, via the cyclisation of asparagine or aspartic acid amino-acid residues
racemization
of serine by protein-serine epimerase
of alanine in dermorphin, a frog opioid peptide
of methionine in deltorphin, also a frog opioid peptide
protein splicing, self-catalytic removal of inteins analogous to mRNA processing
Statistics
Common PTMs by frequency
In 2011, statistics on each experimentally and putatively detected post-translational modification were compiled using proteome-wide information from the Swiss-Prot database. The 10 most common experimentally found modifications were as follows:
Common PTMs by residue
Some common post-translational modifications to specific amino-acid residues are shown below. Modifications occur on the side-chain unless indicated otherwise.
Databases and tools
Protein sequences contain sequence motifs that are recognized by modifying enzymes, and which can be documented or predicted in PTM databases. With the large number of different modifications being discovered, there is a need to document this sort of information in databases. PTM information can be collected through experimental means or predicted from high-quality, manually curated data. Numerous databases have been created, often with a focus on certain taxonomic groups (e.g. human proteins) or other features.
List of resources
PhosphoSitePlus – A database of comprehensive information and tools for the study of mammalian protein post-translational modification
ProteomeScout – A database of proteins and their experimentally observed post-translational modifications
Human Protein Reference Database – A database of different modifications that relates proteins, their classes, and their functions/processes to disease-causing proteins
PROSITE – A database of consensus patterns for many types of PTMs, including their sites
RESID – A database consisting of a collection of annotations and structures for PTMs.
iPTMnet – A database that integrates PTM information from several knowledgebases and text-mining results.
dbPTM – A database that catalogues different PTMs, with information on their chemical components/structures and the frequency of each modified amino-acid site
UniProt also has PTM information, although it may be less comprehensive than in more specialized databases (a minimal retrieval sketch follows this list).
The O-GlcNAc Database - A curated database for protein O-GlcNAcylation and referencing more than 14 000 protein entries and 10 000 O-GlcNAc sites.
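As a hedged illustration of retrieving such annotations programmatically, the sketch below queries the public UniProt REST endpoint using the `requests` library; the accession number, the feature-type strings, and the JSON field names are assumptions based on the API's layout at the time of writing and may differ in practice.

```python
import requests

# PTM-related feature types as they appear in UniProt's JSON output
# (assumed labels; check the live API documentation before relying on them).
PTM_FEATURE_TYPES = {"Modified residue", "Glycosylation", "Lipidation",
                     "Disulfide bond", "Cross-link"}

def list_ptm_features(accession: str) -> None:
    """Print PTM annotations for one UniProt entry via the public REST API."""
    url = f"https://rest.uniprot.org/uniprotkb/{accession}.json"
    entry = requests.get(url, timeout=30).json()
    for feature in entry.get("features", []):
        if feature.get("type") in PTM_FEATURE_TYPES:
            start = feature["location"]["start"]["value"]
            print(f"{feature['type']} at {start}: {feature.get('description', '')}")

# P68871 (human hemoglobin subunit beta) is only an example accession.
list_ptm_features("P68871")
```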
Tools
List of software for visualization of proteins and their PTMs
PyMOL – introduces a set of common PTMs into protein models
AWESOME – Interactive tool to see the role of single nucleotide polymorphisms in PTMs
Chimera – Interactive tool to visualize molecules
Case examples
Cleavage and formation of disulfide bridges during the production of insulin
PTM of histones as regulation of transcription: RNA polymerase control by chromatin structure
PTM of RNA polymerase II as regulation of transcription
Cleavage of polypeptide chains as crucial for lectin specificity
See also
Protein targeting
Post-translational regulation
References
External links
Controlled vocabulary of post-translational modifications in Uniprot
List of posttranslational modifications in ExPASy
Browse SCOP domains by PTM — from the dcGO database
Overview and description of commonly used post-translational modification detection techniques
Gene expression
Protein structure
Protein biosynthesis
Cell biology | Post-translational modification | [
"Chemistry",
"Biology"
] | 3,062 | [
"Protein biosynthesis",
"Cell biology",
"Gene expression",
"Biochemical reactions",
"Post-translational modification",
"Molecular genetics",
"Biosynthesis",
"Cellular processes",
"Structural biology",
"Molecular biology",
"Biochemistry",
"Protein structure"
] |
164,912 | https://en.wikipedia.org/wiki/Glycoprotein | Glycoproteins are proteins which contain oligosaccharide (sugar) chains covalently attached to amino acid side-chains. The carbohydrate is attached to the protein in a cotranslational or posttranslational modification. This process is known as glycosylation. Secreted extracellular proteins are often glycosylated.
In proteins that have segments extending extracellularly, the extracellular segments are also often glycosylated. Glycoproteins are also often important integral membrane proteins, where they play a role in cell–cell interactions. It is important to distinguish endoplasmic reticulum-based glycosylation of the secretory system from reversible cytosolic-nuclear glycosylation. Glycoproteins of the cytosol and nucleus can be modified through the reversible addition of a single GlcNAc residue that is considered reciprocal to phosphorylation and the functions of these are likely to be an additional regulatory mechanism that controls phosphorylation-based signalling. In contrast, classical secretory glycosylation can be structurally essential. For example, inhibition of asparagine-linked, i.e. N-linked, glycosylation can prevent proper glycoprotein folding and full inhibition can be toxic to an individual cell. In contrast, perturbation of glycan processing (enzymatic removal/addition of carbohydrate residues to the glycan), which occurs in both the endoplasmic reticulum and Golgi apparatus, is dispensable for isolated cells (as evidenced by survival with glycosides inhibitors) but can lead to human disease (congenital disorders of glycosylation) and can be lethal in animal models. It is therefore likely that the fine processing of glycans is important for endogenous functionality, such as cell trafficking, but that this is likely to have been secondary to its role in host-pathogen interactions. A famous example of this latter effect is the ABO blood group system.
Though there are different types of glycoproteins, the most common are N-linked and O-linked glycoproteins. These two types of glycoproteins are distinguished by structural differences that give them their names. Glycoproteins vary greatly in composition, making many different compounds such as antibodies or hormones. Due to the wide array of functions within the body, interest in glycoprotein synthesis for medical use has increased. There are now several methods to synthesize glycoproteins, including recombination and glycosylation of proteins.
Glycosylation is also known to occur on nucleocytoplasmic proteins in the form of O-GlcNAc.
Types of glycosylation
There are several types of glycosylation, although the first two are the most common.
In N-glycosylation, sugars are attached to nitrogen, typically on the amide side-chain of asparagine.
In O-glycosylation, sugars are attached to oxygen, typically on serine or threonine, but also on tyrosine or non-canonical amino acids such as hydroxylysine and hydroxyproline.
In P-glycosylation, sugars are attached to phosphorus on a phosphoserine.
In C-glycosylation, sugars are attached directly to carbon, such as in the addition of mannose to tryptophan.
In S-glycosylation, a beta-GlcNAc is attached to the sulfur atom of a cysteine residue.
In glypiation, a GPI glycolipid is attached to the C-terminus of a polypeptide, serving as a membrane anchor.
In glycation, also known as non-enzymatic glycosylation, sugars are covalently bonded to a protein or lipid molecule, without the controlling action of an enzyme, but through a Maillard reaction.
Monosaccharides
Monosaccharides commonly found in eukaryotic glycoproteins include:
The sugar group(s) can assist in protein folding, improve proteins' stability and are involved in cell signalling.
Structure
The critical structural element of all glycoproteins is having oligosaccharides bonded covalently to a protein. There are 10 common monosaccharides in mammalian glycans: glucose (Glc), fucose (Fuc), xylose (Xyl), mannose (Man), galactose (Gal), N-acetylglucosamine (GlcNAc), glucuronic acid (GlcA), iduronic acid (IdoA), N-acetylgalactosamine (GalNAc), and sialic acid (5-N-acetylneuraminic acid, Neu5Ac). These glycans link themselves to specific areas of the protein amino acid chain.
The two most common linkages in glycoproteins are N-linked and O-linked. In an N-linked glycoprotein, the glycan is bonded to the nitrogen of an asparagine side chain within the protein sequence. In an O-linked glycoprotein, the sugar is bonded to an oxygen atom of a serine or threonine residue in the protein.
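N-linked glycosylation typically occurs at asparagines within the sequon Asn-X-Ser/Thr, where X is any residue except proline. The short Python sketch below scans a sequence for this motif; it is a simplification only, since matching the sequon does not guarantee the site is glycosylated in vivo, and the example sequence is made up.

```python
import re

# Canonical N-glycosylation sequon: Asn, then any residue except Pro, then Ser or Thr.
SEQON = re.compile(r"N[^P][ST]")

def n_glycosylation_sequons(sequence: str) -> list[int]:
    """Return 1-based positions of Asn residues that sit in an N-X-S/T sequon."""
    return [m.start() + 1 for m in SEQON.finditer(sequence.upper())]

# Made-up sequence fragment; matching the motif does not prove the site is used.
print(n_glycosylation_sequons("MKNVSAQPNGTLLNPTK"))  # [3, 9]
```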
Glycoprotein size and composition vary widely, with carbohydrate content ranging from 1% to 70% of the total mass of the glycoprotein. Glycoproteins appear in the blood, in the extracellular matrix, and on the outer surface of the plasma membrane, and make up a large portion of the proteins secreted by eukaryotic cells. They are very broad in their applications and can function as a variety of chemicals from antibodies to hormones.
Glycomics
Glycomics is the study of the carbohydrate components of cells. Though not exclusive to glycoproteins, it can reveal more information about different glycoproteins and their structure. One of the purposes of this field of study is to determine which proteins are glycosylated and where in the amino acid sequence the glycosylation occurs. Historically, mass spectrometry has been used to identify the structure of glycoproteins and characterize the carbohydrate chains attached.
Examples
The unique interactions of the oligosaccharide chains have different applications. First, they aid in quality control by identifying misfolded proteins. The oligosaccharide chains also change the solubility and polarity of the proteins that they are bonded to. For example, if the oligosaccharide chains are negatively charged, with enough density around the protein, they can repel proteolytic enzymes away from the bonded protein. The diversity in interactions lends itself to different types of glycoproteins with different structures and functions.
One example of glycoproteins found in the body is mucins, which are secreted in the mucus of the respiratory and digestive tracts. The sugars when attached to mucins give them considerable water-holding capacity and also make them resistant to proteolysis by digestive enzymes.
Glycoproteins are important for white blood cell recognition. Examples of glycoproteins in the immune system are:
molecules such as antibodies (immunoglobulins), which interact directly with antigens.
molecules of the major histocompatibility complex (or MHC), which are expressed on the surface of cells and interact with T cells as part of the adaptive immune response.
sialyl Lewis X antigen on the surface of leukocytes.
H antigen of the ABO blood compatibility antigens.
Other examples of glycoproteins include:
gonadotropins (luteinizing hormone and follicle-stimulating hormone)
glycoprotein IIb/IIIa, an integrin found on platelets that is required for normal platelet aggregation and adherence to the endothelium.
components of the zona pellucida, which surrounds the oocyte, and is important for sperm-egg interaction.
structural glycoproteins, which occur in connective tissue. These help bind together the fibers, cells, and ground substance of connective tissue. They may also help components of the tissue bind to inorganic substances, such as calcium in bone.
Glycoprotein-41 (gp41) and glycoprotein-120 (gp120) are HIV viral coat proteins.
Soluble glycoproteins often show a high viscosity, for example, in egg white and blood plasma.
Miraculin is a glycoprotein extracted from Synsepalum dulcificum, a berry which alters human tongue receptors to recognize sour foods as sweet.
Variable surface glycoproteins allow the sleeping sickness Trypanosoma parasite to escape the immune response of the host.
The viral spike of the human immunodeficiency virus is heavily glycosylated. Approximately half the mass of the spike is glycosylation and the glycans act to limit antibody recognition as the glycans are assembled by the host cell and so are largely 'self'. Over time, some patients can evolve antibodies to recognise the HIV glycans and almost all so-called 'broadly neutralising antibodies (bnAbs) recognise some glycans. This is possible mainly because the unusually high density of glycans hinders normal glycan maturation and they are therefore trapped in the premature, high-mannose, state. This provides a window for immune recognition. In addition, as these glycans are much less variable than the underlying protein, they have emerged as promising targets for vaccine design.
P-glycoprotein is important in antitumor research because of its ability to block the effects of antitumor drugs. P-glycoprotein, or multidrug transporter (MDR1), is a type of ABC transporter that transports compounds out of cells. This transport of compounds out of cells includes drugs intended to be delivered to the cell, causing a decrease in drug effectiveness. Therefore, being able to inhibit this behavior would decrease P-glycoprotein interference in drug delivery, making this an important topic in drug discovery. For example, P-glycoprotein causes a decrease in anti-cancer drug accumulation within tumor cells, limiting the effectiveness of chemotherapies used to treat cancer.
Hormones
Hormones that are glycoproteins include:
Follicle-stimulating hormone
Luteinizing hormone
Thyroid-stimulating hormone
Human chorionic gonadotropin
Alpha-fetoprotein
Erythropoietin (EPO)
Distinction between glycoproteins and proteoglycans
Functions
Analysis
A variety of methods are used in the detection, purification, and structural analysis of glycoproteins.
Synthesis
The glycosylation of proteins has an array of different applications from influencing cell to cell communication to changing the thermal stability and the folding of proteins. Due to the unique abilities of glycoproteins, they can be used in many therapies. By understanding glycoproteins and their synthesis, they can be made to treat cancer, Crohn's Disease, high cholesterol, and more.
The process of glycosylation (binding a carbohydrate to a protein) is a post-translational modification, meaning it happens after the production of the protein. Glycosylation is a process that roughly half of all human proteins undergo and heavily influences the properties and functions of the protein. Within the cell, glycosylation occurs in the endoplasmic reticulum.
Recombination
There are several techniques for the assembly of glycoproteins. One technique utilizes recombination. The first consideration for this method is the choice of host, as there are many different factors that can influence the success of glycoprotein recombination such as cost, the host environment, the efficacy of the process, and other considerations. Some examples of host cells include E. coli, yeast, plant cells, insect cells, and mammalian cells. Of these options, mammalian cells are the most common because their use does not face the same challenges that other host cells do such as different glycan structures, shorter half life, and potential unwanted immune responses in humans. Of mammalian cells, the most common cell line used for recombinant glycoprotein production is the Chinese hamster ovary line. However, as technologies develop, the most promising cell lines for recombinant glycoprotein production are human cell lines.
Glycosylation
The formation of the link between the glycan and the protein is a key element of the synthesis of glycoproteins. The most common method of glycosylation of N-linked glycoproteins is through the reaction between a protected glycan and a protected asparagine. Similarly, an O-linked glycoprotein can be formed through the addition of a glycosyl donor to a protected serine or threonine. These two methods are examples of natural linkage. However, there are also methods of unnatural linkage. Some methods include ligation and a reaction between a serine-derived sulfamidate and thiohexoses in water. Once this linkage is complete, the amino acid sequence can be expanded upon using solid-phase peptide synthesis.
See also
Ero1
Female sperm storage
Glycocalyx
Glycome
Glycopeptide
Gp120
Gp41
Miraculin
P-glycoprotein
Proteoglycan
Ribophorin
Glycan
Protein
Monosaccharides
Notes and references
Further reading
External links
Carbohydrate chemistry | Glycoprotein | [
"Chemistry"
] | 2,989 | [
"Carbohydrate chemistry",
"nan",
"Chemical synthesis",
"Glycoproteins",
"Glycobiology"
] |
164,952 | https://en.wikipedia.org/wiki/Software%20design%20pattern | In software engineering, a software design pattern or design pattern is a general, reusable solution to a commonly occurring problem in many contexts in software design. A design pattern is not a rigid structure to be transplanted directly into source code. Rather, it is a description or a template for solving a particular type of problem that can be deployed in many different situations. Design patterns can be viewed as formalized best practices that the programmer may use to solve common problems when designing a software application or system.
Object-oriented design patterns typically show relationships and interactions between classes or objects, without specifying the final application classes or objects that are involved. Patterns that imply mutable state may be unsuited for functional programming languages. Some patterns can be rendered unnecessary in languages that have built-in support for solving the problem they are trying to solve, and object-oriented patterns are not necessarily suitable for non-object-oriented languages.
Design patterns may be viewed as a structured approach to computer programming intermediate between the levels of a programming paradigm and a concrete algorithm.
History
Patterns originated as an architectural concept by Christopher Alexander as early as 1977 in A Pattern Language (c.f. his article, "The Pattern of Streets," JOURNAL OF THE AIP, September, 1966, Vol. 32, No. 5, pp. 273–278). In 1987, Kent Beck and Ward Cunningham began experimenting with the idea of applying patterns to programming – specifically pattern languages – and presented their results at the OOPSLA conference that year. In the following years, Beck, Cunningham and others followed up on this work.
Design patterns gained popularity in computer science after the book Design Patterns: Elements of Reusable Object-Oriented Software was published in 1994 by the so-called "Gang of Four" (Erich Gamma, Richard Helm, Ralph Johnson and John Vlissides), which is frequently abbreviated as "GoF". That same year, the first Pattern Languages of Programming Conference was held, and the following year the Portland Pattern Repository was set up for documentation of design patterns. The scope of the term remains a matter of dispute. Notable books in the design pattern genre include:
Although design patterns have been applied practically for a long time, formalization of the concept of design patterns languished for several years.
Practice
Design patterns can speed up the development process by providing proven development paradigms. Effective software design requires considering issues that may not become apparent until later in the implementation. Freshly written code can often have hidden, subtle issues that take time to be detected; issues that sometimes can cause major problems down the road. Reusing design patterns can help to prevent such issues, and enhance code readability for those familiar with the patterns.
Software design techniques are difficult to apply to a broader range of problems. Design patterns provide general solutions, documented in a format that does not require specifics tied to a particular problem.
In 1996, Christopher Alexander was invited to give a Keynote Speech to the 1996 OOPSLA Convention. Here he reflected on how his work on Patterns in Architecture had developed and his hopes for how the Software Design community could help Architecture extend Patterns to create living structures that use generative schemes that are more like computer code.
Motif
A pattern describes a design motif, a.k.a. prototypical micro-architecture, as a set of program constituents (e.g., classes, methods...) and their relationships. A developer adapts the motif to their codebase to solve the problem described by the pattern. The resulting code has structure and organization similar to the chosen motif.
Domain-specific patterns
Efforts have also been made to codify design patterns in particular domains, including the use of existing design patterns as well as domain-specific design patterns. Examples include user interface design patterns, information visualization, secure design, "secure usability", Web design and business model design.
The annual Pattern Languages of Programming Conference proceedings include many examples of domain-specific patterns.
Object-oriented programming
Object-oriented design patterns typically show relationships and interactions between classes or objects, without specifying the final application classes or objects that are involved. Patterns that imply mutable state may be unsuited for functional programming languages. Some patterns can be rendered unnecessary in languages that have built-in support for solving the problem they are trying to solve, and object-oriented patterns are not necessarily suitable for non-object-oriented languages.
Examples
Design patterns can be organized into groups based on what kind of problem they solve. Creational patterns create objects. Structural patterns organize classes and objects to form larger structures that provide new functionality. Behavioral patterns define how objects communicate and distribute responsibility among one another.
Creational patterns
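As a minimal creational-pattern illustration (a Python sketch, not code drawn from the Design Patterns book), the Singleton ensures that a class has a single shared instance:

```python
class Singleton:
    """Creational pattern: at most one instance of the class ever exists."""
    _instance = None

    def __new__(cls, *args, **kwargs):
        # Create the instance on first use, then reuse it on every later call.
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance

a = Singleton()
b = Singleton()
assert a is b  # both names refer to the same shared object
```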
Structural patterns
Behavioral patterns
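As a minimal behavioral-pattern illustration (a Python sketch with made-up event names, not an example from the book), the Observer pattern lets a subject notify registered observers of events:

```python
class Subject:
    """Behavioral pattern: the subject notifies registered observers of events."""

    def __init__(self):
        self._observers = []

    def subscribe(self, callback):
        self._observers.append(callback)

    def notify(self, event):
        for callback in self._observers:
            callback(event)

subject = Subject()
subject.subscribe(lambda e: print(f"logger saw: {e}"))
subject.subscribe(lambda e: print(f"display updated with: {e}"))
subject.notify("temperature=21C")
```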
Concurrency patterns
Documentation
The documentation for a design pattern describes the context in which the pattern is used, the forces within the context that the pattern seeks to resolve, and the suggested solution. There is no single, standard format for documenting design patterns. Rather, a variety of different formats have been used by different pattern authors. However, according to Martin Fowler, certain pattern forms have become more well-known than others, and consequently become common starting points for new pattern-writing efforts. One example of a commonly used documentation format is the one used by Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides in their book Design Patterns. It contains the following sections:
Pattern Name and Classification: A descriptive and unique name that helps in identifying and referring to the pattern.
Intent: A description of the goal behind the pattern and the reason for using it.
Also Known As: Other names for the pattern.
Motivation (Forces): A scenario consisting of a problem and a context in which this pattern can be used.
Applicability: Situations in which this pattern is usable; the context for the pattern.
Structure: A graphical representation of the pattern. Class diagrams and Interaction diagrams may be used for this purpose.
Participants: A listing of the classes and objects used in the pattern and their roles in the design.
Collaboration: A description of how classes and objects used in the pattern interact with each other.
Consequences: A description of the results, side effects, and trade offs caused by using the pattern.
Implementation: A description of an implementation of the pattern; the solution part of the pattern.
Sample Code: An illustration of how the pattern can be used in a programming language.
Known Uses: Examples of real usages of the pattern.
Related Patterns: Other patterns that have some relationship with the pattern; discussion of the differences between the pattern and similar patterns.
Criticism
Some suggest that design patterns may be a sign that features are missing in a given programming language (Java or C++ for instance). Peter Norvig demonstrates that 16 out of the 23 patterns in the Design Patterns book (which is primarily focused on C++) are simplified or eliminated (via direct language support) in Lisp or Dylan. Related observations were made by Hannemann and Kiczales who implemented several of the 23 design patterns using an aspect-oriented programming language (AspectJ) and showed that code-level dependencies were removed from the implementations of 17 of the 23 design patterns and that aspect-oriented programming could simplify the implementations of design patterns.
See also Paul Graham's essay "Revenge of the Nerds".
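For instance, in a language with first-class functions the Strategy pattern often reduces to simply passing a function. The Python sketch below is an illustrative assumption of how such a simplification might look, not an example taken from Norvig's analysis.

```python
# Strategy pattern without a class hierarchy: the "strategy" is just a function.
def total_price(items, pricing_strategy):
    return sum(pricing_strategy(item) for item in items)

def full_price(item):
    return item["price"]

def ten_percent_off(item):
    return item["price"] * 0.9

cart = [{"price": 10.0}, {"price": 25.0}]
print(total_price(cart, full_price))       # 35.0
print(total_price(cart, ten_percent_off))  # 31.5
```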
Inappropriate use of patterns may unnecessarily increase complexity. FizzBuzzEnterpriseEdition offers a humorous example of over-complexity introduced by design patterns.
By definition, a pattern must be programmed anew into each application that uses it. Since some authors see this as a step backward from software reuse as provided by components, researchers have worked to turn patterns into components. Meyer and Arnout were able to provide full or partial componentization of two-thirds of the patterns they attempted.
In order to achieve flexibility, design patterns may introduce additional levels of indirection, which may complicate the resulting design and decrease runtime performance.
Relationship to other topics
Software design patterns offer finer granularity compared to software architecture patterns and software architecture styles, as design patterns focus on solving detailed, low-level design problems within individual components or subsystems. Examples include Singleton, Factory Method, and Observer.
Software Architecture Pattern refers to a reusable, proven solution to a recurring problem at the system level, addressing concerns related to the overall structure, component interactions, and quality attributes of the system. Software architecture patterns operate at a higher level of abstraction than design patterns, solving broader system-level challenges. While these patterns typically affect system-level concerns, the distinction between architectural patterns and architectural styles can sometimes be blurry. Examples include Circuit Breaker.
Software Architecture Style refers to a high-level structural organization that defines the overall system organization, specifying how components are organized, how they interact, and the constraints on those interactions. Architecture styles typically include a vocabulary of component and connector types, as well as semantic models for interpreting the system's properties. These styles represent the most coarse-grained level of system organization. Examples include Layered Architecture, Microservices, and Event-Driven Architecture.
See also
Abstraction principle
Algorithmic skeleton
Anti-pattern
Architectural pattern
Canonical protocol pattern
Debugging patterns
Design pattern
Distributed design patterns
Double-chance function
Enterprise Architecture framework
GRASP (object-oriented design)
Helper class
Idiom in programming
Interaction design pattern
List of software architecture styles and patterns
List of software development philosophies
List of software engineering topics
Pattern language
Pattern theory
Pedagogical patterns
Portland Pattern Repository
Refactoring
Software development methodology
References
Further reading
Software development | Software design pattern | [
"Technology",
"Engineering"
] | 1,926 | [
"Software engineering",
"Computer occupations",
"Software development"
] |
164,953 | https://en.wikipedia.org/wiki/Purity%20test | A purity test is a self-graded survey that assesses the participants' supposed degree of innocence in worldly matters (sex, drugs, deceit, and other activities assumed to be vices), generally on a percentage scale with 100% being the most and 0% being the least pure. Online purity tests were among the earliest of Internet memes, popular on Usenet beginning in the early 1980s. However, similar types of tests circulated under various names long before the existence of the Internet.
Historical examples
The Rice Thresher student newspaper in 1924 published the results of an informal ten-question survey of 119 female undergraduate students at Rice University, with questions like "Have you ever been drunk?", "Did you ever dance conspicuously?", and "Have you ever done anything that you wouldn't tell your mother?" The test has periodically been revisited and revised by the Thresher, usually on its satirical "Backpage," with expanded lists of questions that students can take and score on their own, starting with a list of 100 questions in 1988. An official website was created for the 2012 edition by the newspaper staff.
The Columbia University humor magazine, The Jester, reported in its October 1935 issue on a campus wide "purity test" conducted at Barnard College in 1935. The issue of The Jester was briefly censored, with distribution curtailed until the director of activities at the university could review the article. According to the editor-in-chief of The Jester, "We printed the survey to clear up some of the misconceptions that Columbia and the outside world have about Barnard girls," he said. "The results seem to establish that Barnard girls are quite regular. I fail to see anything off-color in the story. It's a sociological study."
In 1936, The Indian Express reported that students at Toronto University were "under-going a 'purity test', which took "the form of twenty very personal questions, designed to determine the state of their morals and their 'purity ratio'. For example, so many marks are lost for smoking, drinking, and every time the sinner kisses a girl or boy. Then, after truthfully answering all the questions, the total number of bad marks are added up and subtracted from a hundred. What is left, if any, is the 'purity ratio'. The test is unofficial and just what it will prove when completed nobody knows."
Alan Dundes, a professor of anthropology and folklore at the University of California, Berkeley, and Carl R. Pagter included examples of purity tests in their 1975 book Work Hard and You Shall Be Rewarded: Urban Folklore from the Paperwork Empire. They noted, "An indication of the particular sexual activities that are valued is provided by various versions of a questionnaire parody entitled 'Virtue Test' or 'Official Purity Test' or the like. It is obviously doubtful whether anyone would answer the questions posed on the test in an honest and truthful fashion. Nevertheless, the questions themselves serve to reveal a good deal about the American male's sexual fantasy life." Dundes and Pagter's book reprints a "Virtue Test" circulated at Indiana University in 1939 and a more contemporary "Official Purity Test" circulated at California Institute of Technology.
In 1976, a teacher at La Grange High School in Texas was fired for distributing a 1966 purity test, which had appeared in the Ask Ann Landers column, to her students. The questions on the test ranged from "Ever said 'I love you'?" to "Ever had group sex?" The teacher sued the school district and was awarded $71,000 in back pay and damages.
Description
Most purity tests have possible scores anywhere from 0% to 100%. Purity tests ask numerous personal questions of their users, most commonly about the use of alcohol and illicit substances; sexual acts with members of the opposite or same-sex; other illicit or illegal activities, and the above actions in an odd or "kinky" context. These tests typically have anywhere from 50 to 2000 questions.
Many popular purity tests encourage participation in a social situation (one person reading a purity test aloud while others mark down their "yeses" for later tabulation). The tests often acknowledge that some may use them as a checklist for things to do, try, or accomplish.
One test is The Unisex Purity Test (or, simply, the Purity Test). First written before 1980 in the Massachusetts Institute of Technology Baker House, the first incarnation had two parallel versions, 100 questions each; one for male, and one for female. The next iteration (247 questions, written at Carnegie Mellon University in 1983) heralded the merging of the gendered versions, making it unisex. Over the next decade or so, many re-writes and expansions commenced, a 2000-question version being written in 1995.
References
Further reading
Human sexuality
Internet culture
Personality tests | Purity test | [
"Biology"
] | 1,003 | [
"Human sexuality",
"Behavior",
"Human behavior",
"Sexuality"
] |
164,968 | https://en.wikipedia.org/wiki/Jan%20Swammerdam | Jan or Johannes Swammerdam (February 12, 1637 – February 17, 1680) was a Dutch biologist and microscopist. His work on insects demonstrated that the various phases during the life of an insect—egg, larva, pupa, and adult—are different forms of the same animal. As part of his anatomical research, he carried out experiments on muscle contraction. In 1658, he was the first to observe and describe red blood cells. He was one of the first people to use the microscope in dissections, and his techniques remained useful for hundreds of years.
Education
Johannes Swammerdam was baptized on 15 February 1637 in the Oude Kerk Amsterdam. His father Jan (or Johannes) Jacobsz (-1678) was an apothecary and an amateur collector of minerals, coins, fossils, and insects from around the world. In 1632 he married Baartje Jans (-1660) in Weesp. The couple lived across from the Montelbaanstoren, near the harbour and the headquarters and warehouses of the Dutch West India Company, where an uncle worked. Some of their children were buried in the Walloon church, as were Jan himself (who never married) and his father.
As a youngster, Swammerdam had helped his father to take care of his curiosity collection. Despite his father's wish that he should study theology Swammerdam started to study medicine in 1661 at the University of Leiden. He studied under the guidance of Johannes van Horne and Franciscus Sylvius. Among his fellow students were Frederik Ruysch, Reinier de Graaf, Ole Borch, Theodor Kerckring, Steven Blankaart, Burchard de Volder, Ehrenfried Walther von Tschirnhaus and Niels Stensen. While studying medicine Swammerdam started his own collection of insects.
In 1663 Swammerdam moved to France to continue his studies, apparently together with Steno. He studied for one year at the Protestant University of Saumur, under the guidance of Tanaquil Faber. Subsequently, he studied in Paris at the scientific academy of Melchisédech Thévenot. In 1665 he returned to the Dutch Republic and joined a group of physicians who performed dissections and published their findings. Between 1666 and 1667 Swammerdam concluded his study of medicine at the University of Leiden; he received his doctorate in medicine in 1667 under van Horne for his dissertation on the mechanism of respiration, published under the title De respiratione usuque pulmonum.
Together with van Horne, he researched the anatomy of the uterus. The result of this research was published under the title Miraculum naturae sive uteri muliebris fabrica in 1672. Swammerdam accused Reinier de Graaf of taking credit of discoveries he and Van Horne had made earlier regarding the importance of the ovary and its eggs.
He used waxen injection techniques and a single-lens microscope made by Johannes Hudde.
Research into insects
While studying medicine Swammerdam had started to dissect insects and after qualifying as a doctor, he focused on them. His father pressured him to earn a living, but Swammerdam persevered and in late 1669 published Historia insectorum generalis ofte Algemeene verhandeling van de bloedeloose dierkens (The General History of Insects, or General Treatise on little Bloodless Animals). The treatise summarised his study of insects he had collected in France and around Amsterdam. He countered the prevailing Aristotelian notion that insects were imperfect animals that lacked internal anatomy. Following the publication his father withdrew all financial support. As a result, Swammerdam was forced, at least occasionally, to practice medicine in order to finance his own research. He obtained leave at Amsterdam to dissect the bodies of those who died in the hospital.
At university Swammerdam engaged deeply in the religious and philosophical ideas of his time. He categorically opposed the ideas behind spontaneous generation, which held that God had created some creatures, but not insects. Swammerdam argued that this would blasphemously imply that parts of the universe were excluded from God's will. In his scientific study, Swammerdam tried to prove that God's creation happened time after time, and that it was uniform and stable. Swammerdam was much influenced by René Descartes, whose natural philosophy had been widely adopted by Dutch intellectuals. In Discours de la Methode Descartes had argued that nature was orderly and obeyed fixed laws, thus nature could be explained rationally.
Swammerdam was convinced that the creation, or generation, of all creatures obeyed the same laws. Having studied the reproductive organs of men and women at university he set out to study the generation of insects. He had devoted himself to studying insects after discovering that the bee was indeed a queen bee. Swammerdam knew this because he had found eggs inside the creature. But he did not publish this finding. Swammerdam corresponded with Matthew Slade and Paolo Boccone and was visited by Willem Piso, Nicolaas Tulp and Nicolaas Witsen. He showed Cosimo III de' Medici, accompanied by Thévenot, another revolutionary discovery. Inside a caterpillar the limbs and wings of the butterfly could be seen (now called the imaginal discs).
When Swammerdam published The General History of Insects, or General Treatise on little Bloodless Animals later that year he not only did away with the idea that insects lacked internal anatomy but also attacked the Christian notion that insects originated from spontaneous generation and that their life cycle was a metamorphosis. Swammerdam maintained that all insects originated from eggs and their limbs grew and developed slowly. Thus there was no distinction between insects and so-called higher animals. Swammerdam declared war on "vulgar errors" and the symbolic interpretation of insects was, in his mind, incompatible with the power of God, the almighty architect.
Swammerdam, therefore, dispelled the seventeenth-century notion of metamorphosis —the idea that different life stages of an insect (e.g. caterpillar and butterfly) represent different individuals or a sudden change from one type of animal to another.
Spirituality
Swammerdam suffered a crisis of conscience; his father repudiated the study of insects. Having believed that his scientific research was a tribute to the Creator, he started to fear that he may be worshipping the idol of curiosities. In 1673 Swammerdam briefly fell under the influence of the Flemish mystic Antoinette Bourignon. His 1675 treatise on the mayfly, entitled Ephemeri vita, included devout poetry and documented his religious experiences. Swammerdam found comfort in the arms of Bourignon's sect in Nordstrand, Germany. Swammerdam traveled to Copenhagen to visit the mother of Nicolaus Steno, but was back in Amsterdam in early 1676. In a letter to Henry Oldenburg, he explained "I was never at any time busier than in these days, and the chief of all architects has blessed my endeavors".
Bybel der natuure
His religious crisis only interrupted his scientific research briefly and until his premature death aged 43, he worked on what was to become his main work. It remained unpublished when he died in 1680 and was published as Bybel der natuure posthumously in 1737 by the Leiden University professor Herman Boerhaave. Convinced that all insects were worth studying, Swammerdam had compiled an epic treatise on as many insects as he could, using the microscope and dissection. Inspired by Marcello Malpighi, in De Bombyce Swammerdam described the anatomy of silkworms, mayflies, ants, stag beetles, cheese mites, bees and many other insects. His scientific observations were infused by his belief in God, the almighty creator. Swammerdam's praise of the louse went on to become a classic:
Herewith I offer you the Omnipotent Finger of God in the anatomy of a louse: wherein you will find miracle heaped on miracle and see the wisdom of God clearly manifested in a minute point.
Research on bees
Since ancient times it had been asserted that the queen bee was male, and ruled the hive. In 1586 Luis Mendez de Torres had first published the finding that the hive was ruled by a female, but Torres had maintained that she produced all other bees in the colony through a "seed". In 1609 Charles Butler had recorded the sex of drones as male, but he wrongly believed that they mated with worker bees. In Biblia naturae the first visual proof was published that his contemporaries had mistakenly identified the queen bee as male. Swammerdam also provided evidence that the queen bee is the sole mother of the colony.
Swammerdam had engaged in five intense years of beekeeping. He had found that drones were masculine, and had no stinger. Swammerdam identified the worker bees as "natural eunuchs" because he was unable to detect ovaries in them, but described them as nearer to the nature of the female. Swammerdam had produced a drawing of the queen bee's reproductive organs, as observed through the microscope. The drawing Swammerdam produced of the internal anatomy of the queen bee was only published in 1737. His drawing of the honeycomb geometry was first published in Biblia naturae, but had been referenced by Giacomo Filippo Maraldi in his 1712 book. Details of Swammerdam's research on bees had already been published elsewhere because he had shared his findings with other scientists in correspondence. Among others, Swammerdam's research had been referenced by Nicolas Malebranche in 1688.
Research on muscles
In Biblia naturae Swammerdam's research on muscles was published. Swammerdam played a key role in the debunking of the balloonist theory, the idea that 'moving spirits' are responsible for muscle contractions. The idea, supported by the Greek physician Galen, held that nerves were hollow and the movement of spirits through them propelled muscle motion. René Descartes furthered the idea by basing it on a model of hydraulics, suggesting that the spirits were analogous to fluids or gasses and calling them 'animal spirits'. In the model, which Descartes used to explain reflexes, the spirits would flow from the ventricles of the brain, through the nerves, and to the muscles to animate the latter. According to this hypothesis, muscles would grow larger when they contract because of the animal spirits flowing into them. To test this idea, Swammerdam placed severed frog thigh muscle in an airtight syringe with a small amount of water in the tip. He could thus determine whether there was a change in the volume of the muscle when it contracted by observing a change in the level of the water (image at right). When Swammerdam caused the muscle to contract by irritating the nerve, the water level did not rise but rather was lowered by a minute amount; this showed that no air or fluid could be flowing into the muscle. The idea that nerve stimulation led to the movement had important implications for neuroscience by putting forward the idea that behavior is based on stimuli.
Swammerdam's research had been referenced before its publication by Nicolas Steno, who had visited Swammerdam in Amsterdam. Swammerdam's research concluded after Steno had published the second edition of Elements of Myology in 1669, which is referenced in Biblia naturae. A letter from Steno to Malpighi from 1675 suggests that Swammerdam's findings on muscle contraction had caused his crisis of conscience. Steno sent Malpighi the drawings Swammerdam had done of the experiments, saying "when he had written a treatise on this matter he destroyed it and he has only preserved these figures. He is seeking God, but not yet in the Church of God."
Legacy
Together with his father he collected 6,000 objects in 27 drawer cabinets. Swammerdam's Historia insectorum generalis was widely known and applauded before he died. Two years after his death in 1680 it was translated into French and in 1685 it was translated into Latin. John Ray, author of the 1705 Historia insectorum, praised Swammerdam's methods: they were "the best of all". Though Swammerdam's work on insects and anatomy was significant, many current histories remember him as much for his methods and skill with microscopes as for his discoveries. He developed new techniques for examining, preserving, and dissecting specimens, including wax injection to make viewing blood vessels easier. A method he invented for the preparation of hollow human organs was later much employed in anatomy. He had corresponded with contemporaries across Europe and his friends Gottfried Wilhelm Leibniz and Nicolas Malebranche used his microscopic research to substantiate their own natural and moral philosophy. But Swammerdam has also been credited with heralding the natural theology of the 18th century, where God's grand design was detected in the mechanics of the Solar System, the seasons, snowflakes and the anatomy of the human eye. An English translation of his entomological works by T. Floyd was published in 1758.
No authentic portrait of Jan Swammerdam is extant nowadays. The portrait shown in the header is derived from the painting The Anatomy Lesson of Dr Tulp by Rembrandt and represents the leading Amsterdam physician Hartman Hartmanzoon (1591–1659).
Notes
Works
References
Cobb M. 2002. Exorcizing the animal spirits: John Swammerdam on nerve function. Nature Reviews, Volume 3, Pages 395–400.
Winsor, Mary P. "Swammerdam, Jan." Dictionary of Scientific Biography. 1976
Cobb, Matthew. "Reading and writing The Book of Nature: Jan Swammerdam (1637–1680)." Endeavour. Vol. 24(3). 2000.
O'Connell, Sanjida. "A silk road to biology." The Times. May 27, 2002.
Hall, Rupert A. From Galileo to Newton 1630–1720R. &R. Clark, Ltd., Edinburgh: 1963.
Further reading
Jorink, Eric. "'Outside God there is Nothing': Swammerdam, Spinoza, and the Janus-Face of the Early Dutch Enlightenment." The Early Enlightenment in the Dutch Republic, 1650–1750: Selected Papers of a Conference, Held at the Herzog August Bibliothek Wolfenbüttel, 22–23 March 2001. Ed. Wiep Van Bunge. Leiden, The Netherlands: Brill Academic Publishers, 2003. 81–108.
Fearing, Franklin. "Jan Swammerdam: A Study in the History of Comparative and Physiological Psychology of the 17th Century." The American Journal of Psychology 41.3 (1929): 442–455
Ruestow, Edward G. The Microscope in the Dutch Republic: The Shaping of Discovery. New York: Cambridge University Press, 1996.
Ruestow, Edward G. "Piety and the defense of natural order: Swammerdam on generation." Religion Science and Worldview: Essays in Honor of Richard S. Westfall. Eds. Margaret Osler and Paul Lawrence Farber. New York: Cambridge University Press, 1985. 217–241.
External links
Site devoted to Swammerdam
Short biography from a website that chronicles the "rocky road to modern paleontology and biology"
Biography written by Matthew Cobb, a professor at Laboratoire d'Ecologie in Paris
An English Edition of Swammerdam's "The Book of Nature, or, The History of Insects" From the History of Science Digital Collection: Utah State University
The Correspondence of Jan Swammerdam in EMLO
1637 births
1680 deaths
17th-century Dutch naturalists
17th-century Dutch biologists
Dutch beekeepers
17th-century farmers
Dutch entomologists
Dutch zoologists
Leiden University alumni
Microscopists
Scientists from Amsterdam
Biology and natural history in the Dutch Republic | Jan Swammerdam | [
"Chemistry"
] | 3,282 | [
"Microscopists",
"Microscopy"
] |
164,969 | https://en.wikipedia.org/wiki/Magdeburg%20hemispheres | The Magdeburg hemispheres are a pair of large copper hemispheres with mating rims that were used in a famous 1654 experiment to demonstrate the power of atmospheric pressure. When the rims were sealed with grease and the air was pumped out, the sphere contained a vacuum and could not be pulled apart by teams of horses. Once the valve was opened, air rushed in and the hemispheres were easily separated. The Magdeburg hemispheres were invented by German scientist and mayor of Magdeburg, Otto von Guericke, to demonstrate the air pump that he had invented and the concept of atmospheric pressure.
Speculation varied about the contents of the sphere. Many thought it was simply empty, while others argued the vacuum contained air or some finer aerial substance. Sound did not transmit through the sphere, indicating that sound needed a medium in order to be heard, while light did not.
The first artificial vacuum had been produced a few years earlier by Evangelista Torricelli and inspired Guericke to design the world's first vacuum pump, which consisted of a piston and cylinder with one-way flap valves. The hemispheres became popular in physics lectures as an illustration of the strength of air pressure, and are still used in education. The original hemispheres are on display in the Deutsches Museum in Munich.
Aside from its scientific importance, the experiment served to prove the recovery of the city of Magdeburg, which only two decades earlier had undergone the Sack of Magdeburg, considered the worst atrocity of the Thirty Years' War, in which 20,000 of its inhabitants were massacred; only 4,000 remained at the end of the war in 1648. Von Guericke was concerned with both aspects of the experiment, in his double capacity as a leading scientist and as the mayor who worked tirelessly to restore the city's wealth.
Overview
The Magdeburg hemispheres, around 50 cm (20 inches) in diameter, were designed to demonstrate the vacuum pump that Guericke had invented. One of them had a tube connection to attach the pump, with a valve to close it off. When the air was sucked out from inside the hemispheres, and the valve was closed, the hose from the pump could be detached, and they were held firmly together by the air pressure of the surrounding atmosphere.
The force holding the hemispheres together was equal to the area bounded by the joint between the hemispheres, a circle with a diameter of 50 cm, multiplied by the difference in air pressure between the inside and the outside. It is unclear how strong a vacuum Guericke's pump was able to achieve, but if it was able to evacuate all of the air from the inside, the hemispheres would have been held together with a force of around 20 kN (roughly two tonnes-force), equivalent to lifting a car or small elephant; a dramatic demonstration of the pressure of the atmosphere.
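The holding force follows directly from the stated geometry: the pressure difference acting over the circular area of the joint. A minimal Python sketch of that arithmetic, assuming a 50 cm diameter, a perfect vacuum inside, and one standard atmosphere outside (both idealisations):

```python
import math

def holding_force_newtons(diameter_m: float, pressure_diff_pa: float) -> float:
    """Force holding the hemispheres together: the pressure difference acting
    over the circular area bounded by the joint between the hemispheres."""
    area_m2 = math.pi * (diameter_m / 2) ** 2
    return pressure_diff_pa * area_m2

# 50 cm sphere, perfect vacuum inside, standard atmosphere outside (idealised)
force = holding_force_newtons(0.50, 101_325)
print(f"{force / 1000:.1f} kN")  # about 20 kN, roughly two tonnes-force
```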
Demonstrations
Guericke's demonstration was performed on 8 May 1654 in front of the Imperial Diet and the Emperor Ferdinand III in Regensburg. Thirty horses, in two teams of fifteen, could not separate the hemispheres until the valve was opened to equalize the air pressure. In 1656 he repeated the demonstration with sixteen horses (two teams of eight) in his hometown of Magdeburg, where he was mayor. He also hung the joined hemispheres from a support, removed the air from within, and strapped weights to them, but the weights could not pull the hemispheres apart. Gaspar Schott was the first to describe the experiment in print in his Mechanica Hydraulico-Pneumatica (1657). In 1663 (or, according to some sources, in 1661) the same demonstration was given in Berlin before Frederick William, Elector of Brandenburg, with twenty-four horses.
The experiment became a popular way to illustrate the principles of air pressure, and many smaller copies of the hemispheres were made, and are used to this day in science classes. Reenactments of von Guericke's experiment of 1654 are performed in locations around the world by the Otto von Guericke Society. On 18 March 2000, a demonstration using sixteen horses was conducted in Great Torrington by Barometer World.
The experiment has been commemorated on three German stamps.
After learning about Guericke's pump through Schott's book, Robert Boyle worked with Robert Hooke to design and build an improved air pump. From this, through various experiments, they formulated what is called Boyle's law, which states that the volume of a fixed amount of an ideal gas at constant temperature is inversely proportional to its pressure. The more general ideal gas law was formulated much later, in 1834.
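As a minimal illustration of Boyle's law as stated above (fixed amount of gas, constant temperature), with made-up numbers:

```python
def boyle_volume(p1: float, v1: float, p2: float) -> float:
    """Boyle's law for a fixed amount of ideal gas at constant temperature:
    p1 * v1 = p2 * v2, so the new volume is v2 = p1 * v1 / p2."""
    return p1 * v1 / p2

# Doubling the pressure on a sealed sample halves its volume (hypothetical values)
print(boyle_volume(p1=100.0, v1=2.0, p2=200.0))  # -> 1.0
```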
See also
Horror vacui (physics)
References
External links
Magdeburg Hemispheres
Magdeburg Notgeld (emergency banknote) depicting two teams of horses attempting to separate the halves of a Magdeburg Hemisphere. http://webgerman.com/Notgeld/Directory/M/Magdeburg3.htm
Physics experiments
Historical scientific instruments
Magdeburg | Magdeburg hemispheres | [
"Physics"
] | 1,024 | [
"Physics experiments",
"Physical quantities",
"Meteorological quantities",
"Atmospheric pressure",
"Experimental physics"
] |
164,974 | https://en.wikipedia.org/wiki/Hydrography | Hydrography is the branch of applied sciences which deals with the measurement and description of the physical features of oceans, seas, coastal areas, lakes and rivers, as well as with the prediction of their change over time, for the primary purpose of safety of navigation and in support of all other marine activities, including economic development, security and defense, scientific research, and environmental protection.
History
The origins of hydrography lay in the making of charts to aid navigation, by individual mariners as they navigated into new waters. These were usually the private property, even closely held secrets, of individuals who used them for commercial or military advantage. As transoceanic trade and exploration increased, hydrographic surveys started to be carried out as an exercise in their own right, and the commissioning of surveys was increasingly done by governments and special hydrographic offices. National organizations, particularly navies, realized that the collection, systematization and distribution of this knowledge gave it great organizational and military advantages. Thus were born dedicated national hydrographic organizations for the collection, organization, publication and distribution of hydrography incorporated into charts and sailing directions.
Prior to the establishment of the United Kingdom Hydrographic Office, Royal Navy captains were responsible for the provision of their own charts. In practice this meant that ships often sailed with inadequate information for safe navigation, and that when new areas were surveyed, the data rarely reached all those who needed it. The Admiralty appointed Alexander Dalrymple as Hydrographer in 1795, with a remit to gather and distribute charts to HM Ships. Within a year existing charts from the previous two centuries had been collated, and the first catalog published. The first chart produced under the direction of the Admiralty was a chart of Quiberon Bay in Brittany, which appeared in 1800.
Under Captain Thomas Hurd the department received its first professional guidelines, and the first catalogs were published and made available to the public and to other nations as well. In 1829, Rear-Admiral Sir Francis Beaufort, as Hydrographer, developed the eponymous scale, and introduced the first official tide tables in 1833 and the first "Notices to Mariners" in 1834. The Hydrographic Office underwent steady expansion throughout the 19th century; by 1855 the Chart Catalogue listed 1,981 charts giving definitive coverage of the entire world, and the Office produced over 130,000 charts annually, of which about half were sold.
The word hydrography comes from the Ancient Greek ὕδωρ (hydor), "water" and γράφω (graphō), "to write".
Overview
Large-scale hydrography is usually undertaken by national or international organizations which sponsor data collection through precise surveys and publish charts and descriptive material for navigational purposes. The science of oceanography is, in part, an outgrowth of classical hydrography. In many respects the data are interchangeable, but marine hydrographic data will be particularly directed toward marine navigation and safety of that navigation. Marine resource exploration and exploitation is a significant application of hydrography, principally focused on the search for hydrocarbons.
Hydrographical measurements include the tidal, current and wave information of physical oceanography. They include bottom measurements, with particular emphasis on those marine geographical features that pose a hazard to navigation such as rocks, shoals, reefs and other features that obstruct ship passage. Bottom measurements also include collection of the nature of the bottom as it pertains to effective anchoring. Unlike oceanography, hydrography will include shore features, natural and manmade, that aid in navigation. Therefore, a hydrographic survey may include the accurate positions and representations of hills, mountains and even lights and towers that will aid in fixing a ship's position, as well as the physical aspects of the sea and seabed.
Hydrography, mostly for reasons of safety, adopted a number of conventions that have affected its portrayal of the data on nautical charts. For example, hydrographic charts are designed to portray what is safe for navigation, and therefore will usually tend to maintain least depths and occasionally de-emphasize the actual submarine topography that would be portrayed on bathymetric charts. The former are the mariner's tools to avoid accident. The latter are best representations of the actual seabed, as in a topographic map, for scientific and other purposes. Trends in hydrographic practice since c. 2003–2005 have led to a narrowing of this difference, with many more hydrographic offices maintaining "best observed" databases, and then making navigationally "safe" products as required. This has been coupled with a preference for multi-use surveys, so that the same data collected for nautical charting purposes can also be used for bathymetric portrayal.
Even though, in places, hydrographic survey data may be collected in sufficient detail to portray bottom topography, hydrographic charts only show depth information relevant for safe navigation and should not be considered as a product that accurately portrays the actual shape of the bottom. The soundings selected from the raw source depth data for placement on the nautical chart are chosen for safe navigation and are biased to show predominantly the shallowest depths that relate to safe navigation. For instance, if there is a deep area that cannot be reached because it is surrounded by shallow water, the deep area may not be shown. The color-filled areas that show different ranges of shallow water are not the equivalent of contours on a topographic map, since they are often drawn seaward of the actual shallowest depth portrayed. A bathymetric chart, by contrast, does show marine topography accurately. Details covering the above limitations can be found in Part 1 of Bowditch's American Practical Navigator. Another factor that affects safe navigation is the sparsity of detailed depth data from high-resolution sonar systems. In more remote areas, the only available depth information has been collected with lead lines. This collection method drops a weighted line to the bottom at intervals and records the depth, often from a rowboat or sailing boat. There is no data between soundings or between sounding lines to guarantee that there is not a hazard, such as a wreck or a coral head, lying undetected between them. Often, the positioning of the collecting boat did not approach today's GPS navigational accuracy. The hydrographic chart will use the best data available and will note its limitations in a caution note or in the legend of the chart.
A hydrographic survey is quite different from a bathymetric survey in some important respects, particularly in a bias toward least depths due to the safety requirements of the former and geomorphologic descriptive requirements of the latter. Historically, this could include echosoundings being conducted under settings biased toward least depths, but in modern practice hydrographic surveys typically attempt to best measure the depths observed, with the adjustments for navigational safety being applied after the fact.
Hydrography of streams will include information on the stream bed, flows, water quality and surrounding land. Basin or interior hydrography pays special attention to rivers and potable water, although when the data collected is not for ship navigation but for scientific use, the work is more commonly called hydrometry or hydrology.
Hydrography of rivers and streams is also an integral part of water management. Most reservoirs in the United States use dedicated stream gauging and rating tables to determine inflows into the reservoir and outflows to irrigation districts, water municipalities and other users of captured water. River and stream hydrographers use handheld and bank-mounted devices to measure the flow rate and current of moving water through a cross-section.
Equipment
Uncrewed surface vessels (USVs) are commonly used for hydrographic surveys and are often equipped with some form of sonar. Single-beam echosounders, multibeam echosounders, and side-scan sonars are all frequently used in hydrographic applications. The knowledge gained from these surveys aids in disaster planning, port and harbor maintenance, and various other coastal planning activities.
Organizations
Hydrographic services in most countries are carried out by specialized hydrographic offices. The international coordination of hydrographic efforts lies with the International Hydrographic Organization.
The United Kingdom Hydrographic Office is one of the oldest, supplying a wide range of charts covering the globe to other countries, allied military organizations and the public.
In the United States, the hydrographic charting function has been carried out since 1807 by the Office of Coast Survey of the National Oceanic and Atmospheric Administration within the U.S. Department of Commerce and the U.S. Army Corps of Engineers.
See also
Associations focussing on ocean hydrography
International Federation of Hydrographic Societies (formerly The Hydrographic Society)
State Hydrography Service of Georgia
The Hydrographic Society of America
Australasian Hydrographic Society
Associations focussing on river stream and lake hydrography
Australian Hydrographic Association
New Zealand Hydrological Society
American Institute of Hydrology
References
External links
Hydro International, Lemmer, the Netherlands: Hydrographic Information
What is hydrography? National Ocean Service
Hydrography
Hydrology
Physical geography | Hydrography | [
"Physics",
"Chemistry",
"Engineering",
"Environmental_science"
] | 1,810 | [
"Hydrography",
"Hydrology",
"Applied and interdisciplinary physics",
"Oceanography",
"Environmental engineering"
] |
164,975 | https://en.wikipedia.org/wiki/Terrella | A terrella () is a small magnetised model ball representing the Earth, that is thought to have been invented by the English physician William Gilbert while investigating magnetism, and further developed 300 years later by the Norwegian scientist and explorer Kristian Birkeland, while investigating the aurora.
Terrellas were used until the late 20th century in attempts to simulate the Earth's magnetosphere, but have now been replaced by computer simulations.
William Gilbert's terrella
William Gilbert, the royal physician to Queen Elizabeth I, devoted much of his time, energy and resources to the study of the Earth's magnetism. It had been known for centuries that a freely suspended compass needle pointed north. Earlier investigators (including Christopher Columbus) found that direction deviated somewhat from true north, and Robert Norman showed the force on the needle was not horizontal but slanted into the Earth.
William Gilbert's explanation was that the Earth itself was a giant magnet, and he demonstrated this by creating a scale model of the magnetic Earth, a "terrella", a sphere formed out of a lodestone. Passing a small compass over the terrella, Gilbert demonstrated that a horizontal compass would point towards the magnetic pole, while a dip needle, balanced on a horizontal axis perpendicular to the magnetic one, indicated the proper "magnetic inclination" between the magnetic force and the horizontal direction. Gilbert later reported his findings in De Magnete, Magneticisque Corporibus, et de Magno Magnete Tellure, published in 1600.
Kristian Birkeland's terrella
Kristian Birkeland was a Norwegian physicist who, around 1895, tried to explain why the lights of the polar aurora appeared only in regions centered at the magnetic poles.
He simulated the effect by directing cathode rays (later identified as electrons) at a terrella in a vacuum tank, and found they indeed produced a glow in regions around the poles of the terrella. Because of residual gas in the chamber, the glow also outlined the path of the particles. Neither he nor his associate Carl Størmer (who calculated such paths) could understand why the actual aurora avoided the area directly above the poles themselves. Birkeland believed the electrons came from the Sun, since large auroral outbursts were associated with sunspot activity.
Birkeland constructed several terrellas. One large terrella experiment was reconstructed in Tromsø, Norway.
Other terrellas
The German Baron Carl Reichenbach (1788–1869) also experimented with a terrella. He used an electromagnet, placed within a large hollow iron sphere, and this was examined in the darkroom under varying degrees of electrification.
Brunberg and Dattner in Sweden, around 1950, used a terrella to simulate trajectories of particles in the Earth's field. Podgorny in the Soviet Union, around 1972, built terrellas at which a flow of plasma was directed, simulating the solar wind. Hafiz-Ur Rahman at the University of California, Riverside conducted more realistic experiments around 1990. All such experiments are difficult to interpret, and are never able to scale all the parameters needed to properly simulate the Earth's magnetosphere, which is why such experiments have now been completely replaced by computer simulations.
Recently the terrella experiments have been further developed by a team of physicists at the Institute of Planetology and Astrophysics in Grenoble, France to create the "planeterrella" which uses two magnetised spheres which can be manipulated to recreate several different auroral phenomena.
Notes
References
External links
NASA Educational Website on the Terrella
Physics experiments | Terrella | [
"Physics"
] | 729 | [
"Experimental physics",
"Physics experiments"
] |
165,015 | https://en.wikipedia.org/wiki/Metal%20detector | A metal detector is an instrument that detects the nearby presence of metal. Metal detectors are useful for finding metal objects on the surface, underground, and under water. A metal detector consists of a control box, an adjustable shaft, and a variable-shaped pickup coil. When the coil nears metal, the control box signals its presence with a tone, light, or needle movement. Signal intensity typically increases with proximity. A common type are stationary "walk through" metal detectors used at access points in prisons, courthouses, airports and psychiatric hospitals to detect concealed metal weapons on a person's body.
The simplest form of a metal detector consists of an oscillator producing an alternating current that passes through a coil producing an alternating magnetic field. If a piece of electrically conductive metal is close to the coil, eddy currents will be induced (inductive sensor) in the metal, and this produces a magnetic field of its own. If another coil is used to measure the magnetic field (acting as a magnetometer), the change in the magnetic field due to the metallic object can be detected.
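A minimal sketch of the detection idea described above: the receive coil sees the transmitted field plus a small, phase-shifted field re-radiated by eddy currents, and the detector looks for the resulting change from the metal-free baseline. The coupling strength and phase below are made-up illustrative values, not a physical model of any particular detector:

```python
import math

def receive_amplitude(tx_amplitude: float, eddy_coupling: float, eddy_phase_rad: float) -> float:
    """Peak amplitude at the receive coil: the transmitted field plus a small,
    phase-shifted contribution from eddy currents in nearby metal (phasor sum).
    eddy_coupling is zero when no metal is present."""
    in_phase = tx_amplitude + eddy_coupling * math.cos(eddy_phase_rad)
    quadrature = eddy_coupling * math.sin(eddy_phase_rad)
    return math.hypot(in_phase, quadrature)

baseline = receive_amplitude(1.0, 0.0, 0.0)                      # no metal nearby
with_target = receive_amplitude(1.0, 0.03, math.radians(160.0))  # hypothetical target
print(f"relative change: {abs(with_target - baseline) / baseline:.1%}")
```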
The first industrial metal detectors were developed in the 1960s and were used for mineral prospecting among other applications. Metal detectors help find land mines, and detect weapons such as knives and guns, which is important for airport security. They are also used to search for buried objects in archaeology and treasure hunting, to detect foreign bodies in food, and in the construction industry to detect steel reinforcing bars in concrete and pipes and wires buried in walls and floors.
History and development
In 1841, Professor Heinrich Wilhelm Dove published an invention he called the "differential inductor". It was a 4-coil induction balance, with 2 glass tubes each having 2 well-insulated copper wire solenoids wound around them. Charged Leyden jars (high-voltage capacitors) were discharged through the 2 primary coils; this current surge induced a voltage in the secondary coils. When the secondary coils were wired in opposition, the induced voltages cancelled as confirmed by the Professor holding the ends of the secondary coils. When a piece of metal was placed inside one glass tube the Professor received a shock. This then was the first magnetic induction metal detector, and the first pulse induction metal detector.
In late 1878 and early 1879, Professor (of music) David Edward Hughes published his experiments with the 4-coil induction balance. He used his own recent invention the microphone and a ticking clock to generate regular pulses and a telephone receiver as detector. To measure the strength of the signals he invented a coaxial 3-coil induction balance which he called the "electric sonometer". Hughes did much to popularize the induction balance, quickly leading to practical devices that could identify counterfeit coins. In 1880 Mr. J. Munro, C.E. suggested the use of the 4-coil induction balance for metal prospecting. Hughes's coaxial 3-coil induction balance would also see use in metal detecting.
In July 1881, Alexander Graham Bell initially used a 4-coil induction balance to attempt to locate a bullet lodged in the chest of American President James Garfield. After much experimenting the best bullet detection range he achieved was only 2 inches (5 centimeters). He then used his own earlier discovery, the partially overlapping 2-coil induction balance, and the detection range increased to 5 inches (12 centimeters). But the attempt was still unsuccessful because the metal coil spring bed Garfield was lying on confused the detector. Bell's 2-coil induction balance would go on to evolve into the popular double D coil.
On December 16, 1881, Captain Charles Ambrose McEvoy applied for British Patent No. 5518, Apparatus for Searching for Submerged Torpedoes, &c., which was granted Jun 16 1882. His US269439 patent application of Jul 12 1882 was granted Dec 19 1882. It was a 4-coil induction balance for detecting submerged metallic torpedoes and iron ships and the like. Given the development time involved this may have been the earliest known device specifically constructed as a metal detector using magnetic induction.
In 1892, George M. Hopkins described an orthogonal 2-coil induction balance for metal detecting.
In 1915, Professor Camille Gutton developed a 4-coil induction balance to detect unexploded shells in the farmland of former battlefields in France. Unusually, both coil pairs were used for detection. A photograph from 1919 shows a later version of Gutton's detector.
Modern developments
The modern development of the metal detector began in the 1920s. Gerhard Fischer had developed a system of radio direction-finding, which was to be used for accurate navigation. The system worked extremely well, but Fischer noticed there were anomalies in areas where the terrain contained ore-bearing rocks. He reasoned that if a radio beam could be distorted by metal, then it should be possible to design a machine which would detect metal using a search coil resonating at a radio frequency. In 1925 he applied for, and was granted, the first patent for an electronic metal detector. Although Gerhard Fischer was the first person granted a patent for an electronic metal detector, the first to apply was Shirl Herr, a businessman from Crawfordsville, Indiana. His application for a hand-held Hidden-Metal Detector was filed in February 1924, but not patented until July 1928. Herr assisted Italian leader Benito Mussolini in recovering items remaining from the Emperor Caligula's galleys at the bottom of Lake Nemi, Italy, in August 1929. Herr's invention was used by Admiral Richard Byrd's Second Antarctic Expedition in 1933, when it was used to locate objects left behind by earlier explorers. It was effective up to a depth of eight feet.
However, it was one Lieutenant Józef Stanisław Kosacki, a Polish officer attached to a unit stationed in St Andrews, Fife, Scotland, during the early years of World War II, who refined the design into a practical Polish mine detector.
These units were still quite heavy, as they ran on vacuum tubes, and needed separate battery packs.
The design invented by Kosacki was used extensively during the Second Battle of El Alamein when 500 units were shipped to Field Marshal Montgomery to clear the minefields of the retreating Germans, and later used during the Allied invasion of Sicily, the Allied invasion of Italy and the Invasion of Normandy.
As the creation and refinement of the device was a wartime military research operation, the knowledge that Kosacki created the first practical metal detector was kept secret for over 50 years.
Beat frequency induction
Many manufacturers of these new devices brought their own ideas to the market. White's Electronics of Oregon began in the 1950s by building a machine called the Oremaster Geiger Counter. Another leader in detector technology was Charles Garrett, who pioneered the BFO (beat frequency oscillator) machine. With the invention and development of the transistor in the 1950s and 1960s, metal detector manufacturers and designers made smaller, lighter machines with improved circuitry, running on small battery packs. Companies sprang up all over the United States and Britain to supply the growing demand. Beat frequency induction requires movement of the detector coil, akin to the way moving a conductor near a magnet induces an electric current.
Refinements
Modern top models are fully computerized, using integrated circuit technology to allow the user to set sensitivity, discrimination, track speed, threshold volume, notch filters, etc., and hold these parameters in memory for future use. Compared to just a decade ago, detectors are lighter, deeper-seeking, use less battery power, and discriminate better.
State-of-the-art metal detectors have incorporated extensive wireless technology for the earphones and can connect to Wi-Fi networks and Bluetooth devices. Some also use built-in GPS to keep track of the search location and the location of items found. Some connect to smartphone applications to further extend functionality.
Discriminators
The biggest technical change in detectors was the development of a tunable induction system. This system involved two coils that are electro-magnetically tuned. One coil acts as an RF transmitter, the other as a receiver; in some cases these can be tuned to between 3 and 100 kHz. When metal is in their vicinity, a signal is detected owing to eddy currents induced in the metal. What allowed detectors to discriminate between metals was the fact that every metal has a different phase response when exposed to alternating current. Longer waves (lower frequencies) penetrate the ground more deeply and select for high-conductivity targets like silver and copper, whereas shorter waves (higher frequencies), while less ground-penetrating, select for low-conductivity targets like iron. Unfortunately, high frequency is also sensitive to ground mineralization interference. This selectivity, or discrimination, allowed detectors to be developed that could selectively detect desirable metals while ignoring undesirable ones.
Even with discriminators, it was still a challenge to avoid undesirable metals, because some of them have similar phase responses (e.g. tinfoil and gold), particularly in alloy form. Thus, improperly tuning out certain metals increased the risk of passing over a valuable find. Another disadvantage of discriminators was that they reduced the sensitivity of the machines.
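A toy sketch of discrimination by phase response: the detector measures the phase response of the target and buckets it into likely target classes. The class boundaries below are invented for illustration only; real discriminators use calibrated, frequency-dependent tables and, as noted above, still confuse targets with similar phase responses:

```python
def classify_target(phase_response_deg: float,
                    iron_cutoff: float = 25.0,
                    foil_cutoff: float = 55.0) -> str:
    """Toy discriminator: bucket a measured phase response into target classes.
    Cut-off values are hypothetical; real detectors are calibrated per frequency
    and still confuse targets with similar responses (e.g. foil and gold)."""
    if phase_response_deg < iron_cutoff:
        return "likely iron / reject"
    if phase_response_deg < foil_cutoff:
        return "foil or small gold item / uncertain"
    return "likely high-conductivity coin (silver, copper) / dig"

for reading in (10.0, 45.0, 80.0):
    print(reading, "->", classify_target(reading))
```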
New coil designs
Coil designers also tried out innovative designs. The original induction balance coil system consisted of two identical coils placed on top of one another. Compass Electronics produced a new design: two coils in a D shape, mounted back-to-back to form a circle. The system was widely used in the 1970s, and both concentric and double D type (or widescan as they became known) had their fans. Another development was the invention of detectors which could cancel out the effect of mineralization in the ground. This gave greater depth, but was a non-discriminate mode. It worked best at lower frequencies than those used before, and frequencies of 3 to 20 kHz were found to produce the best results. Many detectors in the 1970s had a switch which enabled the user to switch between the discriminate mode and the non-discriminate mode. Later developments switched electronically between both modes. The development of the induction balance detector would ultimately result in the motion detector, which constantly checked and balanced the background mineralization.
Pulse induction
At the same time, developers were looking at using a different technique in metal detection called pulse induction. Unlike the beat frequency oscillator or the induction balance machines, which both used a uniform alternating current at a low frequency, the pulse induction (PI) machine simply magnetized the ground with a relatively powerful, momentary current through a search coil. In the absence of metal, the field decayed at a uniform rate, and the time it took to fall to zero volts could be accurately measured. However, if metal was present when the machine fired, a small eddy current would be induced in the metal, and the time for sensed current decay would be increased. These time differences were minute, but the improvement in electronics made it possible to measure them accurately and identify the presence of metal at a reasonable distance. These new machines had one major advantage: they were mostly impervious to the effects of mineralization, and rings and other jewelry could now be located even under highly mineralized black sand. The addition of computer control and digital signal processing have further improved pulse induction sensors.
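A minimal sketch of the pulse-induction timing idea: model the coil voltage after the transmit pulse as an exponential decay and measure how long it takes to fall to a fixed threshold; eddy currents in a nearby target lengthen that time slightly. The time constants and threshold are made-up illustrative values:

```python
import math

def time_to_threshold(initial_v: float, tau_s: float, threshold_v: float) -> float:
    """Time for an exponentially decaying voltage v(t) = initial_v * exp(-t/tau)
    to fall to threshold_v:  t = tau * ln(initial_v / threshold_v)."""
    return tau_s * math.log(initial_v / threshold_v)

no_target   = time_to_threshold(10.0, tau_s=15e-6, threshold_v=0.01)  # hypothetical coil decay
with_target = time_to_threshold(10.0, tau_s=17e-6, threshold_v=0.01)  # eddy currents slow the decay
print(f"extra decay time: {(with_target - no_target) * 1e6:.1f} microseconds")
```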
One particular advantage of using a pulse induction detector includes the ability to ignore the minerals contained within heavily mineralized soil; in some cases the heavy mineral content may even help the PI detector function better. Where a VLF detector is affected negatively by soil mineralization, a PI unit is not.
Uses
Large portable metal detectors are used by archaeologists and treasure hunters to locate metallic items, such as jewelry, coins, clothes buttons and other accessories, bullets, and other various artifacts buried beneath the surface.
Archaeology
Metal detectors are widely used in archaeology, with the first recorded use by military historian Don Rickey in 1958, who used one to detect the firing lines at Little Big Horn. However, archaeologists oppose the use of metal detectors by "artifact seekers" or "site looters" whose activities disrupt archaeological sites. The problem with the use of metal detectors on archaeological sites, or by hobbyists who find objects of archaeological interest, is that the context in which the object was found is lost and no detailed survey of its surroundings is made. Outside of known sites, the significance of objects may not be apparent to a metal detector hobbyist.
England and Wales
In England and Wales, metal detecting is legal provided that the landowner has granted permission and that the area is not a Scheduled Ancient Monument, a site of special scientific interest (SSSI), or covered by elements of the Countryside Stewardship Scheme.
The Treasure Act 1996 governs whether or not items that have been discovered are defined as treasure.
Finders of items that the Act defines as treasure must report their finds to the local coroner.
If they discover items which are not defined as treasure but that are of cultural or historical interest, finders can voluntarily report them to the Portable Antiquities Scheme and the UK Detector Finds Database.
France
The sale of metal detectors is allowed in France. The first use of metal detectors in France which led to archaeological discoveries occurred in 1958: inhabitants of Graincourt-lès-Havrincourt who were searching for copper from World War I shells with a military mine detector found a Roman silver treasure. The French law on metal detecting is ambiguous because it refers only to the objective pursued by the user of a metal detector. The first law to regulate the use of metal detectors was Law No. 89–900 of 18 December 1989. This was incorporated without change into Article L. 542–1 of the Heritage Code, which states that "no person may use equipment for the detection of metal objects, for the purpose of searching for monuments and items of interest to prehistory, history, art and archaeology, without having previously obtained an administrative authorization issued on the basis of the applicant's qualifications and the nature and method of the research."
Outside the search for archaeological objects, using a metal detector does not require specific authorization, apart from that of the owner of the land. Asked about Law No. 89–900 of 18 December 1989 by a member of parliament, Jack Lang, Minister of Culture at the time, replied by letter as follows: "The new law does not prohibit the use of metal detectors but only regulates their use. If the purpose of such use is the search for archaeological remains, prior authorization is required from my services. Apart from this case, the law only requires that an accidental discovery of archaeological remains be reported to the appropriate authorities." Jack Lang's entire letter was published in 1990 in a French metal-detecting magazine and was later scanned, with the permission of the magazine's author, and posted on a French metal-detecting website.
Northern Ireland
In Northern Ireland, it is an offence to be in possession of a metal detector on a scheduled or a State Care site without a licence from the Department for Communities. It is also illegal to remove an archaeological object found with a detector from such a site without written consent.
Republic of Ireland
In the Republic of Ireland, laws against metal detecting are very strict: it is illegal to use a detection device to search for archaeological objects anywhere within the State or its territorial seas without the prior written consent of the Minister for Culture, Heritage and the Gaeltacht, and it is illegal to promote the sale or use of detection devices for the purposes of searching for archaeological objects.
Scotland
Under the Scots law principle of bona vacantia, the Crown has claim over any object of any material value where the original owner cannot be traced. There is also no 300 year limit to Scottish finds. Any artifact found, whether by metal detector survey or from an archaeological excavation, must be reported to the Crown through the Treasure Trove Advisory Panel at the National Museums of Scotland. The panel then determines what will happen to the artifacts. Reporting is not voluntary, and failure to report the discovery of historic artifacts is a criminal offence in Scotland.
United States
The sale of metal detectors is allowed in the United States. People can use metal detectors in public places (parks, beaches, etc.) and on private property with the permission of the owner of the site. In the United States, cooperation between archeologists hunting for the location of colonial-era Native American villages and hobbyists has been productive.
As a hobby
There are various types of hobby activities involving metal detectors:
Coin shooting is specifically targeting coins. Some coin shooters conduct historical research to locate sites with potential to give up historical and collectible coins.
Prospecting is looking for valuable metals like gold, silver, and copper in their natural forms, such as nuggets or flakes.
Metal detectors are also used to search for discarded or lost, valuable man-made objects such as jewelry, mobile phones, cameras and other devices. Some metal detectors are waterproof, to allow the user to search for submerged objects in areas of shallow water.
General metal detecting is very similar to coin shooting except the user is after any type of historical artifact. Detectorists may be dedicated to preserving historical artifacts, and often have considerable expertise. Coins, bullets, buttons, axe heads, and buckles are just a few of the items that are commonly found by relic hunters; in general the potential is far greater in Europe and Asia than in many other parts of the world. More valuable finds in Britain alone include the Staffordshire Hoard of Anglo-Saxon gold, sold for £3,285,000, the gold Celtic Newark Torc, the Ringlemere Cup, West Bagborough Hoard, Milton Keynes Hoard, Roman Crosby Garrett Helmet, Stirling Hoard, Collette Hoard and thousands of smaller finds.
Beach combing is hunting for lost coins or jewelry on a beach. Beach hunting can be as simple or as complicated as one wishes to make it. Many dedicated beach hunters also familiarize themselves with tide movements and beach erosion.
Metal detecting clubs exist for hobbyists to learn from others, show off finds from their hunts and to learn more about the hobby.
Hobbyists often use their own metal detecting lingo when discussing the hobby with others.
Politics and conflicts in the metal detecting hobby in the United States
The metal detecting community and professional archaeologists have different ideas related to the recovery and preservation of historic finds and locations. Archaeologists claim that detector hobbyists take an artifact-centric approach, removing finds from their context and resulting in a permanent loss of historical information. Archaeological looting of places like Slack Farm in 1987 and Petersburg National Battlefield serves as evidence against allowing unsupervised metal detecting in historic locations.
Security screening
In 1926, two scientists in Leipzig, Germany installed a walk-through enclosure at a factory to ensure that employees were not exiting with prohibited metallic items.
A series of aircraft hijackings led the United States in 1972 to adopt metal detector technology to screen airline passengers, initially using magnetometers that were originally designed for logging operations to detect spikes in trees. The Finnish company Outokumpu adapted mining metal detectors in the 1970s, still housed in a large cylindrical pipe, to make a commercial walk-through security detector. The development of these systems continued in a spin-off company and systems branded as Metor Metal Detectors evolved in the form of the rectangular gantry now standard in airports. In common with the developments in other uses of metal detectors both alternating current and pulse systems are used, and the design of the coils and the electronics has moved forward to improve the discrimination of these systems. In 1995 systems such as the Metor 200 appeared with the ability to indicate the approximate height of the metal object above the ground, enabling security personnel to more rapidly locate the source of the signal. Smaller hand held metal detectors are also used to locate a metal object on a person more precisely.
Industrial metal detectors
Contamination of food by metal shards from broken processing machinery during the manufacturing process is a major safety issue in the food industry. Most food processing equipment is made of stainless steel, and other components made of plastic or elastomers can be manufactured with embedded metallic particles, allowing them to be detected as well. Metal detectors for this purpose are widely used and integrated into the production line.
Current practice at garment or apparel industry plants is to apply metal detecting after the garments are completely sewn and before garments are packed to check whether there is any metal contamination (needle, broken needle, etc.) in the garments. This needs to be done for safety reasons.
The industrial metal detector was developed by Bruce Kerr and David Hiscock in 1947. The founding company, Goring Kerr, pioneered the use and development of the first industrial metal detector. Mars Incorporated was one of the first customers of Goring Kerr, using their Metlokate metal detector to inspect Mars bars.
The basic principle of operation for the common industrial metal detector is based on a 3-coil design. This design utilizes an AM (amplitude modulated) transmitting coil and two receiving coils, one on either side of the transmitter. The design and physical configuration of the receiving coils are instrumental in the ability to detect very small metal contaminants of 1 mm or smaller. Today, modern metal detectors continue to utilize this configuration for the detection of tramp metal.
The coil configuration is such that it creates an opening whereby the product (food, plastics, pharmaceuticals, etc.) passes through the coils. This opening or aperture allows the product to enter and exit through the three-coil system, producing an equal but mirrored signal on the two receiving coils. The resulting signals are summed together, effectively nullifying each other. Fortress Technology introduced a feature that allows the coil structure of its BSH model to ignore the effects of vibration, even when inspecting conductive products.
When a metal contaminant is introduced into the product an unequal disturbance is created. That creates a very small electronic signal. After suitable amplification a mechanical device mounted to the conveyor system is signaled to remove the contaminated product from the production line. This process is completely automated and allows manufacturing to operate uninterrupted.
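A minimal sketch of the balanced-coil idea described above: the two receiver signals are wired in opposition so their sum is (ideally) zero, and a contaminant passing through the aperture disturbs one coil more than the other, producing a small residual that triggers the reject mechanism. Signal levels and the threshold are arbitrary illustrative units:

```python
def balanced_output(rx1: float, rx2: float) -> float:
    """Receiver coils are wired in opposition; with clean product the two
    induced signals mirror each other and the summed output is near zero."""
    return rx1 - rx2

def should_reject(rx1: float, rx2: float, threshold: float) -> bool:
    """Flag the product for the reject mechanism if the residual imbalance
    (after amplification, in a real system) exceeds the threshold."""
    return abs(balanced_output(rx1, rx2)) > threshold

print(should_reject(1.000, 1.000, threshold=0.002))  # clean product  -> False
print(should_reject(1.004, 1.000, threshold=0.002))  # metal fragment -> True
```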
Civil engineering
In civil engineering, special metal detectors (cover meters) are used to locate reinforcement bars inside walls.
The most common type of metal detector is the hand-held, coil-based detector, using an oval-shaped disk with a built-in copper coil. The search coil works as a sensing probe and must be moved over the ground to detect potential metal targets buried underground. When the search coil detects a metal object, the device gives an audible signal via a speaker or earphone. In most units, there is also visual feedback on an analog or digital indicator.
Metal detectors were first invented and commercially manufactured in the United States by Fisher Labs in the 1930s; other companies such as Garrett developed the technology and features of metal detectors further in the following decades.
Military
The first metal detector proved inductance changes to be a practical metal detection technique, and it served as the prototype for all subsequent metal detectors.
Initially these machines were huge and complex. After Lee de Forest invented the triode in 1907 metal detectors used vacuum tubes to operate and became more sensitive but still quite cumbersome. One of the early common uses of the first metal detectors, for example, was the detection of landmines and unexploded bombs in a number of European countries following the First and Second World Wars.
Uses and benefits
Metal detectors can be used for several military uses, including:
Exposing mines planted during the war or after the end of the war
Detecting explosives and cluster bombs that endanger people's lives
Hand-held metal detectors can be used to search people for weapons and explosives
War mine detection
Demining, also known as mine removal, is the method of clearing a field of landmines. The aim of military operations is to clear a path through a minefield as quickly as possible, which is mostly accomplished using equipment like mine plows and blast waves.
Humanitarian demining aims to clear all landmines to a certain depth and make the land secure for human use. Landmine detection techniques have been studied in various forms. Detection of mines can be done by a specially designed metal detector tuned to detect mines and bombs. Electromagnetic technologies have been used in conjunction with ground-penetrating radar. Specially trained dogs are often used to focus the search and confirm that an area has been cleared, and mines are often cleared using mechanical equipment such as flails and excavators.
First idea
The first metal detector was likely the simple electric conduction metal detector ca. 1830. Electric conduction was also used to locate metal ore bodies by measuring the conductivity between metal rods driven into the ground.
In 1862, Italian General Giuseppe Garibaldi was wounded in the foot. It was difficult to distinguish between bullet, bone, and cartilage. So Professor Favre of Marseilles quickly built a simple probe that was inserted into the track of the bullet. It had two sharp points connected to a battery and a bell. Contact with metal completed the circuit and rang the bell. In 1867, Mr. Sylvan de Wilde had a similar detector and an extractor also wired to a bell.
In 1870, Gustave Trouvé, a French electrical engineer, also had a similar device; however, his buzzer made different sounds for lead and for iron.
The electric bullet locators were in use until the advent of X-rays.
Technology development
Gerhard Fischer
Gerhard Fischer developed a portable metal detector in 1925. His model was first marketed commercially in 1931; he was responsible for the first large-scale hand-held metal detector development.
Gerhard Fisher studied electronics at the University of Dresden before emigrating to the United States. When working as a research engineer in Los Angeles, he came up with the concept of a portable metal detector while working with aircraft radio direction finders. Fisher shared the concept with Albert Einstein, who foresaw the widespread use of hand-held metal detectors.
Fisher, the founder of Fisher Research Laboratory, was contracted by the Federal Telegraph Company and Western Air Express to establish airborne direction-finding equipment in the late 1920s. He received some of the first patents in the area of radio-based airborne direction finding. He came across some unusual errors in the course of his work; once he figured out what was wrong, he had the foresight to apply the solution to a totally unrelated area, metal and mineral detection.
Fisher received the patent for the first portable electronic metal detector in 1925. In 1931, he marketed his first Fisher device to the general public, and he established the well-known Fisher Labs company, which began to manufacture, develop, and sell hand-held metal detectors commercially.
Charles Garrett
Despite the fact that Fisher was the first to receive a patent for an electronic metal detector, he was only one of many who improved and mastered the device. Charles Garrett, the founder of Garrett Metal Detectors, was another key figure in the creation of today's metal detectors.
Garrett, an electrical engineer by profession, began metal detecting as a pastime in the early 1960s. He tried a number of machines on the market but couldn't find one that could do what he needed. As a result, he started developing his own metal detector. He was able to develop a system that removed oscillator drift, as well as many special search coils that he patented, both of which effectively revolutionized metal detector design at the time.
To present day
In the 1960s, the first industrial metal detectors were produced, and they were widely used for mineral prospecting and other industrial purposes. De-mining (the detection of landmines), the detection of weapons such as knives and guns (particularly in airport security), geophysical prospecting, archaeology, and treasure hunting are just some of the applications.
Metal detectors are also used to detect foreign bodies in food, as well as steel reinforcement bars in concrete and pipes. The building industry uses them to find wires buried in walls or floors.
Discriminators and circuits
The development of transistors, discriminators, modern search coil designs, and wireless technology significantly impacted the design of metal detectors as we know them today: lightweight, compact, easy-to-use, and deep-seeking systems. The invention of a tunable induction device was the most significant technological advancement in detectors. Two electro-magnetically tuned coils were used in this method. One coil serves as an RF transmitter, while the other serves as a receiver; in some situations, these coils may be tuned to frequencies ranging from 3 to 100 kHz.
Due to eddy currents induced in the metal, a signal is detected when metal is present. The fact that every metal has a different phase response when exposed to alternating current allowed detectors to differentiate between metals. Longer waves (low frequency) penetrate the ground deeper and select for high conductivity targets like silver and copper, while shorter waves (higher frequency) select for low conductivity targets like iron. Unfortunately, ground mineralization interference affects high frequency as well. This selectivity or discrimination allowed the development of detectors that can selectively detect desirable metals.
Even with discriminators, avoiding undesirable metals was difficult because some of them have similar phase responses (for example, tinfoil and gold), particularly in alloy form. As a result, tuning out those metals incorrectly increased the chance of missing a valuable discovery. Discriminators also had the downside of lowering the sensitivity of the devices.
See also
List of metal detecting finds
DEMIRA
Detectorists (BBC Television series)
Inductive sensor
Induction loop
Magnet fishing
Portable Antiquities Scheme
Notes
References
Grosvenor, Edwin S. and Wesson, Morgan. Alexander Graham Bell: The Life and Times of the Man Who Invented the Telephone. New York: Harry N. Abrams, Inc., 1997.
Colin King (editor), Jane's Mines and Mine Clearance.
Graves M, Smith A, and Batchelor B 1998: "Approaches to foreign body detection in foods", Trends in Food Science & Technology 9 21–27
1881 introductions
Hobbies
19th-century inventions | Metal detector | [
"Technology",
"Engineering"
] | 6,078 | [
"Measuring instruments",
"Metal detecting"
] |
165,018 | https://en.wikipedia.org/wiki/Ampere-turn | The ampere-turn (symbol A⋅t) is the MKS (metre–kilogram–second) unit of magnetomotive force (MMF), represented by a direct current of one ampere flowing in a single-turn loop. Turns refers to the winding number of an electrical conductor composing an electromagnetic coil.
For example, a current of 1 A flowing through a coil of 10 turns produces an MMF of 10 A⋅t.
The corresponding physical quantity is NI, the product of the number of turns, N, and the current, I; it has been used in industry, specifically, US-based coil-making industries.
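A minimal sketch of the quantity described above, computing N·I for a hypothetical coil and converting to the CGS gilbert using the standard factor 4π/10:

```python
import math

def mmf_ampere_turns(turns: int, current_a: float) -> float:
    """Magnetomotive force of a coil: the product N * I, in ampere-turns."""
    return turns * current_a

def ampere_turns_to_gilberts(ampere_turns: float) -> float:
    """CGS equivalent: 1 ampere-turn = 4*pi/10 gilberts (about 1.257 Gb)."""
    return ampere_turns * 4 * math.pi / 10

mmf = mmf_ampere_turns(turns=10, current_a=2.0)  # hypothetical 2 A, 10-turn coil
print(f"{mmf} A·t = {ampere_turns_to_gilberts(mmf):.2f} Gb")
```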
By maintaining the same current and increasing the number of loops or turns of the coil, the strength of the magnetic field increases because each loop or turn of the coil sets up its own magnetic field. The magnetic field unites with the fields of the other loops to produce the field around the entire coil, making the total magnetic field stronger.
The strength of the magnetic field is not linearly related to the ampere-turns when a magnetic material is used as a part of the system. Also, the material within the magnet carrying the magnetic flux "saturates" at some point, after which adding more ampere-turns has little effect.
The ampere-turn corresponds to 4π/10 ≈ 1.257 gilberts, the gilbert being the corresponding CGS unit.
In Thomas Edison's laboratory, Francis Upton was the lead mathematician. Trained with Helmholtz in Germany, he used the weber as the name of the unit of current, which was later renamed the ampere:
When conducting his investigations, Upton always noted the Weber turns and with his other data had all that was necessary to put the results of his work in proper form.
He discovered that a Weber turn (that is, an ampere turn) was a constant factor, a given number of which always produced the same effect magnetically.
See also
Inductance
Solenoid
References
Units of measurement
Magnetism | Ampere-turn | [
"Mathematics"
] | 390 | [
"Quantity",
"Units of measurement"
] |
165,062 | https://en.wikipedia.org/wiki/Newcomen%20atmospheric%20engine | The atmospheric engine was invented by Thomas Newcomen in 1712, and is often referred to as the Newcomen fire engine (see below) or simply as a Newcomen engine. The engine was operated by condensing steam drawn into the cylinder, thereby creating a partial vacuum which allowed the atmospheric pressure to push the piston into the cylinder. It was historically significant as the first practical device to harness steam to produce mechanical work. Newcomen engines were used throughout Britain and Europe, principally to pump water out of mines. Hundreds were constructed throughout the 18th century.
James Watt's later engine design was an improved version of the Newcomen engine that roughly doubled fuel efficiency. Many atmospheric engines were converted to the Watt design, for a price which was based on a fraction of the fuel-savings. As a result, Watt is today better known than Newcomen in relation to the origin of the steam engine.
Precursors
Prior to Newcomen a number of small steam devices of various sorts had been made, but most were essentially novelties. Around 1600 a number of experimenters used steam to power small fountains working like a coffee percolator. First a container was filled with water via a pipe, which extended through the top of the container to nearly the bottom. The bottom of the pipe would be submerged in the water, making the container airtight. The container was then heated to make the water boil. The steam generated pressurized the container, but the inner pipe, immersed at the bottom by liquid, and lacking an airtight seal at top, remained at a lower pressure; expanding steam forced the water at the bottom of the container into and up the pipe to spurt out of a nozzle on top. These devices had limited effectiveness but illustrated the principle's viability.
In 1606, the Spaniard, Jerónimo de Ayanz y Beaumont demonstrated and was granted a patent for a steam powered water pump. The pump was successfully used to drain the inundated mines of Guadalcanal, Spain.
In 1662 Edward Somerset, 2nd Marquess of Worcester, published a book containing several ideas he had been working on. One was for a steam-powered pump to supply water to fountains; the device alternately used a partial vacuum and steam pressure. Two containers were alternately filled with steam, then sprayed with cold water making the steam within condense; this produced a partial vacuum that would draw water through a pipe up from a well to the container. A fresh charge of steam under pressure then drove the water from the container up another pipe to a higher-level header before that steam condensed and the cycle repeated. By working the two containers alternately, the delivery rate to the header tank could be increased.
Savery's "Miner's Friend"
In 1698 Thomas Savery patented a steam-powered pump he called the "Miner's Friend", essentially identical to Somerset's design and almost certainly a direct copy. The process of cooling and creating the vacuum was fairly slow, so Savery later added an external cold water spray to quickly cool the steam.
Savery's invention cannot be strictly regarded as the first steam "engine" since it had no moving parts and could not transmit its power to any external device. There were evidently high hopes for the Miner's Friend, which led Parliament to extend the life of the patent by 21 years, so that the 1699 patent would not expire until 1733. Unfortunately, Savery's device proved much less successful than had been hoped.
A theoretical problem with Savery's device stemmed from the fact that a vacuum can raise water only to the limited height at which atmospheric pressure balances the weight of the water column; steam pressure could add some further lift, but the total was still insufficient to pump water out of a deep mine. In Savery's pamphlet, he suggests setting the boiler and containers on a ledge in the mineshaft and even a series of two or more pumps for deeper levels. Obviously these were inconvenient solutions, and some sort of mechanical pump working at surface level, one that lifted the water directly instead of "sucking" it up, was desirable. Such pumps were common already, powered by horses, but required a vertical reciprocating drive that Savery's system did not provide. The more practical problem concerned having a boiler operating under pressure, as demonstrated when the boiler of an engine at Wednesbury exploded, perhaps in 1705.
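The height limit mentioned above is the barometric one: atmospheric pressure can support only a limited column of water. A minimal sketch of that calculation, assuming standard atmospheric pressure and fresh water (real pumps of the period achieved somewhat less because the vacuum was imperfect):

```python
def max_suction_lift_m(atmospheric_pa: float = 101_325.0,
                       water_density_kg_m3: float = 1000.0,
                       g_m_s2: float = 9.81) -> float:
    """Theoretical maximum height a perfect vacuum can draw water up:
    atmospheric pressure balanced against the weight of the water column."""
    return atmospheric_pa / (water_density_kg_m3 * g_m_s2)

lift = max_suction_lift_m()
print(f"{lift:.1f} m ({lift / 0.3048:.0f} ft)")  # about 10.3 m (~34 ft) in theory
```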
Denis Papin's experimental steam cylinder and piston
Louis Figuier in his monumental work gives a full quotation of Denis Papin's paper published in 1690 in Acta eruditorum at Leipzig, entitled "Nouvelle méthode pour obtenir à bas prix des forces considérables" (A new method for cheaply obtaining considerable forces). It seems that the idea came to Papin whilst working with Robert Boyle at the Royal Society in London. Papin describes first pouring a small quantity of water into the bottom of a vertical cylinder, inserting a piston on a rod and after first evacuating the air below the piston, placing a fire beneath the cylinder to boil the water away and create enough steam pressure to raise the piston to the top end of the cylinder. The piston was then temporarily locked in the upper position by a spring catch engaging a notch in the rod. The fire was then removed, allowing the cylinder to cool, which condensed steam back into water, thus creating a vacuum beneath the piston. To the end of the piston rod was attached a cord passing over two pulleys and a weight hung down from the cord's end. Upon releasing the catch, the piston was sharply drawn down to the bottom of the cylinder by the pressure differential between the atmosphere and the created vacuum; enough force was thus generated to raise a weight. "Several of his papers were put before the Royal Society between 1707 and 1712 [including] a description of his 1690 atmospheric steam engine, similar to that built and [subsequently] put into use by Thomas Newcomen in 1712."
Introduction and spread
Newcomen took forward Papin's experiment and made it workable, although little information exists as to exactly how this came about. The main problem to which Papin had given no solution was how to make the action repeatable at regular intervals. The way forward was to provide, as Savery had, a boiler capable of ensuring the continuity of the supply of steam to the cylinder, providing the vacuum power stroke by condensing the steam, and disposing of the water once it had been condensed. The power piston was hung by chains from the end of a rocking beam. Unlike Savery's device, pumping was entirely mechanical, the work of the steam engine being to lift a weighted rod slung from the opposite extremity of the rocking beam. The rod descended the mine shaft by gravity and drove a force pump, or pole pump (or most often a gang of two) inside the mineshaft. The suction stroke of the pump was only for the length of the upward (priming) stroke, there consequently was no longer the 30-foot restriction of a vacuum pump and water could be forced up a column from far greater depths. The boiler supplied the steam at extremely low pressure and was at first located immediately beneath the power cylinder but could also be placed behind a separating wall with a connecting steam pipe. Making all this work needed the skill of a practical engineer; Newcomen's trade as an "ironmonger" or metal merchant would have given him significant practical knowledge of what materials would be suitable for such an engine and brought him into contact with people having even more detailed knowledge.
The earliest examples for which reliable records exist were two engines in the Black Country, of which the more famous was that erected in 1712 at the Conygree Coalworks in Bloomfield Road, Tipton, now the site of The Angle Ring Company Limited. This is generally accepted as the first successful Newcomen engine and was followed by one built a mile and a half east of Wolverhampton. Both were used by Newcomen and his partner John Calley to pump out water-filled coal mines. A working replica can today be seen at the nearby Black Country Living Museum, which stands on another part of what was Lord Dudley's Conygree Park.
Another Newcomen engine was in Cornwall. Its location is uncertain, but it is known that one was in operation at Wheal Vor mine in 1715.
Soon orders from wet mines all over England were coming in, and some have suggested that word of his achievement was spread through his Baptist connections. Since Savery's patent had not yet run out, Newcomen was forced to come to an arrangement with Savery and operate under the latter's patent, as its term was much longer than any Newcomen could have easily obtained. During the latter years of its currency, the patent belonged to an unincorporated company, The Proprietors of the Invention for raising water by fire.
Although its first use was in coal-mining areas, Newcomen's engine was also used for pumping water out of the metal mines in his native West Country, such as the tin mines of Cornwall. By the time of his death, Newcomen and others had installed over a hundred of his engines, not only in the West Country and the Midlands but also in north Wales, near Newcastle and in Cumbria. Small numbers were built in other European countries, including France, Belgium, Spain, and Hungary, as well as at Dannemora, Sweden. Evidence of the use of a Newcomen steam engine associated with early coal mines was found in 2010 in Midlothian, Virginia, the site of some of the first coal mines in the US (Dutton and Associates survey dated 24 November 2009).
Technical details
Components
Although based on simple principles, Newcomen's engine was rather complex and showed signs of incremental development, problems being empirically addressed as they arose. It consisted of a boiler A, usually a haystack boiler, situated directly below the cylinder. This produced large quantities of very low pressure steam, no more than – the maximum allowable pressure for a boiler that in earlier versions was made of copper with a domed top of lead and later entirely assembled from small riveted iron plates. The action of the engine was transmitted through a rocking "Great balanced Beam", the fulcrum E of which rested on the very solid end-gable wall of the purpose-built engine house with the pump side projecting outside of the building, the engine being located in-house. The pump rods were slung by a chain from the arch-head F of the great beam. From the in-house arch-head D was suspended a piston P working in a cylinder B, the top end of which was open to the atmosphere above the piston and the bottom end closed, apart from the short admission pipe connecting the cylinder to the boiler; early cylinders were made of cast brass, but cast iron was soon found more effective and much cheaper to produce. The piston was surrounded by a seal in the form of a leather ring, but as the cylinder bore was finished by hand and not absolutely true, a layer of water had to be constantly maintained on top of the piston. Installed high up in the engine house was a water tank C (or header tank) fed by a small in-house pump slung from a smaller arch-head. The header tank supplied cold water under pressure via a stand-pipe for condensing the steam in the cylinder with a small branch supplying the cylinder-sealing water; at each top stroke of the piston excess warm sealing water overflowed down two pipes, one to the in-house well and the other to feed the boiler by gravity.
Operation
The pump equipment was heavier than the steam piston, so that the position of the beam at rest was pump-side down/engine-side up, which was called "out of the house".
To start the engine, the regulator valve V was opened and steam admitted into the cylinder from the boiler, filling the space beneath the piston. The regulator valve was then closed and the water injection valve V' briefly snapped open and shut, sending a spray of cold water into the cylinder. This condensed the steam and created a partial vacuum under the piston. Pressure differential between the atmosphere above the piston and the partial vacuum below then drove the piston down making the power stroke, bringing the beam "into the house" and raising the pump gear.
Steam was then readmitted to the cylinder, destroying the vacuum and driving the condensate down the sinking or "eduction" pipe. As the low pressure steam from the boiler flowed into the cylinder, the weight of the pump and gear returned the beam to its initial position whilst at the same time driving the water up from the mine.
This cycle was repeated around 12 times per minute.
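As a rough illustration of the sequence just described, the sketch below steps through one working cycle; the step durations are invented, chosen only so the total matches the stated rate of about 12 strokes per minute.

```python
# Illustrative only: one Newcomen working cycle as a sequence of steps.
# Durations are assumptions picked to sum to ~5 s (about 12 strokes per minute).
CYCLE = [
    ("open regulator valve: steam fills the cylinder, pump gear pulls beam out of the house", 2.0),
    ("close regulator, snap injection valve: cold spray condenses the steam",                 0.5),
    ("vacuum forms: atmospheric pressure drives the piston down (power stroke)",              1.5),
    ("readmit steam: condensate driven down the eduction pipe, pump gear descends again",     1.0),
]

period_s = sum(duration for _, duration in CYCLE)
print(f"Cycle time ~{period_s:.1f} s, i.e. ~{60 / period_s:.0f} strokes per minute")
for step, duration in CYCLE:
    print(f"  {duration:>4.1f} s  {step}")
```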
Snifting valve
Newcomen found that his first engine would stop working after a while, and eventually discovered that this was due to small amounts of air being admitted to the cylinder with the steam. Water usually contains some dissolved air, and boiling the water released this with the steam. This air could not be condensed by the water spray and gradually accumulated until the engine became "wind logged". To prevent this, a release valve called a "snifting clack" or snifter valve was included near the bottom of the cylinder. This opened briefly when steam was first introduced, and non-condensable gas was driven from the cylinder. Its name was derived from the noise it made when it operated to release the air and steam "like a Man snifting with a Cold".
Automation
In early versions, the valves, or plugs as they were then called, were operated manually by the plug man, but the repetitive action demanded precise timing, making automatic action desirable. This was obtained by means of a plug tree, which was a beam suspended vertically alongside the cylinder from a small arch head by crossed chains, its function being to open and close the valves automatically when the beam reached certain positions, by means of tappets and escapement mechanisms using weights. On the 1712 engine, the water feed pump was attached to the bottom of the plug tree, but later engines had the pump outside, suspended from a separate small arch-head. There is a common legend that in 1713 a cock boy named Humphrey Potter, whose duty it was to open and shut the valves of an engine he attended, made the engine self-acting by causing the beam itself to open and close the valves by suitable cords and catches (known as the "potter cord"); however, the plug tree device (the first form of valve gear) was very likely established practice before 1715, and is clearly depicted in the earliest known images of Newcomen engines by Henry Beighton (1717) (believed by Hulse to depict the 1714 Griff colliery engine) and by Thomas Barney (1719) (depicting the 1712 Dudley Castle engine). Because of the very heavy steam demands, the engine had to be periodically stopped and restarted, but even this process was automated by means of a buoy rising and falling in a vertical stand pipe fixed to the boiler. The buoy was attached to the scoggen, a weighted lever that worked a stop which held the water injection valve shut until more steam had been raised.
Pumps
Most images show only the engine side, giving no information on the pumps. Current opinion is that at least on the early engines, dead-weight force pumps were used, the work of the engine being solely to lift the pump side ready for the next downwards pump stroke. This is the arrangement used for the Dudley Castle replica which effectively works at the original stated rate of 12 strokes per minute/ lifted per stroke. The later Watt engines used lift pumps powered by the engine stroke and it may be that later versions of the Newcomen engine did so too.
Development and application
Towards the close of its career, the atmospheric engine was much improved in its mechanical details and its proportions by John Smeaton, who built many large engines of this type during the 1770s. The urgent need for an engine to give rotary motion was making itself felt and this was done with limited success by Wasborough and Pickard using a Newcomen engine to drive a flywheel through a crank. Although the principle of the crank had long been known, Pickard managed to obtain a 12-year patent in 1780 for the specific application of the crank to steam engines; this was a setback to Boulton and Watt who bypassed the patent by applying the sun and planet motion to their advanced double-acting rotative engine of 1782.
By 1725 the Newcomen engine was in common use in mining, particularly collieries. It held its place with little material change for the rest of the century. Use of the Newcomen engine was extended in some places to pump municipal water supply; for instance the first Newcomen engine in France was built at Passy in 1726 to pump water from the Seine to the city of Paris. It was also used to power machinery indirectly, by returning water from below a water wheel to a reservoir above it, so that the same water could again turn the wheel. Among the earliest examples of this was at Coalbrookdale. A horse-powered pump had been installed in 1735 to return water to the pool above the Old Blast Furnace. This was replaced by a Newcomen engine in 1742–3. Several new furnaces built in Shropshire in the 1750s were powered in a similar way, including Horsehay and Ketley Furnaces and Madeley Wood or Bedlam Furnaces. The latter does not seem to have had a pool above the furnace, merely a tank into which the water was pumped. In other industries, engine-pumping was less common, but Richard Arkwright used an engine to provide additional power for his cotton mill.
Attempts were made to drive machinery by Newcomen engines, but these were unsuccessful, as the single power stroke produced a very jerky motion.
Successor
The main problem with the Newcomen design was that it used energy inefficiently, and was therefore expensive to operate. After the water vapor within was cooled enough to create the vacuum, the cylinder walls were cold enough to condense some of the steam as it was admitted during the next intake stroke. This meant that a considerable amount of fuel was being used just to heat the cylinder back to the point where the steam would start to fill it again. As the heat losses were related to the surfaces, while useful work related to the volume, increases in the size of the engine increased efficiency, and Newcomen engines became larger in time. However, efficiency did not matter very much within the context of a colliery, where coal was freely available.
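The scaling argument can be made concrete: for geometrically similar cylinders, the wall area that loses heat grows with the square of the linear size while the swept volume that does the work grows with the cube, so the loss per unit of work falls as the engine gets bigger. A small illustration with assumed dimensions:

```python
import math

# Illustrative only: surface area (proportional to heat loss) versus swept
# volume (proportional to work per stroke) for cylinders of assumed sizes.
def cylinder_area_volume(bore_m, stroke_m):
    area = math.pi * bore_m * stroke_m + 2 * math.pi * (bore_m / 2) ** 2
    volume = math.pi * (bore_m / 2) ** 2 * stroke_m
    return area, volume

for bore, stroke in [(0.5, 1.5), (1.0, 3.0), (2.0, 6.0)]:   # each engine double the last
    area, volume = cylinder_area_volume(bore, stroke)
    print(f"bore {bore:.1f} m: surface/volume = {area / volume:.1f} per metre")
# The ratio halves every time the engine doubles in size, so the relative
# heat loss falls, which is why larger Newcomen engines were more efficient.
```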
Newcomen's engine was only replaced when James Watt improved it in 1769 to avoid this problem (Watt had been asked to repair a model of a Newcomen engine by Glasgow University; a small model that exaggerated the problem). In the Watt steam engine, condensation took place in an exterior condenser unit, attached to the steam cylinder via a pipe. When a valve on the pipe was opened, the vacuum in the condenser would, in turn, evacuate that part of the cylinder below the piston. This eliminated the repeated cooling of the main cylinder walls and dramatically reduced fuel use. It also enabled the development of a double-acting cylinder, with both upwards and downwards power strokes, increasing the amount of power from the engine without a great increase in its size.
Watt's design, introduced in 1769, did not eliminate Newcomen engines immediately. Watt's vigorous defence of his patents resulted in the continued use of the Newcomen engine in an effort to avoid royalty payments. When his patents expired in 1800, there was a rush to install Watt engines, and Newcomen engines were eclipsed, even in collieries.
Surviving examples
The Newcomen Memorial Engine can be seen operating in Newcomen's home town of Dartmouth, where it was moved in 1963 by the Newcomen Society. This is believed to date from 1725, when it was initially installed at the Griff Colliery near Coventry.
An engine was installed at a colliery in Ashton-under-Lyne in about 1760. Known locally as Fairbottom Bobs it is now preserved at the Henry Ford Museum in Dearborn, Michigan.
The only Newcomen-style engine still extant in its original location is at what is now the Elsecar Heritage Centre, near Barnsley in South Yorkshire. This was probably the last commercially used Newcomen-style engine, as it ran from 1795 until 1923. The engine underwent extensive conservation works, together with its original shaft and engine-house, which were completed in autumn 2014.
There are two static examples of a Newcomen engine. One is in the Science Museum, London; a second, formerly at Caprington Colliery, Kilmarnock, is in the National Museum of Scotland.
Another example, originally used at Farme Colliery, is on display at Summerlee, Museum of Scottish Industrial Life; unusually, it was used for winding rather than water pumping, and had been in operation for almost a century when examined in situ in 1902.
In 1986, a full-scale operational replica of the 1712 Newcomen Steam Engine was completed at the Black Country Living Museum in Dudley. It is the only full-size working replica of the engine in existence and is believed to be a couple of miles away from the site of the first completed engine, erected in 1712. The 'fire engine', as it was known, is an impressive brick building from which a wooden beam projects through one wall. Rods hang from the outer end of the beam and operate pumps at the bottom of the mine shaft which raise the water to the surface. The engine itself is simple, with only a boiler, a cylinder and piston, and operating valves. A coal fire heats the water in the boiler, which is little more than a covered pan, and the steam generated then passes through a valve into the brass cylinder above the boiler. The cylinder is more than 2 metres long and 52 centimetres in diameter. The steam in the cylinder is condensed by injecting cold water, and the vacuum beneath the piston pulls the inner end of the beam down and causes the pump to move.
See also
Timeline of steam power
Cataract – the speed governing device used on beam engines
Atmospheric railway
References
Further reading
Reprint:
External links
English Heritage – National Monuments Record for Elsecar Newcomen engine
Industrial Revolution
Beam engines
Stationary steam engines
Steam engines
Piston engines
History of the steam engine
English inventions | Newcomen atmospheric engine | [
"Technology"
] | 4,584 | [
"Piston engines",
"Engines"
] |
165,067 | https://en.wikipedia.org/wiki/Watt%20steam%20engine | The Watt steam engine design was an invention of James Watt that became synonymous with steam engines during the Industrial Revolution, and it was many years before significantly new designs began to replace the basic Watt design.
The first steam engines, introduced by Thomas Newcomen in 1712, were of the "atmospheric" design. At the end of the power stroke, the weight of the object being moved by the engine pulled the piston to the top of the cylinder as steam was introduced. Then the cylinder was cooled by a spray of water, which caused the steam to condense, forming a partial vacuum in the cylinder. Atmospheric pressure on the top of the piston pushed it down, lifting the work object. James Watt noticed that it required significant amounts of heat to warm the cylinder back up to the point where steam could enter the cylinder without immediately condensing. When the cylinder was warm enough that it became filled with steam the next power stroke could commence.
Watt realised that the heat needed to warm the cylinder could be saved by adding a separate condensing cylinder. After the power cylinder was filled with steam, a valve was opened to the secondary cylinder, allowing the steam to flow into it and be condensed, which drew the steam from the main cylinder causing the power stroke. The condensing cylinder was water cooled to keep the steam condensing. At the end of the power stroke, the valve was closed so the power cylinder could be filled with steam as the piston moved to the top. The result was the same cycle as Newcomen's design, but without any cooling of the power cylinder which was immediately ready for another stroke.
Watt worked on the design over a period of several years, introducing the condenser, and introducing improvements to practically every part of the design. Notably, Watt performed a lengthy series of trials on ways to seal the piston in the cylinder, which considerably reduced leakage during the power stroke, preventing power loss. All of these changes produced a more reliable design which used half as much coal to produce the same amount of power.
The new design was introduced commercially in 1776, with the first example sold to the Carron Company ironworks. Watt continued working to improve the engine, and in 1781 introduced a system using a sun and planet gear to turn the linear motion of the engines into rotary motion. This made it useful not only in the original pumping role, but also as a direct replacement in roles where a water wheel would have been used previously. This was a key moment in the industrial revolution, since power sources could now be located anywhere instead of, as previously, needing a suitable water source and topography. Watt's partner Matthew Boulton began developing a multitude of machines that made use of this rotary power, developing the first modern industrialized factory, the Soho Foundry, which in turn produced new steam engine designs.
Watt's early engines were like the original Newcomen designs in that they used low-pressure steam, and all of the power was produced by atmospheric pressure. When, in the early 1800s, other companies introduced high-pressure steam engines, Watt was reluctant to follow suit due to safety concerns. Wanting to improve on the performance of his engines, Watt began considering the use of higher-pressure steam, as well as designs using multiple cylinders in both the double-acting concept and the multiple-expansion concept. These double-acting engines required the invention of the parallel motion, which allowed the piston rods of the individual cylinders to move in straight lines, keeping the piston true in the cylinder, while the walking beam end moved through an arc, somewhat analogous to a crosshead in later steam engines.
Introduction
In 1698, the English mechanical designer Thomas Savery invented a pumping appliance that used steam to draw water directly from a well by means of a vacuum created by condensing steam. The appliance was also proposed for draining mines, but it could only draw fluid up approximately 25 feet, meaning it had to be located within this distance of the mine floor being drained. As mines became deeper, this was often impractical. It also consumed a large amount of fuel compared with later engines.
The solution to draining deep mines was found by Thomas Newcomen, who developed an "atmospheric" engine that also worked on the vacuum principle. It employed a cylinder containing a movable piston connected by a chain to one end of a rocking beam that worked a mechanical lift pump from its opposite end. At the bottom of each stroke, steam was allowed to enter the cylinder below the piston. As the piston rose within the cylinder, drawn upward by a counterbalance, it drew in steam at atmospheric pressure. At the top of the stroke the steam valve was closed, and cold water was briefly injected into the cylinder as a means of cooling the steam. This water condensed the steam and created a partial vacuum below the piston. The atmospheric pressure outside the engine was then greater than the pressure within the cylinder, thereby pushing the piston into the cylinder. The piston, attached to a chain that was in turn attached to one end of the "rocking beam", pulled down that end of the beam, lifting the opposite end. Hence the pump deep in the mine, attached to the opposite end of the beam via ropes and chains, was driven. The pump pushed, rather than pulled, the column of water upward, and hence it could lift water any distance. Once the piston was at the bottom, the cycle repeated.
The Newcomen engine was more powerful than the Savery engine. For the first time water could be raised from a depth of over 300 feet. The first example from 1712 was able to replace a team of 500 horses that had been used to pump out the mine. Seventy-five Newcomen pumping engines were installed at mines in Britain, France, Holland, Sweden and Russia. In the next fifty years only a few small changes were made to the engine design.
While Newcomen engines brought practical benefits, they were inefficient in terms of the use of energy to power them. The system of alternately sending jets of steam, then cold water into the cylinder meant that the walls of the cylinder were alternately heated, then cooled with each stroke. Each charge of steam introduced would continue condensing until the cylinder approached working temperature once again. So at each stroke part of the potential of the steam was lost.
Separate condenser
In 1763, James Watt was working as an instrument maker at the University of Glasgow when he was assigned the job of repairing a model Newcomen engine and noted how inefficient it was.
In 1765, Watt conceived the idea of equipping the engine with a separate condensation chamber, which he called a "condenser". Because the condenser and the working cylinder were separate, condensation occurred without significant loss of heat from the cylinder. The condenser remained cold and below atmospheric pressure at all times, while the cylinder remained hot at all times.
Steam was drawn from the boiler to the cylinder under the piston. When the piston reached the top of the cylinder, the steam inlet valve closed and the valve controlling the passage to the condenser opened. The condenser, being at a lower pressure, drew the steam from the cylinder into the condenser, where it cooled and condensed from water vapour to liquid water, maintaining a partial vacuum in the condenser that was communicated to the space of the cylinder by the connecting passage. External atmospheric pressure then pushed the piston down the cylinder.
The separation of the cylinder and condenser eliminated the loss of heat that occurred when steam was condensed in the working cylinder of a Newcomen engine. This gave the Watt engine greater efficiency than the Newcomen engine, reducing the amount of coal consumed while doing the same amount of work as a Newcomen engine.
In Watt's design, the cold water was injected only into the condensation chamber. This type of condenser is known as a jet condenser. The condenser is located in a cold water bath below the cylinder. The volume of water entering the condenser as spray absorbed the latent heat of the steam, and was determined as seven times the volume of the condensed steam. The condensate and the injected water were then removed by the air pump, and the surrounding cold water served to absorb the remaining thermal energy to retain a condenser temperature of 30 °C to 45 °C and the equivalent pressure of 0.04 to 0.1 bar.
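The quoted condenser conditions are consistent with the saturation pressure of water at those temperatures, since the condenser pressure is essentially the vapour pressure of the condensate. A rough check using the commonly tabulated Antoine coefficients for water (an approximation, valid between about 1 °C and 100 °C):

```python
import math

# Rough check that a condenser at 30-45 degC sits at roughly 0.04-0.1 bar.
# Antoine equation with the commonly tabulated coefficients for water;
# treat the result as approximate.
A, B, C = 8.07131, 1730.63, 233.426      # log10(P/mmHg) = A - B / (C + T_degC)

def saturation_pressure_bar(temp_c):
    p_mmhg = 10 ** (A - B / (C + temp_c))
    return p_mmhg * 1.33322e-3           # mmHg -> bar

for temp_c in (30, 45):
    print(f"{temp_c} degC -> {saturation_pressure_bar(temp_c):.3f} bar")
# ~0.042 bar at 30 degC and ~0.096 bar at 45 degC, matching the text.
```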
At each stroke the warm condensate was drawn off from the condenser and sent to a hot well by a vacuum pump, which also helped to evacuate the steam from under the power cylinder. The still-warm condensate was recycled as feedwater for the boiler.
Watt's next improvement to the Newcomen design was to seal the top of the cylinder and surround the cylinder with a jacket. Steam was passed through the jacket before being admitted below the piston, keeping the piston and cylinder warm to prevent condensation within it. The second improvement was the utilisation of steam expansion against the vacuum on the other side of the piston. The steam supply was cut during the stroke, and the steam expanded against the vacuum on the other side. This increased the efficiency of the engine, but also created a variable torque on the shaft which was undesirable for many applications, in particular pumping. Watt therefore limited the expansion to a ratio of 1:2 (i.e. the steam supply was cut at half stroke). This increased the theoretical efficiency from 6.4% to 10.6%, with only a small variation in piston pressure. Watt did not use high pressure steam because of safety concerns.
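The benefit of cutting off the steam can be seen with an idealised calculation (isothermal expansion is assumed here, which is a simplification rather than Watt's own analysis): the same charge of steam does extra work as it expands after the supply is shut.

```python
import math

# Idealised, assumed figures: work obtained per unit of steam admitted, with
# full admission versus cut-off at half stroke followed by expansion.
P_STEAM = 1.1e5   # admission pressure, Pa (just above atmospheric; assumed)
V_FULL = 1.0      # swept volume, m^3 (arbitrary; only ratios matter)
CUTOFF = 0.5      # steam supply cut at half stroke

# Full admission: steam at P_STEAM follows the piston for the whole stroke.
work_per_steam_full = (P_STEAM * V_FULL) / V_FULL

# Cut-off: admission work up to the cut-off, then isothermal expansion work.
v_admitted = CUTOFF * V_FULL
work_cutoff = P_STEAM * v_admitted * (1 + math.log(V_FULL / v_admitted))
work_per_steam_cutoff = work_cutoff / v_admitted

gain = work_per_steam_cutoff / work_per_steam_full - 1
print(f"Extra work per unit of steam from expansive working: {gain:.0%}")   # ~69%
```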
These improvements led to the fully developed version of 1776 that actually went into production.
The partnership of Matthew Boulton and James Watt
The separate condenser showed dramatic potential for improvements on the Newcomen engine, but Watt was still discouraged by seemingly insurmountable problems before a marketable engine could be perfected. It was only after entering into partnership with Matthew Boulton that this became a reality. Watt told Boulton about his ideas on improving the engine, and Boulton, an avid entrepreneur, agreed to fund development of a test engine at Soho, near Birmingham. At last Watt had access to facilities and the practical experience of craftsmen who were soon able to get the first engine working. As fully developed, it used about 75% less fuel than a similar Newcomen one.
In 1775, Watt designed two large engines: one for the Bloomfield Colliery at Tipton, completed in March 1776, and one for John Wilkinson's ironworks at Broseley in Shropshire, which was at work the following month. A third engine, at Stratford-le-Bow in east London, was also working that summer.
Watt had tried unsuccessfully for several years to obtain an accurately bored cylinder for his steam engines, and was forced to use hammered iron, which was out of round and caused leakage past the piston. Joseph Wickham Roe stated in 1916: "When [John] Smeaton saw the first engine he reported to the Society of Engineers that 'Neither the tools nor the workmen existed who could manufacture such a complex machine with sufficient precision.'"
In 1774, John Wilkinson invented a boring machine in which the shaft that held the cutting tool was supported on both ends and extended through the cylinder, unlike the cantilevered borers then in use. Boulton wrote in 1776 that "Mr. Wilkinson has bored us several cylinders almost without error; that of 50 inches diameter, which we have put up at Tipton, does not err on the thickness of an old shilling in any part".
Boulton and Watt's practice was to help mine-owners and other customers to build engines, supplying men to erect them and some specialised parts. However, their main profit from their patent was derived from charging a licence fee to the engine owners, based on the cost of the fuel they saved. The greater fuel efficiency of their engines meant that they were most attractive in areas where fuel was expensive, particularly Cornwall, for which three engines were ordered in 1777, for the Wheal Busy, Ting Tang, and Chacewater mines.
Later improvements
The first Watt engines were atmospheric pressure engines, like the Newcomen engine but with the condensation taking place separate from the cylinder. Driving the engines using both low pressure steam and a partial vacuum raised the possibility of reciprocating engine development. An arrangement of valves could alternately admit low pressure steam to the cylinder and then connect with the condenser. Consequently, the direction of the power stroke might be reversed, making it easier to obtain rotary motion. Additional benefits of the double acting engine were increased efficiency, higher speed (greater power) and more regular motion.
Before the development of the double acting piston, the linkage to the beam and the piston rod had been by means of a chain, which meant that power could only be applied in one direction, by pulling. This was effective in engines that were used for pumping water, but the double action of the piston meant that it could push as well as pull. This was not possible as long as the beam and the rod were connected by a chain. Furthermore, it was not possible to connect the piston rod of the sealed cylinder directly to the beam, because while the rod moved vertically in a straight line, the beam was pivoted at its centre, with each side inscribing an arc. To bridge the conflicting actions of the beam and the piston, Watt developed his parallel motion. This device used a four bar linkage coupled with a pantograph to produce the required straight line motion much more cheaply than if he had used a slider type of linkage. He was very proud of his solution.
Having the beam connected to the piston shaft by a means that applied force alternately in both directions also meant that it was possible to use the motion of the beam to turn a wheel. The simplest solution to transforming the action of the beam into a rotating motion was to connect the beam to a wheel by a crank, but because another party had patent rights on the use of the crank, Watt was obliged to come up with another solution. He adopted the epicyclic sun and planet gear system suggested by an employee William Murdoch, only later reverting, once the patent rights had expired, to the more familiar crank seen on most engines today. The main wheel attached to the crank was large and heavy, serving as a flywheel which, once set in motion, by its momentum maintained a constant power and smoothed the action of the alternating strokes. To its rotating central shaft, belts and gears could be attached to drive a great variety of machinery.
Because factory machinery needed to operate at a constant speed, Watt linked a steam regulator valve to a centrifugal governor, which he adapted from those used to automatically control the speed of windmills. The centrifugal governor was not a true speed controller, because it could not hold a set speed in response to a change in load.
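The reason a purely centrifugal (proportional) governor cannot hold an exact speed is that, at equilibrium, it can only open the valve further by allowing the speed to fall below its set point; the result is a steady-state "droop". A small illustration with invented numbers:

```python
# Illustrative sketch of governor droop: the engine settles where the torque
# supplied by the (proportional) governor balances the load torque.
# All numbers are invented.
SET_SPEED_RPM = 100.0   # speed the governor is adjusted for
GAIN = 2.0              # extra torque supplied per rpm of speed error

def settling_speed(load_torque):
    # Solve GAIN * (SET_SPEED_RPM - speed) = load_torque for the speed.
    return SET_SPEED_RPM - load_torque / GAIN

for load in (20.0, 60.0):
    print(f"load {load:>4.0f}: engine settles at {settling_speed(load):.0f} rpm")
# A heavier load settles at a lower speed; the governor limits the change
# but cannot remove it, which is why it is not a true speed controller.
```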
These improvements allowed the steam engine to replace the water wheel and horses as the main sources of power for British industry, thereby freeing it from geographical constraints and becoming one of the main drivers in the Industrial Revolution.
Watt was also concerned with fundamental research on the functioning of the steam engine. His most notable measuring device, still in use today, is the Watt indicator incorporating a manometer to measure steam pressure within the cylinder according to the position of the piston, enabling a diagram to be produced representing the pressure of the steam as a function of its volume throughout the cycle.
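The quantity the indicator diagram yields is the indicated work per cycle, the area enclosed by the pressure-volume loop. The sketch below shows the calculation on a handful of sample points; the loop and its values are invented purely to illustrate the arithmetic.

```python
# Minimal sketch: indicated work per cycle as the enclosed area of a closed
# P-V loop, integrated edge by edge with the trapezoid rule.
def indicated_work(points):
    """Area of a closed loop of (volume m^3, pressure Pa) pairs, i.e. the integral of P dV."""
    work = 0.0
    for (v1, p1), (v2, p2) in zip(points, points[1:] + points[:1]):
        work += 0.5 * (p1 + p2) * (v2 - v1)
    return work

loop = [(0.10, 1.1e5), (0.50, 1.1e5), (1.00, 0.6e5),   # admission, then expansion
        (1.00, 0.1e5), (0.10, 0.1e5)]                  # condensation, return stroke
print(f"Indicated work per cycle ~ {indicated_work(loop) / 1000:.0f} kJ")   # ~78 kJ
```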
Preserved Watt engines
The oldest surviving Watt engine is Old Bess of 1777, now in the Science Museum, London.
The oldest working engine in the world is the Smethwick Engine, brought into service in May 1779 and now at Thinktank in Birmingham (formerly at the now defunct Museum of Science and Industry, Birmingham).
The oldest still in its original engine house and still capable of doing the job for which it was installed is the 1812 Boulton and Watt engine at the Crofton Pumping Station in Wiltshire. This was used to pump water for the Kennet and Avon Canal; on certain weekends throughout the year the modern pumps are switched off and the two steam engines at Crofton still perform this function.
The oldest extant rotative steam engine, the Whitbread Engine (from 1785, the third rotative engine ever built), is located in the Powerhouse Museum in Sydney, Australia.
A Boulton-Watt engine of 1788 may be found in the Science Museum, London, while an 1817 blowing engine, formerly used at the Netherton ironworks of M W Grazebrook, now decorates Dartmouth Circus, a traffic island at the start of the A38(M) motorway in Birmingham.
The Henry Ford Museum in Dearborn, Michigan houses a replica of a 1788 Watt rotative engine. It is a full-scale working model of a Boulton-Watt engine. The American industrialist Henry Ford commissioned the replica engine from the English manufacturer Charles Summerfield in 1932. The museum also holds an original Boulton and Watt atmospheric pump engine, originally used for canal pumping in Birmingham, in situ at the Bowyer Street pumping station from 1796 until 1854, and afterwards removed to Dearborn in 1929.
Another example is preserved at the Fumel factory in France.
Watt engine produced by Hathorn, Davey and Co
In the 1880s, Hathorn Davey and Co of Leeds produced a 1 hp, 125 rpm atmospheric engine with an external condenser but without steam expansion. It has been argued that this was probably the last commercial atmospheric engine to be manufactured. As an atmospheric engine, it did not have a pressurized boiler. It was intended for small businesses.
Recent developments
Watt's Expansion Engine is generally considered to be of historic interest only. There are, however, some recent developments which may lead to a renaissance of the technology. Today, an enormous amount of waste steam and waste heat with temperatures between 100 and 150 °C is generated by industry. In addition, solar thermal collectors, geothermal energy sources and biomass reactors produce heat in this temperature range. There are technologies to utilise this energy, in particular the Organic Rankine Cycle (ORC). In principle, these are steam turbines which do not use water but a fluid (a refrigerant) which evaporates at temperatures below 100 °C. Such systems are, however, fairly complex. They work with pressures of 6 to 20 bar, so that the whole system has to be completely sealed.
The Expansion Engine can offer significant advantages here, in particular for lower power ratings of 2 to 100 kW: with expansion ratios of 1:5, the theoretical efficiency reaches 15%, which is in the range of ORC systems. The Expansion Engine uses water as its working fluid, which is simple, cheap, non-toxic, non-flammable and non-corrosive. It works at pressures near and below atmospheric, so that sealing is not a problem. And it is a simple machine, implying cost effectiveness. Researchers from the University of Southampton in the UK are currently developing a modern version of Watt's engine in order to generate energy from waste steam and waste heat. They improved the theory, demonstrating that theoretical efficiencies of up to 17.4% (and actual efficiencies of 11%) are possible.
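Those efficiency figures can be sanity-checked against the Carnot limit for the same temperature range, which no engine working between those temperatures can exceed; the source temperatures below are the ones quoted above, with an assumed heat-rejection temperature of about 30 °C.

```python
# Sanity check: Carnot limit for a heat source at 100-150 degC rejecting
# heat at an assumed 30 degC.  Quoted theoretical efficiencies of 15-17%
# sit comfortably below these limits.
T_COLD_K = 30 + 273.15
for t_hot_c in (100, 150):
    t_hot_k = t_hot_c + 273.15
    carnot = 1 - T_COLD_K / t_hot_k
    print(f"Source at {t_hot_c} degC: Carnot limit {carnot:.1%}")
# ~18.8% at 100 degC and ~28.4% at 150 degC.
```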
In order to demonstrate the principle, a 25 watt experimental model engine was built and tested. The engine incorporates steam expansion as well as new features such as electronic control. The picture shows the model built and tested in 2016. Currently, a project to build and test a scaled-up 2 kW engine is under preparation.
See also
Carnot cycle
Corliss steam engine
Heat engine
Thermodynamics
Preserved beam engines
Ivan Polzunov made a dual-piston steam engine in 1766, but died before he could mass-produce it
References
External links
Watt atmospheric engine – Michigan State University, Chemical Engineering
Watt's 'perfect engine' – excerpts from Transactions of the Newcomen Society.
Boulton & Watt engine at the National Museum of Scotland
Boulton and Watt Steam Engine at the Powerhouse Museum, Sydney
James Watt Steam Engine Act on the UK Parliament website
Industrial Revolution
Scottish inventions
Steam Engine
History of the steam engine
Beam engines
Stationary steam engines
Thermodynamics | Watt steam engine | [
"Physics",
"Chemistry",
"Mathematics"
] | 4,107 | [
"Thermodynamics",
"Dynamical systems"
] |
165,094 | https://en.wikipedia.org/wiki/Runway | In aviation, a runway is an elongated, rectangular surface designed for the landing and takeoff of an aircraft. Runways may be a human-made surface (often asphalt, concrete, or a mixture of both) or a natural surface (grass, dirt, gravel, ice, sand or salt). Runways, taxiways and ramps are sometimes referred to as "tarmac", though very few runways are built using tarmac. Takeoff and landing areas defined on the surface of water for seaplanes are generally referred to as waterways. Runway lengths are now commonly given in meters worldwide, except in North America where feet are commonly used.
History
In 1916, in a World War I war effort context, the first concrete-paved runway was built in Clermont-Ferrand in France, allowing local company Michelin to manufacture Bréguet Aviation military aircraft.
In January 1919, aviation pioneer Orville Wright underlined the need for "distinctly marked and carefully prepared landing places, [but] the preparing of the surface of reasonably flat ground [is] an expensive undertaking [and] there would also be a continuous expense for the upkeep."
Headings
For fixed-wing aircraft, it is advantageous to perform takeoffs and landings into the wind to reduce takeoff or landing roll and reduce the ground speed needed to attain flying speed. Larger airports usually have several runways in different directions, so that one can be selected that is most nearly aligned with the wind. Airports with one runway are often constructed to be aligned with the prevailing wind. Compiling a wind rose is one of the preliminary steps taken in constructing airport runways. Wind direction is given as the direction the wind is coming from: a plane taking off from runway 09 faces east, into an "east wind" blowing from 090°.
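In practice, "most nearly aligned with the wind" means the runway direction with the largest headwind component. The sketch below, with an invented wind and a single runway pair, resolves the wind into head- and crosswind components using the same convention as above (wind direction is where the wind blows from):

```python
import math

# Illustrative sketch: pick the runway direction with the greater headwind.
# Wind and runway figures are invented for the example.
def wind_components(runway_heading_deg, wind_from_deg, wind_speed_kt):
    angle = math.radians(wind_from_deg - runway_heading_deg)
    headwind = wind_speed_kt * math.cos(angle)    # negative means a tailwind
    crosswind = wind_speed_kt * math.sin(angle)   # positive means from the right
    return headwind, crosswind

wind_from, wind_speed = 120, 15                   # wind from 120 deg at 15 kt (assumed)
for heading in (90, 270):                         # the two ends of runway 09/27
    head, cross = wind_components(heading, wind_from, wind_speed)
    print(f"Runway {heading // 10:02d}: headwind {head:+5.1f} kt, crosswind {abs(cross):4.1f} kt")
# Runway 09 gives the larger headwind, so it would be the direction in use.
```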
Originally in the 1920s and 1930s, airports and air bases (particularly in the United Kingdom) were built in a triangle-like pattern of three runways at 60° angles to each other. The reason was that aviation was only starting, and although it was known that wind affected the runway distance required, not much was known about wind behaviour. As a result, three runways in a triangle-like pattern were built, and the runway with the heaviest traffic would eventually expand into the airport's main runway, while the other two runways would be either abandoned or converted into taxiways.
Naming
Runways are named by a number between 01 and 36, which is generally the magnetic azimuth of the runway's heading in decadegrees. This heading differs from true north by the local magnetic declination. A runway numbered 09 points east (90°), runway 18 is south (180°), runway 27 points west (270°) and runway 36 points to the north (360° rather than 0°). When taking off from or landing on runway 09, a plane is heading around 90° (east). A runway can normally be used in both directions, and is named for each direction separately: e.g., "runway 15" in one direction is "runway 33" when used in the other. The two numbers differ by 18 (= 180°). For clarity in radio communications, each digit in the runway name is pronounced individually: runway one-five, runway three-three, etc. (instead of "fifteen" or "thirty-three").
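Put another way, the designator is the magnetic heading rounded to the nearest 10° and expressed in decadegrees (with north written as 36 rather than 00), and the opposite end differs by 18. A minimal sketch, purely illustrative:

```python
def runway_designators(magnetic_heading_deg):
    """Designator pair for one physical runway (illustrative sketch)."""
    number = round(magnetic_heading_deg / 10) % 36
    number = 36 if number == 0 else number            # north is written 36, never 00
    reciprocal = number - 18 if number > 18 else number + 18
    return f"{number:02d}", f"{reciprocal:02d}"

for heading in (87, 233, 226, 224):
    print(f"{heading:3d} deg -> runway {'/'.join(runway_designators(heading))}")
#  87 deg -> runway 09/27
# 233 deg -> runway 23/05
# 226 deg -> runway 23/05
# 224 deg -> runway 22/04   (compare the renumbering example further below)
```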
A leading zero, for example in "runway zero-six" or "runway zero-one-left", is included for all ICAO and some U.S. military airports (such as Edwards Air Force Base). However, most U.S. civil aviation airports drop the leading zero, as required by FAA regulation. This also includes some military airfields, such as Cairns Army Airfield. This American anomaly may lead to inconsistencies in conversations between American pilots and controllers in other countries. It is very common in a country such as Canada for a controller to clear an incoming American aircraft to, for example, runway 04, and for the pilot to read back the clearance as runway 4. Flight simulation programs of American origin might apply U.S. usage to airports around the world. For example, runway 05 at Halifax will appear in such a program as the single digit 5 rather than 05.
Military airbases may include smaller paved runways known as "assault strips" for practice and training next to larger primary runways. These strips eschew the standard numerical naming convention and instead employ the runway's full three digit heading; examples include Dobbins Air Reserve Base's Runway 110/290 and Duke Field's Runway 180/360.
Runways with non-hard surfaces, such as small turf airfields and waterways for seaplanes, may use the standard numerical scheme or may use traditional compass point naming, examples include Ketchikan Harbor Seaplane Base's Waterway E/W. Airports with unpredictable or chaotic water currents, such as Santa Catalina Island's Pebbly Beach Seaplane Base, may designate their landing area as Waterway ALL/WAY to denote the lack of designated landing direction.
Letter suffix
If there is more than one runway pointing in the same direction (parallel runways), each runway is identified by appending left (L), center (C) and right (R) to the end of the runway number to identify its position (when facing its direction)—for example, runways one-five-left (15L), one-five-center (15C), and one-five-right (15R). Runway zero-three-left (03L) becomes runway two-one-right (21R) when used in the opposite direction (derived from adding 18 to the original number for the 180° difference when approaching from the opposite direction). In some countries, regulations mandate that where parallel runways are too close to each other, only one may be used at a time under certain conditions (usually adverse weather).
At large airports with four or more parallel runways (for example, at Chicago O'Hare, Los Angeles, Detroit Metropolitan Wayne County, Hartsfield-Jackson Atlanta, Denver, Dallas–Fort Worth and Orlando), some runway identifiers are shifted by 1 to avoid the ambiguity that would result with more than three parallel runways. For example, in Los Angeles, this system results in runways 6L, 6R, 7L, and 7R, even though all four runways are actually parallel at approximately 69°. At Dallas/Fort Worth International Airport, there are five parallel runways, named 17L, 17C, 17R, 18L, and 18R, all oriented at a heading of 175.4°. Occasionally, an airport with only three parallel runways may use different runway identifiers, such as when a third parallel runway was opened at Phoenix Sky Harbor International Airport in 2000 to the south of existing 8R/26L—rather than confusingly becoming the "new" 8R/26L it was instead designated 7R/25L, with the former 8R/26L becoming 7L/25R and 8L/26R becoming 8/26.
Suffixes may also be used to denote special use runways. Airports that have seaplane waterways may choose to denote the waterway on charts with the suffix W; such as Daniel K. Inouye International Airport in Honolulu and Lake Hood Seaplane Base in Anchorage. Small airports that host various forms of air traffic may employ additional suffixes to denote special runway types based on the type of aircraft expected to use them, including STOL aircraft (S), gliders (G), rotorcraft (H), and ultralights (U). Runways that are numbered relative to true north rather than magnetic north will use the suffix T; this is advantageous for certain airfields in the far north such as Thule Air Base (08T/26T).
Renumbering
Runway designations may change over time because the Earth's magnetic field slowly drifts and the local magnetic heading of a runway changes with it. Depending on the airport location and how much drift occurs, it may be necessary to change the runway designation. As runways are designated with headings rounded to the nearest 10°, this affects some runways sooner than others. For example, if the magnetic heading of a runway is 233°, it is designated Runway 23. If the magnetic heading changes downwards by 5 degrees to 228°, the runway remains Runway 23. If, on the other hand, the original magnetic heading was 226° (Runway 23), and the heading decreased by only 2 degrees to 224°, the runway becomes Runway 22. Because magnetic drift itself is slow, runway designation changes are uncommon and not welcomed, as they require an accompanying change in aeronautical charts and descriptive documents. When a runway designation does change, especially at major airports, it is often done at night, because taxiway signs need to be changed and the numbers at each end of the runway need to be repainted to the new runway designators. In July 2009, for example, London Stansted Airport in the United Kingdom changed its runway designations from 05/23 to 04/22 during the night.
Declared distances
Runway dimensions vary from as small as long and wide in smaller general aviation airports, to long and wide at large international airports built to accommodate the largest jets, to the huge lake bed runway 17/35 at Edwards Air Force Base in California – developed as a landing site for the Space Shuttle.
Takeoff and landing distances available are given using one of the following terms:
Takeoff Run Available (TORA) – The length of runway declared available and suitable for the ground run of an airplane taking off.
Takeoff Distance Available (TODA) – The length of the takeoff run available plus the length of the clearway, if clearway is provided. (The clearway length allowed must lie within the aerodrome or airport boundary; according to the Federal Aviation Regulations and Joint Aviation Requirements (JAR), TODA is the lesser of TORA plus clearway or 1.5 times TORA, as in the sketch after this list.)
Accelerate-Stop Distance Available (ASDA)– The length of the takeoff run available plus the length of the stopway, if stopway is provided.
Landing Distance Available (LDA) – The length of runway that is declared available and suitable for the ground run of an airplane landing.
Emergency Distance Available (EMDA) – LDA (or TORA) plus a stopway.
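A minimal sketch of how these declared distances relate, using invented lengths and the TODA cap mentioned above; it assumes TORA equals the full runway length and that a displaced threshold shortens only the landing distance:

```python
# Invented lengths, in metres, for a single runway direction.
tora = 2_500                 # takeoff run available (assumed equal to runway length)
clearway = 400
stopway = 150
displaced_threshold = 300

toda = min(tora + clearway, 1.5 * tora)   # TODA capped at 1.5 x TORA, per the note above
asda = tora + stopway
lda = tora - displaced_threshold          # simplification: landing run starts at the displaced threshold

print(f"TORA {tora} m, TODA {toda:.0f} m, ASDA {asda} m, LDA {lda} m")
# TORA 2500 m, TODA 2900 m, ASDA 2650 m, LDA 2200 m
```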
Sections
There are standards for runway markings.
The runway thresholds are markings across the runway that denote the beginning and end of the designated space for landing and takeoff under non-emergency conditions.
The runway safety area is the cleared, smoothed and graded area around the paved runway. It is kept free from any obstacles that might impede flight or ground roll of aircraft.
The runway is the surface from threshold to threshold (including displaced thresholds), which typically features threshold markings, numbers, and centerlines, but excludes blast pads and stopways at both ends.
Blast pads are often constructed just before the start of a runway where jet blast produced by large planes during the takeoff roll could otherwise erode the ground and eventually damage the runway.
Stopways, also known as overrun areas, are also constructed at the end of runways as emergency space to stop planes that overrun the runway on landing or a rejected takeoff.
Blast pads and stopways look similar, and are both marked with yellow chevrons; stopways may optionally be surrounded by red runway lights. The differences are that stopways can support the full weight of an aircraft and are designated for use in an aborted takeoff, while blast pads are often not as strong as the main paved surface of the runway and are not to be used for taxiing, landing, or aborted takeoffs. An engineered materials arrestor system (EMAS) may also be present, which may overlap with the end of the blast pad or stopway and is painted similarly (although an EMAS does not count as part of a stopway).
Displaced thresholds may be used for taxiing, takeoff, and landing rollout, but not for touchdown. A displaced threshold often exists because of obstacles just before the runway, runway strength, or noise restrictions making the beginning section of runway unsuitable for landings. It is marked with white paint arrows that lead up to the beginning of the landing portion of the runway. As with blast pads, landings on displaced thresholds are not permitted aside from emergency use or exigent circumstance.
Relocated thresholds are similar to displaced thresholds. They are used to mark a portion of the runway temporarily closed due to construction or runway maintenance. This closed portion of the runway is not available for use by aircraft for takeoff or landing, but it is available for taxi. While methods for identifying the relocated threshold vary, a common way for the relocated threshold to be marked is a ten-foot-wide white bar across the width of the runway.
Clearway is an area beyond the paved runway, aligned with the runway centerline and under the control of the airport authorities. This area is not less than 500 ft and there are no protruding obstacles except for threshold lights provided they are not higher than 26 inches. There is a limit on the upslope of the clearway of 1.25%. The length of the clearway may be included in the length of the takeoff distance available. For example, if a paved runway is long and there are of clearway beyond the end of the runway, the takeoff distance available is long. When the runway is to be used for takeoff of a large airplane, the maximum permissible takeoff weight of the airplane can be based on the takeoff distance available, including clearway. Clearway allows large airplanes to take off at a heavier weight than would be allowed if only the length of the paved runway is taken into account.
Markings
There are runway markings and signs on most large runways. Larger runways have a distance remaining sign (black box with white numbers). This sign uses a single number to indicate the remaining distance of the runway in thousands of feet. For example, a 7 will indicate 7,000 feet remaining. The runway threshold is marked by a line of green lights.
There are three types of runways:
Visual runways are used at small airstrips and are usually just a strip of grass, gravel, ice, asphalt, or concrete. Although there are usually no markings on a visual runway, they may have threshold markings, designators, and centerlines. Additionally, they do not provide an instrument-based landing procedure; pilots must be able to see the runway to use it. Also, radio communication may not be available and pilots must be self-reliant.
Non-precision instrument runways are often used at small- to medium-size airports. These runways, depending on the surface, may be marked with threshold markings, designators, centerlines, and sometimes a mark (known as an aiming point, sometimes installed at ). While centerlines provide horizontal position guidance, aiming point markers provide vertical position guidance to planes on visual approach.
Precision instrument runways, which are found at medium- and large-size airports, consist of a blast pad/stopway (optional, for airports handling jets), threshold, designator, centerline, aiming point, and , /, , , and touchdown zone marks. Precision runways provide both horizontal and vertical guidance for instrument approaches.
Waterways may be unmarked or marked with buoys that follow maritime notation instead.
For runways and taxiways that are permanently closed, the lighting circuits are disconnected. The runway threshold, runway designation, and touchdown markings are obliterated and yellow "Xs" are placed at each end of the runway and at intervals.
National variants
In Australia, Canada, the United Kingdom, as well as some other countries or territories (Hong Kong and Macau) all 3-stripe and 2-stripe touchdown zones for precision runways are replaced with one-stripe touchdown zones.
In some South American countries like Colombia, Ecuador and Peru, one 3-stripe is added and a 2-stripe is replaced with the aiming point.
Some European countries replace the aiming point with a 3-stripe touchdown zone.
Runways in Norway have yellow markings instead of the usual white ones. This also occurs in some airports in Japan, Sweden, and Finland. The yellow markings are used to ensure better contrast against snow.
Runways may have different types of equipment on each end. To reduce costs, many airports do not install precision guidance equipment on both ends. Runways with one precision end and any other type of end can install the full set of touchdown zones, even if some are past the midpoint. Runways with precision markings on both ends omit touchdown zones within of the midpoint, to avoid ambiguity over the end with which the zone is associated.
Lighting
A line of lights on an airfield or elsewhere used to guide aircraft in taking off or coming in to land, or an illuminated runway, is sometimes also known as a flare path.
Technical specifications
Runway lighting is used at airports during periods of darkness and low visibility. Seen from the air, runway lights form an outline of the runway. A runway may have some or all of the following:
Runway end identifier lights (REIL) – unidirectional (facing approach direction) or omnidirectional pair of synchronized flashing lights installed at the runway threshold, one on each side.
Runway end lights – a pair of four lights on each side of the runway; on precision instrument runways, these lights extend along the full width of the runway. They show green when viewed by approaching aircraft and red when seen from the runway.
Runway edge lights – white elevated lights that run the length of the runway on either side. On precision instrument runways, the edge-lighting becomes amber in the last of the runway, or last third of the runway, whichever is less. Taxiways are differentiated by being bordered by blue lights, or by having green center lights, depending on the width of the taxiway, and the complexity of the taxi pattern.
Runway centerline lighting system (RCLS) – lights embedded into the surface of the runway at intervals along the runway centerline on some precision instrument runways. White except the last : alternate white and red for next and red for last .
Touchdown zone lights (TDZL) – rows of white light bars (with three in each row) at intervals on either side of the centerline for .
Taxiway centerline lead-off lights – alternate green and yellow lights embedded into the runway pavement, installed along lead-off markings. They start with a green light near the runway centerline and run to the position of the first centerline light beyond the hold-short markings on the taxiway.
Taxiway centerline lead-on lights – installed the same way as taxiway centerline lead-off Lights, but directing airplane traffic in the opposite direction.
Land and hold short lights – a row of white pulsating lights installed across the runway to indicate hold short position on some runways that are facilitating land and hold short operations (LAHSO).
Approach lighting system (ALS) – a lighting system installed on the approach end of an airport runway and consists of a series of lightbars, strobe lights, or a combination of the two that extends outward from the runway end.
According to Transport Canada's regulations, the runway-edge lighting must be visible for at least . Additionally, a new system of advisory lighting, runway status lights, is currently being tested in the United States.
The edge lights must be arranged such that:
the minimum distance between lines is , and maximum is
the maximum distance between lights within each line is
the minimum length of parallel lines is
the minimum number of lights in the line is 8.
Control of lighting system
Typically the lights are controlled by a control tower, a flight service station or another designated authority. Some airports/airfields (particularly uncontrolled ones) are equipped with pilot-controlled lighting, so that pilots can temporarily turn on the lights when the relevant authority is not available. This avoids the need for automatic systems or staff to turn the lights on at night or in other low visibility situations. This also avoids the cost of having the lighting system on for extended periods. Smaller airports may not have lighted runways or runway markings. Particularly at private airfields for light planes, there may be nothing more than a windsock beside a landing strip.
Safety
Types of runway safety incidents include:
Runway excursion – an incident involving only a single aircraft, where it makes an inappropriate exit from the runway (e.g. Thai Airways Flight 679).
Runway overrun (also known as an overshoot) – a type of excursion where the aircraft is unable to stop before the end of the runway (e.g. Air France Flight 358, TAM Airlines Flight 3054, Air India Express Flight 812).
Runway incursion – an incident involving incorrect presence of a vehicle, person or another aircraft on the runway (e.g. Aeroflot Flight 3352, Scandinavian Airlines Flight 686).
Runway confusion – an aircraft makes use of the wrong runway for landing or takeoff (e.g. Singapore Airlines Flight 006, Western Airlines Flight 2605).
Runway undershoot – an aircraft that lands short of the runway (e.g. British Airways Flight 38, Asiana Airlines Flight 214).
Surface
The choice of material used to construct the runway depends on the use and the local ground conditions. For a major airport, where the ground conditions permit, the most satisfactory type of pavement for long-term minimum maintenance is concrete. Although certain airports have used reinforcement in concrete pavements, this is generally found to be unnecessary, with the exception of expansion joints across the runway where a dowel assembly, which permits relative movement of the concrete slabs, is placed in the concrete. Where it can be anticipated that major settlements of the runway will occur over the years because of unstable ground conditions, it is preferable to install asphalt concrete surface, as it is easier to patch on a periodic basis. Fields with very low traffic of light planes may use a sod surface. Some runways make use of salt flats.
For pavement designs, borings are taken to determine the subgrade condition, and based on the relative bearing capacity of the subgrade, the specifications are established. For heavy-duty commercial aircraft, the pavement thickness, no matter what the top surface, varies from , including subgrade.
Airport pavements have been designed by two methods. The first, Westergaard, is based on the assumption that the pavement is an elastic plate supported on a heavy fluid base with a uniform reaction coefficient known as the K value. Experience has shown that the K values on which the formula was developed are not applicable for newer aircraft with very large footprint pressures.
The second method is called the California bearing ratio and was developed in the late 1940s. It is an extrapolation of the original test results, which are not applicable to modern aircraft pavements or to modern aircraft landing gear. Some designs were made by a mixture of these two design theories. A more recent method is an analytical system based on the introduction of vehicle response as an important design parameter. Essentially it takes into account all factors, including the traffic conditions, service life, materials used in the construction, and, especially important, the dynamic response of the vehicles using the landing area.
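For context, the California bearing ratio itself is a simple penetration-test ratio. The sketch below shows the usual calculation; the standard crushed-stone reference pressures (roughly 6.9 MPa at 2.5 mm and 10.3 MPa at 5.0 mm penetration) are taken as assumptions from common soil-testing practice, not from this article.

```python
# Minimal sketch of a California bearing ratio (CBR) calculation.
# The reference pressures for standard crushed stone are the commonly quoted
# values and are assumptions here, not taken from this article.
STANDARD_PRESSURE_MPA = {2.5: 6.9, 5.0: 10.3}  # penetration depth (mm) -> MPa

def cbr(measured_pressure_mpa, penetration_mm=2.5):
    """CBR = measured pressure / standard pressure * 100 (percent)."""
    return 100.0 * measured_pressure_mpa / STANDARD_PRESSURE_MPA[penetration_mm]

# Example: a subgrade resisting 0.55 MPa at 2.5 mm penetration.
print(round(cbr(0.55), 1))  # -> 8.0 (a fairly weak subgrade)
```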
Because airport pavement construction is so expensive, manufacturers aim to minimize aircraft stresses on the pavement. Manufacturers of the larger planes design landing gear so that the weight of the plane is supported on larger and more numerous tires. Attention is also paid to the characteristics of the landing gear itself, so that adverse effects on the pavement are minimized. Sometimes it is possible to reinforce a pavement for higher loading by applying an overlay of asphaltic concrete or portland cement concrete that is bonded to the original slab. Post-tensioning concrete has been developed for the runway surface. This permits the use of thinner pavements and should result in longer concrete pavement life. Because of the susceptibility of thinner pavements to frost heave, this process is generally applicable only where there is no appreciable frost action.
Pavement surface
Runway pavement surface is prepared and maintained to maximize friction for wheel braking. To minimize hydroplaning following heavy rain, the pavement surface is usually grooved so that the surface water film flows into the grooves and the peaks between grooves remain in contact with the aircraft tyres. To maintain the macrotexturing built into the runway by the grooves, maintenance crews engage in airfield rubber removal or hydrocleaning in order to meet the friction levels required by the FAA or other aviation authorities.
Pavement subsurface drainage and underdrains
Subsurface underdrains help provide extended life and reliable pavement performance. At the Hartsfield Atlanta airport in Georgia, the underdrains usually consist of trenches wide and deep from the top of the pavement. A perforated plastic tube ( in diameter) is placed at the bottom of the ditch. The ditches are filled with gravel-sized crushed stone. Excessive moisture under a concrete pavement can cause pumping, cracking, and joint failure.
Surface type codes
In aviation charts, the surface type is usually abbreviated to a three-letter code.
The most common hard surface types are asphalt and concrete. The most common soft surface types are grass and gravel.
Length
A runway of at least in length is usually adequate for aircraft weights below approximately . Larger aircraft including widebodies will usually require at least at sea level. International widebody flights, which carry substantial amounts of fuel and are therefore heavier, may also have landing requirements of or more and takeoff requirements of . The Boeing 747 is considered to have the longest takeoff distance of the more common aircraft types and has set the standard for runway lengths of larger international airports.
At sea level, can be considered an adequate length to land virtually any aircraft. For example, at O'Hare International Airport, when landing simultaneously on 4L/22R and 10/28 or parallel 9R/27L, it is routine for arrivals from East Asia, which would normally be vectored for 4L/22R () or 9R/27L () to request 28R (). It is always accommodated, although occasionally with a delay. Another example is that the Luleå Airport in Sweden was extended to to allow any fully loaded freight aircraft to take off. These distances are also influenced by the runway grade (slope) such that, for example, each 1 percent of runway down slope increases the landing distance by 10 percent.
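The slope correction quoted above (each 1 percent of downslope adds 10 percent to the landing distance) lends itself to a one-line calculation; the sketch below simply applies that rule, and the sample figures are illustrative assumptions.

```python
# Landing-distance adjustment for runway slope, using the rule quoted above:
# each 1 percent of downslope increases the landing distance by 10 percent.

def landing_distance_with_slope(base_distance_m, downslope_percent):
    """Scale a level-runway landing distance for a downhill gradient."""
    return base_distance_m * (1 + 0.10 * downslope_percent)

# Example (illustrative numbers): a 1,500 m level-runway landing distance
# on a 2 percent downslope becomes 1,800 m.
print(landing_distance_with_slope(1500, 2))  # -> 1800.0
```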
An aircraft taking off at a higher altitude must do so at reduced weight due to decreased density of air at higher altitudes, which reduces engine power and wing lift. An aircraft must also take off at a reduced weight in hotter or more humid conditions (see density altitude). Most commercial aircraft carry manufacturer's tables showing the adjustments required for a given temperature.
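As a rough illustration of the altitude and temperature effect described here, the sketch below uses the common rule-of-thumb density-altitude estimate (about 120 ft added per degree Celsius above the ISA standard temperature); the formula and the sample figures are conventional approximations assumed for illustration, not data from this article.

```python
# Rule-of-thumb density altitude estimate: pressure altitude plus roughly
# 120 ft for every degree Celsius the outside air temperature exceeds the
# ISA standard temperature at that altitude. This is the common pilot
# approximation, assumed here rather than taken from this article.

def density_altitude_ft(pressure_altitude_ft, oat_c):
    isa_temp_c = 15.0 - 2.0 * (pressure_altitude_ft / 1000.0)  # standard lapse rate
    return pressure_altitude_ft + 120.0 * (oat_c - isa_temp_c)

# Example: a field at 5,000 ft pressure altitude on a 30 degree C day
# "performs" like an airport at roughly 8,000 ft.
print(round(density_altitude_ft(5000, 30)))  # -> 8000
```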
In India, the recommendations of the International Civil Aviation Organization (ICAO) are now followed more often. For landing, only an altitude correction is applied to the runway length, whereas for take-off, all types of correction are taken into consideration.
See also
Engineered materials arrestor system
Helipad
Highway strip
ICAO recommendations on use of the International System of Units
Instrument landing system (ILS)
List of airports
Pavement classification number (PCN)
Precision approach path indicator
Roll way, sometimes referred as a runway
Runway visual range
Tabletop runway
Visual approach slope indicator
References
External links
World Airport and Runway Map (ICAO official site)
United States Aeronautical Information Manual – Federal Aviation Administration (published yearly)
United States Airport/Facility Directory (d-AFD) – Federal Aviation Administration (published every 56 days)
United States Terminal Procedures Publication/Airport Diagrams (d-TPP) – Federal Aviation Administration (published every 28 days)
North American Powered Parachute Federation
Visual Aids Handbook – Civil Aviation Authority
Airport engineering
Airport infrastructure | Runway | [
"Engineering"
] | 5,601 | [
"Airport engineering",
"Airport infrastructure",
"Aerospace engineering"
] |
165,106 | https://en.wikipedia.org/wiki/Desk%20Set | Desk Set (released as His Other Woman in the UK) is a 1957 American romantic comedy film starring Spencer Tracy and Katharine Hepburn. Directed by Walter Lang, the picture's screenplay was written by Phoebe Ephron and Henry Ephron, adapted from the 1955 play of the same name by William Marchant.
Plot
Bunny Watson is a documentalist in charge of the reference library at the Federal Broadcasting Network in Midtown Manhattan. The reference librarians are responsible for researching facts and answering questions for the general public on all manner of topics, great and small. Bunny has been romantically involved for seven years with rising network executive Mike Cutler, but with no marriage in sight.
Methods engineer and efficiency expert Richard Sumner is the inventor of EMERAC ("Electromagnetic MEmory and Research Arithmetical Calculator"), nicknamed "Emmy," a powerful early generation computer (referred to then as an "electronic brain"). He is brought in to see how the library functions, and size it up for installation of one of his massive machines.
Despite Bunny's initial intransigence, Richard is surprised and intrigued to discover how stunningly capable and engaging she is. When her staff finds out the computer is coming, they jump to the conclusion they are being replaced.
After an innocuous but seemingly salacious situation that Mike walks in on at Bunny's apartment, he recognizes the older Richard has emerged as a romantic rival, and begins to want to commit to Bunny.
Bunny's fear of unemployment seems confirmed when she and everyone on her staff receive a pink "layoff" slip printed out by a similar new EMERAC already installed in payroll. But it turns out to have been a mistake – the machine fired everybody in the company, including the president. The network has kept everything hush-hush to avoid tipping off competitors that a merger was in the works. Rather than replace the research staff, "Emmy" was installed to help the employees cope with the extra work.
With the threat of displacement out of the way, Richard reveals his romantic interest to Bunny, but she believes that EMERAC will always be his first love. He denies it, but then Bunny puts him to the test, pressing the machine beyond its limits. Richard resists the urge to fix it as long as possible, but finally gives in and forces an emergency shutdown. Bunny then accepts his marriage proposal.
Cast
Production
In the play, Bunny Watson (played by Shirley Booth, who was originally intended for the film as well) had only brief, somewhat hostile interactions with Richard Sumner. Screenwriters Phoebe and Henry Ephron (the parents of Nora Ephron) built up the role of the efficiency expert and tailored the interactions between him and the researcher to fit Spencer Tracy and Katharine Hepburn.
The exterior shots of the "Federal Broadcasting Network" seen in the film are actually of the RCA Building (now known as the Comcast Building) at 30 Rockefeller Plaza in Rockefeller Center, the headquarters of NBC.
The character of Bunny Watson was based on Agnes E. Law, a real-life librarian at CBS who retired about a year before the film was released.
This film was the eighth screen pairing of Hepburn and Tracy, after a five-year respite since 1952's Pat and Mike, and was a first for Hepburn and Tracy in several ways: the first non-MGM film the two starred in together, their first color film, and their first CinemaScope film. Following Desk Set their last film together would be 1967's Guess Who's Coming to Dinner.
The computer referred to as EMERAC is a homoiophone metonym for ENIAC ("Electronic Numerical Integrator And Computer"), which was developed in the 1940s and was the first electronic general-purpose computer. Parts of the EMERAC computer, particularly the massive display of moving square lights, would later be seen in various 20th Century Fox productions including both the motion picture (1961) and TV (1964–1968) versions of Voyage to the Bottom of the Sea and the Edgar Hopper segment of the 1964 film What a Way to Go!.
The researchers furnish incorrect information about the career of baseball player Ty Cobb. Miss Costello claims his major league career lasted for 21 years, and that he played only for the Detroit Tigers. In fact, he played for 24 years—22 with Detroit, and his final two seasons with the Philadelphia Athletics.
There is a well-known "goof" in one scene. Mike gives Bunny an arrangement of white carnations, and she inserts one in his lapel's button-hole. At the end of the day, she and Richard leave the office. She is carrying the white carnation arrangement as they enter the elevator. As they exit the building, the carnations are pink.
Reception
Bosley Crowther, film critic of The New York Times, felt the film was "out of dramatic kilter", inasmuch as Hepburn was simply too "formidable" to convincingly play someone "scared by a machine", resulting in "not much tension in this thoroughly lighthearted film".
The New York Post review was mixed: "There are such sops to sentiment as Miss Hepburn's willingness to be dragged altarwards by the young head of her department, Gig Young, who kindly lets her do her most impressive work, and a growing understanding between Hepburn and the rather remote and intellectually Olympian Tracy....Running true to form, the sex narrative follows a predictable pattern, rewarding honest virtue and slapping down the unworthy, and the other, scientific trail is permitted a twist that may surprise any who have found themselves emotionally involved in that timely problem of technological unemployment....'Desk Set,' let us conclude, is a shining piece of machinery brought to a high polish, and, delivered with appropriate performances, flourishes. Affection, though, it cannot inspire."
TIME magazine wrote: "At long last, somebody has a kind word for the girls in the research department. The word: one of those electronic brains could do the job much better and with less back chat—and what's more, it would free the girls' energies for the more important job of getting a man....Desk Set has been expanded [from the play] by a sizable pigeonhole, in which [Hepburn and Tracy] intermittently bill and coo....On the whole, the film compares favorably with the play....And though Actress Hepburn tends to wallow in the wake of Shirley Booth...she never quite sinks in the comic scenes, and in the romantic ones she is light enough to ride the champagne splashes of emotion as if she were going over Niagara in a barrel. Spencer Tracy has one wonderful slapstick scene, and Gig Young does very well with a comic style for which he is much beholden to William Holden."
The Philadelphia Inquirer was critical: "The middle-aged excesses of Miss Hepburn and Tracy...leave a good deal to be desired. Equipped with an insubstantial vehicle, bogged down by surprisingly flat-footed direction...the stars come close to being embarrassing as they bound through roles involving them in office nonsense about a mechanical brain, a bibulous Christmas party, an innocent, but suspecting, dinner in negligee Katie's rain-bound flat....Marchant's foolish little comedy gains nothing via the Phoebe and Henry Ephron adaptation. Long recitations from "Hiawatha" and "The Curfew Shall Not Ring Tonight," plus question-and-answer games...in addition to the repetitiousness of the central idea...turn 'Desk Set's' 104 minutes into an endurance contest for cast and audience."
Today the film is seen far more favorably, with the sharpness of the script praised in particular. It has achieved a rare 100% rating on Rotten Tomatoes based on 22 reviews, with a weighted average of 6.78/10. The site's consensus reads: "Desk Set reunites one of cinema's most well-loved pairings for a solidly crafted romantic comedy that charmingly encapsulates their timeless appeal". Dennis Schwartz of Dennis Schwartz Movie Reviews called it an "inconsequential sex comedy," but contended "the star performers are better than the material they are given to work with" and that "the comedy was so cheerful and the banter between the two was so refreshingly smart that it was easy to forgive this bauble for not being as rich as many of the legendary duo's other films together."
Legacy
A Canadian radio program, Bunny Watson, was named for and inspired by Hepburn's character.
See also
List of American films of 1957
References
External links
Desk Set at AllMovie
1957 films
1957 romantic comedy films
20th Century Fox films
American films based on plays
American romantic comedy films
Films about computing
Films about technological impact
Films directed by Walter Lang
Films scored by Cyril J. Mockridge
Films set around New Year
Films set in libraries
Films set in Manhattan
Workplace comedy films
CinemaScope films
1950s English-language films
1950s American films
Films about librarians
English-language romantic comedy films | Desk Set | [
"Technology"
] | 1,878 | [
"Works about computing",
"Films about computing"
] |
165,123 | https://en.wikipedia.org/wiki/Hannes%20Alfv%C3%A9n | Hannes Olof Gösta Alfvén (; 30 May 1908 – 2 April 1995) was a Swedish electrical engineer, plasma physicist and winner of the 1970 Nobel Prize in Physics for his work on magnetohydrodynamics (MHD). He described the class of MHD waves now known as Alfvén waves. He was originally trained as an electrical power engineer and later moved to research and teaching in the fields of plasma physics and electrical engineering. Alfvén made many contributions to plasma physics, including theories describing the behavior of aurorae, the Van Allen radiation belts, the effect of magnetic storms on the Earth's magnetic field, the terrestrial magnetosphere, and the dynamics of plasmas in the Milky Way galaxy.
Education
Alfvén received his PhD from the University of Uppsala in 1934. His thesis was titled "Investigations of High-frequency Electromagnetic Waves."
Early years
In 1934, Alfvén taught physics at both the University of Uppsala and the Nobel Institute for Physics (later renamed the Manne Siegbahn Institute of Physics) in Stockholm, Sweden. In 1940, he became professor of electromagnetic theory and electrical measurements at the Royal Institute of Technology in Stockholm. In 1945, he acquired the nonappointive position of Chair of Electronics. His title was changed to Chair of Plasma Physics in 1963. From 1954 to 1955, Alfvén was a Fulbright Scholar at the University of Maryland, College Park. In 1967, after leaving Sweden and spending time in the Soviet Union, he moved to the United States. Alfvén worked in the departments of electrical engineering at both the University of California, San Diego and the University of Southern California.
Later years
In 1991, Alfvén retired as professor of electrical engineering at the University of California, San Diego and professor of plasma physics at the Royal Institute of Technology in Stockholm.
Alfvén spent his later adult life alternating between California and Sweden. He died at the age of 86.
Research
In 1937, Alfvén argued that if plasma pervaded the universe, it could then carry electric currents capable of generating a galactic magnetic field. After winning the Nobel Prize for his works in magnetohydrodynamics, he emphasized that:
In order to understand the phenomena in a certain plasma region, it is necessary to map not only the magnetic but also the electric field and the electric currents. Space is filled with a network of currents which transfer energy and momentum over large or very large distances. The currents often pinch to filamentary or surface currents. The latter are likely to give space, as also interstellar and intergalactic space, a cellular structure.
His theoretical work on field-aligned electric currents in the aurora (based on earlier work by Kristian Birkeland) was confirmed in 1967, these currents now being known as Birkeland currents.
British scientist Sydney Chapman was a strong critic of Alfvén (S. Chapman and J. Bartels, Geomagnetism, Vols. 1 and 2, Clarendon Press, Oxford, 1940). Many physicists regarded Alfvén as espousing unorthodox opinions, R. H. Stuewer noting that "... he remained an embittered outsider, winning little respect from other scientists even after he received the Nobel Prize...", and he was often forced to publish his papers in obscure journals. Alfvén recalled:
When I describe [plasma phenomena] according to this formalism most referees do not understand what I say and turn down my papers. With the referee system which rules US science today, this means that my papers are rarely accepted by the leading US journals.
Alfvén played a central role in the development of:
Plasma physics
Charged particle beams
Interplanetary medium
Magnetospheric physics
Magnetohydrodynamics
Solar phenomena investigation (such as the solar wind)
Aurorae science
In 1939, Alfvén proposed the theory of magnetic storms and auroras and the theory of plasma dynamics in the Earth's magnetosphere. This was the paper rejected by the U.S. journal Terrestrial Magnetism and Atmospheric Electricity.
Applications of Alfvén's research in space science include:
Van Allen radiation belt theory
Reduction of the Earth's magnetic field during magnetic storms
Magnetosphere (protective plasma covering the Earth)
Formation of comet tails
Formation of the Solar System
Dynamics of plasmas in the galaxy
Physical cosmology
Alfvén's views followed those of the founder of magnetospheric physics, Kristian Birkeland. At the end of the nineteenth century, Birkeland proposed (backed by extensive data) that electric currents flowing down along the Earth's magnetic fields into the atmosphere caused the aurora and polar magnetic disturbances.
Areas of technology benefiting from Alfvén's contributions include:
Particle accelerators
Controlled thermonuclear fusion
Hypersonic flight
Rocket propulsion
Reentry braking of space vehicles
Contributions to astrophysics:
Galactic magnetic field (1937)
Identified nonthermal synchrotron radiation from astronomical sources (1950)
Alfvén waves (low frequency hydromagnetic plasma oscillations) are named in his honor, and propagate at the Alfvén speed. Many of his theories about the solar system were verified as late as the 1980s through external measurements of cometary and planetary magnetospheres. However, Alfvén himself noted that astrophysical textbooks poorly represented known plasma phenomena:
A study of how a number of the most used textbooks in astrophysics treat important concepts such as double layers, critical velocity, pinch effects, and circuits is made. It is found that students using these textbooks remain essentially ignorant of even the existence of these concepts, despite the fact that some of them have been well known for half a century (e.g., double layers, Langmuir, 1929; pinch effect, Bennet, 1934).
Alfvén reported that of the 17 most used textbooks on astrophysics, none mentioned the pinch effect, none mentioned critical ionization velocity, only two mentioned circuits, and three mentioned double layers.
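For reference, the Alfvén speed mentioned a few lines above has a simple closed form, v_A = B / sqrt(mu0 * rho); the short sketch below evaluates it for typical solar-wind conditions near Earth, with the field strength and proton density chosen as illustrative assumptions.

```python
# Alfven speed v_A = B / sqrt(mu0 * rho), evaluated for rough solar-wind
# conditions near Earth. The field strength and proton density below are
# illustrative assumptions, not values from this article.
import math

MU0 = 4e-7 * math.pi          # permeability of free space, H/m
M_PROTON = 1.67e-27           # proton mass, kg

def alfven_speed(b_tesla, n_per_m3):
    rho = n_per_m3 * M_PROTON             # mass density of a proton plasma
    return b_tesla / math.sqrt(MU0 * rho)

# ~5 nT field and ~5 protons per cubic centimetre give a few tens of km/s.
print(round(alfven_speed(5e-9, 5e6) / 1000))  # -> about 49 km/s
```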
Alfvén believed the problem with the Big Bang was that astrophysicists tried to extrapolate the origin of the universe from mathematical theories developed on the blackboard, rather than starting from known observable phenomena. He also considered the Big Bang to be a myth devised to explain creation. Alfvén and colleagues proposed the Alfvén–Klein model as an alternative cosmological theory to both the Big Bang and steady state theory cosmologies.
Personal life
Alfvén was married for 67 years to his wife Kerstin (1910–1992). They raised five children, one boy and four girls. Their son became a physician, while one daughter became a writer and another a lawyer in Sweden. The writer was Inger Alfvén and is well known for her work in Sweden. The composer Hugo Alfvén was Hannes Alfvén's uncle.
Alfvén studied the history of science, oriental philosophy, and religion. He was irreligious and critical of religion. He spoke Swedish, English, German, French, and Russian, and some Spanish and Chinese. He expressed great concern about the difficulties of permanent high-level radioactive waste management. Alfvén was also interested in problems in cosmology and all aspects of auroral physics, and used Schröder's well-known book on the aurora, Das Phänomen des Polarlichts. Letters of Alfvén, Treder, and Schröder were published on the occasion of Treder's 70th birthday (Wilfried Schröder and Hans Jürgen Treder, The Earth and the Universe: A Festschrift in Honour of Hans-Jürgen Treder, Bremen-Rönnebeck: Science Editions, 1993). The relationships between Hans-Jürgen Treder, Hannes Alfvén and Wilfried Schröder were discussed in detail by Schröder in his publications.
Alfvén died on 2 April 1995 in Djursholm, aged 86.
Awards and honours
The Hannes Alfvén Prize, awarded annually by the European Physical Society for outstanding contributions in plasma physics, is named after him. The asteroid 1778 Alfvén is named in his honour.
Awards
Gold Medal of the Royal Astronomical Society (1967)
Nobel Prize in Physics (1970) for his work on magnetohydrodynamics
Franklin Medal of the Franklin Institute (1971)
Lomonosov Gold Medal of the USSR Academy of Sciences (1971)
Elected a Foreign Member of the Royal Society (ForMemRS) in 1980
William Bowie Medal of the American Geophysical Union (1988) for his work on comets and plasmas in the Solar System
Member of Royal Swedish Academy of Sciences
Member of Royal Swedish Academy of Engineering Sciences
Life fellows of the Institute of Electrical and Electronics Engineers
Member of European Physical Society
Foreign Honorary Member of the American Academy of Arts and Sciences (1962)
Member of the Yugoslav Academy of Sciences
Contributor to the Pugwash Conferences on Science and World Affairs
Member of the International Academy of Science
Member of the Indian National Science Academy
Elected member of the American Philosophical Society (1971)
Alfvén was one of the few scientists who was a foreign member of both the United States and Soviet Academies of Sciences.
Selected bibliography
For full list of publications see.
Books
Cosmical Electrodynamics, International Series of Monographs on Physics, Oxford: Clarendon Press, 1950. (See also 2nd Ed. 1963, co-authored with Carl-Gunne Fälthammar.)
Worlds-Antiworlds: Antimatter in Cosmology (1966).
The Great Computer: A Vision (1968) (a political-scientific satire under the pen name Olof Johannesson; publ. Gollancz).
Atom, Man, and the Universe: A Long Chain of Complications, W.H. Freeman and Company, 1969.
Living on the Third Planet, authored with Kerstin Alfvén, W.H. Freeman and Company, 1972.
Cosmic Plasma, Astrophysics and Space Science Library, Vol. 82 (1981) Springer Verlag.
Schröder, Wilfried, and Hans Jürgen Treder. 2007. Theoretical physics and geophysics: Recollections of Hans-Jürgen Treder (1928–2006). Potsdam: Science Editions.
Articles
On the cosmogony of the solar system I (1942) | Part II | Part III
Interplanetary Magnetic Field (1958)
On the Origin of Cosmic Magnetic Fields (1961)
On the Filamentary Structure of the Solar Corona (1963)
Currents in the Solar Atmosphere and a Theory of Solar Flares (1967)
On the Importance of Electric Fields in the Magnetosphere and Interplanetary Space (1967)
Jet Streams in Space (1970)
Evolution of the Solar System (1976) with Gustaf Arrhenius (NASA book)
Double radio sources and the new approach to cosmical plasma physics (1978) (PDF)
Interstellar clouds and the formation of stars with Per Carlqvist (1978) (PDF)
Energy source of the solar wind with Per Carlqvist (1980) (PDF) A direct transfer of energy from photospheric activity to the solar wind by means of electric currents is discussed.
Electromagnetic Effects and the Structure of the Saturnian Rings (1981) (PDF)
A three-ring circuit model of the magnetosphere with Whipple, E. C. and Jr.; McIlwain (1981) (PDF)
The Voyager 1/Saturn encounter and the cosmogonic shadow effect (1981) (PDF)
Origin, evolution and present structure of the asteroid region (1983) (PDF)
On hierarchical cosmology (1983) (PDF) Progress in lab studies of plasmas and on their methods of transferring the results to cosmic conditions.
Solar system history as recorded in the Saturnian ring structure (1983) (PDF)
Cosmology – Myth or science? (1984) (PDF)
Cosmogony as an extrapolation of magnetospheric research (1984) (PDF)
See also
Alfvén resonator
Astrophysical plasma
Heliospheric current sheet
Magnetic reconnection
Magnetohydrodynamic turbulence
Magnetosonic wave
Marklund convection
Plasma parameters
Plasma stability
Solar wind
Spheromak
References
External links
Hannes Alfvén biography
including the Nobel Lecture, December 11, 1970: Plasma Physics, Space Research and the Origin of the Solar System
Hannes Alfvén biography (Royal Institute of Technology in Stockholm, Sweden)
Hannes Alfvén Biographical Memoirs (Proceedings of the American Philosophical Society)
Papers of Hannes Olof Gosta Alfvén
Hannes Alfvén Medal – awarded for outstanding scientific contributions towards the understanding of plasma processes in the Solar System and other cosmical plasma environments
Timeline of Nobel Prize Winners: Hannes Olof Gosta Alfvén
Hannes Alfvén Papers (1945–1991) in the Mandeville Special Collections Library.
QJRAS Obituary 37 (1996) 259
Hannes Alfvén Birth Centennial 30 May 2008 (2008)
1908 births
1995 deaths
Critics of religions
People from Norrköping
Fellows of the IEEE
Foreign members of the Royal Society
Foreign associates of the National Academy of Sciences
Nobel laureates in Physics
Religious skeptics
20th-century Swedish astronomers
Fluid dynamicists
Swedish pacifists
20th-century Swedish physicists
Swedish science fiction writers
Swedish electrical engineers
Uppsala University alumni
Academic staff of the KTH Royal Institute of Technology
Swedish Nobel laureates
Recipients of the Gold Medal of the Royal Astronomical Society
Recipients of the Lomonosov Gold Medal
Fellows of the American Academy of Arts and Sciences
Foreign members of the USSR Academy of Sciences
Foreign members of the Russian Academy of Sciences
Foreign fellows of the Indian National Science Academy
University of California, San Diego faculty
University of Maryland, College Park faculty
Swedish plasma physicists
Members of the American Philosophical Society
Recipients of Franklin Medal | Hannes Alfvén | [
"Chemistry",
"Technology"
] | 2,769 | [
"Science and technology awards",
"Fluid dynamicists",
"Recipients of the Lomonosov Gold Medal",
"Fluid dynamics"
] |
165,128 | https://en.wikipedia.org/wiki/Royal%20we | The royal we, majestic plural (), or royal plural, is the use of a plural pronoun (or corresponding plural-inflected verb forms) used by a single person who is a monarch or holds a high office to refer to themself. A more general term for the use of a we, us, or our to refer to oneself is nosism.
Example
After the United Kingdom had been asked to arbitrate a boundary dispute between Argentina and Chile, King Edward VII issued the adjudication of the requested arbitration, known as the Cordillera of the Andes Boundary Case. The sentence following the preamble of the award begins as follows:
In this quotation, underlining has been added to the words that exemplify the use of the majestic plural.
Western usage
The royal we is commonly employed by a person of high office, such as a monarch or other type of sovereign. It is also used in certain formal contexts by bishops and university rectors. William Longchamp is credited with its introduction to England in the late 12th century, following the practice of the Chancery of Apostolic Briefs.
In the public situations in which it is used, the monarch or other dignitary is typically speaking not only in their own personal capacity but also in an official capacity as leader of a nation or institution. In the grammar of several languages, plural forms tend to be perceived as deferential and more polite than singular forms.
In diplomatic letters, such as letters of credence, it is customary for monarchs to use the singular first-person (I, me, my) when writing to other monarchs, while the majestic plural is used in royal letters to a president of a republic.
In Commonwealth realms, the sovereign discharges their commissions to ranked military officers in the capacity of we. Many official documents published in the name of the monarch are also presented with royal we, such as letters patent, proclamations, etc.
Popes have historically used the we as part of their formal speech, for example as used in , , and .
Since Pope John Paul I, however, the royal we has been dropped by popes in public speech, although formal documents may have retained it. Recent important papal documents still use the majestic plural in the original Latin but are given with the singular I in their official English translations.
In 1989, Margaret Thatcher, then Prime Minister of the United Kingdom, was met with disdain by some in the press for using the royal we when announcing to reporters that she had become a grandmother in her "We have become a grandmother" statement.
Non-Western usage
Several prominent epithets of the Bible describe the Hebrew God in plural terms: , , and . Many Christian scholars, including the post-apostolic leaders and Augustine of Hippo, have seen the use of the plural and grammatically singular verb forms as support for the doctrine of the Trinity. The earliest known use of this poetic device is somewhere in the 4th century AD, during the Byzantine period; nevertheless, scholars such as Mircea Eliade, Wilhelm Gesenius, and Aaron Ember claim that Elohim is a form of majestic plural in the Torah.
In Imperial China and every monarchy within its cultural sphere (including Japan, Korea, Ryukyu, and Vietnam), the majestic imperial pronoun was expressed by the character (, ). This was in fact the former Chinese first-person pronoun (that is, ). However, following his unification of China, the emperor Shi Huangdi arrogated it entirely for his personal use. Previously, in the Chinese cultural sphere, the use of the first-person pronoun in formal courtly language was already uncommon, with the nobility using the self-deprecating term () for self-reference, while their subjects referred to themselves as (, original meaning or ), with an indirect deferential reference like (), or by employing a deferential epithet (such as the adjective (), ). While this practice did not affect the non-Chinese countries as much since their variants of () and other terms were generally imported loanwords, the practice of polite avoidance of pronouns nevertheless spread throughout East Asia. This still persists, except in China, where, following the May Fourth Movement and the Communist Party victory in the Chinese Civil War, the use of the first-person pronoun , which dates to the Shang dynasty oracle inscriptions as a plural possessive pronoun, is common.
In Hindustani and other Indo-Aryan languages, the majestic plural is a common way for elder speakers to refer to themselves, and also for persons of higher social rank to refer to themselves. In certain communities, the first-person singular () may be dispensed with altogether for self-reference and the plural nosism used uniformly.
In Islam, several plural word forms are used to refer to Allah.
In Malaysia, before the Yang di-Pertuan Agong takes office, he will first take an oath, in which the Malay word for 'we', , would be the pronoun used. This is because His Majesty represents the other Malay Rulers of Malaysia during his reign as the Yang di-Pertuan Agong.
See also
Generic you
Royal one
Singular they
T–V distinction
References
Personal pronouns
Sociolinguistics
Grammatical number
Etiquette | Royal we | [
"Biology"
] | 1,077 | [
"Etiquette",
"Behavior",
"Human behavior"
] |
165,146 | https://en.wikipedia.org/wiki/Inductance | Inductance is the tendency of an electrical conductor to oppose a change in the electric current flowing through it. The electric current produces a magnetic field around the conductor. The magnetic field strength depends on the magnitude of the electric current, and therefore follows any changes in the magnitude of the current. From Faraday's law of induction, any change in magnetic field through a circuit induces an electromotive force (EMF) (voltage) in the conductors, a process known as electromagnetic induction. This induced voltage created by the changing current has the effect of opposing the change in current. This is stated by Lenz's law, and the voltage is called back EMF.
Inductance is defined as the ratio of the induced voltage to the rate of change of current causing it. It is a proportionality constant that depends on the geometry of circuit conductors (e.g., cross-section area and length) and the magnetic permeability of the conductor and nearby materials. An electronic component designed to add inductance to a circuit is called an inductor. It typically consists of a coil or helix of wire.
The term inductance was coined by Oliver Heaviside in May 1884, as a convenient way to refer to "coefficient of self-induction". It is customary to use the symbol for inductance, in honour of the physicist Heinrich Lenz. In the SI system, the unit of inductance is the henry (H), which is the amount of inductance that causes a voltage of one volt, when the current is changing at a rate of one ampere per second. The unit is named for Joseph Henry, who discovered inductance independently of Faraday.
History
The history of electromagnetic induction, a facet of electromagnetism, began with observations of the ancients: electric charge or static electricity (rubbing silk on amber), electric current (lightning), and magnetic attraction (lodestone). Understanding the unity of these forces of nature, and the scientific theory of electromagnetism was initiated and achieved during the 19th century.
Electromagnetic induction was first described by Michael Faraday in 1831. In Faraday's experiment, he wrapped two wires around opposite sides of an iron ring. He expected that, when current started to flow in one wire, a sort of wave would travel through the ring and cause some electrical effect on the opposite side. Using a galvanometer, he observed a transient current flow in the second coil of wire each time that a battery was connected or disconnected from the first coil. This current was induced by the change in magnetic flux that occurred when the battery was connected and disconnected. Faraday found several other manifestations of electromagnetic induction. For example, he saw transient currents when he quickly slid a bar magnet in and out of a coil of wires, and he generated a steady (DC) current by rotating a copper disk near the bar magnet with a sliding electrical lead ("Faraday's disk").
Source of inductance
A current flowing through a conductor generates a magnetic field around the conductor, which is described by Ampere's circuital law. The total magnetic flux through a circuit is equal to the product of the perpendicular component of the magnetic flux density and the area of the surface spanning the current path. If the current varies, the magnetic flux through the circuit changes. By Faraday's law of induction, any change in flux through a circuit induces an electromotive force (EMF) in the circuit, proportional to the rate of change of flux
The negative sign in the equation indicates that the induced voltage is in a direction which opposes the change in current that created it; this is called Lenz's law. The potential is therefore called a back EMF. If the current is increasing, the voltage is positive at the end of the conductor through which the current enters and negative at the end through which it leaves, tending to reduce the current. If the current is decreasing, the voltage is positive at the end through which the current leaves the conductor, tending to maintain the current. Self-inductance, usually just called inductance, is the ratio between the induced voltage and the rate of change of the current
Thus, inductance is a property of a conductor or circuit, due to its magnetic field, which tends to oppose changes in current through the circuit. The unit of inductance in the SI system is the henry (H), named after Joseph Henry, which is the amount of inductance that generates a voltage of one volt when the current is changing at a rate of one ampere per second.
All conductors have some inductance, which may have either desirable or detrimental effects in practical electrical devices. The inductance of a circuit depends on the geometry of the current path, and on the magnetic permeability of nearby materials; ferromagnetic materials with a higher permeability like iron near a conductor tend to increase the magnetic field and inductance. Any alteration to a circuit which increases the flux (total magnetic field) through the circuit produced by a given current increases the inductance, because inductance is also equal to the ratio of magnetic flux to current
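In standard notation, the relations described in the last few paragraphs (Faraday's law, the defining ratio for self-inductance, and the equivalent flux-to-current ratio) take the conventional textbook form, writing Phi_B for the magnetic flux through the circuit and N for the number of turns:

```latex
\[
  \mathcal{E} = -\frac{d\Phi_B}{dt},
  \qquad
  v(t) = L\,\frac{di}{dt},
  \qquad
  L = \frac{N\,\Phi_B}{i}
\]
```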
An inductor is an electrical component consisting of a conductor shaped to increase the magnetic flux, to add inductance to a circuit. Typically it consists of a wire wound into a coil or helix. A coiled wire has a higher inductance than a straight wire of the same length because the magnetic field lines pass through the circuit multiple times, giving it multiple flux linkages. The inductance is proportional to the square of the number of turns in the coil, assuming full flux linkage.
The inductance of a coil can be increased by placing a magnetic core of ferromagnetic material in the hole in the center. The magnetic field of the coil magnetizes the material of the core, aligning its magnetic domains, and the magnetic field of the core adds to that of the coil, increasing the flux through the coil. This is called a ferromagnetic core inductor. A magnetic core can increase the inductance of a coil by thousands of times.
If multiple electric circuits are located close to each other, the magnetic field of one can pass through the other; in this case the circuits are said to be inductively coupled. Due to Faraday's law of induction, a change in current in one circuit can cause a change in magnetic flux in another circuit and thus induce a voltage in that circuit. The concept of inductance can be generalized in this case by defining the mutual inductance of a pair of circuits as the ratio of the voltage induced in one circuit to the rate of change of current in the other. This is the principle behind a transformer. The property describing the effect of one conductor on itself is more precisely called self-inductance, and the property describing the effect of a changing current in one conductor on nearby conductors is called mutual inductance.
Self-inductance and magnetic energy
If the current through a conductor with inductance is increasing, a voltage is induced across the conductor with a polarity that opposes the current—in addition to any voltage drop caused by the conductor's resistance. The charges flowing through the circuit lose potential energy. The energy from the external circuit required to overcome this "potential hill" is stored in the increased magnetic field around the conductor. Therefore, an inductor stores energy in its magnetic field. At any given time the power flowing into the magnetic field, which is equal to the rate of change of the stored energy, is the product of the current and voltage across the conductor
From (1) above
When there is no current, there is no magnetic field and the stored energy is zero. Neglecting resistive losses, the energy (measured in joules, in SI) stored by an inductance with a current through it is equal to the amount of work required to establish the current through the inductance from zero, and therefore the magnetic field. This is given by:
If the inductance is constant over the current range, the stored energy is
Inductance is therefore also proportional to the energy stored in the magnetic field for a given current. This energy is stored as long as the current remains constant. If the current decreases, the magnetic field decreases, inducing a voltage in the conductor in the opposite direction, negative at the end through which current enters and positive at the end through which it leaves. This returns stored magnetic energy to the external circuit.
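A conventional way to write the stored-energy relations sketched above is the following; the integral gives the work done in raising the current from zero to I, and the closed form holds when the inductance is constant over that range:

```latex
\[
  W = \int_0^{I} L(i)\, i \, di ,
  \qquad
  W = \tfrac{1}{2} L I^{2} \quad (L \text{ constant})
\]
```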
If ferromagnetic materials are located near the conductor, such as in an inductor with a magnetic core, the constant inductance equation above is only valid for linear regions of the magnetic flux, at currents below the level at which the ferromagnetic material saturates, where the inductance is approximately constant. If the magnetic field in the inductor approaches the level at which the core saturates, the inductance begins to change with current, and the integral equation must be used.
Inductive reactance
When a sinusoidal alternating current (AC) is passing through a linear inductance, the induced back-EMF is also sinusoidal. If the current through the inductance is , from (1) above the voltage across it is
where is the amplitude (peak value) of the sinusoidal current in amperes, is the angular frequency of the alternating current, with being its frequency in hertz, and is the inductance.
Thus the amplitude (peak value) of the voltage across the inductance is
Inductive reactance is the opposition of an inductor to an alternating current. It is defined analogously to electrical resistance in a resistor, as the ratio of the amplitude (peak value) of the alternating voltage to current in the component
Reactance has units of ohms. It can be seen that the inductive reactance of an inductor increases proportionally with frequency, so an inductor conducts less current for a given applied AC voltage as the frequency increases. Because the induced voltage is greatest when the current is changing most rapidly, the voltage and current waveforms are out of phase; the voltage peaks occur earlier in each cycle than the current peaks. The phase difference between the current and the induced voltage is radians or 90 degrees, showing that in an ideal inductor the current lags the voltage by 90°.
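For a sinusoidal current i(t) = I_p sin(omega t), the standard expressions behind this paragraph are:

```latex
\[
  v(t) = L\,\frac{d}{dt}\bigl[I_\mathrm{p}\sin(\omega t)\bigr]
       = \omega L\, I_\mathrm{p}\cos(\omega t),
  \qquad
  V_\mathrm{p} = \omega L\, I_\mathrm{p},
  \qquad
  X_L = \frac{V_\mathrm{p}}{I_\mathrm{p}} = \omega L = 2\pi f L
\]
```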
Calculating inductance
In the most general case, inductance can be calculated from Maxwell's equations. Many important cases can be solved using simplifications. Where high frequency currents are considered, with skin effect, the surface current densities and magnetic field may be obtained by solving the Laplace equation. Where the conductors are thin wires, self-inductance still depends on the wire radius and the distribution of the current in the wire. This current distribution is approximately constant (on the surface or in the volume of the wire) for a wire radius much smaller than other length scales.
Inductance of a straight single wire
As a practical matter, longer wires have more inductance, and thicker wires have less, analogous to their electrical resistance (although the relationships are not linear, and are different in kind from the relationships that length and diameter bear to resistance).
Separating the wire from the other parts of the circuit introduces some unavoidable error in any formulas' results. These inductances are often referred to as “partial inductances”, in part to encourage consideration of the other contributions to whole-circuit inductance which are omitted.
Practical formulas
For derivation of the formulas below, see Rosa (1908).
The total low frequency inductance (interior plus exterior) of a straight wire is:
where
is the "low-frequency" or DC inductance in nanohenry (nH or 10−9H),
is the length of the wire in meters,
is the radius of the wire in meters (hence a very small decimal number),
the constant is the permeability of free space, commonly called , divided by ; in the absence of magnetically reactive insulation the value 200 is exact when using the classical definition of μ0 = , and correct to 7 decimal places when using the 2019-redefined SI value of μ0 = .
The constant 0.75 is just one parameter value among several; different frequency ranges, different shapes, or extremely long wire lengths require a slightly different constant (see below). This result is based on the assumption that the radius is much less than the length which is the common case for wires and rods. Disks or thick cylinders have slightly different formulas.
For sufficiently high frequencies, skin effects cause the interior currents to vanish, leaving only the currents on the surface of the conductor; the inductance for alternating current is then given by a very similar formula:
where the variables and are the same as above; note the changed constant term now 1, from 0.75 above.
In an example from everyday experience, just one of the conductors of a lamp cord long, made of 18 AWG wire, would only have an inductance of about if stretched out straight.
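A small numerical sketch of the two straight-wire formulas discussed above (low-frequency constant 0.75, high-frequency constant 1, result in nanohenries with lengths in metres) is shown below; the example wire dimensions are illustrative assumptions.

```python
# Straight round wire inductance, per the formulas described above:
# L [nH] = 200 * l * (ln(2*l/r) - C), with l and r in metres and
# C = 0.75 at low frequency (current fills the wire) or C = 1.0 at high
# frequency (skin effect confines the current to the surface).
import math

def wire_inductance_nH(length_m, radius_m, high_frequency=False):
    c = 1.0 if high_frequency else 0.75
    return 200.0 * length_m * (math.log(2.0 * length_m / radius_m) - c)

# Example (illustrative dimensions): 10 cm of 1 mm-radius wire.
print(round(wire_inductance_nH(0.10, 0.001)))        # -> ~91 nH at DC
print(round(wire_inductance_nH(0.10, 0.001, True)))  # -> ~86 nH at high frequency
```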
Mutual inductance of two parallel straight wires
There are two cases to consider:
Current travels in the same direction in each wire, and
current travels in opposing directions in the wires.
Currents in the wires need not be equal, though they often are, as in the case of a complete circuit, where one wire is the source and the other the return.
Mutual inductance of two wire loops
This is the generalized case of the paradigmatic two-loop cylindrical coil carrying a uniform low frequency current; the loops are independent closed circuits that can have different lengths, any orientation in space, and carry different currents. Nonetheless, the error terms, which are not included in the integral are only small if the geometries of the loops are mostly smooth and convex: They must not have too many kinks, sharp corners, coils, crossovers, parallel segments, concave cavities, or other topologically "close" deformations. A necessary predicate for the reduction of the 3-dimensional manifold integration formula to a double curve integral is that the current paths be filamentary circuits, i.e. thin wires where the radius of the wire is negligible compared to its length.
The mutual inductance by a filamentary circuit on a filamentary circuit is given by the double integral Neumann formula
where
and are the curves followed by the wires.
is the permeability of free space ()
is a small increment of the wire in circuit
is the position of in space
is a small increment of the wire in circuit
is the position of in space.
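Putting the definitions just listed together, the Neumann double integral takes its standard form, writing C_i and C_j for the two curves and x_i, x_j for points on them:

```latex
\[
  M_{ij} = \frac{\mu_0}{4\pi}
           \oint_{C_i}\oint_{C_j}
           \frac{d\mathbf{x}_i \cdot d\mathbf{x}_j}{\left|\mathbf{x}_i - \mathbf{x}_j\right|}
\]
```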
Derivation
where
is the current through the th wire, this current creates the magnetic flux through the th surface
is the magnetic flux through the ith surface due to the electrical circuit outlined by
where
Stokes' theorem has been used for the 3rd equality step. For the last equality step, we used the retarded potential expression for and we ignore the effect of the retarded time (assuming the geometry of the circuits is small enough compared to the wavelength of the current they carry). It is actually an approximation step, and is valid only for local circuits made of thin wires.
Self-inductance of a wire loop
Formally, the self-inductance of a wire loop would be given by the above equation with However, here becomes infinite, leading to a logarithmically divergent integral.
This necessitates taking the finite wire radius and the distribution of the current in the wire into account. There remains the contribution from the integral over all points and a correction term,
where
and are distances along the curves and respectively
is the radius of the wire
is the length of the wire
is a constant that depends on the distribution of the current in the wire:
when the current flows on the surface of the wire (total skin effect),
when the current is evenly over the cross-section of the wire.
is an error term whose size depends on the curve of the loop:
when the loop has sharp corners, and
when it is a smooth curve.
Both are small when the wire is long compared to its radius.
Inductance of a solenoid
A solenoid is a long, thin coil; i.e., a coil whose length is much greater than its diameter. Under these conditions, and without any magnetic material used, the magnetic flux density within the coil is practically constant and is given by
where is the magnetic constant, the number of turns, the current and the length of the coil. Ignoring end effects, the total magnetic flux through the coil is obtained by multiplying the flux density by the cross-section area
When this is combined with the definition of inductance it follows that the inductance of a solenoid is given by:
Therefore, for air-core coils, inductance is a function of coil geometry and number of turns, and is independent of current.
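In symbols, the chain of steps described for the solenoid is the standard one, with N turns, length l, cross-section area A and current i:

```latex
\[
  B = \mu_0 \frac{N i}{l},
  \qquad
  N\Phi = N B A = \mu_0 \frac{N^{2} i A}{l},
  \qquad
  L = \frac{N\Phi}{i} = \frac{\mu_0 N^{2} A}{l}
\]
```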
Inductance of a coaxial cable
Let the inner conductor have radius and permeability , let the dielectric between the inner and outer conductor have permeability , and let the outer conductor have inner radius , outer radius , and permeability . However, for a typical coaxial line application, we are interested in passing (non-DC) signals at frequencies for which the resistive skin effect cannot be neglected. In most cases, the inner and outer conductor terms are negligible, in which case one may approximate
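Writing a for the inner-conductor radius, b for the inner radius of the outer conductor, mu_d for the dielectric permeability and l for the line length (symbols assumed here for illustration), the usual high-frequency approximation is:

```latex
\[
  L \approx \frac{\mu_d\, l}{2\pi} \ln\frac{b}{a}
\]
```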
Inductance of multilayer coils
Most practical air-core inductors are multilayer cylindrical coils with square cross-sections to minimize average distance between turns (circular cross-sections would be better but harder to form).
Magnetic cores
Many inductors include a magnetic core at the center of or partly surrounding the winding. Over a large enough range these exhibit a nonlinear permeability with effects such as magnetic saturation. Saturation makes the resulting inductance a function of the applied current.
The secant or large-signal inductance is used in flux calculations. It is defined as:
The differential or small-signal inductance, on the other hand, is used in calculating voltage. It is defined as:
The circuit voltage for a nonlinear inductor is obtained via the differential inductance as shown by Faraday's Law and the chain rule of calculus.
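With lambda(i) denoting the flux linkage of the winding (notation assumed), the two inductances and the resulting voltage law can be written as:

```latex
\[
  L_\mathrm{sec}(i) = \frac{\lambda(i)}{i},
  \qquad
  L_\mathrm{diff}(i) = \frac{d\lambda}{di},
  \qquad
  v(t) = \frac{d\lambda}{dt}
       = \frac{d\lambda}{di}\,\frac{di}{dt}
       = L_\mathrm{diff}(i)\,\frac{di}{dt}
\]
```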
Similar definitions may be derived for nonlinear mutual inductance.
Mutual inductance
Mutual inductance is defined as the ratio of the EMF induced in one loop or coil to the rate of change of current in another loop or coil. Mutual inductance is given the symbol .
Derivation of mutual inductance
The inductance equations above are a consequence of Maxwell's equations. For the important case of electrical circuits consisting of thin wires, the derivation is straightforward.
In a system of wire loops, each with one or several wire turns, the flux linkage of loop is given by
Here denotes the number of turns in loop is the magnetic flux through loop and are some constants described below. This equation follows from Ampere's law: magnetic fields and fluxes are linear functions of the currents. By Faraday's law of induction, we have
where denotes the voltage induced in circuit This agrees with the definition of inductance above if the coefficients are identified with the coefficients of inductance. Because the total currents contribute to it also follows that is proportional to the product of turns
Mutual inductance and magnetic field energy
Multiplying the equation for v_m above by i_m dt and summing over m gives the energy transferred to the system in the time interval dt,
This must agree with the change of the magnetic field energy, W, caused by the currents. The integrability condition
requires L_m,n = L_n,m. The inductance matrix, L_m,n, thus is symmetric. The integral of the energy transfer is the magnetic field energy as a function of the currents,
This equation also is a direct consequence of the linearity of Maxwell's equations. It is helpful to associate changing electric currents with a build-up or decrease of magnetic field energy. The corresponding energy transfer requires or generates a voltage. A mechanical analogy in the K = 1 case with magnetic field energy (1/2)L i² is a body with mass M, velocity u and kinetic energy (1/2)M u². The rate of change of velocity (current) multiplied with mass (inductance) requires or generates a force (an electrical voltage).
Mutual inductance occurs when the change in current in one inductor induces a voltage in another nearby inductor. It is important as the mechanism by which transformers work, but it can also cause unwanted coupling between conductors in a circuit.
The mutual inductance, is also a measure of the coupling between two inductors. The mutual inductance by circuit on circuit is given by the double integral Neumann formula, see calculation techniques
The mutual inductance also has the relationship:
where
Once the mutual inductance is determined, it can be used to predict the behavior of a circuit:
where
The minus sign arises because of the sense the current has been defined in the diagram. With both currents defined going into the dots the sign of M will be positive (the equation would read with a plus sign instead).
Coupling coefficient
The coupling coefficient is the ratio of the open-circuit actual voltage ratio to the ratio that would be obtained if all the flux coupled from one magnetic circuit to the other. The coupling coefficient is related to mutual inductance and self inductances in the following way. From the two simultaneous equations expressed in the two-port matrix the open-circuit voltage ratio is found to be:
where
while the ratio if all the flux is coupled is the ratio of the turns, hence the ratio of the square root of the inductances
thus,
k = M / √(L1 L2),
where L1 and L2 are the self-inductances of the two coils and M is their mutual inductance.
The coupling coefficient is a convenient way to specify the relationship between a certain orientation of inductors with arbitrary inductance. Most authors define the range as 0 ≤ k < 1, but some define it as −1 < k < 1. Allowing negative values of k captures phase inversions of the coil connections and the direction of the windings.
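A minimal sketch of the relation k = M / √(L1 L2) described above; the inductance values are illustrative assumptions, not taken from the article.

```python
import math

def coupling_coefficient(L1, L2, M):
    """k = M / sqrt(L1 * L2); |k| <= 1 for passive coupled coils."""
    return M / math.sqrt(L1 * L2)

# Illustrative values (not from the article)
L1, L2, M = 100e-6, 25e-6, 40e-6   # henries
k = coupling_coefficient(L1, L2, M)
print(f"k = {k:.2f}")   # 0.80 here; k close to 1 means nearly all flux is shared
```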
Matrix representation
Mutually coupled inductors can be described by any of the two-port network parameter matrix representations. The most direct are the z parameters, which are given by
The y parameters are given by
where s is the complex frequency variable, L1 and L2 are the inductances of the primary and secondary coil, respectively, and M is the mutual inductance between the coils.
Multiple Coupled Inductors
Mutual inductance may be applied to multiple inductors simultaneously. The matrix representations for multiple mutually coupled inductors are given by
Equivalent circuits
T-circuit
Mutually coupled inductors can equivalently be represented by a T-circuit of inductors as shown. If the coupling is strong and the inductors are of unequal values then the series inductor on the step-down side may take on a negative value.
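A minimal sketch of the T-circuit element values. It assumes the common form of the equivalent circuit without an ideal transformer, in which the series arms are L1 − M and L2 − M and the shunt element is M; the numerical values below are illustrative and show the negative series element that can arise on the low-inductance side under strong coupling, as noted above.

```python
import math

def t_circuit_elements(L1, L2, k):
    """Return (series arm 1, shunt, series arm 2) = (L1 - M, M, L2 - M) for the
    common T-circuit model of two coupled inductors (no ideal transformer)."""
    M = k * math.sqrt(L1 * L2)
    return L1 - M, M, L2 - M

# Strongly coupled, very unequal inductors (illustrative values)
La, Lm, Lb = t_circuit_elements(100e-6, 1e-6, 0.95)
print(f"series (primary)   = {La*1e6:7.2f} uH")
print(f"shunt              = {Lm*1e6:7.2f} uH")
print(f"series (secondary) = {Lb*1e6:7.2f} uH   # negative, as the text notes")
```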
The coupled pair can be analyzed as a two-port network. With the output terminated with some arbitrary load impedance, the voltage gain is given by,
where k is the coupling constant and s is the complex frequency variable, as above.
For tightly coupled inductors where k = 1 this reduces to
which is independent of the load impedance. If the inductors are wound on the same core and with the same geometry, then this expression is equal to the turns ratio of the two inductors, because inductance is proportional to the square of the number of turns.
The input impedance of the network is given by,
For k = 1 this reduces to
Thus, the current gain is not independent of load unless the further condition
is met, in which case,
and
π-circuit
Alternatively, two coupled inductors can be modelled using a π equivalent circuit with optional ideal transformers at each port. While the circuit is more complicated than a T-circuit, it can be generalized to circuits consisting of more than two coupled inductors. Equivalent circuit elements have physical meaning, modelling respectively magnetic reluctances of coupling paths and magnetic reluctances of leakage paths. For example, electric currents flowing through these elements correspond to coupling and leakage magnetic fluxes. Ideal transformers normalize all self-inductances to 1 Henry to simplify mathematical formulas.
Equivalent circuit element values can be calculated from coupling coefficients with
where coupling coefficient matrix and its cofactors are defined as
and
For two coupled inductors, these formulas simplify to
and
and for three coupled inductors (for brevity shown only for and )
and
Resonant transformer
When a capacitor is connected across one winding of a transformer, making the winding a tuned circuit (resonant circuit) it is called a single-tuned transformer. When a capacitor is connected across each winding, it is called a double tuned transformer. These resonant transformers can store oscillating electrical energy similar to a resonant circuit and thus function as a bandpass filter, allowing frequencies near their resonant frequency to pass from the primary to secondary winding, but blocking other frequencies. The amount of mutual inductance between the two windings, together with the Q factor of the circuit, determine the shape of the frequency response curve. The advantage of the double tuned transformer is that it can have a wider bandwidth than a simple tuned circuit. The coupling of double-tuned circuits is described as loose-, critical-, or over-coupled depending on the value of the coupling coefficient When two tuned circuits are loosely coupled through mutual inductance, the bandwidth is narrow. As the amount of mutual inductance increases, the bandwidth continues to grow. When the mutual inductance is increased beyond the critical coupling, the peak in the frequency response curve splits into two peaks, and as the coupling is increased the two peaks move further apart. This is known as overcoupling.
Strongly coupled self-resonant coils can be used for wireless power transfer between devices at mid-range distances (up to two metres). Strong coupling is required for a high percentage of power transferred, which results in peak splitting of the frequency response.
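The loose, critical, and over-coupled regimes can be made concrete with the standard textbook expression for the critical coupling of a double-tuned circuit, k_c = 1/√(Q1 Q2). That formula is not stated explicitly in this article, and the quality factors used below are illustrative assumptions.

```python
import math

def critical_coupling(Q1, Q2):
    """Critical coupling coefficient of a double-tuned transformer: k_c = 1/sqrt(Q1*Q2).
    For k < k_c the response has a single narrow peak; for k > k_c it splits into two."""
    return 1.0 / math.sqrt(Q1 * Q2)

Q1, Q2 = 50, 80          # illustrative quality factors of the two tuned windings
kc = critical_coupling(Q1, Q2)
for k in (0.5 * kc, kc, 2.0 * kc):
    regime = "loose" if k < kc else ("critical" if k == kc else "over-coupled (split peaks)")
    print(f"k = {k:.4f}  ->  {regime}")
```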
Ideal transformers
When k = 1, the inductors are said to be closely coupled. If in addition the self-inductances go to infinity, the pair becomes an ideal transformer. In this case the voltages, currents, and number of turns can be related in the following way:
V_s = (N_s / N_p) V_p,
where V_s is the voltage across the secondary inductor, V_p is the voltage across the primary inductor (the one connected to a power source), N_s is the number of turns in the secondary inductor, and N_p is the number of turns in the primary inductor.
Conversely the current:
I_s = (N_p / N_s) I_p,
where I_s is the current through the secondary inductor and I_p is the current through the primary inductor (the one connected to a power source).
The power through one inductor is the same as the power through the other. These equations neglect any forcing by current sources or voltage sources.
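A minimal numerical sketch of the ideal-transformer relations above (voltages scale with the turns ratio, currents scale inversely, and input and output power are equal); the turns and drive values are illustrative.

```python
def ideal_transformer(V1, I1, N1, N2):
    """Ideal transformer: V2 = V1 * N2/N1, I2 = I1 * N1/N2, so V1*I1 = V2*I2."""
    V2 = V1 * N2 / N1
    I2 = I1 * N1 / N2
    return V2, I2

V1, I1 = 120.0, 2.0          # primary voltage (V) and current (A), illustrative
V2, I2 = ideal_transformer(V1, I1, N1=10, N2=1)
print(f"secondary: {V2:.1f} V, {I2:.1f} A, power {V2*I2:.1f} W (equals {V1*I1:.1f} W in)")
```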
Self-inductance of thin wire shapes
The table below lists formulas for the self-inductance of various simple shapes made of thin cylindrical conductors (wires). In general these are only accurate if the wire radius is much smaller than the dimensions of the shape, and if no ferromagnetic materials are nearby (no magnetic core).
Y is an approximately constant value between 0 and 1 that depends on the distribution of the current in the wire: Y = 0 when the current flows only on the surface of the wire (complete skin effect), Y = 1 when the current is evenly spread over the cross-section of the wire (direct current). For round wires, Rosa (1908) gives a formula equivalent to:
where
O(x) represents small term(s) that have been dropped from the formula to make it simpler. Read the term "+ O(x)" as "plus small corrections that vary on the order of x" (see big O notation).
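As a worked example of the kind of formula tabulated here, the sketch below uses a commonly quoted approximation for an isolated straight round wire, L ≈ (μ0 l / 2π)·[ln(2l/r) − 1 + Y/4], with Y = 1 for direct current and Y = 0 for complete skin effect. This may not be the exact form used in the article's table, and the wire dimensions are illustrative.

```python
import math

MU0 = 4 * math.pi * 1e-7   # vacuum permeability, H/m

def straight_wire_inductance(length, radius, Y=1.0):
    """Approximate self-inductance (H) of an isolated straight round wire (length >> radius):
    L ≈ (mu0 * l / 2*pi) * [ln(2l/r) - 1 + Y/4], small O(r/l) terms dropped.
    Y = 1 for DC (uniform current), Y = 0 for complete skin effect."""
    return (MU0 * length / (2 * math.pi)) * (math.log(2 * length / radius) - 1 + Y / 4)

L = straight_wire_inductance(0.10, 0.5e-3)   # 10 cm of 1 mm-diameter wire at DC
print(f"L ≈ {L*1e9:.0f} nH")                 # on the order of 100 nH, i.e. ~1 nH per mm
```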
See also
Electromagnetic induction
Gyrator
Hydraulic analogy
Leakage inductance
LC circuit, RLC circuit, RL circuit
Kinetic inductance
Footnotes
References
General references
Küpfmüller K., Einführung in die theoretische Elektrotechnik, Springer-Verlag, 1959.
Heaviside O., Electrical Papers. Vol.1. – L.; N.Y.: Macmillan, 1892, p. 429-560.
Fritz Langford-Smith, editor (1953). Radiotron Designer's Handbook, 4th Edition, Amalgamated Wireless Valve Company Pty., Ltd. Chapter 10, "Calculation of Inductance" (pp. 429–448), includes a wealth of formulas and nomographs for coils, solenoids, and mutual inductance.
F. W. Sears and M. W. Zemansky 1964 University Physics: Third Edition (Complete Volume), Addison-Wesley Publishing Company, Inc. Reading MA, LCCC 63-15265 (no ISBN).
External links
Clemson Vehicular Electronics Laboratory: Inductance Calculator
Electrodynamics
Electromagnetic quantities | Inductance | [
"Physics",
"Mathematics"
] | 5,871 | [
"Electromagnetic quantities",
"Physical quantities",
"Quantity",
"Electrodynamics",
"Dynamical systems"
] |
165,180 | https://en.wikipedia.org/wiki/Software%20configuration%20management | Software configuration management (SCM), a.k.a.
software change and configuration management (SCCM), is the software engineering practice of tracking and controlling changes to a software system; part of the larger cross-disciplinary field of configuration management (CM). SCM includes version control and the establishment of baselines.
Goals
The goals of SCM include:
Configuration identification - Identifying configurations, configuration items and baselines.
Configuration control - Implementing a controlled change process. This is usually achieved by setting up a change control board whose primary function is to approve or reject all change requests that are sent against any baseline.
Configuration status accounting - Recording and reporting all the necessary information on the status of the development process.
Configuration auditing - Ensuring that configurations contain all their intended parts and are sound with respect to their specifying documents, including requirements, architectural specifications and user manuals.
Build management - Managing the process and tools used for builds.
Process management - Ensuring adherence to the organization's development process.
Environment management - Managing the software and hardware that host the system.
Teamwork - Facilitating team interactions related to the process.
Defect tracking - Making sure every defect has traceability back to the source.
With the introduction of cloud computing and DevOps the purposes of SCM tools have become merged in some cases. The SCM tools themselves have become virtual appliances that can be instantiated as virtual machines and saved with state and version. The tools can model and manage cloud-based virtual resources, including virtual appliances, storage units, and software bundles. The roles and responsibilities of the actors have become merged as well with developers now being able to dynamically instantiate virtual servers and related resources.
History
Examples
See also
References
Further reading
Aiello, R. (2010). Configuration Management Best Practices: Practical Methods that Work in the Real World (1st ed.). Addison-Wesley. .
Babich, W.A. (1986). Software Configuration Management, Coordination for Team Productivity. 1st edition. Boston: Addison-Wesley
Berczuk, Appleton; (2003). Software Configuration Management Patterns: Effective TeamWork, Practical Integration (1st ed.). Addison-Wesley. .
Bersoff, E.H. (1997). Elements of Software Configuration Management. IEEE Computer Society Press, Los Alamitos, CA, 1-32
Dennis, A., Wixom, B.H. & Tegarden, D. (2002). System Analysis & Design: An Object-Oriented Approach with UML. Hoboken, New York: John Wiley & Sons, Inc.
Department of Defense, USA (2001). Military Handbook: Configuration management guidance (rev. A) (MIL-HDBK-61A). Retrieved January 5, 2010, from http://www.everyspec.com/MIL-HDBK/MIL-HDBK-0001-0099/MIL-HDBK-61_11531/
Futrell, R.T. et al. (2002). Quality Software Project Management. 1st edition. Prentice-Hall.
International Organization for Standardization (2003). ISO 10007: Quality management systems – Guidelines for configuration management.
Saeki M. (2003). Embedding Metrics into Information Systems Development Methods: An Application of Method Engineering Technique. CAiSE 2003, 374–389.
Scott, J.A. & Nisse, D. (2001). Software configuration management. In: Guide to Software Engineering Body of Knowledge. Retrieved January 5, 2010, from http://www.computer.org/portal/web/swebok/htmlformat
Paul M. Duvall, Steve Matyas, and Andrew Glover (2007). Continuous Integration: Improving Software Quality and Reducing Risk. (1st ed.). Addison-Wesley Professional. .
External links
SCM and ISO 9001 by Robert Bamford and William Deibler, SSQC
Use Cases and Implementing Application Lifecycle Management
Parallel Development Strategies for Software Configuration Management
Configuration management
Software engineering
IEEE standards
Types of tools used in software development | Software configuration management | [
"Technology",
"Engineering"
] | 835 | [
"Systems engineering",
"Computer engineering",
"Computer standards",
"Configuration management",
"Software engineering",
"Information technology",
"IEEE standards"
] |
165,194 | https://en.wikipedia.org/wiki/Hydrometer | A hydrometer or lactometer is an instrument used for measuring density or relative density of liquids based on the concept of buoyancy. They are typically calibrated and graduated with one or more scales such as specific gravity.
A hydrometer usually consists of a sealed hollow glass tube with a wider bottom portion for buoyancy, a ballast such as lead or mercury for stability, and a narrow stem with graduations for measuring. The liquid to test is poured into a tall container, often a graduated cylinder, and the hydrometer is gently lowered into the liquid until it floats freely. The point at which the surface of the liquid touches the stem of the hydrometer correlates to relative density. Hydrometers can contain any number of scales along the stem corresponding to properties correlating to the density.
Hydrometers are calibrated for different uses, such as a lactometer for measuring the density (creaminess) of milk, a saccharometer for measuring the density of sugar in a liquid, or an alcoholometer for measuring higher levels of alcohol in spirits.
The hydrometer makes use of Archimedes' principle: a solid suspended in a fluid is buoyed by a force equal to the weight of the fluid displaced by the submerged part of the suspended solid. The lower the density of the fluid, the deeper a hydrometer of a given weight sinks; the stem is calibrated to give a numerical reading.
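A minimal numerical sketch of this buoyancy balance. The instrument mass and stem cross-section below are illustrative assumptions; the calculation shows how a denser liquid requires a smaller displaced volume, so the hydrometer rides higher and the liquid surface crosses the stem at a different graduation.

```python
# Archimedes' principle applied to a floating hydrometer (illustrative values only)
mass = 0.050          # kg, total mass of the hydrometer (assumed)
stem_area = 2.0e-5    # m^2, cross-sectional area of the narrow stem (assumed)
rho_water = 1000.0    # kg/m^3

def submerged_volume(rho_liquid):
    # Weight of displaced liquid equals weight of hydrometer: rho * V * g = m * g
    return mass / rho_liquid

for name, rho in [("water", 1000.0), ("wort (SG 1.040)", 1040.0), ("spirit mix (SG 0.950)", 950.0)]:
    dV = submerged_volume(rho) - submerged_volume(rho_water)
    dh_mm = (dV / stem_area) * 1000.0   # extra stem length submerged relative to water
    print(f"{name:22s}: stem submerged {dh_mm:+7.1f} mm relative to the water mark")
```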
History
thumb|upright=0.6|Hydrometer from Practical Physics
The hydrometer probably dates back to the Greek philosopher Archimedes (3rd century BC) who used its principles to find the density of various liquids. An early description of a hydrometer comes from a Latin poem, written in the 2nd century AD by Remnius, who compared the use of a hydrometer to the method of fluid displacement used by Archimedes to determine the gold content of Hiero II's crown.
Hypatia of Alexandria (born c. 370; died 415 CE), an important female Greek mathematician, is the first person traditionally associated with the hydrometer. In a letter, Synesius of Cyrene asks Hypatia, his teacher, to make a hydrometer for him:
The instrument in question is a cylindrical tube, which has the shape of a flute and is about the same size. It has notches in a perpendicular line, by means of which we are able to test the weight of the waters. A cone forms a lid at one of the extremities, closely fitted to the tube. The cone and the tube have one base only. This is called the baryllium. Whenever you place the tube in water, it remains erect. You can then count the notches at your ease, and in this way ascertain the weight of the water.
According to the Encyclopedia of the History of Arabic Science, it was used by Abū Rayhān al-Bīrūnī in the 11th century and described by Al-Khazini in the 12th century. It was rediscovered in 1612 by Galileo and his circle of friends, and used in experiments especially at the Accademia del Cimento. It appeared again in the 1675 work of Robert Boyle (who coined the name "hydrometer"), with types devised by Antoine Baumé (the Baumé scale), William Nicholson, and Jacques Alexandre César Charles in the late 18th century, more or less contemporarily with Benjamin Sikes' discovery of the device by which the alcoholic content of a liquid can be automatically determined. The use of the Sikes device was made obligatory by British law in 1818.
Ranges
The hydrometer sinks deeper in low-density liquids such as kerosene, gasoline, and alcohol, and less deep in high-density liquids such as brine, milk, and acids. It is usual for hydrometers to be used with dense liquids to have the mark 1.000 (for water) near the top of the stem, and those for use with lighter liquids to have 1.000 near the bottom. In many industries a set of hydrometers is used (1.0–0.95, 0.95–0.90, etc.) to have instruments covering the range of specific gravities that may be encountered.
Scales
Modern hydrometers usually measure specific gravity but different scales were (and sometimes still are) used in certain industries. Examples include:
API gravity, universally used worldwide by the petroleum industry.
Baumé scale, formerly used in industrial chemistry and pharmacology
Brix scale, primarily used in fruit juice, wine making and the sugar industry
Oechsle scale, used for measuring the density of grape must
Plato scale, primarily used in brewing
Twaddell scale, formerly used in the bleaching and dyeing industries
Specialized hydrometers
Specialized hydrometers are frequently named for their use: a lactometer, for example, is a hydrometer designed especially for use with dairy products. They are sometimes referred to by this specific name, sometimes as hydrometers.
Alcoholometer
An alcoholmeter is a hydrometer that indicates the alcoholic strength of liquids which are essentially a mixture of alcohol and water. It is also known as a proof and Tralles hydrometer (after Johann Georg Tralles, but commonly misspelled as traille and tralle). It measures the density of the fluid. Where no sugar or other dissolved substances are present, the specific gravity of a solution of ethanol in water can be directly correlated to the concentration of alcohol. Saccharometers for measuring sugar-water mixtures measure densities greater than water. Many have scales marked with volume percents of "potential alcohol", based on a pre-calculated specific gravity. A higher "potential alcohol" reading on this scale is caused by a greater specific gravity, assumed to be caused by the introduction of dissolved sugars or carbohydrate based material. A reading is taken before and after fermentation and approximate alcohol content is determined by subtracting the post fermentation reading from the pre-fermentation reading.
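A minimal sketch of that before-and-after calculation, using the rule of thumb ABV ≈ (OG − FG) × 131.25 that is widely used by home brewers and winemakers; the article itself does not give a specific formula, and the gravity readings below are illustrative.

```python
def approx_abv(original_gravity, final_gravity):
    """Rough alcohol-by-volume estimate from two hydrometer readings:
    ABV ≈ (OG - FG) * 131.25 (a common brewing rule of thumb, not an exact relation)."""
    return (original_gravity - final_gravity) * 131.25

# Pre-fermentation (OG) and post-fermentation (FG) specific gravity readings (illustrative)
print(f"ABV ≈ {approx_abv(1.050, 1.010):.1f} %")
```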
These were important instruments for determining tax, and specific maker's instruments could be specified. Bartholomew Sikes had a monopoly in the UK and Mary Dicas and her family enjoyed a similar monopoly in the US.
Lactometer
A lactometer is used to check purity of cow's milk. The specific gravity of milk does not give a conclusive indication of its composition since milk contains a variety of substances that are either heavier or lighter than water. Additional tests for fat content are necessary to determine overall composition. The instrument is graduated into a hundred parts. Milk is poured in and allowed to stand until the cream has formed, then the depth of the cream deposit in degrees determines the quality of the milk. If the milk sample is pure, the lactometer floats higher than if it is adulterated or impure.
Saccharometer
A saccharometer is a type of hydrometer used for determining the amount of sugar in a solution, invented by Thomas Thomson. It is used primarily by winemakers and brewers, and it can also be used in making sorbets and ice-creams. The first brewers' saccharometer was constructed by Benjamin Martin (with distillation in mind), and initially used for brewing by James Baverstock Sr in 1770. Henry Thrale adopted its use and it was later popularized by John Richardson in 1784.
It consists of a large weighted glass bulb with a thin stem rising from the top with calibrated markings. The sugar level can be determined by reading the value where the surface of the liquid crosses the scale. The higher the sugar content, the denser the solution, and thus the higher the bulb will float.
Thermohydrometer
A thermohydrometer is a hydrometer that has a thermometer enclosed in the float section. For measuring the density of petroleum products, such as fuel oils, the specimen is usually heated in a temperature jacket with a thermometer placed behind it since density is dependent on temperature. Light oils are placed in cooling jackets, typically at 15 °C.
Very light oils with many volatile components are measured in a variable volume container using a floating piston sampling device to minimize light end losses.
Battery hydrometer
The state of charge of a lead-acid battery can be estimated from the density of the sulfuric acid solution used as electrolyte. A hydrometer calibrated to read specific gravity relative to water at a stated reference temperature is a standard tool for servicing automobile batteries. Tables are used to correct the reading to the standard temperature. Hydrometers are also used for maintenance of wet-cell nickel-cadmium batteries to ensure the electrolyte is of the proper strength for the application; for this battery chemistry the specific gravity of the electrolyte is not related to the state of charge of the battery.
A battery hydrometer with thermometer (thermohydrometer) measures the temperature-compensated specific gravity and electrolyte temperature.
Antifreeze tester
Another automotive use of hydrometers is testing the quality of the antifreeze solution used for engine cooling. The degree of freeze protection can be related to the density (and so concentration) of the antifreeze; different types of antifreeze have different relations between measured density and freezing point.
Acidometer
An acidometer, or acidimeter, is a hydrometer used to measure the specific gravity of an acid.
Barkometer
A barkometer is calibrated to test the strength of tanning liquors used in tanning leather.
Salinometer
A salinometer is a hydrometer used to measure the salt content of the feed water to a marine steam boiler.
Urinometer
A urinometer is a medical hydrometer designed for urinalysis. As urine's specific gravity is dictated by its ratio of solutes (wastes) to water, a urinometer makes it possible to quickly assess a patient's overall level of hydration.
Gallery
Use in soil analysis
A hydrometer analysis is the process by which fine-grained soils, silts and clays, are graded. Hydrometer analysis is performed if the grain sizes are too small for sieve analysis. The basis for this test is Stokes' Law for falling spheres in a viscous fluid in which the terminal velocity of fall depends on the grain diameter and the densities of the grain in suspension and of the fluid. The grain diameter thus can be calculated from a knowledge of the distance and time of fall. The hydrometer also determines the specific gravity (or density) of the suspension, and this enables the fraction of particles of a certain equivalent particle diameter to be calculated.
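A minimal sketch of the Stokes' law calculation described above, assuming spherical quartz grains (density 2650 kg/m³) settling at terminal velocity in water at 20 °C (viscosity 0.001 Pa·s). The property values and the settling depth and time are illustrative assumptions; real test procedures apply additional corrections.

```python
import math

def stokes_equivalent_diameter(depth_m, time_s, rho_grain=2650.0, rho_fluid=1000.0,
                               viscosity=1.0e-3, g=9.81):
    """Equivalent particle diameter (m) from Stokes' law for a sphere settling at
    terminal velocity v = depth / time:
        v = g * d^2 * (rho_grain - rho_fluid) / (18 * viscosity)
    Default properties assume quartz grains in 20 C water (illustrative)."""
    v = depth_m / time_s
    return math.sqrt(18.0 * viscosity * v / (g * (rho_grain - rho_fluid)))

# Grains that have settled 10 cm in 2 minutes:
d = stokes_equivalent_diameter(0.10, 120.0)
print(f"equivalent diameter ≈ {d*1e6:.1f} µm")   # silt-sized in this example
```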
See also
Density meter
Densitometer
Dasymeter
Elevator paradox (physics)
Fahrenheit hydrometer
Gravity (beer)
Hydrostatic bubbles
Hygrometer
Oscillating U-tube
Pyknometer
Refractometer
References
Sources
Hypatia of Alexandria
Hydrometer Information
Guide to Brewing Hydrometers
Jurjen Draaijer. Milk Testing . Milk Producer Group Resource Book, Food and Agriculture Organization of the United Nations
Using Your Hydrometer, Winemaking Home Page.
How The Hydrometer Works, Home Winemaking Techniques
Ancient Roman technology
Brewing
Laboratory equipment
Laboratory glassware
Density meters | Hydrometer | [
"Physics",
"Technology",
"Engineering"
] | 2,291 | [
"Density",
"Density meters",
"Physical quantities",
"Measuring instruments"
] |
165,198 | https://en.wikipedia.org/wiki/Wind%20speed | In meteorology, wind speed, or wind flow speed, is a fundamental atmospheric quantity caused by air moving from high to low pressure, usually due to changes in temperature. Wind speed is now commonly measured with an anemometer.
Wind speed affects weather forecasting, aviation and maritime operations, construction projects, growth and metabolism rates of many plant species, and has countless other implications. Wind direction is usually almost parallel to isobars (and not perpendicular, as one might expect), due to Earth's rotation.
Units
The meter per second (m/s) is the SI unit for velocity and the unit recommended by the World Meteorological Organization for reporting wind speeds, and used amongst others in weather forecasts in the Nordic countries. Since 2010 the International Civil Aviation Organization (ICAO) also recommends meters per second for reporting wind speed when approaching runways, replacing their former recommendation of using kilometers per hour (km/h).
For historical reasons, other units such as miles per hour (mph), knots (kn), and feet per second (ft/s) are also sometimes used to measure wind speeds. Historically, wind speeds have also been classified using the Beaufort scale, which is based on visual observations of specifically defined wind effects at sea or on land.
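For reference, the conversion factors between these units are fixed (1 knot = 1.852 km/h exactly; 1 mile = 1609.344 m; 1 ft = 0.3048 m), so reported speeds can be converted directly, as in this short sketch.

```python
# Conversions between the wind speed units mentioned above (exact definitions)
MS_PER_KNOT = 1852.0 / 3600.0      # 1 knot = 1852 m per hour
MS_PER_MPH = 1609.344 / 3600.0     # 1 mile = 1609.344 m

def ms_to_other(ms):
    return {"km/h": ms * 3.6, "knots": ms / MS_PER_KNOT,
            "mph": ms / MS_PER_MPH, "ft/s": ms / 0.3048}

for speed in (5.0, 10.0, 25.0):    # m/s
    print(speed, "m/s ->", {unit: round(v, 1) for unit, v in ms_to_other(speed).items()})
```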
Factors affecting wind speed
Wind speed is affected by a number of factors and situations, operating on varying scales (from micro to macro scales). These include the pressure gradient, Rossby waves, jet streams, and local weather conditions. There are also links to be found between wind speed and wind direction, notably with the pressure gradient and terrain conditions.
The pressure gradient describes the difference in air pressure between two points in the atmosphere or on the surface of the Earth. It is vital to wind speed, because the greater the difference in pressure, the faster the wind flows (from high to low pressure) to balance out the variation. The pressure gradient, when combined with the Coriolis effect and friction, also influences wind direction.
Rossby waves are strong winds in the upper troposphere, operating on a global scale and moving from west to east (hence known as westerlies). Wind speeds in these upper-tropospheric waves differ from those experienced in the lower troposphere.
Local weather conditions play a key role in influencing wind speed, as the formation of hurricanes, monsoons, and cyclones as freak weather conditions can drastically affect the flow velocity of the wind.
Highest speed
Non-tornadic
The fastest wind speed not related to tornadoes ever recorded was during the passage of Tropical Cyclone Olivia on 10 April 1996: an automatic weather station on Barrow Island, Australia, registered a maximum wind gust of 408 km/h (113.3 m/s; 253.5 mph). The wind gust was evaluated by the WMO Evaluation Panel, which found that the anemometer was mechanically sound and that the gust was within statistical probability, and ratified the measurement in 2010. The anemometer was mounted 10 m above ground level (and thus 64 m above sea level). During the cyclone, several extreme gusts were recorded; the extreme gust factor was on the order of 2.27–2.75 times the mean wind speed. The pattern and scales of the gusts suggest that a mesovortex was embedded in the already-strong eyewall of the cyclone.
Currently, the second-highest surface wind speed ever officially recorded is 372 km/h (231 mph; 103.3 m/s), at the Mount Washington (New Hampshire) Observatory, 1,917 m (6,288 ft) above sea level in the US, on 12 April 1934, using a hot-wire anemometer. The anemometer, specifically designed for use on Mount Washington, was later tested by the US National Weather Bureau and confirmed to be accurate.
Tornadic
Wind speeds within certain atmospheric phenomena (such as tornadoes) may greatly exceed these values but have never been accurately measured. Directly measuring these tornadic winds is rarely done, as the violent wind would destroy the instruments. A method of estimating speed is to use Doppler on Wheels or mobile Doppler weather radars to measure the wind speeds remotely. Using this method, a mobile radar (RaXPol) owned and operated by the University of Oklahoma recorded winds up to inside the 2013 El Reno tornado, marking the fastest winds ever observed by radar in history. In 1999, a mobile radar measured winds up to during the 1999 Bridge Creek–Moore tornado in Oklahoma on 3 May, although another figure of has also been quoted for the same tornado. Yet another number used by the Center for Severe Weather Research for that measurement is . However, speeds measured by Doppler weather radar are not considered official records.
On other planets
Wind speeds can be much higher on exoplanets. Scientists at the University of Warwick in 2015 determined that HD 189733b has winds of . In a press release, the University announced that the methods used from measuring HD 189733b's wind speeds could be used to measure wind speeds on Earth-like exoplanets.
Measurement
An anemometer is one of the tools used to measure wind speed. A device consisting of a vertical pillar and three or four concave cups, the anemometer captures the horizontal movement of air particles (wind speed).
Unlike traditional cup-and-vane anemometers, ultrasonic wind sensors have no moving parts and are therefore used to measure wind speed in applications that require maintenance-free performance, such as atop wind turbines. As the name suggests, ultrasonic wind sensors measure the wind speed using high-frequency sound. An ultrasonic anemometer has two or three pairs of sound transmitters and receivers. Each transmitter constantly beams high-frequency sound to its receiver. Electronic circuits inside measure the time it takes for the sound to make its journey from each transmitter to the corresponding receiver. Depending on how the wind blows, some of the sound beams will be affected more than the others, slowing them down or speeding them up very slightly. The circuits measure the difference in speeds of the beams and use that to calculate how fast the wind is blowing.
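A minimal sketch of the time-of-flight principle for a single transducer pair. The path length, speed of sound, and wind value are illustrative assumptions; real instruments combine two or three pairs to resolve the full horizontal wind vector.

```python
# Time-of-flight principle used by ultrasonic anemometers (single path, illustrative values)
L = 0.15          # m, distance between a transmitter/receiver pair (assumed)
c = 343.0         # m/s, speed of sound (assumed)
v_true = 7.5      # m/s, wind component along the path (what we want to recover)

t_with_wind    = L / (c + v_true)   # sound travelling downwind arrives sooner
t_against_wind = L / (c - v_true)   # sound travelling upwind arrives later

# The wind component follows from the two transit times alone,
# independent of the actual speed of sound:
v_measured = (L / 2.0) * (1.0 / t_with_wind - 1.0 / t_against_wind)
print(f"recovered wind component: {v_measured:.2f} m/s")
```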
Acoustic resonance wind sensors are a variant of the ultrasonic sensor. Instead of using time of flight measurement, acoustic resonance sensors use resonating acoustic waves within a small purpose-built cavity. Built into the cavity is an array of ultrasonic transducers, which are used to create the separate standing-wave patterns at ultrasonic frequencies. As wind passes through the cavity, a change in the wave's property occurs (phase shift). By measuring the amount of phase shift in the received signals by each transducer, and then by mathematically processing the data, the sensor is able to provide an accurate horizontal measurement of wind speed and direction.
Another tool used to measure wind velocity is a GPS unit combined with a pitot tube. A fluid-flow velocity instrument, the pitot tube is primarily used to determine the air velocity of an aircraft.
Design of structures
Wind speed is a common factor in the design of structures and buildings around the world. It is often the governing factor in the required lateral strength of a structure's design.
In the United States, the wind speed used in design is often referred to as a "3-second gust", which is the highest sustained gust over a 3-second period having a probability of being exceeded per year of 1 in 50 (ASCE 7-05, updated to ASCE 7-16). This design wind speed is accepted by most building codes in the United States and often governs the lateral design of buildings and structures.
In Canada, reference wind pressures are used in design and are based on the "mean hourly" wind speed having a probability of being exceeded per year of 1 in 50. The reference wind pressure q is calculated using the equation q = (1/2)ρV², where ρ is the air density and V is the wind speed.
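A minimal sketch of that calculation; the air density used here is the standard sea-level value of 1.225 kg/m³, an assumption, since design codes specify the exact density and averaging conventions to use.

```python
def reference_wind_pressure(wind_speed_ms, air_density=1.225):
    """Dynamic (reference) wind pressure q = 1/2 * rho * V^2, in pascals.
    Default air density is the standard sea-level value (assumed)."""
    return 0.5 * air_density * wind_speed_ms ** 2

# Mean hourly speed of 25 m/s (90 km/h), an illustrative value:
print(f"q ≈ {reference_wind_pressure(25.0):.0f} Pa")
```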
Historically, wind speeds have been reported with a variety of averaging times (such as fastest mile, 3-second gust, 1-minute, and mean hourly) which designers may have to take into account. To convert wind speeds from one averaging time to another, the Durst Curve was developed, which defines the relation between probable maximum wind speed averaged over some number of seconds to the mean wind speed over one hour.
See also
American Society of Civil Engineers (promulgator of ASCE 7-05, current version is ASCE 7-16)
Beaufort scale
Fujita scale and Enhanced Fujita Scale
International Building Code (promulgator of NBC 2005)
ICAO recommendations – International System of Units
Knot (unit)
Prevailing wind
Saffir–Simpson Hurricane Scale
TORRO scale
Wind direction
References
External links
Wind
Airspeed
Meteorological quantities
Wind power
Weather extremes of Earth
es:Viento#Características físicas de los vientos | Wind speed | [
"Physics",
"Mathematics"
] | 1,753 | [
"Physical quantities",
"Quantity",
"Meteorological quantities",
"Airspeed",
"Wikipedia categories named after physical quantities"
] |
165,201 | https://en.wikipedia.org/wiki/Nuclear%20Regulatory%20Commission | The United States Nuclear Regulatory Commission (NRC) is an independent agency of the United States government tasked with protecting public health and safety related to nuclear energy. Established by the Energy Reorganization Act of 1974, the NRC began operations on January 19, 1975, as one of two successor agencies to the United States Atomic Energy Commission. Its functions include overseeing reactor safety and security, administering reactor licensing and renewal, licensing and oversight for fuel cycle facilities, licensing radioactive materials, radionuclide safety, and managing the storage, security, recycling, and disposal of spent fuel.
History
Prior to 1975 the Atomic Energy Commission was in charge of matters regarding radionuclides. The AEC was dissolved, because it was perceived as unduly favoring the industry it was charged with regulating. The NRC was formed as an independent commission to oversee nuclear energy matters, oversight of nuclear medicine, and nuclear safety and security.
The U.S. AEC became the Energy Research and Development Administration (ERDA) in 1975, responsible for development and oversight of nuclear weapons. Research and promotion of civil uses of radioactive materials, such as for nuclear non-destructive testing, nuclear medicine, and nuclear power, was split into the Office of Nuclear Energy, Science & Technology within ERDA by the same act. In 1977, ERDA became the United States Department of Energy (DOE). In 2000, the National Nuclear Security Administration was created as a subcomponent of DOE, responsible for nuclear weapons.
Following the Fukushima nuclear disaster in 2011, the NRC developed a guidance strategy known as "Diverse and Flexible Coping Strategies" (FLEX), which requires licensee nuclear power plants to account for beyond-design-basis external events (seismic, flooding, high winds, etc.) that are most impactful to reactor safety through loss of power and loss of the ultimate heat sink. FLEX strategies have been implemented at all operating nuclear power plants in the United States.
The origins and development of NRC regulatory processes and policies are explained in five volumes of history published by the University of California Press. These are:
Controlling the Atom: The Beginnings of Nuclear Regulation 1946–1962 (1984).
Containing the Atom: Nuclear Regulation in a Changing Environment, 1963–1971 (1992).
Permissible Dose: A History of Radiation Protection in the Twentieth Century (2000)
Three Mile Island: A Nuclear Crisis in Historical Perspective (2004)
The Road to Yucca Mountain: The Development of Radioactive Waste Policy in the United States (2009).
The NRC has produced a booklet, A Short History of Nuclear Regulation 1946–2009, which outlines key issues in NRC history. Thomas Wellock, a former academic, is the NRC historian. Before joining the NRC, Wellock wrote Critical Masses: Opposition to Nuclear Power in California, 1958–1978.
Mission and commissioners
The NRC's mission is to regulate the nation's civilian use of byproduct, source, and special nuclear materials to ensure adequate protection of public health and safety, to promote the common defense and security, and to protect the environment.
The NRC's regulatory mission covers three main areas:
Reactors – Commercial reactors for generating electric power and research and test reactors used for research, testing, and training
Materials – Uses of nuclear materials in medical, industrial, and academic settings and facilities that produce nuclear fuel
Waste – Transportation, storage, and disposal of nuclear materials and waste, and decommissioning of nuclear facilities from service.
The NRC is headed by five commissioners appointed by the president of the United States and confirmed by the United States Senate for five-year terms. One of them is designated by the president to be the chairman and official spokesperson of the commission. The chairman is the principal executive officer of the NRC, who exercises all of the executive and administrative functions of the commission.
The current chairman is David A. Wright. President Trump designated Wright as chairman of the NRC effective January 20, 2025.
Current commissioners
The current commissioners are:
List of chairmen
List of commissioners
Organization
The NRC consists of the commission on the one hand and offices of the executive director for Operations on the other.
The commission is divided into two committees (Advisory Committee on Reactor Safeguards and Advisory Committee on the Medical Uses of Isotopes) and one Board, the Atomic Safety and Licensing Board Panel, as well as eight commission staff offices (Office of Commission Appellate Adjudication, Office of Congressional Affairs, Office of the General Counsel, Office of International Programs, Office of Public Affairs, Office of the Secretary, Office of the Chief Financial Officer, Office of the Executive Director for Operations).
Christopher T. Hanson chaired the NRC prior to Wright's designation in January 2025. There are 14 Executive Director for Operations offices:
Office of Nuclear Material Safety and Safeguards, Office of Nuclear Reactor Regulation, Office of Nuclear Regulatory Research, Office of Enforcement, which investigates reports by nuclear power whistleblowers, specifically the Allegations Program, Office of Investigations, Office of Nuclear Security and Incident Response, Region I, Region II, Region III, Region IV, Office of the Chief Information Officer, Office of Administration, Office of the Chief Human Capital Officer, and Office of Small Business and Civil Rights.
Of these operations offices, NRC's major program components are the first two offices mentioned above.
NRC's proposed FY 2024 budget is $9.949 million, with 2,897.9 full-time equivalents (FTE), 90 percent of which is recovered by fees. This is an increase of $5.1 million, compared to FY 2023.
NRC headquarters offices are located in unincorporated North Bethesda, Maryland (although the mailing address for two of the three main buildings in the complex list the city as Rockville, MD), and there are four regional offices.
Regions
The NRC territory is broken down into four geographical regions; until the late 1990s, there was a Region V office in Walnut Creek, California which was absorbed into Region IV, and Region V was dissolved.
Region I, located in King of Prussia, Pennsylvania, oversees the northeastern states.
Region II, located in Atlanta, Georgia, oversees most of the southeastern states.
Region III, located in Lisle, Illinois, oversees the Midwest.
Region IV, located in Arlington, Texas, oversees the western and south central states.
In these four regions NRC oversees the operation of US nuclear reactors, namely 94 power-producing reactors, and 31 non-power-producing, or research and test reactors. Oversight is done on several levels. For example:
Each power-producing reactor site has resident inspectors, who monitor day-to-day operations.
Numerous special inspection teams, with many different specialties, routinely conduct inspections at each site.
Agreement States
Agreement States have entered into agreements with the NRC that give them the authority to license and inspect byproduct, source, or special nuclear materials used or possessed within their borders. Any applicant, other than a Federal agency or Federally recognized Indian tribe, who wishes to possess or use licensed material in one of these Agreement States should contact the responsible officials in that State for guidance on preparing an application. These applications should be filed with State officials, not with the NRC.
Recordkeeping system
NRC has a library, which also contains online document collections. In 1984 it started an electronic repository called ADAMS, the Agencywide Documents Access and Management System, for its public inspection reports, correspondence, and other technical documents written by NRC staff, contractors, and licensees. It was upgraded in October 2010 and is now web-based. Of documents from 1980 to 1999 only some have abstracts and/or full text; most are citations. Documents from before 1980 are available in paper or microfiche formats. Copies of these older documents or classified documents can be applied for with a FOIA request.
Training and accreditation
NRC conducts audits and training inspections, observes the National Nuclear Accrediting Board meetings, and nominates some members.
The 1980 Kemeny Commission's report after the Three Mile Island accident recommended that the nuclear energy industry "set and police its own standards of excellence". The nuclear industry founded the Institute of Nuclear Power Operations (INPO) within 9 months to establish personnel training and qualification. The industry, through INPO, created the 'National Academy for Nuclear Training Program' either as early as 1980 or in September 1985 per the International Atomic Energy Agency. INPO refers to NANT as "our National Academy for Nuclear Training" on its website. NANT integrates and standardizes the training programs of INPO and US nuclear energy companies, offers training scholarships, and interacts with the 'National Nuclear Accrediting Board'. This board is closely related to the National Academy for Nuclear Training rather than being a government body, and it is referred to as independent by INPO, the Nuclear Energy Institute, and nuclear utilities, but not by the NRC, all of whom are represented on the board.
The 1982 Nuclear Waste Policy Act directed NRC in Section 306 to issue regulations or "other appropriate regulatory guidance" on training of nuclear plant personnel. Since the nuclear industry already had developed training and accreditation, NRC issued a policy statement in 1985, endorsing the INPO program. NRC has a memorandum of agreement with INPO and "monitors INPO activities by observing accreditation team visits and the monthly NNAB meetings".
In 1993, NRC endorsed the industry's approach to training that had been used for nearly a decade through its 'Training Rule'. In February 1994, NRC passed the 'Operator Requalification Rule' 59 FR 5938, Feb. 9, 1994, allowing each nuclear power plant company to conduct the operator licensing renewal examination every six years, eliminating the requirement of NRC-administered written requalification examination.
In 1999, NRC issued a final rule on operator initial licensing examination, that allows companies to prepare, proctor, and grade their own operator initial licensing examinations. Facilities can "upon written request" continue to have the examinations prepared and administered by NRC staff, but if a company volunteers to prepare the examination, NRC continues to approve and administer it.
Since 2000 meetings between NRC and applicants or licensees have been open to the public.
Prospective nuclear units
Between 2007 and 2009, 13 companies applied to the Nuclear Regulatory Commission for construction and operating licenses to build 25 new nuclear power reactors in the United States.
However, the case for widespread nuclear plant construction was eroded by abundant natural gas supplies. Many license applications for proposed new reactors were suspended or cancelled; new reactors were no longer among the cheapest energy options available and therefore were not an attractive investment. In 2013, four reactors were permanently closed: San Onofre 2 and 3 in California, Crystal River 3 in Florida, and Kewaunee in Wisconsin. Vermont Yankee, in Vernon, was shut down on December 29, 2014. New York state eventually closed Indian Point Energy Center, in Buchanan, 30 miles from New York City, on April 30, 2021.
In 2019 the NRC approved a second 20-year license extension for Turkey Point units 3 and 4, the first time NRC had extended licenses to 80 years total lifetime. Similar extensions for about 20 reactors are planned or intended, with more expected in the future. This will reduce demand for replacement new builds.
Controversy, concerns, and criticisms
Byrne and Hoffman wrote in 1996, that since the 1980s the NRC has generally favored the interests of nuclear industry, and been unduly responsive to industry concerns, while failing to pursue tough regulation. The NRC has often sought to hamper or deny public access to the regulatory process, and created new barriers to public participation.
Barack Obama, when running for president in 2007, said that the five-member NRC had become "captive of the industries that it regulates".
Numerous observers have criticized the NRC as an example of regulatory capture. A 2011 Reuters article accused the NRC of having conflicting roles as regulator and "salesman", the Union of Concerned Scientists has accused it of doing an inadequate job, and the agency's approval process has been called a "rubber stamp".
Frank N. von Hippel wrote in March 2011, that despite the 1979 Three Mile Island accident in Pennsylvania, the NRC has often been too timid in ensuring that America's commercial reactors are operated safely:
Nuclear power regulation is a textbook example of the problem of "regulatory capture" — in which an industry gains control of an agency meant to regulate it. Regulatory capture can be countered only by vigorous public scrutiny and Congressional oversight, but in the 32 years since Three Mile Island, interest in nuclear regulation has declined precipitously.
An article in the Bulletin of the Atomic Scientists stated that many forms of NRC regulatory failure exist, including regulations ignored by the common consent of NRC and industry:
A worker (named George Galatis) at the Millstone Nuclear Power Plant in Connecticut kept warning management that the spent fuel rods were being put too quickly into the spent fuel storage pool and that the number of rods in the pool exceeded specifications. Management ignored him, so he went directly to the NRC, which eventually admitted that it knew of both of the forbidden practices, which happened at many plants, but chose to ignore them. The whistleblower was fired and blacklisted.
Terrorism concerns and threats
Terrorist attacks such as those executed by al-Qaeda on New York City and Washington, D.C., on September 11, 2001, and in London on July 7, 2005, have prompted fears that extremist groups might use radioactive dirty bombs in further attacks in the United States and elsewhere.
In March 2007, undercover investigators from the Government Accountability Office set up a false company and obtained a license from the Nuclear Regulatory Commission that would have allowed them to buy the radioactive materials needed for a dirty bomb. According to the GAO report, NRC officials did not visit the company or attempt to personally interview its executives. Instead, within 28 days, the NRC mailed the license to the West Virginia postal box. Upon receipt of the license, GAO officials were able to easily modify its stipulations and remove a limit on the amount of radioactive material they could buy. A spokesman for the NRC said that the agency considered the radioactive devices a "lower-level threat"; a bomb built with the materials could have contaminated an area about the length of a city block but would not have presented an immediate health hazard.
1987 congressional report
Twelve years into NRC operations, a 1987 congressional report entitled "NRC Coziness with Industry" concluded that the NRC "has not maintained an arms length regulatory posture with the commercial nuclear power industry ... [and] has, in some critical areas, abdicated its role as a regulator altogether". To cite three examples: A 1986 Congressional report found that NRC staff had provided valuable technical assistance to the utility seeking an operating license for the controversial Seabrook plant. In the late 1980s, the NRC 'created a policy' of non-enforcement by asserting its discretion not to enforce license conditions; between September 1989 and 1994, the 'NRC has either waived or chosen not to enforce regulations at nuclear power reactors over 340 times'. Finally, critics charge that the NRC has ceded important aspects of regulatory authority to the industry's own Institute for Nuclear Power Operations (INPO), an organization formed by utilities in response to the Three Mile Island Accident.
Nuclear Reactor License Renewal Program
One example involves the license renewal program that NRC initiated to extend the operating licenses for the nation's fleet of commercial nuclear reactors. Environmental impact statements (EIS) were prepared for each reactor to extend the operational period from 40 to 60 years. One study examined the EISs and found significant flaws, including failure to consider significant issues of concern. It also found that the NRC management had significantly underestimated the risk and consequences posed by a severe reactor accident such as a full-scale nuclear meltdown. NRC management asserted, without scientific evidence, that the risk of such accidents was so "Small" that the impacts could be dismissed and therefore no analysis of human and environmental impacts was even performed. Such a conclusion is scientifically indefensible given the experience of the Three Mile Island, Chernobyl, and Fukushima accidents. Another finding was that NRC had concealed the risk posed to the public at large by disregarding one of the most important EIS requirements, mandating that cumulative impacts be assessed (40 Code of Federal Regulations §1508.7). By disregarding this basic requirement, NRC effectively misrepresented the risk posed to the nation by approximately two orders of magnitude (i.e., the true risk is about 100 times greater than NRC represented). These findings were corroborated in a final report prepared by a special Washington State Legislature Nuclear Power Task Force, titled, "Doesn't NRC Address Consequences of Severe Accidents in EISs for re-licensing?"
Post-Fukushima
In Vermont, the day before the 2011 Tōhoku earthquake and tsunami that damaged Japan's Fukushima Daiichi Nuclear Power Plant, the NRC approved a 20-year extension for the license of Vermont Yankee Nuclear Power Plant, although the Vermont state legislature voted overwhelmingly to deny an extension. The plant had been found to be leaking radioactive materials through a network of underground pipes, which Entergy had denied under oath even existed. At a hearing in 2009 Tony Klein, chairman of the Vermont House Natural Resources and Energy Committee had asked the NRC about the pipes and the NRC also did not know they existed.
In March 2011, the Union of Concerned Scientists released a study critical of the NRC's 2010 performance as a regulator. The UCS said that over the years, it had found the NRC's enforcement of safety rules has not been "timely, consistent, or effective" and it cited 14 "near-misses" at U.S. plants in 2010 alone.
In April 2011, Reuters reported that diplomatic cables showed NRC sometimes being used as a sales tool to help push American technology to foreign governments, when "lobbying for the purchase of equipment made by Westinghouse Electric Company and other domestic manufacturers". This gives the appearance of a regulator which is acting in a commercial capacity, "raising concerns about a potential conflict of interest".
San Clemente Green, an environmental group opposed to the continued operation of the San Onofre Nuclear Plant, said in 2011 that instead of being a watchdog, the NRC too often rules in favor of nuclear plant operators.
In 2011, the Tōhoku earthquake and tsunami caused unprecedented damage and flooding at the Fukushima Daiichi Nuclear Power Plant. The loss of offsite power and flooding of onsite emergency diesel generators led to loss of coolant and the meltdown of three reactor cores. The Fukushima Daiichi nuclear disaster led to an uncontrolled release of radioactive contamination and forced the Japanese Government to evacuate approximately 100,000 citizens.
Gregory Jaczko was chairman of the NRC when the 2011 Fukushima disaster occurred in Japan. Jaczko looked for lessons for the US, and strengthened security regulations for nuclear power plants. For example, he supported the requirement that new plants be able to withstand an aircraft crash. On February 9, 2012, Jaczko cast the lone dissenting vote on plans to build the first new nuclear power plant in more than 30 years when the NRC voted 4–1 to allow Atlanta-based Southern Co to build and operate two new nuclear power reactors at its existing Vogtle Electric Generating Plant in Georgia. He cited safety concerns stemming from Japan's 2011 Fukushima nuclear disaster, saying "I cannot support issuing this license as if Fukushima never happened". In July 2011, Mark Cooper said that the Nuclear Regulatory Commission is "on the defensive to prove it is doing its job of ensuring safety". In October 2011, Jaczko described "a tension between wanting to move in a timely manner on regulatory questions, and not wanting to go too fast".
In 2011 Edward J. Markey, Democrat of Massachusetts, criticized the NRC's response to the Fukushima Daiichi nuclear disaster and the decision-making on the proposed Westinghouse AP1000 reactor design.
In 2011, a total of 45 groups and individuals from across the nation formally asked the NRC to suspend all licensing and other activities at 21 proposed nuclear reactor projects in 15 states until the NRC completed a thorough post-Fukushima nuclear disaster examination:
The petition seeks suspension of six existing reactor license renewal decisions (Columbia Generating Station, WA; Davis–Besse Nuclear Power Station, OH; Diablo Canyon Power Plant, CA; Indian Point Energy Center, NY; Pilgrim Nuclear Generating Station, MA; and Seabrook Station Nuclear Power Plant, NH); 13 new reactor combined construction permit and operating license decisions (Bellefonte Nuclear Generating Station Units 3 and 4, AL; Bell Bend; Callaway Nuclear Generating Station, MO; Calvert Cliffs Nuclear Generating Station, MD; Comanche Peak Nuclear Power Plant, TX; Enrico Fermi Nuclear Generating Station, MI; Levy County Nuclear Power Plant, FL; North Anna Nuclear Generating Station, VA; Shearon Harris Nuclear Power Plant, NC; South Texas Nuclear Generating Station, TX; Turkey Point Nuclear Generating Station, FL; Alvin W. Vogtle Electric Generating Plant, GA; and William States Lee III Nuclear Generating Station, SC); a construction permit decision (Bellefonte Units 1 and 2); and an operating license decision (Watts Bar Nuclear Generating Station, TN). In addition, the petition asks the NRC to halt proceedings to approve the standardized AP1000 and Economic Simplified Boiling Water Reactor designs.
The petitioners asked the NRC to supplement its own investigation by establishing an independent commission comparable to that set up in the wake of the less severe 1979 Three Mile Island accident. The petitioners included Public Citizen, Southern Alliance for Clean Energy, and San Luis Obispo Mothers for Peace.
Intentionally concealing reports concerning the risks of flooding
Following the Fukushima disaster, the NRC prepared a report in 2011 to examine the risk that dam failures posed on the nation's fleet of nuclear reactors. A redacted version of NRC's report on dam failures was posted on the NRC website on March 6. The original, un-redacted version was leaked to the public.
The un-redacted version which was leaked to the public highlights the threat that flooding poses to nuclear power plants located near large dams and substantiates claims that NRC management has intentionally misled the public for years about the severity of the flooding.
The leaked version of the report concluded that one-third of the U.S. nuclear fleet (34 plants) may face flooding hazards greater than they were designed to withstand. It also shows that NRC management was aware of some aspects of this risk for 15 years and yet it had done nothing to effectively address the problem. Some flooding events are so serious that they could result in a "severe" nuclear accident, up to, and including, a nuclear meltdown.
This criticism is corroborated by two NRC whistleblowers who accused their management of deliberately covering up information concerning the vulnerability to flooding, and of failing to take corrective actions despite being aware of these risks for years. Richard Perkins, a risk engineer with the NRC and the lead author of the leaked report, filed a complaint with the agency's Inspector General, asserting that NRC staff had improperly redacted information from the public version of his report "to prevent the disclosure of this safety information to the public because it will embarrass the agency," Perkins wrote. "Concurrently, the NRC concealed the information from the public."
Larry Criscione, a second NRC risk engineer, also raised concerns about the NRC withholding information concerning the risk of flooding. He stated that assertions by NRC's management that plants are "currently able to mitigate flooding events" were false.
David Lochbaum, a nuclear engineer and safety advocate with the Union of Concerned Scientists, said: "The redacted information shows that the NRC is lying to the American public about the safety of U.S. reactors."
The Oconee Nuclear Station has been shown to be at particular risk from flooding. An NRC letter dated 2009 states that "a Jocassee Dam failure is a credible event". It goes on to state that "NRC staff expressed concerns that Duke has not demonstrated that the [Oconee Nuclear Station] units will be adequately protected."
NRC's 2011 leaked report notes that "dam failure incidents are common". NRC estimated the odds that a dam constructed like Jocassee will fail at about 1 in 3,600 per year. Oconee is licensed to operate for another 20 years, so the odds of the Jocassee Dam failing over that period are about 1 in 180. NRC requires risks to be investigated if their frequency exceeds 1 in 10,000 per year; for a reactor operating over a period of 40 years, such risks must be evaluated if they have a chance greater than 1 in 250 of occurring.
NRC identified 34 reactors that lie downstream from a total of more than 50 dams, more than half of which are roughly the size of the Jocassee Dam. Assuming the NRC's failure rate applies to all of these dams, the chance that one will fail over the next 40 years is about one in four, or 25 percent. This failure rate does not include risks posed by earthquakes or terrorism, so the true probability may be higher.
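For illustration, treating dam failures as independent and reading "more than half" of the 50-plus dams as roughly 25 Jocassee-sized structures (an assumption made here only to reproduce the quoted figures), the numbers follow from the 1-in-3,600-per-year estimate:

20 \times \tfrac{1}{3600} \approx \tfrac{1}{180} \quad \text{(Jocassee over Oconee's remaining 20 years)}

1 - \left(1 - \tfrac{1}{3600}\right)^{25 \times 40} \approx 1 - e^{-0.28} \approx 0.24 \quad \text{(about one in four over 40 years)}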
This raised a second and potentially larger issue. NRC recently completed its license renewal program, which extended the operating licenses of the nation's fleet of nuclear reactors for an additional 20 years. In its relicensing environmental impact statements (EIS), NRC stated that the probability of a severe accident is so remote that the consequences can be dismissed from the analysis of impacts. Yet this conflicts with NRC's internal analyses, which concluded that flooding presented a serious human and environmental risk. Critics charge that if these relicensing EISs failed to evaluate the risks of flooding, the public cannot be confident that NRC did not mislead stakeholders concerning other risks, such as the potential for a nuclear meltdown.
NRC officials stated in June 2011 that US nuclear safety rules do not adequately weigh the risk of a single event that would knock out electricity from the grid and from emergency generators, as a quake and tsunami did in Japan. The NRC subsequently instructed agency staff to move forward with seven of the 12 safety recommendations put forward by a federal task force in July 2011. The recommendations include "new standards aimed at strengthening operators' ability to deal with a complete loss of power, ensuring plants can withstand floods and earthquakes and improving emergency response capabilities". The new safety standards will take up to five years to fully implement.
Jaczko warned power companies against complacency and said the agency must "push ahead with new rules prompted by the nuclear crisis in Japan, while also resolving long-running issues involving fire protection and a new analysis of earthquake risks".
The U.S. Nuclear Regulatory Commission has also been criticized for its reluctance to allow innovation and experimentation, even for controlled and purportedly safe methods of deploying nuclear power that countries such as Poland are approving before the United States, as reported by Reason magazine in May 2022.
Exceeding powers licensing off-site interim storage facility
In September 2021 the NRC issued a license for a privately operated temporary consolidated interim storage facility (CISF) for spent nuclear fuel in Andrews County, Texas. However, a group including the State of Texas, which had passed a law in 2022 prohibiting the storage of high-level waste in the state, petitioned for a court review of the license. In August 2023 the United States Court of Appeals for the Fifth Circuit ruled that the NRC does not have the authority from Congress under the Atomic Energy Act or the Nuclear Waste Policy Act to license such a temporary storage facility that is not at a nuclear power station or federal site, nullifying the purported license. Another CISF in New Mexico is similarly being challenged in the United States Court of Appeals for the Tenth Circuit.
See also
International Atomic Energy Agency
International Nuclear Regulators' Association
List of canceled nuclear plants in the United States
Nuclear power in the United States
Nuclear renaissance in the United States
Nuclear safety in the United States
Title 10 of the Code of Federal Regulations
Atomic Safety and Licensing Board
ADVANCE Act
References
External links
Nuclear Regulatory Commission (official website)
Nuclear Regulatory Commission in the Federal Register
NRC public blog
NRC list of power-producing nuclear reactors
NRC list of non-power-producing reactors
Canceled Nuclear Units Ordered in the US
The Nuclear Regulatory Commission: Policy and Governance Challenges: Joint Hearing before the Subcommittee on Energy and Power and the Subcommittee on Environment and the Economy of the Committee on Energy and Commerce, House of Representatives, One Hundred Thirteenth Congress, First Session, February 28, 2013
The future of the Nuclear Regulatory Commission in the Bulletin of the Atomic Scientists
Technical Report Archive and Image Library (TRAIL) from technicalreports.org
Governmental nuclear organizations
Independent agencies of the United States government
Nuclear energy in the United States
Nuclear regulatory organizations
Nuclear history of the United States
1974 establishments in the United States
Government agencies established in 1974
Rockville, Maryland | Nuclear Regulatory Commission | [
"Engineering"
] | 5,953 | [
"Governmental nuclear organizations",
"Nuclear regulatory organizations",
"Nuclear organizations"
] |
165,259 | https://en.wikipedia.org/wiki/Time%20value%20of%20money | The time value of money refers to the fact that there is normally a greater benefit to receiving a sum of money now rather than an identical sum later. It may be seen as an implication of the later-developed concept of time preference.
The time value of money refers to the observation that it is better to receive money sooner than later. Money you have today can be invested to earn a positive rate of return, producing more money tomorrow. Therefore, a dollar today is worth more than a dollar in the future.
The time value of money is among the factors considered when weighing the opportunity costs of spending rather than saving or investing money. As such, it is among the reasons why interest is paid or earned: interest, whether it is on a bank deposit or debt, compensates the depositor or lender for the loss of their use of their money. Investors are willing to forgo spending their money now only if they expect a favorable net return on their investment in the future, such that the increased value to be available later is sufficiently high to offset both the preference to spending money now and inflation (if present); see required rate of return.
History
The Talmud (~500 CE) recognizes the time value of money. In Tractate Makkos page 3a the Talmud discusses a case where witnesses falsely claimed that the term of a loan was 30 days when it was actually 10 years. The false witnesses must pay the difference of the value of the loan "in a situation where he would be required to give the money back (within) thirty days..., and that same sum in a situation where he would be required to give the money back (within) 10 years...The difference is the sum that the testimony of the (false) witnesses sought to have the borrower lose; therefore, it is the sum that they must pay."
The notion was later described by Martín de Azpilcueta (1491–1586) of the School of Salamanca.
Calculations
Time value of money problems involve the net value of cash flows at different points in time.
In a typical case, the variables might be: a balance (the real or nominal value of a debt or a financial asset in terms of monetary units), a periodic rate of interest, the number of periods, and a series of cash flows. (In the case of a debt, cash flows are payments against principal and interest; in the case of a financial asset, these are contributions to or withdrawals from the balance.) More generally, the cash flows may not be periodic but may be specified individually. Any of these variables may be the independent variable (the sought-for answer) in a given problem. For example, one may know that: the interest is 0.5% per period (per month, say); the number of periods is 60 (months); the initial balance (of the debt, in this case) is 25,000 units; and the final balance is 0 units. The unknown variable may be the monthly payment that the borrower must pay.
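A minimal sketch of that calculation in Python, using the standard ordinary-annuity payment formula given later in the article (the variable names are illustrative, not part of the example above):

i = 0.005        # periodic (monthly) interest rate
n = 60           # number of periods (months)
pv = 25_000      # initial balance of the debt
# level payment of an ordinary annuity that amortises the balance to zero
payment = pv * i / (1 - (1 + i) ** -n)
print(round(payment, 2))  # about 483.32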
For example, £100 invested for one year, earning 5% interest, will be worth £105 after one year; therefore, £100 paid now and £105 paid exactly one year later both have the same value to a recipient who expects 5% interest assuming that inflation would be zero percent. That is, £100 invested for one year at 5% interest has a future value of £105 under the assumption that inflation would be zero percent.
This principle allows for the valuation of a likely stream of income in the future, in such a way that annual incomes are discounted and then added together, thus providing a lump-sum "present value" of the entire income stream; all of the standard calculations for time value of money derive from the most basic algebraic expression for the present value of a future sum, "discounted" to the present by an amount equal to the time value of money. For example, the future value sum FV to be received in one year is discounted at the rate of interest r to give the present value sum PV:
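In standard notation this one-period relationship is

PV = \frac{FV}{1+r}

so in the example above, 105 / 1.05 = 100.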
Some standard calculations based on the time value of money are:
Present value: The current worth of a future sum of money or stream of cash flows, given a specified rate of return. Future cash flows are "discounted" at the discount rate; the higher the discount rate, the lower the present value of the future cash flows. Determining the appropriate discount rate is the key to valuing future cash flows properly, whether they be earnings or obligations.
Present value of an annuity: An annuity is a series of equal payments or receipts that occur at evenly spaced intervals. Leases and rental payments are examples. The payments or receipts occur at the end of each period for an ordinary annuity while they occur at the beginning of each period for an annuity due.
Present value of a perpetuity is an infinite and constant stream of identical cash flows.
Future value: The value of an asset or cash at a specified date in the future, based on the value of that asset in the present.
Future value of an annuity (FVA): The future value of a stream of payments (annuity), assuming the payments are invested at a given rate of interest.
There are several basic equations that represent the equalities listed above. The solutions may be found using (in most cases) the formulas, a financial calculator or a spreadsheet. The formulas are programmed into most financial calculators and several spreadsheet functions (such as PV, FV, RATE, NPER, and PMT).
For any of the equations below, the formula may also be rearranged to determine one of the other unknowns. In the case of the standard annuity formula, there is no closed-form algebraic solution for the interest rate (although financial calculators and spreadsheet programs can readily determine solutions through rapid trial and error algorithms).
These equations are frequently combined for particular uses. For example, bonds can be readily priced using these equations. A typical coupon bond is composed of two types of payments: a stream of coupon payments similar to an annuity, and a lump-sum return of capital at the end of the bond's maturity—that is, a future payment. The two formulas can be combined to determine the present value of the bond.
An important note is that the interest rate i is the interest rate for the relevant period. For an annuity that makes one payment per year, i will be the annual interest rate. For an income or payment stream with a different payment schedule, the interest rate must be converted into the relevant periodic interest rate. For example, a monthly rate for a mortgage with monthly payments requires that the interest rate be divided by 12 (see the example below). See compound interest for details on converting between different periodic interest rates.
The rate of return in the calculations can be either the variable solved for, or a predefined variable that measures a discount rate, interest, inflation, rate of return, cost of equity, cost of debt or any number of other analogous concepts. The choice of the appropriate rate is critical to the exercise, and the use of an incorrect discount rate will make the results meaningless.
For calculations involving annuities, it must be decided whether the payments are made at the end of each period (known as an ordinary annuity), or at the beginning of each period (known as an annuity due). When using a financial calculator or a spreadsheet, it can usually be set for either calculation. The following formulas are for an ordinary annuity. For the answer for the present value of an annuity due, the PV of an ordinary annuity
can be multiplied by (1 + i).
Formula
The following formulas use these common variables:
PV is the value at time zero (present value)
FV is the value at time n (future value)
A is the value of the individual payments in each compounding period
n is the number of periods (not necessarily an integer)
i is the interest rate at which the amount compounds each period
g is the growing rate of payments over each time period
Future value of a present sum
The future value (FV) formula is similar and uses the same variables.
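In the notation above, the standard form is

FV = PV \cdot (1+i)^{n}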
Present value of a future sum
The present value formula is the core formula for the time value of money; each of the other formulas is derived from this formula. For example, the annuity formula is the sum of a series of present value calculations.
The present value (PV) formula has four variables, each of which can be solved for by numerical methods:
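In the notation above, the formula is conventionally written

PV = \frac{FV}{(1+i)^{n}}

the four variables being PV, FV, i and n.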
The cumulative present value of future cash flows can be calculated by summing the contributions of FVt, the value of cash flow at time t:
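Writing FV_t for the cash flow at time t, this sum is

PV = \sum_{t=1}^{n} \frac{FV_t}{(1+i)^{t}}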
Note that this series can be summed for a given value of n, or when n is ∞. This is a very general formula, which leads to several important special cases given below.
Present value of an annuity for n payment periods
In this case the cash flow values remain the same throughout the n periods. The present value of an annuity (PVA) formula has four variables, each of which can be solved for by numerical methods:
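With A the payment per period, the standard closed form is

PV(A) = \frac{A}{i}\left[1 - \frac{1}{(1+i)^{n}}\right]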
To get the PV of an annuity due, multiply the above equation by (1 + i).
Present value of a growing annuity
In this case each cash flow grows by a factor of (1+g). Similar to the formula for an annuity, the present value of a growing annuity (PVGA) uses the same variables with the addition of g as the rate of growth of the annuity (A is the annuity payment in the first period). This is a calculation that is rarely provided for on financial calculators.
Where i ≠ g :
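In the notation above (A the first-period payment, g the per-period growth rate), the standard closed form for this case is

PV_{GA} = \frac{A}{i-g}\left[1 - \left(\frac{1+g}{1+i}\right)^{n}\right]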
Where i = g :
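When the growth rate equals the interest rate each discounted payment has the same value A/(1+i), so the sum collapses to

PV_{GA} = \frac{A \cdot n}{1+i}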
To get the PV of a growing annuity due, multiply the above equation by (1 + i).
Present value of a perpetuity
A perpetuity is payments of a set amount of money that occur on a routine basis and continue forever. When n → ∞, the PV of a perpetuity (a perpetual annuity) formula becomes a simple division.
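Taking the limit of the ordinary-annuity formula as n → ∞ gives

PV = \frac{A}{i}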
Present value of a growing perpetuity
When the perpetual annuity payment grows at a fixed rate (g, with g < i) the value is determined according to the following formula, obtained by setting n to infinity in the earlier formula for a growing annuity:
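In the same notation the result is

PV = \frac{A}{i-g}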
In practice, there are few securities with precise characteristics, and the application of this valuation approach is subject to various qualifications and modifications. Most importantly, it is rare to find a growing perpetual annuity with fixed rates of growth and true perpetual cash flow generation. Despite these qualifications, the general approach may be used in valuations of real estate, equities, and other assets.
This is the well-known Gordon growth model used for stock valuation.
Future value of an annuity
The future value (after n periods) of an annuity (FVA) formula has four variables, each of which can be solved for by numerical methods:
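In the notation above, the standard closed form is

FV(A) = \frac{A}{i}\left[(1+i)^{n} - 1\right]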
To get the FV of an annuity due, multiply the above equation by (1 + i).
Future value of a growing annuity
The future value (after n periods) of a growing annuity (FVA) formula has five variables, each of which can be solved for by numerical methods:
Where i ≠ g :
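In the notation above, the standard closed form for this case is

FV_{GA} = \frac{A}{i-g}\left[(1+i)^{n} - (1+g)^{n}\right]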
Where i = g :
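When the growth rate equals the interest rate the expression reduces to

FV_{GA} = A \cdot n \cdot (1+i)^{n-1}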
Formula table
The following table summarizes the different formulas commonly used in calculating the time value of money. These values are often displayed in tables where the interest rate and time are specified.
Notes:
A is a fixed payment amount, every period
G is the initial payment amount of an increasing payment amount, that starts at G and increases by G for each subsequent period.
D is the initial payment amount of an exponentially (geometrically) increasing payment amount, that starts at D and increases by a factor of (1+g) each subsequent period.
Derivations
Annuity derivation
The formula for the present value of a regular stream of future payments (an annuity) is derived from a sum of the formula for future value of a single future payment, as below, where C is the payment amount and n the period.
A single payment C at future time m has the following future value at future time n:
Summing over all payments from time 1 to time n, then reversing t
Note that this is a geometric series, with the initial value being a = C, the multiplicative factor being 1 + i, with n terms. Applying the formula for geometric series, we get
The present value of the annuity (PVA) is obtained by dividing the future value by (1 + i)^n.
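Written out, the steps referred to in the preceding sentences are (payments C at times 1 through n, first accumulated to time n and then discounted back):

C(1+i)^{n-m} \quad \text{(future value at time } n \text{ of the payment made at time } m)

FVA = \sum_{m=1}^{n} C(1+i)^{n-m} = \sum_{k=0}^{n-1} C(1+i)^{k} = C \cdot \frac{(1+i)^{n}-1}{i}

PVA = \frac{FVA}{(1+i)^{n}} = \frac{C}{i}\left[1 - (1+i)^{-n}\right]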
Another simple and intuitive way to derive the future value of an annuity is to consider an endowment, whose interest is paid as the annuity, and whose principal remains constant. The principal of this hypothetical endowment can be computed as that whose interest equals the annuity payment amount:
Note that no money enters or leaves the combined system of endowment principal + accumulated annuity payments, and thus the future value of this system can be computed simply via the future value formula:
Initially, before any payments, the present value of the system is just the endowment principal, A/i. At the end, the future value is the endowment principal (which is the same) plus the future value of the total annuity payments (FVA). Plugging this back into the equation:
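Carrying that through, with A the annuity payment and A/i the endowment principal:

\frac{A}{i}(1+i)^{n} = \frac{A}{i} + FVA \quad\Longrightarrow\quad FVA = \frac{A}{i}\left[(1+i)^{n} - 1\right]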
Perpetuity derivation
Without showing the formal derivation here, the perpetuity formula is derived from the annuity formula: the factor 1 − (1 + i)^{−n} can be seen to approach the value of 1 as n grows larger. At infinity it is equal to 1, leaving A/i as the only term remaining.
Continuous compounding
Rates are sometimes converted into the continuous compound interest rate equivalent because the continuous equivalent is more convenient (for example, more easily differentiated). Each of the formulas above may be restated in their continuous equivalents. For example, the present value at time 0 of a future payment at time t can be restated in the following way, where e is the base of the natural logarithm and r is the continuously compounded rate:
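With those symbols, the restated formula is

PV = FV \cdot e^{-rt}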
This can be generalized to discount rates that vary over time: instead of a constant discount rate r, one uses a function of time r(t). In that case the discount factor, and thus the present value, of a cash flow at time T is given by the integral of the continuously compounded rate r(t):
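With a time-dependent rate, the discount factor applied to a cash flow at time T is

\exp\left(-\int_{0}^{T} r(t)\,dt\right)

and the present value is the future cash flow multiplied by this factor.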
Indeed, a key reason for using continuous compounding is to simplify the analysis of varying discount rates and to allow one to use the tools of calculus. Further, for interest accrued and capitalized overnight (hence compounded daily), continuous compounding is a close approximation for the actual daily compounding. More sophisticated analysis includes the use of differential equations, as detailed below.
Examples
Using continuous compounding yields the following formulas for various instruments:
Annuity
Perpetuity
Growing annuity
Growing perpetuity
Annuity with continuous payments
These formulas assume that payment A is made in the first payment period and annuity ends at time t.
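A sketch of the last case above – payments flowing continuously at rate A per unit time, at a constant continuously compounded rate r – gives (letting t → ∞ for the corresponding perpetuity)

PV = \int_{0}^{t} A e^{-ru}\,du = \frac{A\left(1 - e^{-rt}\right)}{r}, \qquad PV_{\text{perpetuity}} = \frac{A}{r}

The discretely paid instruments listed above follow from the earlier closed forms by substituting the equivalent per-period rate i = e^{r} - 1.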
Differential equations
Ordinary and partial differential equations (ODEs and PDEs) – equations involving derivatives of functions of one (respectively, multiple) variables – are ubiquitous in more advanced treatments of financial mathematics. While time value of money can be understood without using the framework of differential equations, the added sophistication sheds additional light on time value, and provides a simple introduction before considering more complicated and less familiar situations. This exposition follows .
The fundamental change that the differential equation perspective brings is that, rather than computing a number (the present value now), one computes a function (the present value now or at any point in the future). This function may then be analysed – asking how its value changes over time – or compared with other functions.
Formally, the statement that "value decreases over time" is given by defining the linear differential operator as:
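A definition consistent with the description in the next sentence (the script symbol ℒ is simply a notation chosen here) is

\mathcal{L} := -\partial_t + r(t)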
This states that value decreases (−) over time (∂t) at the discount rate (r(t)). Applied to a function it yields:
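With V(t) denoting the value function, this is

\mathcal{L} V = -\frac{\partial V}{\partial t} + r(t)\,V(t)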
For an instrument whose payment stream is described by f(t), the value V(t) satisfies the inhomogeneous first-order ODE ℒV = f ("inhomogeneous" because one has f rather than 0, and "first-order" because one has first derivatives but no higher). This encodes the fact that when any cash flow occurs, the value of the instrument changes by the value of the cash flow (if you receive a £10 coupon, the remaining value decreases by exactly £10).
The standard tool in the analysis of ODEs is Green's functions, from which other solutions can be built. In terms of time value of money, the Green's function (for the time value ODE) is the value of a bond paying £1 at a single point in time u – the value of any other stream of cash flows can then be obtained by taking combinations of this basic cash flow. In mathematical terms, this instantaneous cash flow is modeled as a Dirac delta function δ(t − u).
The Green's function for the value at time t of a £1 cash flow at time u is
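Writing this value as b(t; u) (a notation chosen here for convenience), it takes the form

b(t; u) = H(u - t)\,\exp\left(-\int_{t}^{u} r(v)\,dv\right)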
where H is the Heaviside step function – the notation "b(t; u)" is to emphasize that u is a parameter (fixed in any instance – the time when the cash flow will occur), while t is a variable (time). In other words, future cash flows are exponentially discounted (exp) by the sum (integral, ∫) of the future discount rates (from t up to u in the future, r(v) for the discount rates), while past cash flows are worth 0 (H(u − t) = 0 for u < t), because they have already occurred. Note that the value at the moment of a cash flow is not well-defined – there is a discontinuity at that point, and one can use a convention (assume cash flows have already occurred, or not already occurred), or simply not define the value at that point.
In case the discount rate is constant, this simplifies to
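With a constant rate r the integral collapses, giving

b(t; u) = H(u - t)\,e^{-(u-t)r}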
where u − t is the "time remaining until cash flow".
Thus for a stream of cash flows f(u) ending by time T (which can be set to infinity for no time horizon) the value at time t is given by combining the values of these individual cash flows:
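In the notation above, this combination is the integral

V(t) = \int_{t}^{T} f(u)\,b(t; u)\,du = \int_{t}^{T} f(u)\exp\left(-\int_{t}^{u} r(v)\,dv\right)du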
This formalizes time value of money to future values of cash flows with varying discount rates, and is the basis of many formulas in financial mathematics, such as the Black–Scholes formula with varying interest rates.
See also
Actuarial science
Discounted cash flow
Earnings growth
Exponential growth
Financial management
Hyperbolic discounting
Internal rate of return
Net present value
Option time value
Real versus nominal value (economics)
Return on time invested
Snowball effect
Present value interest factor
Notes
References
Crosson, S.V., and Needles, B.E.(2008). Managerial Accounting (8th Ed). Boston: Houghton Mifflin Company.
External links
Time Value of Money ebook
Engineering economics
Actuarial science
Interest
Intertemporal economics
Money | Time value of money | [
"Mathematics",
"Engineering"
] | 3,857 | [
"Engineering economics",
"Applied mathematics",
"Actuarial science"
] |
165,266 | https://en.wikipedia.org/wiki/Weighted%20average%20cost%20of%20capital | The weighted average cost of capital (WACC) is the rate that a company is expected to pay on average to all its security holders to finance its assets. The WACC is commonly referred to as the firm's cost of capital. Importantly, it is dictated by the external market and not by management. The WACC represents the minimum return that a company must earn on an existing asset base to satisfy its creditors, owners, and other providers of capital, or they will invest elsewhere.
Companies raise money from a number of sources: common stock, preferred stock and related rights, straight debt, convertible debt, exchangeable debt, employee stock options, pension liabilities, executive stock options, governmental subsidies, and so on. Different securities, which represent different sources of finance, are expected to generate different returns. The WACC is calculated taking into account the relative weights of each component of the capital structure. The more complex the company's capital structure, the more laborious it is to calculate the WACC.
Companies can use WACC to see if the investment projects available to them are worthwhile to undertake.
Calculation
In general, the WACC can be calculated with the following formula:
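Writing MV_i for the market value of capital source i and r_i for its required rate of return (the definitions follow in the next sentence), the general form is

\text{WACC} = \frac{\sum_{i=1}^{N} r_i \cdot MV_i}{\sum_{i=1}^{N} MV_i}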
where N is the number of sources of capital (securities, types of liabilities); r_i is the required rate of return for security i; and MV_i is the market value of all outstanding securities i.
In the case where the company is financed with only equity and debt, the average cost of capital is computed as follows:
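Using the symbols defined in the next sentence, this is

\text{WACC} = \frac{D}{D+E}\,K_d + \frac{E}{D+E}\,K_e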
where D is the total debt, E is the total shareholder's equity, K_d is the cost of debt, and K_e is the cost of equity. The market values of debt and equity should be used when computing the weights in the WACC formula.
Tax effects
Tax effects can be incorporated into this formula. For example, the WACC for a company financed by one type of shares with the total market value of MV_e and cost of equity R_e, and one type of bonds with the total market value of MV_d and cost of debt R_d, in a country with corporate tax rate t, is calculated as:
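In the symbols just defined, the standard after-tax form is

\text{WACC} = \frac{MV_e}{MV_e + MV_d}\cdot R_e + \frac{MV_d}{MV_e + MV_d}\cdot R_d\cdot(1-t)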
This calculation can vary significantly due to the existence of many plausible proxies for each element. As a result, a fairly wide range of values for the WACC of a given firm in a given year may appear defensible.
Components
Debt
The firm's debt component is stated as kd, and since there is a tax benefit from interest payments, the after-tax WACC component is kd(1 − T), where T is the tax rate.
Increasing the debt component under WACC has advantages including:
no loss of control (voting rights) that would come from other sources,
upper limit is placed on share of profits,
flotation costs are typically lower than equity, and
interest expense is tax deductible.
But there are also disadvantages of the debt component including:
the firm is legally obliged to make payments no matter how tight the funds on hand are,
in the case of bonds full face value comes due at one time, and
taking on more debt = taking on more financial risk (more systematic risk) requiring higher cash flows.
Equity
Weighted average cost of capital equation:
WACC = (Wd)[(Kd)(1 − t)] + (Wpf)(Kpf) + (Wce)(Kce)
Cost of new equity should be the adjusted cost for any underwriting fees termed flotation costs (F):
Ke = D1 / [P0(1 − F)] + g; where F = flotation costs, D1 is the dividend expected in the next period, P0 is the current price of the stock, and g is the growth rate.
There are 3 ways of calculating Ke:
Capital Asset Pricing Model
Dividend Discount Method
Bond Yield Plus Risk Premium Approach
The equity component has advantages for the firm including:
no legal obligation to pay (depends on class of shares) as opposed to debt,
no maturity (unlike e.g. bonds),
lower financial risk, and
it could be cheaper than debt with good prospects of profitability.
But also disadvantages including:
new equity dilutes current ownership share of profits and voting rights (impacting control),
cost of underwriting for equity is much higher than for debt,
too much equity = target for a leveraged buy-out by another firm, and
no tax shield, dividends are not tax deductible, and may exhibit double taxation.
Marginal cost of capital schedule
Marginal cost of capital (MCC) schedule or an investment opportunity curve is a graph that relates the firm's weighted cost of each unit of capital to the total amount of new capital raised. The first step in preparing the MCC schedule is to rank the projects using internal rate of return (IRR). The higher the IRR the better off a project is.
See also
Beta coefficient
Capital asset pricing model
Cost of capital
Discounted cash flow
Economic value added
Hamada's equation
Internal rate of return
Minimum acceptable rate of return
Modigliani–Miller theorem
Net present value
Opportunity cost
References
External links
Video about practical application of the WACC approach
Financial capital
Mathematical finance
Production economics
Economics curves
Costs | Weighted average cost of capital | [
"Mathematics"
] | 1,034 | [
"Applied mathematics",
"Mathematical finance"
] |
165,299 | https://en.wikipedia.org/wiki/PARAM | Airawat
PARAM is a series of Indian supercomputers designed and assembled by the Centre for Development of Advanced Computing (C-DAC) in Pune. PARAM means "supreme" in the Sanskrit language, whilst also creating an acronym for "PARAllel Machine". As of November 2022, the fastest machine in the series is the PARAM Airawat (PARAM Siddhi-AI is second), which ranks 63rd in the world, with an Rpeak of 5.267 petaflops.
History
C-DAC was created in November 1987, originally as the Centre for Development of Advanced Computing Technology (C-DACT). This was in response to issues purchasing supercomputers from foreign sources. The Indian Government decided to try and develop indigenous computing technology.
PARAM 8000
The PARAM 8000 was the first machine in the series and was built from scratch. A prototype was benchmarked at the "1990 Zurich Super-computing Show": of the machines that ran at the show it came second only to one from the United States.
A 64-node machine was delivered in August 1991. Each node used Inmos T800/T805 transputers. A 256-node machine had a theoretical peak performance of 1 GFLOPS, although in practice it sustained 100–200 MFLOPS. PARAM 8000 was a distributed memory MIMD architecture with a reconfigurable interconnection network.
The PARAM 8000 was noted to be 28 times more powerful than the Cray X-MP that the government originally requested, for the same $10 million cost quoted for it.
Exports
The computer was a success and was exported to Germany, United Kingdom and Russia. Apart from taking over the home market, PARAM attracted 14 other buyers with its relatively low price tag of $350,000.
The computer was also exported to the ICAD Moscow in 1991 under Russian collaboration.
PARAM 8600
PARAM 8600 was an improvement over PARAM 8000. In 1992 C-DAC realised its machines were underpowered and wished to integrate the newly released Intel i860 processor. Each node was created with one i860 and four Inmos T800 transputers. The same PARAS programming environment was used for both the PARAM 8000 and 8600; this meant that programs were portable. Each 8600 cluster was noted to be as powerful as 4 PARAM 8000 clusters.
PARAM 9000
The PARAM 9000 was designed to merge cluster computing and massively parallel processing workloads. It was first demonstrated in 1994. The design was changed to be modular so that newer processors could be easily accommodated. Typically a system used 32–40 processors, however it could be scaled up to 200 CPUs using the Clos network topology. The PARAM 9000/SS was the SuperSPARC II processor variant, the PARAM 9000/US used the UltraSPARC processor, and the PARAM 9000/AA used the DEC Alpha.
PARAM 10000
The PARAM 10000 was unveiled in 1998 as part of C-DAC's second mission. PARAM 10000 used several independent nodes, each based on the Sun Enterprise 250 server; each such server contained two 400 MHz UltraSPARC II processors. The base configuration had three compute nodes and a server node. The peak speed of this base system was 6.4 GFLOPS. A typical system would contain 160 CPUs and be capable of 100 GFLOPS, but it was easily scalable to the TFLOPS range. It was exported to Russia and Singapore.
Further computers
Further computers were made in the PARAM series as one-off supercomputers, rather than serial production machines. From the late 2010s many machines were created as part of the National Supercomputing Mission.
Supercomputer summary
PARAMNet
PARAMNet is a high speed, high bandwidth, low latency network developed for the PARAM series. The original PARAMNet used an 8-port cascadable non-blocking switch developed by C-DAC. Each port provided 400 Mbit/s in both directions (thus 2×400 Mbit/s), as it was a full-duplex network. It was first used in PARAM 10000.
PARAMNet II, introduced with PARAM Padma, is capable of 2.5 Gbit/s while working full-duplex. It supports interfaces like Virtual Interface Architecture and Active messages. It uses 8 or 16 port SAN switches.
PARAMNet-3, used in PARAM Yuva and PARAM Yuva-II, is next generation high performance networking component for building supercomputing systems. PARAMNet-3 consists of tightly integrated hardware and software components. The hardware components consist of Network Interface Cards (NIC) based on CDAC's fourth generation communication co-processor "GEMINI", and modular 48-port Packet Routing Switch "ANVAY". The software component "KSHIPRA" is a lightweight protocol stack designed to exploit capabilities of hardware and to provide industry standard interfaces to the applications. Other application areas identified for deployment of PARAMNet-3 are storage and database applications.
Operators
PARAM supercomputers are used by both public and private operators for various purposes. As of 2008, 52 PARAMs have been deployed. Of these, 8 are located in Russia, Singapore, Germany and Canada.
PARAMs have also been sold to Tanzania, Armenia, Saudi Arabia, Singapore, Ghana, Myanmar, Nepal, Kazakhstan, Uzbekistan, and Vietnam.
See also
EKA
SAGA-220, a 220 TeraFLOP supercomputer built by ISRO
Supercomputing in India
Wipro Supernova
Notes
References
External links
PARAM Padma information page from C-DAC website
National Supercomputing Mission, INDIA
Supercomputers
Information technology in India
Supercomputing in India | PARAM | [
"Technology"
] | 1,215 | [
"Supercomputers",
"Supercomputing"
] |
165,320 | https://en.wikipedia.org/wiki/Jodrell%20Bank%20Observatory | Jodrell Bank Observatory ( ) in Cheshire, England hosts a number of radio telescopes as part of the Jodrell Bank Centre for Astrophysics at the University of Manchester. The observatory was established in 1945 by Bernard Lovell, a radio astronomer at the university, to investigate cosmic rays after his work on radar in the Second World War. It has since played an important role in the research of meteoroids, quasars, pulsars, masers, and gravitational lenses, and was heavily involved with the tracking of space probes at the start of the Space Age.
The main telescope at the observatory is the Lovell Telescope. Its diameter of 76 m (250 ft) makes it the third largest steerable radio telescope in the world. There are three other active telescopes at the observatory: the Mark II, and the 42 ft (12.8 m) and 7 m diameter radio telescopes. Jodrell Bank Observatory is the base of the Multi-Element Radio Linked Interferometer Network (MERLIN), a National Facility run by the University of Manchester on behalf of the Science and Technology Facilities Council.
The Jodrell Bank Visitor Centre and an arboretum are in Lower Withington, and the Lovell Telescope and the observatory near Goostrey and Holmes Chapel. The observatory is reached from the A535. The Crewe to Manchester Line passes by the site, and Goostrey station is a short distance away. In 2019, the observatory became a UNESCO World Heritage Site.
Early years
Jodrell Bank was first used for academic purposes in 1939 when the University of Manchester's Department of Botany purchased three fields from the Leighs. It is named from a nearby rise in the ground, Jodrell Bank, which was named after William Jauderell, an archer whose descendants lived at the mansion that is now Terra Nova School. The site was extended in 1952 by the purchase of a farm from George Massey on which the Lovell Telescope was built.
The site was first used for astrophysics in 1945, when Bernard Lovell used some equipment left over from World War II, including a gun laying radar, to investigate cosmic rays. The equipment was a GL II radar system working at a wavelength of 4.2 m, provided by J. S. Hey. He intended to use the equipment in Manchester, but electrical interference from the trams on Oxford Road prevented him from doing so. He moved the equipment to Jodrell Bank, south of the city, on 10 December 1945. Lovell's main research was transient radio echoes, which he confirmed were from ionized meteor trails by October 1946. The first staff were Alf Dean and Frank Foden, who observed meteors with the naked eye while Lovell observed the corresponding radar echoes on the equipment. The first time Lovell turned the radar on – 14 December 1945 – the Geminids meteor shower was at a maximum.
Over the next few years, Lovell accumulated more ex-military radio hardware, including a portable cabin, known as a "Park Royal" in the military (see Park Royal Vehicles). The first permanent building was near to the cabin and was named after it.
Searchlight telescope
A searchlight was loaned to Jodrell Bank in 1946 by the army; a broadside array was constructed on its mount by J. Clegg, consisting of 7 Yagi–Uda antenna elements. It was used for astronomical observations in October 1946.
On 9 and 10 October 1946, the telescope observed ionisation in the atmosphere caused by meteors in the Giacobinids meteor shower. When the antenna was turned by 90 degrees at the maximum of the shower, the number of detections dropped to the background level, proving that the transient signals detected by radar were from meteors. The telescope was then used to determine the radiant points for meteors. This was possible as the echo rate is at a minimum at the radiant point, and a maximum at 90 degrees to it. The telescope and other receivers on the site studied the auroral streamers that were visible in early August 1947.
Transit Telescope
The Transit Telescope was a parabolic reflector zenith telescope built in 1947. At the time, it was the world's largest radio telescope. It consisted of a wire mesh suspended from a ring of scaffold poles, which focussed radio signals on a focal point above the ground. The telescope mainly looked directly upwards, but the direction of the beam could be changed by small amounts by tilting the mast to change the position of the focal point. The focal mast was changed from timber to steel before construction was complete.
The telescope was replaced by the steerable Lovell Telescope, and the Mark II telescope was subsequently built at the same location.
The telescope could map a ± 15-degree strip around the zenith at 72 and 160 MHz, with a resolution at 160 MHz of 1 degree. It discovered radio noise from the Great Nebula in Andromeda – the first definite detection of an extragalactic radio source – and the remnants of Tycho's Supernova in the radio frequency; at the time it had not been discovered by optical astronomy.
Lovell Telescope
The "Mark I" telescope, now known as the Lovell Telescope, was the world's largest steerable dish radio telescope, in diameter, when it was constructed in 1957; it is now the third largest, after the Green Bank telescope in West Virginia and the Effelsberg telescope in Germany. Part of the gun turret mechanisms from the First World War battleships and were reused in the telescope's motor system. The telescope became operational in mid-1957, in time for the launch of the Soviet Union's Sputnik 1, the world's first artificial satellite. The telescope was the only one able to track Sputnik's booster rocket by radar; first locating it just before midnight on 12 October 1957, eight days after its launch.
In the following years, the telescope tracked various space probes. Between 11 March and 12 June 1960, it tracked the United States' NASA-launched Pioneer 5 probe. The telescope sent commands to the probe, including those to separate it from its carrier rocket and turn on its more powerful transmitter when the probe was eight million miles away. It received data from the probe, the only telescope in the world capable of doing so. In February 1966, Jodrell Bank was asked by the Soviet Union to track its unmanned Moon lander Luna 9 and recorded its facsimile transmission of photographs from the Moon's surface. The photographs were sent to the British press and published before the Soviets made them public.
In 1969, the Soviet Union's Luna 15 was also tracked. A recording of the moment when Jodrell Bank's scientists observed the mission was released on 3 July 2009.
With the support of Sir Bernard Lovell, the telescope tracked Russian satellites. Satellite and space probe observations were shared with the US Department of Defense satellite tracking research and development activity at Project Space Track.
Tracking space probes only took a fraction of the Lovell telescope's observing time, and the remainder was used for scientific observations including using radar to measure the distance to the Moon and to Venus; observations of astrophysical masers around star-forming regions and giant stars; observations of pulsars (including the discovery of millisecond pulsars and the first pulsar in a globular cluster); and observations of quasars and gravitational lenses (including the detection of the first gravitational lens and the first Einstein ring). The telescope has also been used for SETI observations.
Mark II and III telescopes
The Mark II telescope is an elliptical radio telescope, with a major axis of and a minor axis of . It was constructed in 1964. As well as operating as a standalone telescope, it has been used as an interferometer with the Lovell Telescope, and is now primarily used as part of the MERLIN project.
The Mark III telescope, the same size as the Mark II, was constructed to be transportable but it was never moved from Wardle, near Nantwich, where it was used as part of MERLIN. It was built in 1966 and decommissioned in 1996.
Mark IV, V and VA telescope proposals
The Mark IV, V and VA telescope proposals were put forward in the 1960s through to the 1980s to build even larger radio telescopes.
The Mark IV proposal was for a diameter standalone telescope, built as a national project.
The Mark V proposal was for a moveable telescope. The concept of this proposal was for a telescope on a railway line adjoining Jodrell Bank, but concerns about future levels of interference meant that a site in Wales would have been preferable. Design proposals by Husband and Co and Freeman Fox, who had designed the Parkes Observatory telescope in Australia, were put forward.
The Mark VA was similar to the Mark V but with a smaller dish of and a design using prestressed concrete, similar to the Mark II (the previous two designs more closely resembled the Lovell telescope).
None of the proposed telescopes was constructed, although design studies were carried out and scale models were made, partly because of the changing political climate, and partly due to the financial constraints of astronomical research in the UK. Also it became necessary to upgrade the Lovell Telescope to the Mark IA, which overran in terms of cost.
Other single dishes
A 50 ft (15 m) alt-azimuth dish was constructed in 1964 for astronomical research and to track the Zond 1, Zond 2, Ranger 6 and Ranger 7 space probes and Apollo 11. After an accident that irreparably damaged the 50 ft telescope's surface, it was demolished in 1982 and replaced with a more accurate telescope, the "42 ft". The 42 ft (12.8 m) dish is mainly used to observe pulsars, and continually monitors the Crab Pulsar.
When the 42 ft was installed, a smaller dish, the "7 m" (actually 6.4 m, or 21 ft, in diameter) was installed and is used for undergraduate teaching. The 42 ft and 7 m telescopes were originally used at the Woomera Rocket Testing Range in South Australia. The 7 m was originally constructed in 1970 by the Marconi Company.
A Polar Axis telescope was built in 1962. It had a circular 50 ft (15.2 m) dish on a polar mount, and was mostly used for moon radar experiments. It has been decommissioned. A reflecting optical telescope was donated to the observatory in 1951 but was not used much, and was donated to the Salford Astronomical Society around 1971.
MERLIN
The Multi-Element Radio Linked Interferometer Network (MERLIN) is an array of radio telescopes spread across England and the Welsh borders. The array is run from Jodrell Bank on behalf of the Science and Technology Facilities Council as a National Facility. The array consists of up to seven radio telescopes and includes the Lovell Telescope, the Mark II, Cambridge, Defford, Knockin, Darnhall, and Pickmere (previously known as Tabley). The longest baseline is 217 km and MERLIN can operate at frequencies between 151 MHz and 24 GHz. At a wavelength of 6 cm (5 GHz frequency), MERLIN has a resolution of 50 milliarcseconds, which is comparable to that of the HST at optical wavelengths.
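As a rough order-of-magnitude check (ignoring geometric factors close to one), an interferometer's angular resolution is about λ/B for observing wavelength λ and maximum baseline B:

\theta \approx \frac{0.06\ \text{m}}{217{,}000\ \text{m}} \approx 2.8\times10^{-7}\ \text{rad} \approx 0.06\ \text{arcsec}

which is consistent with the quoted 50-milliarcsecond figure.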
Very Long Baseline Interferometry
Jodrell Bank has been involved with Very Long Baseline Interferometry (VLBI) since the late 1960s; the Lovell telescope took part in the first transatlantic interferometer experiment in 1968, with other telescopes at Algonquin and Penticton in Canada. The Lovell Telescope and the Mark II telescopes are regularly used for VLBI with telescopes across Europe (the European VLBI Network), giving a resolution of around 0.001 arcseconds.
Square Kilometre Array
In April 2011, Jodrell Bank was named as the location of the control centre for the planned Square Kilometre Array, or SKA Project Office (SPO). The SKA is planned by a collaboration of 20 countries and when completed, is intended to be the most powerful radio telescope ever built. In April 2015 it was announced that Jodrell Bank would be the permanent home of the SKA headquarters for the period of operation expected for the telescope (over 50 years).
Research
The Jodrell Bank Centre for Astrophysics, of which the Observatory is a part, is one of the largest astrophysics research groups in the UK. About half of the research of the group is in the area of radio astronomy – including research into pulsars, the Cosmic Microwave Background Radiation, gravitational lenses, active galaxies and astrophysical masers. The group also carries out research at different wavelengths, looking into star formation and evolution, planetary nebula and astrochemistry.
The first director of Jodrell Bank was Bernard Lovell, who established the observatory in 1945. He was succeeded in 1980 by Sir Francis Graham-Smith, followed by Professor Rod Davies around 1990 and Professor Andrew Lyne in 1999. Professor Phil Diamond took over the role on 1 October 2006, at the time when the Jodrell Bank Centre for Astrophysics was formed. Prof Ralph Spencer was Acting Director during 2009 and 2010. In October 2010, Prof. Albert Zijlstra became Director of the Jodrell Bank Centre for Astrophysics. Professor Lucio Piccirillo was the Director of the Observatory from Oct 2010 to Oct 2011. Prof. Simon Garrington is the JBCA Associate Director for the Jodrell Bank Observatory. In 2016, Prof. Michael Garrett was appointed as the inaugural Sir Bernard Lovell chair of Astrophysics and Director of Jodrell Bank Centre for Astrophysics. As Director JBCA, Prof. Garrett also has overall responsibility for Jodrell Bank Observatory.
In May 2017 Jodrell Bank entered into a partnership with the Breakthrough Listen initiative and will share information with Jodrell Bank's team, who wish to conduct an independent SETI search via its 76-m radio telescope and e-MERLIN array.
There is an active development programme researching and constructing telescope receivers and instrumentation. The observatory has been involved in the construction of several Cosmic Microwave Background experiments, including the Tenerife Experiment, which ran from the 1980s to 2000, and the amplifiers and cryostats for the Very Small Array. It has also constructed the front-end modules of the 30 and 44 GHz receivers for the Planck spacecraft. Receivers were also designed at Jodrell Bank for the Parkes Telescope in Australia.
Visitor facilities, and events
A visitors' centre, opened on 19 April 1971 by the Duke of Devonshire, attracted around 120,000 visitors per year. It covered the history of Jodrell Bank and had a planetarium and 3D theatre hosting simulated trips to Mars.
Asbestos in the visitors' centre buildings led to its demolition in 2003 leaving a remnant of its far end. A marquee was set up in its grounds while a new science centre was planned. The plans were shelved when Victoria University of Manchester and UMIST merged to become the University of Manchester in 2004, leaving the interim centre, which received around 70,000 visitors a year.
In October 2010, work on a new visitor centre started and the Jodrell Bank Discovery Centre opened on 11 April 2011. It includes an entrance building, the Planet Pavilion, a Space Pavilion for exhibitions and events, a glass-walled cafe with a view of the Lovell Telescope and an outdoor dining area, an education space, and landscaped gardens including the Galaxy Maze. A large orrery was installed in 2013. It does not, however, include a planetarium, though a small inflatable planetarium dome has been in use on the site in recent years.
The visitor centre is open Tuesday to Sunday and Mondays during school and bank holidays and organises public outreach events, including public lectures, star parties, and "ask an astronomer" sessions.
A path around the Lovell telescope, approximately 20 m from the telescope's outer railway, has information boards explaining how the telescope works and the research that is done with it.
The arboretum, created in 1972, houses the UK's national collections of crab apple Malus and mountain ash Sorbus species, and the Heather Society's Calluna collection. The arboretum also has a small scale model of the Solar System, the scale is approximately 1:5,000,000,000. At Jodrell Bank, as part of the SpacedOut project, is the Sun in a 1:15,000,000 scale model of the Solar System covering Britain.
On 7 July 2010, it was announced that the observatory was being considered for the 2011 United Kingdom Tentative List for World Heritage Site status. It was announced on 22 March 2011 that it was on the UK government's shortlist. In January 2018, it became the UK's candidate for World Heritage status.
In July 2011 the visitor centre and observatory hosted "Live from Jodrell Bank - Transmission 001" – a rock concert with bands including The Flaming Lips, British Sea Power, Wave Machines, OK GO and Alice Gold. On 23 July 2012, Elbow performed live at the observatory and filmed a documentary of the event and the facility which was released as a live CD/DVD of the concert.
On 6 July 2013, Transmission 4 featured Australian Pink Floyd, Hawkwind, The Time & Space Machine and The Lucid Dream. On 7 July 2013, Transmission 5 featured New Order, Johnny Marr, The Whip, Public Service Broadcasting, Jake Evans and Hot Vestry. On 30 August 2013, Transmission 6 featured Sigur Ros, Polca and Daughter.
On 31 August 2013, Jodrell Bank hosted a concert performed by the Hallé Orchestra to commemorate what would have been Lovell's 100th birthday. As well as a number of operatic performances during the day, the evening Halle performance saw numbers such as themes from Star Trek, Star Wars and Doctor Who among others. The main Lovell telescope was rotated to face the onlooking crowd and used as a huge projection screen showing various animated planetary effects. During the interval the 'screen' was used to show a history of Lovell's work and Jodrell Bank.
There is an astronomy podcast from the observatory, named The Jodcast. The BBC television programme Stargazing Live was hosted in the control room of the observatory from 2011 to 2016.
Since 2016, the observatory hosted Bluedot, a music and science festival, featuring musical acts such as Public Service Broadcasting, The Chemical Brothers, as well as talks by scientists and scientific communicators such as Jim Al-Khalili and Richard Dawkins.
Threat of closure
On 3 March 2008, it was reported that Britain's Science and Technology Facilities Council (STFC), faced with an £80 million shortfall in its budget, was considering withdrawing its planned £2.7 million annual funding of Jodrell Bank's e-MERLIN project. The project, which aimed to replace the microwave links between Jodrell Bank and a number of other radio telescopes with high-bandwidth fibre-optic cables, greatly increasing the sensitivity of observations, was seen as critical to the survival of the facility. Bernard Lovell said "It will be a disaster … The fate of the Jodrell Bank telescope is bound up with the fate of e-MERLIN. I don't think the establishment can survive if the e-MERLIN funding is cut".
On 9 July 2008, it was reported that, following an independent review, STFC had reversed its initial position and would now guarantee funding of £2.5 million annually for three years.
Fictional references
Jodrell Bank has been mentioned in several works of fiction, including Doctor Who (The Tenth Planet, Remembrance of the Daleks, "The Poison Sky", "The Eleventh Hour", "Spyfall") and Birthday Boy by David Baddiel. It was intended to be a filming location for Logopolis (Tom Baker's final Doctor Who serial) but budget restrictions prevented this and another location with a superimposed model of a radio telescope was used instead. It was also mentioned in The Hitchhiker's Guide to the Galaxy (as well as The Hitchhiker's Guide to the Galaxy film), The Creeping Terror and Meteor.
Jodrell Bank was also featured heavily in the 1983 music video "Secret Messages" by Electric Light Orchestra and also "Are We Ourselves?" by The Fixx. The Prefab Sprout song Technique (from debut album Swoon) opens with the line "Her husband works at Jodrell Bank/He's home late in the morning".
The observatory is the site of several episodes in the novel Boneland by the local novelist Alan Garner (2012), and the central character, Colin Whisterfield, is an astrophysicist on its staff.
Jodrell bank made an appearance in the CBBC series Bitsa.
Appraisal
Since 13 July 1988 the Lovell Telescope has been designated as a Grade I listed building. On 10 July 2017 the Mark II Telescope was also designated at the same grade. On the same date five other buildings on the site were designated at Grade II; namely the Searchlight Telescope, the Control Building, the Park Royal Building, the Electrical Workshop, and the Link Hut. Grade I is the highest of the three grades of listing, and is applied to buildings that are of "exceptional interest", and Grade II, the lowest grade, is applied to buildings "of special interest".
At the 43rd Session of the UNESCO World Heritage Committee in Baku on 7 July 2019, the Jodrell Bank Observatory was adopted as a World Heritage Site on the basis of four criteria:
Criterion (i): Jodrell Bank Observatory is a masterpiece of human creative genius related to its scientific and technical achievements.
Criterion (ii): Jodrell Bank Observatory represents an important interchange of human values over a span of time and on a global scale on developments
Criterion (iv): Jodrell Bank Observatory represents an outstanding example of a technological ensemble which illustrates a significant stage in human history
Criterion (vi): Jodrell Bank Observatory is directly and tangibly associated with events and ideas of outstanding universal significance.
See also
Cerro Tololo Inter-American Observatory
Extremely Large Telescope
Fabra Observatory
Griffith Observatory
La Silla Observatory
Llano de Chajnantor Observatory
Paranal Observatory
Very Large Telescope
List of World Heritage Sites in the United Kingdom
References
Books
Gunn, A. G. (2005). "Jodrell Bank and the Meteor Velocity Controversy". In The New Astronomy: Opening the Electromagnetic Window and Expanding Our View of Planet Earth, Volume 334 of the Astrophysics and Space Science Library. Part 3, pages 107–118. Springer Netherlands.
Journal articles
External links
Jodrell Bank Centre for Astrophysics
Jodrell Bank Visitor Centre
Jodrell Bank Observatory Archives at University of Manchester Library.
Radio observatories
Astronomical observatories in England
Astronomy institutes and departments
Tourist attractions in Cheshire
1945 establishments in the United Kingdom
Arboreta in England
Botanical gardens in England
Gardens in Cheshire
Space programme of the United Kingdom
Square Kilometre Array
World Heritage Sites in England
Buildings at the University of Manchester | Jodrell Bank Observatory | [
"Astronomy"
] | 4,676 | [
"Astronomy organizations",
"Astronomy institutes and departments"
] |
165,331 | https://en.wikipedia.org/wiki/Kinescope | Kinescope, shortened to kine, also known as telerecording in Britain, is a recording of a television program on motion picture film directly through a lens focused on the screen of a video monitor. The process was pioneered during the 1940s for the preservation, re-broadcasting, and sale of television programs before the introduction of quadruplex videotape, which from 1956 eventually superseded the use of kinescopes for all of these purposes. Kinescopes were the only practical way to preserve live television broadcasts prior to videotape.
Typically, the term can refer to the process itself, the equipment used for the procedure (a movie camera mounted in front of a video monitor, and synchronized to the monitor's scanning rate), or a film made using the process. Film recorders are similar, but record source material from a computer system instead of a television broadcast. A telecine is the inverse device, used to show film directly on television.
The term originally referred to the cathode-ray tube (CRT) used in television receivers, as named by inventor Vladimir K. Zworykin in 1929. Hence, the recordings were known in full as kinescope films or kinescope recordings. RCA was granted a trademark for the term (for its CRT) in 1932; it voluntarily released the term to the public domain in 1950.
History
The General Electric laboratories in Schenectady, New York, experimented with making still and motion picture records of television images in 1931.
There is anecdotal evidence that the BBC experimented with filming the output of the television monitor before its television service was suspended in 1939 due to the outbreak of World War II. A BBC executive, Cecil Madden, recalled filming a production of The Scarlet Pimpernel in this way, only for film director Alexander Korda to order the burning of the negative as he owned the film rights to the book, which he felt had been infringed. While there is no written record of any BBC Television production of The Scarlet Pimpernel during 1936–1939, the incident is dramatized in Jack Rosenthal's 1986 television play The Fools on the Hill.
Some of the surviving live transmissions of the Nazi German television station Fernsehsender Paul Nipkow, dating as far back as the 1930s, were recorded by pointing a 35 mm camera at a receiver's screen. Most surviving Nazi live television programs, however, such as the 1936 Summer Olympics (not to be confused with the cinematic footage made during the same event by Leni Riefenstahl for her film Olympia), a number of Nuremberg Rallies, or official state visits (such as Benito Mussolini's), were shot directly on 35 mm and transmitted over the air as a television signal, with only two minutes' delay from the original event, by means of the so-called Zwischenfilmverfahren (see intermediate film system) from an early outside broadcast van on site.
According to a 1949 film produced by RCA, silent films had been made of early experimental telecasts during the 1930s. The films were produced by aiming a camera at television monitors – at a speed of eight frames per second, resulting in somewhat jerky reproductions of the images. By the mid-1940s, RCA and NBC were refining the filming process and including sound; the images were less jerky but still somewhat fuzzy.
By early 1946, television cameras were being attached to American guided missiles to aid in their remote steering. Films were made of the television images they transmitted for further evaluation of the target and the missile's performance.
The first known surviving example of the telerecording process in Britain is from October 1947, showing the singer Adelaide Hall performing at the RadiOlympia event. Hall sings "Chi-Baba, Chi-Baba (My Bambino Go to Sleep)" and "I Can't Give You Anything But Love", as well as accompanying herself on ukulele and dancing. When the show was originally broadcast on BBC TV it was 60 minutes in length and also included performances from Winifred Atwell, Evelyn Dove, Cyril Blake and his Calypso Band, Edric Connor and Mable Lee, and was produced by Eric Fawcett. The six-minute footage of Miss Hall is all that survives of the show.
From the following month, the wedding of Princess Elizabeth to Prince Philip also survives, as do various early 1950s productions such as It is Midnight, Dr Schweitzer, The Lady from the Sea and the opening two episodes of The Quatermass Experiment, although in varying degrees of quality. A complete 7-hour set of telerecordings of Queen Elizabeth II's 1953 coronation also exists.
Worldwide program distribution
In the era before satellite communications, kinescopes were used to distribute live events such as a royal wedding as quickly as possible to other countries of the Commonwealth that had started a television service. A Royal Air Force aircraft would fly the telerecording from the UK to Canada, where it would be broadcast over the whole North American network.
Prior to the introduction of videotape in 1956, kinescopes were the only way to record television broadcasts, or to distribute network television programs that were broadcast live from originating cities to stations not connected to the network, or to stations that wished to show a program at a time different than the network broadcast. Although the quality was less than desirable, television programs of all types from prestigious dramas to regular news shows were handled in this manner.
Even after the introduction of videotape, the BBC and the ITV companies made black and white kinescopes of selected programs for international sales and continued to do so until the early 1970s by which time programs were being videotaped in color. Most, if not all, recordings from the 405-line era have long since been lost as have many from the introduction of 625-line video to the early days of color. Consequently, the majority of British shows that still exist before the introduction of color, and a number thereafter, do so in the form of these telerecordings. A handful of shows, including some episodes of Doctor Who and most of the first series of Adam Adamant Lives!, were deliberately telerecorded for ease of editing rather than being videotaped.
Eastman Television Recording Camera
In September 1947, Eastman Kodak introduced the Eastman Television Recording Camera, in cooperation with DuMont Laboratories and NBC, for recording images from a television screen under the trademark "Kinephoto". NBC, CBS, and DuMont set up their main kinescope recording facilities in New York City, while ABC chose Chicago. By 1951, NBC and CBS were each shipping out some 1,000 16 mm kinescope prints each week to their affiliates across the United States, and by 1955 that number had increased to 2,500 per week for CBS. By 1954 the television industry's film consumption surpassed that of all of the Hollywood studios combined.
Hot kinescope
After the network of coaxial cable and microwave relays carrying programs to the West Coast was completed in September 1951, CBS and NBC instituted a hot kinescope process in 1952, where shows being performed in New York were transmitted west, filmed on two kinescope machines in 35 mm negative and 16 mm reversal film (the latter for backup protection) in Los Angeles, rushed to film processing, and then transmitted from Los Angeles three hours later for broadcast in the Pacific Time Zone. In September 1956, NBC began making color hot kines of some of its color programs using a lenticular film process which, unlike color negative film, could be processed rapidly using standard black-and-white methods. They were called hot kines because the film reels being delivered from the lab were still warm from the developing process.
Double system editing
Even after the introduction of quadruplex videotape machines in 1956 removed the need for hot kines, the television networks continued to use kinescopes in the double system method of videotape editing. It was impossible to slow or freeze frame a videotape at that time, so the unedited tape would be copied to a kinescope, and edited conventionally. The edited kinescope print was then used to conform the videotape master. More than 300 videotaped network series and specials used this method over a 12-year period, including the fast-paced Rowan & Martin's Laugh-In.
Alternatives to kinescoping
Because of the variable quality of kinescopes, networks looked toward alternative recording methods that offered higher quality.
Change to 35 mm film broadcasts
Programs originally shot with film cameras (as opposed to kinescopes) were also used in television's early years, although they were generally considered inferior to the big-production live programs because of their lower budgets and loss of immediacy.
In 1951, the stars and producers of the Hollywood-based television series I Love Lucy, Desi Arnaz and Lucille Ball, decided to film the show directly onto 35 mm film using the three-camera system, instead of broadcasting it live. Normally, a live program originating from Los Angeles would be performed live in the late afternoon for the Eastern Time Zone and seen on a kinescope three hours later in the Pacific Time Zone. But as an article in American Cinematographer explained,
The I Love Lucy decision introduced reruns to most of the American television audience, and set a pattern for the syndication of TV shows after their network runs.
Electronicam
The program director of the DuMont Television Network, James L. Caddigan, devised an alternative, the Electronicam. In this, all the studio TV cameras had built-in 35 mm film cameras which shared the same optical path. An Electronicam technician threw switches to mark the film footage electronically, identifying the camera takes called by the director. The corresponding film segments from the various cameras then were combined by a film editor to duplicate the live program. The "Classic 39" syndicated episodes of The Honeymooners were filmed using Electronicam (as well as the daily five-minute syndicated series Les Paul & Mary Ford At Home in 1954–55), but with the introduction of a practical videotape recorder only one year away, the Electronicam system never saw widespread use. The DuMont network did not survive into the era of videotape, and in order to gain clearances for its programs, was heavily dependent on kinescopes, which it called Teletranscriptions.
Electronovision
Attempts were made for many years to take television images, convert them to film via kinescope, then project them in theatres for paying audiences. In the mid-1960s, producer/entrepreneur H. William "Bill" Sargent, Jr. used conventional analog Image Orthicon video camera tube units, shooting in the B&W 819-line interlaced 25 fps French video standard, using modified high-band quadruplex VTRs to record the signal. The promoters of Electronovision (not to be confused with Electronicam) gave the impression that this was a new system created from scratch, using a high-tech name (and avoiding the word kinescope) to distinguish the process from conventional film photography. Nonetheless, the advances in picture quality were, at the time, a major step ahead. By capturing more than 800 lines of resolution at 25 frame/s, raw tape could be converted to film via kinescope recording with sufficient enhanced resolution to allow big-screen enlargement. The 1960s productions used Marconi image orthicon video cameras, which produced a characteristic white glow around black objects (and a corresponding black glow around white objects), a defect of the pick-up tube. Later vidicon and plumbicon video camera tubes produced much cleaner, more accurate pictures.
Videotape
In 1951, singer Bing Crosby's company Bing Crosby Enterprises made the first experimental magnetic video recordings; however, the poor picture quality and very high tape speed made the system impractical to use. In 1956, Ampex introduced the first commercial Quadruplex videotape recorder, followed in 1958 by a color model. Offering high quality and instant playback at a much lower cost, Quadruplex tape quickly replaced kinescope as the primary means of recording television broadcasts.
Decline
In the late 1960s, U.S. television networks continued to make kinescopes of their daytime dramas available, many of which still aired live during that time period, for their smaller network affiliates that did not yet have videotape capability but wished to time-shift the network programming. Some of these programs aired up to two weeks after their original dates, particularly in Alaska and Hawaii. Many episodes of programs from the 1960s survive only through kinescoped copies.
In Australia, kinescopes were still being made of some evening news programs as late as 1977, if they were recorded at all. A recording of a 1975 episode of Australian series This Day Tonight is listed on the National Archives of Australia website as a kinescope, while surviving episodes of the 1978 drama series The Truckies also exist as kinescopes, indicating that the technology was still being used by the ABC at that point.
Until the early 1960s, much of the BBC's output, and British television in general, was broadcast live, and entire drama productions were performed live for a second time until recording methods improved. Eventually, telerecordings would be used to preserve a program for repeat showings. In the UK, telerecordings continued to be made after the introduction of commercial broadcast videotape in 1958 as they possessed several distinct advantages. Firstly, they were easier to transport and more durable than videotape. Secondly, they could be used in any country regardless of the television broadcasting standard, which was not true of videotape. Later, the system could be used to make black-and-white copies of color programs for sale to television stations that were not yet broadcasting in color.
The system was largely used for black-and-white reproduction. Although some color telerecordings were made, they were generally in the minority: by the time color programs were widely needed for sale, video standards conversion had become easier and of higher quality, and the price of videotape had fallen considerably. Before videotape became the exclusive recording format during the early to mid-1980s, any (color) video recordings used in documentaries or filmed program inserts were usually transferred onto film.
In the 1950s a home telerecording kit was introduced in Britain, allowing enthusiasts to make 16 mm film recordings of television programs. The major drawback, apart from the short duration of a 16 mm film magazine, was that a large opaque frame had to be placed in front of the TV set in order to block out any stray reflections, making it impossible to watch the set normally while filming. It is not known if any recordings made using this equipment still exist.
British broadcasters used telerecordings for domestic purposes well into the 1960s, with 35 mm film usually used as it produced a higher quality result. For overseas sales, 16 mm film would be used, as it was cheaper. Although domestic use of telerecording in the UK for repeat broadcasts dropped off sharply after the move to colour in the late 1960s, 16 mm black and white film telerecordings were still being offered for sale by British broadcasters well into the 1970s.
Telerecording was still being used internally at the BBC in the 1980s to preserve copies for posterity of programs that were not necessarily of the highest importance, but which their producers nonetheless wanted preserved. If no videotape machines were available on a given day, a telerecording would be made instead. There is evidence to suggest that the children's magazine program Blue Peter was occasionally telerecorded as late as 1985. After this point, however, cheap domestic videotape formats such as VHS could more easily be used to keep a backup reference copy of a program.
Another occasional use of telerecording into the late 1980s was by documentary makers working in 16 mm film who wished to include a videotape-sourced excerpt in their work, although such use was again rare.
In other territories, film telerecordings stopped being produced after the introduction of videotape. In Czechoslovakia, the first videotape recorders (Machtronics MVR-15) were introduced in 1966 but soon were replaced by the Ampex 2" Quadruplex in 1967. Most of the programs, like TV dramas, were recorded on video, but only a few programs continued to be telerecorded onto 16 mm film. The last known telerecording was produced in 1971 and soon after, all programs were recorded on video only.
Legacy
Kinescopes were intended to be used for immediate rebroadcast, or for an occasional repeat of a program; thus, only a small fraction of kinescope recordings remain today. Many television shows are represented by only a handful of episodes, such as with the early television work of comedian Ernie Kovacs, and the original version of Jeopardy! hosted by Art Fleming.
Kinescopes also served to satisfy show sponsors. They were sometimes sent to the advertising agency representing a show's sponsor so that the agency could verify that the sponsor's ads had appeared properly. Because of this practice, some kinescopes have been discovered in the storage areas of older advertising agencies or of the program sponsors themselves.
In the United States
Certain performers or production companies would require that a kinescope be made of every television program. Such is the case with performers Jackie Gleason and Milton Berle, for whom nearly complete program archives exist. As Jackie Gleason's program was broadcast live in New York, the show was kinescoped for later rebroadcast for the West Coast. Per his contract, he would receive one copy of each broadcast, which he kept in his vault, and only released them to the public (on home video) shortly before his death in 1987.
Milton Berle sued NBC late in his life, believing the kinescopes of a major portion of his programs were lost. However, the programs were later found in a warehouse in Los Angeles.
Mark Goodson-Bill Todman Productions, the producers of such TV game shows as What's My Line?, had a significant portion of their output recorded on both videotape and kinescopes. These programs are rebroadcast on the American cable TV's Game Show Network.
All of the NBC Symphony Orchestra telecasts with Arturo Toscanini, from 1948 to 1952, were preserved on kinescopes and later released on VHS and LaserDisc by RCA and on DVD by Testament. The original audio from the kinescopes, however, was replaced with high fidelity sound that had been recorded simultaneously either on transcription discs or magnetic tape.
In the mid-1990s, Edie Adams, wife of Ernie Kovacs, claimed that so little value was given to the kinescope recordings of the DuMont Television Network that after the network folded in 1956 its entire archive was dumped into upper New York bay. Today however, efforts are made to preserve the few surviving DuMont kinescopes, with the UCLA Film and Television Archive having collected over 300 for preservation.
In September 2010, a kinescope of game 7 of the 1960 World Series was found in the wine cellar of Bing Crosby. The game was thought lost forever but was preserved due to Crosby's superstition about watching the game live. The film was transferred to DVD and was broadcast on the MLB Network shortly afterwards.
In Australia
Early Australian television drama series such as Autumn Affair and Emergency were recorded as kinescopes, along with variety series like The Lorrae Desmond Show. Kinescopes continued to be made after videotape was introduced to Australia; most existing episodes of the 1965–1967 children's series Magic Circle Club are kinescopes (per episode listings on the National Film and Sound Archive website).
In Britain
Telerecordings form an important part of British television heritage, preserving what would otherwise have been lost. Nearly every pre-1960s British television program in the archives is in the form of a telerecording, along with the vast majority of existing 1960s output. Videotape was expensive and could be wiped and re-used; film was cheaper, smaller, and in practice more durable. Only a very small proportion of British television from the black-and-white era survives at all.
As the BBC has taken stock of the large gaps in its archive and sought to recover as much of the missing material as possible, many recovered programs, have been returned from the 1980s onwards as telerecordings held by foreign broadcasters or private film collectors. Many of these surviving telerecorded programs, such as episodes of Doctor Who, Steptoe and Son and Till Death Us Do Part continue to be transmitted on satellite television stations such as UKTV Gold, and many such programs have been released on VHS and DVD.
In 2008, the BBC undertook color restoration work on the existing 16 mm monochrome telerecording of Room at the Bottom, a 1969 episode of the sitcom Dad's Army. Although this episode was originally produced and broadcast in color, the black and white film was the only surviving copy of the episode following the wiping of the original videotape. However, the telerecording process left color information in the form of chroma dots in the frames of the film; using a specially designed computer program, these chroma dots were used to bring out the original color information, which was then applied to the film, allowing the color to be restored to the episode. The restored version of Room at the Bottom was broadcast on 13 December 2008, the first time it had been seen in color since May 1970.
Technology
NTSC television images are scanned at 525 lines per frame, with two interlaced fields per frame, displayed at roughly 30 frames per second.
A kinescope must be able to convert the 30 frame/s image to 24 frame/s, the standard sound speed of film cameras, and do so in such a way that the image is clear enough to be re-broadcast back to 30 frame/s by means of a film chain.
In kinescoping an NTSC signal, 525 lines are broadcast in one frame. A 35 mm or 16 mm camera exposes one frame of film for every frame of television (525 lines), and moves a new frame of film into place during the time equivalent of half a field of television (131.25 lines, or 1/120 of a second). In the British 405-line television system, the French 819-line television system and the greater European 625-line television system, television ran at 25 frames (or, more correctly, 50 fields) per second, so the film camera would also be run at 25 frames per second rather than the cinematic film standard of 24 frames.
Therefore, in order to maintain successful kinescope photography, a camera must expose one frame of film for exactly 1/30th or 1/25th of a second, the time in which one frame of video is transmitted, and move to another frame of film within the small interval of 1/120th of a second.
In some instances, this was accomplished by means of an electronic shutter which cut off the TV image at the end of every set of visible lines. Most US kinescope equipment, however, utilized a mechanical shutter revolving at 24 revolutions per second. This shutter had a closed angle of 72° and an open angle of 288°, yielding the necessary closed time of 1/120th of a second and an open time of 1/30th of a second. Using this shutter, in 1 second of video (60 fields equalling 30 frames), 48 television fields (totaling 24 frames of video) would be captured on 24 frames of film, and 12 additional fields would be omitted as the shutter closed and the film advanced.
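The shutter figures above follow directly from the geometry. The short Python sketch below is an illustrative calculation only; the variable names are chosen for this example and do not come from any historical equipment specification. It reproduces the 1/120 s pull-down interval, the 1/30 s exposure, and the 48 of 60 fields captured each second.

# Illustrative check of the 72/288-degree kinescope shutter timing described above.
ROTATIONS_PER_SECOND = 24          # the shutter turns once per film frame
CLOSED_DEGREES = 72                # portion of each turn with the film gate covered
OPEN_DEGREES = 288                 # portion of each turn exposing the film
FIELDS_PER_SECOND = 60             # NTSC: 60 fields = 30 frames of video per second

revolution = 1 / ROTATIONS_PER_SECOND                  # 1/24 s per shutter turn
closed_time = revolution * CLOSED_DEGREES / 360        # = 1/120 s, time to advance the film
open_time = revolution * OPEN_DEGREES / 360            # = 1/30 s, exactly one video frame

fields_captured = FIELDS_PER_SECOND * open_time * ROTATIONS_PER_SECOND   # 48 fields per second
fields_dropped = FIELDS_PER_SECOND - fields_captured                     # 12 fields per second

print(f"closed {closed_time:.6f} s, open {open_time:.6f} s")
print(f"{fields_captured:.0f} fields kept on 24 film frames, {fields_dropped:.0f} dropped each second")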
Analog television is a field-based system, and most electronic video recording solutions retain both fields of every frame, preserving the temporal resolution of interlaced video. Some early consumer-grade video tape recorders preserved only one field of each frame. Film, being a frame-based system, can retain the full information from interlaced video by converting every field into a frame, but the frame rate this would require was deemed impractical. Various solutions to the mapping problem have been developed, resulting in successive improvements to the quality of the image at the traditional 24 fps frame rate. Nevertheless, video converted to film loses the fluid look of interlaced video and takes on a more film-like appearance.
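As a rough illustration of this field-to-frame mapping, the sketch below (Python with NumPy; the picture dimensions are assumed for the example and do not correspond to any particular broadcast standard) weaves two interlaced fields into one progressive frame, which is in effect what happens when both fields of a video frame are exposed onto a single film frame.

# Illustrative weave of two interlaced fields into a single progressive frame.
import numpy as np

HEIGHT, WIDTH = 480, 640                          # assumed picture size for the example

odd_field = np.random.rand(HEIGHT // 2, WIDTH)    # scan lines 1, 3, 5, ...
even_field = np.random.rand(HEIGHT // 2, WIDTH)   # scan lines 2, 4, 6, ... scanned 1/60 s later

frame = np.empty((HEIGHT, WIDTH))
frame[0::2] = odd_field                           # interleave the odd-numbered lines
frame[1::2] = even_field                          # interleave the even-numbered lines

# Because the two fields were scanned at different instants, anything that moved
# between them appears with a comb-like fringe in the woven frame; this is the
# artifact described under "Common issues" below.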
Shutter bar and banding problems
The 72°/288° shutter and the systematic loss of 12 fields per second were not without side effects. In going from 30 frame/s to 24 frame/s, the camera photographed part of some fields. The juncture on the film frame where these part-fields meet is called a splice.
If the timing is accurate, the splice is invisible. However, if the camera and television are out of phase, a phenomenon known as shutter bar or banding occurs. If the shutter is slow in closing, overexposure results where the partial fields join and the shutter bar takes the form of a white line. If the shutter closes too soon, underexposure takes place and the line is black. The term banding refers to the phenomenon occurring on the screen as two bars.
Suppressed field
A simpler system, less prone to breakdown, was to suppress one of the two fields in displaying the television picture. This left the time during which the second field would have been displayed for the film camera to advance the film by one frame, which proved sufficient. This method was called skip field recording.
The method had several disadvantages. In missing every other field of video, half the information of the picture was lost on such recordings. The resulting film thus consisted of fewer than 200 lines of picture information and as a result, the line structure was very apparent. The missing field information also made movement look jerky.
Stored field
A successful improvement on the suppressed field system was to display the image from one of the fields at a much higher intensity on the television screen during the time when the film gate was closed, and then capture the image as the second field was being displayed. By adjusting the intensity of the first field, it was possible to arrange it so that the luminosity of the phosphor had decayed to exactly match that of the second field, so that the two appeared to be at the same level and the film camera captured both.
Another technique developed by the BBC, known as spot wobble, involved the addition of an extremely high frequency but low voltage sine wave to the vertical deflection plate of the television screen, which changed the moving 'spot' - a circular beam of electrons by which the television picture was displayed - into an elongated oval. While this made the image slightly blurred, it removed the visible line structure (by causing adjacent lines to touch, so that no separating band of darkness lay between them) and thereby resulted in a better image. It also prevented moiré pattern from appearing when the resulting film was re-broadcast on television, which occurred if the line structure on the film recording did not precisely match the scanning lines of the electronic film scanner.
Moye-Mechau film recording
The Mechau system used a synchronized rotating mirror to display each frame of a film in sequence without the need for a gate. When the process was reversed for recording, a high-quality television monitor was set up in place of the projection screen, and unexposed film stock was run through at the point where the lamp would otherwise have been illuminating the film.
This procedure had the advantage of capturing both fields of the frame on film, but required significant attention to produce quality results. The Mechau film magazine held only enough film for nine minutes, so two recorders had to be run in sequence in order to record anything longer.
Lenses
Lenses did not need a great depth of field but had to be capable of producing a very sharp, high-resolution image of a flat surface, and of doing so at high speed. In order to minimize light fall-off at the perimeter of the lens, a coated lens was preferable. 40 mm or 50 mm lenses in calibrated mounts were usually used with 16 mm film. Focus was checked by examining a print under a microscope.
Sound recording
The camera could be equipped with sound recording to place the soundtrack and picture on the same film for single-system sound recording. More commonly, the alternative double system, whereby the soundtrack was recorded on an optical recorder or magnetic dubber in sync with the camera, yielded a better quality soundtrack and facilitated editing.
Kinescope image
Kinescope CRTs intended for photographic use were coated with phosphors whose output was rich in blue and ultraviolet light. This permitted the use of positive-type emulsions for photography in spite of their slow film speeds. The brightness range of kinescope CRTs was about 1 to 30.
Kinescope images were capable of great flexibility. The operator could make the CRT image brighter or darker; adjust its contrast, width and height; rotate it left, right or upside down; and display it as a positive or negative image.
Since kinescope CRTs were able to produce a negative image, direct positive recordings could be made by simply photographing the negative image on the kinescope CRT. When making a negative film, in order for final prints to be in the correct emulsion position, the direction of the image was reversed on the television. This applied only when double system sound was used.
Film stock used
For kinescopes, 16 mm film was the common choice by most studios because of the lower cost of stock and film processing, but in the larger network markets, it was not uncommon to see 35 mm kinescopes, particularly for national rebroadcast. Fine grain positive stock was most commonly used because of its low cost and high resolution.
Common issues
Videotape engineer Frederick M. Remley wrote of kinescope recordings:
Because each field is sequential in time to the next, a kinescope film frame that captured two interlaced fields at once often showed a ghostly fringe around the edges of moving objects, an artifact not as visible when watching television directly at 50 or 60 fields per second.
Some kinescopes filmed the television pictures at the television frame rate - 30 full frames per second for American System M broadcasts and 25 full frames per second for European System B broadcasts, resulting in more faithful picture quality than those that recorded at 24 frames per second. The standard was later changed to 59.94 fields/s or 29.97 frame/s for System M broadcasts, due to the technical requirements of color TV. Since these reasons did not affect System B, the color TV framerate in Europe remained at 25 frames/s.
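For reference, the 29.97 figure quoted above comes from scaling the original 30 frame/s rate by a factor of 1000/1001, a change introduced with NTSC color so that the color subcarrier would not visibly interfere with the sound carrier. A short check (Python, illustrative only):

# The NTSC color rates are the original monochrome rates scaled by 1000/1001.
monochrome_frame_rate = 30
color_frame_rate = monochrome_frame_rate * 1000 / 1001
print(f"{color_frame_rate:.5f} frame/s, {2 * color_frame_rate:.5f} fields/s")
# prints 29.97003 frame/s, 59.94006 fields/s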
In the era of early color TV, the chroma information included in the video signal filmed could cause visible artifacts. It was possible to filter the chroma out, but this was not always done. Consequently, the color information was included (but not in color) in the black-and-white film image. Using modern computing techniques, the color may now be recovered, a process known as color recovery.
Because videotape records at fifty interlaced fields per second and telerecordings at twenty-five progressive frames per second, videotaped programs that exist now only as telerecordings have lost their characteristic "live video" look and the motion now looks filmic. One solution to this problem is VidFIRE, an electronic process to restore video-type motion.
See also
Intermittent mechanism
Notes
References
External links
Audiovisual introductions in 1947
Lost television shows
Television preservation
Television terminology
Television technology
Obsolete technologies | Kinescope | [
"Technology"
] | 6,512 | [
"Information and communications technology",
"Television technology"
] |
165,380 | https://en.wikipedia.org/wiki/Tulane%20University | Tulane University, officially the Tulane University of Louisiana, is a private research university in New Orleans, Louisiana, United States. Founded as the Medical College of Louisiana in 1834 by a cohort of medical doctors, it became a comprehensive public university in the University of Louisiana in 1847. The institution became private under the endowments of Paul Tulane and Josephine Louise Newcomb in 1884 and 1887. The Tulane University Law School and Tulane University Medical School are, respectively, the 12th oldest law school and 15th oldest medical school in the United States.
Tulane has been a member of the Association of American Universities since 1958 and is classified among "R1: Doctoral Universities – Very high research activity". Alumni include 12 governors of Louisiana; 1 Chief Justice of the United States; various members of Congress, including a Speaker of the U.S. House; 2 Surgeons General of the United States; 23 Marshall Scholars; 18 Rhodes Scholars; 15 Truman Scholars; 155 Fulbright Scholars; 4 living billionaires; and a former President of Costa Rica. Two Nobel laureates have been affiliated with the university.
History
Founding and early history – 19th century
The university was founded as the Medical College of Louisiana in 1834 partly as a response to the fears of smallpox, yellow fever, and cholera in the United States. The university became the second medical school in the South, and the 15th in the United States at the time. In 1847, the state legislature established the school as the University of Louisiana, a public university, and the law department was added to the university. Subsequently, in 1851, the university established its first academic department. The first president chosen for the new university was Francis Lister Hawks, an Episcopal priest and prominent citizen of New Orleans at the time.
The university was closed from 1861 to 1865 during the American Civil War. After reopening, it went through a period of financial challenges because of an extended agricultural depression in the South which affected the nation's economy. Paul Tulane, owner of a prospering dry goods and clothing business, donated extensive real estate within New Orleans for the support of education. This donation led to the establishment of a Tulane Educational Fund (TEF), whose board of administrators sought to support the University of Louisiana instead of establishing a new university. In response, through the influence of former Confederate general Randall Lee Gibson, the Louisiana state legislature transferred control of the University of Louisiana to the administrators of the TEF in 1884. This act created the Tulane University of Louisiana. The university was privatized, and is one of only a few American universities to be converted from a state public institution to a private one.
Paul Tulane's endowment to the school specified that the institution could only admit white students, and Louisiana law passed in 1884 reiterated this condition.
In 1884, William Preston Johnston became the first president of Tulane. He had succeeded Robert E. Lee as president of Washington and Lee University after Lee's death. He had moved to Louisiana and become president of Louisiana State University in 1880.
In 1885, the university established its graduate division, later becoming the Graduate School. One year later, gifts from Josephine Louise Newcomb totaling over $3.6 million, led to the establishment of the H. Sophie Newcomb Memorial College within Tulane University. Newcomb was the first coordinate college for women in the United States and became a model for such institutions as Pembroke College and Barnard College. In 1894 the College of Technology formed, which would later become the School of Engineering. In the same year, the university moved to its present-day uptown campus on historic St. Charles Avenue, five miles (8 km) by streetcar from downtown New Orleans.
20th century
With the improvements to Tulane University in the late 19th century, Tulane had a firm foundation to build upon as the premier university of the Deep South and continued the legacy with growth in the 20th century. During 1907, the school established a four-year professional curriculum in architecture through the College of Technology, growing eventually into the Tulane School of Architecture. One year later, Schools of Dentistry and Pharmacy were established, albeit temporarily. The School of Dentistry ended in 1928, and Pharmacy six years later. In 1914, Tulane established a College of Commerce, the first business school in the South. In 1925, Tulane established the independent Graduate School. Two years later, the university set up a School of Social Work, also the first in the southern United States. Tulane was instrumental in promoting the arts in New Orleans and the South in establishing the Newcomb School of Art with William Woodward as director, thus establishing the renowned Newcomb Pottery. The Middle American Research Institute was established in 1925 at Tulane "for the purpose of advanced research into the history (both Indian and colonial), archaeology, tropical botany (both economic and medical), the natural resources and products, of the countries facing New Orleans across the waters to the south; to gather, index and disseminate data thereupon; and to aid in the upbuilding of the best commercial and friendly relations between these Trans-Caribbean peoples and the United States."
University College was established in 1942 as Tulane's division of continuing education. By 1950, the School of Architecture had grown out of Engineering into an independent school. In 1958, the university was elected to the Association of American Universities, an organization consisting of 62 of the leading research universities in North America. The School of Public Health and Tropical Medicine again became independent from the School of Medicine in 1967. It was established in 1912. Tulane's School of Tropical Medicine also remains the only one of its kind in the country. On April 23, 1975, US President Gerald Ford spoke at Tulane University's Fogelman Arena at the invitation of F. Edward Hebert, the US representative of Louisiana's 1st Congressional District. During the historic speech, Ford announced that the Vietnam War was "finished as far as America is concerned" one week before the fall of Saigon. Ford drew parallels to the Battle of New Orleans and said that such positive activity could do for America's morale what the battle did in 1815.
During World War II, Tulane was one of 131 colleges and universities nationally that took part in the V-12 Navy College Training Program which offered students a path to a Navy commission.
In 1963, Tulane enrolled its first African American students. In 1990, Rhonda Goode-Douglas, alongside other black, female students, founded the first African American sorority in Tulane's history, AKA Omicron Psi.
A detailed account of the history of Tulane University from its founding through 1965 was published by Dyer.
21st century
In July 2004, Tulane received two $30 million donations to its endowment, the largest individual or combined gifts in the university's history. The donations came from James H. Clark, a member of the university's board of trustees and founder of Netscape, and David Filo, a graduate of its School of Engineering and co-founder of Yahoo!. A fund-raising campaign called "Promise & Distinction" raised $730.6 million by October 3, 2008, increasing the university's total endowment to more than $1.1 billion; by March 2009, Yvette Jones, Tulane's Chief Operating Officer, told Tulane's Staff Advisory Council that the endowment "has lost close to 37%", affected by the Great Recession. In 2021, Tulane had to evacuate all students and close down for a month due to damage from Hurricane Ida. No classes took place for two weeks, then there were virtual classes for the remaining two weeks.
In June 2024, non-tenure track faculty at Tulane voted to form Tulane Workers United, the first higher education faculty union in the state of Louisiana. The union is formally affiliated with Workers United and SEIU.
Hurricane Katrina
As a result of Hurricane Katrina in August 2005 and its damaging effects on New Orleans, most of the university was closed for the second time in its history—the first being during the Civil War. The closing affected the first semester of the school calendar year. The School of Public Health and Tropical Medicine's distance learning programs and courses stayed active. The School of Medicine relocated to Houston, Texas for a year. Aside from student-athletes attending college classes together on the same campuses, most undergraduate and graduate students dispersed to campuses throughout the U.S. The storm inflicted more than $650 million in damages to the university, with some of the greatest losses impacting the Howard-Tilton Memorial Library and its collections.
Facing a budget shortfall, the Board of Administrators announced a "Renewal Plan" in December 2005 to reduce its annual operating budget and create a "student-centric" campus. Addressing the school's commitment to New Orleans, a course credit involving service learning became a requirement for an undergraduate degree. In 2006 Tulane became the first Carnegie ranked "high research activity" institution to have an undergraduate public service graduation requirement. In May 2006, graduation ceremonies included commencement speakers former Presidents George H. W. Bush and Bill Clinton, who commended the students for their desire to return to Tulane and serve New Orleans in its renewal.
Campus
Uptown
Tulane's primary campus is located in Uptown New Orleans on St. Charles Avenue, directly opposite Audubon Park, and extends north to South Claiborne Avenue through Freret and Willow Street. The campus is known colloquially as the Uptown or St. Charles campus. It was established in the 1890s and occupies more than of land. The campus is known both for its large live oak trees as well as its architecturally historic buildings. It has been listed on the National Register of Historic Places since 1978. The campus architecture consists of several styles, including Richardsonian Romanesque, Elizabethan, Italian Renaissance, Mid-Century Modern, and contemporary styles. The front campus buildings use Indiana White Limestone or orange brick for exteriors, while the middle campus buildings are mostly adorned in red St. Joe brick, the staple of Newcomb College Campus buildings. Loyola University is directly adjacent to Tulane, on the downriver side. Audubon Place, where the President of Tulane resides, is on the upriver side. The President's residence is the former home of "banana king" Sam Zemurray, who donated it in his will.
The centerpiece of the Gibson Quad is Gibson Hall, the first academic building constructed on campus, completed in 1894. The School of Architecture is also located on the oldest section of the campus, occupying the Richardson Memorial Building. The middle of the campus, between Freret and Willow Streets, and bisected by McAlister Place and Newcomb Place, serves as the center of campus activities. The Lavin-Bernick Center for University Life, Devlin Fieldhouse, McAlister Auditorium, Howard-Tilton Memorial Library, and most of the student residence halls and academic buildings populate the center of campus.
The Howard-Tilton Memorial Library is located on Freret Street. It was under construction from 2013 to 2016, but it now has two additional floors, as well as a Rare Books room. The facilities for the Freeman School of Business line McAlister Place and sit next to the Tulane University Law School. The center of campus is also home to the historic Newcomb College Campus, which sits between Newcomb Place and Broadway. The Newcomb campus was designed by New York architect James Gamble Rogers, noted for his work with Yale University's campus. The Newcomb campus is home to Tulane's performing and fine arts venues.
The back of campus, between Willow Street and South Claiborne, is home to two residence halls (Aron Residences and Décou-Labat Residences), Reily Recreation Center, and Turchin Stadium, and in January 2013, ground was broken on Tulane's Yulman Stadium between Reily Recreation Center and Turchin Stadium. Tulane Green Wave football had played in the Mercedes-Benz Superdome since Tulane Stadium's demolition in 1980. They now play in Yulman Stadium, which opened in September 2014.
Since Hurricane Katrina, Tulane has continued to build new facilities and renovate old spaces on its campus. The newest dorm buildings, Lake and River Residence Halls, were completed in 2023 following the demolition of Phelps Hall and Irby Hall. Weatherhead Hall was completed in 2011, and it now houses sophomore students. Construction on Greenbaum House, a Residential College in the Newcomb Campus area, began in January 2013 and was completed by Summer 2014. The Lallage Feazel Wall Residential College was completed in August 2005 and took in its first students when Tulane re-opened in January 2006. Usually an honors dorm, Wall began accommodating students of all academic standings during the COVID-19 pandemic. The Lavin-Bernick Center for University Life (LBC) was renovated to be a green, environmentally friendly building and opened for student use in January 2007. In 2009, the university converted McAlister Drive, a street that ran through the middle of the uptown campus, into a pedestrian walkway renamed McAlister Place. The area was resurfaced, and the newly added green spaces were adorned with Japanese magnolias, irises and new lighting. In late November 2008 the City of New Orleans announced plans to add bicycle lanes to the St. Charles Avenue corridor that runs in front of campus.
In 2019, a new student space located in the middle of the uptown campus, The Malkin Sacks Commons, was opened by President Mike Fitts. The Commons is the central dining area on campus. Catering to most dietary restrictions, The Commons directly connects to the Lavin-Bernick Center on the second floor, and on its third floor houses the Newcomb Institute.
Graduate housing
There is one graduate housing complex for Tulane University, the Bertie M. and John W. Deming Pavilion, on the Downtown campus. It is not operated by the university's Department of Housing and Residence Life.
There were previously two other complexes:
Global Collective, a graduate student housing complex on the Uptown campus of Tulane university operated by the university's Department of Housing and Residence Life
Papillon Apartments, an apartment complex in the Lower Garden District operated by the university's Department of Housing and Residence Life for graduate students and their families. It was managed by HRI Properties. The university acquired the building circa 2005, which previously served as an apartment for people unaffiliated with the university. The university initially paid the taxes for the apartments of legacy non-Tulane residents but began charging the taxes to these tenants in 2013. In June 2016 the university announced it would not renew the leases of the non-Tulane tenants.
Other campuses
The Tulane University Health Sciences campus is located in the downtown New Orleans Central Business District between the Mercedes-Benz Superdome and Canal Street in 18 mid/high-rise buildings, which house the School of Medicine, the School of Public Health and Tropical Medicine, and the main campus of the Tulane Medical Center. In addition to medical and public health education, the Health Sciences campus is the central location for biomedical research. Students and faculty from the Health Sciences campus are also involved in community-wide health promotion, such as community health fairs and distributing condoms to address the high rate of STIs in New Orleans. In 2014, the Tulane University School of Social Work relocated from the Uptown campus to the Health Sciences campus, with facilities located in a renovated historic building on Elk Place.
Tulane University Square consists of building space and surrounding land located on Broadway and Leake Avenue, adjacent to the Mississippi River.
Outside of New Orleans, the Tulane National Primate Research Center in Covington, Louisiana is one of eight such centers funded by the National Institutes of Health. The F. Edward Hebert Research Center near Belle Chasse, Louisiana provides facilities for graduate training and research in computer science, bioengineering, and biology. Satellite campuses of the School of Continuing Studies, Tulane's open admissions school of continuing studies, are located in downtown New Orleans, in Elmwood, Louisiana, and in Biloxi, Mississippi. From 2010 to 2017, Tulane also operated a satellite campus in Madison, Mississippi.
Tulane offers an executive MBA program in Cali, Colombia; Santiago, Chile; Shanghai, China; and Taipei, Taiwan.
Environmental sustainability
Tulane hosted an Environmental Summit at its law school in April 2009, an event that all students could attend for free. Many students from Tulane's two active environmental groups, Green Club and Environmental Law Society, attended. These student groups push for global citizenship and environmental stewardship on campus. In 2007 Tulane made a commitment to reduce greenhouse gas emissions by 10%, getting students involved by providing an Energy Smart Shopping Guide and electronics "greening" services from IT. In 2010 Tulane completed its renovation of 88-year-old Dinwiddie Hall, which was subsequently LEED Gold certified. A new residential college, Weatherhead Hall, opened in 2011 as housing for sophomore honors students. The residence has also applied for LEED Gold certification. Tulane received an "A−" on the 2011 College Sustainability Report Card, garnering an award as one of the top 52 most sustainable colleges in the country.
Organization and administration
Tulane University, as a private institution, has been governed since 1884 by the Board of Tulane (also known as the Board of Administrators of the Tulane Educational Fund) that was established in 1882. There have been 15 presidents of Tulane since then. The board comprises more than 30 regular members (plus several members emeriti) and the university president. In 2008, Tulane became one of 76 U.S. colleges and the only Louisiana college to maintain an endowment above $1 billion.
Tulane is organized into 10 schools centered around liberal arts, sciences, and specialized professions. All undergraduate students are enrolled in the Newcomb-Tulane College. The graduate programs are governed by the individual schools. Newcomb-Tulane College serves as an administrative center for all aspects of undergraduate life at Tulane, while individual schools direct specific courses of study.
The first architecture courses at Tulane leading to an architectural engineering degree were offered in 1894. After beginning as part of the College of Technology, the Tulane School of Architecture was separately formed as a school in 1953.
The A.B. Freeman School of Business was named in honor of Alfred Bird Freeman, former chair of the Louisiana Coca-Cola Bottling Co. and a prominent New Orleans philanthropist and civic leader. The business school is ranked 44th nationally and 28th among programs at private universities by Forbes magazine. U.S. News & World Report's Best Graduate Schools 2015 edition ranked the MBA program 63rd overall.
The Tulane University Law School, established in 1847, is the 12th oldest law school in the United States. In 1990, it became the first law school in the United States to mandate pro bono work as a graduation requirement. U.S. News & World Report's 2015 edition ranked the School of Law 46th overall and 6th in environmental law, while the 2022 edition ranked the School of Law 60th overall. "The Law School 100" ranks Tulane as 34th, relying on a qualitative (rather than quantitative) assessment. The 2010 Leiter law-school rankings put Tulane at 38th, based on student quality, using LSAT and GPA data. The Hylton law-school rankings, conducted in 2006, put Tulane at 39th. The school's maritime law program is widely considered to be the best in the United States, with the Tulane Maritime Law Journal being the paramount admiralty law journal of the country. In May 2007, Tulane Law announced a Strategic Plan to increase student selectivity by gradually reducing the incoming JD class size from a historical average of 350 students per year to a target of 250 students per year within several years.
The School of Liberal Arts encompasses 16 departments and 19 interdisciplinary programs in the social sciences, humanities, and fine and performing arts—including 50 undergraduate majors and two dozen M.A., M.F.A., and Ph.D. programs—plus the Shakespeare Festival, Summer Lyric Theatre, Carroll Gallery, Tulane Marching Band, and the Middle America Research Institute. The School of Liberal Arts is the largest of Tulane's nine schools with the greatest number of enrolled students, faculty members, majors, minors, and graduate programs.
The Tulane University School of Medicine was founded in 1834 and is the 15th oldest medical school in the United States. Faculty have been noted for innovation. For example, in 1850 J. Lawrence Smith invented the inverted microscope. In the following year John Leonard Riddell invented the first practical microscope to allow binocular viewing through a single objective lens. In 2001 the Tulane Center for Gene Therapy started as the first major center in the U.S. to focus on research using adult stem cells. The school has highly selective admissions, accepting only 175 medical students from more than 10,000 applications. It comprises 20 academic departments.
The Tulane University School of Public Health and Tropical Medicine is the first public health school established in the U.S. Although a program in hygiene was initiated in 1881, the School of Hygiene and Tropical Medicine was not established until 1912 as a separate entity from the College of Medicine. In 1919 the separate school ceased to be an independent unit and was merged with the College of Medicine. By 1967 the School of Public Health and Tropical Medicine reestablished as a separate academic unit of Tulane. In the fall of 2006, the School of Public Health began admitting undergraduate students.
The Tulane University School of Science and Engineering was established in 2005.
In 1914 the Southern School of Social Sciences and Public Services was the first training program for social workers in the Deep South. By 1927 the school became a separate program with a two-year Master of Arts. The Tulane University School of Social Work has awarded the master of social work degrees to more than 4,700 students from all 50 of the United States and more than 30 other countries.
Tulane offers continuing education courses and associate's and bachelor's degrees through the Tulane School of Professional Advancement. Tulane has several academic and research institutes and centers including The Murphy Institute, Newcomb College Center for Research on Women, The Roger Thayer Stone Center for Latin American Studies, the Middle American Research Institute, and the law school's Payson Center for International Development.
Academics
As part of the post-Hurricane Katrina Renewal Plan, the university initiated an extensive university-wide core curriculum. Several major elements of the university core include freshman seminars called TIDES classes, a two-tier writing course sequence, and a two-tier course sequence for public service. Many other course requirements of the core curriculum can be certified through Advanced Placement (AP) or International Baccalaureate (IB) exam scores, or placement exams in English and foreign languages offered by the university before course registration. Some schools' core requirements differ (e.g., students in the School of Science and Engineering are required to take fewer language classes than students in the School of Liberal Arts).
Research
Tulane was elected to the Association of American Universities in 1958. It is classified among "R1: Doctoral Universities – Very high research activity" and had research expenditure of $193.3 million in fiscal year 2018.
In 2008, Tulane was ranked by the Ford Foundation as the major international studies research institution in the South and one of the top 15 nationally. The National Institutes of Health ranks funding to Tulane at 79th. The university is home to various research centers, including the Amistad Research Center.
Fulbright Scholars: 155
Rhodes Scholars: 17
Marshall Scholars: 23
Goldwater Scholars: 31
Truman Scholars: 13
National Science Foundation Fellows: 33
Rankings
Overall university rankings and ratings include:
One of 195 U.S. universities recognized by the Carnegie Foundation for the Advancement of Teaching with a "community engagement" classification.
The 2023 edition of U.S. News & World Report ranked Tulane tied for 73rd among U.S. national universities. In addition, U.S. News & World Report ranked Tulane tied for 58th "Most Innovative", tied for 62nd in "Best Undergraduate Teaching", and 114th for "Best Value" among national universities.
Tulane held multiple rankings from The Princeton Review in 2023: Best Quality of Life (#7), Best-Run Colleges (#22), Happiest Students (#1), Lots of Beer (#4), Lots of Hard Liquor (#5), Most Engaged in Community Service (#2), Their Students Love These Colleges (#6).
Forbes magazine ranked Tulane 106th in 2019 out of 650 U.S. universities, colleges and service academies.
Admissions
According to U.S. News & World Report, Tulane is deemed a "Most Selective" university. Tulane is the only institution in Louisiana to have that distinction. The school accepts the Common Application for admission. Tulane has the second lowest percentage of Pell Grant recipients in the United States, behind only Fairfield University. It is need-blind for domestic applicants.
The Office of Undergraduate Admission received over 43,000 applications for fall 2022 — over a 21 percent increase over the last five years. The acceptance rate for the class of 2026 was 8.4 percent. The yield rate was 52 percent. Among freshman students who committed to enroll in Fall 2022, the average converted SAT score was 1474. Composite ACT scores for the middle 50% ranged from 31 to 34.
Honors program admissions
The most impressive incoming undergraduate students are invited to join the honors program by the Office of Admission. Incoming freshmen who did not receive an invitation are allowed to apply for one after completing their first semester with at least a 3.8 cumulative GPA. To remain in good standing with the honors program, honors students are required to maintain at least a 3.8 cumulative GPA and enroll in honors classes their first year in the program. Honors students have access to special privileges and learning opportunities on campus.
In 2021, the Newcomb-Tulane College (NTC) created programming for First Year Honors Scholars and phased out its honors program. Many of the components of the honors program have been incorporated into other NTC programs.
Scholarships
The Dean's Honor Scholarship is a merit-based scholarship awarded by Tulane which covers full tuition for the duration of the recipient's undergraduate program. The scholarship is offered to between 75 and 100 incoming freshmen by the Office of Undergraduate Admission and is awarded only through a separate application. This scholarship is renewable provided that the recipient maintains a minimum 3.0 GPA at the end of each semester and maintains continuous enrollment in a full-time undergraduate division. Typically, recipients have an SAT I score of 1450 or higher or an ACT composite score of 33 or higher, rank in the top 5% of their high school graduating class, have a rigorous course load including honors and Advanced Placement classes, and have an outstanding record of extracurricular activities. Notable recipients include Sean M. Berkowitz and David Filo.
Beginning in 2014, Tulane has partnered with the Stamps Family Charitable Foundation to offer the full-ride merit-based Stamps Leadership Scholarship, Tulane's most prestigious scholarship. Approximately 5 incoming students are awarded the Stamps Scholarship each year, and Tulane graduated its first class of Stamps Scholars in May 2018.
Student life
The student body of Tulane University is represented by the Associated Student Body (ASB). In 1998, the students of Tulane University voted by referendum to split the Associated Student Body (ASB) Senate into two separate houses, the Undergraduate Student Government (USG) and the Graduate and Professional Student Association (GAPSA). USG and GAPSA came together twice a semester to meet as the ASB Senate, where issues pertaining to the entire Tulane student body were discussed, presided over by the ASB President. However, starting in 2021, Tulane students and administrators collaborated to create a new student governance model. The USG was dissolved, and in its place, the Tulane Undergraduate Assembly (TUA) was formed.
Tulane maintains 3,600 beds in 14 residence halls on its uptown campus for undergraduate students. First year residence halls include Warren House, Sharp Hall, Monroe Hall, Paterson Hall, Josephine Louise Hall, Wall Hall, and Butler Hall. Sophomore residence halls include Aron Residences, Décou-Labat Residences, Greenbaum Hall, Lake Hall, River Hall, and Weatherhead Hall. Per the Renewal Plan instituted after Hurricane Katrina, Tulane requires all freshmen and sophomores to live on campus, except those who are from surrounding neighborhoods in New Orleans. Due to the increasing size of incoming classes, Tulane has allowed a small number of rising sophomores to reside off campus instead of being required to remain in campus housing. Housing is not guaranteed for juniors and seniors.
Student media
The Tulane Hullabaloo is the university's weekly student-run newspaper.
Athletics
Tulane competes in NCAA Division I as a member of the American Athletic Conference (The American). The university was a charter member of the Southeastern Conference, in which it competed until 1966. Just before leaving the SEC, it had notably become the first conference school to field a black athlete when Stephen Martin, who was on an academic scholarship, played on the baseball team in the 1966 season. Tulane, along with other academically oriented, private schools, had considered forming the "Southern Ivy League" (Magnolia Conference) in the 1950s. Tulane's intercollegiate sports include football, baseball, men's and women's basketball, women's volleyball, men's and women's track, men's and women's tennis, cross country, women's swimming and diving, women's golf, women's bowling, and women's beach volleyball. Tulane's graduation rate for its student-athletes consistently ranks among the top of Division I athletics programs.
Tulane Green Wave teams have seen moderate success over the years. The school's national championships have all come from men's tennis, with one team title in 1959 and multiple singles and doubles titles. The baseball team has won multiple conference titles, and in both 2001 and 2005, it finished with 56 wins and placed 5th at the College World Series. The women's basketball team has won multiple conference titles and gone to numerous NCAA tournaments. The women's volleyball team won the 2008 Conference USA Championship tournament. The Green Wave football team won the 2002 Hawaii Bowl, the 1970 Liberty Bowl, and the inaugural Sugar Bowl. In 1998 it went 12–0, winning the Liberty Bowl and finishing the season ranked 7th in the nation by the AP and 10th by the BCS. On January 2, 2023, Tulane beat a favored USC team in the Cotton Bowl, finishing the 2022 season with a 12–2 record.
Most administrative and athletic support facilities are located in the Wilson Athletic Center in the center of Tulane's athletic campus. The adjacent area was once home to Tulane Stadium, which seated more than 80,000 people, held three Super Bowls, was home to the New Orleans Saints, and gave rise to the Sugar Bowl. Home football games moved to the Mercedes-Benz Superdome when it opened in 1975, and Tulane Stadium was demolished in 1980. The university has committed to upgrading its athletic facilities in recent years, extensively renovating Turchin Stadium (baseball) in 2008, Fogelman Arena (now Devlin Fieldhouse; basketball and volleyball) in 2006 and 2012, and Goldring Tennis Center in 2008. The Hertz Center, a new practice facility for the basketball and volleyball teams that includes athletic training and strength and conditioning rooms, offices, film rooms, and lockers, opened in 2011. Tulane completed construction of Yulman Stadium in September 2014 and began using it for home football games that season.
Notable people
Tulane is home to many alumni who have contributed to both the arts and sciences and to the political and business realms. For example, from television: Jerry Springer and Ian Terry; from literature: John Kennedy Toole, Pulitzer Prize-winning author of A Confederacy of Dunces, Shirley Ann Grau, winner of the Pulitzer Prize for Fiction, and conservative journalist Andrew Breitbart, who later criticized his education at Tulane for what he perceived as its inadequacy; from business: David Filo, co-founder of Yahoo!, Ashley Biden, daughter of Jill Biden and Joseph R. Biden, and Neil Bush, economist and brother of President George W. Bush; from entertainment: Lauren Hutton, film actor and supermodel, and Paul Michael Glaser, TV actor of "Starsky and Hutch"; from fine arts: Sergio Rossetti Morosini, artist and conservator, and internationally renowned glass artist Mitchell Gaudet; from music: conductor and composer Odaline de la Martinez, who was the first woman to conduct at a BBC Proms concert in London; from government: Newt Gingrich, former Speaker of the House who famously coordinated the first Congressional Republican majority in 40 years, Perry Chen, founder of Kickstarter, and Luther Terry, former U.S. Surgeon General, who issued the first official health hazard warning for tobacco; from medicine: Michael DeBakey and Dr. Regina Benjamin, President Obama's Surgeon General; from science: A. Baldwin Wood, inventor of the wood screw pump, and Lisa P. Jackson, United States Environmental Protection Agency (EPA) Administrator under President Obama; from sports: Bobby Brown, former New York Yankees third baseman and former president of the American League. A former graduate residence hall on campus was also named for Engineering graduate Harold Rosen, who invented the geosynchronous communications satellite. Douglas G. Hurley, NASA astronaut and pilot of mission STS-127, became the first alumnus to travel in outer space in July 2009.
Tulane has also had several prominent faculty members, including two who each won the Nobel Prize in Physiology or Medicine: Louis J. Ignarro and Andrew V. Schally. Other notables, such as Rudolph Matas, the "father of vascular surgery", and George E. Burch, inventor of the phlebomanometer, were also on the faculty. Five U.S. Supreme Court Justices have taught at Tulane, including Chief Justice William Rehnquist. Tulane has also hosted several prominent artists, most notably Mark Rothko, who was a Visiting Artist from 1956 to 1957. Currently on the faculty are Walter Isaacson, Nick Spitzer, Olawale Sulaiman and Melissa Harris-Perry.
Several football alumni played in the National Football League, including five-time NFL Champion Wide Receiver Max McGee, Mewelde Moore, Matt Forté, Troy Kropog, Dezman Moses, Cairo Santos (Chicago Bears), Darnell Mooney (Chicago Bears), and Super Bowl champion Shaun King (Tampa Bay).
Several baseball alumni played in the Major Leagues, including Brian Bogusevic (Chicago Cubs), Brandon Gomes (Tampa Bay Rays), Mark Hamilton (free agent), Aaron Loup (Toronto Blue Jays), Tommy Manzella (Colorado Rockies), Micah Owings (Washington Nationals), and J. P. France (Houston Astros).
Actor Harold Sylvester was the first African American to receive an athletic scholarship from Tulane. Turning down Harvard, he attended Tulane on a basketball scholarship and graduated in 1972 with a degree in theater and psychology.
Shalanda Young, an American political advisor and the nominee to serve as deputy director of the Office of Management and Budget (OMB) in the Biden administration, graduated with her Master of Health Administration.
In literature and media
Tulane has been portrayed in several books, television shows and films. These films include The Perfect Date, So Undercover, The Pelican Brief, College, and 22 Jump Street. Several movies have been filmed at the Uptown campus, especially since tax credits from the state of Louisiana began drawing more productions to New Orleans in the early 2000s. The Uptown campus hosted two movie premieres between 2006 and 2007.
See also
National Register of Historic Places listings in Orleans Parish, Louisiana
Newcomb Art Museum
A Studio in the Woods
Notes
References
External links
Tulane Athletics website
1834 establishments in Louisiana
Universities and colleges established in 1834
National Register of Historic Places in New Orleans
Universities and colleges accredited by the Southern Association of Colleges and Schools
Private universities and colleges in Louisiana
Need-blind educational institutions
Glassmaking schools
Universities and colleges in New Orleans | Tulane University | [
"Materials_science",
"Engineering"
] | 7,295 | [
"Glass engineering and science",
"Glassmaking schools"
] |
165,384 | https://en.wikipedia.org/wiki/Curie%20temperature | In physics and materials science, the Curie temperature (TC), or Curie point, is the temperature above which certain materials lose their permanent magnetic properties, which can (in most cases) be replaced by induced magnetism. The Curie temperature is named after Pierre Curie, who showed that magnetism is lost at a critical temperature.
The force of magnetism is determined by the magnetic moment, a dipole moment within an atom that originates from the angular momentum and spin of electrons. Materials have different structures of intrinsic magnetic moments that depend on temperature; the Curie temperature is the critical point at which a material's intrinsic magnetic moments change direction.
Permanent magnetism is caused by the alignment of magnetic moments, and induced magnetism is created when disordered magnetic moments are forced to align in an applied magnetic field. For example, the ordered magnetic moments (ferromagnetic, Figure 1) change and become disordered (paramagnetic, Figure 2) at the Curie temperature. Higher temperatures make magnets weaker, as spontaneous magnetism only occurs below the Curie temperature. Magnetic susceptibility above the Curie temperature can be calculated from the Curie–Weiss law, which is derived from Curie's law.
In analogy to ferromagnetic and paramagnetic materials, the Curie temperature can also be used to describe the phase transition between ferroelectricity and paraelectricity. In this context, the order parameter is the electric polarization that goes from a finite value to zero when the temperature is increased above the Curie temperature.
Curie temperatures of materials
History
That heating destroys magnetism was already described in De Magnete (1600): "Iron filings, after being heated for a long time, are attracted by a loadstone, yet not so strongly or from so great a distance as when not heated. A loadstone loses some of its virtue by too great a heat; for its humour is set free, whence its peculiar nature is marred." (Book 2, Chapter 23)
Magnetic moments
At the atomic level, there are two contributors to the magnetic moment, the electron magnetic moment and the nuclear magnetic moment. Of these two terms, the electron magnetic moment dominates, and the nuclear magnetic moment is insignificant. At higher temperatures, electrons have higher thermal energy. This has a randomizing effect on aligned magnetic domains, leading to the disruption of order and the phenomenon of the Curie point.
Ferromagnetic, paramagnetic, ferrimagnetic, and antiferromagnetic materials have different intrinsic magnetic moment structures. At a material's specific Curie temperature (TC), these properties change. The transition from antiferromagnetic to paramagnetic (or vice versa) occurs at the Néel temperature (TN), which is analogous to the Curie temperature.
Materials with magnetic moments that change properties at the Curie temperature
Ferromagnetic, paramagnetic, ferrimagnetic, and antiferromagnetic structures are made up of intrinsic magnetic moments. If all the electrons within the structure are paired, these moments cancel out due to their opposite spins and angular momenta. Thus, even with an applied magnetic field, these materials have different properties and no Curie temperature.
Paramagnetic
A material is paramagnetic only above its Curie temperature. Paramagnetic materials are non-magnetic when a magnetic field is absent and magnetic when a magnetic field is applied. When a magnetic field is absent, the material has disordered magnetic moments; that is, the magnetic moments are asymmetrical and not aligned. When a magnetic field is present, the magnetic moments are temporarily realigned parallel to the applied field; the magnetic moments are symmetrical and aligned. The magnetic moments being aligned in the same direction are what causes an induced magnetic field.
For paramagnetism, this response to an applied magnetic field is positive and is known as magnetic susceptibility. The magnetic susceptibility only applies above the Curie temperature for disordered states.
Sources of paramagnetism (materials which have Curie temperatures) include:
All atoms that have unpaired electrons;
Atoms that have inner shells that are incomplete in electrons;
Free radicals;
Metals.
Above the Curie temperature, the atoms are excited, and the spin orientations become randomized but can be realigned by an applied field, i.e., the material becomes paramagnetic. Below the Curie temperature, the intrinsic structure has undergone a phase transition, the atoms are ordered, and the material is ferromagnetic. The paramagnetic materials' induced magnetic fields are very weak compared with ferromagnetic materials' magnetic fields.
Ferromagnetic
Materials are only ferromagnetic below their corresponding Curie temperatures. Ferromagnetic materials are magnetic in the absence of an applied magnetic field.
When a magnetic field is absent the material has spontaneous magnetization which is a result of the ordered magnetic moments; that is, for ferromagnetism, the atoms are symmetrical and aligned in the same direction creating a permanent magnetic field.
The magnetic interactions are held together by exchange interactions; otherwise thermal disorder would overcome the weak interactions of magnetic moments. The exchange interaction gives parallel electrons zero probability of occupying the same point in space at the same time, implying a preferred parallel alignment of moments in the material. The Boltzmann factor also contributes, as it favours interacting particles being aligned in the same direction. This causes ferromagnets to have strong magnetic fields and high Curie temperatures of around 1,000 K.
Below the Curie temperature, the atoms are aligned and parallel, causing spontaneous magnetism; the material is ferromagnetic. Above the Curie temperature the material is paramagnetic, as the atoms lose their ordered magnetic moments when the material undergoes a phase transition.
Ferrimagnetic
Materials are only ferrimagnetic below their corresponding Curie temperature. Ferrimagnetic materials are magnetic in the absence of an applied magnetic field and are made up of two different ions.
When a magnetic field is absent the material has a spontaneous magnetism which is the result of ordered magnetic moments; that is, for ferrimagnetism one ion's magnetic moments are aligned facing in one direction with a certain magnitude and the other ion's magnetic moments are aligned facing in the opposite direction with a different magnitude. As the magnetic moments are of different magnitudes in opposite directions there is still a spontaneous magnetism and a magnetic field is present.
Similar to ferromagnetic materials, the magnetic interactions are held together by exchange interactions. The orientations of the moments are, however, anti-parallel, which results in a net magnetic moment equal to the difference between the moments of the two ions.
Below the Curie temperature the atoms of each ion are aligned anti-parallel with different magnitudes, causing a spontaneous magnetism; the material is ferrimagnetic. Above the Curie temperature the material is paramagnetic, as the atoms lose their ordered magnetic moments when the material undergoes a phase transition.
Antiferromagnetic and the Néel temperature
Materials are only antiferromagnetic below their corresponding Néel temperature or magnetic ordering temperature, TN. This is similar to the Curie temperature as above the Néel Temperature the material undergoes a phase transition and becomes paramagnetic. That is, the thermal energy becomes large enough to destroy the microscopic magnetic ordering within the material. It is named after Louis Néel (1904–2000), who received the 1970 Nobel Prize in Physics for his work in the area.
The material has equal magnetic moments aligned in opposite directions resulting in a zero magnetic moment and a net magnetism of zero at all temperatures below the Néel temperature. Antiferromagnetic materials are weakly magnetic in the absence or presence of an applied magnetic field.
Similar to ferromagnetic materials the magnetic interactions are held together by exchange interactions preventing thermal disorder from overcoming the weak interactions of magnetic moments. When disorder occurs it is at the Néel temperature.
Listed below are the Néel temperatures of several materials:
Curie–Weiss law
The Curie–Weiss law is an adapted version of Curie's law.
The Curie–Weiss law is a simple model derived from a mean-field approximation. This means it works well for temperatures T much greater than the corresponding Curie temperature TC, i.e. T ≫ TC; it fails, however, to describe the magnetic susceptibility χ in the immediate vicinity of the Curie point because of correlations in the fluctuations of neighboring magnetic moments.
Neither Curie's law nor the Curie–Weiss law holds for T < TC.
Curie's law for a paramagnetic material:
χ = C / T
The Curie constant C is defined as
C = (μ0 μB² / 3kB) n g² J(J + 1)
where n is the number of magnetic atoms (or molecules) per unit volume, g is the Landé g-factor, μB is the Bohr magneton, J is the angular momentum quantum number and kB is Boltzmann's constant.
The Curie–Weiss law is then derived from Curie's law to be:
χ = C / (T − TC)
where:
TC = Cλ, and λ is the Weiss molecular field constant.
For full derivation see Curie–Weiss law.
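As a rough numerical illustration of the mean-field behaviour described above, the short Python sketch below evaluates the Curie–Weiss susceptibility χ = C / (T − TC) for temperatures above TC. The Curie constant and Curie temperature used here are illustrative placeholder values (the TC is loosely inspired by nickel), not measured material parameters.

```python
# Minimal sketch of the Curie-Weiss law above the Curie temperature.
# The constants below are placeholders chosen only to show the
# 1 / (T - Tc) behaviour; they are not measured material parameters.

def curie_weiss_susceptibility(T, C, Tc):
    """Return the Curie-Weiss susceptibility C / (T - Tc).

    Only meaningful for T above Tc; the mean-field form breaks down
    in the immediate vicinity of the Curie point.
    """
    if T <= Tc:
        raise ValueError("the Curie-Weiss law applies only above the Curie temperature")
    return C / (T - Tc)

if __name__ == "__main__":
    C_example = 0.03    # Curie constant in kelvin (arbitrary illustrative value)
    Tc_example = 627.0  # Curie temperature in kelvin (roughly nickel; assumed)
    for T in (700.0, 800.0, 1000.0, 1500.0):
        chi = curie_weiss_susceptibility(T, C_example, Tc_example)
        print(f"T = {T:6.1f} K  ->  chi = {chi:.6f}")
```

Doubling the distance from TC roughly halves the computed susceptibility, which is the characteristic mean-field behaviour far above the Curie point.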
Physics
Approaching Curie temperature from above
As the Curie–Weiss law is an approximation, a more accurate model is needed when the temperature T approaches the material's Curie temperature TC.
Magnetic susceptibility occurs above the Curie temperature.
An accurate model of critical behaviour for the magnetic susceptibility with critical exponent γ is:
χ ∝ 1 / (T − TC)^γ
The critical exponent differs between materials, and for the mean-field model it is taken as γ = 1.
As the magnetic susceptibility is inversely proportional to (T − TC)^γ, when T approaches TC the denominator tends to zero and the magnetic susceptibility approaches infinity, allowing magnetism to occur. This is a spontaneous magnetism, which is a property of ferromagnetic and ferrimagnetic materials.
Approaching Curie temperature from below
Magnetism depends on temperature, and spontaneous magnetism occurs below the Curie temperature. An accurate model of critical behaviour for the spontaneous magnetism with critical exponent β is:
M ∝ (TC − T)^β
The critical exponent differs between materials, and for the mean-field model it is taken as β = 1/2.
The spontaneous magnetism approaches zero as the temperature increases towards the material's Curie temperature.
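The two power laws above can be made concrete with a minimal Python sketch. The exponents are the mean-field values given in the text (γ = 1, β = 1/2); the Curie temperature and the prefactors are assumed illustrative values, not data for any specific material.

```python
# Sketch of mean-field critical behaviour near the Curie temperature:
# susceptibility chi ~ (T - Tc)^(-gamma) above Tc, and spontaneous
# magnetization M ~ (Tc - T)^beta below Tc.

GAMMA = 1.0  # mean-field susceptibility exponent
BETA = 0.5   # mean-field magnetization exponent

def susceptibility(T, Tc, amplitude=1.0):
    """chi ~ (T - Tc)^(-gamma); diverges as T approaches Tc from above."""
    if T <= Tc:
        raise ValueError("defined here only for T > Tc")
    return amplitude / (T - Tc) ** GAMMA

def spontaneous_magnetization(T, Tc, amplitude=1.0):
    """M ~ (Tc - T)^beta below Tc; zero at and above Tc."""
    if T >= Tc:
        return 0.0
    return amplitude * (Tc - T) ** BETA

if __name__ == "__main__":
    Tc = 1000.0  # illustrative Curie temperature in kelvin (assumed value)
    for T in (900.0, 990.0, 999.0):
        print(f"T = {T:7.1f} K (below Tc): M   = {spontaneous_magnetization(T, Tc):8.3f}")
    for T in (1001.0, 1010.0, 1100.0):
        print(f"T = {T:7.1f} K (above Tc): chi = {susceptibility(T, Tc):8.3f}")
```

Running the sketch shows the magnetization falling towards zero as T rises towards TC from below, and the susceptibility growing without bound as T falls towards TC from above.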
Approaching absolute zero (0 kelvin)
The spontaneous magnetism, occurring in ferromagnetic, ferrimagnetic, and antiferromagnetic materials, approaches zero as the temperature increases towards the material's Curie temperature. Spontaneous magnetism is at its maximum as the temperature approaches 0 K. That is, the magnetic moments are completely aligned and at their strongest magnitude of magnetism due to lack of thermal disturbance.
In paramagnetic materials thermal energy is sufficient to overcome the ordered alignments. As the temperature approaches 0 K, the entropy decreases to zero, that is, the disorder decreases and the material becomes ordered. This occurs without the presence of an applied magnetic field and obeys the third law of thermodynamics.
Both Curie's law and the Curie–Weiss law fail as the temperature approaches 0 K. This is because they depend on the magnetic susceptibility, which only applies when the state is disordered.
Gadolinium sulfate continues to satisfy Curie's law at 1 K. Between 0 and 1 K the law fails to hold and a sudden change in the intrinsic structure occurs at the Curie temperature.
Ising model of phase transitions
The Ising model is mathematically based and can analyse the critical points of phase transitions in ferromagnetic order due to the spins of electrons having magnitudes of ±1/2. The spins interact with their neighbouring dipole electrons in the structure, and here the Ising model can predict their behaviour with each other.
This model is important for solving and understanding the concepts of phase transitions and hence solving the Curie temperature. As a result, many different dependencies that affect the Curie temperature can be analysed.
For example, the surface and bulk properties depend on the alignment and magnitude of spins and the Ising model can determine the effects of magnetism in this system.
One should note that in 1D the Curie (critical) temperature for a magnetic order phase transition is found to be at zero temperature, i.e. the magnetic order takes over only at T = 0. In 2D a finite critical temperature exists and can be calculated exactly; for the isotropic square lattice, Onsager's solution gives kB TC = 2J / ln(1 + √2) ≈ 2.269 J, where J is the nearest-neighbour exchange coupling.
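A short Python sketch can evaluate this exact result and check it against the self-duality condition sinh(2J / (kB TC)) = 1 for the isotropic square lattice; J is treated as the unit of energy (kB = 1), so temperatures come out in units of J.

```python
# Exact critical temperature of the isotropic 2D Ising model on a square
# lattice (Onsager / Kramers-Wannier): sinh(2J / (kB*Tc)) = 1, which gives
# kB*Tc = 2J / ln(1 + sqrt(2)) ~= 2.269 J. Units: kB = 1, temperatures in J.

import math

def ising_2d_critical_temperature(J=1.0):
    """Return Tc (in units of J / kB) for the isotropic square-lattice Ising model."""
    return 2.0 * J / math.log(1.0 + math.sqrt(2.0))

def duality_residual(T, J=1.0):
    """sinh(2J / T) - 1; equals zero exactly at the critical temperature."""
    return math.sinh(2.0 * J / T) - 1.0

if __name__ == "__main__":
    Tc = ising_2d_critical_temperature()
    print(f"Tc = {Tc:.6f} J/kB")                          # ~ 2.269185
    print(f"residual at Tc: {duality_residual(Tc):.2e}")  # ~ 0
```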
Weiss domains and surface and bulk Curie temperatures
Materials structures consist of intrinsic magnetic moments which are separated into domains called Weiss domains. This can result in ferromagnetic materials having no spontaneous magnetism as domains could potentially balance each other out. The position of particles can therefore have different orientations around the surface than the main part (bulk) of the material. This property directly affects the Curie temperature as there can be a bulk Curie temperature and a different surface Curie temperature for a material.
This allows the surface of the material to remain ferromagnetically ordered above its bulk Curie temperature when the bulk state is disordered, i.e. ordered and disordered states occur simultaneously.
The surface and bulk properties can be predicted by the Ising model and electron capture spectroscopy can be used to detect the electron spins and hence the magnetic moments on the surface of the material. An average total magnetism is taken from the bulk and surface temperatures to calculate the Curie temperature from the material, noting the bulk contributes more.
The angular momentum of an electron is either +ħ/2 or −ħ/2 due to it having a spin of 1/2, which gives a specific size of magnetic moment to the electron: the Bohr magneton. Electrons orbiting around the nucleus in a current loop create a magnetic field which depends on the Bohr magneton and the magnetic quantum number. Therefore, the spin and orbital angular momenta both contribute to the magnetic moment and affect each other. Spin angular momentum contributes twice as much to the magnetic moment as the same amount of orbital angular momentum.
For terbium, which is a rare-earth metal with a high orbital angular momentum, the magnetic moment is strong enough to affect the order above its bulk temperatures. It is said to have a high anisotropy on the surface, that is, it is highly directed in one orientation. Terbium remains ferromagnetic on its surface above its Curie temperature (219 K) while its bulk becomes antiferromagnetic; at higher temperatures its surface remains antiferromagnetic above its bulk Néel temperature (230 K) before becoming completely disordered and paramagnetic with increasing temperature. The anisotropy in the bulk is different from its surface anisotropy just above these phase changes, as the magnetic moments will be ordered differently or ordered as in paramagnetic materials.
Changing a material's Curie temperature
Composite materials
Composite materials, that is, materials composed from other materials with different properties, can change the Curie temperature. For example, a composite containing silver can create spaces for oxygen molecules in bonding, which decreases the Curie temperature, as the crystal lattice will not be as compact.
The alignment of magnetic moments in the composite material affects the Curie temperature. If the material's moments are parallel with each other, the Curie temperature will increase and if perpendicular the Curie temperature will decrease as either more or less thermal energy will be needed to destroy the alignments.
Preparing composite materials through different temperatures can result in different final compositions which will have different Curie temperatures. Doping a material can also affect its Curie temperature.
The density of nanocomposite materials changes the Curie temperature. Nanocomposites are compact structures on a nano-scale. The structure is built up of regions with high and low bulk Curie temperatures; however, it will only have one mean-field Curie temperature. A higher density of lower bulk temperatures results in a lower mean-field Curie temperature, and a higher density of higher bulk temperatures significantly increases the mean-field Curie temperature. In more than one dimension the Curie temperature begins to increase as the magnetic moments will need more thermal energy to overcome the ordered structure.
Particle size
The size of particles in a material's crystal lattice changes the Curie temperature. Due to the small size of particles (nanoparticles) the fluctuations of electron spins become more prominent, which results in the Curie temperature drastically decreasing when the size of particles decreases, as the fluctuations cause disorder. The size of a particle also affects the anisotropy causing alignment to become less stable and thus lead to disorder in magnetic moments.
The extreme of this is superparamagnetism which only occurs in small ferromagnetic particles. In this phenomenon, fluctuations are very influential causing magnetic moments to change direction randomly and thus create disorder.
The Curie temperature of nanoparticles is also affected by the crystal lattice structure: body-centred cubic (bcc), face-centred cubic (fcc), and hexagonal (hcp) structures all have different Curie temperatures due to magnetic moments reacting to their neighbouring electron spins. fcc and hcp have tighter structures and as a result have higher Curie temperatures than bcc, as the magnetic moments have stronger effects when closer together. This is known as the coordination number, which is the number of nearest neighbouring particles in a structure. This indicates a lower coordination number at the surface of a material than in the bulk, which leads to the surface becoming less significant when the temperature is approaching the Curie temperature. In smaller systems the coordination number for the surface is more significant and the magnetic moments have a stronger effect on the system.
Although fluctuations in particles can be minuscule, they are heavily dependent on the structure of crystal lattices as they react with their nearest neighbouring particles. Fluctuations are also affected by the exchange interaction as parallel facing magnetic moments are favoured and therefore have less disturbance and disorder, therefore a tighter structure influences a stronger magnetism and therefore a higher Curie temperature.
Pressure
Pressure changes a material's Curie temperature. Increasing pressure on the crystal lattice decreases the volume of the system. Pressure directly affects the kinetic energy in particles as movement increases causing the vibrations to disrupt the order of magnetic moments. This is similar to temperature as it also increases the kinetic energy of particles and destroys the order of magnetic moments and magnetism.
Pressure also affects the density of states (DOS). Here the DOS decreases, causing the number of electrons available to the system to decrease. This leads to the number of magnetic moments decreasing, as they depend on electron spins. It would be expected because of this that the Curie temperature would decrease; however, it increases. This is the result of the exchange interaction. The exchange interaction favours aligned parallel magnetic moments, due to electrons being unable to occupy the same space at the same time, and as this effect is strengthened by the decreasing volume, the Curie temperature increases with pressure. The Curie temperature is made up of a combination of dependencies on kinetic energy and the DOS.
The concentration of particles also affects the Curie temperature when pressure is being applied and can result in a decrease in Curie temperature when the concentration is above a certain percent.
Orbital ordering
Orbital ordering changes the Curie temperature of a material. Orbital ordering can be controlled through applied strains. This is a function that determines the wave of a single electron or paired electrons inside the material. Having control over the probability of where the electron will be allows the Curie temperature to be altered. For example, the delocalised electrons can be moved onto the same plane by applied strains within the crystal lattice.
The Curie temperature is seen to increase greatly due to electrons being packed together in the same plane; they are forced to align due to the exchange interaction, which increases the strength of the magnetic moments and prevents thermal disorder at lower temperatures.
Curie temperature in ferroelectric materials
In analogy to ferromagnetic and paramagnetic materials, the term Curie temperature (TC) is also applied to the temperature at which a ferroelectric material transitions to being paraelectric. Hence, TC is the temperature where ferroelectric materials lose their spontaneous polarisation as a first- or second-order phase change occurs. In the case of a second-order transition, the Curie–Weiss temperature T0, which defines the maximum of the dielectric constant, is equal to the Curie temperature. However, the Curie temperature can be 10 K higher than T0 in the case of a first-order transition.
Ferroelectric and dielectric
Materials are only ferroelectric below their corresponding transition temperature TC. Ferroelectric materials are all pyroelectric and therefore have a spontaneous electric polarisation as the structures are unsymmetrical.
Ferroelectric materials' polarization is subject to hysteresis (Figure 4); that is, they are dependent on their past state as well as their current state. As an electric field is applied the dipoles are forced to align and polarisation is created; when the electric field is removed, polarisation remains. The hysteresis loop depends on temperature, and as a result, as the temperature is increased and reaches TC, the two curves become one curve, as shown in the dielectric polarisation (Figure 5).
Relative permittivity
A modified version of the Curie–Weiss law applies to the dielectric constant, also known as the relative permittivity:
ε = ε0 + C / (T − T0)
where C is a material-specific Curie constant and T0 is the Curie–Weiss temperature.
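To make the temperature dependence concrete, here is a minimal Python sketch that evaluates this Curie–Weiss-type form for the relative permittivity above T0. The background permittivity, Curie constant and Curie–Weiss temperature used here are assumed illustrative values, not data for any particular ferroelectric.

```python
# Sketch of the Curie-Weiss form for the relative permittivity above the
# transition: eps(T) = eps_b + C / (T - T0). All parameter values below
# are illustrative placeholders, not measured material constants.

def relative_permittivity(T, eps_b, C, T0):
    """Curie-Weiss-type relative permittivity, valid for T above T0."""
    if T <= T0:
        raise ValueError("this form applies only above the Curie-Weiss temperature T0")
    return eps_b + C / (T - T0)

if __name__ == "__main__":
    eps_b = 100.0  # background contribution to the permittivity (assumed)
    C = 1.5e5      # Curie constant in kelvin (assumed illustrative value)
    T0 = 390.0     # Curie-Weiss temperature in kelvin (assumed)
    for T in (400.0, 450.0, 500.0, 600.0):
        print(f"T = {T:6.1f} K  ->  eps_r = {relative_permittivity(T, eps_b, C, T0):9.1f}")
```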
Applications
A heat-induced ferromagnetic-paramagnetic transition is used in magneto-optical storage media for erasing and writing of new data. Famous examples include the Sony Minidisc format as well as the now-obsolete CD-MO format. Curie point electro-magnets have been proposed and tested for actuation mechanisms in passive safety systems of fast breeder reactors, where control rods are dropped into the reactor core if the actuation mechanism heats up beyond the material's Curie point. Other uses include temperature control in soldering irons and stabilizing the magnetic field of tachometer generators against temperature variation.
See also
Notes
References
External links
Ferromagnetic Curie Point. Video by Walter Lewin, M.I.T.
Critical phenomena
Phase transitions
Temperature
Pierre Curie | Curie temperature | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics"
] | 4,447 | [
"Scalar physical quantities",
"Temperature",
"Phase transitions",
"Physical phenomena",
"Physical quantities",
"Thermodynamic properties",
"SI base quantities",
"Intensive quantities",
"Critical phenomena",
"Phases of matter",
"Condensed matter physics",
"Thermodynamics",
"Statistical mechan... |
165,390 | https://en.wikipedia.org/wiki/Venality | Venality is a vice associated with being bribeable or willing to sell one's services or power, especially when people are intended to act in a decent way instead. In its most recognizable form, venality causes people to lie and steal for their own personal advantage, and is related to bribery and nepotism, among other vices.
Though not in line with dictionary definitions of the term, modern writers often use it to connote vices only tangentially related to bribery or self-interest, such as cruelty, selfishness, and general dishonesty.
Context
Venality in its mild form is a vice notable especially among those with government or military careers. For example, the Ancien Régime in France, from the 1500s through the late 1700s, was notorious for the venality of many government officials. In these fields, one is ideally supposed to act with justice and honor and not accept bribes. That ensures that the organization is not susceptible to manipulation by self-interested parties.
In contrast to the previous interpretation, dishonesty is not specifically expressed in the literal meaning, but is often implied. The condition of failing to act justly is not a literal component of the word's meaning either. By definition, committing "venal" acts does not indicate "stealing" or "lying", but rather suggests a consensual arrangement, perhaps without conscience or regard for consequences, but is not synonymous with stealing. While bribery could be related, nepotism clearly has no literal similarity or correlation with venality. Though venality is generally used as a pejorative term, an individual or entity could be venal (or mercenary) and not be corrupt or unethical. One could perform one's duties or job in a perfunctory manner in order to collect a wage or payment, or prostitute one's time or skills for monetary or material gain, without necessarily being dishonest.
Much contemporary use of the words venal or venality is applied to modern professional athletes, particularly baseball, basketball, American football, and soccer players all around the world. The implication is that the highly paid players are essentially "hired guns" with no allegiance to any team or city, and are motivated solely by the acquisition of material wealth.
In revolution and other moral panics
For people to accept settlements and legislation, the acts of the government must be seen as just. This perception enhances the legitimacy of the government. Venality is a term often used with reference to pre-revolutionary France, where it describes the then-widespread practice of selling administrative positions within the government to the highest bidder, especially regarding the Nobles of the Robe.
Thus, for example, venality was a charge for which, in part, Danton and others were executed during the Reign of Terror.
References
Human behavior
Corruption | Venality | [
"Biology"
] | 582 | [
"Behavior",
"Human behavior"
] |
165,423 | https://en.wikipedia.org/wiki/Digestion | Digestion is the breakdown of large insoluble food compounds into small water-soluble components so that they can be absorbed into the blood plasma. In certain organisms, these smaller substances are absorbed through the small intestine into the blood stream. Digestion is a form of catabolism that is often divided into two processes based on how food is broken down: mechanical and chemical digestion. The term mechanical digestion refers to the physical breakdown of large pieces of food into smaller pieces which can subsequently be accessed by digestive enzymes. Mechanical digestion takes place in the mouth through mastication and in the small intestine through segmentation contractions. In chemical digestion, enzymes break down food into the small compounds that the body can use.
In the human digestive system, food enters the mouth and mechanical digestion of the food starts by the action of mastication (chewing) and the wetting contact of saliva. Saliva, a liquid secreted by the salivary glands, contains salivary amylase, an enzyme which starts the digestion of starch in the food. The saliva also contains mucus, which lubricates the food; the electrolyte hydrogencarbonate (HCO3−), which provides the ideal conditions of pH for amylase to work; and other electrolytes (Na+, K+, Cl−). About 30% of starch is hydrolyzed into disaccharide in the oral cavity (mouth). After undergoing mastication and starch digestion, the food will be in the form of a small, round slurry mass called a bolus. It will then travel down the esophagus and into the stomach by the action of peristalsis. Gastric juice in the stomach starts protein digestion. Gastric juice mainly contains hydrochloric acid and pepsin. In infants and toddlers, gastric juice also contains rennin to digest milk proteins. As the first two chemicals may damage the stomach wall, mucus and bicarbonates are secreted by the stomach. They provide a slimy layer that acts as a shield against the damaging effects of chemicals like concentrated hydrochloric acid while also aiding lubrication. Hydrochloric acid provides the acidic pH for pepsin. While protein digestion is occurring, mechanical mixing occurs by peristalsis, which is waves of muscular contractions that move along the stomach wall. This allows the mass of food to further mix with the digestive enzymes. Pepsin breaks down proteins into peptides or proteoses, which are further broken down into dipeptides and amino acids by enzymes in the small intestine. Studies suggest that increasing the number of chews per bite increases relevant gut hormones and may decrease self-reported hunger and food intake.
When the pyloric sphincter valve opens, partially digested food (chyme) enters the duodenum where it mixes with digestive enzymes from the pancreas and bile juice from the liver and then passes through the small intestine, in which digestion continues. When the chyme is fully digested, it is absorbed into the blood. 95% of nutrient absorption occurs in the small intestine. Water and minerals are reabsorbed into the blood in the colon (large intestine), where the pH is slightly acidic (about 5.6 ~ 6.9). Some vitamins, such as biotin and vitamin K (K2MK7) produced by bacteria in the colon, are also absorbed into the blood in the colon. Absorption of water, simple sugars and alcohol also takes place in the stomach. Waste material (feces) is eliminated from the rectum during defecation.
Digestive system
Digestive systems take many forms. There is a fundamental distinction between internal and external digestion. External digestion developed earlier in evolutionary history, and most fungi still rely on it. In this process, enzymes are secreted into the environment surrounding the organism, where they break down an organic material, and some of the products diffuse back to the organism. Animals have a tube (gastrointestinal tract) in which internal digestion occurs, which is more efficient because more of the broken down products can be captured, and the internal chemical environment can be more efficiently controlled.
Some organisms, including nearly all spiders, secrete biotoxins and digestive chemicals (e.g., enzymes) into the extracellular environment prior to ingestion of the consequent "soup". In others, once potential nutrients or food is inside the organism, digestion can be conducted to a vesicle or a sac-like structure, through a tube, or through several specialized organs aimed at making the absorption of nutrients more efficient.
Secretion systems
Bacteria use several systems to obtain nutrients from other organisms in their environments.
Channel transport system
In a channel transport system, several proteins form a contiguous channel traversing the inner and outer membranes of the bacteria. It is a simple system, which consists of only three protein subunits: the ABC protein, membrane fusion protein (MFP), and outer membrane protein. This secretion system transports various chemical species, from ions, drugs, to proteins of various sizes (20–900 kDa). The chemical species secreted vary in size from the small Escherichia coli peptide colicin V, (10 kDa) to the Pseudomonas fluorescens cell adhesion protein LapA of 900 kDa.
Molecular syringe
A type III secretion system means that a molecular syringe is used through which a bacterium (e.g. certain types of Salmonella, Shigella, Yersinia) can inject nutrients into protist cells. One such mechanism was first discovered in Y. pestis and showed that toxins could be injected directly from the bacterial cytoplasm into the cytoplasm of its host's cells rather than be secreted into the extracellular medium.
Conjugation machinery
The conjugation machinery of some bacteria (and archaeal flagella) is capable of transporting both DNA and proteins. It was discovered in Agrobacterium tumefaciens, which uses this system to introduce the Ti plasmid and proteins into the host, which develops the crown gall (tumor). The VirB complex of Agrobacterium tumefaciens is the prototypic system.
In the nitrogen-fixing Rhizobia, conjugative elements naturally engage in inter-kingdom conjugation. Such elements as the Agrobacterium Ti or Ri plasmids contain elements that can transfer to plant cells. Transferred genes enter the plant cell nucleus and effectively transform the plant cells into factories for the production of opines, which the bacteria use as carbon and energy sources. Infected plant cells form crown gall or root tumors. The Ti and Ri plasmids are thus endosymbionts of the bacteria, which are in turn endosymbionts (or parasites) of the infected plant.
The Ti and Ri plasmids are themselves conjugative. Ti and Ri transfer between bacteria uses an independent system (the tra, or transfer, operon) from that for inter-kingdom transfer (the vir, or virulence, operon). Such transfer creates virulent strains from previously avirulent Agrobacteria.
Release of outer membrane vesicles
In addition to the use of the multiprotein complexes listed above, gram-negative bacteria possess another method for release of material: the formation of outer membrane vesicles. Portions of the outer membrane pinch off, forming spherical structures made of a lipid bilayer enclosing periplasmic materials. Vesicles from a number of bacterial species have been found to contain virulence factors, some have immunomodulatory effects, and some can directly adhere to and intoxicate host cells. While release of vesicles has been demonstrated as a general response to stress conditions, the process of loading cargo proteins seems to be selective.
Gastrovascular cavity
The gastrovascular cavity functions as a stomach in both digestion and the distribution of nutrients to all parts of the body. Extracellular digestion takes place within this central cavity, which is lined with the gastrodermis, the internal layer of epithelium. This cavity has only one opening to the outside that functions as both a mouth and an anus: waste and undigested matter is excreted through the mouth/anus, which can be described as an incomplete gut.
In a plant such as the Venus flytrap that can make its own food through photosynthesis, it does not eat and digest its prey for the traditional objectives of harvesting energy and carbon, but mines prey primarily for essential nutrients (nitrogen and phosphorus in particular) that are in short supply in its boggy, acidic habitat.
Phagosome
A phagosome is a vacuole formed around a particle absorbed by phagocytosis. The vacuole is formed by the fusion of the cell membrane around the particle. A phagosome is a cellular compartment in which pathogenic microorganisms can be killed and digested. Phagosomes fuse with lysosomes in their maturation process, forming phagolysosomes. In humans, Entamoeba histolytica can phagocytose red blood cells.
Specialised organs and behaviours
To aid in the digestion of their food, animals evolved organs such as beaks, tongues, radulae, teeth, crops, gizzards, and others.
Beaks
Birds have bony beaks that are specialised according to the bird's ecological niche. For example, macaws primarily eat seeds, nuts, and fruit, using their beaks to open even the toughest seed. First they scratch a thin line with the sharp point of the beak, then they shear the seed open with the sides of the beak.
The mouth of the squid is equipped with a sharp horny beak mainly made of cross-linked proteins. It is used to kill and tear prey into manageable pieces. The beak is very robust, but does not contain any minerals, unlike the teeth and jaws of many other organisms, including marine species. The beak is the only indigestible part of the squid.
Tongue
The tongue is skeletal muscle on the floor of the mouth of most vertebrates, that manipulates food for chewing (mastication) and swallowing (deglutition). It is sensitive and kept moist by saliva. The underside of the tongue is covered with a smooth mucous membrane. The tongue also has a touch sense for locating and positioning food particles that require further chewing. The tongue is used to roll food particles into a bolus before being transported down the esophagus through peristalsis.
The sublingual region underneath the front of the tongue is a location where the oral mucosa is very thin, and underlain by a plexus of veins. This is an ideal location for introducing certain medications to the body. The sublingual route takes advantage of the highly vascular quality of the oral cavity, and allows for the speedy application of medication into the cardiovascular system, bypassing the gastrointestinal tract.
Teeth
Teeth (singular tooth) are small whitish structures found in the jaws (or mouths) of many vertebrates that are used to tear, scrape, milk and chew food. Teeth are not made of bone, but rather of tissues of varying density and hardness, such as enamel, dentine and cementum. Human teeth have a blood and nerve supply which enables proprioception. This is the ability of sensation when chewing: for example, on biting into something too hard for the teeth, such as a chipped plate mixed in with food, the teeth send a message to the brain signalling that it cannot be chewed, and chewing stops.
The shapes, sizes and numbers of types of animals' teeth are related to their diets. For example, herbivores have a number of molars which are used to grind plant matter, which is difficult to digest. Carnivores have canine teeth which are used to kill and tear meat.
Crop
A crop, or croup, is a thin-walled expanded portion of the alimentary tract used for the storage of food prior to digestion. In some birds it is an expanded, muscular pouch near the gullet or throat. In adult doves and pigeons, the crop can produce crop milk to feed newly hatched birds.
Certain insects may have a crop or enlarged esophagus.
Abomasum
Herbivores have evolved cecums (or an abomasum in the case of ruminants). Ruminants have a fore-stomach with four chambers. These are the rumen, reticulum, omasum, and abomasum. In the first two chambers, the rumen and the reticulum, the food is mixed with saliva and separates into layers of solid and liquid material. Solids clump together to form the cud (or bolus). The cud is then regurgitated, chewed slowly to completely mix it with saliva and to break down the particle size.
Fibre, especially cellulose and hemi-cellulose, is primarily broken down into the volatile fatty acids, acetic acid, propionic acid and butyric acid in these chambers (the reticulo-rumen) by microbes: (bacteria, protozoa, and fungi). In the omasum, water and many of the inorganic mineral elements are absorbed into the blood stream.
The abomasum is the fourth and final stomach compartment in ruminants. It is a close equivalent of a monogastric stomach (e.g., those in humans or pigs), and digesta is processed here in much the same way. It serves primarily as a site for acid hydrolysis of microbial and dietary protein, preparing these protein sources for further digestion and absorption in the small intestine. Digesta is finally moved into the small intestine, where the digestion and absorption of nutrients occurs. Microbes produced in the reticulo-rumen are also digested in the small intestine.
Specialised behaviours
Regurgitation has been mentioned above under abomasum and crop, referring to crop milk, a secretion from the lining of the crop of pigeons and doves with which the parents feed their young by regurgitation.
Many sharks have the ability to turn their stomachs inside out and evert it out of their mouths in order to get rid of unwanted contents (perhaps developed as a way to reduce exposure to toxins).
Other animals, such as rabbits and rodents, practise coprophagia behaviours – eating specialised faeces in order to re-digest food, especially in the case of roughage. Capybara, rabbits, hamsters and other related species do not have a complex digestive system as do, for example, ruminants. Instead they extract more nutrition from grass by giving their food a second pass through the gut. Soft faecal pellets of partially digested food are excreted and generally consumed immediately. They also produce normal droppings, which are not eaten.
Young elephants, pandas, koalas, and hippos eat the faeces of their mother, probably to obtain the bacteria required to properly digest vegetation. When they are born, their intestines do not contain these bacteria (they are completely sterile). Without them, they would be unable to get any nutritional value from many plant components.
In earthworms
An earthworm's digestive system consists of a mouth, pharynx, esophagus, crop, gizzard, and intestine. The mouth is surrounded by strong lips, which act like a hand to grab pieces of dead grass, leaves, and weeds, with bits of soil to help chew. The lips break the food down into smaller pieces. In the pharynx, the food is lubricated by mucus secretions for easier passage. The esophagus adds calcium carbonate to neutralize the acids formed by food matter decay. Temporary storage occurs in the crop where food and calcium carbonate are mixed. The powerful muscles of the gizzard churn and mix the mass of food and dirt. When the churning is complete, the glands in the walls of the gizzard add enzymes to the thick paste, which helps chemically breakdown the organic matter. By peristalsis, the mixture is sent to the intestine where friendly bacteria continue chemical breakdown. This releases carbohydrates, protein, fat, and various vitamins and minerals for absorption into the body.
Overview of vertebrate digestion
In most vertebrates, digestion is a multistage process in the digestive system, starting from ingestion of raw materials, most often other organisms. Ingestion usually involves some type of mechanical and chemical processing. Digestion is separated into four steps:
Ingestion: placing food into the mouth (entry of food in the digestive system),
Mechanical and chemical breakdown: mastication and the mixing of the resulting bolus with water, acids, bile and enzymes in the stomach and intestine to break down complex chemical species into simple structures,
Absorption: of nutrients from the digestive system to the circulatory and lymphatic capillaries through osmosis, active transport, and diffusion, and
Egestion (Excretion): Removal of undigested materials from the digestive tract through defecation.
Underlying the process is muscle movement throughout the system through swallowing and peristalsis. Each step in digestion requires energy, and thus imposes an "overhead charge" on the energy made available from absorbed substances. Differences in that overhead cost are important influences on lifestyle, behavior, and even physical structures. Examples may be seen in humans, who differ considerably from other hominids (lack of hair, smaller jaws and musculature, different dentition, length of intestines, cooking, etc.).
The major part of digestion takes place in the small intestine. The large intestine primarily serves as a site for fermentation of indigestible matter by gut bacteria and for resorption of water from digesta before excretion.
In mammals, preparation for digestion begins with the cephalic phase in which saliva is produced in the mouth and digestive enzymes are produced in the stomach. Mechanical and chemical digestion begin in the mouth where food is chewed, and mixed with saliva to begin enzymatic processing of starches. The stomach continues to break food down mechanically and chemically through churning and mixing with both acids and enzymes. Absorption occurs in the stomach and gastrointestinal tract, and the process finishes with defecation.
Human digestion process
The human gastrointestinal tract is around long. Food digestion physiology varies between individuals and with other factors such as the characteristics of the food and the size of the meal; the process of digestion normally takes between 24 and 72 hours.
Digestion begins in the mouth with the secretion of saliva and its digestive enzymes. Food is formed into a bolus by mechanical mastication and swallowed into the esophagus, from where it enters the stomach through the action of peristalsis. Gastric juice contains hydrochloric acid and pepsin, which could damage the stomach lining, so mucus and bicarbonates are secreted for protection. In the stomach, further release of enzymes breaks the food down, and this is combined with the churning action of the stomach. Proteins are mainly digested in the stomach. The partially digested food enters the duodenum as a thick semi-liquid chyme. In the small intestine, the larger part of digestion takes place, helped by the secretions of bile, pancreatic juice and intestinal juice. The intestinal walls are lined with villi, and their epithelial cells are covered with numerous microvilli to improve the absorption of nutrients by increasing the surface area of the intestine. Bile helps in the emulsification of fats and also activates lipases.
In the large intestine, the passage of food is slower to enable fermentation by the gut flora to take place. Here, water is absorbed and waste material stored as feces to be removed by defecation via the anal canal and anus.
Neural and biochemical control mechanisms
Different phases of digestion take place including: the cephalic phase, gastric phase, and intestinal phase.
The cephalic phase occurs at the sight, thought and smell of food, which stimulate the cerebral cortex. Taste and smell stimuli are sent to the hypothalamus and medulla oblongata; the signal is then routed through the vagus nerve, triggering the release of acetylcholine. Gastric secretion at this phase rises to 40% of the maximum rate. Acidity in the stomach is not buffered by food at this point and thus acts to inhibit parietal cell (acid-secreting) and G cell (gastrin-secreting) activity via D cell secretion of somatostatin.
The gastric phase takes 3 to 4 hours. It is stimulated by distension of the stomach, the presence of food in the stomach and a decrease in pH. Distension activates long and myenteric reflexes. This activates the release of acetylcholine, which stimulates the release of more gastric juices. As protein enters the stomach, it binds to hydrogen ions, which raises the pH of the stomach. Inhibition of gastrin and gastric acid secretion is lifted. This triggers G cells to release gastrin, which in turn stimulates parietal cells to secrete gastric acid. Gastric acid is about 0.5% hydrochloric acid, which lowers the pH to the desired range of 1–3. Acid release is also triggered by acetylcholine and histamine.
The intestinal phase has two parts, the excitatory and the inhibitory. Partially digested food fills the duodenum. This triggers intestinal gastrin to be released. Enterogastric reflex inhibits vagal nuclei, activating sympathetic fibers causing the pyloric sphincter to tighten to prevent more food from entering, and inhibits local reflexes.
Breakdown into nutrients
Protein digestion
Protein digestion occurs in the stomach and duodenum, where three main enzymes – pepsin, secreted by the stomach, and trypsin and chymotrypsin, secreted by the pancreas – break down food proteins into polypeptides that are then broken down by various exopeptidases and dipeptidases into amino acids. The digestive enzymes, however, are mostly secreted as their inactive precursors, the zymogens. For example, trypsin is secreted by the pancreas in the form of trypsinogen, which is activated in the duodenum by enterokinase to form trypsin. Trypsin then cleaves proteins into smaller polypeptides.
Fat digestion
Digestion of some fats can begin in the mouth, where lingual lipase breaks down some short-chain lipids into diglycerides. However, fats are mainly digested in the small intestine. The presence of fat in the small intestine produces hormones that stimulate the release of pancreatic lipase from the pancreas and bile from the liver, which help in the emulsification of fats for the absorption of fatty acids. Complete digestion of one molecule of fat (a triglyceride) results in a mixture of fatty acids, mono- and di-glycerides, but no glycerol.
Carbohydrate digestion
In humans, dietary starches are composed of glucose units arranged in long chains called amylose, a polysaccharide. During digestion, bonds between glucose molecules are broken by salivary and pancreatic amylase, resulting in progressively smaller chains of glucose. This yields the simple sugars glucose and maltose (two glucose molecules), which can be absorbed by the small intestine.
Lactase is an enzyme that breaks down the disaccharide lactose to its component parts, glucose and galactose. Glucose and galactose can be absorbed by the small intestine. Approximately 65 percent of the adult population produce only small amounts of lactase and are unable to eat unfermented milk-based foods. This is commonly known as lactose intolerance. Lactose intolerance varies widely by genetic heritage; more than 90 percent of peoples of east Asian descent are lactose intolerant, in contrast to about 5 percent of people of northern European descent.
Sucrase is an enzyme that breaks down the disaccharide sucrose, commonly known as table sugar, cane sugar, or beet sugar. Sucrose digestion yields the sugars fructose and glucose which are readily absorbed by the small intestine.
DNA and RNA digestion
DNA and RNA are broken down into mononucleotides by the nucleases deoxyribonuclease and ribonuclease (DNase and RNase) from the pancreas.
Non-destructive digestion
Some nutrients are complex molecules (for example vitamin B12) which would be destroyed if they were broken down into their functional groups. To digest vitamin B12 non-destructively, haptocorrin in saliva strongly binds and protects the B12 molecules from stomach acid as they enter the stomach and are cleaved from their protein complexes.
After the B12-haptocorrin complexes pass from the stomach via the pylorus to the duodenum, pancreatic proteases cleave haptocorrin from the B12 molecules which rebind to intrinsic factor (IF). These B12-IF complexes travel to the ileum portion of the small intestine where cubilin receptors enable assimilation and circulation of B12-IF complexes in the blood.
Digestive hormones
There are at least five hormones that aid and regulate the digestive system in mammals. There are variations across the vertebrates, as for instance in birds. Arrangements are complex and additional details are regularly discovered. Connections to metabolic control (largely the glucose-insulin system) have been uncovered.
Gastrin – is in the stomach and stimulates the gastric glands to secrete pepsinogen (an inactive form of the enzyme pepsin) and hydrochloric acid. Secretion of gastrin is stimulated by food arriving in the stomach. The secretion is inhibited by low pH.
Secretin – is in the duodenum and signals the secretion of sodium bicarbonate in the pancreas and it stimulates the bile secretion in the liver. This hormone responds to the acidity of the chyme.
Cholecystokinin (CCK) – is in the duodenum and stimulates the release of digestive enzymes in the pancreas and stimulates the emptying of bile in the gall bladder. This hormone is secreted in response to fat in chyme.
Gastric inhibitory peptide (GIP) – is in the duodenum and decreases stomach churning, in turn slowing gastric emptying. Another function is to induce insulin secretion.
Motilin – is in the duodenum and increases the migrating myoelectric complex component of gastrointestinal motility and stimulates the production of pepsin.
Significance of pH
Digestion is a complex process controlled by several factors. pH plays a crucial role in a normally functioning digestive tract. In the mouth, pharynx and esophagus, pH is typically about 6.8, very weakly acidic. Saliva controls pH in this region of the digestive tract. Salivary amylase is contained in saliva and starts the breakdown of carbohydrates into monosaccharides. Most digestive enzymes are sensitive to pH and will denature in a high or low pH environment.
The stomach's high acidity inhibits the breakdown of carbohydrates within it. This acidity confers two benefits: it denatures proteins for further digestion in the small intestines, and provides non-specific immunity, damaging or eliminating various pathogens.
In the small intestines, the duodenum provides critical pH balancing to activate digestive enzymes. The liver secretes bile into the duodenum to neutralize the acidic conditions from the stomach, and the pancreatic duct empties into the duodenum, adding bicarbonate to neutralize the acidic chyme, thus creating a neutral environment. The mucosal tissue of the small intestines is alkaline with a pH of about 8.5.
See also
Digestive system of gastropods
Digestive system of humpback whales
Evolution of the mammalian digestive system
Discovery and development of proton pump inhibitors
Erepsin
Gastroesophageal reflux disease
References
External links
Human Physiology – Digestion
NIH guide to digestive system
The Digestive System
How does the Digestive System Work?
Digestive system
Metabolism | Digestion | [
"Chemistry",
"Biology"
] | 6,046 | [
"Digestive system",
"Organ systems",
"Cellular processes",
"Biochemistry",
"Metabolism"
] |
165,450 | https://en.wikipedia.org/wiki/Phytophthora%20infestans | Phytophthora infestans is an oomycete or water mold, a fungus-like microorganism that causes the serious potato and tomato disease known as late blight or potato blight. Early blight, caused by Alternaria solani, is also often called "potato blight". Late blight was a major culprit in the 1840s European, the 1845–1852 Irish, and the 1846 Highland potato famines. The organism can also infect some other members of the Solanaceae. The pathogen is favored by moist, cool environments: sporulation is optimal at in water-saturated or nearly saturated environments, and zoospore production is favored at temperatures below . Lesion growth rates are typically optimal at a slightly warmer temperature range of .
Etymology
The genus name Phytophthora comes from the Greek word meaning "plant" plus the Greek word meaning "decay, ruin, perish". The species name infestans is the present participle of a Latin verb meaning "attacking, destroying", from which the word "to infest" is derived. The name Phytophthora infestans was coined in 1876 by the German mycologist Heinrich Anton de Bary (1831–1888).
Life cycle, signs and symptoms
The asexual life cycle of Phytophthora infestans is characterized by alternating phases of hyphal growth, sporulation, sporangia germination (either through zoospore release or direct germination, i.e. germ tube emergence from the sporangium), and the re-establishment of hyphal growth. There is also a sexual cycle, which occurs when isolates of opposite mating type (A1 and A2, see below) meet. Hormonal communication triggers the formation of the sexual spores, called oospores. The different types of spores play major roles in the dissemination and survival of P. infestans. Sporangia are spread by wind or water and enable the movement of P. infestans between different host plants. The zoospores released from sporangia are biflagellated and chemotactic, allowing further movement of P. infestans on water films found on leaves or soils. Both sporangia and zoospores are short-lived, in contrast to oospores which can persist in a viable form for many years.
People can observe P. infestans produce dark green, then brown then black spots on the surface of potato leaves and stems, often near the tips or edges, where water or dew collects. The sporangia and sporangiophores appear white on the lower surface of the foliage. As for tuber blight, the white mycelium often shows on the tubers' surface.
Under ideal conditions, P. infestans completes its life cycle on potato or tomato foliage in about five days. Sporangia develop on the leaves, spreading through the crop when temperatures are above and humidity is over 75–80% for 2 days or more. Rain can wash spores into the soil where they infect young tubers, and the spores can also travel long distances on the wind. The early stages of blight are easily missed. Symptoms include the appearance of dark blotches on leaf tips and plant stems. White mold will appear under the leaves in humid conditions and the whole plant may quickly collapse. Infected tubers develop grey or dark patches that are reddish brown beneath the skin, and quickly decay to a foul-smelling mush caused by the infestation of secondary soft bacterial rots. Seemingly healthy tubers may rot later when in store.
P. infestans survives poorly in nature apart from on its plant hosts. Under most conditions, the hyphae and asexual sporangia can survive for only brief periods in plant debris or soil, and are generally killed off during frosts or very warm weather. The exceptions involve oospores, and hyphae present within tubers. The persistence of viable pathogen within tubers, such as those that are left in the ground after the previous year's harvest or left in cull piles is a major problem in disease management. In particular, volunteer plants sprouting from infected tubers are thought to be a major source of inoculum (or propagules) at the start of a growing season. This can have devastating effects by destroying entire crops.
Mating types
The mating types are broadly divided into A1 and A2. Until the 1980s populations could only be distinguished by virulence assays and mating types, but since then more detailed analysis has shown that mating type and genotype are substantially decoupled. These types each produce a mating hormone of their own. Pathogen populations are grouped into clonal lineages of these mating types, which include:
A1
A1 produces a mating hormone, a diterpene α1. Clonal lineages of A1 include:
CN-1, -2, -4, -5, -6, -7, -8 – mtDNA haplotype Ia, China in 1996–97
– Ia, China, 1996–97
– Ia, China, 2004
– IIb, China, 2000 & 2002
– IIa, China, 2004–09
– Ia/IIb, China, 2004–09
– (only presumed to be A1), mtDNA haplo Ia subtype , Japan, Philippines, India, China, Malaysia, Nepal, present some time before 1950
– Ia, India, Nepal, 1993
– Ia, India, 1993
JP-2/SIB-1/RF006 – mtDNA haplo IIa, distinguishable by RG57, intermediate level of metalaxyl resistance, Japan, China, Korea, Thailand, 1996–present
– IIa, distinguishable by RG57, intermediate level of metalaxyl resistance, Japan, 1996–present
– IIa, distinguishable by RG57, intermediate level of metalaxyl resistance, Japan, 1996–present
sensu Zhang (not to be confused with #KR-1 sensu Gotoh below) – IIa, Korea, 2002–04
KR_1_A1 – mtDNA haplo unknown, Korea, 2009–16
– Ia, China, 2004
– Ia, India, Nepal, 1993, 1996–97
– Ia, Nepal, 1997
– Ia, Nepal, 1999–2000
– (Also A2, see #the A2 type of NP2 below) Ia, Nepal, 1999–2000
(not to be confused with #US-1 below) – Ib, Nepal, 1999–2000
(not to be confused with #NP3/US-1 above) – Ib, China, India, Nepal, Japan, Taiwan, Thailand, Vietnam, 1940–2000
– Ia, Nepal, 1999–2000
– mtDNA haplo unknown, Nepal, 1999–2000
– IIb, Taiwan, Korea, Vietnam, 1998–2016
– IIb, China, 2002 & 2004
– IIa, Korea, 2003–04
– Ia, Indonesia, 2016–19
A2
The A2 mating type was discovered by John Niederhauser in the 1950s, in the Toluca Valley in Central Mexico, while he was working for the Rockefeller Foundation's Mexican Agriculture Program; the finding was published in Niederhauser 1956. A2 produces a mating hormone, α2. Clonal lineages of A2 include:
CN02 – See #13_A2/CN02 below
– with mtDNA haplotype H-20
– IIa, Japan, Korea, Indonesia, late 1980s–present
sensu Gotoh (not to be confused with #KR-1 sensu Zhang above) – IIa, differs from JP-1 by one RG57 band, Korea, 1992
– mtDNA haplo unknown, Korea, 2009–16
– Ia, China, 2001
– (Also A1, see #the A1 type of NP2 above) Ia, Nepal, 1999–2000
– Ib, Nepal, 1999–2000
– Ia, Nepal, 1999–2000
– Ia, Thailand, China, Nepal, 1994 & 1997
Unknown – Ib, India, 1996–2003
– Brazil
– IIa, Korea, 2002–03
/CN02 – Ia, China, India, Bangladesh, Nepal, Pakistan, Myanmar, 2005–19
Self-fertile
A self-fertile type was present in China between 2009 and 2013.
Physiology
P. infestans produces an elicitor to which hosts respond with autophagy upon detection; Liu et al. 2005 found this to be the only alternative to mass hypersensitivity leading to mass programmed cell death.
Genetics
P. infestans is diploid, with about 8–10 chromosomes, and in 2009 scientists completed the sequencing of its genome. The genome was found to be considerably larger (240 Mbp) than that of most other Phytophthora species whose genomes have been sequenced; P. sojae has a 95 Mbp genome and P. ramorum has a 65 Mbp genome. About 18,000 genes were detected within the P. infestans genome. It also contained a diverse variety of transposons and many gene families encoding effector proteins that are involved in causing pathogenicity. These proteins are split into two main groups depending on whether they are produced by the water mold in the symplast (inside plant cells) or in the apoplast (between plant cells). Proteins produced in the symplast include RXLR proteins, which contain an arginine-X-leucine-arginine (where X can be any amino acid) sequence at the amino terminus of the protein. Some RXLR proteins are avirulence proteins, meaning that they can be detected by the plant and lead to a hypersensitive response which restricts the growth of the pathogen. P. infestans was found to encode around 60% more of these proteins than most other Phytophthora species. Those found in the apoplast include hydrolytic enzymes such as proteases, lipases and glycosylases that act to degrade plant tissue, enzyme inhibitors to protect against host defence enzymes, and necrotizing toxins. Overall, the genome was found to have an extremely high repeat content (around 74%) and an unusual gene distribution in that some areas contain many genes whereas others contain very few.
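The RXLR motif mentioned above is, at its simplest, a short sequence pattern. As a purely illustrative sketch (not from the article, and far simpler than real effector-prediction pipelines, which apply additional criteria), the following Python snippet scans the N-terminal region of a protein sequence for the R-x-L-R pattern; the 60-residue search window and the example sequence are assumptions made up for the example.

```python
import re

# Toy scan for the RXLR motif (arginine, any residue, leucine, arginine) near
# the N-terminus of a protein sequence given in one-letter amino-acid code.
# The 60-residue search window and the example sequence below are assumptions
# made up for illustration, not values taken from the article.
RXLR = re.compile(r"R.LR")

def has_rxlr(seq: str, window: int = 60) -> bool:
    """Return True if an R-x-L-R match occurs within the first `window` residues."""
    return RXLR.search(seq[:window].upper()) is not None

# Hypothetical example sequence (not a real P. infestans effector).
print(has_rxlr("MRVLAIAAVLLSSAAALSTTDAARFLRSHEEDAGERMRSLRDKITAG"))  # True
```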
The pathogen shows high allelic diversity in many isolates collected in Europe. This may be due to widespread trisomy or polyploidy in those populations.
Research
Study of P. infestans presents sampling difficulties in the United States. It occurs only sporadically and usually has significant founder effects due to each epidemic starting from introduction of a single genotype.
Origin and diversity
The highlands of central Mexico are considered by many to be the center of origin of P. infestans, although others have proposed its origin to be in the Andes, which is also the origin of potatoes. A recent study evaluated these two alternate hypotheses and found conclusive support for central Mexico being the center of origin. Support for Mexico, specifically the Toluca Valley, comes from multiple observations including the fact that populations are genetically most diverse in Mexico, late blight is observed in native tuber-bearing Solanum species, populations of the pathogen are in Hardy–Weinberg equilibrium, the two mating types (see § Mating types above) occur in a 1:1 ratio, and detailed phylogeographic and evolutionary studies. Furthermore, the closest relatives of P. infestans, namely P. mirabilis and P. ipomoeae, are endemic to central Mexico. On the other hand, the only close relative found in South America, namely P. andina, is a hybrid that does not share a single common ancestor with P. infestans. Finally, populations of P. infestans in South America lack genetic diversity and are clonal.
Migrations from Mexico to North America or Europe have occurred several times throughout history, probably linked to the movement of tubers. Until the 1970s, the A2 mating type was restricted to Mexico, but now in many regions of the world both A1 and A2 isolates can be found in the same region. The co-occurrence of the two mating types is significant due to the possibility of sexual recombination and formation of oospores, which can survive the winter. Only in Mexico and Scandinavia, however, is oospore formation thought to play a role in overwintering. In other parts of Europe, increasing genetic diversity has been observed as a consequence of sexual reproduction. This is notable since different forms of P. infestans vary in their aggressiveness on potato or tomato, in sporulation rate, and in sensitivity to fungicides. Variation in such traits also occurs in North America; however, importation of new genotypes from Mexico appears to be the predominant cause of genetic diversity there, as opposed to sexual recombination within potato or tomato fields. In 1976, due to a summer drought in Europe, there was a shortfall in potato production, and so eating potatoes were imported to fill it. It is thought that this was the vehicle by which mating type A2 reached the rest of the world. In any case, there had been little diversity, consisting of the US-1 strain, with only a single mating type, mtDNA haplotype, restriction fragment length polymorphism pattern, and di-locus isozyme genotype. Then, in 1980, greater diversity and the A2 type suddenly appeared in Europe. In 1981 it was found in the Netherlands and the United Kingdom, in 1985 in Sweden, in the early 1990s in Norway and Finland, in 1996 in Denmark, and in 1999 in Iceland. In the UK, new A1 lineages only replaced the old lineage by the end of the 1980s, and A2 spread even more slowly, with Britain having low levels and Ireland (north and Republic) having none-to-trace detections through the 1990s. Many of the strains that appeared outside of Mexico since the 1980s have been more aggressive, leading to increased crop losses. In Europe since 2013 the populations have been tracked by the EuroBlight network (see links below). Some of the differences between strains may be related to variation in the RXLR effectors that are present.
Disease management
P. infestans is still a difficult disease to control. There are many chemical options in agriculture for the control of damage to the foliage as well as the fruit (for tomatoes) and the tuber (for potatoes). A few of the most common foliar-applied fungicides are Ridomil, a Gavel/SuperTin tank mix, and Previcur Flex. All of the aforementioned fungicides need to be tank mixed with a broad-spectrum fungicide, such as mancozeb or chlorothalonil, not just for resistance management but also because the potato plants will be attacked by other pathogens at the same time.
If adequate field scouting occurs and late blight is found soon after disease development, localized patches of potato plants can be killed with a desiccant (e.g. paraquat) through the use of a backpack sprayer. This management technique can be thought of as a field-scale hypersensitive response similar to what occurs in some plant-viral interactions whereby cells surrounding the initial point of infection are killed in order to prevent proliferation of the pathogen.
If infected tubers make it into a storage bin, there is a very high risk to the storage life of the entire bin. Once in storage, there is not much that can be done besides emptying the parts of the bin that contain tubers infected with Phytophthora infestans. To increase the probability of successfully storing potatoes from a field where late blight was known to occur during the growing season, some products can be applied just prior to entering storage (e.g., Phostrol).
Around the world the disease causes around $6 billion of damage to crops each year.
Resistant plants
Breeding for resistance, particularly in potato plants, has had limited success in part due to difficulties in crossing cultivated potato with its wild relatives, which are the source of potential resistance genes. In addition, most resistance genes work only against a subset of P. infestans isolates, since effective plant disease resistance results only when the pathogen expresses a RXLR effector gene that matches the corresponding plant resistance (R) gene; effector-R gene interactions trigger a range of plant defenses, such as the production of compounds toxic to the pathogen.
Potato and tomato varieties vary in their susceptibility to blight. Most early varieties are very vulnerable; they should be planted early so that the crop matures before blight starts (usually in July in the Northern Hemisphere). Many old crop varieties, such as King Edward potato, are also very susceptible but are grown because they are wanted commercially. Maincrop varieties which are very slow to develop blight include Cara, Stirling, Teena, Torridon, Remarka, and Romano. Some so-called resistant varieties can resist some strains of blight and not others, so their performance may vary depending on which are around. These crops have had polygenic resistance bred into them, and are known as "field resistant". New varieties, such as Sarpo Mira and Sarpo Axona, show great resistance to blight even in areas of heavy infestation. Defender is an American cultivar whose parentage includes Ranger Russet and Polish potatoes resistant to late blight. It is a long white-skinned cultivar with both foliar and tuber resistance to late blight. Defender was released in 2004.
Genetic engineering may also provide options for generating resistance cultivars. A resistance gene effective against most known strains of blight has been identified from a wild relative of the potato, Solanum bulbocastanum, and introduced by genetic engineering into cultivated varieties of potato. This is an example of cisgenic genetic engineering.
Melatonin in the plant/P. infestans co-environment reduces the stress tolerance of the parasite.
Reducing inoculum
Blight can be controlled by limiting the source of inoculum. Only good-quality seed potatoes and tomatoes obtained from certified suppliers should be planted. Often discarded potatoes from the previous season and self-sown tubers can act as sources of inoculum.
Compost, soil or potting medium can be heat-treated to kill oomycetes such as Phytophthora infestans. The recommended sterilisation temperature for oomycetes is for 30 minutes.
Environmental conditions
There are several environmental conditions that are conducive to P. infestans. An example took place in the United States during the 2009 growing season: with colder than average temperatures and greater than average rainfall, there was a major infestation of tomato plants, specifically in the eastern states. Using weather forecasting systems such as BLITECAST, if the following conditions occur as the canopy of the crop closes, then the use of fungicides is recommended to prevent an epidemic.
A Beaumont period is a period of 48 consecutive hours, in at least 46 of which the hourly readings of temperature and relative humidity at a given place have not been less than and 75%, respectively.
A Smith period is at least two consecutive days where the minimum temperature is or above and on each day there are at least 11 hours when the relative humidity is greater than 90%.
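As a rough illustration of how such a rule might be checked against hourly weather records (a sketch added here, not part of the original criteria), the following Python snippet flags a Smith-type period. The 90% relative-humidity threshold, the 11-hour requirement and the two-consecutive-day rule follow the definition above; the 10 °C minimum-temperature threshold and the data format are assumptions supplied for illustration.

```python
# Minimal sketch of flagging a Smith-type period from hourly weather records.
# Assumes: `days` is a list of consecutive days, each a list of 24
# (temperature_C, relative_humidity_pct) tuples. The 10 degC minimum
# temperature is an assumed value for illustration; the 90% humidity and
# 11-hour criteria follow the definition above.

def is_smith_day(hours, min_temp=10.0, rh_threshold=90.0, rh_hours=11):
    temps = [t for t, _ in hours]
    humid_hours = sum(1 for _, rh in hours if rh > rh_threshold)
    return min(temps) >= min_temp and humid_hours >= rh_hours

def smith_period(days):
    """True if at least two consecutive days each satisfy the criteria."""
    return any(is_smith_day(days[i]) and is_smith_day(days[i + 1])
               for i in range(len(days) - 1))

# Example with two synthetic qualifying days (12 humid hours each).
day = [(12.0, 95.0)] * 12 + [(11.0, 80.0)] * 12
print(smith_period([day, day]))  # True
```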
The Beaumont and Smith periods have traditionally been used by growers in the United Kingdom, with different criteria developed by growers in other regions. The Smith period has been the preferred system used in the UK since its introduction in the 1970s.
Based on these conditions and other factors, several tools have been developed to help growers manage the disease and plan fungicide applications. Often these are deployed as part of decision support systems accessible through web sites or smart phones.
Several studies have attempted to develop systems for real-time detection via flow cytometry or microscopy of airborne sporangia collected in air samplers. Whilst these methods show potential to allow detection of sporangia in advance of occurrence of detectable disease symptoms on plants, and would thus be useful in enhancing existing decision support systems, none have been commercially deployed to date.
Use of fungicides
Fungicides for the control of potato blight are normally used only in a preventative manner, optionally in conjunction with disease forecasting. In susceptible varieties, sometimes fungicide applications may be needed weekly. An early spray is most effective. The choice of fungicide can depend on the nature of local strains of P. infestans. Metalaxyl is a fungicide that was marketed for use against P. infestans, but suffered serious resistance issues when used on its own. In some regions of the world during the 1980s and 1990s, most strains of P. infestans became resistant to metalaxyl, but in subsequent years many populations shifted back to sensitivity. To reduce the occurrence of resistance, it is strongly advised to use single-target fungicides such as metalaxyl along with carbamate compounds. A combination of other compounds are recommended for managing metalaxyl-resistant strains. These include mandipropamid, chlorothalonil, fluazinam, triphenyltin, mancozeb, and others. In the United States, the Environmental Protection Agency has approved oxathiapiprolin for use against late blight. In African smallholder production fungicide application can be necessary up to once every three days.
In organic production
In the past, copper(II) sulfate solution (called 'bluestone') was used to combat potato blight. Copper pesticides remain in use on organic crops, both in the form of copper hydroxide and copper sulfate. Given the dangers of copper toxicity, other organic control options that have been shown to be effective include horticultural oils, phosphorous acids, and rhamnolipid biosurfactants, while sprays containing "beneficial" microbes such as Bacillus subtilis or compounds that encourage the plant to produce defensive chemicals (such as knotweed extract) have not performed as well.
During the crop year 2008, many of the certified organic potatoes produced in the United Kingdom and certified by the Soil Association as organic were sprayed with a copper pesticide to control potato blight. According to the Soil Association, the total copper that can be applied to organic land is /year.
Control of tuber blight
Ridging is often used to reduce tuber contamination by blight. This normally involves piling soil or mulch around the stems of the potato plants, meaning the pathogen has farther to travel to get to the tuber. Another approach is to destroy the canopy around five weeks before harvest, using a contact herbicide or sulfuric acid to burn off the foliage. Eliminating infected foliage reduces the likelihood of tuber infection.
Historical impact
The first recorded instances of the disease were in the United States, in Philadelphia and New York City in early 1843. Winds then spread the spores, and in 1845 it was found from Illinois to Nova Scotia, and from Virginia to Ontario. It crossed the Atlantic Ocean with a shipment of seed potatoes for Belgian farmers in 1845. The disease was first identified in Europe around Kortrijk, Belgium, in June 1845, and resulted in the Flemish potato harvest failing that summer, with yields declining 75–80% and an estimated forty thousand deaths in the locale. All of the potato-growing countries in Europe would be affected within a year.
The effect of Phytophthora infestans in Ireland in 1845–52 was one of the factors which caused more than one million to starve to death and forced another two million to emigrate. Most commonly referenced is the Great Irish Famine, during the late 1840s. Implicated in Ireland's fate was the island's disproportionate dependency on a single variety of potato, the Irish Lumper. The lack of genetic variability created a susceptible host population for the organism after the blight strains originating in Chiloé Archipelago replaced earlier potatoes of Peruvian origin in Europe.
During the First World War, all of the copper in Germany was used for shell casings and electric wire and therefore none was available for making copper sulfate to spray potatoes. A major late blight outbreak on potato in Germany therefore went untreated, and the resulting scarcity of potatoes contributed to the deaths from the blockade.
Since 1941, Eastern Africa has been suffering potato production losses because of strains of P. infestans from Europe.
France, Canada, the United States, and the Soviet Union researched P. infestans as a biological weapon in the 1940s and 1950s. Potato blight was one of more than 17 agents that the United States researched as potential biological weapons before the nation suspended its biological weapons program. Whether a weapon based on the pathogen would be effective is questionable, due to the difficulties in delivering viable pathogen to an enemy's fields, and the role of uncontrollable environmental factors in spreading the disease.
Late blight (A2 type) has not yet been detected in Australia and strict biosecurity measures are in place. The disease has been seen in China, India and south-east Asian countries.
A large outbreak of P. infestans occurred on tomato plants in the Northeast United States in 2009.
In light of the periodic epidemics of P. infestans ever since its first emergence, it may be regarded as a periodically emerging pathogen – or a periodically "re-emerging pathogen".
References
Further reading
External links
USAblight A National Web Portal on Late Blight
International Potato Center
Online Phytophtora bibliography
EuroBlight a potato blight network in Europe
USDA-BARC Phytophthora infestans page
Organic Alternatives for Late Blight Control in Potatoes, from ATTRA
Google Map of Tomato Potato Blight Daily Risk across NE USA
Species Profile – Late Blight (Phytophthora infestans), National Invasive Species Information Center, United States National Agricultural Library. Lists general information and resources for Late Blight.
Continuing education lesson created by The American Phytopathological Society
entry on Late Blight by PlantVillage
infestans
Water mould plant pathogens and diseases
Potato diseases
Biological agents | Phytophthora infestans | [
"Biology",
"Environmental_science"
] | 5,388 | [
"Biological agents",
"Toxicology",
"Biological warfare"
] |
165,459 | https://en.wikipedia.org/wiki/Holding%20hands | Holding hands is a form of physical intimacy involving two or more people. It may or may not be romantic. Couples often hold hands while walking together outdoors.
Cultural aspects
In Western culture, spouses and romantic couples often hold hands as a sign of affection or to express psychological closeness. Non-romantic friends may also hold hands, although acceptance of this varies by culture and gender role. Parents or guardians may hold the hands of small children to exercise guidance or authority. In terms of romance, handholding is often used in the early stages of dating or courtship to express romantic interest in a partner. Handholding is also common in advanced stages of a romantic relationship where it may be used to signify or seek solace and reassurance.
Same-sex couples may avoid holding hands in public due to homophobia. In 2012, an average of 74% of gay men and 51% of lesbian women responded to an EU Fundamental Rights Agency survey saying they avoid holding hands in public for fear of harassment or assault. These responses varied from 45% to 89% depending on country, with an average of 66%.
In Arab countries, North Africa, some parts of Asia and traditionally in some Mediterranean and Southern European cultures (especially in Sicily), males also hold hands for friendship and as a sign of respect; a custom which is especially noticed by societies unused to it, for instance when, in 2005, Crown Prince Abdullah of Saudi Arabia held hands with the United States President George W. Bush.
Physical and psychological aspects
According to Tiffany Field, the director of the Touch Research Institute, holding hands stimulates the vagus nerve, which decreases blood pressure and heart rate and puts people in a more relaxed state.
See also
Public display of affection
Human chain
References
External links
Human communication
Hand | Holding hands | [
"Biology"
] | 355 | [
"Human communication",
"Behavior",
"Human behavior"
] |
165,487 | https://en.wikipedia.org/wiki/World%20line | The world line (or worldline) of an object is the path that an object traces in 4-dimensional spacetime. It is an important concept of modern physics, and particularly theoretical physics.
The concept of a "world line" is distinguished from concepts such as an "orbit" or a "trajectory" (e.g., a planet's orbit in space or the trajectory of a car on a road) by inclusion of the dimension time, and typically encompasses a large area of spacetime wherein paths which are straight perceptually are rendered as curves in spacetime to show their (relatively) more absolute position states—to reveal the nature of special relativity or gravitational interactions.
The idea of world lines was originated by physicists and was pioneered by Hermann Minkowski. The term is now used most often in the context of relativity theories (i.e., special relativity and general relativity).
Usage in physics
A world line of an object (generally approximated as a point in space, e.g., a particle or observer) is the sequence of spacetime events corresponding to the history of the object. A world line is a special type of curve in spacetime. Below an equivalent definition will be explained: A world line is either a time-like or a null curve in spacetime. Each point of a world line is an event that can be labeled with the time and the spatial position of the object at that time.
For example, the orbit of the Earth in space is approximately a circle, a three-dimensional (closed) curve in space: the Earth returns every year to the same point in space relative to the sun. However, it arrives there at a different (later) time. The world line of the Earth is therefore helical in spacetime (a curve in a four-dimensional space) and does not return to the same point.
Spacetime is the collection of events, together with a continuous and smooth coordinate system identifying the events. Each event can be labeled by four numbers: a time coordinate and three space coordinates; thus spacetime is a four-dimensional space. The mathematical term for spacetime is a four-dimensional manifold (a topological space that locally resembles Euclidean space near each point). The concept may be applied as well to a higher-dimensional space. For easy visualizations of four dimensions, two space coordinates are often suppressed. An event is then represented by a point in a Minkowski diagram, which is a plane usually plotted with the time coordinate vertically and the space coordinate horizontally. As expressed by F.R. Harvey
A curve M in [spacetime] is called a worldline of a particle if its tangent is future timelike at each point. The arclength parameter is called proper time and usually denoted τ. The length of M is called the proper time of the particle. If the worldline M is a line segment, then the particle is said to be in free fall.
A world line traces out the path of a single point in spacetime. A world sheet is the analogous two-dimensional surface traced out by a one-dimensional line (like a string) traveling through spacetime. The world sheet of an open string (with loose ends) is a strip; that of a closed string (a loop) resembles a tube.
Once the object is not approximated as a mere point but has extended volume, it traces not a world line but rather a world tube.
World lines as a method of describing events
A one-dimensional line or curve can be represented by the coordinates as a function of one parameter. Each value of the parameter corresponds to a point in spacetime and varying the parameter traces out a line. So in mathematical terms a curve is defined by four coordinate functions $x^a(\tau)$, $a = 0, 1, 2, 3$ (where $x^0$ usually denotes the time coordinate) depending on one parameter $\tau$. A coordinate grid in spacetime is the set of curves one obtains if three out of four coordinate functions are set to a constant.
Sometimes, the term world line is used informally for any curve in spacetime. This terminology causes confusions. More properly, a world line is a curve in spacetime that traces out the (time) history of a particle, observer or small object. One usually uses the proper time of an object or an observer as the curve parameter along the world line.
Trivial examples of spacetime curves
A curve that consists of a horizontal line segment (a line at constant coordinate time), may represent a rod in spacetime and would not be a world line in the proper sense. The parameter simply traces the length of the rod.
A line at constant space coordinate (a vertical line using the convention adopted above) may represent a particle at rest (or a stationary observer). A tilted line represents a particle with a constant coordinate speed (constant change in space coordinate with increasing time coordinate). The more the line is tilted from the vertical, the larger the speed.
Two world lines that start out separately and then intersect, signify a collision or "encounter". Two world lines starting at the same event in spacetime, each following its own path afterwards, may represent e.g. the decay of a particle into two others or the emission of one particle by another.
World lines of a particle and an observer may be interconnected with the world line of a photon (the path of light) and form a diagram depicting the emission of a photon by a particle that is subsequently observed by the observer (or absorbed by another particle).
Tangent vector to a world line: four-velocity
The four coordinate functions $x^a(\tau)$, $a = 0, 1, 2, 3$, defining a world line are real-number functions of a real variable $\tau$ and can simply be differentiated by the usual calculus. Without the existence of a metric (this is important to realize) one can imagine the difference between a point $p$ on the curve at the parameter value $\tau_0$ and a point on the curve a little (parameter $\tau_0 + \Delta\tau$) farther away. In the limit $\Delta\tau \to 0$, this difference divided by $\Delta\tau$ defines a vector, the tangent vector of the world line at the point $p$. It is a four-dimensional vector, defined at the point $p$. It is associated with the normal 3-dimensional velocity of the object (but it is not the same) and therefore termed four-velocity $v$, or in components:

$$v = (v^0, v^1, v^2, v^3) = \left(\frac{dx^0}{d\tau}, \frac{dx^1}{d\tau}, \frac{dx^2}{d\tau}, \frac{dx^3}{d\tau}\right),$$

where the derivatives are taken at the point $p$, so at $\tau = \tau_0$.
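As a brief worked example (added here for illustration; the sign convention is an assumption, since the text does not fix one): for a particle moving with constant speed $V$ along the $x^1$ axis of an inertial frame, coordinate time and proper time are related by $t = \gamma\tau$ with $\gamma = 1/\sqrt{1 - V^2/c^2}$, so one parametrization of the world line is

$$x^0(\tau) = \gamma c\,\tau, \qquad x^1(\tau) = \gamma V\,\tau, \qquad x^2(\tau) = x^3(\tau) = 0,$$

and differentiating with respect to $\tau$ gives the four-velocity $v = (\gamma c, \gamma V, 0, 0)$. With the $(-,+,+,+)$ convention its Minkowski square is $\eta(v, v) = -\gamma^2 c^2 + \gamma^2 V^2 = -c^2$, independent of $V$.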
All curves through point p have a tangent vector, not only world lines. The sum of two vectors is again a tangent vector to some other curve and the same holds for multiplying by a scalar. Therefore, all tangent vectors for a point p span a linear space, termed the tangent space at point p. For example, taking a 2-dimensional space, like the (curved) surface of the Earth, its tangent space at a specific point would be the flat approximation of the curved space.
World lines in special relativity
So far a world line (and the concept of tangent vectors) has been described without a means of quantifying the interval between events. The basic mathematics is as follows: The theory of special relativity puts some constraints on possible world lines. In special relativity the description of spacetime is limited to special coordinate systems that do not accelerate (and so do not rotate either), termed inertial coordinate systems. In such coordinate systems, the speed of light is a constant. The structure of spacetime is determined by a bilinear form η, which gives a real number for each pair of events. The bilinear form is sometimes termed a spacetime metric, but since distinct events sometimes result in a zero value, unlike metrics in metric spaces of mathematics, the bilinear form is not a mathematical metric on spacetime.
World lines of freely falling particles/objects are called geodesics. In special relativity these are straight lines in Minkowski space.
Often the time units are chosen such that the speed of light is represented by lines at a fixed angle, usually at 45 degrees, forming a cone with the vertical (time) axis. In general, useful curves in spacetime can be of three types (the other types would be partly one, partly another type):
light-like curves, having at each point the speed of light. They form a cone in spacetime, dividing it into two parts. The cone is three-dimensional in spacetime, appears as a line in drawings with two dimensions suppressed, and as a cone in drawings with one spatial dimension suppressed.
time-like curves, with a speed less than the speed of light. These curves must fall within a cone defined by light-like curves. In our definition above: world lines are time-like curves in spacetime.
space-like curves falling outside the light cone. Such curves may describe, for example, the length of a physical object. The circumference of a cylinder and the length of a rod are space-like curves.
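To make this three-way classification concrete, here is a minimal sketch (an illustration added to this text, not drawn from the original) that classifies the displacement between two events. It assumes units with $c = 1$ and the $(-,+,+,+)$ sign convention, neither of which is fixed by the text above.

```python
# Minimal sketch (illustration only): classify the displacement between two
# events as time-like, light-like, or space-like using the Minkowski form.
# Assumes units with c = 1 and the (-, +, +, +) sign convention.

def interval(event_a, event_b):
    """Return eta(dx, dx) for the displacement between two events (t, x, y, z)."""
    dt, dx, dy, dz = (b - a for a, b in zip(event_a, event_b))
    return -dt**2 + dx**2 + dy**2 + dz**2

def classify(event_a, event_b, tol=1e-12):
    s2 = interval(event_a, event_b)
    if abs(s2) < tol:
        return "light-like"
    return "time-like" if s2 < 0 else "space-like"

print(classify((0, 0, 0, 0), (2, 1, 0, 0)))  # time-like: within the light cone
print(classify((0, 0, 0, 0), (1, 1, 0, 0)))  # light-like
print(classify((0, 0, 0, 0), (1, 2, 0, 0)))  # space-like: "elsewhere"
```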
At a given event on a world line, spacetime (Minkowski space) is divided into three parts.
The future of the given event is formed by all events that can be reached through time-like curves lying within the future light cone.
The past of the given event is formed by all events that can influence the event (that is, that can be connected by world lines within the past light cone to the given event).
The lightcone at the given event is formed by all events that can be connected through light rays with the event. When we observe the sky at night, we basically see only the past light cone within the entire spacetime.
Elsewhere is the region between the two light cones. Points in an observer's elsewhere are inaccessible to them; only points in the past can send signals to the observer. In ordinary laboratory experience, using common units and methods of measurement, it may seem that we look at the present, but in fact there is always a delay time for light to propagate. For example, we see the Sun as it was about 8 minutes ago, not as it is "right now". Unlike the present in Galilean/Newtonian theory, the elsewhere is thick; it is not a 3-dimensional volume but is instead a 4-dimensional spacetime region.
Included in "elsewhere" is the simultaneous hyperplane, which is defined for a given observer by a space that is hyperbolic-orthogonal to their world line. It is really three-dimensional, though it would be a 2-plane in the diagram because we had to throw away one dimension to make an intelligible picture. Although the light cones are the same for all observers at a given spacetime event, different observers, with differing velocities but coincident at the event (point) in the spacetime, have world lines that cross each other at an angle determined by their relative velocities, and thus they have different simultaneous hyperplanes.
The present often means the single spacetime event being considered.
Simultaneous hyperplane
Since a world line determines a velocity 4-vector $v$ that is time-like, the Minkowski form determines a linear function on spacetime by $x \mapsto \eta(v, x)$. Let N be the null space of this linear functional. Then N is called the simultaneous hyperplane with respect to v. The relativity of simultaneity is a statement that N depends on v. Indeed, N is the orthogonal complement of v with respect to η.
When two world lines u and w have the same velocity 4-vector (so that $du/d\tau = dw/d\tau$), then they share the same simultaneous hyperplane. This hyperplane exists mathematically, but physical relations in relativity involve the movement of information by light. For instance, the traditional electro-static force described by Coulomb's law may be pictured in a simultaneous hyperplane, but relativistic relations of charge and force involve retarded potentials.
World lines in general relativity
The use of world lines in general relativity is basically the same as in special relativity, with the difference that spacetime can be curved. A metric exists and its dynamics are determined by the Einstein field equations and are dependent on the mass-energy distribution in spacetime. Again the metric defines lightlike (null), spacelike, and timelike curves. Also, in general relativity, world lines include timelike curves and null curves in spacetime, where timelike curves fall within the lightcone. However, a lightcone is not necessarily inclined at 45 degrees to the time axis; this is an artifact of the chosen coordinate system, and reflects the coordinate freedom (diffeomorphism invariance) of general relativity. Any timelike curve admits a comoving observer whose "time axis" corresponds to that curve, and, since no observer is privileged, we can always find a local coordinate system in which lightcones are inclined at 45 degrees to the time axis. See also for example Eddington-Finkelstein coordinates.
World lines of free-falling particles or objects (such as planets around the Sun or an astronaut in space) are called geodesics.
World lines in quantum field theory
Quantum field theory, the framework in which all of modern particle physics is described, is usually described as a theory of quantized fields. However, although not widely appreciated, it has been known since Feynman that many quantum field theories may equivalently be described in terms of world lines. This preceded much of his work on the formulation which later became more standard. The world line formulation of quantum field theory has proved particularly fruitful for various calculations in gauge theories and in describing nonlinear effects of electromagnetic fields.
World lines in literature
In 1884 C. H. Hinton wrote an essay, "What is the Fourth Dimension?", which he published as a scientific romance. He wrote:
Why, then, should not the four-dimensional beings be ourselves, and our successive states the passing of them through the three-dimensional space to which our consciousness is confined.
A popular description of human world lines was given by J. C. Fields at the University of Toronto in the early days of relativity. As described by Toronto lawyer Norman Robertson:
I remember [Fields] lecturing at one of the Saturday evening lectures at the Royal Canadian Institute. It was advertised to be a "Mathematical Fantasy"—and it was! The substance of the exercise was as follows: He postulated that, commencing with his birth, every human being had some kind of spiritual aura with a long filament or thread attached, that traveled behind him throughout his life. He then proceeded in imagination to describe the complicated entanglement every individual became involved in his relationship to other individuals, comparing the simple entanglements of youth to those complicated knots that develop in later life.
Kurt Vonnegut, in his novel Slaughterhouse-Five, describes the worldlines of stars and people:
“Billy Pilgrim says that the Universe does not look like a lot of bright little dots to the creatures from Tralfamadore. The creatures can see where each star has been and where it is going, so that the heavens are filled with rarefied, luminous spaghetti. And Tralfamadorians don't see human beings as two-legged creatures, either. They see them as great millepedes – "with babies' legs at one end and old people's legs at the other," says Billy Pilgrim.”
Almost all science-fiction stories which use this concept actively, for instance to enable time travel, oversimplify it to a one-dimensional timeline to fit a linear structure, which does not fit models of reality. Such time machines are often portrayed as being instantaneous, with their contents departing at one time and arriving at another, but at the same literal geographic point in space. This is often carried out without note of a reference frame, or with the implicit assumption that the reference frame is local; as such, it would require either accurate teleportation, since a rotating planet, being under acceleration, is not an inertial frame, or for the time machine to remain in the same place while its contents are 'frozen'.
Author Oliver Franklin published a science fiction work in 2008 entitled World Lines in which he related a simplified explanation of the hypothesis for laymen.
In the short story Life-Line, author Robert A. Heinlein describes the world line of a person:
He stepped up to one of the reporters. "Suppose we take you as an example. Your name is Rogers, is it not? Very well, Rogers, you are a space-time event having duration four ways. You are not quite six feet tall, you are about twenty inches wide and perhaps ten inches thick. In time, there stretches behind you more of this space-time event, reaching to perhaps nineteen-sixteen, of which we see a cross-section here at right angles to the time axis, and as thick as the present. At the far end is a baby, smelling of sour milk and drooling its breakfast on its bib. At the other end lies, perhaps, an old man someplace in the nineteen-eighties.
"Imagine this space-time event that we call Rogers as a long pink worm, continuous through the years, one end in his mother's womb, and the other at the grave..."
Heinlein's Methuselah's Children uses the term, as does James Blish's The Quincunx of Time (expanded from "Beep").
A visual novel named Steins;Gate, produced by 5pb., tells a story based on the shifting of world lines. Steins;Gate is a part of the "Science Adventure" series. World lines and other physical concepts like the Dirac Sea are also used throughout the series.
Neal Stephenson's novel Anathem involves a long discussion of worldlines over dinner in the midst of a philosophical debate between Platonic realism and nominalism.
Absolute Choice depicts different world lines as a sub-plot and setting device.
A space armada trying to complete a (nearly) closed time-like path as a strategic maneuver forms the backdrop and a main plot device of "Singularity Sky" by Charles Stross.
See also
Specific types of world lines
Geodesics
Closed timelike curves
Causal structure, curves that represent a variety of different types of world line
Isotropic line
Feynman diagram
Time geography
References
Various English translations on Wikisource: Space and Time
Ludwik Silberstein (1914) Theory of Relativity, p. 130, Macmillan and Company
External links
World lines article on h2g2.
in depth text on world lines and special relativity
Theory of relativity
Minkowski spacetime
Time in science | World line | [
"Physics"
] | 3,751 | [
"Physical quantities",
"Time",
"Theory of relativity",
"Time in science",
"Spacetime"
] |
165,494 | https://en.wikipedia.org/wiki/Amos%20Tversky | Amos Nathan Tversky (; March 16, 1937 – June 2, 1996) was an Israeli cognitive and mathematical psychologist and a key figure in the discovery of systematic human cognitive bias and handling of risk.
Much of his early work concerned the foundations of measurement. He was co-author of a three-volume treatise, Foundations of Measurement. His early work with Daniel Kahneman focused on the psychology of prediction and probability judgment; later they worked together to develop prospect theory, which aims to explain irrational human economic choices and is considered one of the seminal works of behavioral economics.
Six years after Tversky's death, Kahneman received the 2002 Nobel Memorial Prize in Economic Sciences for work he did in collaboration with Amos Tversky. While Nobel Prizes are not awarded posthumously, Kahneman has commented that he feels "it is a joint prize. We were twinned for more than a decade."
Tversky also collaborated with many leading researchers including Thomas Gilovich, Itamar Simonson, Paul Slovic and Richard Thaler. A Review of General Psychology survey, published in 2002, ranked Tversky as the 93rd most cited psychologist of the 20th century, tied with Edwin Boring, John Dewey, and Wilhelm Wundt.
Early life and education
Tversky was born in Haifa, British Palestine (now Israel), the son of the Polish-born veterinarian Yosef Tversky and the Lithuanian Jewish social worker Jenia Tversky (née Ginzburg), who later became a member of the Knesset representing the Mapai (Workers' Party). Tversky had one sister, Ruth, thirteen years his senior.
Tversky's mother has said he was self-taught in many areas, including mathematics. In high school, Tversky took classes from literary critic Baruch Kurzweil, and befriended classmate Dahlia Ravikovich, who would become an award-winning poet.
Tversky received his bachelor's degree from Hebrew University of Jerusalem in Israel in 1961, and his doctorate in psychology from the University of Michigan in Ann Arbor in 1965. He had already developed a clear vision of researching judgement.
Military service and career
During this time he was also a member and leader in Nahal, an Israel Defense Forces program that combined compulsory military service with the establishment of agricultural settlements.
Tversky served with distinction in the Israel Defense Forces as a paratrooper, rising to the rank of captain and being decorated for bravery. He parachuted in combat zones during the Suez Crisis in 1956, commanded an infantry unit during the Six-Day War in 1967, and served in a psychology field-unit during the Yom Kippur War in 1973.
Academic career
Academic roles
After his doctorate, Tversky taught at Hebrew University. He then joined the faculty of the Department of Psychology at Stanford University in 1978, where he spent the rest of his career.
Academic work
Work with Daniel Kahneman
Amos Tversky's most influential work was done with his longtime collaborator, Daniel Kahneman, in a partnership that began in the late 1960s. Their work explored the biases and failures in rationality continually exhibited in human decision-making. Starting with their first paper together, "Belief in the Law of Small Numbers", Kahneman and Tversky laid out eleven "cognitive illusions" that affect human judgment, frequently using small-scale empirical experiments that demonstrate how subjects make irrational decisions under uncertain conditions. (They introduced the notion of cognitive bias in 1972.) This work was highly influential in the field of economics, which had largely presumed rationality of all actors.
According to Kahneman the collaboration 'tapered off' in the early 1980s, although they tried to revive it. Contributing factors included Tversky receiving most of the external credit for the output of the partnership, and a reduction in the generosity with which the two interacted with each other.
Comparative ignorance
Tversky and Fox (1995) addressed ambiguity aversion, the idea that people do not like ambiguous gambles or choices with ambiguity, with the comparative ignorance framework. Their idea was that people are only ambiguity averse when their attention is specifically brought to the ambiguity by comparing an ambiguous option to an unambiguous option. For instance, people are willing to bet more on choosing a correct colored ball from an urn containing equal proportions of black and red balls than an urn with unknown proportions of balls when evaluating both of these urns at the same time. However, when evaluating them separately, people are willing to bet approximately the same amount on either urn. Thus, when it is possible to compare the ambiguous gamble to an unambiguous gamble, people are ambiguity averse, but not when they are ignorant of this comparison.
Notable contributions
foundations of measurement
anchoring and adjustment
availability heuristic
base rate fallacy
conjunction fallacy
framing
behavioral finance
clustering illusion
loss aversion
prospect theory
cumulative prospect theory
representativeness heuristic
Tversky index
support theory
contrast model
feature matching account of similarity
Approach to research
Kahneman said that Tversky "had simply perfect taste in choosing problems, and he never wasted much time on anything that was not destined to matter. He also had an unfailing compass that always kept him going forward."
Tversky's 1974 Science article with Kahneman on cognitive illusions triggered a "cascade of related research," Science News wrote in a 1994 article tracing the recent history of research on reasoning. Decision theorists in economics, business, philosophy and medicine as well as psychologists cited their work.
Recognition
In 1980, he became a fellow of the American Academy of Arts and Sciences.
In 1984 he was a recipient of the MacArthur Fellowship, and in 1985 he was elected to the National Academy of Sciences. Tversky, as a co-recipient with Daniel Kahneman, earned the 2003 University of Louisville Grawemeyer Award for Psychology.
After Tversky's death, Kahneman was awarded the 2002 Nobel Memorial Prize in Economic Sciences for the work he did in collaboration with Tversky. Nobel prizes are not awarded posthumously.
Personality and characteristics
Kahneman has said "Amos was the freest person I have known, and he was able to be free because he was also one of the most disciplined."
Persi Diaconis, a professor of mathematics at Stanford, has said "You were happy being in his presence. There was a light shining out of him."
Gerhard Casper, President of Stanford University, said Tversky "maintained the highest standards of professional ethics", and "His dedication to Stanford and its institutions of faculty governance was exemplary."
Whilst being very collaborative, Tversky also had a lifelong habit of working alone at night while others slept.
In intellectual debate Tversky "wanted to crush the opposition".
Tversky believed that humans live under uncertainty, in a probabilistic universe.
Personal life
In 1963, Tversky married American psychologist Barbara Gans, who later became a professor in the human-development department at Teachers College, Columbia University. They had three children together.
He died of a metastatic melanoma in 1996.
He was a Jewish atheist.
In popular culture
Tversky intelligence test
As recounted by Malcolm Gladwell in 2013's David and Goliath: Underdogs, Misfits, and the Art of Battling Giants, Tversky's peers thought so highly of him that they devised a tongue-in-cheek one-part test for measuring intelligence. As related to Gladwell by psychologist Adam Alter, the Tversky intelligence test was "The faster you realized Tversky was smarter than you, the smarter you were."
The Undoing Project
Michael Lewis's book The Undoing Project: A Friendship That Changed Our Minds, released in 2016, is about Tversky's personal and professional relationship with Daniel Kahneman.
References
External links
Memorial Resolution - Amos Tversky
Boston Globe: The man who wasn't there
Daniel Kahneman – Autobiography
Tversky in group discussion (39 mins)
Tversky lecturing
1937 births
1996 deaths
20th-century Israeli economists
20th-century American psychologists
Jewish American atheists
American atheists
Behavioral economists
Behavioral finance
American cognitive psychologists
Experimental economists
Fellows of the American Academy of Arts and Sciences
Fellows of the Econometric Society
MacArthur Fellows
Foreign associates of the National Academy of Sciences
Academic staff of the Hebrew University of Jerusalem
Stanford University Department of Psychology faculty
University of Michigan alumni
Jewish Israeli atheists
Israeli atheists
Israeli emigrants to the United States
Israeli psychologists
20th-century Israeli Jews
Jewish American scientists
Framing theorists
Israeli people of Belarusian-Jewish descent
Financial economists
People from Haifa
Deaths from melanoma in California
Israeli Ashkenazi Jews
Jewish scientists
Jewish psychologists
APA Distinguished Scientific Award for an Early Career Contribution to Psychology recipients | Amos Tversky | [
"Biology"
] | 1,771 | [
"Behavioral finance",
"Behavior",
"Human behavior"
] |
165,590 | https://en.wikipedia.org/wiki/HTML%20editor | An HTML editor is a program used for editing HTML, the markup of a web page. Although the HTML markup in a web page can be controlled with any text editor, specialized HTML editors can offer convenience, added functionality, and organisation. For example, many HTML editors handle not only HTML, but also related technologies such as CSS, XML and JavaScript or ECMAScript. In some cases they also manage communication with remote web servers via FTP and WebDAV, and version control systems such as Subversion or Git. Many word processing, graphic design and page layout programs that are not dedicated to web design, such as Microsoft Word or Quark XPress, also have the ability to function as HTML editors.
Types of editors
There are two main varieties of HTML editors: text and WYSIWYG (what you see is what you get) editors.
Text editors
Text editors intended for use with HTML usually provide at least syntax highlighting. Some editors additionally feature templates, toolbars and keyboard shortcuts to quickly insert common HTML elements and structures. Wizards, tooltip prompts and autocompletion may help with common tasks.
Text editors commonly used for HTML typically include either built-in functions or integration with external tools for such tasks as version control, link-checking and validation, code cleanup and formatting, spell-checking, uploading by FTP or WebDAV, and structuring as a project. Some functions, such as link checking or validation, may use online tools, requiring a network connection.
Text editors require user understanding of HTML and any other web technologies the designer wishes to use like CSS, JavaScript and server-side scripting languages.
To ease this requirement, some editors allow editing of the markup in more visually organized modes than simple color highlighting, but in modes not considered WYSIWYG. These editors typically include the option of using palette windows or dialog boxes to edit the text-based parameters of selected objects. These palettes allow editing parameters in individual fields, or inserting new tags by filling out an onscreen form, and may include additional widgets to present and select options when editing parameters (such as previewing an image or text styles) or an outline editor to expand and collapse HTML objects and properties.
WYSIWYG HTML editors
WYSIWYG HTML editors provide an editing interface which resembles how the page will be displayed in a web browser. Because using a WYSIWYG editor may not require any HTML knowledge, they are often easier for an inexperienced computer user to get started with.
The WYSIWYG view is achieved by embedding a layout engine. This may be custom-written or based upon one used in a web browser. The goal is that, at all times during editing, the rendered result should represent what will be seen later in a typical web browser.
WYSIWYM (what you see is what you mean) is an alternative paradigm to WYSIWYG editors. Instead of focusing on the format or presentation of the document, it preserves the intended meaning of each element. For example, page headers, sections, paragraphs, etc. are labeled as such in the editing program, and displayed appropriately in the browser.
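A minimal sketch of what such meaning-labelled markup can look like (the elements are standard HTML5, but the page itself is invented for illustration; how a browser renders a header, navigation block or section heading is left to a stylesheet or to the browser's defaults):

```html
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <title>WYSIWYM example</title>
  </head>
  <body>
    <!-- Each element states what the content is, not how it should look -->
    <header>
      <h1>Site title</h1>
      <nav>
        <a href="/">Home</a>
        <a href="/about.html">About</a>
      </nav>
    </header>
    <main>
      <section>
        <h2>Section heading</h2>
        <p>An ordinary paragraph, labelled as such.</p>
      </section>
    </main>
    <footer>
      <p>Footer text</p>
    </footer>
  </body>
</html>
```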
Difficulties in achieving WYSIWYG
A given HTML document will have an inconsistent appearance on various platforms and computers for several reasons:
Different browsers and applications will render the same markup differently.
The same page may display slightly differently, for example, in Chrome, Safari, Edge, Internet Explorer and Firefox on a high-resolution screen, but it will look very different in the perfectly valid text-only Lynx browser. It needs to be rendered differently again on a PDA, an internet-enabled television and on a mobile phone. Usability in a speech or braille browser, or via a screen-reader working with a conventional browser, will place demands on entirely different aspects of the underlying HTML. All an author can do is suggest an appearance.
Web browsers, like all computer software, have bugs
They may not conform to current standards. It is hopeless to try to design Web pages around all of the common browsers' current bugs: each time a new version of each browser comes out, a significant proportion of the World Wide Web would need re-coding to suit the new bugs and the new fixes. It is generally considered much wiser to design to standards, staying away from 'bleeding edge' features until they settle down, and then wait for the browser developers to catch up to your pages, rather than the other way round. For instance, no one can argue that CSS is still 'cutting edge' as there is now widespread support available in common browsers for all the major features, even if many WYSIWYG and other editors have not yet entirely caught up.
A single visual style can represent multiple semantic meanings
Semantic meaning, derived from the underlying structure of the HTML document, is important for search engines and also for various accessibility tools. On paper we can tell from context and experience whether bold text represents a title, or emphasis, or something else. But it is very difficult to convey this distinction in a WYSIWYG editor. Simply making a piece of text bold in a WYSIWYG editor is not sufficient to tell the reader why the text is bold – what the boldness represents semantically.
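As a small illustration (the fragments are invented, but the elements are standard HTML), all three of the following typically render as bold text, yet only the last two convey to a search engine or screen reader what the boldness means:

```html
<!-- Purely presentational: bold, but with no stated reason -->
<b>Quarterly results</b>

<!-- Semantic: a second-level heading that happens to render bold -->
<h2>Quarterly results</h2>

<!-- Semantic: content of strong importance -->
<p><strong>Warning:</strong> unsaved changes will be lost.</p>
```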
Modern web sites are rarely constructed in a way that makes WYSIWYG useful
Modern web sites typically use a content management system or some other template processor-based means of constructing pages on the fly using content stored in a database. Individual pages are not stored in the filesystem in the form in which they would be designed and edited in a WYSIWYG editor; some form of abstracted, template-based layout is inevitable, invalidating one of the main benefits of using a WYSIWYG editor.
Valid HTML markup
HTML is a structured markup language. There are certain rules on how HTML must be written if it is to conform to W3C standards for the World Wide Web. Following these rules means that web sites are accessible on all types and makes of computer, to able-bodied people and people with disabilities, and also on wireless devices like mobile phones and PDAs, with their limited bandwidths and screen sizes. However, most HTML documents on the web do not meet the requirements of W3C standards. In a study conducted in 2011 on the 350 most popular web sites (selected by the Alexa index), 94 percent of websites failed the web standards markup and style sheet validation tests or applied character encoding improperly. Even syntactically correct documents may be inefficient due to unnecessary repetition, or may rely on rules that have been deprecated for some years.
Current W3C recommendations on the use of CSS with HTML were first formalised by W3C in 1996 and have been revised and refined since then.
These guidelines emphasise the separation of content (HTML or XHTML) from style (CSS). This has the benefit of delivering the style information once for a whole site, not repeated in each page, let alone in each HTML element. WYSIWYG editor designers have been struggling ever since with how best to present these concepts to their users without confusing them by exposing the underlying reality. Modern WYSIWYG editors all succeed in this to some extent, but none of them has succeeded entirely.
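A minimal sketch of this separation, assuming a hypothetical site-wide stylesheet named site.css: the HTML carries only structure and content, while presentation is declared once and shared by every page.

```html
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <title>Content/style separation</title>
    <!-- One external stylesheet, downloaded once and reused across the site.
         site.css might contain, for instance:
           h1      { font-size: 1.5em; color: #223; }
           p.intro { font-style: italic; }               -->
    <link rel="stylesheet" href="site.css">
  </head>
  <body>
    <h1>Page heading</h1>
    <p class="intro">No style rules are repeated in the markup itself.</p>
  </body>
</html>
```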
However a web page was created or edited, whether with a WYSIWYG editor or by hand, it should first and foremost consist of valid markup and code if it is to reach the greatest possible number of readers and viewers and to maintain the 'worldwide' value of the Web itself. It should not be considered ready for the World Wide Web until its HTML and CSS syntax have been successfully validated, using either the free W3C validator services (W3C HTML Validator and W3C CSS Validator) or some other trustworthy alternative.
Accessibility of web pages by those with physical, eyesight or other disabilities is not only a good idea considering the ubiquity and importance of the web in modern society, but is also mandated by law. In the U.S., the Americans with Disabilities Act and in the UK, the Disability Discrimination Act place requirements on web sites operated by publicly funded organizations. In many other countries similar laws either already exist or soon will. Making pages accessible is more complex than just making them valid; that is a prerequisite but there are many other factors to be considered. Good web design, whether done using a WYSIWYG tool or not, needs to take account of these too.
Whatever software tools are used to design, create and maintain web pages, the quality of the underlying HTML is dependent on the skill of the person who works on the page. Some knowledge of HTML, CSS and other scripting languages as well as a familiarity with the current W3C recommendations in these areas will help any designer produce better web pages, with a WYSIWYG HTML editor and without.
See also
Comparison of HTML editors
List of HTML editors
Web template system
Website builder
Visual editor
Validator
References
Web design | HTML editor | [
"Engineering"
] | 1,883 | [
"Design",
"Web design"
] |
165,744 | https://en.wikipedia.org/wiki/Kelp | Kelps are large brown algae or seaweeds that make up the order Laminariales. There are about 30 different genera. Despite its appearance and use of photosynthesis in chloroplasts, kelp is technically not a plant but a stramenopile (a group containing many protists).
Kelp grow from stalks close together in very dense, forest-like stands in shallow temperate and Arctic oceans. They were previously thought to have appeared in the Miocene, 5 to 23 million years ago, based on fossils from California. New fossils of kelp holdfasts from early Oligocene rocks in Washington State show that kelps were present in the northeastern Pacific Ocean by at least 32 million years ago. The organisms require nutrient-rich water with temperatures between about 6 and 14 °C. They are known for their high growth rate—the genera Macrocystis and Nereocystis can grow as fast as half a metre a day (about 20 inches a day), ultimately reaching lengths of 30 to 80 metres.
Through the 19th century, the word "kelp" was closely associated with seaweeds that could be burned to obtain soda ash (primarily sodium carbonate). The seaweeds used included species from both the orders Laminariales and Fucales. The word "kelp" was also used directly to refer to these processed ashes.
Description
The thallus (or body) consists of flat or leaf-like structures known as blades that originate from elongated stem-like structures, the stipes. A root-like structure, the holdfast, anchors the kelp to the substrate of the ocean. Gas-filled bladders (pneumatocysts) form at the base of blades of American species, such as Nereocystis luetkeana (Mert. & Post. & Rupr.), to hold the kelp blades close to the surface.
Growth and reproduction
Growth occurs at the base of the meristem, where the blades and stipe meet. Growth may be limited by grazing. Sea urchins, for example, can reduce entire areas to urchin barrens. The kelp life cycle involves a diploid sporophyte and haploid gametophyte stage. The haploid phase begins when the mature organism releases many spores, which then germinate to become male or female gametophytes. Sexual reproduction then results in the beginning of the diploid sporophyte stage, which will develop into a mature individual.
The parenchymatous thalli are generally covered with a mucilage layer, rather than a cuticle.
Taxonomy
Phylogeny
Seaweeds were generally considered homologues of terrestrial plants, but they are only very distantly related to plants and have evolved plant-like structures through convergent evolution. Where plants have leaves, stems, and reproductive organs, kelp have independently evolved blades, stipes, and sporangia. Based on radiometric dating and an unequivocal minimum age constraint for total-group Pinaceae, vascular plants are estimated to have evolved around 419–454 Ma (million years ago), while the ancestors of the Laminariales are much younger, at about 189 Ma. Although these groups are distantly related and differ in evolutionary age, comparisons can still be made between the structures of terrestrial plants and kelp; in terms of evolutionary history, most of these similarities arise from convergent evolution.
Some kelp species, including giant kelp, have evolved transport mechanisms for organic as well as inorganic compounds, similar to the transport mechanisms of trees and other vascular plants. In kelp this transportation network uses trumpet-shaped sieve elements (SEs). A 2015 study aimed at evaluating the efficiency of the transport anatomy of giant kelp (Macrocystis pyrifera) looked at six different Laminariales species to see whether they showed the allometric relationships typical of vascular plants (that is, whether SE dimensions correlate with the size of the organism). The researchers expected the kelp's phloem to work similarly to a plant's xylem and therefore to display similar allometric trends that minimize the pressure gradient. The study found no universal allometric scaling among the tested structures of the Laminariales species, which implies that the transport network of brown algae is only beginning to evolve to fit their current niches efficiently.
Apart from undergoing convergent evolution with plants, kelp species have undergone convergent evolution within their own phylogeny, which has led to niche conservatism. This niche conservatism means that some kelp species have convergently evolved to share similar niches, as opposed to all species diverging into distinct niches through adaptive radiation. A 2020 study examined functional traits (blade mass per area, stiffness, strength, etc.) of 14 kelp species and found that many of these traits evolved convergently across the kelp phylogeny. With different kelp species filling slightly different environmental niches, specifically along a wave disturbance gradient, many of these convergently evolved traits for structural reinforcement also correlate with distribution along that gradient. The wave disturbance gradient referred to in this study describes how much perturbation the kelp's habitat experiences from tides and waves pulling at the kelp. From these results it can be inferred that niche partitioning along wave disturbance gradients is a key driver of divergence between closely related kelp.
Because kelp often inhabit varied and turbulent habitats, plasticity in certain structural traits has been key to the group's evolutionary history. Plasticity underlies an important aspect of kelp adaptation to ocean environments: the unusually high level of morphological homoplasy between lineages, which has made classifying brown algae difficult. Kelp often share morphological features with other species growing in the same area, shaped by the roughness of the local wave disturbance regime, yet can look quite different from members of their own species found under different wave regimes. Plasticity in kelps most often involves blade morphology, such as the width, ruffling, and thickness of blades. One example is the giant bull kelp Nereocystis luetkeana, which has evolved to change blade shape in order to increase drag in water and interception of light when exposed to certain environments. Bull kelp is not unique in this adaptation; many kelp species have evolved genetic plasticity in blade shape for different water-flow habitats, so individuals of the same species can differ from one another depending on the habitat in which they grow. Many species have different morphologies for different wave disturbance regimes, but the giant kelp Macrocystis integrifolia has been found to have plasticity allowing four distinct types of blade morphology depending on habitat, whereas many species have only two or three blade shapes, maximizing efficiency in only two or three habitats. These different blade shapes were found to decrease breakage and increase the ability to photosynthesize. Blade adaptations like these are how kelp have evolved structural efficiency in a turbulent ocean environment, to the point where their stability can shape entire habitats. Apart from these structural adaptations, the evolution of dispersal methods related to structure has also been important for the success of kelp.
Kelp have had to evolve dispersal methods that make successful use of ocean currents. The buoyancy of certain kelp structures allows species to disperse with the flow of water. Some kelp form rafts, which can travel great distances from the source population and colonize other areas. The bull kelp genus Durvillaea includes six species, some of which have evolved buoyancy and others of which have not. Those that are buoyant owe it to gas-filled structures called pneumatocysts, which allow the kelp to float closer to the surface to photosynthesize and also aid dispersal via floating kelp rafts. For Macrocystis pyrifera, pneumatocysts and raft formation have made dispersal so successful that the immense stretch of coast on which the species is found turns out to have been colonized only recently, as shown by the low genetic diversity in the subantarctic region. Dispersal by rafts of buoyant species also explains part of the evolutionary history of non-buoyant kelp: because these rafts commonly carry hitchhikers of other species, they provide a dispersal mechanism for species that lack buoyancy, a mechanism recently confirmed by a study using genomic analysis. Studies of the evolution of kelp structure have helped clarify the adaptations that have allowed kelp to succeed not only as a group of organisms but also as ecosystem engineers of kelp forests, some of the most diverse and dynamic ecosystems on Earth.
Prominent species
Bull kelp, Nereocystis luetkeana, a northwestern American species. Used by coastal indigenous peoples to create fishing nets.
Giant kelp, Macrocystis pyrifera, the largest seaweed. Found in the Pacific coast of North America and South America.
Kombu, Saccharina japonica (formerly Laminaria japonica) and others, several edible species of kelp found in Japan.
Species of Laminaria in the British Isles:
Laminaria digitata (Hudson) J.V. Lamouroux (Oarweed; Tangle)
Laminaria hyperborea (Gunnerus) Foslie (Curvie)
Laminaria ochroleuca Bachelot de la Pylaie
Saccharina latissima (Linnaeus) J.V.Lamouroux (sea belt; sugar kelp; sugarwack)
Species of Laminaria worldwide, listing of species at AlgaeBase:
Laminaria agardhii (NE. America)
Laminaria bongardina Postels et Ruprecht (Bering Sea to California)
Laminaria cuneifolia (NE. America)
Laminaria dentigera Klellm. (California - America)
Laminaria digitata (NE. America)
Laminaria ephemera Setchell (Sitka, Alaska, to Monterey County, California - America)
Laminaria farlowii Setchell (Santa Cruz, California, to Baja California - America)
Laminaria groenlandica (NE. America)
Laminaria longicruris (NE. America)
Laminaria nigripes (NE. America)
Laminaria intermedia (NE. America)
Laminaria pallida Greville ex J. Agardh (South Africa)
Laminaria platymeris (NE. America)
Laminaria saccharina (Linnaeus) Lamouroux, synonym of Saccharina latissima (north east Atlantic Ocean, Barents Sea south to Galicia - Spain)
Laminaria setchellii Silva (Aleutian Islands, Alaska to Baja California America)
Laminaria sinclairii (Harvey ex Hooker f. ex Harvey) Farlow, Anderson et Eaton (Hope Island, British Columbia to Los Angeles, California - America)
Laminaria solidungula (NE. America)
Laminaria stenophylla (NE. America)
Other species in the Laminariales that may be considered as kelp:
Alaria esculenta (North Atlantic)
Alaria marginata Post. & Rupr. (Alaska and California - America)
Costaria costata (C.Ag.) Saunders (Japan; Alaska, California - America)
Ecklonia brevipes J. Agardh (Australia; New Zealand)
Ecklonia maxima (Osbeck) Papenfuss (South Africa)
Ecklonia radiata (C.Agardh) J. Agardh (Australia; Tasmania; New Zealand; South Africa)
Eisenia arborea Aresch. (Vancouver Island, British Columbia, Monterey, Santa Catalina Island, California - America)
Egregia menziesii (Turn.) Aresch.
Hedophyllum sessile (C.Ag.) Setch (Alaska, California - America)
Macrocystis pyrifera (Linnaeus, C.Agardh) (Australia; Tasmania and South Africa)
Pleurophycus gardneri Setch. & Saund. (Alaska, California - America)
Pterygophora californica Rupr. (Vancouver Island, British Columbia to Bahia del Ropsario, Baja California and California - America)
Non-Laminariales species that may be considered as kelp:
Durvillaea antarctica, Fucales (New Zealand, South America, and Australia)
Durvillaea willana, Fucales (New Zealand)
Durvillaea potatorum (Labillardière) Areschoug, Fucales (Tasmania; Australia)
Ecology
Kelp forests
Kelp may develop dense forests with high production (Abdullah, M.I. and Fredriksen, S., 2004. Production, respiration and exudation of dissolved organic matter by the kelp Laminaria hyperborea along the west coast of Norway. Journal of the Marine Biological Association of the UK 84: 887), biodiversity and ecological function. Along the Norwegian coast these forests cover 5,800 km2, and they support large numbers of animals (Jørgensen, N.M. and Christie, H., 2003. Diurnal, horizontal and vertical dispersal of kelp-associated fauna. Hydrobiologia 50: 69-76). Numerous sessile animals (sponges, bryozoans and ascidians) are found on kelp stipes, and mobile invertebrate fauna are found in high densities on epiphytic algae on the kelp stipes and on kelp holdfasts. More than 100,000 mobile invertebrates per square meter are found on kelp stipes and holdfasts in well-developed kelp forests. While larger invertebrates, and in particular sea urchins (Strongylocentrotus droebachiensis), are important secondary consumers controlling large barren ground areas on the Norwegian coast, they are scarce inside dense kelp forests.
Interactions
Some animals are named after the kelp, either because they inhabit the same habitat as kelp or because they feed on kelp. These include:
Northern kelp crab (Pugettia producta) and graceful kelp crab (Pugettia gracilis), Pacific coast of North America.
Kelpfish (blenny) (e.g., Heterosticbus rostratus, genus Gibbonsia), Pacific coast of North America.
Kelp goose (kelp hen) (Chloephaga hybrida), South America and the Falkland Islands
Kelp pigeon (sheathbill) (Chionis alba and Chionis minor), Antarctic
Conservation
Overfishing nearshore ecosystems leads to the degradation of kelp forests. Herbivores are released from their usual population regulation, leading to over-grazing of kelp and other algae. This can quickly result in barren landscapes where only a small number of species can thrive (Sala, E., C.F. Bourdouresque and M. Harmelin-Vivien. 1998. Fishing, trophic cascades, and the structure of algal assemblages: evaluation of an old but untested paradigm. Oikos 82: 425-439). Other major factors that threaten kelp include marine pollution, poor water quality, climate change and certain invasive species.
Kelp forests are some of the most productive ecosystems in the world; they are home to a great diversity of species. Many groups, such as those at the Seattle Aquarium, are studying health, habitat, and population trends in order to understand why certain kelps (like bull kelp) thrive in some areas and not others. Remotely operated vehicles are used to survey sites, and the data extracted are used to learn which conditions are best suited for kelp restoration.
Uses
Giant kelp can be harvested fairly easily because of its surface canopy and growth habit of staying in deeper water.
Kelp ash is rich in iodine and alkali. In great amounts, kelp ash can be used in soap and glass production. Until the Leblanc process was commercialized in the early 19th century, burning of kelp in Scotland was one of the principal industrial sources of soda ash (predominantly sodium carbonate). Around 23 tons of seaweed was required to produce 1 ton of kelp ash. The kelp ash would consist of around 5% sodium carbonate.
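Taking the figures above at face value, the implied yield works out roughly as follows (a back-of-the-envelope calculation, not a quoted source):

```latex
% 23 t seaweed -> 1 t kelp ash; kelp ash is about 5% sodium carbonate
\[
  1\ \text{t kelp ash} \times 0.05 \approx 0.05\ \text{t sodium carbonate}
\]
\[
  \frac{23\ \text{t seaweed}}{0.05\ \text{t sodium carbonate}} \approx 460\ \text{t of seaweed per ton of sodium carbonate}
\]
```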
Once the Leblanc Process became commercially viable in Britain during the 1820s, common salt replaced kelp ash as raw material for sodium carbonate. Though the price of kelp ash went into steep decline, seaweed remained the only commercial source of iodine. To supply the new industry in iodine synthesis, kelp ash production continued in some parts of West and North Scotland, North West Ireland and Guernsey. The species Saccharina latissima yielded the greatest amount of iodine (between 10 and 15 lbs per ton) and was most abundant in Guernsey. Iodine was extracted from kelp ash using a lixiviation process. As with sodium carbonate however, mineral sources eventually supplanted seaweed in iodine production.
Alginate, a kelp-derived carbohydrate, is used to thicken products such as ice cream, jelly, salad dressing, and toothpaste, as well as an ingredient in exotic dog food and in manufactured goods. Alginate powder is also used frequently in general dentistry and orthodontics for making impressions of the upper and lower arches. Kelp polysaccharides are used in skin care as gelling ingredients and because of the benefits provided by fucoidan.
Kombu (昆布 in Japanese, and 海带 in Chinese, Saccharina japonica and others), several Pacific species of kelp, is a very important ingredient in Chinese, Japanese, and Korean cuisines. Kombu is used to flavor broths and stews (especially dashi), as a savory garnish (tororo konbu) for rice and other dishes, as a vegetable, and a primary ingredient in popular snacks (such as tsukudani). Transparent sheets of kelp (oboro konbu) are used as an edible decorative wrapping for rice and other foods.
Kombu can be used to soften beans during cooking, and to help convert indigestible sugars and thus reduce flatulence.
In Russia, especially in the Russian Far East, and former Soviet Union countries several types of kelp are of commercial importance: Saccharina latissima, Laminaria digitata, Saccharina japonica. Known locally as "Sea Cabbage" (Морская капуста in Russian), it comes in retail trade in dried or frozen, as well as in canned form and used as filler in different types of salads, soups and pastries.
Because of its high concentration of iodine, brown kelp (Laminaria) has been used to treat goiter, an enlargement of the thyroid gland caused by a lack of iodine, since medieval times. An intake of roughly 150 micrograms of iodine per day is beneficial for preventing hypothyroidism. Overconsumption can lead to kelp-induced thyrotoxicosis.
In 2010, researchers found that alginate, the soluble fibre substance in sea kelp, was better at preventing fat absorption than most over-the-counter slimming treatments in laboratory trials. As a food additive, it may be used to reduce fat absorption and thus obesity. Kelp in its natural form has not yet been demonstrated to have such effects.
Kelp's rich iron content can help prevent iron deficiency.
Commercial production
Commercial production of kelp harvested from its natural habitat has taken place in Japan for over a century. Many countries today produce and consume laminaria products; the largest producer is China. Laminaria japonica, the important commercial seaweed, was first introduced into China in the late 1920s from Hokkaido, Japan. Yet mariculture of this alga on a very large commercial scale was realized in China only in the 1950s. Between the 1950s and the 1980s, kelp production in China increased from about 60 to over 250,000 dry weight metric tons annually.
In culture
Some of the earliest evidence for human use of marine resources, coming from Middle Stone Age sites in South Africa, includes the harvesting of foods such as abalone, limpets, and mussels associated with kelp forest habitats.
In 2007, Erlandson et al. suggested that kelp forests around the Pacific Rim may have facilitated the dispersal of anatomically modern humans following a coastal route from Northeast Asia to the Americas. This "kelp highway hypothesis" suggested that highly productive kelp forests supported rich and diverse marine food webs in nearshore waters, including many types of fish, shellfish, birds, marine mammals, and seaweeds that were similar from Japan to California. Erlandson and his colleagues also argued that coastal kelp forests reduced wave energy and provided a linear dispersal corridor entirely at sea level, with few obstacles to maritime peoples. Archaeological evidence from California's Channel Islands confirms that islanders were harvesting kelp forest shellfish and fish, beginning as much as 12,000 years ago.
During the Highland Clearances, many Scottish Highlanders were moved on to areas of estates known as crofts, and went to industries such as fishing and kelping (producing soda ash from the ashes of kelp). At least until the 1840s, when there were steep falls in the price of kelp, landlords wanted to create pools of cheap or virtually free labour, supplied by families subsisting in new crofting townships. Kelp collection and processing was a very profitable way of using this labour, and landlords petitioned successfully for legislation designed to stop emigration. The profitability of kelp harvesting meant that landlords began to subdivide their land for small tenant kelpers, who could now afford higher rent than their gentleman farmer counterparts. But the economic collapse of the kelp industry in northern Scotland during the 1820s led to further emigration, especially to North America.
Natives of the Falkland Islands are sometimes nicknamed "Kelpers" (dictionary.com definition for "Kelper"). This designation is primarily applied by outsiders rather than the natives themselves.
In Chinese slang, "kelp" (海带, hǎidài) is used to describe an unemployed returnee. It has negative overtones, implying the person is drifting aimlessly, and it is also a homophonic pun (海待, hǎidài, literally "sea waiting"). This expression is contrasted with that for the employed returnee, seen as having a dynamic ability to travel across the ocean: the "sea turtle" (海龟, hǎiguī), which is in turn homophonic with another word (海归, hǎiguī, literally "sea return").
Gallery
See also
Aquaculture of giant kelp
References
Further reading
Druehl, L.D. 1988. Cultivated edible kelp. In Algae and Human Affairs. Lembi, C.A. and Waaland, J.R. (Editors) 1988.
Erlandson, J.M., M.H. Graham, B.J. Bourque, D. Corbett, J.A. Estes, & R.S. Steneck. 2007. The Kelp Highway hypothesis: marine ecology, the coastal migration theory, and the peopling of the Americas. Journal of Island and Coastal Archaeology 2:161-174.
Eger, A. M., Layton, C., McHugh, T. A, Gleason, M., and Eddy, N. (2022). Kelp Restoration Guidebook: Lessons Learned from Kelp Projects Around the World. The Nature Conservancy, Arlington, VA, USA.
External links
Edible seaweeds
Seaweeds | Kelp | [
"Biology"
] | 4,974 | [
"Seaweeds",
"Algae"
] |
165,755 | https://en.wikipedia.org/wiki/Pearl%20S.%20Buck | Pearl Comfort Sydenstricker Buck (June 26, 1892 – March 6, 1973) was an American writer and novelist. She is best known for The Good Earth, the best-selling novel in the United States in 1931 and 1932 and which won her the Pulitzer Prize in 1932. In 1938, Buck became the first American woman to win the Nobel Prize in Literature "for her rich and truly epic descriptions of peasant life in China" and for her "masterpieces", two memoir-biographies of her missionary parents.
Buck was born in West Virginia, but in October 1892, her parents took their 4-month-old baby to China. As the daughter of missionaries and later as a missionary herself, Buck spent most of her life before 1934 in Zhenjiang, with her parents, and in Nanjing, with her first husband. She and her parents spent their summers in a villa in Kuling, Mount Lu, Jiujiang, and it was during this annual pilgrimage that the young girl decided to become a writer. She graduated from Randolph-Macon Woman's College in Lynchburg, Virginia, then returned to China. From 1914 to 1932, after marrying John Lossing Buck, she served as a Presbyterian missionary, but she came to doubt the need for foreign missions. Her views became controversial during the Fundamentalist–Modernist controversy, leading to her resignation. After returning to the United States in 1935, she married the publisher Richard J. Walsh and continued writing prolifically. She became an activist and prominent advocate of the rights of women and racial equality, and wrote widely on Chinese and Asian cultures, becoming particularly well known for her efforts on behalf of Asian and mixed-race adoption.
Early life and education
Originally named Comfort, Pearl Sydenstricker was born in Hillsboro, West Virginia, to Caroline Maude (Stulting) (1857–1921) and Absalom Sydenstricker, of Dutch and German descent respectively. Her parents, Southern Presbyterian missionaries, were married on July 8, 1880 and moved to China shortly thereafter, but returned to the United States for Pearl's birth. When Pearl was five months old, the family returned to China, living first in Huai'an and then in 1896 moving to Zhenjiang, which was then known as Chingkiang in the Chinese postal romanization system, near the major city of Nanjing. In summer, she and her family spent time in Kuling. Her father built a stone villa in Kuling in 1897, and lived there until his death in 1931. It was during this annual summer pilgrimage in Kuling that the young girl decided to become a writer.
Of her siblings who survived into adulthood, Edgar Sydenstricker had a distinguished career with the U.S. Public Health Service and later the Milbank Memorial Fund, and Grace Sydenstricker Yaukey (1899–1994) wrote young adult books and books about Asia under the pen name Cornelia Spencer.
Pearl recalled in her memoir that she lived in "several worlds", one a "small, white, clean Presbyterian world of my parents", and the other the "big, loving merry not-too-clean Chinese world", and there was no communication between them. The Boxer Uprising (1899–1901) greatly affected the family; their Chinese friends deserted them, and Western visitors decreased. Her father, convinced that no Chinese could wish him harm, stayed behind as the rest of the family went to Shanghai for safety. A few years later, Buck was enrolled in Miss Jewell's School in Shanghai, and was dismayed at the racist attitudes there of other students, few of whom could speak any Chinese. Both of her parents felt strongly that Chinese were their equals; they forbade the use of the word heathen, and she was raised in a bilingual environment: tutored in English by her mother, in the local dialect by her Chinese playmates, and in classical Chinese by a Chinese scholar named Mr. Kung. She also read voraciously, especially, in spite of her father's disapproval, the novels of Charles Dickens, which she later said she read through once a year for the rest of her life.
In 1911, Buck left China to attend Randolph-Macon Woman's College in Lynchburg, Virginia, where she graduated Phi Beta Kappa in 1914 and was a member of Kappa Delta sorority.
Career
China
Although Buck had not intended to return to China, much less become a missionary, she quickly applied to the Presbyterian Board when her father wrote that her mother was seriously ill. In 1914, Buck returned to China. She married an agricultural economist missionary, John Lossing Buck, on May 13, 1917, and they moved to Suzhou, Anhui Province, a small town on the Huai River (not to be confused with the better-known Suzhou in Jiangsu Province). This is the region she describes in her books The Good Earth and Sons.
From 1920 to 1933, the Bucks made their home in Nanjing, on the campus of the University of Nanking, where they both had teaching positions. She taught English literature at this private, church-run university, and also at Ginling College and at the National Central University. In 1920, the Bucks had a daughter, Carol, who was afflicted with phenylketonuria that left her severely developmentally disabled. Buck had to have a hysterectomy due to complications of Carol's birth, leaving her unable to have more biological children. In 1921, Buck's mother died of a tropical disease, sprue, and shortly afterward her father moved in. In 1924, they left China for John Buck's year of sabbatical and returned to the United States for a short time, during which Pearl Buck earned a master's degree from Cornell University. In 1925, the Bucks adopted a child named Janice (later surnamed Walsh). That autumn, they returned to China.
The tragedies and dislocations that Buck suffered in the 1920s reached a climax in March 1927, during the "Nanking Incident". In a confused battle involving elements of Chiang Kai-shek's Nationalist troops, Communist forces, and assorted warlords, several Westerners were murdered. Since her father Absalom insisted, as he had in 1900 in the face of the Boxers, the family decided to stay in Nanjing until the battle reached the city. When violence broke out, a poor Chinese family invited them to hide in their hut while the family house was looted. The family spent a day terrified and in hiding, after which they were rescued by American gunboats. They traveled to Shanghai and then sailed to Japan, where they stayed for a year, after which they moved back to Nanjing. Buck later said that this year in Japan showed her that not all Japanese were militarists. When she returned from Japan in late 1927, Buck devoted herself in earnest to the vocation of writing. Friendly relations with prominent Chinese writers of the time, such as Xu Zhimo and Lin Yutang, encouraged her to think of herself as a professional writer. She wanted to fulfill the ambitions denied to her mother, but she also needed money to support herself if she left her marriage, which had become increasingly lonely. Since the mission board could not provide it, she also needed money for Carol's specialized care.
Buck traveled once more to the United States in 1929 to find long-term care for Carol, eventually placing her in the Vineland Training School in New Jersey. Buck served on the Board of Trustees for the school, at which Carol lived for the rest of her life and where she eventually died in 1992 at age 72. While Buck was in the United States, Richard J. Walsh, editor at John Day publishers in New York, accepted her novel East Wind: West Wind. She and Walsh began a relationship that would result in marriage and many years of professional teamwork.
Back in Nanking, Buck retreated every morning to the attic of her university house, and within the year, completed the manuscript for The Good Earth. She was involved in the charity relief campaign for the victims of the 1931 China floods, writing a series of short stories describing the plight of refugees, which were broadcast on the radio in the United States and later published in her collected volume The First Wife and Other Stories.
When her husband took the family to Ithaca, New York the following year, Buck accepted an invitation to address a luncheon of Presbyterian women at the Hotel Astor in New York City. Her talk was titled "Is There a Case for the Foreign Missionary?" and her answer was a barely qualified "no". She told her American audience that she welcomed Chinese to share her Christian faith, but argued that China did not need an institutional church dominated by missionaries who were too often ignorant of China and arrogant in their attempts to control it. When the talk was published in Harper's Magazine, the scandalized reaction led Buck to resign her position with the Presbyterian Board. In 1934, Buck left China, believing she would return, while her husband remained.
United States
Buck divorced her husband in Reno, Nevada on June 11, 1935, and she married Richard Walsh that same day. He reportedly offered her advice and affection which, her biographer concludes, "helped make Pearl's prodigious activity possible". The couple moved with Janice to Green Hills Farm in Bucks County, Pennsylvania, which they quickly set about filling with adopted children. Two sons were brought home as infants in 1936 and followed by another son and daughter in 1937.
Following the Communist Revolution in 1949, Buck was repeatedly refused all attempts to return to her beloved China. Her 1962 novel Satan Never Sleeps described the Communist tyranny in China. During the Cultural Revolution, Buck, as a preeminent American writer of Chinese village life, was denounced as an "American cultural imperialist". Buck was "heartbroken" when she was prevented from visiting China with Richard Nixon in 1972.
Nobel Prize in Literature
In 1938, in awarding her the prize, the Nobel Prize committee cited her "rich and truly epic descriptions of peasant life in China" and her biographical "masterpieces".
In her speech to the Academy, Buck took as her topic "The Chinese Novel". She explained, "I am an American by birth and by ancestry", but "my earliest knowledge of story, of how to tell and write stories, came to me in China." After an extensive discussion of classic Chinese novels, especially Romance of the Three Kingdoms, All Men Are Brothers, and Dream of the Red Chamber, she concluded that in China "the novelist did not have the task of creating art but of speaking to the people." Her own ambition, she continued, had not been trained toward "the beauty of letters or the grace of art." In China, the task of the novelist differed from the Western artist: "To farmers he must talk of their land, and to old men he must speak of peace, and to old women he must tell of their children, and to young men and women he must speak of each other." And like the Chinese novelist, she concluded, "I have been taught to want to write for these people. If they are reading their magazines by the million, then I want my stories there rather than in magazines read only by a few."
Humanitarian efforts
Buck was committed to a range of issues that were largely ignored by her generation. Many of her life experiences and political views are described in her novels, short stories, fiction, children's stories, and the biographies of her parents entitled Fighting Angel (on Absalom) and The Exile (on Carrie). She wrote on diverse subjects, including women's rights, Asian cultures, immigration, adoption, missionary work, war, the atomic bomb (Command the Morning), and violence. Long before it was considered fashionable or politically safe to do so, Buck challenged the American public by raising consciousness on topics such as racism, sex discrimination and the plight of Asian war children. Buck combined the careers of wife, mother, author, editor, international spokesperson, and political activist. Buck became well-known as an advocate for civil rights, women's rights, and disability rights.
In 1949, after finding that existing adoption services considered Asian and mixed-race children unadoptable, Buck founded the first permanent foster home for US-born mixed-race children of Asian descent, naming it The Welcome Home. The foster home was located in a 16-room farmhouse in Pennsylvania next door to Buck's own home, Green Hill Farm, and Buck was actively involved in everything from planning the children's diets to buying their clothing. Among the home's Board of Directors were librettist Oscar Hammerstein II and his second wife, interior designer Dorothy, composer Richard Rodgers, seed company tycoon David Burpee and his wife Lois and author James A. Michener. As more and more children were referred to the foster home, however, it quickly became apparent that it couldn't accommodate them all and adoptive homes were needed. Welcome Home was turned into the first international, interracial adoption agency, and Buck began actively promoting the adoption of mixed-race children to the American public. In an effort to overcome the longstanding public view that such children were inferior and undesirable, Buck claimed in interviews and speeches that "hybrid" children of interracial backgrounds were actually genetically superior to other children in terms of intelligence and health. She and her husband Richard then adopted two mixed-race daughters from overseas themselves: an Afro-German girl in 1951 and an Afro-Japanese girl in 1957, giving her eight children in total. In 1967 she turned over most of her earnings—more than $7 million— to the adoption agency to help with costs.
Buck established the Pearl S. Buck Foundation (name changed to Pearl S. Buck International in 1999) to "address poverty and discrimination faced by children in Asian countries." In 1964, she opened the Opportunity Center and Orphanage in South Korea, and later offices were opened in Thailand, the Philippines, and Vietnam. When establishing Opportunity House, Buck said, "The purpose ... is to publicize and eliminate injustices and prejudices suffered by children, who, because of their birth, are not permitted to enjoy the educational, social, economic and civil privileges normally accorded to children."
In 1960, after a long decline in health that included a series of strokes, Buck's husband Richard Walsh died. She renewed a warm relationship with William Ernest Hocking, who died in 1966. Buck then withdrew from many of her old friends and quarreled with others.
In 1962 Buck asked the Israeli Government for clemency for Adolf Eichmann, the Nazi war criminal who was complicit in the deaths of six million Jews during World War II, as she and others believed that carrying out capital punishment against Eichmann could be seen as an act of vengeance, especially since the war had ended.
During a December 17, 1962 visit to the Kennedy White House, Buck urged the Kennedy administration to help resolve People's Republic of China-Taiwan relations by supporting de facto independence of Taiwan for a 10 to 25 year period with an agreement that afterwards a plebiscite could be held based on a negotiated settlement.
Buck's ties with her native state remained strong. In the title essay of My Mother's House, a small book written by Buck and others to help raise funds for the Birthplace Museum, she paid tribute to the house her mother had cherished while living far away: "For me it was a living heart in the country I knew was my own but which was strange to me until I returned to the house where I was born." In the late 1960s, Buck toured West Virginia to raise money to preserve her family farm in Hillsboro, West Virginia. Today the Pearl S. Buck Birthplace is a historic house museum and cultural center. She hoped the house would "belong to everyone who cares to go there," and serve as a "gateway to new thoughts and dreams and ways of life." Former U.S. President George H. W. Bush toured the Pearl S. Buck House in October 1998. He expressed that he, like millions of other Americans, had gained an appreciation for the Chinese people through Buck's writing.
Final years
In the mid-1960s, Buck increasingly came under the influence of Theodore Harris, a former dance instructor, who became her confidant, co-author, and financial advisor. She soon depended on him for all her daily routines, and placed him in control of Welcome House and the Pearl S. Buck Foundation. Harris, who was given a lifetime salary as head of the foundation, created a scandal for Buck when he was accused of mismanaging the foundation, diverting large amounts of the foundation's funds for his friends' and his own personal expenses, and treating staff poorly. Buck defended Harris, stating that he was "very brilliant, very high strung and artistic." Before her death, Buck signed over her foreign royalties and her personal possessions to Creativity Inc., a foundation controlled by Harris.
Death
Pearl S. Buck died of lung cancer on March 6, 1973, in Danby, Vermont.
She was interred on Green Hills Farm in Perkasie, Pennsylvania. She designed her own tombstone. Her name was not inscribed in English on her tombstone. Instead, the grave marker is inscribed with the Chinese characters 賽珍珠 (Sai Zhenzhu), representing the name Pearl Sydenstricker; specifically, Sai is the sound of the first syllable of her last name (Chinese last names come first), and Zhenzhu is the Chinese word for pearl.
Buck left behind three contradictory wills, resulting in a three-way legal dispute over her estate between her financial advisor Theodore Harris, the nonprofit Pearl Buck Foundation, and her seven adopted children. After a six-year battle, the dispute was settled in her children's favor after both Harris and the Pearl Buck Foundation dropped their claims (the latter in return for a financial settlement from Buck's children).
Legacy
Many contemporary reviewers praised Buck's "beautiful prose", even though her "style is apt to degenerate into over-repetition and confusion". Robert Benchley wrote a parody of The Good Earth that emphasised these qualities. Peter Conn, in his biography of Buck, argues that despite the accolades awarded to her, Buck's contribution to literature has been mostly forgotten or deliberately ignored by America's cultural gatekeepers. Kang Liao argues that Buck played a "pioneering role in demythologizing China and the Chinese people in the American mind". Phyllis Bentley, in an overview of Buck's work published in 1935, was altogether impressed: "But we may say at least that for the interest of her chosen material, the sustained high level of her technical skill, and the frequent universality of her conceptions, Mrs. Buck is entitled to take rank as a considerable artist. To read her novels is to gain not merely knowledge of China but wisdom about life." These works aroused considerable popular sympathy for China, and helped foment a more critical view of Japan and its aggression.
Chinese-American author Anchee Min said she "broke down and sobbed" after reading The Good Earth for the first time as an adult, which she had been forbidden to read growing up in China during the Cultural Revolution. Min said Buck portrayed the Chinese peasants "with such love, affection and humanity" and it inspired Min's novel Pearl of China (2010), a fictional biography about Buck.
In 1973, Buck was inducted into the National Women's Hall of Fame. Buck was honored in 1983 with a 5¢ Great Americans series postage stamp issued by the United States Postal Service. In 1999, she was designated a Women's History Month Honoree by the National Women's History Project.
Buck's former residence at Nanjing University is now the Pearl S. Buck Memorial House (in Mandarin, 賽珍珠紀念館), along the West Wall of the university's north campus.
Pearl Buck's papers and literary manuscripts are currently housed at Pearl S. Buck International and the West Virginia & Regional History Center.
Selected bibliography
Autobiographies
My Several Worlds: A Personal Record (New York: John Day, 1954)
My Several Worlds – abridged for younger readers by Cornelia Spencer (New York: John Day, 1957)
A Bridge for Passing (New York: John Day, 1962) – autobiographical account of the filming of Buck's children's book, The Big Wave
Biographies
The Exile: Portrait of an American Mother (New York: John Day, 1936) – about her mother, Caroline Stulting Sydenstricker (1857–1921); serialized in Woman's Home Companion magazine (10/1935–3/1936)
Fighting Angel: Portrait of a Soul (New York: Reynal & Hitchcock, 1936) – about her father, Absalom Sydenstricker (1852–1931)
The Spirit and the Flesh (New York: John Day, 1944) – includes The Exile: Portrait of an American Mother and Fighting Angel: Portrait of a Soul
Novels
East Wind: West Wind (New York: John Day, 1930) – working title Winds of Heaven
The Good Earth (New York: John Day, 1931); The House of Earth trilogy #1 – made into a feature film The Good Earth (MGM, 1937)
Sons (New York: John Day, 1933); The House of Earth trilogy #2; serialized in Cosmopolitan (4–11/1932)
A House Divided (New York: Reynal & Hitchcock, 1935); The House of Earth trilogy #3
The House of Earth (trilogy) (New York: Reynal & Hitchcock, 1935) – includes: The Good Earth, Sons, A House Divided
All Men Are Brothers (New York: John Day, 1933) – a translation by Buck of the Chinese classical prose epic Water Margin (Shui Hu Zhuan)
The Mother (New York: John Day, 1933) – serialized in Cosmopolitan (7/1933–1/1934)
This Proud Heart (New York: Reynal & Hitchcock, 1938) – serialized in Good Housekeeping magazine (8/1937–2/1938)
The Patriot (New York: John Day, 1939)
Other Gods: An American Legend (New York: John Day, 1940) – excerpt serialized in Good Housekeeping magazine as "American Legend" (12/1938–5/1939)
China Sky (New York: John Day, 1941) – China trilogy #1; serialized in Collier's Weekly magazine (2–4/1941); made into a feature film China Sky (RKO, 1945)
China Gold: A Novel of War-torn China (New York: John Day, 1942) – China trilogy #2; serialized in Collier's Weekly magazine (2–4/1942)
Dragon Seed (New York: John Day, 1942) – serialized in Asia (9/1941–2/1942); made into a feature film Dragon Seed (MGM, 1944)
The Promise (New York: John Day, 1943) – sequel to Dragon Seed; serialized in Asia and the Americas (Asia) (11/1942–10/1943)
China Flight (Philadelphia: Triangle Books/Blakiston Company, 1945) – China trilogy #3; serialized in Collier's Weekly magazine (2–4/1943)
Portrait of a Marriage (New York: John Day, 1945) – illustrated by Charles Hargens
The Townsman (New York: John Day, 1945) – as John Sedges
Pavilion of Women (New York: John Day, 1946) – made into a feature film Pavilion of Women (Universal Focus, 2001)
The Angry Wife (New York: John Day, 1947) – as John Sedges
Peony (New York: John Day, 1948) – published in the UK as The Bondmaid (London: T. Brun, 1949); serialized in Cosmopolitan (3–4/1948)
Kinfolk (New York: John Day, 1949) – serialized in Ladies' Home Journal (10/1948–2/1949)
The Long Love (New York: John Day, 1949) – as John Sedges
God's Men (New York: John Day, 1951)
Sylvia (1951) – alternate title No Time for Love, serialized in Redbook magazine (1951)
Bright Procession (New York: John Day, 1952) – as John Sedges
The Hidden Flower (New York: John Day, 1952) – serialized in Woman's Home Companion magazine (3–4/1952)
Come, My Beloved (New York: John Day, 1953)
Voices in the House (New York: John Day, 1953) – as John Sedges
Imperial Woman: The Story of the Last Empress of China (New York: John Day, 1956) – about Empress Dowager Cixi; serialized in Woman's Home Companion (3–4/1956)
Letter from Peking (New York: John Day, 1957)
American Triptych: Three John Sedges Novels (New York: John Day, 1958) – includes The Townsman, The Long Love, Voices in the House
Command the Morning (New York: John Day, 1959)
Satan Never Sleeps (New York: Pocket Books, 1962) – made into the 1962 film Satan Never Sleeps, also known as The Devil Never Sleeps and Flight from Terror
The Living Reed: A Novel of Korea (New York: John Day, 1963)
Death in the Castle (New York: John Day, 1965)
The Time Is Noon (New York: John Day, 1966)
The New Year (New York: John Day, 1968)
The Three Daughters of Madame Liang (London: Methuen, 1969)
Mandala: A Novel of India (New York: John Day, 1970)
The Goddess Abides (New York: John Day, 1972)
All under Heaven (New York: John Day, 1973)
The Rainbow (New York: John Day, 1974)
The Eternal Wonder (believed to have been written shortly before her death, published in October 2013)
Non-fiction
Is There a Case for Foreign Missions? (New York: John Day, 1932)
The Chinese Novel: Nobel Lecture Delivered before the Swedish Academy at Stockholm, December 12, 1938 (New York: John Day, 1939)
Of Men and Women (New York: John Day, 1941) – Essays
American Unity and Asia (New York: John Day, 1942) – UK edition titled Asia and Democracy (London: Macmillan, 1943) – Essays
What America Means to Me (New York: John Day, 1943) – UK edition (London: Methuen, 1944) – Essays
Talk about Russia (with Masha Scott) (New York: John Day, 1945) – serialized in Asia and the Americas magazine (Asia) as Talks with Masha (1945)
Tell the People: Talks with James Yen about the Mass Education Movement (New York: John Day, 1945)
How It Happens: Talk about the German People, 1914–1933, with Erna von Pustau (New York: John Day, 1947)
American Argument with Eslanda Goode Robeson (New York: John Day, 1949)
The Child Who Never Grew (New York: John Day, 1950)
The Man Who Changed China: The Story of Sun Yat-sen (New York: John Day, 1953) – for children
Friend to Friend: A Candid Exchange between Pearl S. Buck and Carlos P. Romulo (New York: John Day, 1958)
For Spacious Skies (1966)
The People of Japan (1966)
To My Daughters, with Love (New York: John Day, 1967)
The Kennedy Women (1970)
China as I See It (1970)
The Story Bible (1971)
Pearl S. Buck's Oriental Cookbook (1972)
Words of Love (1974)
Short stories
Collections
The First Wife and Other Stories (London: Methuen, 1933) – includes: "The First Wife", "The Old Mother", "The Frill", "The Quarrel", "Repatriated", "The Rainy Day", "Wang Lung", "The Communist", "Father Andrea", "The New Road", "Barren Spring", "The Refugees", "Fathers and Mothers", "The Good River"
Today and Forever: Stories of China (New York: John Day, 1941) – includes: "The Lesson", "The Angel", "Mr. Binney's Afternoon", "The Dance", "Shanghai Scene", "Hearts Come Home", "His Own Country", "Tiger! Tiger!", "Golden Flower", "The Face of Buddha", "Guerrilla Mother", "A Man's Foes", "The Old Demon"
Twenty-seven Stories (Garden City, NY: Sun Dial Press, 1943) – includes (from The First Wife and Other Stories): "The First Wife", "The Old Mother", "The Frill", "The Quarrel", "Repatriated", "The Rainy Day", "Wang Lung", "The Communist", "Father Andrea", "The New Road", "Barren Spring", "The Refugees", "Fathers and Mothers", "The Good River"; and (from Today and Forever: Stories of China): "The Lesson", "The Angel", "Mr. Binney's Afternoon", "The Dance", "Shanghai Scene", "Hearts Come Home", "His Own Country", "Tiger! Tiger!", "Golden Flower", "The Face of Buddha", "Guerrilla Mother", "A Man's Foes", "The Old Demon"
Far and Near: Stories of Japan, China, and America (New York: John Day, 1947) – includes: "The Enemy", "Home Girl", "Mr. Right", "The Tax Collector", "A Few People", "Home to Heaven", "Enough for a Lifetime", "Mother and Sons", "Mrs. Mercer and Her Self", "The Perfect Wife", "Virgin Birth", "The Truce", "Heat Wave", "The One Woman"
Fourteen Stories (New York: John Day, 1961) – includes: "A Certain Star," "The Beauty", "Enchantment", "With a Delicate Air", "Beyond Language", "Parable of Plain People", "The Commander and the Commissar", "Begin to Live", "The Engagement", "Melissa", "Gift of Laughter", "Death and the Dawn", "The Silver Butterfly", "Francesca"
Hearts Come Home and Other Stories (New York: Pocket Books, 1962)
Stories of China (1964)
Escape at Midnight and Other Stories (1964)
The Good Deed, and other Stories of Asia, Past and Present (1970)
East and West Stories (1975)
Secrets of the Heart: Stories (1976)
The Lovers and Other Stories (1977)
Mrs. Stoner and the Sea and Other Stories (1978)
The Woman Who Was Changed and Other Stories (1979)
Beauty Shop Series: "Revenge in a Beauty Shop" (1939) – original title "The Perfect Hairdresser"
Beauty Shop Series: "Gold Mine" (1940)
Beauty Shop Series: "Mrs. Whittaker's Secret"/"The Blonde Brunette" (1940)
Beauty Shop Series: "Procession of Song" (1940)
Beauty Shop Series: "Snake at the Picnic" (1940) – published as "Seed of Sin" (1941)
Beauty Shop Series: "Seed of Sin" (1941) – published as "Snake at the Picnic (1940)
Individual short stories
Unknown title (1902) – first published story, pen name "Novice", Shanghai Mercury
"The Real Santa Claus" (c. 1911)
"Village by the Sea" (1911)
"By the Hand of a Child" (1912)
"The Hours of Worship" (1914)
"When 'Lof' Comes" (1914)
"The Clutch of the Ancients" (1924)
"The Rainy Day" (c. 1925)
"A Chinese Woman Speaks" (1926)
"Lao Wang, the Farmer" (1926)
"The Solitary Priest" (1926)
"The Revolutionist" (1928) – later published as "Wang Lung" (1933)
"The Wandering Little God" (1928)
"Father Andrea" (1929)
"The New Road" (1930)
"Singing to her Death" (1930)
"The Barren Spring" (1931)
"The First Wife" (1931)
"The Old Chinese Nurse" (1932)
"The Quarrel" (1932)
"The Communist" (1933)
"Fathers and Mothers" (1933)
"The Frill" (1933)
"Hidden is the Golden Dragon" (1933)
"The Lesson" (1933) – later published as "No Other Gods" (1936; original title used in short story collections)
"The Old Mother" (1933)
"The Refugees" (1933)
"Repatriated" (1933)
"The Return" (1933)
"The River" (1933) – later published as "The Good River" (1939)
"The Two Women" (1933)
"The Beautiful Ladies" (1934) – later published as "Mr. Binney's Afternoon" (1935)
"Fool's Sacrifice" (1934)
"Shanghai Scene" (1934)
"Wedding and Funeral" (1934)
"Between These Two" (1935)
"The Dance" (1935)
"Enough for a Lifetime" (1935)
"Hearts Come Home" (1935)
"Heat Wave" (1935)
"His Own Country" (1935)
"The Perfect Wife" (1935)
"Vignette of Love" (1935) – later published as "Next Saturday and Forever" (1977)
"The Crusade" (1936)
"Strangers Are Kind" (1936)
"The Truce" (1936)
"What the Heart Must" (1937) – later published as "Someone to Remember" (1947)
"The Angel" (1937)
"Faithfully" (1937)
"Ko-Sen, the Sacrificed" (1937)
"Now and Forever" (1937) – serialized in Woman's Home Companion magazine (10/1936–3/1937)
"The Woman Who Was Changed" (1937) – serialized in Redbook magazine (7–9/1937)
"The Pearls of O-lan" – from The Good Earth (1938)
"Ransom" (1938)
"Tiger! Tiger!" (1938)
"Wonderful Woman" (1938) – serialized in Redbook magazine (6–8/1938)
"For a Thing Done" (1939) – originally titled "While You Are Here"
"The Old Demon" (1939) – reprinted in Great Modern Short Stories: An Anthology of Twelve Famous Stories and Novelettes, selected, and with a foreword and biographical notes by Bennett Cerf (New York: The Modern library, 1942)
"The Face of Gold" (1940, in Saturday Evening Post) – later published as "The Face of Buddha" (1941)
"Golden Flower" (1940)
"Iron" (1940) – later published as "A Man's Foes" (1940)
"The Old Signs Fail" (1940)
"Stay as You Are" (1940) – serialized in Cosmopolitan (3–7/1940)
"There Was No Peace" (1940) – later published as "Guerrilla Mother" (1941)
"Answer to Life" (novella; 1941)
"More Than a Woman" (1941) – originally titled "Deny It if You Can"
"Our Daily Bread" (1941) – originally titled "A Man's Daily Bread, 1–3", serialized in Redbook magazine (2–4/1941), longer version published as Portrait of a Marriage (1945)
"The Enemy" (1942, Harper's Magazine) – staged by the Indian drama society "Aamra Kajon" at the Bengal Theatre Festival 2019
"John-John Chinaman" (1942) – original title "John Chinaman"
"The Long Way 'Round" – serialized in Cosmopolitan (9/1942–2/1943)
"Mrs. Barclay's Christmas Present" (1942) – later published as "Gift of Laughter" (1943)
"Descent into China" (1944)
"Journey for Life" (1944) – originally titled "Spark of Life"
"The Real Thing" (1944) – serialized in Cosmopolitan (2–6/1944); originally intendeds as a serial "Harmony Hill" (1938)
"Begin to Live" (1945)
"Mother and Sons" (1945)
"A Time to Love" (1945) – later published under its original title "The Courtyards of Peace" (1969)
"Big Tooth Yang" (1946) – later published as "The Tax Collector" (1947)
"The Conqueror's Girl" (1946) – later published as "Home Girl" (1947)
"Faithfully Yours" (1947)
"Home to Heaven" (1947)
"Incident at Wang's Corner" (1947) – later published as "A Few People" (1947)
"Mr. Right" (1947)
"Mrs. Mercer and Her Self" (1947)
"The One Woman" (1947)
"Virgin Birth" (1947)
"Francesca" (Good Housekeeping magazine, 1948)
"The Ember" (1949)
"The Tryst" (1950)
"Love and the Morning Calm" – serialized in Redbook magazine (1–4/1951)
"The Man Called Dead" (1952)
"Death and the Spring" (1953)
"Moon over Manhattan" (1953)
"The Three Daughters" (1953)
"The Unwritten Rules" (1953)
"The Couple Who Lived on the Moon" (1953) – later published as "The Engagement" (1961)
"A Husband for Lili" (1953) – later published as "The Good Deed" (1969)
"The Heart's Beginning" (1954)
"The Shield of Love" (1954)
"Christmas Day in the Morning" (1955) – later published as "The Gift That Lasts a Lifetime"
"Death and the Dawn" (1956)
"Mariko" (1956)
"A Certain Star" (1957)
"Honeymoon Blues" (1957)
"China Story" (1958)
"Leading Lady" (1958) – alternately titled "Open the Door, Lady"
"The Secret" (1958)
"With a Delicate Air" (1959)
"The Bomb (Dr. Arthur Compton)" (1959)
"Heart of a Man" (1959)
"Melissa" (1960)
"The Silver Butterfly" (1960)
"The Beauty" (1961)
"Beyond Language" (1961)
"The Commander and the Commissar" (1961)
"Enchantment" (1961)
"Parable of Plain People" (1961)
"A Field of Rice" (1962)
"A Grandmother's Christmas" (1962) – later published as "This Day to Treasure" (1972)
""Never Trust the Moonlight" (1962) – later published as "The Green Sari" (1962)
"The Cockfight, 1963
"A Court of Love" (1963)
"Escape at Midnight" (1963)
"The Lighted Window" (1963)
"Night Nurse" (1963)
"The Sacred Skull" (1963)
"The Trap" (1963)
"India, My India" (1964)
"Ranjit and the Tiger" (1964)
"A Certain Wisdom" (1967, in Woman's Day magazine)
"Stranger Come Home" (1967)
"The House They Built" (1968, in Boys' Life magazine)
"The Orphan in My Home" (1968)
"Secrets of the Heart" (1968)
"All the Days of Love and Courage" 1969) – later published as "The Christmas Child" (1972)
"Dagger in the Dark" (1969)
"Duet in Asia" (1969; written 1953
"Going Home" (1969)
"Letter Home" (1969; written 1943)
"Sunrise at Juhu" (1969)
"Two in Love" (1970) – later published as "The Strawberry Vase" (1976)
"The Gifts of Joy" (1971)
"Once upon a Christmas" (1971)
"The Christmas Secret" (1972)
"Christmas Story" (1972)
"In Loving Memory" (1972) – later published as "Mrs. Stoner and the Sea" (1976)
"The New Christmas" (1972)
"The Miracle Child" (1973)
"Mrs. Barton Declines" (1973) – later published as "Mrs. Barton's Decline" and "Mrs. Barton's Resurrection" (1976)
"Darling Let Me Stay" (1975) – excerpt from "Once upon a Christmas" (1971)
"Dream Child" (1975)
"The Golden Bowl" (1975; written 1942)
"Letter from India" (1975)
"To Whom a Child is Born" (1975)
"Alive again" (1976)
"Come Home My Son" (1976)
"Here and Now" (1976; written 1941)
"Morning in the Park" (1976; written 1948)
"Search for a Star" (1976)
"To Thine Own Self" (1976)
"The Woman in the Waves" (1976; written 1953)
"The Kiss" (1977)
"The Lovers" (1977)
"Miranda" (1977)
"The Castle" (1979; written 1949)
"A Pleasant Evening" (1979; written 1948)
Christmas Miniature (New York: John Day, 1957) – in UK as Christmas Mouse (London: Methuen, 1959) – illustrated by Anna Marie Magagna
Christmas Ghost (New York: John Day, 1960) – illustrated by Anna Marie Magagna
Unpublished stories
"The Good Rich Man" (1937, unsold)
"The Sheriff" (1937, unsold)
"High and Mighty" (1938, unsold)
"Mrs. Witler's Husband" (1938, unsold)
"Mother and Daughter" (1938, unsold; alternate title "My Beloved")
"Mother without Child" (1940, unsold)
"Instead of Diamonds" (1953, unsold)
Unpublished stories, undated
"The Assignation" (submitted not sold)
"The Big Dance" (unsold)
"The Bleeding Heart" (unsold)
"The Bullfrog" (unsold)
"The Day at Dawn" (unpublished)
"The Director"
"Heart of the Jungle (submitted, unsold)
"Images" (sold but unpublished)
"Lesson in Biology" / "Useless Wife" (unsold)
"Morning in Okinawa" (unsold)
"Mrs. Jones of Jerrell Street" (unsold)
"One of Our People" (sold, unpublished)
"Summer Fruit" (unsold)
"Three Nights with Love" (submitted, unsold) – original title "More Than a Woman"
"Too Many Flowers" (unsold)
"Wang the Ancient" (unpublished)
"Wang the White Boy" (unpublished)
Stories: Date unknown
"Church Woman"
"Crucifixion"
"Dear Son"
"Escape Me Never" – alternate title of "For a Thing Done"
"The Great Soul"
"Her Father's Wife"
"Horse Face"
"Lennie"
"The Magic Dragon"
"Mrs. Jones of Jerrell Street" (unsold)
"Night of the Dance"
"One and Two"
"Pleasant Vampire"
"Rhoda and Mike"
"The Royal Family"
"The Searcher"
"Steam and Snow"
"Tinder and the Flame"
"The War Chest"
"To Work the Sleeping Land"
Children's books and stories
The Young Revolutionist (New York: John Day, 1932) – for children
Stories for Little Children (New York: John Day, 1940) – pictures by Weda Yap
"When Fun Begins" (1941)
The Chinese Children Next Door (New York: John Day, 1942)
The Water Buffalo Children (New York: John Day, 1943) – drawings by William Arthur Smith
Dragon Fish (New York: John Day, 1944) – illustrated by Esther Brock Bird
Yu Lan: Flying Boy of China (New York: John Day, 1945) – drawings by Georg T. Hartmann
The Big Wave (New York: John Day, 1948) – illustrated with prints by Hiroshige and Hokusai – for children
One Bright Day (New York: John Day, 1950) – published in the UK as One Bright Day and Other Stories for Children (1952)
The Beech Tree (New York: John Day, 1954) – illustrated by Kurt Werth – for children
"Johnny Jack and His Beginnings" (New York: John Day, 1954)
Christmas Miniature (1957) – published in the UK as The Christmas Mouse (1958)
"The Christmas Ghost" (1960)
"Welcome Child (1964)
"The Big Fight" (1965)
"The Little Fox in the Middle" (1966)
Matthew, Mark, Luke and John (New York: John Day, 1967) – set in South Korea
"The Chinese Storyteller" (1971)
"A Gift for the Children" (1973)
"Mrs Starling's Problem" (1973)
Awards
Pulitzer Prize for the Novel: The Good Earth (1932)
William Dean Howells Medal (1935)
Nobel Prize in Literature (1938)
Child Study Association of America's Children's Book Award (now Bank Street Children's Book Committee's Josette Frank Award): The Big Wave (1948)
Museums and historic houses
Several historic sites work to preserve and display artifacts from Buck's multicultural life:
The Pearl S. Buck Summer Villa, in Kuling town, Mount Lu, Jiujiang, China
Pearl S. Buck House in Nanjing University, China
The Zhenjiang Pearl S. Buck Research Association and former residence in Zhenjiang, China
Pearl S. Buck Birthplace in Hillsboro, West Virginia
Green Hills Farm in Bucks County, Pennsylvania
The Pearl S. Buck Memorial Hall, Bucheon City, South Korea
See also
Christian feminism
List of female Nobel laureates
Notes
Further reading
Harris, Theodore F. (in consultation with Pearl S. Buck). Pearl S. Buck: A Biography (John Day, 1969).
Harris, Theodore F. (in consultation with Pearl S. Buck). Pearl S. Buck: A Biography. Volume Two: Her Philosophy as Expressed in Her Letters (John Day, 1971).
Hunt, Michael H. "Pearl Buck – Popular Expert on China, 1931–1949." Modern China 3.1 (1977): 33–64.
So, Richard Jean. "Fictions of Natural Democracy: Pearl Buck, The Good Earth, and the Asian American Subject." Representations 112.1 (2010): 87–111.
Kang, Liao. Pearl S. Buck: A Cultural Bridge across the Pacific (Westport, CT, and London: Greenwood, Contributions to the Study of World Literature 77, 1997).
Leong, Karen J. The China Mystique: Pearl S. Buck, Anna May Wong, Mayling Soong, and the Transformation of American Orientalism (Berkeley: University of California Press, 2005).
Lipscomb, Elizabeth Johnston, Frances E. Webb and Peter J. Conn, eds., The Several Worlds of Pearl S. Buck: Essays Presented at a Centennial Symposium, Randolph-Macon Woman's College, March 26–28, 1992. Westport, CT: Greenwood Press, Contributions in Women's Studies, 1994.
Shaffer, Robert. "Women and international relations: Pearl S. Buck's critique of the Cold War." Journal of Women's History 11.3 (1999): 151-175.
Spurling, Hilary. Burying the Bones: Pearl Buck in China (London: Profile, 2010)
Stirling, Nora B. Pearl Buck, a Woman in Conflict (Piscataway, NJ: New Century Publishers, 1983).
Suh, Chris. "'America's Gunpowder Women': Pearl S. Buck and the Struggle for American Feminism, 1937–1941." Pacific Historical Review 88.2 (2019): 175–207.
Vriesekoop, Bettine. Het China-gevoel van Pearl S. Buck [The China Feeling of Pearl S. Buck] (Uitgeverij Brandt, 2021).
Wacker, Grant. "Pearl S. Buck and the Waning of the Missionary Impulse." Church History 72.4 (2003): 852–874.
Xi Lian. The Conversion of Missionaries: Liberalism in American Protestant Missions in China, 1907–1932. (University Park: Pennsylvania State University Press, 1997).
Mari Yoshihara. Embracing the East: White Women and American Orientalism. (New York: Oxford University Press, 2003).
External links
Digital collections
Physical collections
The Zhenjiang Pearl S. Buck Research Association, China (in Chinese & English)
University of Pennsylvania website dedicated to Pearl S. Buck
National Trust for Historic Preservation on the Pearl S. Buck House Restoration
The Pearl S. Buck Literary Manuscripts and Other Collections at the West Virginia & Regional History Collection, WVU Libraries
Spring, Kelly. "Pearl Buck". National Women's History Museum.
A House Divided Manuscript at Dartmouth College Library
Biographical information
Pearl S. Buck fuller bibliography at WorldCat
Presentation by Peter Conn on Pearl S. Buck: A Cultural Biography, March 5, 1997, C-SPAN
Other links
The Pearl S. Buck Birthplace in Pocahontas County West Virginia
Pearl S. Buck International
List of Works
Pearl Buck interviewed by Mike Wallace on The Mike Wallace Interview, February 8, 1958
FBI Records: The Vault – Pearl Buck at fbi.gov
1892 births
1973 deaths
20th-century American novelists
20th-century American women writers
Activists from West Virginia
American autobiographers
American expatriates in China
American historical novelists
American human rights activists
American women human rights activists
American Nobel laureates
American Presbyterian missionaries
Female Christian missionaries
American women non-fiction writers
Children of American missionaries in China
American people of Dutch descent
American people of German descent
Christian novelists
Christian humanists
Cornell University alumni
Members of the Society of Woman Geographers
Academic staff of Nanjing University
Nobel laureates in Literature
Novelists from Pennsylvania
Novelists from West Virginia
Writers from West Virginia
Writers from Bucks County, Pennsylvania
People from Hillsboro, West Virginia
Presbyterian Church in the United States members
Presbyterian missionaries in China
Presbyterians from West Virginia
Pulitzer Prize for the Novel winners
Randolph College alumni
American women autobiographers
American women historical novelists
Women Nobel laureates
Writers from Philadelphia
Writers from Zhenjiang
American anti-communists
Members of the American Academy of Arts and Letters | Pearl S. Buck | [
"Technology"
] | 10,432 | [
"Women Nobel laureates",
"Women in science and technology"
] |
165,851 | https://en.wikipedia.org/wiki/Industrial%20design | Industrial design is a process of design applied to physical products that are to be manufactured by mass production. It is the creative act of determining and defining a product's form and features, which takes place in advance of the manufacture or production of the product. Industrial manufacture consists of predetermined, standardized and repeated, often automated, acts of replication, while craft-based design is a process or approach in which the form of the product is determined personally by the product's creator largely concurrent with the act of its production.
All manufactured products are the result of a design process, but the nature of this process can vary. It can be conducted by an individual or a team, and such a team could include people with varied expertise (e.g. designers, engineers, business experts, etc.). It can emphasize intuitive creativity or calculated scientific decision-making, and often emphasizes a mix of both. It can be influenced by factors as varied as materials, production processes, business strategy, and prevailing social, commercial, or aesthetic attitudes. Industrial design, as an applied art, most often focuses on a combination of aesthetics and user-focused considerations, but also often provides solutions for problems of form, function, physical ergonomics, marketing, brand development, sustainability, and sales.
History
Precursors
For several millennia before the onset of industrialization, design, technical expertise, and manufacturing were often carried out by individual craftspeople, who determined the form of a product at the point of its creation, according to their own manual skill, the requirements of their clients, experience accumulated through their own experimentation, and knowledge passed on to them through training or apprenticeship.
The division of labour that underlies the practice of industrial design did have precedents in the pre-industrial era. The growth of trade in the medieval period led to the emergence of large workshops in cities such as Florence, Venice, Nuremberg, and Bruges, where groups of more specialized craftsmen made objects with common forms through the repetitive duplication of models defined by their shared training and technique. Competitive pressures in the early 16th century led to the emergence in Italy and Germany of pattern books: collections of engravings illustrating decorative forms and motifs which could be applied to a wide range of products, and whose creation took place in advance of their application. The use of drawings to specify in advance how something was to be constructed was first developed by architects and shipwrights during the Italian Renaissance.
In the 17th century, the growth of artistic patronage in centralized monarchical states such as France led to large government-operated manufacturing operations epitomized by the Gobelins Manufactory, opened in Paris in 1667 by Louis XIV. Here teams of hundreds of craftsmen, including specialist artists, decorators and engravers, produced sumptuously decorated products ranging from tapestries and furniture to metalwork and coaches, all under the creative supervision of the King's leading artist Charles Le Brun. This pattern of large-scale royal patronage was repeated in the court porcelain factories of the early 18th century, such as the Meissen porcelain workshops established in 1709 by the Grand Duke of Saxony, where patterns from a range of sources, including court goldsmiths, sculptors, and engravers, were used as models for the vessels and figurines for which it became famous. As long as reproduction remained craft-based, however, the form and artistic quality of the product remained in the hands of the individual craftsman, and tended to decline as the scale of production increased.
Birth of industrial design
The emergence of industrial design is specifically linked to the growth of industrialization and mechanization that began with the Industrial Revolution in Great Britain in the mid 18th century. The rise of industrial manufacture changed the way objects were made, urbanization changed patterns of consumption, the growth of empires broadened tastes and diversified markets, and the emergence of a wider middle class created demand for fashionable styles from a much larger and more heterogeneous population.
The first use of the term "industrial design" is often attributed to the industrial designer Joseph Claude Sinel in 1919 (although he himself denied this in interviews), but the discipline predates 1919 by at least a decade. Christopher Dresser is considered among the first independent industrial designers. Industrial design's origins lie in the industrialization of consumer products. For instance, the Deutscher Werkbund (a precursor to the Bauhaus founded in 1907 by Peter Behrens and others) was a state-sponsored effort to integrate traditional crafts and industrial mass-production techniques, to put Germany on a competitive footing with Great Britain and the United States.
The earliest published use of the term may have been in The Art-Union, 15 September 1840.
The Practical Draughtsman's Book of Industrial Design by Jacques-Eugène Armengaud was printed in 1853. The subtitle of the (translated) work explains that it offers a "complete course of mechanical, engineering, and architectural drawing." The study of these types of technical drawing, according to Armengaud, belongs to the field of industrial design. This work paved the way for a major expansion of drawing education in France, the United Kingdom, and the United States.
Robert Lepper helped to establish one of the USA's first industrial design degree programs in 1934 at Carnegie Institute of Technology.
Education
Product design and industrial design overlap in the fields of user interface design, information design, and interaction design. Various schools of industrial design specialize in one of these aspects, ranging from pure art colleges and design schools (product styling), to mixed programs of engineering and design, to related disciplines such as exhibit design and interior design, to schools that almost completely subordinated aesthetic design to concerns of usage and ergonomics, the so-called functionalist school. Except for certain functional areas of overlap between industrial design and engineering design, the former is considered an applied art while the latter is an applied science. Educational programs in the U.S. for engineering require accreditation by the Accreditation Board for Engineering and Technology (ABET) in contrast to programs for industrial design which are accredited by the National Association of Schools of Art and Design (NASAD). Of course, engineering education requires heavy training in mathematics and physical sciences, which is not typically required in industrial design education.
Institutions
Most industrial designers complete a design or related program at a vocational school or university. Relevant programs include graphic design, interior design, industrial design, architectural technology, and drafting. Diplomas and degrees in industrial design are offered at vocational schools and universities worldwide and take two to four years of study. The study results in a Bachelor of Industrial Design (B.I.D.), Bachelor of Science (B.Sc.), or Bachelor of Fine Arts (B.F.A.). Afterwards, the bachelor's programme can be extended with postgraduate degrees such as a Master of Design, Master of Fine Arts, Master of Arts, or Master of Science.
Definition
Industrial design studies function and form—and the connection between product, user, and environment. Generally, industrial design professionals work in small scale design, rather than overall design of complex systems such as buildings or ships. Industrial designers don't usually design motors, electrical circuits, or gearing that make machines move, but they may affect technical aspects through usability design and form relationships. Usually, they work with other professionals such as engineers who focus on the mechanical and other functional aspects of the product, assuring functionality and manufacturability, and with marketers to identify and fulfill customer needs and expectations.
Design, itself, is often difficult to describe to non-designers because the meaning accepted by the design community is not made of words. Instead, the definition is created as a result of acquiring a critical framework for the analysis and creation of artifacts. One of the many accepted (but intentionally unspecific) definitions of design originates from Carnegie Mellon's School of Design: "Everyone designs who devises courses of action aimed at changing existing situations into preferred ones." This applies to new artifacts, whose existing state is undefined, and previously created artifacts, whose state stands to be improved.
Industrial design can overlap significantly with engineering design, and in different countries the boundaries of the two concepts can vary, but in general engineering focuses principally on functionality or utility of products, whereas industrial design focuses principally on aesthetic and user-interface aspects of products. In many jurisdictions this distinction is effectively defined by credentials and/or licensure required to engage in the practice of engineering. "Industrial design" as such does not overlap much with the engineering sub-discipline of industrial engineering, except for the latter's sub-specialty of ergonomics.
At the 29th General Assembly in Gwangju, South Korea, in 2015, the Professional Practice Committee unveiled a renewed definition of industrial design as follows:
"Industrial Design is a strategic problem-solving process that drives innovation, builds business success and leads to a better quality of life through innovative products, systems, services and experiences."
An extended version of this definition is as follows:
"Industrial Design is a strategic problem-solving process that drives innovation, builds business success and leads to a better quality of life through innovative products, systems, services and experiences. Industrial Design bridges the gap between what is and what's possible. It is a trans-disciplinary profession that harnesses creativity to resolve problems and co-create solutions with the intent of making a product, system, service, experience or a business, better. At its heart, Industrial Design provides a more optimistic way of looking at the future by reframing problems as opportunities. It links innovation, technology, research, business and customers to provide new value and competitive advantage across economic, social and environmental spheres.
Industrial Designers place the human in the centre of the process. They acquire a deep understanding of user needs through empathy and apply a pragmatic, user centric problem solving process to design products, systems, services and experiences. They are strategic stakeholders in the innovation process and are uniquely positioned to bridge varied professional disciplines and business interests. They value the economic, social and environmental impact of their work and their contribution towards co-creating a better quality of life."
Design process
Although the process of design may be considered 'creative,' many analytical processes also take place. In fact, many industrial designers often use various design methodologies in their creative process. Some of the processes that are commonly used are user research, sketching, comparative product research, model making, prototyping and testing. The design process is iterative, involving dozens or even hundreds of ideas being considered until the final design is reached. Industrial designers often utilize 3D software, computer-aided industrial design and CAD programs to move from concept to production. They may also build a prototype or scaled down sketch models through a 3D printing process or using other materials such as paper, balsa wood, various foams, or clay for modeling. They may then use industrial CT scanning to test for interior defects and generate a CAD model. From this the manufacturing process may be modified to improve the product.
Product characteristics specified by industrial designers may include the overall form of the object, the location of details with respect to one another, colors, texture, form, and aspects concerning the use of the product. Additionally, they may specify aspects concerning the production process, choice of materials and the way the product is presented to the consumer at the point of sale. The inclusion of industrial designers in a product development process may lead to added value by improving usability, lowering production costs, and developing more appealing products.
Industrial design may also focus on technical concepts, products, and processes. In addition to aesthetics, human factors, ergonomics and anthropometrics, it can also encompass engineering, usefulness, market placement, and other concerns—such as psychology, and the emotional attachment of the user. These values and accompanying aspects that form the basis of industrial design can vary—between different schools of thought, and among practicing designers.
Industrial design rights
Industrial design rights are intellectual property rights that protect the visual design of objects that are not purely utilitarian. A design patent would also be considered under this category. An industrial design consists of the creation of a shape, configuration, or composition of pattern or color, or a combination of pattern and color in three-dimensional form containing aesthetic value. An industrial design can be a two- or three-dimensional pattern used to produce a product, industrial commodity or handicraft. Under the Hague Agreement Concerning the International Deposit of Industrial Designs, a WIPO-administered treaty, a procedure for international registration exists. An applicant can file a single international deposit with WIPO or with the national office in a country party to the treaty. The design will then be protected in as many member countries of the treaty as desired.
In 2022, about 1.1 million industrial design applications were filed worldwide. This represents a decrease of 3% on 2021, marking a first drop in filings since 2014.
Hague top applicants
The Hague System for the International Registration of Industrial Designs provides an international mechanism that secures protection of up to 100 designs in multiple countries or regions, through a single international application. International design applications are filed directly through WIPO. The domestic legal framework of each designated contracting party governs the design protection provided by the resulting international registrations. The Hague System does not require the applicant to file a national or regional design application.
Examples of industrial design
A number of industrial designers have made such a significant impact on culture and daily life that their work is documented by historians of social science. Alvar Aalto, renowned as an architect, also designed a significant number of household items, such as chairs, stools, lamps, a tea-cart, and vases. Raymond Loewy was a prolific American designer who is responsible for the Royal Dutch Shell corporate logo, the original BP logo (in use until 2000), the PRR S1 steam locomotive, the Studebaker Starlight (including the later bulletnose), as well as Schick electric razors, Electrolux refrigerators, short-wave radios, Le Creuset French ovens, and a complete line of modern furniture, among many other items.
Richard Teague, who spent most of his career with the American Motors Corporation, originated the concept of using interchangeable body panels so as to create a wide array of different vehicles using the same stampings. He was responsible for such unique automotive designs as the Pacer, Gremlin, Matador coupe, Jeep Cherokee, and the complete interior of the Eagle Premier.
Milwaukee's Brooks Stevens was best known for his Milwaukee Road Skytop Lounge car and Oscar Mayer Wienermobile designs, among others.
Viktor Schreckengost designed bicycles manufactured by Murray for both Murray and Sears, Roebuck and Company. With engineer Ray Spiller, he designed the first truck with a cab-over-engine configuration, a design in use to this day. Schreckengost also founded the Cleveland Institute of Art's school of industrial design.
Oskar Barnack was a German optical engineer, precision mechanic, industrial designer, and the father of 35mm photography. He developed the Leica, which became the hallmark for photography for 50 years, and remains a high-water mark for mechanical and optical design.
Charles and Ray Eames were most famous for their pioneering furniture designs, such as the Eames Lounge Chair Wood and Eames Lounge Chair. Other influential designers included Henry Dreyfuss, Eliot Noyes, John Vassos, and Russel Wright.
Dieter Rams is a German industrial designer closely associated with the consumer products company Braun and the Functionalist school of industrial design.
German industrial designer Luigi Colani, who designed cars for automobile manufacturers including Fiat, Alfa Romeo, Lancia, Volkswagen, and BMW, was also known to the general public for his unconventional approach to industrial design. He had expanded in numerous areas ranging from mundane household items, instruments and furniture to trucks, uniforms and entire rooms. A grand piano created by Colani, the Pegasus, is manufactured and sold by the Schimmel piano company.
Many of Apple's recent products were designed by Sir Jonathan Ive.
See also
Automotive design
Design
Designer
Design museum
Engineering design process
Engineering design
Experience design
Form follows function
Hardware interface design
Human factors and ergonomics
Industrial Designers Society of America
Interaction design
List of industrial designers
Product design
Product development
Rapid prototyping
Sensory design
Sustainable design
Transgenerational design
WikID
References
Sources
Barnwell, Maurice. Design, Creativity and Culture (Black Dog, 2011).
Barnwell, Maurice. Design Evolution: Big Bang to Big Data (Toronto, 2014).
Forty, Adrian. Objects of Desire: Design and Society Since 1750 (Thames & Hudson, May 1992).
Mayall, W. H. Industrial Design for Engineers (London: Iliffe Books, 1967).
Mayall, W. H. Machines and Perception in Industrial Design (London: Studio Vista, 1968).
Meikle, Jeffrey. Twentieth Century Limited: Industrial Design in America, 1925–1939 (Philadelphia: Temple University Press, 1979).
External links
Doodles, Drafts and Designs: Industrial Drawings from the Smithsonian (2004) Smithsonian Institution Libraries
Hague Yearly Review 2024 related to industrial design Hague applications
Product management
Design history
Design for X | Industrial design | [
"Engineering"
] | 3,498 | [
"Industrial design",
"Design engineering",
"Design for X",
"Design history",
"Design"
] |
165,907 | https://en.wikipedia.org/wiki/Pip%20%28counting%29 | Pips are small but easily countable items, such as the dots on dominoes and dice, or the symbols on a playing card that denote its suit and value.
Playing cards
In playing cards, pips are small symbols on the front side of the cards that determine the suit of the card and its rank. For example, a standard 52-card deck consists of four suits of thirteen cards each: spades, hearts, clubs, and diamonds. Each suit contains three face cards – the jack, queen, and king. The remaining ten cards are called pip cards and are numbered from one to ten. (The "one" is almost always called the "ace" and in many games is the highest card, followed by the face cards.) Each pip card carries an index in the top left-hand corner (and, because the card is also inverted upon itself, the lower right-hand corner) which tells the card-holder the value of the card. In Europe, it is more common to have indices on all four corners, which lets left-handed players fan their cards more comfortably. The center of the card contains pips representing the suit. The number of pips corresponds with the number of the card, and the arrangement of the pips is generally the same from deck to deck.
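As a concrete illustration of the structure described above, the following is a minimal Python sketch (not part of the article; the dictionary layout and names are assumptions chosen only for clarity) that builds a standard 52-card deck and records how many suit pips each card carries.

```python
# Minimal sketch of a standard 52-card deck: pip cards carry as many suit pips
# as their rank, while face cards carry pictures rather than centre pips.
SUITS = ["spades", "hearts", "clubs", "diamonds"]
PIP_RANKS = list(range(1, 11))          # ace (1) through ten
FACE_RANKS = ["jack", "queen", "king"]  # court cards, no centre pips

deck = []
for suit in SUITS:
    for rank in PIP_RANKS:
        deck.append({"suit": suit, "rank": rank, "pips": rank})
    for rank in FACE_RANKS:
        deck.append({"suit": suit, "rank": rank, "pips": 0})

assert len(deck) == 52
# Total suit pips across all pip cards: 4 suits * (1 + 2 + ... + 10) = 220
assert sum(card["pips"] for card in deck) == 4 * sum(PIP_RANKS)
```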
Pip cards are also known as numerals or numeral cards.
In point-trick games where cards often score their value in pips (or an equivalent value if they are court cards, e.g. a king may be worth 13), card points are sometimes referred to as pips.
Many French-suited packs derived from the English pattern contain a variation on the pip style for the Ace of spades, often consisting of an especially large pip or even a representative image, along with information about the deck's manufacturer, originally to display the stamp duty. This is also the case for the Ace of clubs in the Paris pattern and the Ace of diamonds in the Russian pattern. For German-suited playing cards, the deuce of hearts was used for this purpose, and for Latin-suited playing cards, the ace of coins was used.
Historically, German pips are generally different from those used in France and England; the latter date from at least the fourteenth century CE.
Dice
On dice, pips are small dots on each face of a die. These pips are typically arranged in patterns denoting the numbers one through n, where n is the number of faces. For the common six-sided die, the sum of the pips on opposing faces traditionally adds up to seven.
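The traditional pairing of opposite faces can be stated as a small piece of arithmetic. The snippet below is an illustrative sketch (not from the article; the dictionary name is an assumption) that encodes the standard western layout of a six-sided die and checks that each pair of opposite faces sums to seven.

```python
# Standard six-sided die: faces 1-6, 2-5 and 3-4 sit on opposite sides of the cube.
OPPOSITE = {1: 6, 2: 5, 3: 4, 4: 3, 5: 2, 6: 1}

for face, other in OPPOSITE.items():
    assert face + other == 7  # opposite faces traditionally sum to seven
```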
Pips are commonly colored black on white or yellow dice, and white on dice of other colors, although colored pips on white/yellow dice are not uncommon; Asian dice often have an enlarged red single pip for the "one" face, while the dice for the 1964 commercial game Kismet feature black pips for 1 and 6, red pips for 2 and 5, and green pips for 3 and 4.
Dominoes
Dominoes use pips that are similar to those on dice. Each half of a domino tile can have anywhere from no pips up to 18, depending on the set. A common double-six set has pips up to six on each half, arranged in the same manner as dice pips. The game is generally played by up to four players, individually or in partners (pairs). Domino sets having more pips on one half of the tile allow the game to be played by more players, as the sketch below illustrates.
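A short illustrative calculation (a sketch under my own assumptions — the function name and formula presentation are not from the article) shows why larger sets support more players: a "double-n" set contains one tile for every unordered pair of pip values from 0 to n, doubles included, so the tile count grows quickly with n.

```python
def tiles_in_double_n_set(n: int) -> int:
    # One tile for every unordered pair (a, b) with 0 <= a <= b <= n.
    return (n + 1) * (n + 2) // 2

assert tiles_in_double_n_set(6) == 28    # the common double-six set
assert tiles_in_double_n_set(9) == 55
assert tiles_in_double_n_set(18) == 190  # a set with pips up to 18 on each half
```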
See also
References
Playing cards
Dice
Domino terms
Numeral systems | Pip (counting) | [
"Mathematics"
] | 733 | [
"Numeral systems",
"Mathematical objects",
"Numbers"
] |
165,926 | https://en.wikipedia.org/wiki/San%20Francisco%E2%80%93Oakland%20Bay%20Bridge | The San Francisco–Oakland Bay Bridge, commonly referred to as the Bay Bridge, is a complex of bridges spanning San Francisco Bay in California. As part of Interstate 80 and the direct road between San Francisco and Oakland, it carries about 260,000 vehicles a day on its two decks. It includes one of the longest bridge spans in the United States.
The toll bridge was conceived as early as the California gold rush days, with "Emperor" Joshua Norton famously advocating for it, but construction did not begin until 1933. Designed by Charles H. Purcell and built by the American Bridge Company, it opened on Thursday, November 12, 1936, six months before the Golden Gate Bridge. It originally carried automobile traffic on its upper deck, with trucks, cars, buses, and commuter trains on the lower, but after the Key System abandoned its rail service on April 20, 1958, the lower deck was converted to all-road traffic as well. On October 12, 1963, traffic was reconfigured to one-way traffic on each deck: westbound on the upper deck and eastbound on the lower deck, with trucks and buses also allowed on the upper deck.
In 1986, the bridge was unofficially dedicated to former California governor James Rolph.
The bridge has two sections of roughly equal length; the older western section, officially known as the Willie L. Brown Jr. Bridge (after former San Francisco Mayor and California State Assembly Speaker Willie L. Brown Jr.), connects downtown San Francisco to Yerba Buena Island, and the newer east bay section connects the island to Oakland. The western section is a double suspension bridge with two decks, westbound traffic being carried on the upper deck while eastbound is carried on the lower one. The largest span of the original eastern section was a cantilever bridge.
During the 1989 Loma Prieta earthquake, a portion of the eastern section's upper deck collapsed onto the lower deck and the bridge was closed for a month. Reconstruction of the eastern section of the bridge as a causeway connected to a self-anchored suspension bridge began in 2002; the new eastern section opened September 2, 2013, at a reported cost of over $6.5 billion; the original estimate of $250 million was for a seismic retrofit of the existing span. Unlike the western section and the original eastern section of the bridge, the new eastern section is a single deck carrying all eastbound and westbound lanes. Demolition of the old east span was completed on September 8, 2018.
Description
The bridge consists of two crossings, east and west of Yerba Buena Island, a natural mid-bay outcropping inside San Francisco city limits. The western crossing between Yerba Buena and downtown San Francisco has two complete suspension spans connected at a center anchorage. Rincon Hill is the western anchorage and touch-down for the San Francisco landing of the bridge connected by three shorter truss spans. The eastern crossing, between Yerba Buena Island and Oakland, was a cantilever bridge with a double-tower span, five medium truss spans, and a 14-section truss causeway. Due to earthquake concerns, the eastern crossing was replaced by a new crossing that opened on Labor Day 2013. On Yerba Buena Island, the double-decked crossing is a concrete viaduct east of the west span's cable anchorage, the Yerba Buena Tunnel through the island's rocky central hill, another concrete viaduct, and a longer curved high-level steel truss viaduct that spans the final to the cantilever bridge.
The toll plaza on the Oakland side (westbound traffic only since 1969) has eighteen toll lanes, with all charges now made either through the FasTrak electronic toll collection system or through invoices mailed through the USPS, based on the license plate of the car per Department of Motor Vehicle records. Metering signals are about west of the toll plaza. Two full-time bus-only lanes bypass the toll booths and metering lights around the right (north) side of the toll plaza; other high occupancy vehicles can use these lanes during weekday morning and afternoon commute periods. The two far-left toll lanes are high-occupancy vehicle lanes during weekday commute periods. Radio and television traffic reports will often refer to congestion at the toll plaza, metering lights, or a parking lot in the median of the road for bridge employees; the parking lot is about long, stretching from about east of the toll plaza to about west of the metering lights.
During the morning commute hours, traffic congestion on the westbound approach from Oakland stretches back through the MacArthur Maze interchange at the east end of the bridge onto the three feeder highways, Interstate 580, Interstate 880, and I-80 toward Richmond. Since the number of lanes on the eastbound approach from San Francisco is structurally restricted, eastbound backups are also frequent during evening commute hours. The eastbound bottleneck is not the bridge itself, but the approach, which has just three lanes in each direction, in contrast to the bridge's five.
The western section of the Bay Bridge is currently restricted to motorized freeway traffic. Pedestrians, bicycles, and other non-freeway vehicles are not allowed to cross this section. A project to add bicycle/pedestrian lanes to the western section has been proposed but is not finalized. A Caltrans bicycle shuttle operates between Oakland and San Francisco during peak commute hours for $1.00 each way.
Freeway ramps next to the tunnel provide access to Yerba Buena Island and Treasure Island. Because the toll plaza is on the Oakland side, the western span is a de facto non-tolled bridge; traffic between the island and the main part of San Francisco can freely cross back and forth. Those who only travel from Oakland to Yerba Buena Island, and not the entire length to the main part of San Francisco, still must pay the full toll.
Early history
Developed at the entrance to the bay, San Francisco was well placed to prosper during the California Gold Rush. Almost all goods not produced locally arrived by ship, as did numerous travelers and erstwhile miners. But after the first transcontinental railroad was completed in May 1869, San Francisco was on the wrong side of the Bay, separated from the new rail link.
Many San Franciscans feared that the city would lose its position as the regional center of trade. Businessmen had considered the concept of a bridge spanning the San Francisco Bay since the Gold Rush days. During the 1870s, several newspaper articles explored the idea. In early 1872, a "Bay Bridge Committee" was hard at work on plans to construct a railroad bridge. The April 1872 issue of the San Francisco Real Estate Circular reported on this committee:
The self-proclaimed Emperor Norton decreed three times in 1872 that a suspension bridge be constructed to connect Oakland with San Francisco. In the third of these decrees, in September 1872, Norton, frustrated that nothing had happened, proclaimed:
Unlike most of Emperor Norton's eccentric ideas, his decree to build a bridge had a widespread public and political appeal. Yet the task was too much of an engineering and economic challenge, since the bay was too wide and too deep there. In 1921, more than forty years after Norton's death, an underground tube was considered, but it became clear that one would be inadequate for vehicular traffic. Support for a trans-bay crossing increased in the 1920s based on the popularity and availability of automobiles.
Planning
The California State Legislature and governor enacted a law, effective in 1929, to establish the California Toll Bridge Authority (Stats. 1929, Chap 763) and to authorize it and the State Department of Public Works to build a bridge connecting San Francisco and Alameda County (Stats. 1929, Chap 762).
The state appointed a commission to evaluate the idea and various designs for a bridge across the Bay, the Hoover-Young Commission. Its conclusions were made public in 1930.
In January 1931, Charles H. Purcell, the State Highway Engineer of California, who had also served as the secretary of the Hoover-Young Commission, assumed the position of Chief Engineer for the Bay Bridge. Glenn B. Woodruff served as design engineer for the project. He explained in a 1936 article that several elements of the bridge required not only new designs, but also new theories of design.
To make the bridge feasible, a route was chosen via Yerba Buena Island, which would reduce both the material and the labor needed. Since Yerba Buena Island was a U.S. Navy base at the time, the state had to gain approval from Congress for this purpose as it regulates and controls all federal lands and the armed services. After a great deal of lobbying, California received Congressional approval to use the island on February 20, 1931, subject to final approvals by the Departments of War, Navy, and Commerce. The state applied for permits from the 3 federal departments as required. The permits were granted in January 1932, and formally presented in a ceremony on Yerba Buena Island on February 24, 1932.
On May 25, 1931, Governor James Rolph Jr. signed into law two acts: one providing for the financing of state bridges by revenue bonds, and another creating the San Francisco–Oakland Bay Bridge Division of the State Department of Public Works. On September 15, 1931, this new division opened its offices at 500 Sansome Street in San Francisco.
During 1931, numerous aerial photographs were taken of the chosen route for the bridge and its approaches.
That year, engineers had not determined the final design concept for the western span between San Francisco and Yerba Buena Island, although the idea of a double-span suspension bridge was already favored.
In April 1932, the preliminary final plan and design of the bridge was presented by Chief Engineer Charles Purcell to Col. Walter E. Garrison, Director of the State Department of Public Works, and to Ralph Modjeski, head of the Board of Engineering Consultants. Both agencies approved and preparation of the final design proceeded. In 1932, Joseph R. Knowland, a former U.S. Congressman from California, traveled to Washington to help persuade President Herbert Hoover and the Reconstruction Finance Corporation to advance $62 million to build the bridge.
Construction
Before work began, 12 massive underwater telephone cables were moved clear of the proposed bridge route by crews of the Pacific Telephone and Telegraph Co. during the summer of 1931.
Construction began on July 9, 1933 after a groundbreaking ceremony attended by former president Herbert Hoover, dignitaries, and local beauty queens.
The western section of the bridge between San Francisco and Yerba Buena Island presented an enormous engineering challenge. The bay was up to deep in places and the soil required new foundation-laying techniques. A single main suspension span some in length was considered but rejected, as it would have required too much fill and reduced wharfage space at San Francisco, had less vertical clearance for shipping, and cost more than the design ultimately adopted. The solution was to construct a massive concrete anchorage halfway between San Francisco and the island, and to build a main suspension span on each side of this central anchorage.
East of Yerba Buena Island, the bay to Oakland was spanned by a combination of a double cantilever, five long-span through-trusses, and a truss causeway, forming the longest bridge of its kind at the time. The cantilever section was the longest in the nation and the third-longest anywhere.
Much of the original eastern section was founded upon treated wood pilings. Because of the very deep mud on the bay bottom, it was not practical to reach bedrock, although the lower levels of the mud are quite firm. Long wooden pilings were crafted from entire old-growth Douglas fir trees, which were driven through the soft mud to the firmer bottom layers. The construction project had casualties: twenty-four men would die while constructing the bridge.
Yerba Buena Tunnel
California Department of Transportation engineer C.H. Purcell served as chief engineer for the Bay Bridge, including the construction of the Yerba Buena Tunnel. Before starting excavation, the ground through which the western half of the tunnel would be bored was stabilized by injecting cement grout under pressure through 25 holes bored into the loose rock over the crown of the tunnel.
After excavating the western and eastern open portals, three drifts were bored from west to east along the path of the tunnel: one at the crown and the other two at the lower corners. The first drift broke through in July 1934, approximately one year after the start of construction. A ceremonial party led by Governor Merriam celebrated the completion of the first drift on July 24 by walking through it, followed by a short speech. The space between the three drifts was then excavated, resulting in a single arch-shaped bore (in cross-section), and the tunnel roof was constructed using steel I-beam ribs spaced apart to support the rock, which were then embedded in concrete up to thick at the crown. No cave-ins occurred during the excavation of the tunnel.
After the roof was completed, the remaining core of rock between the tunnel roof and lower deck was excavated using a power shovel. By May 1935, work on removing the core was progressing and 40 steel ribs had been placed; concrete embedment was just starting. Removal of the core was completed on November 18, 1935. Once the excavation was complete, the upper deck was placed and the interior ceiling above the upper deck was lined with tiles. The last concrete poured during the construction of the Bay Bridge was part of the upper deck lining in late summer 1936. This included the emplacement of regularly spaced refuge bays ("deadman holes") along the south wall of the lower deck tunnel, escape alcoves common in all railway tunnels into which track maintenance workers could duck if a train came along. These remain and are visible to eastbound motorists today.
The completed tunnel bore is wide and high overall, and the dimensions of the tunnel interior are wide and high. In 1936, it was hailed as the world's largest-bore tunnel. The cross-sectional area of the upper half is , and the lower half is .
Reminders of the long-gone bridge railway survive along the south side of the lower Yerba Buena Tunnel. These are the regularly spaced refuge bays ("deadman holes"), escape alcoves common in all railway tunnels, along the wall, into which track maintenance workers could safely retreat if a train came along. (The north side, which always carried only motor traffic, lacks these holes.)
The tunnel is wide, high, and long. It is the largest diameter transportation bore tunnel in the world. The large amount of material that was excavated in boring the tunnel was used for a portion of the landfill over the shoals lying adjacent to Yerba Buena Island to its north, a project which created the artificial Treasure Island. The contract to build the Yerba Buena Cable Anchorage, Tunnel & Viaduct segment was opened for bids on March 28, 1933, and awarded to the low bidder, Clinton Construction Company of California, for $1,821,129.50 (equivalent to $ in ). Yerba Buena Island was the main site of the official groundbreaking for the Bay Bridge on July 9, 1933, when President Franklin D. Roosevelt remotely set off a dynamite blast on the eastern side of the island at 12:58 p.m. local time. Former President Herbert Hoover and Governor James Rolph were onsite; the two men were the first to turn over the earth with ceremonial golden spades. Other ceremonies took place simultaneously in San Francisco (on Rincon Hill) and Oakland Harbor.
The Yerba Buena Tunnel opened, along with the rest of the Bay Bridge, on November 12, 1936. The tunnel lacks an official name.
Opening day
The bridge opened on November 12, 1936, at 12:30 p.m. In attendance were the former US president Herbert Hoover, Senator William G. McAdoo, and the Governor of California, Frank Merriam. Governor Merriam opened the bridge by cutting gold chains across it with an acetylene cutting torch. The San Francisco Chronicle report of November 13, 1936, read:
The total cost was US$77 million. Before opening, the bridge was blessed by Cardinal Secretary of State Eugenio Cardinal Pacelli, later Pope Pius XII. Because the western crossing was in effect two bridges strung together, its spans ranked as the second- and third-longest suspension spans in the world; only the George Washington Bridge had a longer span between towers.
As part of the celebration, a United States commemorative coin was produced by the San Francisco Mint. The coin, a half dollar, portrays California's symbol, the grizzly bear, on its obverse, while the reverse presents a picture of the bridge spanning the bay. A total of 71,369 coins were sold, some from the bridge's tollbooths.
Post-opening history
1930s–1960s
The Bridge Railway
Construction of the Bridge Railway began on November 29, 1937, with the laying of the first ties. The first train was run across the Bay Bridge on September 23, 1938, a test run utilizing a Key System train consisting of two articulated units with California Governor Frank Merriam at the controls. On January 14, 1939, the San Francisco Transbay Terminal was dedicated. The following morning, January 15, 1939, the electric interurban trains started in revenue service, running along the south side of the lower deck of the bridge. The terminal originally was supposed to open at the same time as the Bay Bridge, but had been delayed.
Trains over the Bridge Railway were operated by the Sacramento Northern Railroad (Western Pacific), the Interurban Electric Railway (Southern Pacific) and the Key System. Freight trains never used the bridge. The tracks left the lower deck in San Francisco just southwest of the end of 1st St. They then went along an elevated viaduct above city streets, looping around and into the terminal on its east end. Departing trains exited on the loop back onto the bridge. The loop continued to be used by buses until the terminal's closure in 2010. The tracks left the lower deck in Oakland. The Interurban Electric Railway tracks ran along Engineer Road and over the Southern Pacific yard on trestles (some of it is still standing and visible from nearby roadways) onto the streets and dedicated right-of-ways in Berkeley, Albany, Oakland and Alameda. The Sacramento Northern and Key System tracks went under the SP tracks through a tunnel (which still exists and is in use as an access to the EBMUD treatment plant) and onto 40th St. Due to falling ridership, Sacramento Northern and IER service ended in 1941.
On September 13, 1942, a stop was opened at Yerba Buena Island to serve expanded wartime needs on adjacent Treasure Island.
Despite the vital role the railroad had played, the last train went over the bridge in April 1958. The tracks were removed and replaced with pavement on the Transbay Terminal ramps and the Bay Bridge. The Key System operated buses over the bridge until 1960, when its successor, AC Transit, took over operations. AC Transit still provides transbay service today, running to a new terminal, the Salesforce Transit Center, located in the same vicinity in San Francisco.
Emperor Norton plaque and relocation
In 1872, the San Francisco entrepreneur and eccentric Emperor Norton issued three proclamations calling for the design and construction of a suspension bridge between San Francisco and Oakland via Yerba Buena Island (formerly Goat Island). A 1939 plaque honoring Emperor Norton for the original idea for the Bay Bridge was dedicated by the fraternal society E Clampus Vitus and was installed at The Cliff House in February 1955. In November 1986, in connection with the bridge's 50th anniversary, the plaque was moved to the Transbay Terminal, the public transit and Greyhound bus depot at the west end of the bridge in downtown San Francisco. When the terminal was closed in 2010, the plaque was placed in storage.
1960s–2010s
Roadway retrofit
Until the 1960s, the upper deck carried three lanes of traffic in each direction and was restricted to automobiles only. The lower deck carried three lanes of truck and bus traffic, with autos allowed, on the north side of the bridge. In the 1950s, traffic lights were added to set the direction of travel in the middle lane, but there was still no divider. Two interurban railroad tracks on the south half of the lower deck carried the electric commuter trains. In 1958, the tracks were replaced with pavement, but the reconfiguration to the eventual traffic pattern did not take place until 1963.
The Federal highway on the bridge was originally a concurrency of U.S. Highway 40 and U.S. Highway 50. The bridge was re-designated as Interstate 80 in 1964, and the western ends of U.S. 40 and U.S. 50 are now in Silver Summit, Utah, and West Sacramento, California, respectively.
Reconstruction of approaches
The original western approach to (and exit from) the upper deck of the bridge was a long ramp to Fifth Street, branching to Harrison St for westward traffic off the bridge and Bryant St for eastward traffic entering. There was also an on-ramp to the upper deck on Rincon Hill from Fremont Street (which later became an off-ramp) and an off-ramp to First Street (later extended over First St to Fremont St). The lower deck ended at Essex and Harrison St; just southwest of there, the tracks of the bridge railway left the lower deck and curved northward into the elevated loop through the Transbay Terminal that was paved for buses after rail service ended.
The eastern approach to the bridge included a causeway landing for the "incline" section, and the construction of three feeder highways, interlinked by an extensive interchange, which in later years became known as "The MacArthur Maze". A massive landfill was emplaced, extending along the north edge of the existing Key System rail mole to the existing bayshore, and continuing northward along the shore to the foot of Ashby Avenue in Berkeley. The fill was continued northward to the foot of University Avenue as a causeway which enclosed an artificial lagoon, subsequently developed by the WPA as "Aquatic Park". The three feeder highways were U.S. Highway 40 (Eastshore Highway) which led north through Berkeley, U.S. Highway 50 (38th Street, later MacArthur Blvd.) which led through Oakland, and State Route 17 which ran parallel to U.S. 50, along the Oakland Estuary and through the industrial and port sections of the city.
The current approaches were constructed in the 1960s, as the original ones were not up to interstate highway standards and were designed mainly for local use.
Yerba Buena Tunnel Reconstruction
As originally completed, the upper deck was reserved for automobile traffic, and carried six lanes, each wide. The lower deck was further divided into three lanes of traffic for heavy trucks (each wide), and the two railroad tracks on the south side ( wide for both tracks). The initial design in 1932 called for the two rail tracks to flank a central truck deck on the lower level. After Key System trains stopped running over the bridge in 1958, bids were opened on October 11, 1960, to rebuild the tunnel. The rebuild consisted of multiple stages of work:
Remove Key System rails, lower rail deck and repave
Lower the truck traffic half of the lower deck and repave
Remove center columns supporting upper deck
Lower the upper deck by placing precast concrete units
After the reconstruction, the tunnel would handle only road traffic. The upper deck was lowered to accommodate heavy truck traffic, as each deck would now carry five lanes of unidirectional traffic. The upper deck was dedicated to westbound traffic, and the lower deck to eastbound traffic. The impact to traffic during reconstruction of the tunnel was minimized mainly by working outside normal commuting hours and through the use of a portable steel bridge designed to fit between the curbs of the existing upper deck. This temporary bridge spanned the gap between the new upper deck and the old upper deck, and the elevation change it introduced forced drivers to slow, resulting in traffic jams. The first accident caused by "The Hump", the nickname the temporary bridge acquired from the prominent warning signs advertising its presence, occurred just twelve minutes after it was first deployed on November 25, 1961.
The new precast upper deck units were each long, and were installed in two halves. One side of each half rested on a temporary falsework erected in the middle of the lower deck, and the other side rested on the shoulder of the tunnel wall previously used to support the old upper deck. After the two halves were fastened together, a steel form was used to close the gap between halves, and concrete was poured in the gap. The upper deck rests on shoulders built into the tunnel wall, padded by Masonite.
The planned completion date for tunnel reconstruction was July 1962, but "The Hump" was not dismantled until October 27, 1962. The San Francisco Chronicle marked the occasion by quipping "[The Hump] produced more jams than Grandma ever made." After reconstruction, both the upper and lower decks featured of vertical clearance. Upper deck clearance is restricted by the tunnel portal, and lower deck clearance is restricted by the upper deck.
Rail removal
Automobile traffic increased dramatically in the decades after the bridge's opening. This, among other factors, contributed to the Key System's decline, and by the 1960s the rails on the bridge, which no longer carried any trains, had become obsolete and a detriment to traffic. Work began on removing the tracks in October 1963. After the work was completed, the Bay Bridge was reconfigured with five lanes of westbound traffic on the upper deck and five lanes of eastbound traffic on the lower deck. The Key System had originally planned to end train operations in 1948, when it replaced its streetcars with buses, but the state did not approve. The ban on trucks using the upper deck was lifted, allowing them on the top deck for the first time; to handle the increased loads, the upper deck was retrofitted, with understringers added and prestressing applied to the bottom of the floor beams. This retrofit is still in place today and is visible to eastbound traffic on the western span.
Since then, there have been attempts to restore rail service to the bridge, but none have been successful. A study released in 2000 estimated the cost of restoring rail service across the bridge at up to $8 billion.
1968 aircraft accident
On February 11, 1968, a U.S. Navy training aircraft crashed into the cantilever span of the bridge, killing both reserve officers aboard. The T2V SeaStar, based at NAS Los Alamitos in southern California, was on a routine weekend mission and had just taken off in the fog from nearby NAS Alameda. The plane struck the bridge above the upper deck roadway and then sank in the bay north of the bridge. There were no injuries among the motorists on the bridge. One of the truss sections of the bridge was replaced due to damage from the impact.
1986 Cable lighting
The series of lights adorning the suspension cables of the western spans was added in 1986 as part of the bridge's 50th-anniversary celebration.
James B. Rolph Jr. designation
The bridge was unofficially "dedicated" to James B. "Sunny Jim" Rolph, Jr., but this was not widely recognized until the bridge's 50th-anniversary celebrations in 1986. The official name of the bridge for all functional purposes has always been the "San Francisco–Oakland Bay Bridge", and, by most local people, it is referred to simply as "the Bay Bridge". Rolph, a Mayor of San Francisco from 1912 to 1931, was the Governor of California at the time construction of the bridge began. He died in office on June 2, 1934, two years before the bridge opened, leaving the bridge to be named for him out of respect.
1989 Loma Prieta Earthquake and emergency repairs
On the evening of October 17, 1989, during the Loma Prieta earthquake, which measured 6.9 on the moment magnitude scale, a section of the upper deck of the eastern truss portion of the bridge at Pier E9 collapsed onto the deck below, indirectly causing one death. The bridge was closed for just over a month while construction crews repaired the section, and it reopened to traffic on November 18, 1989.
2001 terrorism threat
On November 2, 2001, in the aftermath of the September 11 attacks, Governor Gray Davis announced a threat of a rush hour attack against a West Coast suspension bridge (a group which includes the Bay Bridge and the Golden Gate Bridge) some time between November 2 and 7, resulting in an increase of openly armed law enforcement patrols.
A small fraction of drivers shifted to ferries and BART. It was later revealed that crews had secretly been working under armed guard for several weeks to harden the suspension cable attachment points, which were vulnerable to cutting with common weapons and tools. An anchor room was filled with concrete, doors welded shut, and a razor wire fence added. A blast wall was also added to defend against a potential truck bomb. In the end, no attack occurred.
Emperor Norton naming campaign
In November 2004, after a campaign by San Francisco Chronicle cartoonist Phil Frank, then-San Francisco District 3 Supervisor Aaron Peskin introduced a resolution to the San Francisco Board of Supervisors calling for the entire two-bridge system, from San Francisco to Oakland, to be named for Emperor Norton.
On December 14, 2004, the Board approved a modified version of this resolution, calling for only "new additions"—i.e., the new eastern crossing—to be named "The Emperor Norton Bridge". Neither the City of Oakland nor Alameda County passed any similar resolution, so the effort went no further.
Western span retrofit
The western section has undergone extensive seismic retrofitting. During the retrofit, much of the structural steel supporting the bridge deck was replaced while the bridge remained open to traffic. Engineers accomplished this by using methods similar to those employed on the Chicago Skyway.
The entire bridge was fabricated using hot steel rivets, which are impossible to heat treat and so remain relatively soft. Analysis showed that these could fail by shearing under extreme stress. Therefore, at most locations, rivets were replaced with high-strength bolts. Most bolts had domed heads placed facing traffic so they looked similar to the rivets that were removed. This work had to be performed with great care, as the steel of the structure had for many years been painted with lead paint, which had to be carefully removed and contained by workers wearing extensive protective gear to prevent lead exposure.
Most of the beams were originally constructed of two plate beams joined with lattices of flat strip or angle stock, depending upon structural requirements. These have all been reconstructed by replacing the riveted lattice elements with bolted steel plate, converting the lattice beams into box beams. This replacement included adding face plates to the large diagonal beams joining the faces of the main towers, which now have an improved appearance when viewed from certain angles.
Diagonal box beams have been added to each bay of the upper and lower decks of the western spans. These add stiffness to reduce side-to-side motion during an earthquake and reduce the probability of damage to the decking surfaces.
Analysis showed that some massive concrete supports could burst and crumble under likely stresses. In particular, the western supports were extensively modified. First, the location of the existing reinforcing bar is determined using magnetic techniques. In the areas between bars, holes are drilled, and into each hole an L-shaped bar is inserted and retained with a high-strength epoxy adhesive, leaving a protruding end. The entire surface of the structure is thus covered with closely spaced protrusions. A network of horizontal and vertical reinforcing bars is then attached to these protrusions. Mold surface plates are then positioned to retain high-strength concrete, which is pumped into the void. After removal of the formwork, the surface appears similar to the original concrete. This technique has been applied elsewhere throughout California to improve freeway overpass abutments and some overpass central supports that have unconventional shapes. (Other techniques, such as jacket and grout, are applied to simple vertical posts; see the seismic retrofit article.)
The western approaches have also been retrofitted in part, but mostly these have been replaced with new construction of reinforced concrete.
2007 Cosco Busan oil spill
In 2007, a container ship then named the Cosco Busan, and subsequently renamed the Hanjin Venezia, collided with the Delta Tower fender, resulting in the Cosco Busan oil spill.
October 2009 eyebar crack, repair failure and bridge closure
During the 2009 Labor Day weekend closure for construction of a portion of the replacement span, a major crack was found in an eyebar, significant enough to warrant keeping the bridge closed. Working in parallel with the retrofit, the California Department of Transportation (Caltrans) and its contractors and subcontractors were able to design, engineer, fabricate, and install the pieces required to repair the bridge, delaying its planned opening by only hours. The repair was not inspected by the Federal Highway Administration, which relied on state inspection reports to ensure safety guidelines were met.
On October 27, 2009, during the evening commute, the steel crossbeam and two steel tie rods repaired over Labor Day weekend snapped off the Bay Bridge's eastern section and fell to the upper deck. The failure may have been due to metal-on-metal vibration from bridge traffic and strong wind gusts, which caused one of the rods to break off and sent one of the metal sections crashing down. Three vehicles were either struck by or hit the fallen debris, though there were no injuries. On November 1, Caltrans announced that the bridge would probably stay closed at least through the morning commute of Monday, November 2, after repairs performed during the weekend failed a stress test on Sunday. BART and the Golden Gate Ferry systems added supplemental service to accommodate the increased passenger load during the bridge closure. The bridge reopened to traffic on November 2, 2009.
The pieces that broke off on October 27 were a saddle, crossbars, and two tension rods.
2010s–present
Willie L. Brown, Jr., Bridge naming resolution
In June 2013, nine state assemblymen, joined by two state senators, introduced Assembly Concurrent Resolution No. 65 (ACR 65) to name the western crossing of the bridge for former California Assembly Speaker and former San Francisco Mayor Willie Brown. Six weeks later, a grassroots petition was launched seeking to name the entire two-bridge system for Emperor Norton. In September 2013, the petition's author launched a nonprofit, The Emperor's Bridge Campaign — now known as The Emperor Norton Trust — that advocates for adding "Emperor Norton Bridge" as an honorary name (rather than "renaming" the bridge) and that undertakes other efforts to advance Norton's legacy. The state legislative resolution naming the western section of the Bay Bridge the "Willie L. Brown, Jr., Bridge" passed the Assembly in August 2013 and the Senate in September 2013. A ceremony was held on February 11, 2014, marking the resolution and the installation of signs on either end of the section.
Eastern span replacement
For various reasons, the eastern section would have been too expensive to retrofit compared to replacing it, so the decision was made to replace it.
The replacement section underwent a series of design changes, both progressive and regressive, with increasing cost estimates and contractor bids. The final design included a single-towered self-anchored suspension span starting at Yerba Buena island, leading to a long inclined viaduct to the Oakland touchdown.
Separated and protected bicycle lanes are a visually prominent feature on the south side of the new eastern section. The bikeway and pedestrian path across the eastern span opened in October 2016 and carries recreational and commuter cyclists between Oakland and Yerba Buena Island. The original eastern cantilever span had firefighting dry standpipes installed. No firefighting dry or wet standpipes were designed into the eastern replacement span, although wet standpipes do exist on the original western section, visible on the north side of both the upper and lower decks.
The original eastern section closed permanently to traffic on August 28, 2013, and the replacement span opened for traffic five days later. The original eastern section was dismantled between January 2014 and November 2017.
2013 public "light sculpture" installation
On March 5, 2013, a public art installation called "The Bay Lights" was activated on the western span's vertical cables. The installation was designed by artist Leo Villareal and consists of 25,000 LED lights originally scheduled to be on nightly display until March 2015. However, on December 17, 2014, the non-profit Illuminate The Arts announced that it had raised the $4 million needed to make the lights permanent; the display was temporarily turned off starting in March 2015 in order to perform maintenance and install sturdier bulbs and then re-lit on January 30, 2016.
In order to reduce driver distractions, the privately funded display is not visible to users of the bridge, only to distant observers. The lighting effort was intended to form part of a larger project to "light the bay". Villareal used various algorithms to generate patterns such as rainfall, reflections on water, bird flight, and expanding rings. The patterns and transitions were sequenced, and their durations determined, by a computerized random number generator to make each viewing experience unique. Owing to the efficiency of the LED system employed, the estimated operating cost was only US$15.00 per night.
The lights were switched off permanently at 8 pm on March 5, 2023, the 10th anniversary of the artwork, because of their deteriorating condition and the increasing cost of maintaining them properly. There were plans to raise additional funds and install a new set of lights later that year.
Alexander Zuckermann Bike Path
The pedestrian and bicycle route on the eastern section opened on September 3, 2013, and is named after Alexander Zuckermann, founding chair of the East Bay Bicycle Coalition. This forms a transbay route for the San Francisco Bay Trail. Until October 2016, the path did not connect to Yerba Buena and Treasure Island sidewalks, due to the need to demolish more of the old eastern section before final construction. On May 2, 2017, public access was extended to seven days a week, 6 a.m. to 9 p.m., with occasional closure days for continued demolition of the old bridge foundations. This work was completed on November 11, 2017.
Yerba Buena Tunnel closure and repair
On January 30, 2016, a chunk of concrete the size of an automobile tire fell from the tunnel wall into the slow lane of eastbound traffic on the lower deck of the Yerba Buena Tunnel, causing a minor accident. The concrete fell from where the upper deck is connected to the tunnel wall. Based on an examination of photographs, a professor from Georgia Tech postulated that water infiltration into the concrete wall had caused the reinforcing steel to corrode and expand, forcing a chunk of the tunnel wall out. A subsequent California Department of Transportation (Caltrans) investigation identified 12 spots on both sides of the tunnel wall in the lower deck space that showed signs of corrosion-induced damage, but found no immediate risk of further spalling. The apparent cause was rainwater leaking from upper deck drains. Caltrans engineers speculated that the Masonite pads had swelled due to rainwater infiltration, cracking the tunnel walls and allowing moisture to reach the reinforcing steel. Repairs to the degraded concrete started in February 2017. Drains and catch basins were replaced to reduce the likelihood of clogging, and fiberglass-reinforced mortar was used to patch removed concrete. The repairs, which required some daytime lane closures, were expected to last until June 2017.
2020 bus lane proposal
In January 2020, the AC Transit and BART boards of directors supported the establishment of dedicated bus lanes on the bridge. In February 2020, Rob Bonta introduced state legislation to begin planning bus lanes on the bridge.
Opening of the Judge John Sutter Regional Shoreline
On October 21, 2020, the Judge John Sutter Regional Shoreline park opened to the public. Located at the foot of the bridge, the opening of the park has led to easier access to the bike and pedestrian path due to improved parking and pedestrian access.
2016–2023 exit reconstructions
In the 1960s directional reconfiguration, three off-ramps were added to Yerba Buena Island and Treasure Island: a single left-hand exit in the westbound direction at the east end of the tunnel, a left-hand exit in the eastbound direction at the west end of the tunnel (originally signed as just "Treasure Island"), and a right-hand exit in the eastbound direction at the east end of the tunnel (originally signed as just "Yerba Buena Island"). The eastbound left exit in particular presented an unusual hazard: drivers had to slow within the normal traffic flow and move into a very short off-ramp that ended in a short-radius left turn; accordingly, a 15 MPH advisory was posted there. The turn had been further narrowed from its original design by the installation of crash pads on the island side. The eastbound and westbound on-ramps were on the usual right-hand side, but they did not have dedicated merge lanes, forcing drivers to await gaps in traffic and then accelerate from a stop sign to traffic speeds in a short distance. In 2016, a new on-ramp and off-ramp at the east end of the tunnel were opened in the westbound direction on the right-hand side of the roadway, replacing the left-hand off-ramp in that direction. Meanwhile, the eastbound right-hand off-ramp and on-ramp at the east end of the tunnel were demolished during the reconstruction of the eastern span of the bridge. A new on-ramp on this side was built with a dedicated merge lane, but the off-ramp's replacement was not completed until early May 2023, well after the bridge's bike path from the Oakland side to the island was fully completed. The eastbound left-hand off-ramp and westbound on-ramp at the west end of the tunnel were then scheduled to close as early as late May 2023 while the western span undergoes a seismic retrofit.
Financing and tolls
Current toll rates
Tolls are collected only from westbound traffic at the toll plaza on the Oakland side of the bridge. Those traveling only between Yerba Buena Island and the main part of San Francisco are not tolled. All-electronic tolling has been in effect since 2020, and drivers may pay either with the FasTrak electronic toll collection device or through the license plate tolling program. It is not yet a true open-road tolling system: until the remaining unused toll booths are removed, drivers must still slow substantially from freeway speeds while passing through. The toll rate for passenger cars is $8. During peak traffic hours on weekdays, between 5:00 am and 10:00 am and between 3:00 pm and 7:00 pm, carpool vehicles carrying three or more people, clean air vehicles, and motorcycles may pay a discounted toll of $4 if they have FasTrak and use the designated carpool lane. Drivers without FasTrak or a license plate account must open and pay via a "short term" account within 48 hours after crossing the bridge, or they will be sent an invoice for the unpaid toll. No additional toll violation penalty is assessed if the invoice is paid within 21 days.
Historical toll rates
When the Bay Bridge opened in 1936, the toll was 65 cents, collected in each direction by men in booths fronting each lane of traffic. Within months, the toll was lowered to 50 cents in order to compete with the ferry system, and finally to 25 cents, since this was shown to be sufficient to pay off the original revenue bonds on schedule. In 1951 there were eighty collectors working various shifts.
On Monday, September 1, 1969, (Labor Day) a change of policy resulted in the toll being collected thereafter only from westbound traffic, at twice the previous rate; eastbound vehicles were toll-exempt.
Tolls were subsequently raised to finance improvements to the bridge approaches, required to connect with new freeways, and to subsidize public transit in order to reduce the traffic over the bridge. The toll was increased by a quarter dollar to 75 cents in 1978 (), where it remained for a decade.
Caltrans, the state highway transportation agency, maintains seven of the eight San Francisco Bay Area bridges. (The Golden Gate Bridge is owned and maintained by the Golden Gate Bridge, Highway and Transportation District.)
The basic toll (for automobiles) on the seven state-owned bridges, including the San Francisco–Oakland Bay Bridge, was standardized to $1 by Regional Measure 1, approved by Bay Area voters in 1988 (). A $1 seismic retrofit surcharge was added in 1998 by the state legislature, increasing the toll to $2 (), originally for eight years, but since then extended to December 2037 (AB1171, October 2001). On March 2, 2004, voters approved Regional Measure 2 to fund various transportation improvement projects, raising the toll by another dollar to $3 (). An additional dollar was added to the toll starting January 1, 2007, to cover cost overruns on the eastern span replacement of the Bay Bridge, increasing the toll to $4 ().
The Metropolitan Transportation Commission, a regional transportation agency, in its capacity as the Bay Area Toll Authority, administers RM1 and RM2 funds, a significant portion of which are allocated to public transit capital improvements and operating subsidies in the transportation corridors served by the bridges. Caltrans administers the "second dollar" seismic surcharge, and receives some of the MTC-administered funds to perform other maintenance work on the bridges. The Bay Area Toll Authority is made up of appointed officials put in place by various city and county governments, and is not subject to direct voter oversight.
Due to further funding shortages for seismic retrofit projects, the Bay Area Toll Authority again raised tolls on all seven of the state-owned bridges (this excludes the Golden Gate Bridge) in July 2010. The toll rate for autos on the other Bay Area bridges was increased to $5, but on the Bay Bridge a congestion pricing tolling scheme was implemented. This variable pricing system was not truly congestion priced because toll rates came from a preset schedule and were not based on actual congestion. A $6 toll was charged from 5 a.m. to 10 a.m. and 3 p.m. to 7 p.m., Monday through Friday. During weekends, cars paid the standard $5 toll like on the other bridges. Carpools, which had been exempt before the change, began to pay $2.50, and the carpool toll discount became available only to drivers with FasTrak electronic toll devices. The toll remained at the previous $4 at all other times on weekdays. The Bay Area Toll Authority reported that by October 2010 fewer users were driving during the peak hours and more vehicles were crossing the Bay Bridge before and after the 5–10 a.m. period in which the congestion toll is in effect. Commute delays in the first six months dropped by an average of 15% compared with 2009. For vehicles with at least three axles, the toll rate was $5 per axle.
In June 2018, Bay Area voters approved Regional Measure 3 to further raise the tolls on all seven of the state-owned bridges to fund $4.5 billion worth of transportation improvements in the area. Under the passed measure, the tolls on the Bay Bridge were raised by $1 on January 1, 2019, then again on January 1, 2022, and again on January 1, 2025. Thus under the congestion pricing scheme, the tolls for autos during the peak weekday rush hours were set to $7 in 2019, $8 in 2022, and $9 in 2025; for the non-rush periods, $5 in 2019, $6 in 2022, and $7 in 2025; and on weekends, $6 in 2019, $7 in 2022, and $8 in 2025. Congestion pricing was then suspended indefinitely in April 2020 due to the COVID-19 pandemic, leaving the weekend toll rates in effect for all days and times.
In September 2019, the MTC approved a $4 million plan to eliminate toll takers and convert all seven of the state-owned bridges to all-electronic tolling, citing that 80 percent of drivers were already using FasTrak and that the change would improve traffic flow. On March 20, 2020, accelerated by the COVID-19 pandemic, all-electronic tolling was placed in effect for all seven state-owned toll bridges. The MTC then installed new systems at all seven bridges to make them permanently cashless by the start of 2021. In April 2022, the Bay Area Toll Authority announced plans to remove all remaining unused toll booths and create an open-road tolling system that functions at highway speeds; until then, drivers must still slow substantially while passing through the toll plaza.
The Bay Area Toll Authority then approved a plan in December 2024 to implement 50-cent annual toll increases on all seven state-owned bridges between 2026 and 2030 to help pay for bridge maintenance. The standard toll rate for autos will thus rise to $8.50 on January 1, 2026; $9 in 2027; $9.50 in 2028; $10 in 2029; and $10.50 in 2030. Effective in 2027, a 25-cent surcharge will be added to any toll charged to a license plate account, and a 50-cent surcharge to a toll violation invoice, reflecting the added cost of processing these payment methods.
See also
49-Mile Scenic Drive
Bay Bridge Troll
Cosco Busan oil spill
Golden Gate Bridge
List of bridges documented by the Historic American Engineering Record in California
Southern Crossing (California) - proposed parallel bridge
Treasure Island Development
25 de Abril Bridge, a bridge with a similar design in Portugal
References
Notes
Bibliography
Petroski, Henry (1995). Engineers of Dreams: Great Bridge Builders and the Spanning of America. New York: Alfred A. Knopf.
Reisner, Marc (1999). A Dangerous Place: California's Unsettling Fate. Penguin Books.
San Francisco-Oakland Bay Bridge East Span Seismic Safety Project. Retrieved August 24, 2005.
External links
Official sites
Bay Area FasTrak – includes toll information on this and the other Bay Area toll facilities
baybridgeinfo.org Site by Caltrans about all current construction on the bridge.
California Department of Transportation (Caltrans) official Bay Bridge site
Journals
Media
Lower Deck Rail and Roadway Off Ramps, 1939, Dorothea Lange photo
San Francisco-Oakland Bay Bridge Construction Collection MSS 722.
Special Collections & Archives, UC San Diego Library.
"Bridging San Francisco Bay", PDH Online Course C577
Other
Bay Bridge Oral History Project, Bancroft Library, UC Berkeley
"Symphonies in Steel: Bay Bridge and the Golden Gate" at The Virtual Museum of the City of San Francisco
1936 establishments in California
2013 establishments in California
Bridge disasters caused by earthquakes
Bridge disasters in the United States
Bridges completed in 1936
Bridges completed in 2013
Bridges in Alameda County, California
Bridges in San Francisco
Bridges in the San Francisco Bay Area
Bridges on the Interstate Highway System
Buildings and structures in Oakland, California
Cantilever bridges in the United States
Concrete bridges in California
Double-decker bridges
Historic American Engineering Record in California
Historic Civil Engineering Landmarks
Interstate 80
Landmarks in the San Francisco Bay Area
National Register of Historic Places in San Francisco
Railroad bridges in California
Railroad bridges on the National Register of Historic Places in California
Road bridges on the National Register of Historic Places in California
Road-rail bridges in the United States
San Francisco Bay Trail
San Francisco Bay
Self-anchored suspension bridges
Steel bridges in the United States
Suspension bridges in California
Toll bridges in California
Tolled sections of Interstate Highways
Transport disasters in 1989
Transportation disasters in California
Transportation in Oakland, California
U.S. Route 40
U.S. Route 50
Key System | San Francisco–Oakland Bay Bridge | [
"Engineering"
] | 10,653 | [
"Civil engineering",
"Historic Civil Engineering Landmarks"
] |
165,936 | https://en.wikipedia.org/wiki/Lady%20Bird%20Lake | Lady Bird Lake (formerly, and still colloquially referred to as Town Lake) is a river-like reservoir on the Colorado River in Austin, Texas, United States. The City of Austin created the reservoir in 1960 as a cooling pond for a new city power plant. The lake, which has a surface area of , is now used primarily for recreation and flood control. The reservoir is named in honor of former First Lady of the United States Lady Bird Johnson.
Lady Bird Lake is the easternmost lake in a chain of reservoirs on the Texas Colorado River, which lies entirely within Texas and should not be confused with the larger Colorado River of the Southwestern United States. This chain, known locally as the Texas Highland Lakes, also includes Lake Buchanan, Inks Lake, Lake LBJ, Lake Marble Falls, Lake Travis, and Lake Austin.
History
The City of Austin constructed Longhorn Dam in 1960 to form Town Lake. The city needed the reservoir to serve as a cooling pond for the Holly Street Power Plant, which operated from 1960 until 2007.
Before 1971, the shoreline of Town Lake was mostly a mixture of weeds, unkempt bushes, and trash. Local television station KTBC referred to the lake as an "eyesore". Some concerned Austinites led small projects to clean up the lake. Roberta Crenshaw, chair of the Austin Board of Parks and Recreation, purchased nearly 400 trees and shrubs in an effort to spearhead parkland development around the lake. During his two terms in office (1971–75), the Mayor of Austin, Roy Butler, led the Austin City Council to establish the Town Lake Beautification Committee, and he appointed Lady Bird Johnson as the project's honorary chairman. Johnson's involvement brought attention and money (including $19,000 of her own) to the Town Lake project, allowing for the planting of hundreds of shrubs and trees. The city also built a system of hike and bike trails along the shoreline of the lake.
On July 26, 2007, the Austin City Council passed a resolution authorizing the renaming of the reservoir from Town Lake to Lady Bird Lake in honor of Lady Bird Johnson, who had died earlier that month. Johnson had declined the honor of having the lake renamed for her while she was alive. In renaming the lake, the City Council recognized Johnson's dedication to beautifying the lake and her efforts to create a recreational trail system around the lake's shoreline.
In 2009, non-profit organization Keep Austin Beautiful launched "Clean Lady Bird Lake". The program mobilizes thousands of community volunteers annually to conduct large-scale cleanups along the lake every other month and targeted cleanups throughout the year.
In 2014, a one-mile stretch of the Ann and Roy Butler Hike-and-Bike Trail, named after a former Austin mayor and his wife, was paved to create a boardwalk.
On February 5, 2024, members of the Austin Police Department responded to the 300 block of West Cesar Chavez Street, where first responders recovered a dead body that had been discovered by civilians earlier, following a series of four other bodies recovered from the lake in 2023.
Recreational uses
Lady Bird Lake is a major recreation area for the City of Austin. The lake's banks are bounded by the Ann and Roy Butler Hike-and-Bike Trail, and businesses offer recreational watercraft services along the lakefront portion of the trail. Austin's largest downtown park, Zilker Park, is adjacent to the lake, and Barton Springs, a major attraction for swimmers, flows into the lake.
The City of Austin prohibits the operation of most motorized watercraft on Lady Bird Lake. As a result, the lake serves as a popular recreational area for paddleboards, kayaks, canoes, dragon boats, and rowing shells. Austin's warm climate and the river's calm waters and long, straight course make it especially popular with crew teams and clubs. Along with the University of Texas women's rowing team and coeducational club rowing team, who practice on Lady Bird Lake year-round, teams from other universities (including the University of Wisconsin, the University of Chicago, the University of Oklahoma, and the University of Nebraska) train on Lady Bird Lake during Christmas holidays and spring breaks.
Other water sports along the shores of the lake include swimming in Deep Eddy Pool, the oldest swimming pool in Texas, and Barton Springs Pool, a natural pool on Barton Creek which flows into Lady Bird Lake. Below Tom Miller Dam is Red Bud Isle, a small island formed by the 1900 collapse of the McDonald Dam that serves as a recreation area with a dog park and access to the lake for canoeing and fishing.
Swimming in Lady Bird Lake is illegal. Contrary to a persistent rumor, the ban is not due to poor water quality from street runoff, but rather to several drownings and to debris in the water from bridges and dams destroyed by floods in years past. The City of Austin enacted the ban in 1964, and the fine can be up to $500.
In August 2019, toxic blue-green algae were found in the lake for the first time, reportedly killing at least five dogs that were exposed.
Music venues on the banks of Lady Bird Lake are home to a number of events year-round, including the Austin City Limits Music Festival in the fall, the Austin Reggae Festival and Spamarama in the spring, and many open-air concerts at Auditorium Shores on the south bank and Fiesta Gardens on the north bank. The Austin Aqua Festival was held on the shores of Lady Bird Lake from 1962 until 1998. The late Austin resident and blues guitar legend, Stevie Ray Vaughan played a number of concerts at Auditorium Shores and is honored with a memorial statue on the south bank.
Ann and Roy Butler Hike-and-Bike Trail and Boardwalk
The Ann and Roy Butler Hike-and-Bike Trail, formerly the Town Lake Hike and Bike Trail, creates a complete circuit around Lady Bird Lake. It is one of the oldest urban hike and bike paths in Texas and the longest trail designed for non-motorized traffic maintained by the City of Austin Parks and Recreation Department. A local nonprofit, The Trail Conservancy, is the trail's private steward; it has made trail-wide improvements by adding user amenities and infrastructure such as trailheads, lakefront gathering areas, locally designed jewel-box restrooms, and exercise equipment, and it carries out ongoing ecological restoration work along the trail.
The Butler Trail loop was completed in 2014 with the 1-mile Boardwalk at Lady Bird Lake project, a public-private partnership spearheaded by The Trail Conservancy. Construction on the $28 million project took place from October 2012 to June 2014.
The trail is mostly flat, with 97.5% of it at less than an 8% grade. The trail's surface is smooth and is mostly crushed granite, except for a few lengths of concrete and a boardwalk on the south side of the lake. A pedestrian bridge incorporated into the trail crosses Barton Creek. The Roberta Crenshaw Pedestrian Walkway spans Lady Bird Lake beneath MoPac Boulevard and provides the trail's westernmost crossing of Lady Bird Lake.
The trail encompasses the Lou Neff Point Gazebo at the confluence of Barton Creek and Lady Bird Lake. It is listed as 'Austin Art in Public Places'.
Fishing
Lady Bird Lake has been stocked with several species of fish intended to improve the utility of the reservoir for recreational fishing. The predominant fish species in Lady Bird Lake are largemouth bass, catfish, carp, and sunfish. Fishing is regulated, requiring a fishing license, and daily bag and length limits apply for most species.
A ban on the consumption of fish caught in the lake was issued by the City of Austin in 1990, as a result of excessively high levels of chlordane found in the fish. Although the use of chlordane as a pesticide was banned in the United States in 1988, the chemical sticks strongly to soil particles and can continue to pollute groundwater for years after its application. The ban on the consumption of fish caught in the lake was finally lifted in 1999.
Drinking water uses
The first water treatment facility in the City of Austin, the Thomas C. Green Water Treatment Plant, was built in 1925 to treat water from the Colorado River. The plant occupied a site just west of the principal downtown business district. The water treatment facility was decommissioned in late 2008.
References
External links
Texas Parks and Wildlife: Town Lake
Austin Parks and Recreation Department
The Trail Foundation
Town Lake Hike and Bike Trail Map
The History of Lady Bird Lake
Geography of Austin, Texas
Reservoirs in Texas
Protected areas of Travis County, Texas
Tourist attractions in Austin, Texas
Landmarks in Austin, Texas
Bodies of water of Travis County, Texas
1960 establishments in Texas
Cooling ponds | Lady Bird Lake | [
"Chemistry",
"Environmental_science"
] | 1,770 | [
"Cooling ponds",
"Water pollution"
] |
166,008 | https://en.wikipedia.org/wiki/Cramer%27s%20rule | In linear algebra, Cramer's rule is an explicit formula for the solution of a system of linear equations with as many equations as unknowns, valid whenever the system has a unique solution. It expresses the solution in terms of the determinants of the (square) coefficient matrix and of matrices obtained from it by replacing one column by the column vector of right-sides of the equations. It is named after Gabriel Cramer, who published the rule for an arbitrary number of unknowns in 1750, although Colin Maclaurin also published special cases of the rule in 1748, and possibly knew of it as early as 1729.
Cramer's rule, implemented in a naive way, is computationally inefficient for systems of more than two or three equations. In the case of n equations in n unknowns, it requires computation of n + 1 determinants, while Gaussian elimination produces the result with the same computational complexity as the computation of a single determinant. Cramer's rule can also be numerically unstable even for 2×2 systems. However, Cramer's rule can be implemented with the same complexity as Gaussian elimination (it consistently requires twice as many arithmetic operations and has the same numerical stability when the same permutation matrices are applied).
General case
Consider a system of n linear equations for n unknowns, represented in matrix multiplication form as follows:

    A x = b,

where the n × n matrix A has a nonzero determinant, and the vector x = (x_1, ..., x_n)^T is the column vector of the variables. Then the theorem states that in this case the system has a unique solution, whose individual values for the unknowns are given by

    x_i = det(A_i) / det(A),    for i = 1, ..., n,

where A_i is the matrix formed by replacing the i-th column of A by the column vector b.
A more general version of Cramer's rule considers the matrix equation

    A X = B,

where the n × n matrix A has a nonzero determinant, and X and B are n × m matrices. Given sequences 1 ≤ i_1 < ... < i_k ≤ n and 1 ≤ j_1 < ... < j_k ≤ m, let X_{I,J} denote the k × k submatrix of X with rows in I = (i_1, ..., i_k) and columns in J = (j_1, ..., j_k). Let A_B(I, J) be the n × n matrix formed by replacing the i_s-th column of A by the j_s-th column of B, for all s = 1, ..., k. Then

    det(X_{I,J}) = det(A_B(I, J)) / det(A).

In the case k = 1, this reduces to the normal Cramer's rule.
The rule holds for systems of equations with coefficients and unknowns in any field, not just in the real numbers.
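As an illustration of the column-replacement recipe above, the following is a minimal Python sketch (the function name cramer_solve and the sample system are illustrative choices, not part of the original statement); NumPy is used only to evaluate determinants, and the result is checked against a standard solver.

import numpy as np

def cramer_solve(A, b):
    # Solve A x = b by Cramer's rule: x_i = det(A_i) / det(A),
    # where A_i is A with its i-th column replaced by b.
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    det_A = np.linalg.det(A)
    if np.isclose(det_A, 0.0):
        raise ValueError("Cramer's rule requires a nonzero determinant")
    x = np.empty(A.shape[0])
    for i in range(A.shape[0]):
        A_i = A.copy()
        A_i[:, i] = b  # replace the i-th column by the right-hand side
        x[i] = np.linalg.det(A_i) / det_A
    return x

A = [[2.0, 1.0, 1.0], [1.0, 3.0, 2.0], [1.0, 0.0, 0.0]]
b = [4.0, 5.0, 6.0]
print(cramer_solve(A, b))     # [6., 15., -23.] (up to print formatting)
print(np.linalg.solve(A, b))  # same result from Gaussian elimination

For large systems this loop recomputes a full determinant per unknown, which is why the naive implementation discussed above is avoided in practice.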
Proof
The proof for Cramer's rule uses the following properties of the determinants: linearity with respect to any given column and the fact that the determinant is zero whenever two columns are equal, which is implied by the property that the sign of the determinant flips if you switch two columns.
Fix the index j of a column, and consider that the entries of the other columns have fixed values. This makes the determinant a function of the entries of the j-th column. Linearity with respect to this column means that this function has the form

    D_j(a_{1,j}, ..., a_{n,j}) = C_1 a_{1,j} + ... + C_n a_{n,j},

where the C_i are coefficients that depend on the entries of A that are not in column j. So, one has

    det(A) = D_j(a_{1,j}, ..., a_{n,j}) = C_1 a_{1,j} + ... + C_n a_{n,j}.

(Laplace expansion provides a formula for computing the C_i, but their expression is not important here.)
If the function D_j is applied to any other column k of A, then the result is the determinant of the matrix obtained from A by replacing column j by a copy of column k, so the resulting determinant is 0 (the case of two equal columns).
Now consider a system of n linear equations in n unknowns x_1, ..., x_n, whose coefficient matrix is A, with det(A) assumed to be nonzero:

    a_{1,1} x_1 + ... + a_{1,n} x_n = b_1
    ...
    a_{n,1} x_1 + ... + a_{n,n} x_n = b_n

If one combines these equations by taking C_1 times the first equation, plus C_2 times the second, and so forth until C_n times the last, then for every k ≠ j the resulting coefficient of x_k becomes

    D_j(a_{1,k}, ..., a_{n,k}) = 0.

So, all coefficients become zero, except the coefficient of x_j, which becomes det(A). Similarly, the constant term becomes D_j(b_1, ..., b_n), and the resulting equation is thus

    det(A) x_j = D_j(b_1, ..., b_n),

which gives the value of x_j as

    x_j = D_j(b_1, ..., b_n) / det(A).
It remains to prove that these values for the unknowns form a solution. Let M be the n × n matrix that has the coefficients of D_j as its j-th row, for j = 1, ..., n (this is the adjugate matrix of A). Expressed in matrix terms, we have thus to prove that

    x = (1 / det(A)) M b

is a solution; that is, that

    A ((1 / det(A)) M) b = b.

For that, it suffices to prove that

    A ((1 / det(A)) M) = I_n,

where I_n is the identity matrix.

The above properties of the functions D_j show that M A = det(A) I_n, and therefore

    (1 / det(A)) M A = I_n.
This completes the proof, since a left inverse of a square matrix is also a right-inverse (see Invertible matrix theorem).
For other proofs, see below.
Finding inverse matrix
Let A be an n × n matrix with entries in a field F. Then

    A adj(A) = adj(A) A = det(A) I,

where adj(A) denotes the adjugate matrix, det(A) is the determinant, and I is the identity matrix. If det(A) is nonzero, then the inverse matrix of A is

    A^{-1} = (1 / det(A)) adj(A).
This gives a formula for the inverse of A, provided det(A) ≠ 0. In fact, this formula works whenever the field is replaced by a commutative ring R, provided that det(A) is a unit in R. If det(A) is not a unit, then A is not invertible over the ring (it may be invertible over a larger ring in which some non-unit elements of R become invertible).
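A minimal Python sketch of this construction is given below (the helper name adjugate and the 3 × 3 example matrix are assumptions chosen for illustration); it builds the adjugate entrywise from cofactors and checks the identities A adj(A) = det(A) I and A^{-1} = adj(A) / det(A) numerically.

import numpy as np

def adjugate(A):
    # adj(A)[j, i] is the (i, j) cofactor of A (note the transpose).
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    adj = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            adj[j, i] = (-1) ** (i + j) * np.linalg.det(minor)
    return adj

A = np.array([[3.0, 0.0, 2.0], [2.0, 0.0, -2.0], [0.0, 1.0, 1.0]])
adj_A = adjugate(A)
det_A = np.linalg.det(A)
print(np.allclose(A @ adj_A, det_A * np.eye(3)))      # True: A adj(A) = det(A) I
print(np.allclose(adj_A / det_A, np.linalg.inv(A)))   # True: inverse via the adjugate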
Applications
Explicit formulas for small systems
Consider the linear system

    a_1 x + b_1 y = c_1
    a_2 x + b_2 y = c_2

which in matrix format is

    [ a_1  b_1 ] [ x ]   [ c_1 ]
    [ a_2  b_2 ] [ y ] = [ c_2 ].

Assume a_1 b_2 − b_1 a_2 is nonzero. Then, with the help of determinants, x and y can be found with Cramer's rule as

    x = (c_1 b_2 − b_1 c_2) / (a_1 b_2 − b_1 a_2),
    y = (a_1 c_2 − c_1 a_2) / (a_1 b_2 − b_1 a_2).
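For a concrete, illustrative check of these 2 × 2 formulas (this numeric example is not from the original text), take the system 3x + 2y = 5 and x − y = 0. The denominator is a_1 b_2 − b_1 a_2 = (3)(−1) − (2)(1) = −5, so x = (c_1 b_2 − b_1 c_2) / (−5) = ((5)(−1) − (2)(0)) / (−5) = 1 and y = (a_1 c_2 − c_1 a_2) / (−5) = ((3)(0) − (5)(1)) / (−5) = 1, which indeed satisfies both equations.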
The rules for 3 × 3 matrices are similar. Given

    a_1 x + b_1 y + c_1 z = d_1
    a_2 x + b_2 y + c_2 z = d_2
    a_3 x + b_3 y + c_3 z = d_3

which in matrix format is

    [ a_1  b_1  c_1 ] [ x ]   [ d_1 ]
    [ a_2  b_2  c_2 ] [ y ] = [ d_2 ]
    [ a_3  b_3  c_3 ] [ z ]   [ d_3 ]

then the values of x, y and z can be found as follows: each unknown equals the determinant of the coefficient matrix with the corresponding column replaced by the right-hand side column (d_1, d_2, d_3), divided by the determinant of the coefficient matrix itself.
Differential geometry
Ricci calculus
Cramer's rule is used in the Ricci calculus in various calculations involving the Christoffel symbols of the first and second kind.
In particular, Cramer's rule can be used to prove that the divergence operator on a Riemannian manifold is invariant with respect to change of coordinates. We give a direct proof, suppressing the role of the Christoffel symbols.
Let be a Riemannian manifold equipped with local coordinates . Let be a vector field. We use the summation convention throughout.
Theorem.
The divergence of ,
is invariant under change of coordinates.
Let be a coordinate transformation with non-singular Jacobian. Then the classical transformation laws imply that where . Similarly, if , then .
Writing this transformation law in terms of matrices yields , which implies .
Now one computes
In order to show that this equals
,
it is necessary and sufficient to show that
which is equivalent to
Carrying out the differentiation on the left-hand side, we get:
where denotes the matrix obtained from by deleting the th row and th column.
But Cramer's Rule says that
is the th entry of the matrix .
Thus
completing the proof.
Computing derivatives implicitly
Consider the two equations and . When u and v are independent variables, we can define and
An equation for can be found by applying Cramer's rule.
First, calculate the first derivatives of F, G, x, and y:
Substituting dx, dy into dF and dG, we have:
Since u, v are both independent, the coefficients of du, dv must be zero. So we can write out equations for the coefficients:
Now, by Cramer's rule, we see that:
This is now a formula in terms of two Jacobians:
Similar formulas can be derived for
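A small symbolic sketch of this procedure (illustrative Python/SymPy code; the particular constraints F and G below are assumptions chosen only for demonstration) computes one such partial derivative as a ratio of Jacobian determinants, exactly as Cramer's rule prescribes:

```python
import sympy as sp

x, y, u, v = sp.symbols('x y u v')

# Hypothetical constraints defining x, y implicitly in terms of u, v:
F = x**2 + y - u        # F(x, y, u, v) = 0
G = x + y**2 - v        # G(x, y, u, v) = 0

# Jacobian determinants needed by Cramer's rule.
J  = sp.Matrix([[sp.diff(F, x), sp.diff(F, y)],
                [sp.diff(G, x), sp.diff(G, y)]])
Ju = sp.Matrix([[sp.diff(F, u), sp.diff(F, y)],
                [sp.diff(G, u), sp.diff(G, y)]])

# dx/du = -det(Ju) / det(J), provided det(J) != 0.
dx_du = -Ju.det() / J.det()
print(sp.simplify(dx_du))   # 2*y/(4*x*y - 1)
```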
Integer programming
Cramer's rule can be used to prove that an integer programming problem whose constraint matrix is totally unimodular and whose right-hand side is integer, has integer basic solutions. This makes the integer program substantially easier to solve.
Ordinary differential equations
Cramer's rule is used to derive the general solution to an inhomogeneous linear differential equation by the method of variation of parameters.
Geometric interpretation
Cramer's rule has a geometric interpretation that can also be regarded as a proof, or simply as giving insight into its geometric nature. These geometric arguments work in general, and not only in the case of two equations with two unknowns presented here.
Given the system of equations
it can be considered as an equation between vectors
The area of the parallelogram determined by and is given by the determinant of the system of equations:
In general, when there are more variables and equations, the determinant of n vectors of length n gives the volume of the parallelepiped determined by those vectors in n-dimensional Euclidean space.
Therefore, the area of the parallelogram determined by and has to be times the area of the first one since one of the sides has been multiplied by this factor. Now, this last parallelogram, by Cavalieri's principle, has the same area as the parallelogram determined by and
Equating the areas of this last and the second parallelogram gives the equation
from which Cramer's rule follows.
Other proofs
A proof by abstract linear algebra
This is a restatement of the proof above in abstract language.
Consider the map where is the matrix with substituted in the th column, as in Cramer's rule. Because of linearity of determinant in every column, this map is linear. Observe that it sends the th column of to the th basis vector (with 1 in the th place), because determinant of a matrix with a repeated column is 0. So we have a linear map which agrees with the inverse of on the column space; hence it agrees with on the span of the column space. Since is invertible, the column vectors span all of , so our map really is the inverse of . Cramer's rule follows.
A short proof
A short proof of Cramer's rule can be given by noticing that is the determinant of the matrix
On the other hand, assuming that our original matrix is invertible, this matrix has columns , where is the n-th column of the matrix . Recall that the matrix has columns , and therefore . Hence, by using that the determinant of the product of two matrices is the product of the determinants, we have
The proof for other is similar.
Using Geometric Algebra
Inconsistent and indeterminate cases
A system of equations is said to be inconsistent when there are no solutions and it is called indeterminate when there is more than one solution. For linear equations, an indeterminate system will have infinitely many solutions (if it is over an infinite field), since the solutions can be expressed in terms of one or more parameters that can take arbitrary values.
Cramer's rule applies to the case where the coefficient determinant is nonzero. In the 2×2 case, if the coefficient determinant is zero, then the system is incompatible if the numerator determinants are nonzero, or indeterminate if the numerator determinants are zero.
For 3×3 or higher systems, the only thing one can say when the coefficient determinant equals zero is that if any of the numerator determinants are nonzero, then the system must be inconsistent. However, having all determinants zero does not imply that the system is indeterminate. A simple example where all determinants vanish (equal zero) but the system is still incompatible is the 3×3 system x+y+z=1, x+y+z=2, x+y+z=3.
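This example can be checked numerically; the following minimal Python/NumPy sketch (not part of the article) shows that the coefficient determinant and all three numerator determinants vanish, while the ranks of the coefficient and augmented matrices still differ, so the system is inconsistent:

```python
import numpy as np

# The 3x3 system x + y + z = 1, x + y + z = 2, x + y + z = 3:
A = np.ones((3, 3))
b = np.array([1.0, 2.0, 3.0])

# Coefficient determinant and all three numerator determinants vanish ...
dets = [np.linalg.det(A)]
for i in range(3):
    A_i = A.copy()
    A_i[:, i] = b
    dets.append(np.linalg.det(A_i))
print(np.allclose(dets, 0))                                    # True

# ... yet the system is inconsistent: rank(A) differs from rank([A | b]).
aug = np.column_stack([A, b])
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(aug))    # 1 2
```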
See also
Rouché–Capelli theorem
Gaussian elimination
References
External links
Proof of Cramer's Rule
WebApp descriptively solving systems of linear equations with Cramer's Rule
Online Calculator of System of linear equations
Wolfram MathWorld explanation on this subject
Theorems in linear algebra
Determinants
1750 in science | Cramer's rule | [
"Mathematics"
] | 2,324 | [
"Theorems in algebra",
"Theorems in linear algebra"
] |
166,010 | https://en.wikipedia.org/wiki/Vorticity | In continuum mechanics, vorticity is a pseudovector (or axial vector) field that describes the local spinning motion of a continuum near some point (the tendency of something to rotate), as would be seen by an observer located at that point and traveling along with the flow. It is an important quantity in the dynamical theory of fluids and provides a convenient framework for understanding a variety of complex flow phenomena, such as the formation and motion of vortex rings.
Mathematically, the vorticity ω is the curl of the flow velocity v: ω = ∇ × v,
where ∇ is the nabla operator. Conceptually, ω could be determined by marking parts of a continuum in a small neighborhood of the point in question, and watching their relative displacements as they move along the flow. The vorticity would be twice the mean angular velocity vector of those particles relative to their center of mass, oriented according to the right-hand rule. By its own definition, the vorticity vector is a solenoidal field since ∇ · ω = ∇ · (∇ × v) = 0.
In a two-dimensional flow, ω is always perpendicular to the plane of the flow, and can therefore be considered a scalar field.
Mathematical definition and properties
Mathematically, the vorticity of a three-dimensional flow is a pseudovector field, usually denoted by ω, defined as the curl of the velocity field v describing the continuum motion. In Cartesian coordinates, with velocity components (v_x, v_y, v_z):
ω = (∂v_z/∂y − ∂v_y/∂z, ∂v_x/∂z − ∂v_z/∂x, ∂v_y/∂x − ∂v_x/∂y).
In words, the vorticity tells how the velocity vector changes when one moves by an infinitesimal distance in a direction perpendicular to it.
In a two-dimensional flow where the velocity is independent of the -coordinate and has no -component, the vorticity vector is always parallel to the -axis, and therefore can be expressed as a scalar field multiplied by a constant unit vector :
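As a numerical illustration of this scalar vorticity (a minimal Python/NumPy sketch, not part of the article; the shear flow u = (y, 0) is an assumed example), the vorticity of a two-dimensional velocity field sampled on a grid can be approximated with finite differences:

```python
import numpy as np

# Sample a simple 2D shear flow u = (y, 0) on a grid (illustrative choice).
y, x = np.mgrid[0:1:50j, 0:1:50j]
u = y                      # x-component of velocity
v = np.zeros_like(u)       # y-component of velocity

# Scalar vorticity omega = dv/dx - du/dy, approximated by central differences.
du_dy, du_dx = np.gradient(u, y[:, 0], x[0, :])
dv_dy, dv_dx = np.gradient(v, y[:, 0], x[0, :])
omega = dv_dx - du_dy

print(omega.mean())        # approximately -1, as expected for u = (y, 0)
```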
The vorticity is also related to the flow's circulation (line integral of the velocity) along a closed path by the (classical) Stokes' theorem. Namely, for any infinitesimal surface element with normal direction and area , the circulation along the perimeter of is the dot product where is the vorticity at the center of .
Since vorticity is an axial vector, it can be associated with a second-order antisymmetric tensor (the so-called vorticity or rotation tensor), which is said to be the dual of . The relation between the two quantities, in index notation, are given by
where is the three-dimensional Levi-Civita tensor. The vorticity tensor is simply the antisymmetric part of the tensor , i.e.,
Examples
In a mass of continuum that is rotating like a rigid body, the vorticity is twice the angular velocity vector of that rotation. This is the case, for example, in the central core of a Rankine vortex.
The vorticity may be nonzero even when all particles are flowing along straight and parallel pathlines, if there is shear (that is, if the flow speed varies across streamlines). For example, in the laminar flow within a pipe with constant cross section, all particles travel parallel to the axis of the pipe; but faster near that axis, and practically stationary next to the walls. The vorticity will be zero on the axis, and maximum near the walls, where the shear is largest.
Conversely, a flow may have zero vorticity even though its particles travel along curved trajectories. An example is the ideal irrotational vortex, where most particles rotate about some straight axis, with speed inversely proportional to their distances to that axis. A small parcel of continuum that does not straddle the axis will be rotated in one sense but sheared in the opposite sense, in such a way that its mean angular velocity about its center of mass is zero.
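These cases can also be checked symbolically; the following is a small Python/SymPy sketch (illustrative, not from the article) for rigid-body rotation, parallel shear flow and the irrotational vortex:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

def vorticity_z(u, v):
    """z-component of vorticity for a planar velocity field (u, v)."""
    return sp.simplify(sp.diff(v, x) - sp.diff(u, y))

# Rigid-body rotation with unit angular velocity: (u, v) = (-y, x)
print(vorticity_z(-y, x))              # 2, i.e. twice the angular velocity

# Parallel shear flow: (u, v) = (y, 0)
print(vorticity_z(y, 0))               # -1, nonzero despite straight pathlines

# Irrotational (free) vortex: (u, v) = (-y, x) / (x**2 + y**2)
r2 = x**2 + y**2
print(vorticity_z(-y / r2, x / r2))    # 0 away from the axis
```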
Example flows (illustrations omitted): a rigid-body-like vortex, a parallel flow with shear, and an irrotational vortex, where v is the velocity of the flow, r is the distance to the center of the vortex and ∝ indicates proportionality. The figures compare the absolute velocities and the relative (magnified) velocities around a highlighted point in each flow; the vorticity is nonzero for the rigid-body-like vortex and for the parallel flow with shear, and zero for the irrotational vortex.
Another way to visualize vorticity is to imagine that, instantaneously, a tiny part of the continuum becomes solid and the rest of the flow disappears. If that tiny new solid particle is rotating, rather than just moving with the flow, then there is vorticity in the flow. In the figure below, the left subfigure demonstrates no vorticity, and the right subfigure demonstrates existence of vorticity.
Evolution
The evolution of the vorticity field in time is described by the vorticity equation, which can be derived from the Navier–Stokes equations.
In many real flows where the viscosity can be neglected (more precisely, in flows with high Reynolds number), the vorticity field can be modeled by a collection of discrete vortices, the vorticity being negligible everywhere except in small regions of space surrounding the axes of the vortices. This is true in the case of two-dimensional potential flow (i.e. two-dimensional zero viscosity flow), in which case the flowfield can be modeled as a complex-valued field on the complex plane.
Vorticity is useful for understanding how ideal potential flow solutions can be perturbed to model real flows. In general, the presence of viscosity causes a diffusion of vorticity away from the vortex cores into the general flow field; this flow is accounted for by a diffusion term in the vorticity transport equation.
Vortex lines and vortex tubes
A vortex line or vorticity line is a line which is everywhere tangent to the local vorticity vector. Vortex lines are defined by the relation dx/ω_x = dy/ω_y = dz/ω_z,
where ω = (ω_x, ω_y, ω_z) is the vorticity vector in Cartesian coordinates.
A vortex tube is the surface in the continuum formed by all vortex lines passing through a given (reducible) closed curve in the continuum. The 'strength' of a vortex tube (also called vortex flux) is the integral of the vorticity across a cross-section of the tube, and is the same everywhere along the tube (because vorticity has zero divergence). It is a consequence of Helmholtz's theorems (or equivalently, of Kelvin's circulation theorem) that in an inviscid fluid the 'strength' of the vortex tube is also constant with time. Viscous effects introduce frictional losses and time dependence.
In a three-dimensional flow, vorticity (as measured by the volume integral of the square of its magnitude) can be intensified when a vortex line is extended — a phenomenon known as vortex stretching. This phenomenon occurs in the formation of a bathtub vortex in outflowing water, and the build-up of a tornado by rising air currents.
Vorticity meters
Rotating-vane vorticity meter
A rotating-vane vorticity meter was invented by Russian hydraulic engineer A. Ya. Milovich (1874–1958). In 1913 he proposed a cork with four blades attached as a device qualitatively showing the magnitude of the vertical projection of the vorticity and demonstrated a motion-picture photography of the float's motion on the water surface in a model of a river bend.
Rotating-vane vorticity meters are commonly shown in educational films on continuum mechanics (famous examples include the NCFMF's "Vorticity" and "Fundamental Principles of Flow" by Iowa Institute of Hydraulic Research).
Specific sciences
Aeronautics
In aerodynamics, the lift distribution over a finite wing may be approximated by assuming that each spanwise segment of the wing has a semi-infinite trailing vortex behind it. It is then possible to solve for the strength of the vortices using the criterion that there be no flow induced through the surface of the wing. This procedure is called the vortex panel method of computational fluid dynamics. The strengths of the vortices are then summed to find the total approximate circulation about the wing. According to the Kutta–Joukowski theorem, lift per unit of span is the product of circulation, airspeed, and air density.
Atmospheric sciences
The relative vorticity is the vorticity relative to the Earth induced by the air velocity field. This air velocity field is often modeled as a two-dimensional flow parallel to the ground, so that the relative vorticity vector is generally a scalar rotation quantity perpendicular to the ground. Vorticity is positive when – looking down onto the Earth's surface – the wind turns counterclockwise. In the northern hemisphere, positive vorticity is called cyclonic rotation, and negative vorticity is anticyclonic rotation; the nomenclature is reversed in the Southern Hemisphere.
The absolute vorticity is computed from the air velocity relative to an inertial frame, and therefore includes a term due to the Earth's rotation, the Coriolis parameter.
The potential vorticity is absolute vorticity divided by the vertical spacing between levels of constant (potential) temperature (or entropy). The absolute vorticity of an air mass will change if the air mass is stretched (or compressed) in the vertical direction, but the potential vorticity is conserved in an adiabatic flow. As adiabatic flow predominates in the atmosphere, the potential vorticity is useful as an approximate tracer of air masses in the atmosphere over the timescale of a few days, particularly when viewed on levels of constant entropy.
The barotropic vorticity equation is the simplest way for forecasting the movement of Rossby waves (that is, the troughs and ridges of 500 hPa geopotential height) over a limited amount of time (a few days). In the 1950s, the first successful programs for numerical weather forecasting utilized that equation.
In modern numerical weather forecasting models and general circulation models (GCMs), vorticity may be one of the predicted variables, in which case the corresponding time-dependent equation is a prognostic equation.
Related to the concept of vorticity is the helicity H, defined as H = ∫_V v · ω dV,
where the integral is over a given volume V. In atmospheric science, helicity of the air motion is important in forecasting supercells and the potential for tornadic activity.
See also
Barotropic vorticity equation
D'Alembert's paradox
Enstrophy
Palinstrophy
Velocity potential
Vortex
Vortex tube
Vortex stretching
Horseshoe vortex
Wingtip vortices
Fluid dynamics
Biot–Savart law
Circulation
Vorticity equations
Kutta–Joukowski theorem
Atmospheric sciences
Prognostic equation
Carl-Gustaf Rossby
Hans Ertel
References
Bibliography
Clancy, L.J. (1975), Aerodynamics, Pitman Publishing Limited, London
"Weather Glossary"' The Weather Channel Interactive, Inc.. 2004.
"Vorticity". Integrated Publishing.
Further reading
Ohkitani, K., "Elementary Account Of Vorticity And Related Equations". Cambridge University Press. January 30, 2005.
Chorin, Alexandre J., "Vorticity and Turbulence". Applied Mathematical Sciences, Vol 103, Springer-Verlag. March 1, 1994.
Majda, Andrew J., Andrea L. Bertozzi, "Vorticity and Incompressible Flow". Cambridge University Press; 2002.
Tritton, D. J., "Physical Fluid Dynamics". Van Nostrand Reinhold, New York. 1977.
Arfken, G., "Mathematical Methods for Physicists", 3rd ed. Academic Press, Orlando, Florida. 1985.
External links
Weisstein, Eric W., "Vorticity". Scienceworld.wolfram.com.
Doswell III, Charles A., "A Primer on Vorticity for Application in Supercells and Tornadoes". Cooperative Institute for Mesoscale Meteorological Studies, Norman, Oklahoma.
Cramer, M. S., "Navier–Stokes Equations -- Vorticity Transport Theorems: Introduction". Foundations of Fluid Mechanics.
Parker, Douglas, "ENVI 2210 – Atmosphere and Ocean Dynamics, 9: Vorticity". School of the Environment, University of Leeds. September 2001.
Graham, James R., "Astronomy 202: Astrophysical Gas Dynamics". Astronomy Department, UC Berkeley.
"The vorticity equation: incompressible and barotropic fluids".
"Interpretation of the vorticity equation".
"Kelvin's vorticity theorem for incompressible or barotropic flow".
"Spherepack 3.1 ". (includes a collection of FORTRAN vorticity program)
"Mesoscale Compressible Community (MC2) Real-Time Model Predictions". (Potential vorticity analysis)
Continuum mechanics
Fluid dynamics
Meteorological quantities
Rotation
fr:Tourbillon (physique) | Vorticity | [
"Physics",
"Chemistry",
"Mathematics",
"Engineering"
] | 2,859 | [
"Physical phenomena",
"Physical quantities",
"Continuum mechanics",
"Chemical engineering",
"Quantity",
"Classical mechanics",
"Rotation",
"Meteorological quantities",
"Motion (physics)",
"Piping",
"Fluid dynamics"
] |
166,019 | https://en.wikipedia.org/wiki/Sheridan%20College | Sheridan College Institute of Technology and Advanced Learning, formerly Sheridan College of Applied Arts and Technology, is a public polytechnic institute partnered with private Canadian College of Technology and Trades operating campuses across the Greater Toronto Area of Ontario, Canada.
Founded in 1967, it is known for academic programs in creative writing and publishing, animation and illustration, film and design, business, applied computing, and engineering technology, among others. Sheridan operates the Davis Campus in Brampton, the Trafalgar Road Campus in Oakville, and the Hazel McCallion Campus in Mississauga.
In 2024, Sheridan College's investment in international student enrollment was blamed for the cancellation of 40 programs and major layoffs.
History
Founding
Sheridan College was established in 1967. The School of Graphic Design was located in Brampton, Ontario until 1970, when it moved to the new campus in Oakville, Ontario. The Brampton campus was a converted public high school that had previously been in condemned status until re-fitted for use by Sheridan College. The school and area were subsequently replaced by residential homes. The new Oakville location was still under construction when classes began in the fall of 1970. The classes were held in a large open area under triangular skylights which allowed excellent lighting for the students. The photography department used a well equipped photo studio area and darkrooms for processing film and prints. That building has become merged with many other structures as extensive expansion of the campus has occurred on an ongoing basis. The main courses taught that year were graphic design, fashion design, photography and animation.
Contributions to animation
In the 1960s and early 1970s, the Canadian animation industry was little formed and virtually non-existent, excepting animation pioneers of the National Film Board and such Canadian studios as Crawley Films in Ottawa and The Guest Group in Toronto, a group of creative companies owned and run by Al Guest.
The Canadian animation landscape began to change in the late 1960s with Rocket Robin Hood, a Canadian animated series produced by Al Guest and his partner Jean Mathieson. In 1968 President Porter organized the school's first course in classical animation, even though at the time there was little evidence of demand for graduates. The school took advantage of the closing of Al Guest's studio following the production of Rocket Robin Hood and were able to buy up the cameras, animation and editing equipment. Subsequently, Guest and Mathieson served as creative advisors to Sheridan and hired a number of Sheridan graduates as key personnel for their new studio Rainbow Animation.
In 1984, Sheridan student Jon Minnis created the short animation piece Charade. The five-minute film was animated by Minnis with Pantone markers on paper during a single three-month summer term at Sheridan College. The film won the Academy Award for Best Animated Short Film at the 57th Academy Awards. As Sheridan's animation department continued to grow, it produced hundreds of animators into Canadian and international studios, at one point in 1996 being called "the Harvard of animation schools" on "a worldwide basis" by animator Michael Hirsh. A significant number of graduates have held key positions at Walt Disney Animation Studios, Don Bluth Productions, Pixar Animation Studios, and DreamWorks Animation, both for traditional and CGI animation. Sheridan graduates include seven Academy Award nominees and two winners, including Domee Shi, the first woman to direct a Pixar animated short.
Animation faculty and alumni have also been recognized at the Emmy awards, including animation professor Kaj Pindal and alumni Jim Bryson, Bobby Chiu and Adam Jeffcoat.
In June 2018, animation alumnus Jon Klassen was named to the Order of Canada in recognition of his contributions to children's literature. Klassen is the author and illustrator of the award-winning book, This is Not My Hat.
In 2018, Sheridan's animation program celebrated the 50th anniversary of its founding. Today, the program aims to foster the same innovative and creative spirit in its current students as it did 50 years ago. Students now earn a four-year Bachelor's Degree in Animation, and post-graduate programs in Computer Animation, Visual Effects and Character Animation are also available.
In 2019, Sheridan was ranked as the top animation school in the world outside the United States by Animation Career Review.
Unsuccessful bid for university status
Former President Dr. Jeff Zabudsky announced in 2012 that Sheridan College would seek to become a university by 2020. The college began implementing several changes to meet the non-binding criteria of a university as set by Universities Canada including: the establishment of an academic senate to set policy, increasing the number of degree-level courses, and increasing the number of instructors with master's and doctoral degrees. The college appointed former Mayor of Mississauga Hazel McCallion as its first chancellor in 2016.
In 2018, it was announced that Sheridan will open a new campus in Brampton in partnership with Ryerson University. The campus will be located on south-east corner of Church Street West and Mill Street North in Brampton. The new campus will focus on delivering programs in science, technology, engineering, arts and mathematics (STEAM). However, this plan was cancelled in 2019.
In 2019, under the leadership of President Dr. Janet Morrison, the college unveiled a new five-year strategic plan that sets out a new vision for Sheridan: to bring together key elements of colleges, polytechnics and universities to create a new, standard-setting model of higher education.
Focus on International Students
In the late 2000s, Sheridan began focusing on increasing international enrolment. By 2010, roughly 10% of college enrolment consisted of foreign students. Over the next six years Sheridan added over 4,500 more, raising the percentage of international students to nearly 30% in 2016-17. The increase led to community concerns regarding housing, access to healthcare and food insecurity. In the following six years the college enrolled an additional 5,591, the eighth-highest number for Canadian post-secondary institutions in 2022-23.
The rise in human trafficking in Brampton, particularly targeting international students, led Sheridan to raise the issue with students as early as 2019. In 2021, it was reported that "the top brass at Sheridan College" refused to acknowledge the risk of human trafficking to international students. In 2022, the college produced a webinar to educate faculty and staff that Sheridan campuses are located in human trafficking "hot spots", created a human trafficking resource handout for their students and worked with community services on human trafficking initiatives. In 2024, Brampton City Council passed a resolution aimed at tackling human trafficking of international students in their city, with reporting naming Sheridan as a popular destination. In October, Mayor Patrick Brown first acknowledged the issue publicly, noting that Brampton has the highest number of international students in the country, while asking for federal and provincial help. Months later, Indian authorities alleged widespread human trafficking of international students.
Academics
Faculties and Schools
Faculty of Animation, Arts and Design
Pilon School of Business
Faculty of Applied Health and Community Studies
Faculty of Humanities and Social Sciences
Faculty of Applied Science and Technology
Continuing and Professional Studies
Programs
The college has more than 130 programs leading to degrees, certificates, diplomas, and post-graduate diplomas. Sheridan College has a music theatre performance program, undergraduate and post-graduate film programs, and a craft and design program. They have courses in business, animation, illustration, applied computing, engineering technology, community studies, and liberal studies, among others.
In 2012, art and design programs within Sheridan's Faculty of Animation, Arts and Design were recognized by the National Association of Schools of Art and Design (NASAD) to have "substantially equivalent" membership status. (NASAD's nomenclature for non U.S. members) Sheridan is only the second art institution in Canada to achieve this status.
Research and entrepreneurship centres
Centre for Advanced Manufacturing and Design Technologies (CAMDT)
The Centre for Advanced Manufacturing and Design Technologies (CAMDT), located at the Brampton campus, is a 40,000 sq. ft. facility housing highly specialized manufacturing and design equipment. CAMDT allows Sheridan and its industry partners to collaborate on addressing challenges in the manufacturing sector, while developing graduates with the skills and practical knowledge to make an immediate and positive impact in the workplace.
Centre for Elder Research
The stated mission of Sheridan's Centre for Elder Research is to enhance quality of life for older individuals, by developing, testing, and implementing new and realistic solutions to improve the day-to-day experiences of elders and their families. In 2018, the Centre was awarded $178,856 from the Natural Sciences and Engineering Research Council of Canada to further explore how emerging technologies, such as virtual reality tools, can be leveraged to enhance the health and well-being of older adults residing in congregate living facilities such as long-term care homes.
Screen Industries Research and Training Center (SIRT)
Opened at Pinewood Toronto Studios in 2010, Screen Industries Research and Training Center (SIRT) is a digital media sound stage and post-production facility that focuses on 2D and 3D stereoscopic production processes. SIRT was conceived and launched by Sheridan College to operate in connection to the creative industries and three levels of the Canadian government. The Center's stated mission is to conduct high-level research on film, digital cinema, and high-definition technologies in all levels of production and display. The University of Waterloo announced in July 2010 that funding was awarded for joint research between their film department and SIRT.
In 2013, SIRT was designated as the first digital media Technology Access Centre in Ontario, supported through funding from the Natural Sciences and Engineering Research Council of Canada's (NSERC) College-Community Innovation program. In 2018, this funding was renewed for an additional five years to support further applied research and industry collaboration at SIRT.
Centre for Mobile Innovation (CMI)
The Centre for Mobile Innovation is a research facility for faculty and students to create solutions in collaboration with community and industry partners in the area of Internet of things (IoT), wearable computing, augmented/virtual reality (AR/VR), and/or machine learning.
Canadian Music Theatre Project (CMTP)
The Canadian Music Theatre Project (CMTP) was established by Michael Rubinoff, and was an incubator for new musical theatre works by Canadian and international composers, lyricists and book-writers. CMTP connects creative teams with talented students who help bring new characters to life, creating an environment for material to be tested and rewritten. Three or four projects are selected each year, with a five-week workshop period culminating in staged readings in front of a 200-person audience of industry professionals and theatre enthusiasts. Since its inception in 2011, 225 students, 34 writers and composers and 25 guest directors and music directors have participated in the creation of 19 new musicals.
Connection to Come From Away
Come From Away began its life and development at the Canadian Music Theatre Project in 2012. Over two seasons, the CMTP provided the creative team with 12 weeks of development time and support, with exceptional student performers, crew, and creative teams. Access to onsite recording facilities to create demo recordings aided in the continuing development of the musical. The workshops culminated in test performances with a full band and live audiences at Sheridan and the Panasonic Theatre in Toronto. The full, two-act musical was produced by Theatre Sheridan the following year. Come From Away was part of the National Alliance for Musical Theatre and Goodspeed Festivals of New Musicals. It was optioned by Tony Award-winning producers Junkyard Dog Productions.
As a credited producer on the musical, Sheridan became the first postsecondary institution to be nominated for a Tony Award when Come From Away was recognized with a Best Musical nomination in 2017.
Entrepreneurship, Discovery and Growth Engine (EDGE) Hub
The EDGE hub offers training, mentorship, co-working space and support to access funding to early-stage entrepreneurs. Since it opened its doors in 2017, over 40 start-ups have been supported, 13 of which reside in the co-working space at Sheridan's Mississauga campus.
Campuses
Sheridan College has three campuses located in Ontario. Residential dorms are currently only at Trafalgar and Davis campuses. A shuttle bus pilot program to link the three campuses has been discontinued.
Davis Campus (Brampton)
The Davis Campus is located in Brampton (7899 McLaughlin Road); it was completed in 1977 and serves approximately 12,167 students. Formerly named the Brampton Campus, it was renamed in 1992 after former premier of Ontario William G. Davis, who created the college system and was from Brampton himself.
This campus is home to Sheridan's community services, engineering & technology, and applied health programs. The school includes three major centres: the Centre of Mobile Innovation, the Centre of Advanced Manufacturing and Design Technologies, and the Centre for Healthy Communities.
Sheridan's Skills Training Centre relocated to the Davis Campus and was upgraded in 2017. The centre has 130,000 square feet of workshops, classrooms, facilities, machinery and equipment for the apprenticeship and pre-trades programs at Sheridan. Programs include:
Apprenticeship programs: Electrician – Construction & Maintenance, General Machinist, Industrial Mechanic Millwright, Tool and Die Maker
Electrical Techniques
Electrical Engineering Technician
Electrical Engineering Degree
AI/ML Graduate Certificate
Mechanical Technician – Tool Making
Mechanical Techniques – Plumbing
Mechanical Techniques – Tool and Die Maker
Mechanical Engineering Degree
Welding Techniques
Dual-Credit programs
Trafalgar Road Campus (Oakville)
Located in Oakville (1430 Trafalgar Road), the Trafalgar Road Campus is the main campus of Sheridan College and serves 9,610 students. This campus is the home of the Faculty of Animation, Arts and Design, and is Canada's largest art school.
This campus includes two performance theatres which hold performances annually. Trafalgar campus is also home to the Bruins soccer, rugby and cross country teams.
The Trafalgar campus is partnered with the University of Toronto Mississauga (UTM) campus to create four cross-campus programs: Theatre & Drama Studies, CCIT, Digital Enterprise Management, and Art & Art History Studies.
Hazel McCallion Campus (Mississauga)
The Hazel McCallion campus (HMC) is located in Mississauga (4180 Duke of York Boulevard), in the city centre adjacent to Square One Shopping Centre. It opened in September 2011. The initial phase of development was intended for approximately 2,000 students, with an additional 3,700 students accommodated with the opening of HMC's second building in January 2017. In 2017, HMC opened a new wing, increasing enrolment capacity to over 5,500 students.
The Pilon School of Business is located here. Programs in Advertising, Marketing and Visual Merchandising complement the business diploma, degree and graduate certificate programs. The new wing is home to architectural programs focusing on sustainably built environments. It also includes a Centre for Creative Thinking and a gallery space.
A new 70,000-square-foot student and athletics expansion at HMC includes numerous new student life, food services, recreation and athletics spaces. The project is a collaboration between Sheridan, Sheridan Athletics, and the Sheridan Student Union.
Student life
Publications
The journalism program produces the Sheridan Sun Online, a news and information site maintained by Sheridan's journalism students.
Athletics
An informal hockey team was formed by students in Sheridan's first year of operation, 1967. The team officially joined the newly created Ontario Colleges Athletic Association (OCAA) the next year, along with 20 other new hockey teams from throughout Canada. The Bruins won their Central Division, also participating in the very first Provincial Championship tournament. The hockey team was discontinued in 1992 after a successful history, with the void filled by other Bruins Varsity sports. Apart from intramural sports, Sheridan College currently has men's and women's Varsity teams for basketball, soccer, rugby, cross country running, and volleyball. They are still associated with the OCAA.
People
Presidents
Notable alumni
Kent Angus, businessman and Paul Loicq Award winner
Danny Antonucci, creator of Ed, Edd n Eddy
Jeanette Atwood, cartoonist, animator
Alan Barillaro, Pixar animator and director of Piper, Winner of the Best Animated Short Film at the 89th Academy Awards
Charles E. Bastien, animation director
Allie X, singer
Charles Bonifacio, animator
Vera Brosgol, cartoonist
Svetlana Chmakova, comics creator
Sheldon Cohen, animator and children's book illustrator
Nick Cross, Canadian animator
James Cunningham, comedian
Dean DeBlois, animator and director (Walt Disney Animation Studios, DreamWorks Pictures)
Robb Denovan, animator
Trish Doan, musician (metal band Kittie)
Ian D'Sa, musician (Billy Talent)
Kathryn Durst, illustrator and artist
Elicser Elliott, artist, author, animator
Paul Epp, industrial designer
Tom Freda, photographer, activist
Michel Gagné, animator
Wayne Gilbert, animator
Paul Gilligan, comic strip writer
Christopher Guinness, art director, animator, multi-Addy Award Winner
Bryce Hallett, animator/director, Frog Feet Productions
Steve Heineman, artist
Philip Hoffman, filmmaker, York University Professor
Bob Jaques, animator
Trevor Jimenez, Academy-award nominated animator
Jon Klassen, illustrator and children's book author
John Kricfalusi, creator of The Ren and Stimpy Show
Thao Lam, author and illustrator
Jeff Lemire, cartoonist
Troy Little, animator, graphic-novel creator
Charmaine Lurch, artist and educator
Glenn McQueen, supervisor of digital animation and supervising character animator (Pixar, PDI)
Alex Milne, comic book artist
Kent Monkman, artist
Steve Murray, known as Chip Zdarsky, comic book artist
Sidhu Moose Wala, Punjabi pop star
Lynne Naylor, animator
Gary Pearson, editorial cartoonist
Nik Ranieri, Disney animator
Graham Roumieu, author & illustrator
Kathy Shaidle, author
Domee Shi, Pixar writer and director of Bao, Winner of the Best Animated Short Film at the 91st Academy Awards, and director and co-writer of Turning Red.
David Soren, DreamWorks animator and director of Turbo
Don Sparrow, editorial illustrator
Michael Therriault, actor
Chris Williams, Disney animator and co-director of Moana and director of Big Hero 6, Winner of the Best Animated Feature at the 87th Academy Awards
Steve "Spaz" Williams, animator
Steve Wolfhard, storyboard artist for Adventure Time
Andrew Wright, artist
Samantha Youssef, Disney animator
Virginia to Vegas, singer, songwriter from Virginia but was raised in Guelph, Ontario
See also
Canadian government scientific research organizations
Canadian industrial research and development organizations
Canadian Interuniversity Sport
Canadian university scientific research organizations
Higher education in Ontario
List of colleges in Ontario
References
External links
Official athletics website
Colleges in Ontario
Educational institutions established in 1967
Education in Oakville, Ontario
Animation schools in Canada
1967 establishments in Ontario
Film schools in Canada
Glassmaking schools | Sheridan College | [
"Materials_science",
"Engineering"
] | 3,823 | [
"Glass engineering and science",
"Glassmaking schools"
] |
166,022 | https://en.wikipedia.org/wiki/Square%20matrix | In mathematics, a square matrix is a matrix with the same number of rows and columns. An n-by-n matrix is known as a square matrix of order n. Any two square matrices of the same order can be added and multiplied.
Square matrices are often used to represent simple linear transformations, such as shearing or rotation. For example, if is a square matrix representing a rotation (rotation matrix) and is a column vector describing the position of a point in space, the product yields another column vector describing the position of that point after that rotation. If is a row vector, the same transformation can be obtained using where is the transpose of
Main diagonal
The entries () form the main diagonal of a square matrix. They lie on the imaginary line which runs from the top left corner to the bottom right corner of the matrix. For instance, the main diagonal of the 4×4 matrix above contains the elements , , , .
The diagonal of a square matrix from the top right to the bottom left corner is called antidiagonal or counterdiagonal.
Special kinds
Diagonal or triangular matrix
If all entries outside the main diagonal are zero, is called a diagonal matrix. If all entries below (resp above) the main diagonal are zero, is called an upper (resp lower) triangular matrix.
Identity matrix
The identity matrix of size n is the n×n matrix in which all the elements on the main diagonal are equal to 1 and all other elements are equal to 0; e.g., the 3×3 identity matrix has rows (1, 0, 0), (0, 1, 0) and (0, 0, 1).
It is a square matrix of order n, and also a special kind of diagonal matrix. The term identity matrix refers to the property of matrix multiplication that I_m A = A I_n = A
for any m×n matrix A.
Invertible matrix and its inverse
A square matrix A is called invertible or non-singular if there exists a matrix B such that AB = BA = I_n.
If B exists, it is unique and is called the inverse matrix of A, denoted A⁻¹.
Symmetric or skew-symmetric matrix
A square matrix A that is equal to its transpose, i.e. Aᵀ = A, is a symmetric matrix. If instead Aᵀ = −A, then A is called a skew-symmetric matrix.
For a complex square matrix A, often the appropriate analogue of the transpose is the conjugate transpose A*, defined as the transpose of the complex conjugate of A. A complex square matrix A satisfying A* = A is called a Hermitian matrix. If instead A* = −A, then A is called a skew-Hermitian matrix.
By the spectral theorem, real symmetric (or complex Hermitian) matrices have an orthogonal (or unitary) eigenbasis; i.e., every vector is expressible as a linear combination of eigenvectors. In both cases, all eigenvalues are real.
Definite matrix
A symmetric n×n matrix A is called positive-definite (respectively negative-definite; indefinite), if for all nonzero vectors x the associated quadratic form given by Q(x) = xᵀAx
takes only positive values (respectively only negative values; both some negative and some positive values). If the quadratic form takes only non-negative (respectively only non-positive) values, the symmetric matrix is called positive-semidefinite (respectively negative-semidefinite); hence the matrix is indefinite precisely when it is neither positive-semidefinite nor negative-semidefinite.
A symmetric matrix is positive-definite if and only if all its eigenvalues are positive. The table at the right shows two possibilities for 2×2 matrices.
Allowing as input two different vectors instead yields the bilinear form associated to A: B_A(x, y) = xᵀAy.
Orthogonal matrix
An orthogonal matrix is a square matrix with real entries whose columns and rows are orthogonal unit vectors (i.e., orthonormal vectors). Equivalently, a matrix A is orthogonal if its transpose is equal to its inverse: Aᵀ = A⁻¹,
which entails AᵀA = AAᵀ = I,
where I is the identity matrix.
An orthogonal matrix A is necessarily invertible (with inverse A⁻¹ = Aᵀ), unitary (A⁻¹ = A*), and normal (A*A = AA*). The determinant of any orthogonal matrix is either +1 or −1. The special orthogonal group consists of the orthogonal matrices with determinant +1.
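A quick numerical check of these properties (an illustrative Python/NumPy sketch; the 2×2 rotation matrix is an assumed example):

```python
import numpy as np

theta = 0.3
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # a 2x2 rotation matrix

# Orthogonality: Q^T equals Q^{-1}, so Q^T Q = I, and det(Q) is +1 or -1.
print(np.allclose(Q.T @ Q, np.eye(2)))            # True
print(np.allclose(Q.T, np.linalg.inv(Q)))         # True
print(np.linalg.det(Q))                           # approximately 1.0
```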
The complex analogue of an orthogonal matrix is a unitary matrix.
Normal matrix
A real or complex square matrix A is called normal if A*A = AA*. If a real square matrix is symmetric, skew-symmetric, or orthogonal, then it is normal. If a complex square matrix is Hermitian, skew-Hermitian, or unitary, then it is normal. Normal matrices are of interest mainly because they include the types of matrices just listed and form the broadest class of matrices for which the spectral theorem holds.
Operations
Trace
The trace tr(A) of a square matrix A is the sum of its diagonal entries. While matrix multiplication is not commutative, the trace of the product of two matrices is independent of the order of the factors: tr(AB) = tr(BA).
This is immediate from the definition of matrix multiplication: tr(AB) = Σ_i Σ_j A_ij B_ji = tr(BA).
Also, the trace of a matrix is equal to that of its transpose, i.e., tr(A) = tr(Aᵀ).
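A minimal numerical check of these two trace identities (an illustrative Python/NumPy sketch, with randomly generated matrices):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

# Matrix multiplication is not commutative, but the trace of a product is
# independent of the order of the factors, and trace is transpose-invariant.
print(np.allclose(np.trace(A @ B), np.trace(B @ A)))   # True
print(np.allclose(np.trace(A), np.trace(A.T)))         # True
```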
Determinant
The determinant det(A) or |A| of a square matrix A is a number encoding certain properties of the matrix. A matrix is invertible if and only if its determinant is nonzero. Its absolute value equals the area (in R²) or volume (in R³) of the image of the unit square (or cube), while its sign corresponds to the orientation of the corresponding linear map: the determinant is positive if and only if the orientation is preserved.
The determinant of a 2×2 matrix with rows (a, b) and (c, d) is given by ad − bc.
The determinant of 3×3 matrices involves 6 terms (rule of Sarrus). The more lengthy Leibniz formula generalizes these two formulae to all dimensions.
The determinant of a product of square matrices equals the product of their determinants: det(AB) = det(A) det(B).
Adding a multiple of any row to another row, or a multiple of any column to another column, does not change the determinant. Interchanging two rows or two columns affects the determinant by multiplying it by −1. Using these operations, any matrix can be transformed to a lower (or upper) triangular matrix, and for such matrices the determinant equals the product of the entries on the main diagonal; this provides a method to calculate the determinant of any matrix. Finally, the Laplace expansion expresses the determinant in terms of minors, i.e., determinants of smaller matrices. This expansion can be used for a recursive definition of determinants (taking as starting case the determinant of a 1×1 matrix, which is its unique entry, or even the determinant of a 0×0 matrix, which is 1), that can be seen to be equivalent to the Leibniz formula. Determinants can be used to solve linear systems using Cramer's rule, where the division of the determinants of two related square matrices equates to the value of each of the system's variables.
Eigenvalues and eigenvectors
A number λ and a non-zero vector v satisfying Av = λv
are called an eigenvalue and an eigenvector of A, respectively. The number λ is an eigenvalue of an n×n matrix A if and only if A − λI_n is not invertible, which is equivalent to det(A − λI_n) = 0.
The polynomial p_A in an indeterminate X given by evaluation of the determinant det(XI_n − A) is called the characteristic polynomial of A. It is a monic polynomial of degree n. Therefore the polynomial equation p_A(λ) = 0 has at most n different solutions, i.e., eigenvalues of the matrix. They may be complex even if the entries of A are real. According to the Cayley–Hamilton theorem, p_A(A) = 0, that is, the result of substituting the matrix itself into its own characteristic polynomial yields the zero matrix.
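The Cayley–Hamilton theorem can be checked numerically; the following is a minimal Python/NumPy sketch (the 2×2 matrix is an assumed example):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

# Characteristic polynomial coefficients of A (highest degree first).
coeffs = np.poly(A)            # for this A: [1, -5, 6], i.e. X^2 - 5X + 6

# Cayley-Hamilton: substituting A into its own characteristic polynomial
# gives the zero matrix: A^2 - 5A + 6I = 0.
p_of_A = coeffs[0] * (A @ A) + coeffs[1] * A + coeffs[2] * np.eye(2)
print(np.allclose(p_of_A, 0))  # True

# The eigenvalues are the roots of the characteristic polynomial: 2 and 3.
print(np.linalg.eigvals(A))
```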
See also
Cartan matrix
Notes
References
External links | Square matrix | [
"Mathematics"
] | 1,558 | [
"Matrices (mathematics)",
"Mathematical objects"
] |
166,029 | https://en.wikipedia.org/wiki/Police%20brutality | Police brutality is the excessive and unwarranted use of force by law enforcement against an individual or a group. It is an extreme form of police misconduct and is a civil rights violation. Police brutality includes, but is not limited to, asphyxiation, beatings, shootings, improper takedowns, racially-motivated violence and unwarranted use of tasers.
History
The first modern police force is widely regarded to be the Metropolitan Police Service in London, established in 1829. However, some scholars argue that early forms of policing began in the Americas as early as the 1500s on plantation colonies in the Caribbean. These slave patrols quickly spread across other regions and contributed to the development of the earliest examples of modern police forces. Early records suggest that labor strikes were the first large-scale incidents of police brutality in the United States, including events like the Great Railroad Strike of 1877, the Pullman Strike of 1894, the 1912 Lawrence textile strike, the Ludlow massacre of 1914, the Great Steel Strike of 1919, and the Hanapepe massacre of 1924.
The term "police brutality" was first used in Britain in the mid-19th century, by The Puppet-Show magazine (a short-lived rival to Punch) in September 1848, when they wrote:
The first use of the term in the American press was in 1872 when the Chicago Tribune reported the beating of a civilian who was under arrest at the Harrison Street Police Station.
In the United States, it is common for marginalized groups to perceive the police as oppressors, rather than protectors or enforcers of the law, due to the statistically disproportionate number of minority incarcerations.
Hubert G. Locke wrote:
In the 1960s, civil rights activists from the Congress of Racial Equality and the Student Nonviolent Coordinating Committee confronted police abuses with sit-ins at precinct stations, pickets outside department headquarters, and by blocking traffic to bring attention to officer misdeeds. In return, activists found themselves the targets of political repression in the form of pervasive police surveillance, infiltration by undercover officers, and retaliatory prosecutions aimed at discrediting their movement. Virtually all civil rights leaders, including Martin Luther King Jr., Ella Baker, James Forman, Fannie Lou Hamer and John Lewis, criticized police brutality in speeches at one point or another.
Sometimes riots, e.g. the 1992 Los Angeles riots, are a reaction to police brutality.
Contemporary examples
Pro-Palestine camps
Berlin, Germany
As of 2024, there have been more protests and actions against the alleged genocide in Palestine. On 8 April 2024, 'Besetzung Gegen Besatzung' ('Occupy Against Occupation'), a pro-Palestinian camp, was set up by activists in front of the Reichstag Building, demanding that the German government stop exporting arms to Israel and stop criminalising solidarity with the Palestinian movement. The camp lasted for two weeks. Tents, an information booth about the history of the genocide, and field kitchens were set up. Protesters were encouraged to sleep over in the camp or return frequently to provide support in numbers, and community activities and workshops were held frequently to boost the sense of community and morale in the camp.
Police violence and brutality were prominent at the camp, with the police forcefully evicting occupants from the Bundestag area on various grounds. The police imposed many restrictions on the camp and added new ones as the camp went on, which made the rules confusing for everyone present; the police also would not provide sufficient information about the restrictions, making it difficult to determine what was prohibited. The restrictions ranged from banning the use of languages other than German or English to criminalizing the songs and materials shared at the camp's workshops. Similar restrictions had previously been seen in wider Berlin society: since early October 2023, the Berlin state authorities have banned Palestinian keffiyeh scarves in schools, on the grounds that they could be a "threat to school peace".
Protesters who did not abide by the rules set by the police could be arrested, resulting in instances of police violence: arresting, shoving, grabbing, and forcefully holding down people who resisted arrest. Police violence was most rampant and visible during the camp's eviction, when police used heavy-handed riot control tactics such as kettling large groups of protesters and eventually arresting a large number of them. In a YouTube video published by MEMO, 'German police violently attack Gaza solidarity camp in Berlin', police were seen grabbing protesters' faces and limbs as they arrested them or attempted to pull them away from where they were standing or sitting. Police were also seen forcibly choking, punching and kicking protesters. Affected protesters reported suffering injuries such as scratches, bleeding from open wounds and broken bones.
Columbia University
A pro-Palestine encampment was started on April 17, 2024, on Columbia University's South Lawn. The university quickly called in the police to clear the encampment, and the New York Police Department arrested 108 individuals while forcibly removing protesters. Officers were seen carrying batons and zip ties for binding the arrestees' hands.
California, USA
A pro-Palestinian encampment was set up at the University of Southern California (USC) and remained in Alumni Park, on USC's campus, for almost two weeks. The university called the Los Angeles police to disperse the camp, which ended in 93 people being arrested. More people and students returned soon after to resist the police's efforts to clear the encampment; the police reported no arrests while clearing the encampment for the second time.
The encampment at the University of California, Los Angeles (UCLA) was met with more violent police brutality. It was reported that more than 200 people were arrested, with many people heavily injured; at one point, a man was struck in the chest with a rubber bullet.
Police brutality in Brazil
Brazil is consistently ranked as one of the most violent countries in the world due to record-breaking homicide numbers each year. The issue is exacerbated by widespread and systemic police brutality, particularly against Black people from the poorest neighborhoods. While there was a noticeable decline in police killings from 2020 to 2022 as a result of government reforms, Brazil's police forces are still responsible for a significant proportion of killings annually. This violence is often justified by authorities as self-defense or part of the ongoing "war on drugs," yet it is frequently carried out with impunity. Reports from human rights organizations have highlighted racism, corruption, a culture of excessive force and retaliation, and a lack of institutional police control mechanisms as key contributors to the persistence of police brutality in Brazil. These structural issues stem from a long-standing system of aggressive social control that started in the colonial era, was reinforced during military dictatorships, and has carried on throughout the ongoing process of democratization in Brazil.
One notable case that brought international attention to police brutality in Brazil occurred in 2020 in the Salgueiro favelas of Rio de Janeiro. João Pedro Matos Pinto, a 14-year-old Black teenager, was shot and killed during a police raid targeting suspected local drug traffickers. João Pedro was at his aunt's house playing with his cousins when police stormed the building and opened fire. The teenager was shot in the abdomen by an assault rifle and sustained fatal injuries. Building on the momentum of global Black Lives Matter protests following the murder of George Floyd by police officers in Minneapolis, widespread anti-police brutality protests were held across the country to call for justice and accountability. Members of the public criticized the police for their reckless tactics and the systemic targeting of Black youth. Despite public outrage, progress has been slow-moving, with many similar cases remaining unsolved.
Causes
Hard on drugs campaigns
In nations with a reputation for having a high number of drug-related issues, including gang violence, drug trafficking, and overdose deaths, one common solution that governments enact is a collective campaign against drugs that spans the entirety of the state's establishment. Changes to address these issues encompass education, bureaucracy, and, most notably, law enforcement policy and tactics. Law enforcement agencies expand and receive more funding to attack drug problems in communities. Acceptance of harsher policing tactics grows as well, as a more aggressive philosophy develops within the law enforcement community and local police forces become increasingly militarized. However, many studies have concluded that these efforts are in vain, as the drug market has grown in such nations despite anti-drug policies. For example, in the United States, critics of the War on Drugs waged by the government have been very vocal about the ineffectiveness of the policy, citing an increase in drug-related crimes and overdoses since President Nixon first introduced this policy.
Legal system
A type of government failure that can result in the normalization of police brutality is a lack of accountability and repercussions for officers mistreating civilians. While it is currently commonplace for civilians to hold officers accountable by recording them, the actual responsibility of police oversight rests heavily on the criminal justice system of a given nation, as police represent the enforcement of the law. One method of increasing police accountability that has become more common is the employment of body cameras as a part of police uniforms. However, the effectiveness of body cameras has been called into question due to the lack of transparency shown in police brutality cases where the footage is withheld from the public. In many cases of police brutality, the criminal justice system has no policy in place to condemn or prohibit police brutality. Certain nations have laws that permit lawful, violent treatment of civilians, like qualified immunity, which protects officers from being sued for their use of violence if their actions can be justified under the law.
Police officers are legally permitted to use force. Jerome Herbert Skolnick writes in regards to dealing largely with disorderly elements of the society, "some people working in law enforcement may gradually develop an attitude or sense of authority over society, particularly under traditional reaction-based policing models; in some cases, the police believe that they are above the law."
There are many reasons why police officers can sometimes be excessively aggressive. It is thought that psychopathy makes some officers more inclined to use excessive force than others. In one study, police psychologists surveyed officers who had used excessive force. The information obtained allowed the researchers to develop five unique types of officers, only one of which was similar to the "bad apple" stereotype. These include officers with personality disorders; officers with previous traumatic job-related experience; young, inexperienced, or authoritarian officers; officers who learn inappropriate patrol styles; and officers with personal problems. Scrivner categorized these groups and identified the one most likely to use excessive force. However, this "bad apple paradigm" is considered by some to be an "easy way out". A broad report commissioned by the Royal Canadian Mounted Police (RCMP) on the causes of misconduct in policing calls it "a simplistic explanation that permits the organization and senior management to blame corruption on individuals and individual faults (behavioural, psychological, background factors, and so on), rather than addressing systemic factors." The report goes on to discuss the systemic factors, which include:
Pressures to conform to certain aspects of "police culture", such as the Blue Code of Silence, which can "sustain an oppositional criminal subculture protecting the interests of police who violate the law" and a "'we-they' perspective in which outsiders are viewed with suspicion or distrust";
Command and control structures with a rigid hierarchical foundation ("results indicate that the more rigid the authoritarian hierarchy, the lower the scores on a measure of ethical decision-making" concludes one study reviewed in the report); and
Deficiencies in internal accountability mechanisms (including internal investigation processes).
The use of force by police officers is not kept in check in many jurisdictions by the issuance of a use of force continuum, which describes levels of force considered appropriate in direct response to a suspect's behavior. This power is granted by the government, with few if any limits set out in statutory law as well as common law.
Violence used by police can be excessive despite being lawful, especially in the context of political repression. In this sense, "police brutality" refers to violence used by the police to achieve politically desirable ends (terrorism) when, according to widely held values and cultural norms in the society, no force should be used at all, rather than to excessive violence used where at least some force may be considered justifiable.
Studies show that there are officers who believe the legal system they serve is failing and that they must pick up the slack. This is known as "vigilantism": the officer involved may think the suspect deserves more punishment than the courts would impose.
During high-speed pursuits of suspects, officers can become angry and filled with adrenaline, which can affect their judgment when they finally apprehend the suspect. The resulting loss of judgment and heightened emotional state can result in inappropriate use of force. The effect is colloquially known as "high-speed pursuit syndrome".
Global prevalence
The Amnesty International 2007 report on human rights also documented widespread police misconduct in many other countries, especially countries with authoritarian regimes.
In the UK, the report into the death of New Zealand teacher and anti-racism campaigner Blair Peach in 1979 was published on the Metropolitan Police website on 27 April 2010. It concluded that Peach was killed by a police officer, but that the other police officers in the same unit had obstructed the inquiry by lying to investigators, making it impossible to identify the actual killer.
In the UK, Ian Tomlinson was filmed by an American tourist being hit with a baton and pushed to the ground as he was walking home from work during the 2009 G-20 London summit protests. Tomlinson then collapsed and died. The officer who allegedly assaulted Tomlinson was arrested on suspicion of manslaughter but released without charge; he was later dismissed for gross misconduct.
In the UK, in 2005, Jean Charles de Menezes, a young Brazilian man, was shot dead by Metropolitan Police officers at Stockwell Underground station in London after being mistakenly identified as a terrorism suspect.
In Serbia, police brutality occurred in numerous cases during protests against Slobodan Milošević, and has also been recorded at protests against governments since Milošević lost power. The most recent case was recorded in July 2010, when five people, including two girls, were arrested, handcuffed, beaten with clubs, and mistreated for an hour. Security camera recordings of the beating were obtained by the media and caused public outrage when released. Police officials, including Ivica Dačić, the Serbian minister of internal affairs, denied this sequence of events and accused the victims of having "attacked the police officers first". He also publicly stated that "police [aren't] here to beat up citizens", but that it is known "what one is going to get when attacking the police".
Episodes of police brutality in India include the Rajan case and the deaths of Udayakumar and Sampath.
Episodes of police violence against peaceful demonstrators occurred during the 2011 Spanish protests. On 4 August 2011, Gorka Ramos, a journalist for Lainformacion, was beaten by police and arrested while covering 15-M protests near the Interior Ministry in Madrid. A freelance photographer, Daniel Nuevo, was beaten by police while covering demonstrations against the Pope's visit in August 2011.
In Brazil, incidents of police violence are well documented, and the country has one of the highest rates of police brutality in the world today.
South Africa has experienced incidents of police brutality from the apartheid era to the present day, though police violence is not as prevalent as it was during the apartheid years.
Investigation
In England and Wales, an independent organization known as the Independent Police Complaints Commission (IPCC) investigates reports of police misconduct. It automatically investigates any deaths caused by, or thought to be caused by, police action.
A similar body known as the Police Investigations and Review Commissioner (PIRC) operates in Scotland. In Northern Ireland, the Police Ombudsman for Northern Ireland has a similar role to that of the IPCC and PIRC.
In Africa, two such bodies exist: one in South Africa and another in Kenya, the Independent Policing Oversight Authority.
In the United States, more police officers have worn body cameras since the shooting of Michael Brown. The US Department of Justice has called on police departments across the nation to adopt body cameras so that incidents can be investigated more thoroughly.
Measurement
Police brutality is measured based on the accounts of people who have experienced or seen it, as well as the juries who are present for trials involving police brutality cases, as there is no objective method to quantify the use of excessive force for any particular situation.
In addition, police brutality may be captured by body cameras worn by police officers. Body cameras could be a tool against police brutality, both by deterring it and by increasing accountability. However, according to Harlan Yu, executive director of Upturn, for this to happen they need to be embedded in a broader change in culture and legal framework. In particular, the public's ability to access body camera footage can be an issue.
In 1985, only one out of five people thought that police brutality was a serious problem. Perceptions of police brutality are relative to the situation, for example whether the suspect was resisting. Of the people surveyed about their experience of police brutality in 2008, only about 12 percent felt that they had been resisting. Although the use of force itself cannot be objectively quantified, opinions about brutality among various races, genders, and ages can be measured. African Americans, women, and younger people are more likely to have negative opinions about the police than Caucasians, men, and middle-aged to elderly individuals.
Independent oversight
Various community groups have criticized police brutality. These groups often stress the need for oversight by independent civilian review boards and other methods of ensuring accountability for police action.
Umbrella organizations and justice committees usually support those affected. Amnesty International is a non-governmental organization focused on human rights with over three million members and supporters around the world. The stated objective of the organization is "to conduct research and generate action to prevent and end grave abuses of human rights, and to demand justice for those whose rights have been violated".
Tools used by these groups include video recordings, which are sometimes broadcast using websites such as YouTube.
Civilians have begun independent projects to monitor police activity to try to reduce violence and misconduct. These are often called "Cop Watch" programs.
See also
Authoritarian personality
Civil liberties
Civil rights
Death squad
International Day Against Police Brutality (15 March)
Law enforcement agency
Law enforcement and society
Legal observer
List of cases of police brutality
List of unarmed African Americans killed by law enforcement officers in the United States
List of killings by law enforcement officers in the United States
List of killings by law enforcement officers in Canada
Militarization of police
Orwellian
Photography is Not a Crime
Police misconduct
Police riot
Prisoner abuse
Rough ride
Suicide by cop
Use of force continuum
US specific
Christopher Commission
Copwatch
Pitchess motion
Police brutality in the United States
References
Further reading
External links
Police Violence
Police Brutality Statistics
Worldwide Police Brutalities archive
Names of Victims of Police Brutality In Canada
Policing the Police: Civilian Video Monitoring of Police Activity
To Protect and Serve?: Five Decades of Posters Protesting Police Violence
1870s neologisms
Human rights abuses
Police brutality
Political repression
Torture
Violence
Articles containing video clips | Police brutality | ["Biology"] | 3,991 | ["Behavior", "Aggression", "Human behavior", "Violence"] |
166,035 | https://en.wikipedia.org/wiki/Shame | Shame is an unpleasant self-conscious emotion often associated with negative self-evaluation; motivation to quit; and feelings of pain, exposure, distrust, powerlessness, and worthlessness.
Definition
Shame is a discrete, basic emotion, described as a moral or social emotion that drives people to hide or deny their wrongdoings.
Moral emotions are emotions that influence a person's decision-making and monitor different social behaviors. The focus of shame is on the self or the individual with respect to a perceived audience. It can bring about profound feelings of deficiency, defeat, inferiority, unworthiness, or self-loathing. Our attention turns inward; we isolate from our surroundings and withdraw into closed-off self-absorption. Not only do we feel alienated from others, but also from the healthy parts of ourselves. Alienated from the world, we are left with painful emotions, self-deprecating thoughts, and inner anguish.
Empirical research demonstrates that it is dysfunctional at both the individual and the group level. Shame can also be described as an unpleasant self-conscious emotion that involves negative evaluation of the self. Shame can be a painful emotion seen as a "...comparison of the self's action with the self's standards...", but it may equally stem from comparison of the self's state of being with the ideal social context's standard. According to Neda Sedighimornani, shame is relevant in several psychological disorders, such as depression, social phobia, and some eating disorders.
Some shame scales are used to assess emotional states, whereas others assess emotional traits or dispositions (shame proneness). "To shame" generally means to actively assign or communicate a state of shame to another person. Behaviors designed to "uncover" or "expose" others are sometimes used to place shame on the other person. Having shame, by contrast, means maintaining a sense of restraint against offending others (as with modesty, humility, and deference). To have no shame is to behave without such restraint, offending others, in a manner associated with emotions such as pride or hubris.
Identification and self-evaluation
Nineteenth-century scientist Charles Darwin described shame affect in the physical form of blushing, confusion of mind, downward cast eyes, slack posture, and lowered head; Darwin noted these observations of shame affect in human populations worldwide, as mentioned in his book "The Expression of the Emotions in Man and Animals". Darwin also mentions how the sense of warmth or heat, associated with the vasodilation of the face and skin, can result in an even greater sense of shame. More commonly, the act of crying can be associated with shame.
When people feel shame, the focus of their evaluation is on the self or identity. Shame is a self-punishing acknowledgment of something gone wrong. It is associated with "mental undoing". Studies of shame show that when ashamed people feel that their entire self is worthless, powerless, and small, they also feel exposed to an audience—real or imagined—that exists purely for the purpose of confirming that the self is worthless. Shame may attach to the sense of self when a person is stigmatized or treated unfairly, such as being overtly rejected by parents in favor of a sibling's needs. An individual in a state of shame may assign the shame internally, as a victim of their environment, or the shame may be assigned externally by others, regardless of the individual's own experience or awareness.
A "sense of shame" is the feeling known as guilt but "consciousness" or awareness of "shame as a state" or condition defines core/toxic shame (Lewis, 1971; Tangney, 1998). The person experiencing shame might not be able to, or perhaps simply will not, identify their emotional state as shame, and there is an intrinsic connection between shame and the mechanism of denial. " The key emotion in all forms of shame is contempt (Miller, 1984; Tomkins, 1967). Two realms in which shame is expressed are the consciousness of self as bad and self as inadequate. People employ negative coping responses to counter deep rooted, associated sense of "shameworthiness". The shame cognition may occur as a result of the experience of shame affect or, more generally, in any situation of embarrassment, dishonor, disgrace, inadequacy, humiliation, or chagrin.
The dynamics of shame and devaluation appear to be consistent across cultures. This has led some researchers to propose the existence of a universal human psychology related to how we assign value and worth, both to ourselves and to others.
Behavioural expression
Physiological symptoms caused by the autonomic nervous system include blushing, perspiration, dizziness, or nausea. A feeling of paralysis, numbness, or loss of muscle tone might set in, making it difficult to think, act, or talk. Children often visibly slump and hang their head. In an effort to hide this reaction, adults are more likely to laugh, stare, avoid eye contact, freeze their face, tighten their jaw, or show a look of contempt. In another's presence, there is a feeling of being strange, naked, transparent, or exposed, as if wanting to disappear or hide.
The Shame Code was developed to capture behavior as it unfolds in real time during the socially stressful and potentially shaming spontaneous speech task and was coded into the following categories: (1) Body Tension, (2) Facial Tension, (3) Stillness, (4) Fidgeting, (5) Nervous Positive Affect, (6) Hiding and Avoiding, (7) Verbal Flow and Uncertainty, and (8) Silence. Shame tendencies were associated with more fidgeting and less freezing, but both stillness and fidgeting were social cues that convey distress to the observer and may elicit less harsh responses. Thus, both may be an attempt to diminish further shaming experiences. Shame involves global, self-focused negative attributions based on the anticipated, imagined, or real negative evaluations of others and is accompanied by a powerful urge to hide, withdraw, or escape from the source of these evaluations. These negative evaluations arise from transgressions of standards, rules, or goals and cause the individual to feel separate from the group for which these standards, rules, or goals exist, resulting in one of the most powerful, painful, and potentially destructive experiences known to humans.
Comparison with other emotions
Distinguishing between shame, guilt, and embarrassment can be challenging. They are all similar reactions or emotions in the fact that they are self-conscious, "implying self-reflection and self-evaluation."
Comparison with guilt
According to cultural anthropologist Ruth Benedict, shame arises from a violation of cultural or social values, while guilt arises from violations of one's internal values. Thus shame arises when one's 'defects' are exposed to others and results from the negative evaluation (whether real or imagined) of others; guilt, on the other hand, comes from one's own negative evaluation of oneself, for instance, when one acts contrary to one's values or idea of one's self. Shame is attributed more to internal characteristics, and guilt more to behavioral ones. Thus, it might be possible to feel ashamed of a thought or behavior that no one actually knows about (because one fears what others would find out) and, conversely, to feel guilty about an act that gains the approval of others.
Psychoanalyst Helen B. Lewis argued that, "The experience of shame is directly about the self, which is the focus of evaluation. In guilt, the self is not the central object of negative evaluation, but rather the thing done is the focus." Similarly, Fossum and Mason say in their book Facing Shame that "While guilt is a painful feeling of regret and responsibility for one's actions, shame is a painful feeling about oneself as a person."
Following this line of reasoning, Psychiatrist Judith Lewis Herman concludes that "Shame is an acutely self-conscious state in which the self is 'split,' imagining the self in the eyes of the other; by contrast, in guilt the self is unified."
Clinical psychologist Gershen Kaufman's view of shame is derived from that of affect theory, namely that shame is one of a set of instinctual, short-duration physiological reactions to stimulation. In this view, guilt is seen as a learned behavior consisting primarily of self-directed blame or contempt, and the shame that results from this behavior, making up a part of the overall experience of guilt. Here, self-blame and self-contempt mean the application, towards (a part of) one's self, of exactly the same dynamic that blaming of, and contempt for, others represents when it is applied interpersonally.
Kaufman saw that mechanisms such as blame or contempt may be used as a defending strategy against the experience of shame and that someone who has a pattern of applying them to himself may well attempt to defend against a shame experience by applying self-blame or self-contempt. This, however, can lead to an internalized, self-reinforcing sequence of shame events for which Kaufman coined the term "shame spiral". Shame can also be used as a strategy when feeling guilty, especially when the hope is to avoid punishment by inspiring compassion.
Comparison with embarrassment
One view of difference between shame and embarrassment says that shame does not necessarily involve public humiliation while embarrassment does; that is, one can feel shame for an act known only to oneself but to be embarrassed one's actions must be revealed to others. In the field of ethics (moral psychology, in particular), however, there is debate as to whether or not shame is a heteronomous emotion, i.e., whether or not shame does involve recognition on the part of the ashamed that they have been judged negatively by others.
Another view of the dividing line between shame and embarrassment holds that the difference is one of intensity. In this view embarrassment is simply a less intense experience of shame. It is adaptive and functional. Extreme or toxic shame is a much more intense experience and one that is not functional. In fact, according to this view, toxic shame can be debilitating. The dividing line then is between functional and dysfunctional shame. This includes the idea that shame has a function or benefit for the organism.
Immanuel Kant and his followers held that shame is heteronomous (comes from others); Bernard Williams and others have argued that shame can be autonomous (comes from oneself). Shame may carry the connotation of a response to something that is morally wrong whereas embarrassment is the response to something that is morally neutral but socially unacceptable. Another view of shame and guilt is that shame is a focus on self, while guilt is a focus on behavior. Simply put: A person who feels guilt is saying "I did something bad.", while someone who feels shame is saying "I am bad".
Embarrassment has occasionally been viewed as a less severe or intense form of shame, varying on aspects such as intensity, the physical reaction of the person, or the size of the social audience present, but it is distinct from shame in that it involves a focus on the self presented to an audience rather than on the entire self. It is experienced as a sense of fluster and slight mortification resulting from a social awkwardness that leads to a loss of esteem in the eyes of others. Embarrassment has been characterized as a sudden-onset sense of fluster and mortification that results when the self is evaluated negatively because one has committed, or anticipates committing, a gaffe or awkward performance before an audience. Because embarrassment is focused on the presented self rather than the entire self, those who become embarrassed apologize for their mistake and then begin to repair things, and this repair involves redressing harm done to the presented self.
On the first view described above, shame can be experienced entirely in private, whereas embarrassment cannot. Within the debate over whether shame is heteronomous, a mature heteronomous type of shame has also been described, in which the agent does not judge herself negatively but, because of the negative judgments of others, suspects that she may deserve negative judgment and feels shame on that basis.
Subtypes of shame
Robert Karen's types of shame
Psychologist Robert Karen identified four categories of shame: existential, situational, class, and narcissistic. Existential shame occurs when we become aware of an objective, unpleasant truth about ourselves or our situation. Situational shame is the feeling we have when violating an ethical principle, interpersonal boundary, or cultural norm. Class shame relates to social power and pertains to skin color, social class, ethnic background, and gender, and occurs in societies that have rigid caste stratifications or disparate classes. Narcissistic shame occurs when our self-image and pride are wounded, affecting how we feel and think about ourselves as individuals rather than as members of a group.
Joseph Burgo's shame paradigms
There are many different reasons that people might feel shame. According to Joseph Burgo, there are four different aspects of shame. He calls these aspects of shame paradigms.
Unrequited love: "Unreciprocated love that causes yearning for more complete love."
Unwanted exposure: "Something personal that we would like to keep private is unexpectedly revealed, or we make a mistake in a public setting."
Disappointed expectation: "The feeling of dissatisfaction that follows the failure of expectations or hopes to manifest."
Exclusion: Being left out of connection or involvement with others or groups that we would like to belong to.
The first paradigm Burgo examines is unrequited love, which occurs when one loves someone who does not reciprocate, or when one is rejected by somebody one likes; this can be mortifying and shaming. Unrequited love can appear in other forms as well, for example in the way a mother treats her newborn baby. In an experiment called "The Still Face Experiment", a mother showed her baby love and talked to the baby for a set period of time, then went a few minutes without responding to or talking to the baby. The baby made different expressions to try to regain the mother's attention, and when the mother withheld that attention, the baby showed shame. According to research on unrequited love, people tend to date others who are similar in attractiveness, leaving those who are less attractive to feel an initial disappointment that creates a type of unrequited love. The second paradigm is unwanted exposure, which takes place when one is called out in front of a whole class for doing something wrong, or is seen doing something one did not want others to see; this is what people commonly think of when they hear the word shame. The third is disappointed expectation, such as failing a class, having a friendship go wrong, or not getting an expected promotion at work. The fourth and final paradigm is exclusion, or being left out; many people will do almost anything just to fit in or belong, whether at school, at work, or in friendships and relationships.
Other subtypes
Addiction shaming
Age shaming
Bottom shaming
Genuine shame: is associated with genuine dishonor, disgrace, or condemnation.
False shame: is associated with false condemnation as in the double bind form of false shaming; "he brought what we did to him upon himself". Author and TV personality John Bradshaw calls shame the "emotion that lets us know we are finite".
Fat shaming
Femme shaming
Food shaming
Secret shame: describes the idea of being ashamed to be ashamed, so causing ashamed people to keep their shame a secret. Psychiatrist James Gilligan discovered, while working as a prison psychiatrist, that violence is primarily caused by secret shame. Gilligan stated, "...so intense and so painful that it threatens to overwhelm him and bring about the death of the self, cause him to lose his mind, his soul, or his sacred honor"
Internalized shame: the concept of internalized shame was first described by Gershen Kaufman. In contrast to an acute, short-lived experience of shame, internalized shame reflects deep-seated beliefs of inadequacy that feel permanent and irreversible and are accompanied by words, voices, and images. Internalized shame stems from chronic or less frequent but severe experiences of shame occurring with prior trauma or in childhood. It can take over a child's emotions and identity and continue into adulthood, or it may gradually increase over time. Once internalized, the original shaming event(s) and beliefs need not be recalled or conscious. Later experiences of shame are intensified and last longer; they do not require an external event or another person to trigger the associated feelings and thoughts, and they can cause depression and feelings of hopelessness and despair. Internalized shame also causes "shame anxiety", which makes people apprehensive about experiencing shame.
Identity shaming
Kink shaming
Online shaming
Social media shaming
Slut-shaming
Tech shame: describes the shame that employees, particularly younger workers, feel when they have challenges utilizing technology at work.
Toxic shame: describes false, pathological shame. The term was coined by Silvan Tomkins in the early 1960s. John Bradshaw states that toxic shame is induced, inside children, by all forms of child abuse. Incest and other forms of child sexual abuse can cause particularly severe toxic shame. Toxic shame often induces what is known as complex trauma in children who cannot cope with toxic shaming as it occurs and who dissociate the shame until it is possible to cope with it.
Vicarious shame: refers to the experience of shame on behalf of another person. Individuals vary in their tendency to experience vicarious shame, which is related to neuroticism and to the tendency to experience personal shame. Extremely shame-prone people might experience vicarious shame to an increased degree, that is, shame on behalf of another person who is already feeling shame on behalf of a third party (or possibly on behalf of the individual proper).
Victim shaming
Shame and mental illness
Narcissism
It has been suggested that narcissism in adults is related to defenses against shame and that narcissistic personality disorder (NPD) is connected to shame as well. According to psychiatrist Glen Gabbard, NPD can be broken down into two subtypes: a grandiose, arrogant, thick-skinned "oblivious" subtype and an easily hurt, oversensitive, ashamed "hypervigilant" subtype. The oblivious subtype presents a grandiose self for admiration, envy, and appreciation, the antithesis of a weak internalized self that hides in shame, while the hypervigilant subtype neutralizes devaluation by seeing others as unjust abusers.
Depression
Another form of mental illness in which shame is one of the most notable symptoms is depression. A meta-analytic review performed in 2011 found stronger associations between shame and depression than between guilt and depression. External shame, a negative view of the self as seen through other people's eyes, had larger effect sizes in relation to depression than internal shame did. The degree of shame symptoms in depression also varies across cultures, and those who show greater shame symptoms in depression tend to come from particular socio-economic and cultural contexts.
Social aspects
According to the anthropologist Ruth Benedict, cultures may be classified by their emphasis on the use of either shame (a shame society) or guilt to regulate the social activities of individuals.
Shame may be used by those people who commit relational aggression and may occur in the workplace as a form of overt social control or aggression. Shaming is used in some societies as a type of punishment, shunning, or ostracism. In this sense, "the real purpose of shaming is not to punish crimes but to create the kind of people who don't commit them".
Stigma
In 1963, Erving Goffman published Stigma: Notes on the Management of Spoiled Identity. For Goffman, the condition in which a particular person is excluded from full social acceptance is greatly discrediting. This negative evaluation may be "felt" or "enacted". Thus, stigma can occur when society labels someone as tainted, less desirable, or handicapped. When felt, it refers to the shame associated with having a condition and the fear of being discriminated against; when enacted, it refers to actual discrimination of this kind. Studies of shame in relation to stigma have often focused on the psychological consequences faced by young adolescents deciding whether to use a condom for protection against STDs or HIV. Stigma and shame are also relevant when someone has a disease, such as cancer, and looks for something to blame for their feelings of shame and their circumstance of sickness. Jessica M. Sales et al. asked young adolescents aged 15–21 whether they had used protection in the 14 days prior to coming in for the study. The answers were scored for indications of shame and stigma. These scores, prior history of STDs, demographics, and psychosocial variables were entered into a hierarchical regression model to estimate the probability that an adolescent would practice protected sex in the future. The study found that the higher the sense of shame and stigma, the higher the chance that the adolescent would use protection in the future. In other words, people who are more aware of the consequences and more attuned to themselves and to the stigma (stereotypes, disgrace, and so on) are more likely to protect themselves from the consequences of unprotected sex.
People born with HIV through mother-to-child transmission show a proneness to shame and avoidant coping related to HIV stigma. David S. Bennett et al. studied individuals aged 12–24 using self-reported measures of potential risk factors and three domains of internalizing symptoms: depression, anxiety, and PTSD. The findings suggested that those with greater shame-proneness and greater awareness of HIV stigma had more depressive and PTSD symptoms. This suggests that those with high HIV stigma and shame do not seek help from interventions; rather, they avoid situations that could leave them facing other mental health issues. Older age was related to greater HIV-related stigma, and female participants showed more stigma and internalizing symptoms (depression, anxiety, PTSD). Stigma was also associated with greater shame-proneness.
Chapple et al. researched people with lung cancer with regard to the shame and stigma that come from the disease. The stigma attached to lung cancer arises most commonly from its association with smoking. However, lung cancer has many causes, so those who did not develop the disease from smoking often feel shame at being blamed for something they did not do. The stigma affects patients' opinions of themselves, while shame may lead them to blame other cancer-causing factors (tobacco products or anti-tobacco products) or to ignore the disease altogether through avoidant coping. The stigma associated with lung cancer affected patients' relationships with family members, peers, and physicians who were attempting to provide comfort, because the patients felt shame and saw themselves as victims.
Shame campaign
A shame campaign is a tactic in which particular individuals are singled out because of their behavior or suspected crimes, often by marking them publicly, such as Hester Prynne in Nathaniel Hawthorne's The Scarlet Letter. In the Philippines, Alfredo Lim popularized such tactics during his term as mayor of Manila. On July 1, 1997, he began a controversial "spray paint shame campaign" in an effort to stop drug use. He and his team sprayed bright red paint on two hundred squatter houses whose residents had been charged, but not yet convicted, of selling prohibited substances. Officials of other municipalities followed suit. Former Senator Rene A. Saguisag condemned Lim's policy. Communists in the 20th century used struggle sessions to handle corruption and other problems.
Public humiliation, historically expressed by confinement in stocks and other public punishments, may now occur on social media through viral phenomena.
Research
Psychologists and other researchers who study shame use validated psychometric testing instruments to determine whether or how much a person feels shame. Some of these tools include the Guilt and Shame Proneness (GASP) Scale, the Shame and Stigma Scale (SSS), the Experience of Shame Scale, and the Internalized Shame Scale. Some scales are specific to the person's situation, such as the Weight- and Body-Related Shame and Guilt scale (WEB-SG), the HIV Stigma Scale for people living with HIV and the Cataldo Lung Cancer Stigma Scale (CLCSS) for people with lung cancer. Others are more general, such as the Emotional Reactions and Thoughts Scale, which deals with anxiety, depression, and guilt as well as shame.
Treatments
There has been little research on treatment options for shame and for people who experience this negative, despairing emotion. Different scientific approaches to treatment have been put forward, drawing on psychodynamic and cognitive-behavioral precepts. The effectiveness of these approaches is not well established because the relevant studies have not been conducted or examined in depth. One example of treatment for shame combines group-based CBT with Compassion Focused Therapy, which patients report has helped them feel connected and encouraged to overcome difficult challenges related to shame.
Empathy
Brené Brown explains, using the metaphor of a petri dish, that shame needs only three things to grow: secrecy, silence, and judgement. Shame cannot grow or thrive in the context, or supportive environment, of empathy. When reaching out to share a story or experience, it is therefore important to choose people who have earned the right to hear it (someone trustworthy) and with whom we have a relationship that can bear the weight of the story.
See also
Acquiescence
Badge of shame
Cognitive dissonance
Haya (Islam)
Lady Macbeth effect
Online shaming
Psychological projection
Reintegrative shaming
Scopophobia
So You've Been Publicly Shamed, a 2015 book by journalist Jon Ronson about online shaming
References
Further reading
External links
Brene Brown Listening to Shame, TED Talk, March 2012
Hiding from Humanity: Disgust, Shame, and the Law
Humiliation is Simply Wrong (USA Today Editorial/Opinion)
Sexual Guilt and Shame
Shame
Shame and Group Psychotherapy
Shame and Psychotherapy
Social usage of shame in historical times
Understanding Shame and Humiliation in Torture
US Forces Make Iraqis Strip and Walk Naked in Public
Emotions
Moral psychology
Narcissism
Shame | Shame | ["Biology"] | 5,664 | ["Behavior", "Narcissism", "Human behavior"] |
166,072 | https://en.wikipedia.org/wiki/Sound%20barrier | The sound barrier or sonic barrier is the large increase in aerodynamic drag and other undesirable effects experienced by an aircraft or other object when it approaches the speed of sound. When aircraft first approached the speed of sound, these effects were seen as constituting a barrier, making faster speeds very difficult or impossible. The term sound barrier is still sometimes used today to refer to aircraft approaching supersonic flight in this high drag regime. Flying faster than sound produces a sonic boom.
In dry air at 20 °C (68 °F), the speed of sound is 343 metres per second (about 767 mph, 1,234 km/h or 1,125 ft/s). The term came into use during World War II when pilots of high-speed fighter aircraft experienced the effects of compressibility, a number of adverse aerodynamic effects that deterred further acceleration, seemingly impeding flight at speeds close to the speed of sound. These difficulties represented a barrier to flying at faster speeds. In 1947, American test pilot Chuck Yeager demonstrated that safe flight at the speed of sound was achievable in purpose-designed aircraft, thereby breaking the barrier. By the 1950s, new designs of fighter aircraft routinely reached the speed of sound, and faster.
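The figure quoted above can be reproduced from the ideal-gas relation c = sqrt(gamma * R * T). The following is a minimal sketch, assuming standard textbook constants for dry air rather than anything stated in this article:

```python
import math

def speed_of_sound(temp_celsius: float) -> float:
    """Speed of sound in dry air modeled as an ideal gas: c = sqrt(gamma * R_specific * T)."""
    gamma = 1.4          # ratio of specific heats for dry air (assumed)
    r_specific = 287.05  # specific gas constant for dry air, J/(kg*K) (assumed)
    return math.sqrt(gamma * r_specific * (temp_celsius + 273.15))

print(f"{speed_of_sound(20.0):.1f} m/s")  # prints roughly 343.2 m/s
```

At 20 °C this gives about 343 m/s, matching the value above; the speed rises with the square root of the absolute temperature.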
History
Some common whips such as the bullwhip or stockwhip are able to move faster than sound: the tip of the whip exceeds this speed and causes a sharp crack—literally a sonic boom. Firearms made after the 19th century generally have a supersonic muzzle velocity.
The sound barrier may have been first breached by living beings about 150 million years ago. Some paleobiologists report that computer models of their biomechanical capabilities suggest that certain long-tailed dinosaurs such as Brontosaurus, Apatosaurus, and Diplodocus could flick their tails at supersonic speeds, creating a cracking sound. This finding is theoretical and disputed by others in the field. Meteorites in the Earth's upper atmosphere usually travel at higher than Earth's escape velocity, which is much faster than sound.
Early problems
The existence of the sound barrier was evident to aerodynamicists before any direct in-aircraft evidence was available. In particular, the very simple theory of thin airfoils at supersonic speeds produced a curve that went to infinite drag at Mach 1, dropping with increasing speed. This could be seen in tests using projectiles fired from guns, a common method for checking the stability of various projectile shapes. As the projectile slowed from its initial speed and began to approach the speed of sound, it would undergo a rapid increase in drag and slow much more rapidly. It was understood that the drag did not actually become infinite, or it would be impossible for the projectile to exceed Mach 1 in the first place, but there was no better theory, and the data matched the theory to some degree. At the same time, ever-increasing wind tunnel speeds were showing a similar effect as one approached Mach 1 from below. In this case, however, there was no theoretical development that suggested why this might be. What was noticed was that the increase in drag was not smooth; it had a distinct "corner" where it began to rise suddenly. This speed was different for different wing planforms and cross sections, and became known as the "critical Mach" number.
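For illustration only, the "infinite drag at Mach 1" behavior of the simple supersonic thin-airfoil theory can be sketched with the linearized (Ackeret) flat-plate wave-drag term, 4*alpha^2 / sqrt(M^2 - 1), which grows without bound as Mach 1 is approached from above. The 2-degree angle of attack below is an arbitrary illustrative value, not a figure from this article:

```python
import math

def ackeret_wave_drag(mach: float, alpha_rad: float) -> float:
    """Linearized supersonic (Ackeret) wave-drag coefficient for a flat plate
    at angle of attack alpha; the relation is only meaningful for Mach > 1."""
    if mach <= 1.0:
        raise ValueError("linearized supersonic theory applies only above Mach 1")
    return 4.0 * alpha_rad ** 2 / math.sqrt(mach ** 2 - 1.0)

alpha = math.radians(2.0)  # hypothetical 2-degree angle of attack
for m in (1.05, 1.2, 1.5, 2.0, 3.0):
    print(f"M = {m:4.2f}  Cd_wave = {ackeret_wave_drag(m, alpha):.4f}")
```

The printed values fall as the Mach number increases, which is the "dropping with increasing speed" shape of the curve described above.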
According to British aerodynamicist W. F. Hilton, of Armstrong Whitworth Aircraft, the term itself was created accidentally. He was giving demonstrations at the annual show day at the National Physical Laboratory in 1935, where he showed a chart of wind tunnel measurements comparing the drag of a wing with the velocity of the air. During these explanations he would state "See how the resistance of a wing shoots up like a barrier against higher speed, as we approach the speed of sound." The next day, the London newspapers were filled with statements about a "sound barrier." Whether or not this is the first use of the term is debatable, but by the 1940s use within the industry was already common.
By the late 1930s, one practical outcome of this was becoming clear. Although aircraft were still operating well below Mach 1, generally half that at best, their engines were rapidly pushing past 1,000 hp. At these power levels, the traditional two-bladed propellers were clearly showing rapid increases in drag. The tip speed of a propeller blade is a function of the rotational speed and the length of the blade. As the engine power increased, longer blades were needed to apply this power to the air while operating at the most efficient RPM of the engine. The velocity of the air is also a function of the forward speed of the aircraft. When the aircraft speed is high enough, the tips reach transonic speeds. Shock waves form at the blade tips and sap the shaft power driving the propeller. To maintain thrust, the engine power must replace this loss, and must also match the aircraft drag as it increases with speed. The required power is so great that the size and weight of the engine becomes prohibitive. This speed limitation led to research into jet engines, notably by Frank Whittle in England and Hans von Ohain in Germany. This also led to propellers with ever-increasing numbers of blades, three, four and then five were seen during the war. As the problem became better understood, it also led to "paddle bladed" propellers with increased chord, as seen (for example) on late-war models of the Republic P-47 Thunderbolt.
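To make the tip-speed arithmetic concrete, the helical tip Mach number combines the rotational tip speed with the aircraft's forward speed. The sketch below uses hypothetical numbers (propeller diameter, RPM, and airspeed) chosen only to illustrate the calculation, not data from any aircraft named in this article:

```python
import math

def helical_tip_mach(diameter_m: float, rpm: float, forward_speed_ms: float,
                     speed_of_sound_ms: float = 343.0) -> float:
    """Mach number of a propeller blade tip, combining rotational tip speed
    (pi * D * revs_per_second) with the aircraft's forward speed."""
    rotational = math.pi * diameter_m * rpm / 60.0
    return math.hypot(rotational, forward_speed_ms) / speed_of_sound_ms

# Hypothetical example: a 3.4 m propeller at 1,250 RPM on an aircraft flying at 180 m/s
print(f"Helical tip Mach ~ {helical_tip_mach(3.4, 1250.0, 180.0):.2f}")  # ~0.83
```

Even at a modest forward speed the blade tips are already well into the transonic range, which is why longer blades and higher engine powers quickly ran into the compressibility problems described above.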
Nevertheless, propeller aircraft were able to approach their critical Mach number, different for each aircraft, in a dive. Doing so led to numerous crashes for a variety of reasons. Flying the Mitsubishi Zero, pilots sometimes flew at full power into terrain because the rapidly increasing forces acting on the control surfaces of their aircraft overpowered them; several attempts to fix the problem only made it worse. Likewise, the flexing caused by the low torsional stiffness of the Supermarine Spitfire's wings caused them, in turn, to counteract aileron control inputs, leading to a condition known as control reversal. This was solved in later models with changes to the wing. Worse still, a particularly dangerous interaction of the airflow between the wings and tail surfaces of diving Lockheed P-38 Lightnings made "pulling out" of dives difficult; in one 1941 test flight, test pilot Ralph Virden was killed when the plane flew into the ground at high speed. The problem was later solved by the addition of a "dive flap" that altered the airflow under these circumstances. Flutter due to the formation of shock waves on curved surfaces was another major problem, which led most famously to the breakup of a de Havilland DH 108 Swallow and the death of its pilot Geoffrey de Havilland, Jr. on 27 September 1946. A similar problem is thought to have been the cause of the 1943 crash of the BI-1 rocket aircraft in the Soviet Union.
All of these effects, although mostly unrelated to one another, led to the concept of a "barrier" making it difficult for an aircraft to exceed the speed of sound. Erroneous news reports caused most people to envision the sound barrier as a physical "wall", which supersonic aircraft needed to "break" with a sharp needle nose on the front of the fuselage. Rockets and artillery shells routinely exceeded Mach 1, but aircraft designers and aerodynamicists during and after World War II discussed Mach 0.7 as a limit dangerous to exceed.
Early claims
During WWII and immediately thereafter, a number of claims were made that the sound barrier had been broken in a dive. The majority of these purported events can be dismissed as instrumentation errors. The typical airspeed indicator (ASI) uses air pressure differences between two or more points on the aircraft, typically near the nose and at the side of the fuselage, to produce a speed figure. At high speed, the various compression effects that lead to the sound barrier also cause the ASI to go non-linear and produce inaccurately high or low readings, depending on the specifics of the installation. This effect became known as "Mach jump". Before the introduction of Mach meters, accurate measurements of supersonic speeds could only be made remotely, normally using ground-based instruments. Many claims of supersonic speeds were found to be far below this speed when measured in this fashion.
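For context, the conversion an airspeed indicator or Mach meter performs from pitot-static pressures to a speed is itself non-linear, so a modest position error in the sensed pressures near Mach 1 translates into a noticeable error in indicated Mach. The sketch below uses the standard subsonic compressible-flow relation and hypothetical pressure readings; it is not a description of any specific wartime instrument:

```python
def mach_from_pressures(impact_pressure_pa: float, static_pressure_pa: float) -> float:
    """Subsonic compressible-flow relation between pitot impact pressure (qc),
    static pressure (p) and Mach number: M = sqrt(5 * ((qc/p + 1)**(2/7) - 1)).
    Valid below Mach 1; above Mach 1 the Rayleigh pitot formula is needed."""
    ratio = impact_pressure_pa / static_pressure_pa + 1.0
    return (5.0 * (ratio ** (2.0 / 7.0) - 1.0)) ** 0.5

# Hypothetical readings: the same pitot pressure with a biased static source
print(f"{mach_from_pressures(30_000.0, 46_500.0):.2f}")  # ~0.87
print(f"{mach_from_pressures(30_000.0, 43_000.0):.2f}")  # ~0.90 with an under-reading static port
```

The point is only that small pressure errors shift the indicated Mach appreciably, consistent with the "Mach jump" effect described above.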
In 1942, Republic Aviation issued a press release stating that Lts. Harold E. Comstock and Roger Dyar had exceeded the speed of sound during test dives in a Republic P-47 Thunderbolt. It is widely agreed that this was due to inaccurate ASI readings. In similar tests, the North American P-51 Mustang demonstrated limits at Mach 0.85, with every flight over Mach 0.84 causing the aircraft to be damaged by vibration.
One of the highest instrumented Mach numbers recorded for a propeller aircraft is Mach 0.891, attained by a Spitfire PR XI flown during dive tests at the Royal Aircraft Establishment, Farnborough, in April 1944. The aircraft, a photo-reconnaissance Mark XI fitted with an extended "rake-type" multiple pitot system, was flown by Squadron Leader J. R. Tobin to this speed, corresponding to a corrected true airspeed (TAS) of 606 mph. In a subsequent flight, Squadron Leader Anthony Martindale achieved Mach 0.92, but the flight ended in a forced landing after over-revving damaged the engine.
Hans Guido Mutke claimed to have broken the sound barrier on 9 April 1945 in the Messerschmitt Me 262 jet aircraft. He stated that his airspeed indicator pegged at the top of its scale. Mutke reported not just transonic buffeting but the resumption of normal control once a certain speed was exceeded, followed by a return of severe buffeting once the Me 262 slowed again. He also reported engine flame-out.
This claim is widely disputed, even by pilots in his unit. All of the effects he reported are known to occur on the Me 262 at much lower speeds, and the ASI reading is simply not reliable in the transonic regime. Further, a series of tests made by Karl Doetsch at the behest of Willy Messerschmitt found that the plane became uncontrollable above Mach 0.86, and at Mach 0.9 would nose over into a dive from which it could not recover. Post-war tests by the RAF confirmed these results, with the slight modification that the maximum speed using new instruments was found to be Mach 0.84, rather than Mach 0.86.
In 1999, Mutke enlisted the help of Professor Otto Wagner of the Munich Technical University to run computational tests to determine whether the aircraft could break the sound barrier. These tests do not rule out the possibility, but are lacking accurate data on the coefficient of drag that would be needed to make accurate simulations. Wagner stated: "I don't want to exclude the possibility, but I can imagine he may also have been just below the speed of sound and felt the buffeting, but did not go above Mach-1."
One bit of evidence presented by Mutke is on page 13 of the "Me 262 A-1 Pilot's Handbook" issued by Headquarters Air Materiel Command, Wright Field, Dayton, Ohio as Report No. F-SU-1111-ND on January 10, 1946:
The comments about restoration of flight control and cessation of buffeting above Mach 1 are very significant in a 1946 document. However, it is not clear where these terms came from, as it does not appear the US pilots carried out such tests.
In his 1990 book Me-163, former Messerschmitt Me 163 "Komet" pilot Mano Ziegler claims that his friend, test pilot Heini Dittmar, broke the sound barrier while diving the rocket plane, and that several people on the ground heard the sonic booms. He claims that on 6 July 1944, Dittmar, flying Me 163B V18, bearing the alphabetic code VA+SP, was measured traveling at a speed of 1,130 km/h (702 mph). However, no evidence of such a flight exists in any of the materials from that period, which were captured by Allied forces and extensively studied. Dittmar had been officially recorded at 1,004.5 km/h (623.8 mph) in level flight on 2 October 1941 in the prototype Me 163A V4. He reached this speed at less than full throttle, as he was concerned by the transonic buffeting. Dittmar himself did not claim to have broken the sound barrier on that flight and noted that the speed was recorded only on the ASI. He does, however, take credit for being the first pilot to "knock on the sound barrier".
There are a number of uncrewed vehicles that flew at supersonic speeds during this period. In 1933, Soviet designers working on ramjet concepts fired phosphorus-powered engines out of artillery guns to get them to operational speeds. It is possible that this produced supersonic performance as high as Mach 2, but this was not due solely to the engine itself. In contrast, the German V-2 ballistic missile routinely broke the sound barrier in flight, for the first time on 3 October 1942. By September 1944, V-2s routinely achieved Mach 4 (about 1,360 m/s, or 3,044 mph) during terminal descent.
Breaking the sound barrier
In 1942, the United Kingdom's Ministry of Aviation began a top-secret project with Miles Aircraft to develop the world's first aircraft capable of breaking the sound barrier. The project resulted in the development of the prototype Miles M.52 turbojet-powered aircraft, which was designed to reach 1,000 mph (447 m/s; 1,600 km/h), over twice the existing speed record, in level flight, and to climb to an altitude of 36,000 ft (11 km) in 1 minute 30 seconds.
A number of advanced features were incorporated into the resulting M.52 design, which resulted from consulting experts in government establishments with a current knowledge of supersonic aerodynamics. In particular, the design featured a conical nose, for low supersonic drag, and sharp wing leading edges. The design used very thin wings of biconvex section proposed by Jakob Ackeret for low drag. The wing tips were "clipped" to keep them clear of the conical shock wave generated by the nose of the aircraft. The fuselage had a 5-foot diameter with an annular fuel tank around the engine.
Another critical addition was the use of a power-operated stabilator, also known as the all-moving tail or flying tail, a key to transonic and supersonic flight control, which contrasted with traditional hinged tailplanes (horizontal stabilizers) connected mechanically to the pilots control column. Conventional control surfaces became ineffective at the high subsonic speeds then being achieved by fighters in dives, due to the aerodynamic forces caused by the formation of shockwaves at the hinge and the rearward movement of the centre of pressure, which together could override the control forces that could be applied mechanically by the pilot, hindering recovery from the dive. A major impediment to early transonic flight was control reversal, the phenomenon which caused flight inputs (stick, rudder) to switch direction at high speed; it was the cause of many accidents and near-accidents. An all-flying tail is required for an aircraft to pass through the transonic speed range safely, without losing pilot control. The Miles M.52 was the first instance of this solution, which has since been universally applied.
Initially, the aircraft was to use Frank Whittle's latest engine, the Power Jets W.2/700, with which it would only reach supersonic speed in a shallow dive. To develop a fully supersonic version of the aircraft, extra thrust would be provided with the addition of the No.4 augmentor which gave extra airflow from a ducted fan and reheat behind the fan.
Although the project was eventually cancelled, the research was used to construct an uncrewed 30% scale model of the M.52 that went on to achieve a speed of Mach 1.38 in a successful, controlled transonic and supersonic level test flight in October 1948; this was a unique achievement at that time which provided "some validation of the aerodynamics of the M.52 upon which the model was based".
Meanwhile, test pilots achieved high speeds in the tailless, swept-wing de Havilland DH 108. One of them was Geoffrey de Havilland, Jr., who was killed on 27 September 1946 when his DH 108 broke up at about Mach 0.9. John Derry has been called "Britain's first supersonic pilot" because of a dive he made in a DH 108 on 6 September 1948.
The first aircraft to officially break the sound barrier
The British Air Ministry signed an agreement with the United States to exchange all of its high-speed research data and designs, including those for the M.52, for equivalent US research; however, the US reneged on the agreement and nothing was forthcoming in return.
The Bell X-1, the first US crewed aircraft built to break the sound barrier, was visually similar to the Miles M.52 but with a high-mounted horizontal tail to keep it clear of the wing wake. Compared to the all-moving tail of the M.52, the X-1 used a conventional tail with elevators, but with a movable stabilizer to maintain control while passing through the sound barrier. It was in the X-1 that Chuck Yeager became the first person to break the sound barrier in level flight on 14 October 1947, flying at an altitude of 45,000 ft (13.7 km). George Welch made a plausible but officially unverified claim to have broken the sound barrier on 1 October 1947, while flying an XP-86 Sabre. He also claimed to have repeated his supersonic flight on 14 October 1947, 30 minutes before Yeager broke the sound barrier in the Bell X-1. Although evidence from witnesses and instruments strongly implies that Welch achieved supersonic speed, the flights were not properly monitored and are not officially recognized. The XP-86 officially achieved supersonic speed on 26 April 1948.
On 14 October 1947, just under a month after the United States Air Force had been created as a separate service, the tests culminated in the first crewed supersonic flight, piloted by Air Force Captain Charles "Chuck" Yeager in aircraft #46-062, which he had christened Glamorous Glennis. The rocket-powered aircraft was launched from the bomb bay of a specially modified B-29 and glided to a landing on a runway. XS-1 flight number 50 is the first one where the X-1 recorded supersonic flight, with a maximum speed of Mach 1.06 (361 m/s, 1,299 km/h, 807.2 mph).
As a result of the X-1's initial supersonic flight, the National Aeronautic Association voted its 1947 Collier Trophy to be shared by the three main participants in the program. Honored at the White House by President Harry S. Truman were Larry Bell for Bell Aircraft, Captain Yeager for piloting the flights, and John Stack for the NACA contributions.
Jackie Cochran was the first woman to break the sound barrier, which she did on 18 May 1953, piloting a plane borrowed from the Royal Canadian Air Force, with Yeager accompanying her.
On December 3, 1957, Margaret Chase Smith became the first woman in Congress to break the sound barrier, which she did as a passenger in an F-100 Super Sabre piloted by Air Force Major Clyde Good.
In the late 1950s, Allen Rowley, a British journalist, was able to fly in a Super Sabre at 1000 mph, one of the few non-American civilians to exceed the speed of sound and one of the few civilians anywhere to make such a trip.
On 21 August 1961, a Douglas DC-8-43 (registration N9604Z) unofficially exceeded Mach 1 in a controlled dive during a test flight at Edwards Air Force Base, as observed and reported by the flight crew; the crew were William Magruder (pilot), Paul Patten (co-pilot), Joseph Tomich (flight engineer), and Richard H. Edwards (flight test engineer). This was the first supersonic flight by a civilian airliner, achieved before the Concorde or the Tu-144 flew.
The sound barrier understood
As the science of high-speed flight became more widely understood, a number of changes led to the eventual understanding that the "sound barrier" is easily penetrated, with the right conditions. Among these changes were the introduction of thin swept wings, the area rule, and engines of ever-increasing performance. By the 1950s, many combat aircraft could routinely break the sound barrier in level flight, although they often suffered from control problems when doing so, such as Mach tuck. Modern aircraft can transit the "barrier" without control problems.
By the late 1950s, the issue was so well understood that many companies started investing in the development of supersonic airliners, or SSTs, believing that to be the next "natural" step in airliner evolution. However, this has not yet happened. Although the Concorde and the Tupolev Tu-144 entered service in the 1970s, both were later retired without being replaced by similar designs. The last flight of a Concorde in service was in 2003. Despite a resurgence of interest in the 2010s, as of 2024 there are no commercial supersonic airliners in service.
Although Concorde and the Tu-144 were the first aircraft to carry commercial passengers at supersonic speeds, they were not the first or only commercial airliners to break the sound barrier. On 21 August 1961, a Douglas DC-8 broke the sound barrier at Mach 1.012, or 1,240 km/h (776.2 mph), while in a controlled dive through 41,088 feet (12,510 m). The purpose of the flight was to collect data on a new design of leading edge for the wing.
Breaking the sound barrier in a land vehicle
On 12 January 1948, a Northrop uncrewed rocket sled became the first land vehicle to break the sound barrier. At a military test facility at Muroc Air Force Base (now Edwards AFB), California, it reached a peak speed of 1,019 mph (1,640 km/h) before jumping the rails.
On 15 October 1997, in a vehicle designed and built by a team led by Richard Noble, Royal Air Force pilot Andy Green became the first person to break the sound barrier in a land vehicle in compliance with Fédération Internationale de l'Automobile rules. The vehicle, called the ThrustSSC ("Super Sonic Car"), captured the record 50 years and one day after Yeager's first supersonic flight.
Breaking the sound barrier as a human projectile
Felix Baumgartner
In October 2012 Felix Baumgartner, with a team of scientists and sponsor Red Bull, attempted the highest sky-dive on record. The project would see Baumgartner attempt a jump from 120,000 ft (36,580 m), carried aloft by a helium balloon, and become the first parachutist to break the sound barrier. The launch was scheduled for 9 October 2012, but was aborted due to adverse weather; the capsule was instead launched on 14 October. Baumgartner's feat also marked the 65th anniversary of U.S. test pilot Chuck Yeager's successful attempt to break the sound barrier in an aircraft.
Baumgartner landed in eastern New Mexico after jumping from a world record 128,100 feet (39,045 m), or 24.26 miles, and broke the sound barrier as he traveled at speeds up to 833.9 mph (1342 km/h, or Mach 1.26). In the press conference after his jump, it was announced that he was in freefall for 4 minutes 18 seconds, the second longest freefall after the 1960 jump of Joseph Kittinger for 4 minutes 36 seconds.
Alan Eustace
In October 2014, Alan Eustace, a senior vice president at Google, broke Baumgartner's record for highest sky-dive and also broke the sound barrier in the process. However, because Eustace's jump involved a drogue parachute, while Baumgartner's did not, their vertical speed and free-fall distance records remain in different categories.
Legacy
David Lean directed The Sound Barrier, a fictionalized retelling of the de Havilland DH 108 test flights.
See also
Sonic boom
Vapor cone
References
Notes
Citations
External links
Fluid Mechanics, a collection of tutorials by Dr. Mark S. Cramer, Ph.D.
Breaking the Sound Barrier with an Aircraft by Carl Rod Nave, Ph.D.
a video of a Concorde reaching Mach 1 at intersection TESGO taken from below
An interactive Java applet, illustrating the sound barrier.
Aircraft aerodynamics
Airspeed
Articles containing video clips
Barrier
de:Überschallflug#Schallmauer | Sound barrier | [
"Physics"
] | 5,468 | [
"Wikipedia categories named after physical quantities",
"Airspeed",
"Physical quantities"
] |
166,084 | https://en.wikipedia.org/wiki/Compressibility | In thermodynamics and fluid mechanics, the compressibility (also known as the coefficient of compressibility or, if the temperature is held constant, the isothermal compressibility) is a measure of the instantaneous relative volume change of a fluid or solid as a response to a pressure (or mean stress) change. In its simple form, the compressibility (denoted β or κ in some fields) may be expressed as
β = −(1/V) ∂V/∂p,
where V is volume and p is pressure. The choice to define compressibility as the negative of the fraction makes compressibility positive in the (usual) case that an increase in pressure induces a reduction in volume. The reciprocal of compressibility at fixed temperature is called the isothermal bulk modulus.
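As a concrete illustration of this definition (a minimal sketch, not taken from the article; the gas, temperature and pressure values are arbitrary), the compressibility can be estimated numerically as −(1/V)·dV/dp and compared with the analytic ideal-gas result 1/p:

```python
# Minimal sketch: estimate compressibility numerically as -(1/V) dV/dp for an
# ideal gas at fixed temperature and compare with the analytic value 1/p.
# All numerical values below are arbitrary illustrations.

R = 8.314      # J/(mol K), gas constant
n = 1.0        # mol
T = 300.0      # K, held constant

def volume(p):
    """Ideal-gas volume V = nRT/p at fixed temperature."""
    return n * R * T / p

def compressibility(p, dp=1.0):
    """Central finite-difference estimate of -(1/V) (dV/dp)."""
    dV_dp = (volume(p + dp) - volume(p - dp)) / (2 * dp)
    return -dV_dp / volume(p)

p = 101_325.0                  # Pa, one standard atmosphere
print(compressibility(p))      # ~9.87e-6 per Pa
print(1.0 / p)                 # analytic ideal-gas value
```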
Definition
The specification above is incomplete, because for any object or system the magnitude of the compressibility depends strongly on whether the process is isentropic or isothermal. Accordingly, isothermal compressibility is defined as
β_T = −(1/V) (∂V/∂p)_T,
where the subscript T indicates that the partial differential is to be taken at constant temperature.
Isentropic compressibility is defined as
β_S = −(1/V) (∂V/∂p)_S,
where S is entropy. For a solid, the distinction between the two is usually negligible.
Since the density ρ of a material is inversely proportional to its volume, it can be shown that in both cases
β = (1/ρ) ∂ρ/∂p.
For instance, for an ideal gas at constant temperature, pV is constant, so V ∝ 1/p. Hence (∂V/∂p)_T = −V/p. Consequently, the isothermal compressibility of an ideal gas is
β_T = 1/p.
The ideal gas (where the particles do not interact with each other) is an abstraction. The particles in real materials interact with each other. Then, the relation between the pressure, density and temperature is known as the equation of state, denoted by some function f(p, ρ, T) = 0. The Van der Waals equation is an example of an equation of state for a realistic gas:
(p + a/V_m²)(V_m − b) = RT,
where V_m is the molar volume and a and b are substance-specific constants.
Knowing the equation of state, the compressibility can be determined for any substance.
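As an illustration of that last statement (a sketch under stated assumptions, not a reference implementation), the isothermal compressibility of a van der Waals gas can be obtained by differentiating p = RT/(V_m − b) − a/V_m² with respect to V_m; the constants a and b below are rough CO2-like values quoted purely for illustration:

```python
# Sketch: isothermal compressibility of a van der Waals gas from
# kappa_T = -1 / (V_m * (dp/dV_m)_T), with p = RT/(V_m - b) - a/V_m^2.
# The a, b values are approximate CO2 constants, used only as an example.

R = 8.314       # J/(mol K)
a = 0.364       # Pa m^6 / mol^2   (illustrative)
b = 4.27e-5     # m^3 / mol        (illustrative)

def dp_dV(Vm, T):
    """(dp/dV_m) at constant T for the van der Waals equation of state."""
    return -R * T / (Vm - b) ** 2 + 2 * a / Vm ** 3

def kappa_T(Vm, T):
    """Isothermal compressibility, -(1/V_m)(dV_m/dp)_T."""
    return -1.0 / (Vm * dp_dV(Vm, T))

print(kappa_T(1.0e-3, 300.0))   # 1 mol in 1 litre at 300 K: ~5e-7 per Pa,
                                # close to, but not exactly, the ideal 1/p
```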
Relation to speed of sound
The speed of sound c is defined in classical mechanics as
c² = (∂p/∂ρ)_S.
It follows, by replacing the partial derivative, that the isentropic compressibility can be expressed as
β_S = 1/(ρc²).
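A short numerical illustration (assuming an ideal diatomic gas and rough sea-level values for air, which are not taken from the article):

```python
# For an ideal gas, beta_S = 1/(gamma * p); the speed of sound then follows
# as c = 1/sqrt(rho * beta_S) = sqrt(gamma * p / rho).  Air values are rough.
import math

gamma = 1.4        # heat capacity ratio of air (approximate)
p = 101_325.0      # Pa
rho = 1.225        # kg/m^3, sea-level air (approximate)

beta_S = 1.0 / (gamma * p)          # isentropic compressibility, 1/Pa
c = 1.0 / math.sqrt(rho * beta_S)   # speed of sound, m/s
print(beta_S, c)                    # ~7.0e-6 per Pa, ~340 m/s
```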
Relation to bulk modulus
The inverse of the compressibility is called the bulk modulus, often denoted K (sometimes B).
The compressibility equation relates the isothermal compressibility (and indirectly the pressure) to the structure of the liquid.
Thermodynamics
The isothermal compressibility is generally related to the isentropic (or adiabatic) compressibility by a few relations:
β_T / β_S = γ   and   β_T − β_S = α² T / (n c_p) = α λ T β_T / (n c_p),
where γ is the heat capacity ratio, α is the volumetric coefficient of thermal expansion, n is the particle density, c_p is the heat capacity per particle at constant pressure, and λ = α/β_T = (∂p/∂T)_V is the thermal pressure coefficient.
In an extensive thermodynamic system, the application of statistical mechanics shows that the isothermal compressibility is also related to the relative size of fluctuations in particle density:
β_T = (V / (k_B T)) · (⟨N²⟩ − ⟨N⟩²) / ⟨N⟩² = (1/n²) (∂n/∂μ)_T,
where μ is the chemical potential, n = ⟨N⟩/V is the particle density, and k_B is the Boltzmann constant.
The term "compressibility" is also used in thermodynamics to describe deviations of the thermodynamic properties of a real gas from those expected from an ideal gas.
The compressibility factor is defined as
Z = p V_m / (R T),
where p is the pressure of the gas, T is its temperature, V_m is its molar volume and R is the gas constant, all measured independently of one another. In the case of an ideal gas, the compressibility factor is equal to unity, and the familiar ideal gas law is recovered: p V_m = R T.
Z can, in general, be either greater or less than unity for a real gas.
The deviation from ideal gas behavior tends to become particularly significant (or, equivalently, the compressibility factor strays far from unity) near the critical point, or in the case of high pressure or low temperature. In these cases, a generalized compressibility chart or an alternative equation of state better suited to the problem must be utilized to produce accurate results.
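A minimal sketch of this definition in code (the "measured" values in the second call are invented solely to show a compressibility factor below unity):

```python
# Compressibility factor Z = p V_m / (R T) from independently measured
# pressure, molar volume and temperature; Z = 1 exactly for an ideal gas.

R = 8.314  # J/(mol K)

def compressibility_factor(p, Vm, T):
    return p * Vm / (R * T)

# An ideal-gas point: p = 100 kPa, T = 300 K, Vm = RT/p
print(compressibility_factor(1.0e5, 8.314 * 300 / 1.0e5, 300))  # 1.0
# A hypothetical real-gas measurement with a slightly smaller molar volume
print(compressibility_factor(1.0e5, 0.0242, 300))               # ~0.97
```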
Earth science
The Earth sciences use compressibility to quantify the ability of a soil or rock to reduce in volume under applied pressure. This concept is important for specific storage, when estimating groundwater reserves in confined aquifers. Geologic materials are made up of two portions: solids and voids (the void fraction is the porosity). The void space can be full of liquid or gas. Geologic materials reduce in volume only when the void spaces are reduced, expelling the liquid or gas from the voids. This can happen over a period of time, resulting in settlement.
It is an important concept in geotechnical engineering in the design of certain structural foundations. For example, the construction of high-rise structures over underlying layers of highly compressible bay mud poses a considerable design constraint, and often leads to use of driven piles or other innovative techniques.
Fluid dynamics
The degree of compressibility of a fluid has strong implications for its dynamics. Most notably, the propagation of sound is dependent on the compressibility of the medium.
Aerodynamics
Compressibility is an important factor in aerodynamics. At low speeds, the compressibility of air is not significant in relation to aircraft design, but as the airflow nears and exceeds the speed of sound, a host of new aerodynamic effects become important in the design of aircraft. These effects, often several of them at a time, made it very difficult for World War II era aircraft to reach speeds much beyond roughly 800 km/h (500 mph).
Many effects are often mentioned in conjunction with the term "compressibility", but regularly have little to do with the compressible nature of air. From a strictly aerodynamic point of view, the term should refer only to those side-effects arising as a result of the changes in airflow from an incompressible fluid (similar in effect to water) to a compressible fluid (acting as a gas) as the speed of sound is approached. There are two effects in particular: wave drag and critical Mach.
One complication occurs in hypersonic aerodynamics, where dissociation causes an increase in the "notional" molar volume because a mole of oxygen, as O2, becomes 2 moles of monatomic oxygen and N2 similarly dissociates to 2 N. Since this occurs dynamically as air flows over the aerospace object, it is convenient to alter the compressibility factor Z, defined for an initial 30 gram moles of air, rather than track the varying mean molecular weight, millisecond by millisecond. This pressure dependent transition occurs for atmospheric oxygen in the 2,500–4,000 K temperature range, and in the 5,000–10,000 K range for nitrogen.
In transition regions, where this pressure dependent dissociation is incomplete, both beta (the volume/pressure differential ratio) and the differential, constant-pressure heat capacity greatly increase. For moderate pressures, above 10,000 K the gas further dissociates into free electrons and ions. Z for the resulting plasma can similarly be computed for a mole of initial air, producing values between 2 and 4 for partially or singly ionized gas. Each dissociation absorbs a great deal of energy in a reversible process and this greatly reduces the thermodynamic temperature of hypersonic gas decelerated near the aerospace object. Ions or free radicals transported to the object surface by diffusion may release this extra (nonthermal) energy if the surface catalyzes the slower recombination process.
Negative compressibility
For ordinary materials, the bulk compressibility (sum of the linear compressibilities on the three axes) is positive, that is, an increase in pressure squeezes the material to a smaller volume. This condition is required for mechanical stability. However, under very specific conditions, materials can exhibit negative compressibility.
See also
Mach number
Mach tuck
Poisson ratio
Prandtl–Glauert singularity, associated with supersonic flight
Shear strength
References
Thermodynamic properties
Fluid dynamics
Mechanical quantities | Compressibility | [
"Physics",
"Chemistry",
"Mathematics",
"Engineering"
] | 1,584 | [
"Thermodynamic properties",
"Mechanical quantities",
"Physical quantities",
"Chemical engineering",
"Quantity",
"Mechanics",
"Thermodynamics",
"Piping",
"Fluid dynamics"
] |
166,086 | https://en.wikipedia.org/wiki/Streamlines%2C%20streaklines%2C%20and%20pathlines | Streamlines, streaklines and pathlines are field lines in a fluid flow.
They differ only when the flow changes with time, that is, when the flow is not steady.
Considering a velocity vector field in three-dimensional space in the framework of continuum mechanics, we have that:
Streamlines are a family of curves whose tangent vectors constitute the velocity vector field of the flow. These show the direction in which a massless fluid element will travel at any point in time.
Streaklines are the loci of points of all the fluid particles that have passed continuously through a particular spatial point in the past. Dye steadily injected into the fluid at a fixed point (as in dye tracing) extends along a streakline.
Pathlines are the trajectories that individual fluid particles follow. These can be thought of as "recording" the path of a fluid element in the flow over a certain period. The direction the path takes will be determined by the streamlines of the fluid at each moment in time.
Timelines are the lines formed by a set of fluid particles that were marked at a previous instant in time, creating a line or a curve that is displaced in time as the particles move.
By definition, different streamlines at the same instant in a flow do not intersect, because a fluid particle cannot have two different velocities at the same point. However, pathlines are allowed to intersect themselves or other pathlines (except the starting and end points of the different pathlines, which need to be distinct). Streaklines can also intersect themselves and other streaklines.
Streamlines and timelines provide a snapshot of some flowfield characteristics, whereas streaklines and pathlines depend on the full time-history of the flow. However, often sequences of timelines (and streaklines) at different instants—being presented either in a single image or with a video stream—may be used to provide insight in the flow and its history.
If a line, curve or closed curve is used as start point for a continuous set of streamlines, the result is a stream surface. In the case of a closed curve in a steady flow, fluid that is inside a stream surface must remain forever within that same stream surface, because the streamlines are tangent to the flow velocity. A scalar function whose contour lines define the streamlines is known as the stream function.
Dye line may refer either to a streakline: dye released gradually from a fixed location during time; or it may refer to a timeline: a line of dye applied instantaneously at a certain moment in time, and observed at a later instant.
Mathematical description
Streamlines
Streamlines are defined by
(d x_S / ds) × u(x_S) = 0,
where "×" denotes the vector cross product and x_S(s) is the parametric representation of just one streamline at one moment in time.
If the components of the velocity are written u = (u, v, w) and those of the streamline as x_S = (x_S, y_S, z_S), we deduce
dx_S / u = dy_S / v = dz_S / w,
which shows that the curves are parallel to the velocity vector. Here s is a variable which parametrizes the curve x_S(s). Streamlines are calculated instantaneously, meaning that at one instance of time they are calculated throughout the fluid from the instantaneous flow velocity field.
A streamtube consists of a bundle of streamlines, much like the individual fibres bundled within a communication cable.
The equation of motion of a fluid on a streamline for a flow in a vertical plane, written in streamline coordinates, balances the acceleration along the streamline against the pressure gradient, gravity and viscous stresses:
ρ (∂u_s/∂t + u_s ∂u_s/∂s) = −∂p/∂s − ρ g ∂z/∂s + ρ ν ∂²u_s/∂s²,
while normal to the streamline the pressure increases away from the centre of curvature according to ∂p/∂n = ρ u_s²/R, with n measured outward from the centre of curvature.
The flow velocity in the direction of the streamline is denoted by u_s. R is the radius of curvature of the streamline. The density of the fluid is denoted by ρ and the kinematic viscosity by ν. ∂p/∂s is the pressure gradient and ∂u_s/∂s the velocity gradient along the streamline. For a steady flow, the time derivative of the velocity is zero: ∂u_s/∂t = 0. g denotes the gravitational acceleration.
Pathlines
Pathlines are defined by
d x_P(t) / dt = u(x_P(t), t),   with   x_P(t_0) = x_P0.
The subscript P indicates that we are following the motion of a fluid particle.
Note that at point x_P(t) the curve is parallel to the flow velocity vector u, where the velocity vector is evaluated at the position of the particle x_P at that time t.
Streaklines
Streaklines can be expressed as
d x_str(t) / dt = u(x_str(t), t),   with   x_str(t = τ_P) = x_P0,
where u is the flow velocity at location x_str and time t. The parameter τ_P parametrizes the streakline, with 0 ≤ τ_P ≤ t_0, where t_0 is a time of interest.
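To make the distinction concrete, the following small script (an illustrative sketch, not part of the article; the velocity field, step size and time window are arbitrary choices) integrates one pathline and one instantaneous streamline for the deliberately unsteady two-dimensional field u = (1, cos t), where the two curves differ:

```python
# Trace a pathline (dx/dt = u(x, t)) and an instantaneous streamline
# (field frozen at one time) for the unsteady field u = (1, cos t).
import math

def velocity(x, y, t):
    return 1.0, math.cos(t)              # (u, v), deliberately time-dependent

def pathline(x0, y0, t0, t1, dt=1e-3):
    """Follow one fluid particle forward in time with explicit Euler steps."""
    x, y, t = x0, y0, t0
    while t < t1:
        u, v = velocity(x, y, t)
        x, y, t = x + u * dt, y + v * dt, t + dt
    return x, y

def streamline(x0, y0, t_frozen, s1, ds=1e-3):
    """Integrate along the velocity field frozen at a single instant."""
    x, y, s = x0, y0, 0.0
    while s < s1:
        u, v = velocity(x, y, t_frozen)
        x, y, s = x + u * ds, y + v * ds, s + ds
    return x, y

print(pathline(0.0, 0.0, 0.0, 3.0))      # ends near (3.0, sin 3 ~ 0.14)
print(streamline(0.0, 0.0, 0.0, 3.0))    # ends near (3.0, 3.0), since cos 0 = 1
```

In a steady field the two integrations would coincide, which is the point made in the next section.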
Steady flows
In steady flow (when the velocity vector-field does not change with time), the streamlines, pathlines, and streaklines coincide. This is because when a particle on a streamline reaches a point, x_0, further on that streamline the equations governing the flow will send it in a certain direction x. As the equations that govern the flow remain the same when another particle reaches x_0 it will also go in the direction x. If the flow is not steady then when the next particle reaches position x_0 the flow would have changed and the particle will go in a different direction.
This is useful, because it is usually very difficult to look at streamlines in an experiment. However, if the flow is steady, one can use streaklines to describe the streamline pattern.
Frame dependence
Streamlines are frame-dependent. That is, the streamlines observed in one inertial reference frame are different from those observed in another inertial reference frame. For instance, the streamlines in the air around an aircraft wing are defined differently for the passengers in the aircraft than for an observer on the ground. In the aircraft example, the observer on the ground will observe unsteady flow, and the observers in the aircraft will observe steady flow, with constant streamlines. When possible, fluid dynamicists try to find a reference frame in which the flow is steady, so that they can use experimental methods of creating streaklines to identify the streamlines.
Application
Knowledge of the streamlines can be useful in fluid dynamics. The curvature of a streamline is related to the pressure gradient acting perpendicular to the streamline. The center of curvature of the streamline lies in the direction of decreasing radial pressure. The magnitude of the radial pressure gradient can be calculated directly from the density of the fluid, the curvature of the streamline and the local velocity.
Dye can be used in water, or smoke in air, in order to see streaklines, from which pathlines can be calculated. Streaklines are identical to streamlines for steady flow. Further, dye can be used to create timelines. The patterns guide design modifications, aiming to reduce the drag. This task is known as streamlining, and the resulting design is referred to as being streamlined. Streamlined objects and organisms, like airfoils, streamliners, cars and dolphins are often aesthetically pleasing to the eye. The Streamline Moderne style, a 1930s and 1940s offshoot of Art Deco, brought flowing lines to architecture and design of the era. The canonical example of a streamlined shape is a chicken egg with the blunt end facing forwards. This shows clearly that the curvature of the front surface can be much steeper than the back of the object. Most drag is caused by eddies in the fluid behind the moving object, and the objective should be to allow the fluid to slow down after passing around the object, and regain pressure, without forming eddies.
The same terms have since become common vernacular to describe any process that smooths an operation. For instance, it is common to hear references to streamlining a business practice, or operation.
See also
Drag coefficient
Elementary flow
Equipotential surface
Flow visualization
Flow velocity
Scientific visualization
Seeding (fluid dynamics)
Stream function
Streamsurface
Streamlet (scientific visualization)
Notes and references
Notes
References
External links
Streamline illustration
Tutorial - Illustration of Streamlines, Streaklines and Pathlines of a Velocity Field(with applet)
Joukowsky Transform Interactive WebApp
Continuum mechanics
Numerical function drawing | Streamlines, streaklines, and pathlines | [
"Physics"
] | 1,539 | [
"Classical mechanics",
"Continuum mechanics"
] |
166,087 | https://en.wikipedia.org/wiki/Software%20development%20kit | A software development kit (SDK) is a collection of software development tools in one installable package. They facilitate the creation of applications by having a compiler, debugger and sometimes a software framework. They are normally specific to a hardware platform and operating system combination. To create applications with advanced functionalities such as advertisements, push notifications, etc., most application software developers use specific software development kits.
Some SDKs are required for developing a platform-specific app. For example, the development of an Android app on the Java platform requires a Java Development Kit. For iOS applications (apps) the iOS SDK is required. For Universal Windows Platform the .NET Framework SDK might be used. There are also SDKs that add additional features and can be installed in apps to provide analytics, data about application activity, and monetization options. Some prominent creators of these types of SDKs include Google, Smaato, InMobi, and Facebook.
Details
An SDK can take the form of application programming interfaces in the form of on-device libraries of reusable functions used to interface to a particular programming language, or it may be as complex as hardware-specific tools that can communicate with a particular embedded system. Common tools include debugging facilities and other utilities, often presented in an integrated development environment. SDKs may include sample software and/or technical notes along with documentation, and tutorials to help clarify points made by the primary reference material.
SDKs often include licenses that make them unsuitable for building software intended to be developed under an incompatible license. For example, a proprietary SDK is generally incompatible with free software development, while a GNU General Public License'd SDK could be incompatible with proprietary software development, for legal reasons. However, SDKs built under the GNU Lesser General Public License are typically usable for proprietary development. In cases where the underlying technology is new, SDKs may include hardware. For example, AirTag's 2012 near-field communication SDK included both the paying and the reading halves of the necessary hardware stack.
The average Android mobile app implements 15.6 separate SDKs, with gaming apps implementing on average 17.5 different SDKs. The most popular SDK categories for Android mobile apps are analytics and advertising.
SDKs can be unsafe (because they are implemented within apps yet run separate code). Malicious SDKs (with honest intentions or not) can violate users' data privacy, damage app performance, or even cause apps to be banned from Google Play or the App Store. New technologies allow app developers to control and monitor client SDKs in real time.
Providers of SDKs for specific systems or subsystems sometimes substitute a more specific term instead of software. For instance, both Microsoft and Citrix provide a driver development kit for developing device drivers.
Examples
Examples of software development kits for various platforms include:
AmigaOS NDK
Android NDK
iOS SDK
Java Development Kit
Java Web Services Development Pack
Microsoft Windows SDK
VaxTele SIP Server SDK
Visage SDK
Vuforia Augmented Reality SDK
Windows App SDK
Xbox Development Kit
See also
Game development kit
Widget toolkit
References
Computer libraries
Software development | Software development kit | [
"Technology",
"Engineering"
] | 640 | [
"IT infrastructure",
"Computer occupations",
"Computer libraries",
"Software engineering",
"Software development"
] |
166,105 | https://en.wikipedia.org/wiki/Paper%20size | Paper size standards govern the size of sheets of paper used as writing paper, stationery, cards, and for some printed documents.
The ISO 216 standard, which includes the commonly used A4 size, is the international standard for paper size. It is used across the world except in North America and parts of Central and South America, where North American paper sizes such as "Letter" and "Legal" are used. The international standard for envelopes is the C series of ISO 269.
International standard paper sizes
The international paper size standard is ISO 216. It is based on the German DIN 476 standard for paper sizes. Each ISO paper size is one half of the area of the next larger size in the same series. ISO paper sizes are all based on a single aspect ratio of the square root of 2, or approximately 1:1.41421. There are different series, as well as several extensions.
The following international paper sizes are included in Cascading Style Sheets (CSS): A3, A4, A5, B4, B5.
A series
There are 11 sizes in the A series, designated A0–A10, all of which have an aspect ratio of a : b = √2 : 1, where a is the long side and b is the short side.
Since A series sizes share the same aspect ratio they can be scaled to other A series sizes without being distorted, and two sheets can be reduced to fit on exactly one sheet without any cutoff or margins.
The A0 base size is defined as having an area of 1 m²; given an aspect ratio of √2, the dimensions of A0 are:
2^(1/4) m by 2^(−1/4) m,
or, rounded to the nearest millimetre, 1189 mm by 841 mm.
A series sizes are related in that the smaller dimension of a given size is the larger dimension of the next smaller size, and folding an A series sheet in half in its larger dimension—that is, folding it in half parallel to its short edge—results in two halves that are each the size of the next smaller A series size. As such, a folded brochure of a given A-series size can be made by folding sheets of the next larger size in half, e.g. A4 sheets can be folded to make an A5 brochure. The fact that halving a sheet with an aspect ratio of √2 results in two sheets that themselves both have an aspect ratio of √2 is proven as follows:
the original sheet measures a by b with a/b = √2, where a is the long side and b is the short side. Folding it in half parallel to its short edge produces sheets measuring b by a/2. The aspect ratio for the new dimensions of the folded paper is:
b / (a/2) = 2b / a = 2 / √2 = √2.
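This halving rule can be checked numerically; the sketch below assumes the usual derivation in which each size is obtained by halving the long side of the previous one and rounding down to whole millimetres:

```python
# Generate A0-A10 from A0 = 841 mm x 1189 mm by repeated halving of the long
# side (rounding down), and show the long/short ratio staying near sqrt(2).

short, long_ = 841, 1189            # A0 in millimetres
for n in range(11):
    print(f"A{n}: {short} mm x {long_} mm, ratio {long_ / short:.4f}")
    short, long_ = long_ // 2, short   # halve the long side; the sides swap roles
```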
The advantages of basing a paper size upon an aspect ratio of √2 were noted in 1786 by the German scientist and philosopher Georg Christoph Lichtenberg. He also observed that some raw sizes already adhered to that ratio so that when a sheet is folded, the length to width ratio does not change.
Briefly after the introduction of the metric system, a handful of new paper formats equivalent to modern ones were developed in France, having been proposed by the mathematician Lazare Carnot, and published for judicial purposes in 1798 during the French Revolution. These were never widely adopted, however:
grand registre (A2)
moyen papier (A3)
grand papier (B3)
petit papier (B4)
demi feuille (B5)
effets de commerce (half-B5)
Early in the 20th century, the √2 ratio was used to specify the world format starting with 1 cm as the short edge of the smallest size. Walter Porstmann started with the largest sizes instead, assigning one an area of 1 m² (A0) and the other a short edge of 1 m (B0). He thereby turned the forgotten French sizes (relatively few in number) into a logically-simple and comprehensive plan for a full range of paper sizes, while introducing systematic alphanumeric monikers for them. Generalized to nothing less than four series, this system was introduced as a DIN standard (DIN 476) in Germany in 1922, replacing a vast variety of other paper formats. Even today, the paper sizes are called "DIN A4" in everyday use in Germany and Austria.
The DIN 476 standard spread quickly to other countries. Before the outbreak of World War II, it had been adopted by the following countries in Europe:
Belgium (1924)
Netherlands (1925)
Norway (1926)
Finland (1927)
Switzerland (1929)
Sweden (1930) with later extensions
Soviet Union (1934) with custom extensions
Hungary (1938)
Italy (1939)
During World War II, the standard spread to South America and was adopted by Uruguay (1942), Argentina (1943) and Brazil (1943), and afterwards spread to other countries:
Australia (1974)
Austria (1948)
Bangladesh (1972)
Barbados (1973)
Chile (1968)
Colombia (1975)
Czechoslovakia (1953)
Denmark (1953)
Ecuador (1974)
France (1967)
Greece (1970)
Iceland (1964)
India (1957) with custom extensions
Iran (1948)
Ireland (1959)
Israel (1954)
Japan (1951) with different B series
Kuwait (1975)
Mexico (1965)
New Zealand (1963)
Peru (1967)
Poland (1957)
Portugal (1954)
Rhodesia (1970)
Romania (1949)
Singapore (1970)
South Africa (1966)
Spain (1947)
Thailand (1973)
Turkey (1967)
United Kingdom (1971)
Venezuela (1962)
Yugoslavia (1956)
By 1975, so many countries were using the German system that it was established as an ISO standard, as well as the official United Nations document format. By 1977, A4 was the standard letter format in 88 of 148 countries. Today the standard has been adopted by all countries in the world except the United States and Canada. In Mexico, Costa Rica, Colombia, Venezuela, Chile, and the Philippines, the US letter format is still in common use, despite their official adoption of the ISO standard.
The weight of an A-series sheet of a given paper weight can be calculated by knowing the ratio of its size to the A0 sheet. For example, an A4 sheet is 1⁄16 the size of an A0 sheet, so if it is made from 80 g/m² paper, it weighs 1⁄16 of 80 g, which is 5 g.
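Written as a small function (the 80 g/m² grammage is simply the common office-paper example used above):

```python
# Mass of one A(n) sheet: an A(n) sheet has 1/2**n of the 1 m^2 area of A0,
# so its mass in grams is the grammage (g/m^2) divided by 2**n.

def sheet_mass_grams(n, grammage=80.0):
    return grammage / 2 ** n

print(sheet_mass_grams(4))   # A4 at 80 g/m^2 -> 5.0 g
print(sheet_mass_grams(3))   # A3 at 80 g/m^2 -> 10.0 g
```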
B series
The B series paper sizes are less common than the A series. They have the same aspect ratio as the A series: √2.
However, they have a different area. The area of B series sheets is in fact the geometric mean of successive A series sheets. B1 is between A0 and A1 in size, with an area of 1/√2 m², or about 0.707 m². As a result, B0 is 1 metre wide, and other sizes of the series are a half, a quarter, or further fractions of a metre wide: in general, every B size Bn has exactly one side of length 2^(−⌊n/2⌋) m. That side is the short side for B0, B2, B4, etc., and the long side for B1, B3, B5, etc.
While less common in office use, the B series is used for a variety of applications in which one A-series size would be too small but the next A-series size is too large, or because they are convenient for a particular purpose.
B4, B5, and B6 are used for envelopes that will hold C-series envelopes.
B4 is quite common in printed music sheets.
B5 is a relatively common choice for books.
B7 is equal to the passport size ID-3 from ISO/IEC 7810.
Many posters use B-series paper or a close approximation, such as 50 cm × 70 cm ~ B2.
The B-series is widely used in the printing industry to describe both paper sizes and printing press sizes, including digital presses. B3 paper is used to print two US letter or A4 pages side by side using imposition; four pages would be printed on B2, eight on B1, etc.
C series
The C series is defined in ISO 269, which was withdrawn in 2009 without a replacement, but is still specified in several national standards. It is primarily used for envelopes. The area of C series sheets is the geometric mean of the areas of the A and B series sheets of the same number; for instance, the area of a C4 sheet is the geometric mean of the areas of an A4 sheet and a B4 sheet. This means that C4 is slightly larger than A4, and slightly smaller than B4. The practical usage of this is that a letter written on A4 paper fits inside a C4 envelope, and both A4 paper and C4 envelope fits inside a B4 envelope.
Some envelope formats with mixed sides from adjacent sizes (and thus an approximate aspect ratio of 2:1) are also defined in national adaptations of the ISO standard, e.g. DIN C6/C5 (also known as C65) is 114 mm × 229 mm where the common side to C5 and C6 is 162 mm. This format allows an envelope holding an A-sized paper folded in three, e.g. for the C65, an A4.
Overview of ISO paper sizes
The variables are the distinct first terms in the three geometric progressions of the same common ratio equal to the square root of two. Each of the three geometric progressions (corresponding to the three series A, B, and C) is formed by all possible paper dimensions (length and width) of the series arranged in decreasing order. This interesting arrangement of dimensions is also very useful—not only does it form a geometric progression with easy-to-remember formulae, but also each consecutive pair of values (like a sliding window of size 2) will automatically correspond to the dimensions of a standard paper format in the series.
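For illustration, the three progressions can be generated directly from the exact side-length formulas; this is only a sketch, and the values, rounded here to the nearest millimetre, can differ by a millimetre from the standard's tabulated sizes, which are rounded down while halving:

```python
# Exact side lengths of A/B/C sizes: the short side of A0, B0, C0 is
# 2**(-1/4), 2**0 and 2**(-1/8) metres respectively, each size n being
# smaller by a factor 2**(n/2), and the long side is always sqrt(2) larger.
from math import sqrt

def size_mm(series, n):
    offset = {"A": -0.25, "B": 0.0, "C": -0.125}[series]
    short = 1000 * 2 ** (offset - n / 2)
    return round(short), round(short * sqrt(2))

for s in "ABC":
    print(s + "4:", size_mm(s, 4))
# A4: (210, 297)   B4: (250, 354)   C4: (229, 324)
```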
The tolerances specified in the standard are
±1.5 mm (0.06 in) for dimensions up to 150 mm (5.9 in),
±2 mm (0.08 in) for lengths in the range 150 to 600 mm (5.9 to 23.6 in) and
±3 mm (0.12 in) for any dimension above 600 mm (23.6 in).
Related regional sizes
German original
The German standard DIN 476 was published on 18 August 1922 and is the original specification of the A, B and C sizes. In 1991, it was split into DIN 476-1 for the A and B formats and 476-2 for the C series. The former has been withdrawn in 2002 in favour of adopting the international standard as DIN EN ISO 216, but part 2 has been retained and was last updated in 2008.
The first and the second editions of DIN 476 from 1922 and 1925 also included a D series.
The smallest formats in the original specifications for each series were A13, B13, C8, and D8. Sizes A11 through A13 were no longer listed in the 1930 edition, nor were B11 through B13. C9 and C10 were added in the 1976 revision for compatibility with photography sizes: C8 closely matches 6×9 photos, and C9 and C10 closely match 7×7 and 5×5 slides, respectively.
DIN 476 provides for formats larger than A0, denoted by a prefix factor. In particular, it lists the formats 2A0 and 4A0, which are twice and four times the size of A0 respectively.
However, ISO 216:2007 notes 2A0 and 4A0 in the table of Main series of trimmed sizes (ISO A series) as well: "The rarely used sizes [2A0 and 4A0] which follow also belong to this series."
DIN 476 also used to specify slightly tighter tolerances than ISO 216:
±1 mm (0.04 in) for dimensions up to 150 mm (5.9 in),
±1.5 mm (0.06 in) for lengths in the range 150 mm to 600 mm (5.9 to 23.6 in) and
±2 mm (0.08 in) for any dimension above 600 mm (23.6 in).
There used to be a standard, DIN 198, that was just a table of recommended A series formats for a number of business applications. The 1976 edition of this standard introduced a size 1⁄3 A4 (210 mm × 99 mm) and suggested it for some forms and slips.
Swedish extensions
The Swedish standard SIS 01 47 11 generalized the ISO system of A, B, and C formats by adding D, E, F, and G formats to it. Its D format sits between a B format and the next larger A format (just like C sits between A and the next larger B). The remaining formats fit in between all these formats, such that the sequence of formats A4, E4, C4, G4, B4, F4, D4, *H4, A3 is a geometric progression, in which the dimensions grow by a factor of 2^(1/16) (about 4.4%) from one size to the next. However, this SIS standard does not define any size between a D format and the next larger A format (called *H in the previous example).
Of these additional formats, G5 (169 × 239 mm) and E5 (155 × 220 mm) are popular in Sweden and the Netherlands for printing dissertations, but the other formats have not turned out to be particularly useful in practice. They have not been adopted internationally and the Swedish standard has been withdrawn.
The Swedish and German D series basically contain the same sizes but are offset by one, i.e. DIN D4 equals SIS D5 and so on.
Japanese variation
The Japanese standard JIS P 0138 defines two main series of paper sizes. The JIS A-series is identical to the ISO A-series except that it has slightly different tolerances. The area of B-series paper is 1.5 times that of the corresponding A-paper (instead of the factor for the ISO B-series), so the length ratio is approximately 1.22 times the length of the corresponding A-series paper. The aspect ratio of the paper is the same as for the A-series paper. Both A- and B-series paper are widely available in Japan, Taiwan and China, and most photocopiers are loaded with at least A4 and either one of A3, B4, and B5 paper.
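A quick check of this rule (illustrative only), deriving JIS B4 from A4:

```python
# A JIS B sheet has 1.5 times the area of the same-numbered A sheet, so each
# side is longer by sqrt(1.5) ~ 1.2247.  Starting from A4 = 210 mm x 297 mm:
from math import sqrt

a4 = (210, 297)
jis_b4 = tuple(round(side * sqrt(1.5)) for side in a4)
print(jis_b4)   # (257, 364), the familiar JIS B4 size
```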
Cascading Style Sheets (CSS) only supports the most popular of the Japanese sizes, JIS-B4 and JIS-B5.
A popular size for books, dubbed AB, combines the shorter edges of A4 and B4. Another two with an aspect ratio approximating 16:9 are 20% narrower variants of A6 and B6, respectively, the latter resulting from cutting JIS B1 into 40 sheets (thus "B40").
There are also a number of traditional paper sizes, which are now used mostly by printers. The most common of these old series is the Shiroku-ban and the Kiku paper sizes.
Chinese extensions
The Chinese standard GB/T 148–1997, which replaced GB 148–1989, documents the standard ISO series, A and B, but adds a custom D series. This Chinese format originates from the Republic of China (1912–1949). The D series is not identical to the German or Swedish D series. It does not strictly follow the same principles as ISO paper sizes: the aspect ratio is only very roughly √2. The short side of the size is always 4 mm longer than the long side of the next smaller size. The long side of the size is always exactly – i.e. without further rounding – twice as long as the short side of the next smaller size.
Indian variants
The Bureau of Indian Standards recommends the "ISO-A series" size of drawing sheet for engineering drawing works. The Bureau of Indian Standards specifies all the recommendations for engineering drawing sheets in its bulletin IS 10711: 2001.
The Bureau extended the ISO-A series with Special Elongated Sizes (Second Choice). These sizes are achieved by increasing the shorter dimensions of a sheet of the ISO A series to lengths that are multiples of the shorter dimensions of the chosen basic sheet; in effect, all of the Indian elongated sizes emulate having several regular-size sheets joined on their long edge.
There are also Exceptional Elongated Sizes (Third Choice). These sizes are obtained by increasing the shorter dimensions of a sheet of the ISO-A series to lengths that are multiples of the shorter dimensions of the chosen basic sheet. These sizes are used when a very large or extra elongated sheet is needed.
Soviet variants
The first standard of paper size in the Soviet Union was OST 303 in 1926. Six years later, it was replaced by OST 5115 which generally followed DIN 476 principles, but used Cyrillic lowercase letters instead of Latin uppercase, had the second row shifted so that б0 (B0) roughly corresponded to B1 and, more importantly, had slightly different sizes:
The general adaptation of ISO 216 in the Soviet Union, which replaced OST 5115, was GOST 9327. In its 1960 version, it lists formats down to A13, B12 and C8 and also specifies 1⁄2, 1⁄4 and 1⁄8 prefixes for halving the shorter side (repeatedly) for stripe formats, e.g. 1⁄2 A4 = 105 mm × 297 mm.
A standard for technical drawings from 1960, GOST 3450, introduces alternative numeric format designations to deal with very high or very wide sheets.
These 2-digit codes are based upon A4 = "11": The first digit is the factor the longer side (297 mm) is multiplied by and the second digit is the one for the shorter side (210 mm), so "24" is 2×297 mm × 4×210 mm = 594 mm × 840 mm.
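The designation rule lends itself to a tiny helper function (purely illustrative; the function name is made up):

```python
# GOST 3450 numeric designations: code "NM" means N x 297 mm by M x 210 mm,
# so A4 itself is "11".

def gost_size(code):
    n, m = int(code[0]), int(code[1])
    return n * 297, m * 210

print(gost_size("11"))   # (297, 210) -> A4
print(gost_size("24"))   # (594, 840) -> the worked example above
```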
GOST 3450 from 1960 was replaced by ESKD GOST 2301 in 1968, but the numeric designations remained in popular use much longer.
The new designations were not purely numeric but consisted of the ISO label followed by an 'x', or possibly the multiplication sign '×', and the factor, e.g. DIN 2A0 = GOST A0×2, but DIN 4A0 ≠ GOST A0×4, also listed are: A0×3, A1×3, A1×4, A2×3–A2×5, A3×3–A3×7, A4×3–A4×9. The formats ...×1 and ...×2 usually would be aliases for existing formats.
Elongated sizes
ISO 5457, last updated in 1999, introduces elongated sizes that are formed by a combination of the dimensions of the short side of an A-size (e.g. A2) with the dimensions of the long side of another larger A-size (e.g. A0). The result is a new size, for example with the abbreviation A2.0 we would have a 420 mm × 1189 mm size.
These drawing paper sizes have been adopted by ANSI/ASME Y14.1M for use in the United States, alongside A0 through A4 and alongside inch-based sizes.
International envelope and insert sizes
DIN 5008 (previously DIN 676) prescribes, among many other things, two variants, A and B, for the location of the address field on the first page of a business letter and how to fold the A4 sheet accordingly, so the only part visible of the main content is the subject line.
International raw sizes
ISO 5457 specifies drawing paper sizes with a trimmed size equal to the A series sizes from A4 upward. The untrimmed sizes are 3 to 4 cm larger and rounded to the nearest centimetre. A0 through A3 are used in landscape orientation, while A4 is used in portrait orientation. Designations for preprinted drawing paper include the base sizes and a suffix, either T for trimmed or U for untrimmed sheets.
The withdrawn standard ISO 2784 did specify sizes of continuous, fan-fold forms based upon whole inches as was common for paper in continuous lengths in automatic data processing (ADP) equipment. Specifically, 12 inches (304.8 mm) were considered an untrimmed variant of the A4 height of 297 mm.
Transitional paper sizes
PA4 or L4
A transitional size called PA4 (210 mm × 280 mm), sometimes dubbed L4, was proposed for inclusion into the ISO 216 standard in 1975. It has the height of Canadian P4 paper (215 mm × 280 mm, about 8½ in × 11 in) and the width of international A4 paper (210 mm × 297 mm), i.e. it uses the smaller value among the two for each side. The table shows how this format can be generalized into an entire format series.
The PA formats did not end up in ISO 216, because the committee decided that the set of standardized paper formats should be kept to the minimum necessary. However, PA4 remains of practical use today. In landscape orientation, it has the same 4:3 aspect ratio as the displays of traditional TV sets, some computer displays (e.g. the iPad) and data projectors. PA4, with appropriate margins is, therefore, a good choice as the format of presentation slides.
As a compromise between the two most popular paper sizes globally, PA4 is used today by many international magazines, because it can be printed easily on equipment designed for either A4 or US Letter. That means (in practice) it has turned out to be not so much a paper size as a page format. Apple, for instance, requires this format for digital music album booklets.
The size 210 mm × 280 mm was documented in the Canadian standard CAN2-200.2-M79 "Common Image Area for Paper Sizes P4 and A4".
F4
A non-standard F4 paper size is common in Southeast Asia. It is a transitional size with the shorter side of ISO A4 (210 mm, about 8.27 inches) and the longer side of British Foolscap (13 inches, 330 mm). ISO A4 is exactly 90% the height of F4.
This size is sometimes also known as (metric) 'foolscap' or 'folio'.
In some countries, the narrow side of F4 is slightly broader: 8.5 inches (216 mm) or 215 mm. It is then equivalent to the US Government Legal and Foolscap Folio sizes.
In Indonesia, where it is the legally-mandated paper size for use in the printing of national legislation, it is sometimes called Folio or HVS (from , "wood-free writing paper").
In the Philippines, it is commonly called long bond as opposed to short bond, which refers to the US Letter paper size.
A sheet of F4 can be cut from a sheet of SRA4 with very little wastage. The size is also smaller than its Swedish equivalent SIS F4 at 239 mm × 338 mm.
Weltformat
The Weltformat (world format) was developed by German chemist Wilhelm Ostwald in 1911 as part of Die Brücke, around the same time DIN 476 was first discussed. It shares the same design primitives, especially the aspect ratio, but is based upon 1 cm as the short edge of the smallest size. Sizes were designated by roman numerals.
The result, for the fourth through fourteenth size, is close to the DIN/ISO C series.
The sizes have been used for some print products in the early 20th century in central Europe but got replaced by DIN sizes almost entirely. However, it was successfully adopted from 1913 onwards for posters and placards in Switzerland. Even today, the default size for posters in Swiss advertisements, F4, is colloquially known as Weltformat, although it measures 895 mm × 1280 mm, i.e. 1 cm less than size XIV. This poster size goes alongside F12 "Breitformat" 2685 mm × 1280 mm (3 × F4) and F24 "Großformat" 2685 mm × 2560 mm (2 × 3 × F4) as well as F200 "Cityformat" 1165 mm × 1700 mm.
A0a
Although the movement is towards the international standard metric paper sizes, on the way there from the traditional ones there has been at least one new size just a little larger than that used internationally.
British architects and industrial designers once used a size called "Antiquarian", , as listed above, but given in the New Metric Handbook (Tutt & Adler 1981) as for board size. This is a little larger than ISO A0, 841 mm × 1189 mm. So for a short time, a size called A0a of was used in Britain, which is actually just a slightly shorter version of ISO B0 at 1414 mm.
Pliego
The most common paper sizes used for commercial and industrial printing in Colombia are based upon a size referred to as pliego that is ISO B1 (707 mm × 1000 mm) cut to full decimetres. Smaller sizes are derived by halving as usual and just get a vulgar fraction prefix: 1⁄2 pliego and 1⁄4 pliego.
K
In East Asia, i.e. Japan, Taiwan, and China in particular, there is a number of similar paper sizes in common use for book-making and other purposes.
Confusingly, a single designation is often used with slightly different edge measures: The base sheet is labeled 1K (or , where K stands for ; or in Japanese); all smaller sizes derived by halving have the power of two number, i = 2n, in front of the uppercase letter K. The number in ISO designations, in contrast, is the exponent n that would yield the number of sheets cut from the base sizes.
The sizes of such folios depend on the base sheet. Pre-metric standards include:
The imperial 菊判 (kiku-ban), named after the Chrysanthemum watermark on imperial paper, measuring 636 mm × 939 mm.
The four-by-six 四六判 shi-roku-ban (4×6 or 4/6), where the final size at 32K was measured 4 by 6 sun in Japan, c. mm, or slightly more, mm i.e. sun.
In Taiwan, the traditional base size 1K inherited from Japan is sometimes quoted as measuring inches exactly, which is off by c. 1 millimetre from the commonly quoted metric base size of mm, which is directly derived from sun or shaku.
The three-by-five 三五判 (3×5 or 3/5), where the final size at 32K is slightly less than 3 by 5 sun, often given as mm which would be c. sun.
The 4/6 standard has given rise to newer metric book-size standards, including:
The modern Japanese size for books, simply labeled B and is specified as millimetres. It is not directly related to the similar JIS B series, where B1 is slightly smaller.
The Chinese SAC D series.
North American paper sizes
Inch-based loose sizes
The United States, Canada, and the Philippines primarily use a different system of paper sizes from the rest of the world. The current standard sizes are unique to those countries, although due to the size of the North American market and proliferation of both software and printing hardware from the region, other parts of the world have become increasingly familiar with these sizes (though not necessarily the paper itself). Some traditional North American inch-based sizes differ from the Imperial British sizes described below.
Common American loose sizes
Letter, Legal and Ledger/Tabloid are by far the most commonly used of these for everyday activities, and the only ones included in Cascading Style Sheets (CSS).
The origins of the exact dimensions of Letter size paper are lost in tradition and not well documented. The American Forest and Paper Association argues that the dimension originates from the days of manual papermaking and that the 11-inch length of the page is about a quarter of "the average maximum stretch of an experienced vatman's arms." However, this does not explain the width or aspect ratio.
Outside of North America, Letter size may also be known as "American Quarto". If one accepts some trimming, the size is indeed one quarter of the old Imperial paper size known as Demy, 17½ in × 22½ in.
Manufacturers of computer printers, however, recognize inch-based Quarto as or long.
Usage and adoption
US paper sizes are currently standard in the United States and are the most commonly used formats at least in the Philippines, most of Mesoamerica and Chile. The latter use US Letter, but their Legal size is 13 inches tall (recognized as Foolscap by printer manufacturers), i.e. one inch shorter than its US equivalent.
Mexico and Colombia, for instance, have adopted the ISO standard, but the US Letter format is still the system in use throughout both countries. It is rare to encounter ISO standard papers in day-to-day uses, with Carta (Letter), Oficio (Government-Legal), and Doble carta (Ledger/Tabloid) being nearly universal.
Printer manufacturers, however, recognize Oficio as 13.4 inches (340 mm) long.
In Canada, select US paper sizes are a de facto standard.
Variant American loose sizes
There is an additional paper size, 8 in × 10½ in, to which the name Government-Letter was given by the IEEE Printer Working Group (PWG). It was prescribed by Herbert Hoover when he was Secretary of Commerce to be used for US government forms, apparently to enable discounts from the purchase of paper for schools, but more likely due to the standard use of trimming books (after binding) and paper from the standard letter size paper to produce consistency and allow "bleed" printing. In later years, as photocopy machines proliferated, citizens wanted to make photocopies of the forms, but the machines did not generally have this size of paper in their bins. Ronald Reagan therefore had the US government switch to regular Letter size, which is half an inch both longer and wider. The former government size is still commonly used in spiral-bound notebooks, for children's writing and the like, a result of trimming from the current Letter dimensions.
By extension of the American standards, the halved Letter size, 8½ in × 5½ in, meets the needs of many applications. It is variably known as Statement, Stationery, Memo, Half Letter, Half A (from ANSI sizes) or simply Half Size, and as Invoice by printer manufacturers. Like the similar-sized ISO A5, it is used for everything from personal letter writing to official aeronautical maps. Organizers, notepads, and diaries also often use this size of paper; thus 3-ring binders are also available in this size. Booklets of this size are created using word processing tools with landscape printing in two columns on letter paper which are then cut or folded into the final size.
A foot-long sheet with the common width of Letter and (Government) Legal, i.e. 8½ in × 12 in, would have an aspect ratio very close to the square root of two as used by international paper sizes and would actually almost exactly match ISO RA4 (215 mm × 305 mm). This size is sometimes known as European Fanfold.
While Executive refers to 7¼ in × 10½ in in America, the Japanese organization for standardization specified it as 8½ in × 13 in, which is elsewhere known as Government Legal or Foolscap.
Standardized American paper sizes
In 1996, the American National Standards Institute adopted ANSI/ASME Y14.1 which defined a regular series of paper sizes based upon the de facto standard Letter size which it assigned "ANSI A", intended for technical drawings, hence sometimes labeled "Engineering". This series is somewhat similar to the ISO standard in that cutting a sheet in half would produce two sheets of the next smaller size and therefore also includes Ledger/Tabloid as "ANSI B". Unlike the ISO standard, however, the arbitrary base size forces this series to have two alternating aspect ratios. For example, ANSI A is less elongated than A4, while ANSI B is more elongated than A3.
The Canadian standard CAN2 9.60-M76 and its successor CAN/CGSB 9.60-94 "Paper Sizes for Correspondence" specified paper sizes P1 through P6, which are the U.S. paper sizes rounded to the nearest 5 mm. All custom Canadian paper size standards were withdrawn in 2012.
With care, documents can be prepared so that the text and images fit on either ANSI or their equivalent ISO sheets at a 1:1 reproduction scale.
Other, informal, larger sizes continuing the alphabetic series illustrated above exist, but they are not part of the series per se, because they do not exhibit the same aspect ratios. For example, Engineering F size is 28 in × 40 in, with a ratio of ca. 1.4286:1; it is commonly required for NAVFAC drawings, but is generally less commonly used. Engineering G size is a roll format with a fixed height and a variable width. Engineering H through N sizes are also roll formats.
Such huge sheets were at one time used for full-scale layouts of aircraft parts, automotive parts, wiring harnesses, and the like, but are slowly being phased out, due to widespread use of computer-aided design (CAD) and computer-aided manufacturing (CAM). Some visual arts fields also continue to use these paper formats for large-scale printouts, such as for displaying digitally painted character renderings at life-size as references for makeup artists and costume designers or to provide an immersive landscape reference.
Architectural sizes
In addition to the system as listed above, there is a corresponding series of paper sizes used for architectural purposes defined in the same standard, ANSI/ASME Y14.1, which is usually abbreviated "Arch". This series also shares the property that bisecting each size produces two of the size below, with alternating aspect ratios. It may be preferred by North American architects because the aspect ratios (4:3 and 3:2) are ratios of small integers, unlike their ANSI (or ISO) counterparts. Furthermore, the aspect ratio 4:3 matches the traditional aspect ratio for computer displays.
The size Arch E1 has a different aspect ratio because it derives from adding 6 inches to each side of Arch D or subtracting the same amount from Arch E. Printer manufacturers recognize it as wide-format. An intermediate size between Arch C and D does not exist.
Demitab
The demitab or demi-tab (a portmanteau of the French word 'demi' [half] and 'tabloid') is roughly one half of a sheet of tabloid-size paper.
"Demitab", "broadsheet" or "tabloid" format newspapers are not necessarily printed on paper measuring exactly their nominal size.
Notebook sizes
The sizes listed above are for paper sold loose in reams. There are many sizes of tablets of paper, that is, sheets of paper bound at one edge, usually by a strip of plastic or hardened PVA adhesive. Often there is a pad of cardboard (also known as paperboard or greyboard) at the bottom of the stack. Such a tablet serves as a portable writing surface, and the sheets often have lines printed on them, usually in non-repro blue, to make writing in a line easier. An older means of binding is to have the sheets stapled to the cardboard along the top of the tablet; there is a line of perforated holes across every page just below the top edge from which any page may be torn off. Lastly, a pad of sheets each weakly stuck with adhesive to the sheet below, trademarked as "Post-It" or "Stick-Em" and available in various sizes, serves as a sort of tablet.
"Letter pads" are , while the term "legal pad" is often used by laymen to refer to pads of various sizes including those of . Stenographers use "steno pads" of . The steno pad size is also used by Scholastic Corporation as the textblock size of their hardcover editions of the Harry Potter novels, with paperback editions using DIN D6.
Envelope sizes
This implies that all postcards have an aspect ratio in the range from 1.18 to 1.71, but the machinable aspect ratio is further restricted to a minimum of 1.30.
The only ISO 216 size in the US postcard range is A6.
The theoretical maximum aspect ratio for enveloped letters is 3.29, but it is explicitly limited to 2.50.
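These aspect-ratio bounds are simple quotients of the postal dimensional limits. The sketch below recomputes them; the minimum and maximum U.S. Postal Service dimensions used here are recalled figures included purely for illustration, not values taken from this article.

```python
# Recompute the aspect-ratio bounds quoted above from assumed USPS limits
# given as (height, length) in inches. Treat these limits as illustrative.
postcard_min = (3.5, 5.0)
postcard_max = (4.25, 6.0)
letter_min = (3.5, 5.0)
letter_max = (6.125, 11.5)

# Flattest postcard: shortest allowed length on the tallest allowed card.
min_ratio_postcard = postcard_min[1] / postcard_max[0]   # 5 / 4.25
# Most elongated postcard: longest allowed length on the shortest card.
max_ratio_postcard = postcard_max[1] / postcard_min[0]   # 6 / 3.5
# Most elongated enveloped letter that still fits the dimensional envelope.
max_ratio_letter = letter_max[1] / letter_min[0]          # 11.5 / 3.5

print(f"postcard aspect ratios: {min_ratio_postcard:.2f} to {max_ratio_postcard:.2f}")
print(f"letter theoretical maximum: {max_ratio_letter:.2f} (machinable limit 2.50)")
```

With these assumed inputs the printed values come out to 1.18, 1.71 and 3.29, matching the figures quoted above.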
Personal organizer sizes
Index card sizes
Photography sizes
Grain
Most industry standards express the direction of the grain last when giving dimensions (that is, 17 × 11 inches is short grain paper and 11 × 17 inches is long grain paper), although alternatively the grain alignment can be explicitly indicated by underlining the dimension that runs with the grain (11 × 17 with the 11 underlined is short grain) or with the letter "M" for "machine" (11M × 17 is short grain). Grain is important because the paper will crack if folded across the grain: for example, if a sheet 17 × 11 inches is to be folded to divide the sheet into two 8.5 × 11 halves, then the grain should run along the 11-inch side, parallel to the fold. Paper intended to be fed into a machine that will bend the paper around rollers, such as a printing press, photocopier or typewriter, should be fed grain edge first so that the axis of the rollers is along the grain.
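A tiny helper can make the "grain direction last" convention and the fold rule concrete; the function names below are purely illustrative and are not part of any standard.

```python
# Sketch of the convention described above: the grain direction is the
# last dimension given, and folds should run parallel to the grain.

def grain(other, grain_dim):
    """Classify a sheet given as (other dimension, grain dimension)."""
    return "short grain" if grain_dim < other else "long grain"

def fold_runs_along_grain(sheet, fold_line_length):
    """Simplistic check: the fold line should match the grain-side length."""
    _, grain_dim = sheet
    return fold_line_length == grain_dim

print(grain(17, 11))                        # short grain, as in the text
print(grain(11, 17))                        # long grain
# Folding a 17 x 11 sheet into two 8.5 x 11 halves gives an 11-inch fold
# line, which runs along the grain and so should not crack.
print(fold_runs_along_grain((17, 11), 11))  # True
```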
Traditional inch-based paper sizes
Traditionally, a number of different sizes were defined for large sheets of paper, and paper sizes were defined by the sheet name and the number of times it had been folded. Thus a full sheet of "royal" paper was 25 × 20 inches, and "royal octavo" was this size folded three times, so as to make eight sheets, and was thus 10 × 6¼ inches. Royal sizes were used for posters and billboards.
Imperial sizes were used in the United Kingdom and its territories and some survived in US book printing.
Traditional British paper sizes
Traditional British paper sizes are referred to by the number of sheets that can be cut from a sheet of uncut paper. The standard Imperial uncut paper sizes used in offices and schools were "foolscap", "post", and "copy". Each uncut sheet can then be halved into folios, quartered into quartos, or cut into eighths to make octavos.
Traditional French paper sizes
Before the adoption of the ISO standard system in 1967, France had its own paper size system. Raisin format is still in use today for artistic paper. All are standardized by the AFNOR. Their names come from the watermarks that the papers were branded with when they were handcrafted, which is still the case for certain art papers. They also generally exist in double versions where the smallest measure is multiplied by two, or in quadruple versions where both measures have been doubled.
Business card sizes
The international business card has the size of the smallest rectangle containing a credit card rounded to full millimetres, but in Western Europe, it is rounded to half centimetres (rounded up in Northern Europe), in Eastern Europe to full centimetres, in North America to half inches. However, credit card size, as defined in ISO/IEC 7810, also specifies rounded corners and thickness.
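As a worked example of these rounding rules, the sketch below starts from the ISO/IEC 7810 ID-1 (credit card) footprint of 85.60 mm × 53.98 mm and rounds it to the granularities named above; the resulting sizes are illustrative rather than authoritative.

```python
import math

# Round the ID-1 credit-card footprint to the granularities described above.
ID1_MM = (85.60, 53.98)

def round_dims(dims, step, up=False):
    """Round each dimension to the nearest multiple of step (ceiling if up)."""
    f = math.ceil if up else round
    return tuple(f(d / step) * step for d in dims)

print("full millimetres:    ", round_dims(ID1_MM, 1))           # 86 x 54 mm
print("half centimetres:    ", round_dims(ID1_MM, 5))           # 85 x 55 mm
print("half cm, rounded up: ", round_dims(ID1_MM, 5, up=True))  # 90 x 55 mm
print("full centimetres:    ", round_dims(ID1_MM, 10))          # 90 x 50 mm

# North America rounds to half inches instead.
id1_in = tuple(d / 25.4 for d in ID1_MM)
print("half inches:         ", round_dims(id1_in, 0.5))         # 3.5 x 2.0 in
```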
Newspaper sizes
Newspapers have a separate set of sizes.
Compact: aspect ratio 1.54
Berliner: aspect ratio 1.5
Rhenish: aspect ratio 1.4–1.5
Tabloid: aspect ratio 1.34
Broadsheet: aspect ratio 1.25
In a recent trend many newspapers have been undergoing what is known as "web cut down", in which the publication is redesigned to print using a narrower (and less expensive) roll of paper. In extreme examples, some broadsheet papers are nearly as narrow as traditional tabloids.
See also
Book size
Continuous stationery
Hole punch – filing holes
Margin
New Zealand standard for school stationery
PC LOAD LETTER
Paper density
Photo print sizes
Tiled printing
Toilet paper § Sheet size
Units of paper quantity – ream, quire etc.
References
Further reading
International standard ISO 216, Writing paper and certain classes of printed matter—Trimmed sizes—A and B series. International Organization for Standardization, Geneva, 1975.
International standard ISO 217: Paper—Untrimmed sizes—Designation and tolerances for primary and supplementary ranges, and an indication of machine direction. International Organization for Standardization, Geneva, 1995.
External links
Notably: About margin settings for using just the space common to both A4 and US Letter.
Book design
Mechanical standards
Paper
Stationery
Technical drawing | Paper size | [
"Engineering"
] | 8,299 | [
"Design engineering",
"Mechanical standards",
"Technical drawing",
"Civil engineering",
"Mechanical engineering",
"Book design",
"Design"
] |
166,125 | https://en.wikipedia.org/wiki/List%20of%20Quercus%20species | The genus Quercus contains about 500 known species, plus about 180 hybrids between them. The genus, as is the case with many large genera, is divided into subgenera and sections. Traditionally, the genus Quercus was divided into the two subgenera Cyclobalanopsis, the ring-cupped oaks, and Quercus, which included all the other sections. However, a comprehensive revision in 2017 identified different relationships. Now the genus is commonly divided into a subgenus Quercus and a subgenus Cerris, with Cyclobalanopsis included in the latter. The sections of subgenus Quercus are mostly native to the New World, with the notable exception of the white oaks of sect. Quercus and the endemic Quercus pontica. In contrast, the sections of the subgenus Cerris are exclusively native to the Old World.
Unless otherwise indicated, the lists which follow contain all the species accepted by Plants of the World Online, plus selected hybrids that are also accepted, with placement into sections based on a list produced by Denk et al. for their 2017 classification of the genus.
Legend
Species with evergreen foliage ("live oaks") are tagged '#'. Species in the genus have been recategorized between deciduous and evergreen on numerous occasions, although this does not necessarily mean that species in the two groups are closely related.
Subgenus Quercus
Section Quercus
Section Mesobalanus was included in section Quercus in the 2017 classification used here. Other synonyms include Q. sect. Albae and Q. sect. Macrocarpae. The section comprises the white oaks from Europe, Asia, north Africa, Central and North America. Styles short; acorns mature in 6 months, sweet or slightly bitter, inside of acorn shell hairless.
Quercus aculcingensis Trel. – Mexico
Quercus ajoensis C.H.Mull. – Ajo Mountain shrub oak, Blue shrub oak – Arizona, New Mexico, Baja California
Quercus alba L. – white oak – eastern and central North America
Quercus aliena Blume – Oriental white oak – eastern Asia
Quercus alpescens Trel. – Mexico
Quercus ariifolia Trel. – Mexico
Quercus arizonica Sarg. – Arizona white oak – # southwestern U.S., northwestern Mexico
Quercus austrina Small – bluff oak – southeastern North America
Quercus × basaseachicensis C.H.Mull. – Mexico
Quercus × bebbiana C.K.Schneid. — Bebb's oak — northeastern North America
Quercus berberidifolia Liebm. – California scrub oak – # California
Quercus bicolor Willd. – swamp white oak – eastern and midwestern North America
Quercus × bimundorum E.J.Palmer — two worlds oak
Quercus boyntonii Beadle – Boynton's post oak – south central North America
Quercus canariensis Willd. – Mirbeck's oak or Algerian oak – # North Africa & Spain
Quercus carmenensis C.H.Mull. – Carmen oak – Coahuila and Texas
Quercus × cerrioides Willk. & Costa – east Spain
Quercus chapmanii Sarg. – Chapman oak – # southeastern North America
Quercus chartacea Trel. – Mexico
Quercus chihuahuensis Trel. – Chihuahua oak – northern Mexico and Texas
Quercus congesta C.Presl – Italy
Quercus convallata Trel. – Mexico
Quercus copeyensis C.H.Mull. – # Costa Rica, Panama
Quercus cornelius-mulleri Nixon & K.P.Steele – Muller oak – # southwestern North America
Quercus corrugata Hook. – Mexico, Central America
Quercus dalechampii Ten. – southeastern Europe
Quercus deliquescens C.H.Mull. – Mexico
Quercus dentata Thunb. – daimyo oak – eastern Asia
Quercus depressipes Trel. – Davis Mountain oak – northern Mexico and Texas
Quercus deserticola Trel. – # Mexico
Quercus diversifolia Née – Mexico
Quercus douglasii Hook. & Arn. – blue oak – California
Quercus dumosa Nutt. – coastal scrub oak – # southern California, Baja California, Arizona
Quercus durata Jeps. – leather oak – # California
Quercus edwardsiae C.H.Mull. – Mexico
Quercus engelmannii Greene – Engelmann oak – # southern California, Baja California
Quercus estremadurensis O.Schwarz – Portugal, Spain, Morocco
Quercus fabrei Hance – Faber's oak – central to southern China
Quercus faginea Lam. – Portuguese oak – # southwestern Europe
Quercus frainetto Ten. — Hungarian oak — southeastern Europe
Quercus frutex Trel. – Mexico
†Quercus furuhjelmi — Eocene to Miocene - Alaska, Kazakhstan
Quercus gambelii Nutt. – Gambel oak – southwestern North America
Quercus garryana Douglas ex Hook. – Oregon white oak or Garry oak – western North America
Quercus germana Schltdl. & Cham. – Mexico
Quercus glabrescens Benth. – Mexico
Quercus glaucescens Bonpl. – encino blanco – Mexico
Quercus glaucoides M.Martens & Galeotti – # Mexico
Quercus greggii (A.DC.) Trel. – # Mexico
Quercus griffithii Hook.f. & Thomson ex Miq. – southeast Asia
Quercus grisea Liebm. – gray oak – # southwestern North America
Quercus hartwissiana Steven – Strandzha oak – southeastern Bulgaria, northern Turkey, western Georgia, southwestern Russia
Quercus havardii Rydb. – Havard oak, shinnery oak, shin oak – south central North America
†Quercus hiholensis — Miocene — # Washington State
Quercus hinckleyi C.H.Mull. – Hinckley oak – # Texas, northwestern Mexico
Quercus ichnusae Mossa, Bacch. & Brullo – Sardinia
Quercus infectoria G.Olivier – Aleppo oak, Cyprus oak – southern Europe, southwestern Asia
Quercus insignis M.Martens & Galeotti – Mexico, Belize, Costa Rica, Guatemala, Panama
Quercus intricata Trel. – Coahuila scrub oak – # two isolated localities in west Texas, northern Mexico
Quercus invaginata Trel. – Mexico
Quercus john-tuckeri Nixon & C.H.Mull. – Tucker's oak – California
Quercus juergensenii Liebm. – Mexico
Quercus kotschyana O.Schwarz – Lebanon
Quercus laceyi Small – lacey oak – Edwards Plateau of Texas, northern Mexico
Quercus laeta Liebm. – Mexico
Quercus lancifolia Schltdl. & Cham. – southwestern North America, Mexico, South America
Quercus liebmannii Oerst. ex Trel. — Mexico
Quercus lobata Née – valley oak or California white oak – California
Quercus lusitanica Lam. – gall oak or Lusitanian oak – Iberia, North Africa
Quercus lyrata Walter – overcup oak – eastern North America
Quercus × macdonaldii Greene & Kellogg – California
Quercus macdougallii Martínez – Mexico
Quercus macranthera Fisch. & C.A.Mey. ex Hohen. – Caucasian oak or Persian oak – western Asia
Quercus macrocarpa Michx. – bur oak – eastern and central North America
Quercus magnoliifolia Née – Mexico
Quercus manzanillana Trel. – Mexico
Quercus margarettae (also Q. margaretiae and Q. margarettiae) — sand post oak — southeastern North America
Quercus martinezii C.H.Mull. – # Mexico
Quercus michauxii Nutt. – swamp chestnut oak – eastern North America
Quercus microphylla Née – Mexico
Quercus mohriana Buckley ex Rydb. – Mohr oak – # southwestern North America
Quercus mongolica Fisch. ex Ledeb. – Mongolian oak – eastern Asia
Quercus monnula Y.C.Hsu & H.Wei Jen – south-central China
Quercus montana Willd. – chestnut oak – eastern North America (= Quercus prinus)
Quercus muehlenbergii Engelm. – Chinkapin oak – eastern, central, and southwestern US (West Texas and New Mexico), northern Mexico
Quercus ningqiangensis S.Z.Qu & W.H.Zhang – southeastern China
Quercus oblongifolia Torr. – Arizona blue oak, Southwestern blue oak, or Mexican blue oak – # southwestern U.S., northwestern Mexico
Quercus obtusata Bonpl. – Mexico
Quercus oglethorpensis W.H.Duncan – Oglethorpe oak – southeastern North America
Quercus oocarpa Liebm. – Mexico
Quercus opaca Trel. – Mexico
Quercus pacifica Nixon & C.H.Mull. – # Channel Islands, California
Quercus peduncularis Née – # Central America
Quercus perpallida Trel. – Mexico
Quercus petraea (Matt.) Liebl. – sessile oak, durmast oak – Europe, Anatolia
Quercus petraea subsp. polycarpa (Schur) Soó - Georgian oak - Austria to Iran and the Caucasus
Quercus petraea subsp. pinnatiloba (K.Koch) Menitsky - Lebanon-Syria, Turkey, South Caucasus
Quercus polymorpha Schltdl. & Cham. – Monterrey oak, Mexican white oak – # Mexico and extreme S. Texas
Quercus porphyrogenita Trel. – Mexico
Quercus potosina Trel. – Mexico
Quercus praeco Trel. – Mexico
Quercus pringlei Seemen ex Loes. – Mexico
Quercus prinoides Willd. – dwarf chinkapin oak – eastern North America
Quercus pubescens Willd. – downy oak or Italian oak – Europe, Anatolia
Quercus pungens Liebm. – sandpaper oak or pungens oak – southwestern U.S., Mexico
Quercus purulhana Trel. – Mexico, Central America
Quercus pyrenaica Willd. – Pyrenean oak – southwestern Europe
Quercus rekonis Trel. – Mexico
Quercus repanda Bonpl. – Mexico
Quercus resinosa Liebm. – Mexico
Quercus robur L. – pedunculate oak, English oak or French oak – Europe, West Asia
Quercus rugosa Née – netleaf oak or Rugosa oak – # southwestern U.S., northwestern Mexico
Quercus × schuettei Trel. — Schuette's oak — US, Canada
Quercus sebifera Trel. – # Mexico
Quercus segoviensis Liebm. – Mexico and northern Central America
Quercus serrata Murray – bao li – # China, Taiwan, Japan, Korea
Quercus shennongii C.C.Huang & S.H.Fu – southeastern China
Quercus shingjenensis Y.T.Chang – China (Guizhou)
Quercus similis Ashe – swamp post oak – southeastern North America
Quercus sinuata Walter – bastard oak – southern US (formerly identified as Quercus durandii)
Quercus sororia Liebm. – Mexico
Quercus stellata Wangenh. – post oak – eastern North America
Quercus striatula Trel. – Mexico (Sierra Madre Occidental and Mexican Plateau ranges)
Quercus subspathulata Trel. – Mexico
Quercus supranitida C.H.Mull. – Mexico
Quercus tinkhamii C.H.Mull. — Mexico
Quercus toumeyi Sarg. — Toumey oak — # southwest New Mexico, southeastern Arizona, northern Mexico
Quercus tuberculata Liebm. — Mexico
Quercus turbinella Greene — turbinella oak, Arizona Blue Shrub oak, Shrub live oak or scrub live oak — # southwestern North America
Quercus vaseyana Buckley — Vasey oak — # southwestern North America
Quercus verde C.H.Mull. — Mexico
Quercus vicentensis Trel. — El Salvador, Guatemala, and southern Mexico
Quercus vulcanica Boiss. & Heldr. ex Kotschy — Kasnak oak — southwestern Asia
Quercus welshii R.A.Denham — havard oak, Utah sand oak, wavy leaf oak — # southwestern North America
Quercus wutaishanica Mayr — Liaoning oak — China, Mongolia
Quercus xylina Scheidw. — Mexico
Section Ponticae
Species are native to Western Asia and Western North America. They produce catkins up to 10 cm long; the acorns mature annually.
Quercus pontica — Pontine oak — western Asia
Quercus sadleriana — deer oak — # southwestern Oregon, northern California
Section Protobalanus
The intermediate oaks. Southwest USA and northwest Mexico. Styles short, acorns mature in 18 months, very bitter, inside of acorn shell woolly.
Quercus cedrosensis — Cedros Island oak — # California + Baja California
Quercus chrysolepis — canyon live oak — # southwestern North America
Quercus palmeri — Palmer oak — # California, western Arizona
Quercus tomentella — island oak — # offshore islands of California + Baja California
Quercus vacciniifolia — huckleberry oak — # southwestern North America
Section Lobatae
The red oaks (synonym sect. Erythrobalanus), native to North, Central and South America. Styles long, acorns mature in 18 months (in most species), very bitter, inside of acorn shell woolly.
Quercus acatenangensis Trel. – Mexico, Guatemala, El Salvador
Quercus acerifolia (E.J.Palmer) Stoynoff & W.J.Hess ex R.J.Jensen – maple-leaved oak or mapleleaf oak – Arkansas
Quercus acherdophylla Trel. – # Mexico
Quercus acutangula Trel. – Mexico
Quercus acutifolia, syn. Quercus conspersa — Mexico
Quercus aerea Trel. — Mexico
Quercus affinis Scheidw. – # Mexico
Quercus agrifolia Née – coast live oak – # California, northern Baja California
Quercus albocincta Trel. – Mexico (Sierra Madre Occidental)
Quercus aristata Hook. & Arn. – Mexico
Quercus arkansana Sarg. – Arkansas oak – southeastern North America
Quercus benthamii A.DC. – # southern Mexico and Central America
Quercus brenesii Trel. – Costa Rica, Mexico
Quercus buckleyi Nixon & Dorr – Texas red oak – south central North America
Quercus calophylla Schltdl. & Cham. — Mexico
Quercus canbyi Trel. (syn. Quercus graciliformis) – Canby oak or Mexican red oak – # Texas, Mexico
Quercus castanea Née – # Mexico
Quercus charcasana Trel. ex A.Camus – Mexico
Quercus chimaltenangana Trel. – Guatemala
Quercus coahuilensis Nixon & C.H.Mull. – Mexico
Quercus coccinea Münchh. – scarlet oak – eastern North America
Quercus coffeicolor Trel. – Mexico
Quercus confertifolia Bonpl. (syn. Quercus gentryi) — Mexico
Quercus conzattii Trel. – Mexico
Quercus cortesii Liebm. – # southern Mexico and Central America
Quercus costaricensis Liebm. – # Costa Rica, Panama
Quercus crassifolia Bonpl. – Mexico
Quercus crassipes Bonpl. – Mexico
Quercus crispifolia Trel. — Mexico
Quercus crispipilis Trel. – Mexico, Guatemala
Quercus cualensis L.M.González – # Mexico (Sierra Madre del Sur)
Quercus delgadoana S.Valencia, Nixon & L.M.Kelly – Mexico
Quercus depressa Bonpl. – Mexico
Quercus devia Goldman – Mexico (Baja California Peninsula)
Quercus durifolia Seemen ex Loes. – Mexico (Sierra Madre Occidental)
Quercus × dysophylla — Mexico
Quercus eduardi Trel. — Mexico
Quercus ellipsoidalis E.J.Hill – northern pin oak – eastern North America
Quercus elliptica Née – Mexico
Quercus emoryi Torr. – Emory oak – # southwestern U.S., northern Mexico
Quercus falcata Michx. – southern red oak or Spanish oak – southeastern North America
Quercus floccosa Liebm. – Mexico
Quercus flocculenta C.H.Mull. — Mexico
Quercus fulva Liebm. – Mexico
Quercus furfuracea Liebm. – Mexico
Quercus galeanensis C.H.Mull. – Mexico
Quercus georgiana M.A.Curtis – Georgia oak – southeastern North America
Quercus gracilior C.H.Mull. – Honduras
Quercus grahamii Benth. – Mexico
Quercus gravesii Sudw. – Chisos red oak or Graves oak – Mexico, southwestern North America (Texas)
Quercus gulielmi-treleasei C.H.Mull. – Costa Rica to western Panama
Quercus hemisphaerica W.Bartram ex Willd. – laurel oak or Darlington oak – # southeastern North America
Quercus hintonii E.F.Warb. – Mexico
Quercus hintoniorum Nixon & C.H.Mull. – # Mexico
Quercus hirtifolia M.L.Vázquez, S.Valencia & Nixon – # Mexico
Quercus humboldtii Bonpl. – Andean oak – # northern South America (Colombia & Panama)
Quercus hypoleucoides A.Camus – silverleaf oak – # southwestern North America
Quercus hypoxantha Trel. – # Mexico
Quercus ignaciensis C.H.Mull. – Sonora
Quercus ilicifolia Wangenh. – bear oak – eastern North America
Quercus iltisii L.M.González – western Mexico
Quercus imbricaria Michx. – shingle oak – eastern North America
Quercus incana W.Bartram – bluejack oak – southeastern North America
Quercus inopina Ashe – sandhill oak – Florida
Quercus jonesii Trel. – northern Mexico
Quercus kelloggii Newb. – California black oak – California, southwestern Oregon
Quercus laevis Walter – turkey oak – southeastern North America
Quercus laurifolia Michx. – laurel oak – # southeastern North America
Quercus laurina Bonpl. – # Mexico
Quercus marilandica (L.) Münchh. – blackjack oak – eastern North America
Quercus mcvaughii Spellenb. (orth. var. Q. macvaughii) — Mexico (northern and central Sierra Madre Occidental)
Quercus mexicana Bonpl. – Mexico
Quercus miquihuanensis Nixon & C.H.Mull. – Mexico (Sierra Madre Oriental)
Quercus mulleri Martínez – Mexico
Quercus myrtifolia Willd. – myrtle oak – # southeastern North America
Quercus nigra L. – water oak – # eastern North America
Quercus nixoniana S.Valencia & Lozada-Pérez – Mexico
Quercus pagoda Raf. – cherrybark oak – southeastern North America
Quercus palustris Münchh. – pin oak – eastern North America
Quercus panamandinaea C.H.Mull. – Panama
Quercus parvula Greene – Shreve oak, Santa Cruz Island oak – # coastal California
Quercus paxtalensis C.H.Mull. – Mexico
Quercus peninsularis Trel. – Mexico (Baja California)
Quercus pennivenia Trel. – Mexico
Quercus phellos L. – willow oak – eastern North America
Quercus pinnativenulosa C.H.Mull. – Mexico
Quercus planipocula Trel. – western Mexico
Quercus pumila Walter – runner oak – # southeastern North America
Quercus radiata Trel. – Mexico (southern Sierra Madre Occidental)
Quercus robusta C.H.Mull. – Chisos Mountains of Texas
Quercus rubra L. – northern red oak – eastern North America
Quercus rubramenta Trel. – Mexico
Quercus runcinatifolia Trel. & C.H.Mull. – Mexico
Quercus rysophylla Weath. – loquat-leaf oak – # Mexico
Quercus salicifolia Née – # Mexico
Quercus saltillensis Trel. — Mexico
Quercus sapotifolia Liebm. – # southern Mexico, Central America
Quercus sartorii Liebm. – Mexico
Quercus scytophylla Liebm. — Mexico
Quercus seemannii Liebm. – southeastern Mexico and Central America
Quercus shumardii Buckley – Shumard oak – eastern North America
Quercus sideroxyla Bonpl. – Mexico
Quercus skinneri Benth. – Mexico (Chiapas, Oaxaca, Tamaulipas, Veracruz) Guatemala, El Salvador, Honduras
Quercus tarahumara Spellenb., J.D.Bacon & Breedlove – Mexico
Quercus tardifolia C.H.Mull. – lateleaf oak – # two small clumps in Chisos Mountains of Texas
Quercus texana Buckley – Nuttall's oak – south central North America (Lower Mississippi River Valley)
Quercus tonduzii Seemen – Costa Rica
Quercus tuitensis L.M.González — Mexico
Quercus urbani Trel. – Mexico
Quercus uxoris McVaugh – Mexico
Quercus velutina Lam. – black oak or eastern black oak or dyer's oak – eastern North America
Quercus viminea Trel. – # Mexico
Quercus wislizeni A.DC. – interior live oak – # California
Quercus xalapensis Bonpl. – Mexico, Central America
Section Virentes
Section Virentes has also been treated at lower ranks. Species are native to southeastern North America, Mexico, the West Indies (Cuba), and Central America. A 2017 classification included seven species:
Quercus brandegeei Goldman – Brandegee oak- Baja California Sur
Quercus fusiformis Small – Texas live oak or plateau live oak – # south central North America
Quercus geminata Small – sand live oak – # southeastern United States
Quercus minima (Sarg.) Small – dwarf live oak – # southeastern North America
Quercus oleoides Schltdl. & Cham. – # from Costa Rica into Mexico
Quercus sagrana (also spelt Q. sagraeana) – Cuban oak – # western Cuba
Quercus virginiana Mill. – southern live oak – # southeastern North America
Subgenus Cerris
Section Cerris
Species are native to Europe, north Africa and Asia. Styles long; acorns mature in 18 months, very bitter, inside of acorn shell hairless or slightly hairy.
Quercus acutissima Carruth. – sawtooth oak – # China (including Tibet), Korea, Japan, Indochina, the Himalayas (Nepal, Bhutan, northeastern India).
Quercus afares Pomel – African oak – North Africa
Quercus brantii Lindl. – Persian oak – southwestern Asia
Quercus castaneifolia C.A.Mey. – chestnut-leaved oak – Caucasus, Iran (Persia)
Quercus cerris L. – Turkey oak – southern Europe, southwestern Asia
Quercus chenii Nakai – SE China
Quercus × crenata Lam. – Spanish oak – France, mainland Italy, Sicily, former Yugoslavia
Quercus gussonei – northern Sicily
Quercus apiculata Djav.-Khoie – northwestern Iran
Quercus carduchorum Koch – northwestern Iran
Quercus ithaburensis Decne. – Mount Tabor's oak – southeastern Europe, southwestern Asia
Quercus ithaburensis subsp. macrolepis, syn. Quercus macrolepis — Vallonea oak — # southwestern Asia
Quercus libani G.Olivier – Lebanon oak – southwestern Asia
Quercus look Kotschy – Levant
Quercus magnosquamata Djav.-Khoie – northwestern Iran
Quercus ophiosquamata Djav.-Khoie – northwestern Iran
Quercus persica Jaub. & Spach – western Iran
Quercus suber L. – cork oak – # southwestern Europe, northwestern Africa
Quercus trojana Webb – Macedonian oak – # southeastern Europe
Quercus ungeri Kotschy – northwestern Iran
Quercus variabilis Blume – Chinese cork oak – eastern Asia
Section Ilex
Species in section Ilex are native to Eurasia and northern Africa. Styles medium-long; acorns mature in 12–24 months, appearing hairy on the inside. Evergreen leaves, with bristle-like extensions on the teeth. (Sister group to sect. Cerris and sometimes included in it.)
Quercus acrodonta Seemen — # China
Quercus alnifolia Poech — golden oak — # Cyprus
Quercus aquifolioides Rehder & E.H.Wilson — # China (including Tibet)
Quercus aucheri Jaub. & Spach – eastern Aegean Islands and southwestern Turkey
Quercus baloot Griff. – Afghanistan to western Himalayas
Quercus baronii Skan – China
Quercus bawanglingensis – # Hainan
Quercus coccifera L., syn. Quercus calliprinos Webb – kermes oak – # southern Europe
Quercus cocciferoides Hand.-Mazz. – south-central China
Quercus dolicholepis A.Camus – China
Quercus engleriana Seemen – Tibet and southern China
Quercus fimbriata Y.C.Hsu & H.Wei Jen – south-central China
Quercus floribunda Lindl. ex A.Camus – Moru oak – # Himalayas
Quercus franchetii Skan – China, eastern Asia
Quercus gilliana Rehder & E.H.Wilson – Tibet and China (Sichuan, Yunnan, and Gansu)
Quercus guyavifolia H.Lév. – China
Quercus handeliana A.Camus – China (Yunnan)
Quercus ilex L. – holly oak or holm oak – # southern Europe
Quercus kingiana Craib – south-Central China, Myanmar, and Thailand
Quercus kongshanensis Y.C.Hsu & H.Wei Jen – China (Sichuan)
Quercus lanata Sm. – woolly-leaved oak – # Himalayas, southeast Asia
Quercus leucotrichophora A.Camus – Banj oak, blackjack oak, grey oak – # Himalayas
Quercus lodicosa O.E.Warb. & E.F.Warb. – Assam, southeastern Tibet, northern Myanmar
Quercus longispica (Hand.-Mazz.) A.Camus – China (Yunnan and Sichuan)
Quercus marlipoensis Hu & W.C.Cheng – China (Yunnan)
Quercus monimotricha (Hand.-Mazz.) Hand.-Mazz. – south-central China and northern Myanmar
Quercus oxyphylla (E.H.Wilson) Hand.-Mazz. – China
Quercus pannosa Hand.-Mazz. # – China
Quercus phillyreoides A.Gray – Southern China, Ryukyu Islands, Japan
Quercus pseudococcifera Desf. – Iberia, Morocco, Algeria, Tunisia, Sardinia, Sicily
Quercus rehderiana Hand.-Mazz. – Tibet to China (Yunnan, Sichuan, Guizhou)
Quercus rotundifolia Lam. – ballota oak or holm oak – # Iberian peninsula, northwestern Africa
Quercus semecarpifolia Sm. – brown oak or Kharshu oak – # Himalayas
Quercus senescens Hand.-Mazz. – Eastern Himalayas, Tibet, south-central China
Quercus setulosa Hickel & A.Camus – # Laos, Vietnam
Quercus spinosa David – China, Myanmar
Quercus tarokoensis Hayata – eastern Taiwan
Quercus tungmaiensis Y.T.Chang – Arunachal Pradesh and southeastern Tibet
Quercus utilis Hu & W.C.Cheng – China (Yunnan, Guizhou, and Guangxi)
Quercus yiwuensis Y.C.Hsu & H.Wei Jen – China (Yunnan)
Section Cyclobalanopsis
The ring-cupped oaks (synonym genus Cyclobalanopsis), native to eastern and southeastern tropical Asia. They have acorns with distinctive cups bearing concrescent rings of scales. They commonly also have densely clustered acorns, though this does not apply to all of the species. About 90 species.
Species
Quercus acuta Thunb. – Japanese evergreen oak. # Japan, Korea
Quercus albicaulis Chun & W.C.Ko – # China
Quercus annulata Sm. – # The Himalayas to Vietnam
Quercus arbutifolia Hickel & A.Camus – # Vietnam (endemic)
Quercus argentata Korth. – # Malaysia, Indonesia
Quercus argyrotricha A.Camus – # Guizhou (China)
Quercus asymmetrica Hickel & A.Camus – # China, northern Vietnam
Quercus augustinii Skan – # China, Vietnam
Quercus austrocochinchinensis Hickel & A.Camus – # China, Thailand, Vietnam
Quercus baniensis A.Camus – central Vietnam
Quercus bambusifolia Hance – China (Guangdong, Guangxi), northern Vietnam
Quercus bella Chun & Tsiang – # China
Quercus blakei Skan (syn. Q. chrysocalyx) — # China, Laos, Vietnam
Quercus blaoensis A.Camus – southeastern Vietnam
Quercus braianensis A.Camus – # Vietnam (endemic)
Quercus brandisiana Kurz – upland Indochina, Bangladesh
Quercus brevicalyx A.Camus — # China, Laos
Quercus breviradiata (W.C.Cheng) C.C.Huang – central and south-central China
Quercus cambodiensis Hickel & A.Camus – Cambodia
Quercus championii Benth. – # China, Taiwan
Quercus chevalieri Hickel & A.Camus – # China, Vietnam
Quercus chrysocalyx Hickel & A.Camus – # Vietnam
Quercus chrysotricha A.Camus – Borneo (Sarawak)
Quercus chungii F.P.Metcalf – # China
Quercus ciliaris C.C.Huang & Y.T.Chang – China (Hubei, Sichuan, Zhejiang, Anhui)
Quercus daimingshanensis (S.K.Lee) C.C.Huang – # China
Quercus dankiaensis A.Camus — # Vietnam (endemic)
Quercus delavayi Franch. – # China
Quercus delicatula Chun & Tsiang – # China
Quercus dilacerata Hickel & A.Camus – northern Vietnam
Quercus dinghuensis C.C.Huang – # China
Quercus disciformis Chun & Tsiang – # China
Quercus donnaiensis A.Camus – southeastern Vietnam
Quercus edithiae Skan – # China, Vietnam
Quercus elevaticostata (Q.F.Zheng) C.C.Huang – # Fujian (China)
Quercus elmeri Merr. – Sumatra, Borneo, Peninsular Malaysia
Quercus eumorpha Kurz – southern Myanmar
Quercus fuliginosa Chun & W.C.Ko – Hainan
Quercus gaharuensis Soepadmo – western Borneo, eastern Sumatra, Peninsular Malaysia
Quercus gambleana A.Camus – # China, India
Quercus gemelliflora Blume – # Malaysia, Indonesia, Vietnam
Quercus gilva Blume – # Japan, Taiwan, China
Quercus glauca Thunb. – ring-cupped oak – # from Afghanistan to Japan and Vietnam
Quercus gomeziana A.Camus – # Vietnam
Quercus gracilenta Chun – China (Guangdong)
Quercus helferiana A.DC. – # China, India, Burma/Myanmar, Thailand, Laos, Vietnam
Quercus hondae Makino – # Kyūshū (Japan)
Quercus hui Chun – China
Quercus hypargyrea (Seemen ex Diels) C.C.Huang & Y.T.Chang (as Q. multinervis) — # China
Quercus hypophaea Hayata – # Taiwan
Quercus jenseniana Hand.-Mazz. – # China
Quercus jinpinensis (Y.C.Hsu & H.Wei Jen) C.C.Huang – # China
Quercus kerangasensis Soepadmo – Borneo
Quercus kerrii Craib – Kerr's oak – # Vietnam, Thailand, possibly China
Quercus kinabaluensis Soepadmo – Borneo
Quercus kiukiangensis (Y.T.Chang) Y.T.Chang – # China
Quercus kouangsiensis A.Camus – # China
Quercus lamellosa Sm. – # Himalayas
Quercus langbianensis Hickel & A.Camus (syn. Q. camusiae) – # Cambodia, China, Vietnam
Quercus lenticellata Barnett – northern Thailand
Quercus liaoi C.F.Shen – Taiwan
Quercus lineata Blume – # Assam, Bangladesh, Indochina, Hainan, Malaysia, western Indonesia
Quercus litseoides Dunn – # China
Quercus lobbii Hook.f. & Thomson ex Ettingsh. – # China, India
Quercus longinux Hayata – # Taiwan
Quercus lowii King – # Borneo
Quercus lungmaiensis (Hu) C.C.Huang & Y.T.Chang – # Yunnan (China)
Quercus macrocalyx Hickel & A.Camus – # Vietnam
Quercus meihuashanensis (Q.F.Zheng) C.C.Huang – China (Fujian)
Quercus merrillii Seemen – # Sabah and Sarawak (Malaysia), Palawan (Philippines)
Quercus mespilifolia Wall. ex A.DC. – # Vietnam
Quercus miyagii (orth. var. Q. miyagei) — Ryukyu Islands
Quercus morii Hayata – # Taiwan
Quercus motuoensis C.C.Huang – # China
Quercus myrsinifolia Blume (syn. Q. neglecta) – bamboo-leaf oak – # China, Japan, Korea, Laos, Thailand, Vietnam
Quercus ningangensis (orth. var. Q. ninggangensis) — # China
Quercus nivea King – Malaysia (Peninsular Malaysia, Sarawak)
Quercus oidocarpa Korth. – Myanmar, Thailand, Vietnam, Malaysia, Indonesia (Kalimantan, Sumatra, Bangka)
Quercus oxyodon Miq. (syn. Q. songtavanensis) – # Assam, Myanmar, China, Bhutan, Nepal, northern Vietnam
Quercus pachyloma Seemen – # China, Taiwan
Quercus pentacycla Y.T.Chang – # China
Quercus percoriacea Soepadmo – Borneo
Quercus petelotii A.Camus – # Vietnam (endemic)
Quercus phanera Chun – # China
Quercus pinbianensis (Y.C.Hsu & H.Wei Jen) C.C.Huang & Y.T.Chang – China (Yunnan)
Quercus platycalyx Hickel & A.Camus – Vietnam
Quercus poilanei (orth. var. Q. poilanii) — # China, Thailand, Vietnam
Quercus pseudoverticillata Soepadmo – Borneo
Quercus quangtriensis Hickel & A.Camus – # Vietnam
Quercus ramsbottomii A.Camus – Myanmar, Thailand
Quercus rex Hemsl. – # China, India, Laos, Myanmar, Vietnam
Quercus rupestris Hickel & A.Camus – # Vietnam (endemic)
Quercus salicina Blume – # Japan, South Korea
Quercus saravanensis A.Camus – # China, Laos, Vietnam
Quercus schottkyana Rehder & E.H.Wilson – # China
Quercus semiserrata Roxb. – # China, Bangladesh, India, Myanmar, Thailand
Quercus semiserratoides (Y.C.Hsu & H.Wei Jen) C.C.Huang & Y.T.Chang – China (southeastern Yunnan)
Quercus sessilifolia Blume – # Japan, Taiwan, China
Quercus sichourensis (Y.C.Hsu) C.C.Huang & Y.T.Chang – # Yunnan (China)
Quercus steenisii Soepadmo – northern Sumatra
Quercus stenophylloides Hayata – # Taiwan
Quercus stewardiana A.Camus – # China
Quercus subsericea A.Camus – # Sumatra, Borneo, western Java, Malay Peninsula
Quercus sumatrana Soepadmo – # Sumatra and Borneo
Quercus thomsoniana A.DC. – Sikkim, Bhutan, northern Bangladesh
Quercus thorelii Hickel & A.Camus – # China, Laos, Vietnam
Quercus tiaoloshanica Chun & W.C.Ko – Hainan
Quercus tomentosinervis (Y.C.Hsu & H.Wei Jen) C.C.Huang – # China
Quercus treubiana Seemen – # Sumatra, Borneo
Quercus valdinervosa Soepadmo – Borneo
Quercus vestita Griff. – Assam
Quercus xanthoclada Drake – Myanmar, Laos, Vietnam
Quercus xanthotricha A.Camus – # China, Laos, Vietnam
Quercus xuanlienensis H.T.Binh, Ngoc & T.N.Bo – Vietnam
Quercus yonganensis L.K.Ling & C.C.Huang – # China
Section uncertain
Quercus changhualingensis – Hainan
Quercus dongfangensis – Hainan
Quercus yanqianii – Hainan
Intersectional hybrids
Quercus × turneri = Quercus ilex × Quercus robur — sect. Ilex × sect. Quercus – Turner's oak — Spain
References
External links
Quercus
Quercus
Quercus | List of Quercus species | [
"Biology"
] | 8,066 | [
"Lists of biota",
"Lists of plants",
"Plants"
] |
166,127 | https://en.wikipedia.org/wiki/LyX | LyX (styled as LYX; pronounced ) is an open source, graphical user interface document processor based on the LaTeX typesetting system. Unlike most word processors, which follow the WYSIWYG ("what you see is what you get") paradigm, LyX has a WYSIWYM ("what you see is what you mean") approach, where what shows up on the screen roughly depicts the semantic structure of the page and is only an approximation of the document produced by TeX.
Since LyX relies on the typesetting system of LaTeX without being a full-fledged LaTeX editor itself, it has the power and flexibility of LaTeX, and can handle documents including books, notes, theses, academic papers, letters, etc. LyX's interface is structured so that while knowledge of the LaTeX markup language is not necessary for basic usage, new LaTeX directives can be added into the document to support more complex features during editing — though not at the level of full control a full-fledged LaTeX editor can provide.
LyX is popular among technical authors and scientists for its advanced mathematical modes, though it is increasingly used by non-mathematically-oriented scholars as well for its bibliographic database integration and its ability to manage multiple files. LyX has also become a popular publishing tool among self-publishers.
LyX is available for all major operating systems, including Windows, MacOS, Linux, UNIX, ChromeOS, OS/2 and Haiku. LyX can be redistributed and modified under the terms of the GNU General Public License and is thus free software.
Features
LyX is a fully featured document processor. It provides structured document creation and editing, branches for having different versions of the same document, master and child documents, change tracking, support for writing documents in many languages and scripts, spell checking, graphics, table editing and automatic cross-reference (hyperlink) managing. LyX provides automatically numbered headings, titles, and paragraphs, with document outline. It features a powerful mathematical formula editor with point-and-click or keyboard-only interface.
LyX has native support for many document classes and templates available in LaTeX through \documentclass{theclass}. User layouts and modules can be made for those missing. Text is laid out according to standard typographic rules, including ligatures, kerning, indents, spacing, and hyphenation. It provides BibTeX/BibLaTeX citation support, comprehensive cross-referencing and PDF hyperlinks. LyX can import various common text formats.
Documents can be processed in LaTeX, PdfLaTeX, XeTeX and LuaTeX typesetting systems or exported to XHTML, DocBook, EPUB (via Docbook) and plain text. Versioning is provided through external control systems like SVN, Git, RCS, and CVS.
LyX supports right-to-left languages like Arabic, Persian, and Hebrew, along with support for bi-directional text. Chinese, Japanese, and Korean languages are supported as well.
Documents can embed calculations via Octave or Computer Algebra Systems (CAS) like Maple, Maxima and Mathematica. Commands will be forwarded to the external programs and results will be embedded in the document.
History
Matthias Ettrich started developing a shareware program called Lyrix in 1995. It was then announced on Usenet, where it received a great deal of attention in the following years.
Shortly after the initial release, Lyrix was renamed to LyX due to a name clash with a word processor produced by the company Santa Cruz Operation. The name LyX was chosen because of the file suffix .lyx for Lyrix files.
Versions
LyX has no set release schedule. Releases occur when there are important bug fixes or significant improvements. The following table lists the dates of all major releases. For collaboration between different users, using the same major release is recommended, as the LyX file format remains fixed within each major release (e.g. all minor LyX versions 2.3.0, 2.3.1, 2.3.2, ... use strictly the same file format).
See also
Comparison of word processors
Comparison of TeX editors
Comparison of desktop publishing software
List of desktop publishing software
List of word processors
Scientific WorkPlace – A proprietary software (non-free) counterpart to LyX
References
External links
LyX Wiki
A comparative review of Scientific WorkPlace and LyX in Journal of Statistical Software
1995 software
Cross-platform free software
Desktop publishing software
Formula editors
Free multilingual software
Free software programmed in C++
Free TeX editors
Free word processors
Linux TeX software
TeX editors
TeX editors that use Qt
TeX software for macOS
TeX software for Windows | LyX | [
"Mathematics"
] | 980 | [
"Formula editors",
"Mathematical software"
] |
166,134 | https://en.wikipedia.org/wiki/Cosmos%201 | Cosmos 1 was a project by Cosmos Studios and The Planetary Society to test a solar sail in space. As part of the project, an uncrewed solar-sail spacecraft named Cosmos 1 was launched into space at 19:46:09 UTC (15:46:09 EDT) on 21 June 2005 from the submarine in the Barents Sea. However, a rocket failure prevented the spacecraft from reaching its intended orbit. Once in orbit, the spacecraft was supposed to deploy a large sail, upon which photons from the Sun would push, thereby increasing the spacecraft's velocity (the contributions from the solar wind are similar, but of much smaller magnitude).
Had the mission been successful, it would have been the first ever orbital use of a solar sail to speed up a spacecraft, as well as the first space mission by a space advocacy group. The project budget was US$4 million. The Planetary Society planned to raise another US$4 million for Cosmos 2, a reimplementation of the experiment provisionally to be launched on a Soyuz resupply mission to the International Space Station (ISS). The Discovery Channel was an early investor. However, advances in technology and the greater availability of lower-mass piggyback slots on more launch vehicles led to a redesign similar to NanoSail-D, called LightSail-1, announced in November 2009.
Planned mission profile
To test the solar sail concept, the Cosmos 1 project launched an orbital spacecraft they named Cosmos 1 with a full complement of eight sail blades on 21 June 2005; the summer solstice. The spacecraft had a mass of and consisted of eight triangular sail blades, which would be deployed from a central hub after launch by the inflating of structural tubes. The sail blades were each long, had a total surface area of , and were made of aluminized-reinforced PET film (MPET).
The spacecraft was launched on a Volna launch vehicle (a converted SS-N-18 intercontinental ballistic missile (ICBM)) from the Russian Delta III submarine Borisoglebsk, submerged in the Barents Sea. The spacecraft's initial circular orbit would have been at an altitude of about , where it would have unfurled the sails. The sails would then have gradually raised the spacecraft to a higher Earth orbit. "Cosmos 1 might boost its orbit over the expected 30-day life of the mission", said Louis Friedman of The Planetary Society.
The mission was expected to end within a month of launch, as the mylar of the blades would degrade in sunlight.
Possible beam propulsion
The solar-sail craft could also have been used to measure the effect of artificial microwaves aimed at it from a radar installation. A dish at the Goldstone facility of NASA Deep Space Network would have been used to irradiate the sail with a 450 kW beam. This experiment in beam-powered propulsion would only have been attempted after the prime mission objective of controlled solar-sail flight was achieved.
Tracking
The craft would have been visible to the naked eye from most of the Earth's surface: the planned orbit had an inclination of 80°, so it would have been visible from latitudes of up to approximately 80° north and south.
A network of tracking stations around the world, including the Tarusa station, south of Moscow, and the Space Sciences Laboratory at the University of California, Berkeley, tried to maintain contact with the solar sail during the mission. Mission control was based primarily at the Russian company NPO Lavochkin in Moscow; a center that the Planetary Society calls Mission Operations Moscow (MOM).
Physics
The craft would have been gradually accelerating during each orbit as a result of the radiation pressure of photons colliding with the sails. As photons reflected from the surface of the sails, they would transfer momentum to them. As there would be no air resistance to oppose the velocity of the spacecraft, acceleration would be proportional to the number of photons colliding with it per unit time. Sunlight amounts to only a tiny acceleration in the vicinity of the Earth, but it acts continuously: because the acceleration is roughly constant, the speed gained over 100 days would be about 100 times the gain of the first day, and the gain over 2.74 years (roughly 1,000 days) about 1,000 times as large.
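A minimal sketch of this radiation-pressure arithmetic is given below. The sail area and spacecraft mass are assumed round figures chosen only for illustration (they are not taken from the mission description above), and the calculation ignores orbital mechanics and the drop-off of sunlight with distance from the Sun.

```python
# Back-of-the-envelope solar-sail acceleration near Earth.
SOLAR_CONSTANT = 1361.0      # W/m^2 at 1 AU
C = 299_792_458.0            # speed of light, m/s

sail_area_m2 = 600.0         # assumed value for illustration
mass_kg = 100.0              # assumed value for illustration

# A perfectly reflecting sail facing the Sun feels a pressure of 2*S/c.
pressure = 2 * SOLAR_CONSTANT / C          # ~9.1e-6 N/m^2
force = pressure * sail_area_m2            # ~5.4e-3 N
accel = force / mass_kg                    # ~5.4e-5 m/s^2

for days in (1, 100, 1000):                # 1000 days is roughly 2.74 years
    delta_v = accel * days * 86_400
    print(f"after {days:4d} days: delta-v ~ {delta_v:8.1f} m/s "
          f"({delta_v * 3.6:.0f} km/h)")
```

With these assumed inputs the speed gain scales linearly with time, which is the 1 : 100 : 1,000 relationship described above; the absolute numbers depend entirely on the sail area and mass chosen.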
At the speed reached after 2.74 years of such acceleration, a craft would reach Pluto, a very distant dwarf planet in the Solar System, in less than 5 years, although in practice the acceleration of a sail drops dramatically as the spacecraft gets farther from the Sun. However, in the vicinity of Earth, a solar sail's acceleration is larger than that of some other propulsion techniques; for example, the ion thruster-propelled SMART-1 spacecraft also produced only a very small maximum acceleration, which nevertheless allowed SMART-1 to achieve lunar orbit in November 2004 after launch in September 2003.
Other aspects
Besides the main spacecraft, launched in June 2005, the Cosmos 1 project has funded two other craft:
A suborbital test was attempted in 2001 with only two sail blades. The spacecraft failed to separate from the rocket.
A second orbital spacecraft (LightSail-1) was launched in May 2015.
One of Cosmos 1's solar-sail blades was displayed at the Rockefeller Center office complex in New York City in 2003.
References
External links
Cosmos 1 homepage at the Planetary Society
Planetary Society's solar sail updates and press releases - current information about the Cosmos 2 follow-on project.
Cosmos 1 page (flash only) from Cosmos Studios
Space technology: Setting sail for history (Nature, February 16, 2005)
Space yacht rides to stars on rays of sunlight (The Guardian, February 27, 2005)
Cosmos 1 to test solar sail (Wired News, June 16, 2005)
Cosmos 1 videos (Windows Media, RealPlayer, QuickTime formats)
Private spaceflight
Satellite launch failures
Spacecraft launched in 2005
Solar sail spacecraft
The Planetary Society
Rocket launches in 2005 | Cosmos 1 | [
"Astronomy"
] | 1,160 | [
"The Planetary Society",
"Astronomy organizations"
] |
166,135 | https://en.wikipedia.org/wiki/The%20Planetary%20Society | The Planetary Society is an American internationally-active non-governmental nonprofit organization. It is involved in research, public outreach, and political space advocacy for engineering projects related to astronomy, planetary science, and space exploration. It was founded in 1980 by Carl Sagan, Bruce Murray, and Louis Friedman, and has about 60,000 members from more than 100 countries around the world.
The Society is dedicated to the exploration of the Solar System, the search for near-Earth objects, and the search for extraterrestrial life. The society's mission is stated as: "Empowering the world’s citizens to advance space science and exploration." The Planetary Society is a strong advocate for space funding and missions of exploration within NASA. They lobby Congress and engage their membership in the United States to write and call their representatives in support of NASA funding.
In addition to public outreach, The Planetary Society has sponsored solar sail and microorganisms-in-space projects to foster space exploration. In June 2005, the Society launched the Cosmos 1 craft to test the feasibility of solar sailing, but the rocket failed shortly after liftoff. LightSail was originally conceived as a series of three solar sail experiments but later shortened to two missions. LightSail 1 launched on May 20, 2015, and demonstrated a test deployment of its solar sail on June 7, 2015. LightSail 2 launched on June 25, 2019, and successfully used sunlight to change its orbit.
Living Interplanetary Flight Experiment (LIFE), was a two-part program designed to test the ability of microorganisms to survive in space. The first phase flew on STS-134, Space Shuttle Endeavours final flight in 2011. The second phase rode on Russia's Fobos-Grunt mission, which attempted to go to Mars' moon Phobos and back but failed to escape Earth orbit.
History
The Planetary Society was founded in 1980 by Carl Sagan, Bruce Murray, and Louis Friedman as a champion of public support of space exploration and the search for extraterrestrial life. Until the death of Carl Sagan in 1996, the Society was led by Sagan, who used his celebrity and political clout to influence the political climate of the time, including protecting SETI in 1981 from congressional cancellation. Throughout the 1980s and 1990s, the Society pushed its scientific and technologic agenda, which led to an increased interest in rover-based planetary exploration and NASA's New Horizons mission to Pluto.
In addition to its political affairs, the Society has created a number of space related projects and programs. The SETI program began with Paul Horowitz's Suitcase SETI and has grown to encompass searches in radio and optical wavelengths from the northern and southern hemispheres of the Earth. SETI@home, the largest distributed computing experiment on Earth, is perhaps the Society's best-known SETI project. Other projects include the development of the Mars Microphone instrument which flew on the failed Mars Polar Lander project, as well as two LightSail projects, solar sail technology demonstrators designed to determine whether space travel is possible by using only sunlight.
Program summary
The Planetary Society currently runs seven different program areas with a number of programs in each area:
Advocacy and education
Extrasolar planets
Innovative technologies
International mission participation
Mars exploration
Near-Earth objects
Search for extraterrestrial life
Organization
The Planetary Society is currently governed by a 12-member volunteer board of directors chosen for their passion about and knowledge of space exploration. The Board has a chairman, President, and Vice President and an Executive Committee, and normally meets twice per year to set the Society's policies and future directions. Nominations are sought and considered periodically from a variety of sources, including from members of the Board and Advisory Council, Society Members, staff, and experts in the space community. On June 7, 2010, the Society announced that American science educator Bill Nye would become the new executive director of the society.
Members
The Planetary Society's current board of directors consists of:
Bill Nye, chief executive officer
Daniel Geraci, chairman of the board
Bethany Ehlmann, President and member of executive committee
Heidi Hammel, Vice President and member of executive committee
Lon Levin, Treasurer of the Board and member of executive committee
Jim Bell, Secretary and member of executive committee
G. Scott Hubbard
John Logsdon
Britney Schmidt
Bijal (Bee) Thakore
Fillmore Wood
Robert Picardo
The advisory council consists of:
Buzz Aldrin
Robert D. Braun
David Brin
Neil deGrasse Tyson
Gary E. Hunt
Bruce Jakosky
Charles E. Kohlhase Jr.
Ben Lamm
Laurie Leshin
Jon Lomberg
Rosaly Lopes
Bob McDonald
Donna L. Shirley
Pete Slosberg
Kevin Stube
Lorne Trottier
Other well known members:
Emily Lakdawalla
Science and technology
The Planetary Society sponsors science and technology projects to seed further exploration. All of these projects are funded by the Society's members and donors. Some projects include:
Earthdials
FINDS Exo-Earths
Micro-Rovers for Assisting Humans
Mars Climate Sounder
Pioneer anomaly
Near-Earth Objects Research
Planetrek
Laser Bees
Search for Extraterrestrial Intelligence
Solar sailing with Cosmos 1 and the LightSail project
Living Interplanetary Flight Experiment
SETI@home
The Planetary Report
The Planetary Report is the internationally recognized quarterly flagship magazine of The Planetary Society, featuring articles and full-color photos to provide comprehensive coverage of discoveries on Earth and other planets. It went from bimonthly to quarterly with the June (summer solstice) 2011 issue.
This magazine reaches 60,000 members of The Planetary Society all over the world, with news about planetary missions, spacefaring nations, space explorers, planetary science controversies, and the latest findings in humankind's exploration of the Solar System. It will be edited beginning in September 2018 by Emily Lakdawalla, who takes over from Donna Stevens.
Planetary Radio
The Planetary Society also produces Planetary Radio, a weekly 30-minute radio program and podcast hosted and produced by Sarah Al-Ahmed. The show's programming consists mostly of interviews and telephone-based conversations with scientists, engineers, project managers, artists, writers, astronauts, and many other professionals who can provide some insight or perspective into the current state of space exploration.
Science and Technology Empowered by the Public program
In 2022, the Planetary Society awarded its first grants as part of its Science and Technology Empowered by the Public (STEP) program. The inaugural grant winners were a team from University of California, Los Angeles for a SETI project and a team from University of Belgrade, Serbia, for a planetary defense project.
UnmannedSpaceflight.com
UnmannedSpaceflight.com is funded by the Planetary Society, and uses the internet forum software Invision Power Board from Invision Power Services.
See also
List of astronomical societies
References
External links
Astronomy websites
Clubs and societies in California
Non-profit technology
Space organizations
Organizations based in Pasadena, California
Non-profit organizations based in California
501(c)(3) organizations
Scientific organizations established in 1980
1980 establishments in California
Science and technology in California | The Planetary Society | [
"Astronomy",
"Technology"
] | 1,423 | [
"Non-profit technology",
"Works about astronomy",
"Astronomy organizations",
"Space organizations",
"Information technology",
"The Planetary Society",
"Astronomy websites"
] |
166,146 | https://en.wikipedia.org/wiki/Advance%20healthcare%20directive | An advance healthcare directive, also known as living will, personal directive, advance directive, medical directive or advance decision, is a legal document in which a person specifies what actions should be taken for their health if they are no longer able to make decisions for themselves because of illness or incapacity. In the U.S. it has a legal status in itself, whereas in some countries it is legally persuasive without being a legal document.
A living will is one form of advance directive, leaving instructions for treatment. Another form is a specific type of power of attorney or health care proxy, in which the person authorizes someone (an agent) to make decisions on their behalf when they are incapacitated. People are often encouraged to complete both documents to provide comprehensive guidance regarding their care, although they may be combined into a single form. An example of combination documents includes the Five Wishes in the United States. The term living will is also the commonly recognised vernacular in many countries, especially the U.K. The legality of advance consent for advance healthcare directives depends on jurisdiction.
Background
Advance directives were created in response to the increasing sophistication and prevalence of medical technology. Numerous studies have documented critical deficits in the medical care of the dying; it has been found to be unnecessarily prolonged, painful, expensive, and emotionally burdensome to both patients and their families.
Living will
The living will is the oldest form of advance directive. It was first proposed by an Illinois attorney, Luis Kutner, in a speech to the Euthanasia Society of America in 1967 and published in a law journal in 1969. Kutner drew from existing estate law, by which an individual can control property affairs after death (i.e., when no longer available to speak for himself or herself) and devised a way for an individual to express their health care desires when no longer able to express current healthcare wishes. Because this form of "will" was to be used while an individual was still alive (but no longer able to make decisions), it was dubbed the "living will". The U.S. Patient Self-Determination Act (PSDA) went into effect in December 1991 and required healthcare providers (primarily hospitals, nursing homes, and home health agencies) to give patients information about their rights to make advance directives under state law.
A living will usually provides specific directives about the course of treatment healthcare providers and caregivers are to follow. In some cases a living will may forbid the use of various kinds of burdensome medical treatment. It may also be used to express wishes about the use or foregoing of food and water, if supplied via tubes or other medical devices. The living will is used only if the individual has become unable to give informed consent or refusal due to incapacity. A living will can be very specific or very general. An example of a statement sometimes found in a living will is: "If I suffer an incurable, irreversible illness, disease, or condition and my attending physician determines that my condition is terminal, I direct that life-sustaining measures that would serve only to prolong my dying be withheld or discontinued."
More specific living wills may include information regarding an individual's desire for services such as analgesia (pain relief), antibiotics, hydration, feeding, and the use of ventilators or cardiopulmonary resuscitation. However, studies have also shown that adults are more likely to complete these documents if they are written in everyday language and less focused on technical treatments.
However, by the late 1980s, public advocacy groups became aware that many people remained unaware of advance directives and even fewer actually completed them. In part, this was seen as a failure of health care providers and medical organizations to promote and support the use of these documents. The public's response was to press for further legislative support. The most recent result was the Patient Self-Determination Act of 1990, which attempted to address this awareness problem by requiring health care institutions to better promote and support the use of advance directives.
Living wills proved to be very popular, and by 2007, 41% of Americans had completed a living will. In response to public needs, state legislatures soon passed laws in support of living wills in virtually every state in the union.
However, as living wills began to be better recognized, key deficits were soon discovered. Most living wills tended to be limited in scope and often failed to fully address presenting problems and needs. Further, many individuals wrote out their wishes in ways that might conflict with quality medical practice. Ultimately, it was determined that a living will alone might be insufficient to address many important health care decisions. This led to the development of what some have called "second generation" advance directives – the "health care proxy appointment" or "medical power of attorney."
Living wills also reflect a moment in time, and may therefore need regular updating to ensure that the correct course of action can be chosen.
Healthcare proxy
Power of attorney statutes have existed in the United States since the days of "common law" (i.e., laws brought from England to the United States during the colonial period). These early powers of attorney allowed an individual to name someone to act in their stead. Drawing upon these laws, "durable powers of attorney for health care" and "healthcare proxy appointment" documents were created and codified in law, allowing an individual to appoint someone to make healthcare decisions on their behalf if they should ever be rendered incapable of making their wishes known. People will normally benefit from having both a durable power of attorney and a healthcare proxy.
A healthcare proxy document appoints a person, the proxy, who can make decisions on behalf of the granting individual in the event of incapacity. The appointed healthcare proxy has, in essence, the same rights to request or refuse treatment that the individual would have if still capable of making and communicating health care decisions.
The appointed representative is authorized to make real-time decisions in actual circumstances, as opposed to advance decisions framed in hypothetical situations, as might be recorded in a living will. The healthcare proxy was rapidly accepted within the U.S. and authorizing legislation was soon enacted in most states.
One problem with a conventional healthcare proxy is that it may not be possible for the appointed proxy to determine what care choices the individual would have made if still capable, as healthcare proxies may be too vague for meaningful interpretation. A study comparing next-of-kin decisions made on behalf of incapacitated persons (who later recovered) found that these surrogates chose correctly 68% of the time overall.
Values-based directives
One alternative to a conventional healthcare proxy is the values history, a "two-part advance directive instrument that elicits patient values about terminal medical care and therapy-specific directives." The goal of this advance directive is to move away from a focus on specific treatments and medical procedures to a focus on patient values and personal goals.
Studies suggest that values regarding financial and psychological burden are strong motivators in not wanting a broad array of end-of-life therapies.
Another alternative to a conventional healthcare proxy is the medical directive, a document that describes six case scenarios for advance medical decision-making. The scenarios are each associated with a roster of commonly considered medical procedures and interventions, allowing the individual to decide in advance which treatments are wanted or not wanted under the circumstances.
A study conducted to address concerns that a non-statutory advance directive might leave an incapacitated person with a document that may not be honored found that such directives are generally accepted.
Psychiatric advance directives
A psychiatric advance directive (PAD), also known as a mental health advance directive, is a written document that describes what a person wants to happen if at some time in the future they are judged to have a mental disorder in such a way that they are deemed unable to decide for themselves or to communicate effectively.
A PAD can inform others about what treatment the person wants or does not want from psychiatrists or other mental health professionals, and it can identify a person to whom they have given the authority to make decisions on their behalf. A mental health advance directive is one kind of advance health care directive.
Legal foundations
Psychiatric advance directives are legal documents used by persons currently enjoying legal capacity to declare their preferences and instructions for future mental health treatment, or to appoint a surrogate decision maker through Health Care Power of Attorney (HCPA), in advance of being targeted by coercive mental health laws, during which they may be stripped of legal capacity to make decisions.
In the United States, although 25 states have now passed legislation in the past decade establishing authority for PADs, there is relatively little public information available to address growing interest in these legal documents. The Joint Commission on the Accreditation of Healthcare Organizations (JCAHO) requires behavioral health facilities to ask patients if they have PADs.
Clinical benefits
An NIH-funded study conducted by researchers at Duke University has shown that creating a PAD with a trained facilitator increases therapeutic alliance with clinicians, enhances involuntary patients' treatment satisfaction and perceived autonomy, and improves treatment decision-making capacity among people labeled with severe mental illness.
PADs also provide a transportable document—increasingly accessible through electronic directories—to convey information about a detainee's treatment history, including medical disorders, emergency contact information, and medication side effects. Clinicians often have limited information about people detained and labeled as psychiatric patients who present, or are coercively presented, in crisis. A PAD may help clinicians gain prompt access to relevant information about individual cases and thus improve the quality of clinical decision-making, and enhance patient safety and long-term autonomy.
Barriers
National surveys in the United States indicate that although approximately 70% of people targeted by coercive psychiatry laws would want a PAD if offered assistance in completing one, less than 10% have actually completed a PAD.
A survey of 600 psychiatrists, psychologists, and social workers showed that the vast majority thought advance care planning for crises would help improve patients' overall mental health care. Further, the more clinicians knew about PAD laws, the more favorable were their attitudes toward these practices. For instance, while most psychiatrists, social workers, and psychologists surveyed believed PADs would be helpful to people detained and targeted for forced drugging and electroshock when labeled with severe mental illnesses, clinicians with more legal knowledge about PAD laws were more likely to endorse PADs as a beneficial part of patients' treatment planning.
Many clinicians reported not knowing enough about how PADs work and specifically indicated they lacked resources to readily help patients fill out PADs or to help their clients develop crisis plans.
Worldwide
Australia
The laws regarding advance directives, powers of attorney, and enduring guardianships vary from state to state. In Queensland, for example, the concept of an advance health directive is defined in the Powers of Attorney Act 1998 and the Guardianship and Administration Act 2000. Tasmania has no specific legislation concerning advance healthcare directives. Advance Care Planning (ACP) has been gaining prominence in Australia for its role in enhancing a patient's autonomy and as an important component of good end-of-life care.
Canada
Health Canada – Canada's federal health agency – has acknowledged the need for a greater investment in palliative and hospice care as the country faces a rapidly growing population of elderly and terminally ill citizens.
Much of the current focus in Canada is on advance care planning, which involves encouraging individuals to reflect on and express their wishes for future care, including end-of-life care, before they become terminally ill or incapable of making decisions for themselves. A number of publicly funded initiatives exist to promote advance care planning and to encourage people to appoint "substitute decision makers" who make medical decisions and can give or withhold consent for medical procedures according to the patient's pre-expressed wishes when the patient becomes incapable of doing so themselves.
In 2008, The Advance Care Planning in Canada: A National Framework and Implementation Project was founded. The goal was to engage healthcare professionals and educate patients about the importance of advance care planning and end of life care.
Polling indicates that 96% of Canadians think that having a conversation with a loved one about planning for the end of life is important. However, the same polls show that only about 13% have actually done so, or have created an advance care plan for themselves.
A 2014 Ipsos Reid survey revealed that only about a third of Canadian doctors and nurses working in primary care feel comfortable discussing end-of-life issues with their patients. End-of-life issues in Canada have recently been highlighted by the ongoing debate about physician-assisted death. Former Federal Health Minister Rona Ambrose (July 15, 2013 to November 4, 2015) has stated: "I think the starting point for me is that we still don't have the best elderly care and palliative care yet… So let's talk about making sure we have the best end-of-life care before we start talking about assisted suicide and euthanasia."
England and Wales
In England and Wales, people may make an advance directive or appoint a proxy under the Mental Capacity Act 2005. This is only for an advance refusal of treatment that applies when the person lacks mental capacity; to be legally binding, the advance decision must be specific about the treatment that is being refused and the circumstances in which the refusal will apply. To be valid, the person must have been competent and must have understood the decision when they signed the directive. Where the patient's advance decision relates to a refusal of life-prolonging treatment, this must be recorded in writing and witnessed. Any advance refusal is legally binding provided that the patient is an adult, the patient was competent and properly informed when reaching the decision, it is clearly applicable to the present circumstances, and there is no reason to believe that the patient has changed their mind. If an advance decision does not meet these criteria but appears to set out a clear indication of the patient's wishes, it will not be legally binding but should be taken into consideration in determining the patient's best interests. In June 2010, the wealth management solicitors Moore Blatch announced that research showed demand for living wills had trebled in the previous two years, indicating a rising number of people concerned about the way in which their terminal illness will be managed. According to the British Government, every adult with mental capacity has the right to agree to or refuse medical treatment. To make their advance wishes clear, people can use a living will, which can include general statements about wishes, which are not legally binding, and specific refusals of treatment called "advance decisions" or "advance directives".
European Union
Country reports on advance directives is a 2008 paper summarizing advance health care legislation in each country of the European Union, with a shorter summary for the U.S.; a 2009 paper also provides a European overview.
Germany
On 18 June 2009 the Bundestag passed a law on advance directives, applicable since 1 September 2009. The law, based on the principle of the right of self-determination, provides for the assistance of a fiduciary and of the physician.
Italy
On 14 December 2017, the Italian Senate officially approved a law on advance healthcare directives that came into force on 31 January 2018.
Controversy over end-of-life care emerged in Italy in 2006, when a terminally ill patient suffering from muscular dystrophy, Piergiorgio Welby, petitioned the courts for removal of his respirator. The issue was debated in Parliament, but no decision was reached. A doctor eventually honored Welby's wishes by removing the respirator under sedation. The physician was initially charged with violating Italy's laws against euthanasia, but was later cleared. Further debate ensued after the father of a 38-year-old woman, Eluana Englaro, petitioned the courts for permission to withdraw feeding tubes to allow her to die. Englaro had been in a coma for 17 years, following a car accident. After 10 years of petitioning, authorization was granted and Englaro died in February 2009. In May 2008, apparently as a result of the recent Court of Cassation's holding in the case of Englaro, a guardianship judge in Modena, Italy, used relatively new legislation to work around the lack of advance directive legislation. The new law permitted a judicially appointed guardian ("amministratore di sostegno") to make decisions for an individual. Faced with a 70-year-old woman with end-stage Lou Gehrig's Disease who was petitioning the court (with the support of her family) to prevent any later use of a respirator, the judge appointed her husband as guardian with the specific duty to refuse any tracheotomy and/or respirator use if/when the patient became unable to refuse such treatment herself.
Japan
Advance healthcare directives are not legally recognized in Japan. According to a 2017 survey by the Ministry of Health, Labor and Welfare (MHLW), 66% of surveyed individuals supported the idea of such directives, but only 8.1% had prepared their own. The private organization Nihon Songenshi Kyōkai (Japan Society for Dying with Dignity) offers members a semi-standardized "living will" (ribingu uiru) form that is registered with the organization, though it holds no legal weight.
Korea
Advance healthcare directives have been legally recognized in Korea since 2016, when the Act on Decisions on Life-Sustaining Treatment for Patients in Hospice and Palliative Care was implemented, providing a broad framework for end-of-life decision-making.
Nigeria
Advance healthcare directives are yet to be legalised in Nigeria. Rather than patients' "written directives" determining their preferences, tradition still dominates in Nigeria and most African countries.
Israel
In 2005, the Knesset passed a law allowing people to write advance care directives. The right to refuse care is only recognized if the patient is considered terminally ill and their life expectancy is less than six months.
Switzerland
In Switzerland, there are several organizations which take care of registering patient decrees, forms which are signed by the patients declaring that in case of permanent loss of judgement (e.g., inability to communicate or severe brain damage) all means of prolonging life shall be stopped. Family members and these organizations also keep proxies which entitle their holder to enforce such patient decrees. Establishing such decrees is relatively uncomplicated.
In 2013 a law concerning advance healthcare directives was passed. Every adult with testamentary capacity can draw up a legally binding document declaring their will in the event of loss of judgement. They may also designate a natural person to discuss medical procedures with the attending doctor and make decisions on their behalf if they are no longer capable of judgment.
United States
Aggressive medical intervention leaves nearly two million Americans confined to nursing homes, and over 1.4 million Americans remain so medically frail as to survive only through the use of feeding tubes. Of U.S. deaths, about a third occur in health care facilities. As many as 30,000 persons are kept alive in comatose and permanently vegetative states.
Cost burdens to individuals and families are considerable. A national study found that: "In 20% of cases, a family member had to quit work", 31% lost "all or most savings" (even though 96% had insurance), and "20% reported loss of [their] major source of income". Yet, studies indicate that 70-95% of people would rather refuse aggressive medical treatment than have their lives medically prolonged in incompetent or other poor prognosis states.
As more and more Americans experienced the burdens and diminishing benefits of invasive and aggressive medical treatment in poor prognosis states – either directly (themselves) or through a loved one – pressure began to mount to devise ways to avoid the suffering and costs associated with treatments one did not want in personally untenable situations. The first formal response was the living will.
In the United States, all states recognize some form of living wills or the designation of a health care proxy. The term living will is not officially recognized under California law, but an advance health care directive or durable power of attorney may be used for the same purpose as a living will.
In Pennsylvania on November 30, 2006, Governor Edward Rendell signed into law Act 169, which provides a comprehensive statutory framework governing advance health care directives and health care decision-making for incompetent patients. As a result, health care organizations make available a "Combined Living Will & Health Care Power of Attorney Example Form from Pennsylvania Act 169 of 2006."
Several states offer living will "registries" where citizens can file their living will so that they are more easily and readily accessible by doctors and other health care providers. However, in recent years some of these registries, such as the one run by the Washington State Department of Health, have been shuttered by the state government because of low enrollment, lack of funds, or both.
On July 28, 2009, Barack Obama became the first United States President to announce publicly that he had a living will, and to encourage others to do the same. He told an AARP town meeting, "So I actually think it's a good idea to have a living will. I'd encourage everybody to get one. I have one; Michelle has one. And we hope we don't have to use it for a long time, but I think it's something that is sensible." The announcement followed controversy surrounding proposed health care legislation that included language that would permit the payment of doctors under Medicare to counsel patients regarding living wills, sometimes referred to as the "infamous" page 425. Shortly afterwards, bioethicist Jacob Appel issued a call to make living wills mandatory.
India
On March 9, 2018, the Supreme Court of India permitted living wills and the withholding and withdrawing of life-sustaining treatments. The country's apex court held that the right to a dignified life extends up to the point of having a dignified death.
See also
Engage with Grace
My body, my choice
Ordinary and extraordinary care
Do not resuscitate
References
External links
Collaboratory on Advance Directives. Andalusian School of Public Health. Spain.
National Resource Center on Psychiatric Advance Directives (U.S.)
Best interests decision-making for adults who lack capacity, toolkit for medical professionals from the British Medical Association.
Patient guidance on advance directives (U.K.)
Euthanasia
Health informatics
Health law
Legal documents
Power of attorney
Legal aspects of death | Advance healthcare directive | [
"Biology"
] | 4,611 | [
"Health informatics",
"Medical technology"
] |
166,152 | https://en.wikipedia.org/wiki/Zhuang%20Zhou | Zhuang Zhou (), commonly known as Zhuangzi (; ; literally "Master Zhuang"; also rendered in the Wade–Giles romanization as Chuang Tzu), was an influential Chinese philosopher who lived around the 4th century BCE during the Warring States period, a period of great development in Chinese philosophy, the Hundred Schools of Thought. He is credited with writing—in part or in whole—a work known by his name, the Zhuangzi, which is one of two foundational texts of Taoism, alongside the Tao Te Ching.
Life
The only account of the life of Zhuangzi is a brief sketch in chapter 63 of Sima Qian's Records of the Grand Historian, and most of the information it contains seems to have simply been drawn from anecdotes in the Zhuangzi itself. In Sima's biography, he is described as a minor official from the town of Meng (in modern Anhui) in the state of Song, living in the time of King Hui of Liang and King Xuan of Qi (late fourth century BC). Sima Qian writes that Zhuangzi was especially influenced by Laozi, and that he turned down a job offer from King Wei of Chu, because he valued his personal freedom.
His existence has been questioned by Russell Kirkland, who asserts that "there is no reliable historical data at all" for Zhuang Zhou, and that most of the available information on the Zhuangzi comes from its third-century commentator, Guo Xiang.
Writings
Zhuangzi is traditionally credited as the author of at least part of the work bearing his name, the Zhuangzi. This work, in its current shape consisting of 33 chapters, is traditionally divided into three parts: the first, known as the "Inner Chapters", consists of the first seven chapters; the second, known as the "Outer Chapters", consists of the next 15 chapters; the last, known as the "Mixed Chapters", consists of the remaining 11 chapters. The meaning of these three names is disputed: according to Guo Xiang, the "Inner Chapters" were written by Zhuangzi, the "Outer Chapters" written by his disciples, and the "Mixed Chapters" by other hands; the other interpretation is that the names refer to the origin of the titles of the chapters—the "Inner Chapters" take their titles from phrases inside the chapter, the "Outer Chapters" from the opening words of the chapters, and the "Mixed Chapters" from a mixture of these two sources.
Further study of the text does not provide a clear choice between these alternatives. On the one hand, as Martin Palmer points out in the introduction to his translation, two of the three chapters Sima Qian cited in his biography of Zhuangzi come from the "Outer Chapters" and the third from the "Mixed Chapters". "Neither of these are allowed as authentic Chuang Tzu chapters by certain purists, yet they breathe the very spirit of Chuang Tzu just as much as, for example, the famous 'butterfly passage' of chapter 2."
On the other hand, chapter 33 has been often considered as intrusive, being a survey of the major movements during the "Hundred Schools of Thought" with an emphasis on the philosophy of Hui Shi. Further, A.C. Graham and other critics have subjected the text to a stylistic analysis and identified four strains of thought in the book: a) the ideas of Zhuangzi or his disciples; b) a "primitivist" strain of thinking similar to Laozi in chapters 8-10 and the first half of chapter 11; c) a strain very strongly represented in chapters 28-31 which is attributed to the philosophy of Yang Zhu; and d) a fourth strain which may be related to the philosophical school of Huang-Lao. In this spirit, Martin Palmer wrote that "trying to read Chuang Tzu sequentially is a mistake. The text is a collection, not a developing argument."
Zhuangzi was renowned for his brilliant wordplay and his use of an original form of gōng'àn (Chinese: 公案) or parables to convey messages. His critiques of Confucian society and historical figures are humorous and at times ironic.
See also
Dream argument
Goblet word
Liezi
Tao Te Ching
Notes
Citations
References
Ames, Roger T. (1991), 'The Mencian Concept of Ren Xing: Does it Mean Human Nature?' in Chinese Texts and Philosophical Contexts, ed. Henry Rosemont, Jr. LaSalle, Ill.: Open Court Press.
Ames, Roger T. (1998) ed. Wandering at Ease in the Zhuangzi. Albany: State University of New York Press.
Bruya, Brian (translator). (2019). Zhuangzi: The Way of Nature. Princeton: Princeton University Press. .
Graham A.C, Chuang-Tzû, the seven inner chapters, Allen & Unwin, London, 1981
Chuang-tzu: The Inner Chapters and other Writings from the Book of Chuang-tzu (London: Unwin Paperbacks, 1986)
Hansen, Chad (2003). "The Relatively Happy Fish," Asian Philosophy 13:145-164.
Herbjørnsrud, Dag (2018). "A Sea for Fish on Dry Land," the blog of the Journal of History of Ideas.
Merton, Thomas. (1969). The Way of Chuang Tzu. New York: New Directions.
Waltham, Clae (editor). (1971). Chuang Tzu: Genius of the Absurd. New York: Ace Books.
The complete work of Chuang Tzu, Columbia University Press, 1968
External links
Zhuangzi Bilingual Chinese-English version (James Legge's translation) - Chinese Text Project
The Zhuangzi "Being Boundless", Complete translation of Zhuangzi by Nina Correa
Chuang Tzu at Taoism.net, Chuang Tzu's Stories and Teachings - translations by Derek Lin
Zhuangzi, The Internet Encyclopedia of Philosophy
Zhuangzi, Stanford Encyclopedia of Philosophy
Selection from The Zhuangzi, translated by Patricia Ebrey
Chuang-tzu at Taopage.org
Zhuang Zi, chapter 1
Zhuang Zi, chapter 2
James Legge Complete Translation In English The Legge translation of the complete Chuang Tzu (Zhuangzi) updated
360s BC births
280s BC deaths
Year of birth uncertain
Year of death uncertain
4th-century BC Chinese people
4th-century BC Chinese philosophers
3rd-century BC Chinese people
3rd-century BC Chinese philosophers
Metaphysicians
Chinese ethicists
Chinese logicians
Guqin players
People from Bozhou
Possibly fictional people from Asia
Philosophers from Anhui
Philosophers of culture
Philosophers of education
Philosophers of language
Philosophers of logic
Philosophers of science
Chinese political philosophers
Proto-anarchists
Proto-evolutionary biologists
Social philosophers
Taoist immortals
Zhou dynasty philosophers
Zhou dynasty Taoists | Zhuang Zhou | [
"Biology"
] | 1,417 | [
"Non-Darwinian evolution",
"Biology theories",
"Proto-evolutionary biologists"
] |
166,159 | https://en.wikipedia.org/wiki/Henry%20Ward%20Beecher | Henry Ward Beecher (June 24, 1813 – March 8, 1887) was an American Congregationalist clergyman, social reformer, and speaker, known for his support of the abolition of slavery, his emphasis on God's love, and his 1875 adultery trial. His rhetorical focus on Christ's love has influenced mainstream Christianity through the 21st century.
Beecher was the son of Lyman Beecher, a Calvinist minister who became one of the best-known evangelists of his era. Several of his brothers and sisters became well-known educators and activists, most notably Harriet Beecher Stowe, who achieved worldwide fame with her abolitionist novel Uncle Tom's Cabin. Henry Ward Beecher graduated from Amherst College in 1834 and Lane Seminary in 1837 before serving as a minister in Lawrenceburg, Indiana, and later in Indianapolis's Second Presbyterian Church when the congregation resided at Circle Hall at Monument Circle.
In 1847, Beecher became the first pastor of the Plymouth Church in Brooklyn, New York. He soon acquired fame on the lecture circuit for his novel oratorical style in which he employed humor, dialect, and slang. Over the course of his ministry, he developed a theology emphasizing God's love above all else. He also grew interested in social reform, particularly the abolitionist movement. In the years leading up to the Civil War, he raised money to purchase slaves from captivity and to send rifles—nicknamed "Beecher's Bibles"—to abolitionists fighting in Kansas. He toured Europe during the Civil War, speaking in support of the Union.
After the war, Beecher supported social reform causes such as women's suffrage and temperance. He also championed Charles Darwin's theory of evolution, stating that it was not incompatible with Christian beliefs. He was widely rumored to be an adulterer, and in 1872 the Woodhull & Claflin's Weekly published a story about his affair with Elizabeth Richards Tilton, the wife of his friend and former co-worker Theodore Tilton. In 1874, Tilton filed charges for "criminal conversation" against Beecher. The subsequent trial resulted in a hung jury and was one of the most widely reported trials of the century.
After the death of his father in 1863, Beecher was unquestionably "the most famous preacher in the nation". Beecher's long career in the public spotlight led biographer Debby Applegate to call her biography of him The Most Famous Man in America.
Early life
Beecher was born in Litchfield, Connecticut, the eighth of 13 children born to Lyman Beecher, a Presbyterian preacher from Boston. His siblings included author Harriet Beecher Stowe, educators Catharine Beecher and Thomas K. Beecher, and activists Charles Beecher and Isabella Beecher Hooker, and his father became known as "the father of more brains than any man in America". Beecher's mother Roxana died when Henry was three, and his father married Harriet Porter, whom Henry described as "severe" and subject to bouts of depression.
Beecher also taught school for a time in Whitinsville, Massachusetts.
The Beecher household was "the strangest and most interesting combination of fun and seriousness". The family was poor, and Lyman Beecher assigned his children "a heavy schedule of prayer meetings, lectures, and religious services" while banning the theater, dancing, most fiction, and the celebration of birthdays or Christmas. The family's pastimes included story-telling and listening to their father play the fiddle.
Beecher had a childhood stammer. He was also considered slow-witted and one of the less promising of the brilliant Beecher children. His poor performance earned him punishments, such as being forced to sit for hours in the girls' corner while wearing a dunce cap. At 14, he began his oratorical training at Mount Pleasant Classical Institute, a boarding school in Amherst, Massachusetts, where he met Constantine Fondolaik Newell, a Smyrna Greek. They attended Amherst College together, where they signed a contract pledging lifelong friendship and brotherly love. Fondolaik died of cholera after returning to Greece around October 1848, and Beecher named his third son after him.
During his years in Amherst, Beecher had his first taste of public speaking, giving his first sermon or talk in 1831 about four miles southeast, in the schoolhouse at a village then called Logtown, today known as Dwight. He was in his second year at Amherst College, and he soon thereafter resolved to join the ministry, setting aside his early dream of going to sea. He met his future wife Eunice Bullard, the daughter of a well-known physician, and they were engaged on January 2, 1832. He also developed an interest in the pseudoscience of phrenology, an attempt to link personality traits with features of the human skull, and he befriended Orson Squire Fowler who became the theory's best-known American proponent.
Beecher graduated from Amherst College in 1834 and then attended Lane Theological Seminary outside Cincinnati, Ohio. Lane was headed by Beecher's father, who had become "America's most famous preacher". The student body was divided by the slavery question, whether to support a form of gradual emancipation, as Lyman Beecher did, or to demand immediate emancipation. Beecher stayed largely clear of the controversy, sympathetic to the radical students but unwilling to defy his father. He graduated in 1837.
Early ministry
On August 3, 1837, Beecher married Eunice Bullard, and the two proceeded to the small, impoverished town of Lawrenceburg, Indiana, where Beecher had been offered a post as a minister of the First Presbyterian Church. He received his first national publicity when he became involved in the break between "New School" and "Old School" Presbyterianism, which were split over questions of original sin and the slavery issue; Henry's father Lyman was a leading proponent of the New School. Because of Henry's adherence to the New School position, the Old School-dominated presbytery declined to install him as the pastor, and the resulting controversy split the western Presbyterian Church into rival synods.
Though Henry Beecher's Lawrenceburg church declared its independence from the Synod to retain him as its pastor, the poverty that followed the Panic of 1837 caused him to look for a new position. Banker Samuel Merrill invited Beecher to visit Indianapolis in 1839, and he was offered the ministry of the Second Presbyterian Church there on May 13, 1839. Unusually for a speaker of his era, Beecher would use humor and informal language including dialect and slang as he preached. His preaching was a major success, building Second Presbyterian into the largest church in the city, and he also led a successful revival meeting in nearby Terre Haute. However, mounting debt led to Beecher again seeking a new position in 1847, and he accepted the invitation of businessman Henry Bowen to head a new Plymouth Congregational Church in Brooklyn, New York. Beecher's national fame continued to grow, and he took to the lecture circuit, becoming one of the most popular speakers in the country and charging correspondingly high fees.
In the course of his preaching, Henry Ward Beecher came to reject his father Lyman's theology, which "combined the old belief that 'human fate was preordained by God's plan' with a faith in the capacity of rational men and women to purge society of its sinful ways". Henry instead preached a "Gospel of Love" that emphasized God's absolute love rather than human sinfulness, and doubted the existence of Hell. He also rejected his father's prohibitions against various leisure activities as distractions from a holy life, stating instead that "Man was made for enjoyment".
Social and political activism
Abolitionism
Henry Ward Beecher became involved in many social issues of his day, most notably abolition. Though Beecher hated slavery as early as his seminary days, his views were generally more moderate than those of abolitionists like William Lloyd Garrison, who advocated the breakup of the Union if it would also mean the end of slavery. A personal turning point for Beecher came in October 1848 when he learned of two escaped young female slaves who had been recaptured; their father had been offered the chance to ransom them from captivity, and appealed to Beecher to help raise funds. Beecher raised over two thousand dollars to secure the girls' freedom. On June 1, 1856, he held another mock slave auction seeking enough contributions to purchase the freedom of a young woman named Sarah.
In his widely reprinted piece "Shall We Compromise", Beecher assailed the Compromise of 1850, a compromise between anti-slavery and pro-slavery forces brokered by Whig Senator Henry Clay. The compromise banned slavery from California and slave-trading from Washington, D.C., at the cost of a stronger Fugitive Slave Act; Beecher objected to the last provision in particular, arguing that it was a Christian's duty to feed and shelter escaped slaves. Slavery and liberty were fundamentally incompatible, Beecher argued, making compromise impossible: "One or the other must die". In 1856, Beecher campaigned for Republican John C. Frémont, the first presidential candidate of the Republican Party; despite Beecher's aid, Frémont lost to Democrat James Buchanan. During the pre-Civil-War conflict in the Kansas Territory, known as "Bloody Kansas", Beecher raised funds to send Sharps rifles to abolitionist forces, stating that the weapons would do more good than "a hundred Bibles". The press subsequently nicknamed the weapons "Beecher's Bibles". Beecher became widely hated in the American South for his abolitionist actions and received numerous death threats.
In 1863, during the Civil War, President Abraham Lincoln sent Beecher on a speaking tour of Europe to build support for the Union cause. Beecher's speeches helped turn European popular sentiment against the rebel Confederate States of America and prevent its recognition by foreign powers. At the close of the war in April 1865, Beecher was invited to speak at Fort Sumter, South Carolina, where the first shots of the war had been fired; Lincoln had again personally selected him, stating, "We had better send Beecher down to deliver the address on the occasion of raising the flag because if it had not been for Beecher there would have been no flag to raise." (See Raising the Flag at Fort Sumter.)
Other views
Beecher advocated for the temperance movement throughout his career and was a strict teetotaler. Following the Civil War, he also became a leader in the women's suffrage movement. In 1867, he campaigned unsuccessfully to become a delegate to the New York Constitutional Convention of 1867–1868 on a suffrage platform, and in 1869, was elected unanimously as the first president of the American Woman Suffrage Association.
In the Reconstruction Era, Beecher sided with President Andrew Johnson's plan for swift restoration of Southern states to the Union. He believed that captains of industry should be the leaders of society and supported Social Darwinist ideas. During the Great Railroad Strike of 1877, he preached strongly against the strikers whose wages had been cut, stating, "Man cannot live by bread alone but the man who cannot live on bread and water is not fit to live," and "If you are being reduced, go down boldly into poverty". His remarks were so unpopular that cries of "Hang Beecher!" became common at labor rallies, and plainclothes detectives protected his church.
Influenced by British author Herbert Spencer, Beecher embraced Charles Darwin's theory of evolution in the 1880s, identifying as a "cordial Christian evolutionist". He argued that the theory was in keeping with what Applegate called "the inevitability of progress", seeing a steady march toward perfection as a part of God's plan. In 1885, he wrote Evolution and Religion to expound these views. His sermons and writings helped to gain acceptance for the theory in America.
Beecher was a prominent advocate for allowing Chinese immigration to continue to the US, helping to delay passage of the Chinese Exclusion Act until 1882. He argued that as other American peoples, such as the Irish, had seen a gradual increase in their social standing, a new people was required to do "what we call the menial work", and that the Chinese, "by reason of their training, by the habits of a thousand years, are adapted to do that work."
Personal life
Marriage
Beecher married Eunice Bullard in 1837 after a five-year engagement. Their marriage was not a happy one; as Applegate writes, "within a year of their wedding they embarked on the classic marital cycle of neglect and nagging", marked by Henry's prolonged absences from home. The couple also suffered the deaths of four of their eight children.
Beecher enjoyed the company of women, and rumors of extramarital affairs circulated as early as his Indiana days, when he was believed to have had an affair with a young member of his congregation. In 1858, the Brooklyn Eagle wrote a story accusing him of an affair with another young church member who had later become a prostitute. The wife of Beecher's patron and editor, Henry Bowen, confessed on her deathbed to her husband of an affair with Beecher; Bowen concealed the incident during his lifetime.
Several members of Beecher's circle reported that Beecher had had an affair with Edna Dean Proctor, an author with whom he was collaborating on a book of his sermons. The couple's first encounter was the subject of dispute: Beecher reportedly told friends that it had been consensual, while Proctor reportedly told Henry Bowen that Beecher had raped her. Regardless of the initial circumstances, Beecher and Proctor allegedly then carried on their affair for more than a year. According to historian Barry Werth, "it was standard gossip that 'Beecher preaches to seven or eight of his mistresses every Sunday evening.'"
The Beecher–Tilton Scandal Case (1875)
In a highly publicized scandal, Beecher was tried on charges that he had committed adultery with a friend's wife, Elizabeth Tilton. In 1870, Elizabeth had confessed to her husband, Theodore Tilton, that she had had a relationship with Beecher. The charges became public after Theodore told Elizabeth Cady Stanton and others of his wife's confession. Stanton repeated the story to fellow women's rights leaders Victoria Woodhull and Isabella Beecher Hooker.
Henry Ward Beecher had publicly denounced Woodhull's advocacy of free love. Seeing a chance to get even no matter the cost, she published a story titled "The Beecher–Tilton Scandal Case" in her paper Woodhull and Claflin's Weekly on November 2, 1872; the article made detailed allegations that America's most renowned clergyman was secretly practicing the free-love doctrines that he denounced from the pulpit. Woodhull was arrested in New York City and imprisoned for sending obscene material through the mail. The scandal split the Beecher siblings; Harriet and others supported Henry, while Isabella publicly supported Woodhull. The first trial was Woodhull's, who was released on a technicality.
Subsequent hearings and trial, in the words of Walter A. McDougall, "drove Reconstruction off the front pages for two and a half years" and became "the most sensational 'he said, she said' in American history". On October 31, 1873, Plymouth Church excommunicated Theodore Tilton for "slandering" Beecher. The Council of Congregational Churches held a board of inquiry from March 9 to 29, 1874, to investigate the disfellowshipping of Tilton, and censured Plymouth Church for acting against Tilton without first examining the charges against Beecher. As of June 27, 1874, Plymouth Church established its own investigating committee which exonerated Beecher. Tilton then sued Beecher on civil charges of adultery. The Beecher–Tilton trial began in January 1875, and ended in July when the jurors deliberated for six days but were unable to reach a verdict. In February 1876, the Congregational church held a final hearing to exonerate Beecher.
Stanton was outraged by Beecher's repeated exonerations, calling the scandal a "holocaust of womanhood". French author George Sand planned a novel about the affair, but died the following year before it could be written.
Later life and legacy
Later life
In 1871, Yale University established "The Lyman Beecher Lectureship", and Henry taught the first three annual courses. After the heavy expenses of the trial, Beecher embarked on a lecture tour of the West that returned him to solvency. In 1884, he angered many of his Republican allies when he endorsed Democratic candidate Grover Cleveland for the presidency, arguing that Cleveland should be forgiven for having fathered an illegitimate child. He made another lecture tour of England in 1886.
On March 6, 1887, Beecher suffered a stroke and died in his sleep on March 8. Still a widely popular figure, he was mourned in newspapers and sermons across the country. Henry Ward Beecher is interred at Green-Wood Cemetery in Brooklyn, New York. Beecher's grandson Harry Beecher was a college football player who was the subject of the first American football card, printed in 1888.
Legacy
In assessing Beecher's legacy, Applegate states that
At his best, Beecher represented what remains the most lovable and popular strain of American culture: incurable optimism; can-do enthusiasm; and open-minded, open-hearted pragmatism ... His reputation has been eclipsed by his own success. Mainstream Christianity is so deeply infused with the rhetoric of Christ's love that most Americans can imagine nothing else, and have no appreciation or memory of the revolution wrought by Beecher and his peers.
In 1929, First Presbyterian Church in Lawrenceburg was renamed Beecher Presbyterian.
A Henry Ward Beecher Monument created by the sculptor John Quincy Adams Ward was unveiled on June 24, 1891, in Borough Hall Park, Brooklyn, and was later relocated to Cadman Plaza, Brooklyn in 1959.
A limerick written about Beecher by poet Oliver Herford became well known in the USA:
Oliver Wendell Holmes Sr. offered his own limerick on Beecher:
Christopher J Barry, Canadian published songwriter, offered this alternative limerick:
In 2022, New Hampshire Historical Marker no. 274 was unveiled in Carroll, New Hampshire, commemorating Beecher and his open-air sermons in the town.
Writings
Background
Henry Ward Beecher was a prolific author as well as speaker. His public writing began in Indiana, where he edited an agricultural journal, The Farmer and Gardener. He was one of the founders and for nearly twenty years an editorial contributor of the New York Independent, a Congregationalist newspaper, and from 1861 till 1863 was its editor. His contributions to this were signed with an asterisk, and many of them were afterward collected and published in 1855 as Star Papers; or, Experiences of Art and Nature.
In 1865, Robert E. Bonner of the New York Ledger offered Beecher twenty-four thousand dollars to follow his sister's example and compose a novel; the subsequent novel, Norwood, or Village Life in New England, was published in 1868. Beecher stated his intent for Norwood was to present a heroine who is "large of soul, a child of nature, and, although a Christian, yet in childlike sympathy with the truths of God in the natural world, instead of books." McDougall describes the resulting novel as "a New England romance of flowers and bosomy sighs ... 'new theology' that amounted to warmed-over Emerson". The novel was moderately well received by critics of the day. In 1964 sculptor Joseph Kiselewski created a bronze medal depicting Henry Ward Beecher for the Hall of Fame for Great Americans at the Bronx Community College in New York City. The sculptor John Massey Rhind created the Hall's bust of Beecher.
List of published works
Seven Lectures to Young Men (1844) (pamphlet)
Star Papers; or, Experiences of Art and Nature (1855). Columns from the New York Independent. New York: J. C. Derby.
Life Thoughts, Gathered from the Extemporaneous Discourses of Henry Ward Beecher by One of His Congregation. Notes taken of Beecher's sermons by Edna Dean Proctor. Boston: Phillips, Sampson and Company, 1858
Notes from Plymouth Pulpit (1859)
Plain and Pleasant Talk About Fruits, Flowers and Farming. Articles taken from the Western Farmer and Gardener. New York: Derby & Jackson, 1859.
The Independent (1861–63) (periodical, editor)
Eyes and Ears (1862) (collection of letters from the New York Ledger newspaper)
Freedom and War (1863) Boston, Ticknor and Fields (1863).
Lectures to Young Men, On Various Important Subjects. New edition with additional lectures. Boston: Ticknor and Fields, 1868
Christian Union (1870–78) (periodical, as editor)
Summer in the Soul (1858)
Prayers from the Plymouth Pulpit (1867)
Norwood, or Village Life in New England (1868) (novel)
Life of Jesus, the Christ (1871) New York: J. B. Ford and Company.
Yale Lectures on Preaching (1872)
Evolution and Religion (1885); reissued by Cambridge University Press 2009.
Proverbs from Plymouth Pulpit (1887)
A Biography of Rev. Henry Ward Beecher by Wm. C. Beecher and Rev. Samuel Scoville (1888)
In popular culture
Beecher Cascades on Crawford Brook in Carroll, New Hampshire, is named for him. It is rumored that he slipped and fell into the brook there on a visit.
In March 1993, a new musical, Loving Henry, inspired by the Beecher–Tilton scandal, was presented at the First Presbyterian Church of Brooklyn. It was written by Dick Turmail and Clinton Corbett, with the music composed by jazz violinist Noel Pointer.
Citations
Cited works
Hibben, Paxton. Henry Ward Beecher: An American Portrait. New York: The press of the Readers club, 1942. (Foreword by Sinclair Lewis.)
Further reading
Duyckinck, Evert A. (1873). "Henry Ward Beecher". In Portrait Gallery of Eminent Men and Women of Europe and America. Embracing History, Statesmanship, Naval and Military Life, Philosophy, The Drama, Science, Literature and Art. With Biographies. New York: Johnson, Wilson and Company, vol. 2, pp. 600–604.
McFarland, Philip (2007). Loves of Harriet Beecher Stowe. New York: Grove Press. (Her "loves" are husband Calvin, father Lyman, and brother Henry.)
Menand, Louis (April 14, 2024). "When Preachers Were Rock Stars". The New Yorker. This is the foreword to Shaplen (2024).
Shaplen, Robert (2024). Free Love: The Story of a Great American Scandal. New York: McNally Editions. First published in 1954 by Alfred A. Knopf, Inc. as Free Love and Heavenly Sinners.
Smith, Matthew Hale (1869). "Mr. Beecher and Plymouth Church". Ch. IX of Sunshine and Shadow in New York. Hartford: J. B. Burr and Company, pp. 86–100.
External links
Henry Ward Beecher by Lyman Abbott (1904)
The Beecher-Tilton Affair from the Museum of the City of New York Collections blog
Beecher family collection from Princeton University Library. Special Collections
Violet Beach - Henry Ward Beecher Collection at the Amherst College Archives & Special Collections
Joseph Winner dedicated the song "Contraband's Song of Freedom" to Rev. Beecher.
1813 births
1887 deaths
19th-century American clergy
19th-century American male writers
19th-century American non-fiction writers
19th-century Congregationalist ministers
Abolitionists from New York City
American Congregationalist ministers
American evangelicals
American male non-fiction writers
American people of English descent
American people of Welsh descent
American social reformers
American temperance activists
Amherst College alumni
Beecher family
Bleeding Kansas
Burials at Green-Wood Cemetery
Congregationalist abolitionists
Congregationalist writers
Hall of Fame for Great Americans inductees
Lane Theological Seminary alumni
People from Litchfield, Connecticut
People of the American Civil War
Religious leaders from Connecticut
Religious leaders from New York City
Suffragists from Connecticut
Theistic evolutionists | Henry Ward Beecher | [
"Biology"
] | 4,967 | [
"Non-Darwinian evolution",
"Theistic evolutionists",
"Biology theories"
] |
166,202 | https://en.wikipedia.org/wiki/White%20people | White is a racial classification of people generally used for those of predominantly European ancestry. It is also a skin color specifier, although the definition can vary depending on context, nationality, ethnicity and point of view.
Description of populations as "White" in reference to their skin color is occasionally found in Greco-Roman ethnography and other ancient or medieval sources, but these societies did not have any notion of a White race or pan-European identity. The term "White race" or "White people", defined by their light skin among other physical characteristics, entered the major European languages in the later seventeenth century, when the concept of a "unified White" achieved greater acceptance in Europe, in the context of racialized slavery and social status in the European colonies. Scholarship on race distinguishes the modern concept from pre-modern descriptions, which focused on physical complexion rather than the idea of race. Prior to the modern era, no European peoples regarded themselves as "White", but rather defined their identity in terms of their religion, ancestry, ethnicity, or nationality.
Contemporary anthropologists and other scientists, while recognizing the reality of biological variation between different human populations, regard the concept of a unified, distinguishable "White race" as a social construct with no scientific basis.
Physical descriptions in antiquity
According to anthropologist Nina Jablonski:
The Ancient Egyptian (New Kingdom) funerary text known as the Book of Gates distinguishes "four groups" in a procession. These are the Egyptians, the Levantine and Canaanite peoples or "Asiatics", the "Nubians" and the "fair-skinned Libyans". The Egyptians are depicted as considerably darker-skinned than the Levantines (persons from what is now Lebanon, Israel, Palestine and Jordan) and Libyans, but considerably lighter than the Nubians (modern Sudan).
The assignment to certain persons of positive and negative connotations of White and Black dates to very early times in a number of Indo-European languages, but these differences were not necessarily used in reference to skin color. Religious conversion was sometimes described figuratively as a change in skin color. Similarly, the Rigveda uses "black skin" as a metaphor for irreligiosity. Ancient Egyptians, Mycenaean Greeks and Minoans generally depicted women as having pale or white skin while men were depicted as dark brown or tanned. As a result, men with pale or light skin, leukochrōs (λευκόχρως, "white-skinned"), could be considered weak and effeminate by Ancient Greek writers such as Plato and Aristotle. According to Aristotle, "Those whose skin is too dark are cowardly: witness Egyptians and the Ethiopians. Those whose skin is too light are equally cowardly: witness women. The skin color typical of the courageous should be halfway between the two." Similarly, Xenophon of Athens describes Persian prisoners of war as "white-skinned because they were never without their clothing, and soft and unused to toil because they always rode in carriages" and states that Greek soldiers as a result believed "that the war would be in no way different from having to fight with women."
Classicist James H. Dee states "the Greeks do not describe themselves as 'White people' or as anything else because they had no regular word in their color vocabulary for themselves." People's skin color did not carry useful meaning; what mattered was where they lived. Herodotus described the Scythian Budini as having deep blue eyes and bright red hair, and the Egyptians – quite like the Colchians – as "dark-skinned" and curly-haired. He also gives possibly the first reference to the common Greek name of the tribes living south of Egypt, otherwise known as Nubians, which meant "burned-faced". Later Xenophanes of Colophon described the Aethiopians as black and the Thracians as having red hair and blue eyes. In his description of the Scythians, Hippocrates states that the cold weather "burns their white skin and turns it ruddy."
Modern racial hierarchies
The term "White race" or "White people" entered the major European languages in the later seventeenth century, originating with the racialization of slavery at the time, in the context of the Atlantic slave trade and the enslavement of indigenous peoples in the Spanish Empire. It has repeatedly been ascribed to strains of blood, ancestry, and physical traits, and was eventually made into a subject of pseudoscientific research, which culminated in scientific racism, which was later widely repudiated by the scientific community. According to historian Irene Silverblatt, "Race thinking… made social categories into racial truths." Bruce David Baum, citing the work of Ruth Frankenberg, states, "the history of modern racist domination has been bound up with the history of how European peoples defined themselves (and sometimes some other peoples) as members of a superior 'white race'." Alastair Bonnett argues that "white identity", as it is presently conceived, is an American project, reflecting American interpretations of race and history.
According to Gregory Jay, a professor of English at the University of Wisconsin–Milwaukee:
In the sixteenth and seventeenth centuries, "East Asian peoples were almost uniformly described as White, never as yellow." Michael Keevak's history Becoming Yellow finds that East Asians were redesignated as being yellow-skinned because "yellow had become a racial designation," and that the replacement of White with yellow as a description came through pseudoscientific discourse.
A social category formed by colonialism
A three-part racial scheme in color terms was used in seventeenth-century Latin America under Spanish rule. Irene Silverblatt traces "race thinking" in South America to the social categories of colonialism and state formation: "White, black, and brown are abridged, abstracted versions of colonizer, slave, and colonized." By the mid-seventeenth century, the novel term español ("Spaniard") was being equated in written documents with blanco, or "White". In Spain's American colonies, Black African, Indigenous, Jewish, or morisco ancestry formally excluded individuals from the "purity of blood" (limpieza de sangre) requirements for holding any public office under the Royal Pragmatic of 1501. Similar restrictions applied in the military, some religious orders, colleges, and universities, leading to a nearly all-White priesthood and professional stratum. Blacks and mulattos were subject to tribute obligations and forbidden to bear arms, and black and mulatto women were forbidden to wear jewels, silk, or precious metals in early colonial Mexico and Peru. Those with dark skin and those of mixed African and European ancestry who had resources largely sought to evade these restrictions by passing as White. A brief royal policy allowing individuals to purchase the privileges of Whiteness for a substantial sum of money attracted fifteen applicants before pressure from White elites ended the practice.
In the British colonies in North America and the Caribbean, the designation English or Christian was initially used in contrast to Native Americans or Africans. Early appearances of White race or White people in the Oxford English Dictionary begin in the seventeenth century. Historian Winthrop Jordan reports that, "throughout the [thirteen] colonies the terms Christian, free, English, and white were ... employed indiscriminately" in the seventeenth century as proxies for one another. In 1680, Morgan Godwyn "found it necessary to explain" to English readers that "in Barbados, 'white' was 'the general name for Europeans.'" Several historians report a shift towards greater use of White as a legal category alongside a hardening of restrictions on free or Christian blacks. White remained a more familiar term in the American colonies than in Britain well into the 1700s, according to historian Theodore W. Allen.
Scientific racism
Western studies of race and ethnicity in the eighteenth and nineteenth centuries developed into what would later be termed scientific racism. Prominent European pseudoscientists writing about human and natural difference included a White or West Eurasian race among a small set of human races and imputed physical, mental, or aesthetic superiority to this White category. These ideas were discredited by twentieth-century scientists.
Eighteenth century beginnings
In 1758, Carl Linnaeus proposed what he considered to be natural taxonomic categories of the human species. He distinguished between Homo sapiens and Homo sapiens europaeus, and he later added four geographical subdivisions of humans: white Europeans, red Americans, yellow Asians and black Africans. Although Linnaeus intended them as objective classifications, his descriptions of these groups included cultural patterns and derogatory stereotypes.
In 1775, the naturalist Johann Friedrich Blumenbach asserted that "The white color holds the first place, such as is that of most European peoples. The redness of the cheeks in this variety is almost peculiar to it: at all events it is but seldom to be seen in the rest".
In the various editions of his On the Natural Variety of Mankind, he categorized humans into four or five races, largely built on Linnaeus' classifications. But while, in 1775, he had grouped into his "first and most important" race "Europe, Asia this side of the Ganges, and all the country situated to the north of the Amoor, together with that part of North America, which is nearest both in position and character of the inhabitants", he somewhat narrows his "Caucasian variety" in the third edition of his text, of 1795: "To this first variety belong the inhabitants of Europe (except the Lapps and the remaining descendants of the Finns) and those of Eastern Asia, as far as the river Obi, the Caspian Sea and the Ganges; and lastly, those of Northern Africa." Blumenbach quotes various other systems by his contemporaries, ranging from two to seven races, authored by the authorities of that time, including, besides Linnæus, Georges-Louis Leclerc, Comte de Buffon, Christoph Meiners and Immanuel Kant.
In the question of color, he conducts a rather thorough inquiry, considering also factors of diet and health, but ultimately believes that "climate, and the influence of the soil and the temperature, together with the mode of life, have the greatest influence". Blumenbach's conclusion, however, was that all races belong to a single human species. Blumenbach argued that physical characteristics like skin color and cranial profile depended on environmental factors, such as solarization and diet. Like other monogenists, Blumenbach held to the "degenerative hypothesis" of racial origins. He claimed that Adam and Eve were Caucasian inhabitants of Asia, and that other races came about by degeneration from environmental factors such as the sun and poor diet. He consistently believed that the degeneration could be reversed under proper environmental control and that all contemporary forms of man could revert to the original Caucasian race.
Nineteenth and twentieth century: the "Caucasian race"
Between the mid-nineteenth and mid-twentieth centuries, race scientists, including most physical anthropologists, classified the world's populations into three, four, or five races, which, depending on the authority consulted, were further divided into various sub-races. During this period the Caucasian race, named after the people of the Caucasus Mountains but extending to all Europeans, figured as one of these races and was incorporated as a formal category of both pseudoscientific research and, in countries including the United States, social classification.
There was never any scholarly consensus on the delineation between the Caucasian race, including the populations of Europe, and the Mongoloid one, including the populations of East Asia. Thus, Carleton S. Coon (1939) included the populations native to all of Central and Northern Asia under the Caucasian label, while Thomas Henry Huxley (1870) classified the same populations as Mongoloid, and Lothrop Stoddard (1920) classified as "brown" most of the populations of the Middle East, North Africa and Central Asia, counting as "White" only the European peoples and their descendants, as well as some populations in parts of Anatolia and the northern areas of Morocco, Algeria and Tunisia. Some authorities, following Huxley (1870), distinguished the Xanthochroi or "light Whites" of Northern Europe from the Melanochroi or "dark Whites" of the Mediterranean.
Although modern neo-Nazis often invoke Nazi iconography on behalf of White nationalism, Nazi Germany repudiated the idea of a unified White race, instead promoting Nordicism. In Nazi propaganda, Eastern European Slavs were often referred to as Untermensch ("subhuman"), and the relatively under-developed economic status of Eastern European countries such as Poland and the USSR was attributed to the racial inferiority of their inhabitants. Fascist Italy took the same view, and both nations justified their colonial ambitions in Eastern Europe on racist, anti-Slavic grounds. These nations were not alone in this view; during the long nineteenth century and the interwar period there were numerous cases, regardless of where a person stood on the political spectrum, in which European ethnic groups and nations labeled or treated other Europeans as members of another, somehow "inferior race". Between the Enlightenment era and the interwar period, racist worldviews fit comfortably within the liberal worldview and were all but universal among liberal thinkers and politicians.
Census and social definitions in different regions
Definitions of White have changed over the years, including the official definitions used in many countries, such as the United States and Brazil. Through the mid to late twentieth century, numerous countries had formal legal standards or procedures defining racial categories (see cleanliness of blood, casta, apartheid in South Africa, hypodescent). Some countries do not ask questions about race or colour at all in their census.
Africa
South Africa
White Dutch people first arrived in South Africa around 1652. By the beginning of the eighteenth century, some 2,000 Europeans and their descendants were established in the region. Although these early Afrikaners represented various nationalities, including German peasants and French Huguenots, the community retained a thoroughly Dutch character.
Britain captured Cape Town in 1795, during the French Revolutionary Wars, and permanently acquired the Cape Colony from the Netherlands in 1814. The first British immigrants numbered about 4,000 and arrived in 1820. They came from England, Ireland, Scotland and Wales and were typically more literate than the Dutch. The discovery of diamonds and gold led to a greater influx of English speakers, who were able to develop the mining industry with capital unavailable to Afrikaners. They have been joined in subsequent decades by former colonials from elsewhere, such as Zambia and Kenya, and by poorer British nationals looking to escape hardship at home.
Both Afrikaners and English speakers have been politically dominant in South Africa at different times; owing to the controversial racial order under apartheid, the nation's predominantly Afrikaner government became a target of condemnation by other African states and the site of considerable dissension between 1948 and 1991.
There were 4.6 million Whites in South Africa in 2011, down from an all-time high of 5.2 million in 1995 following a wave of emigration commencing in the late twentieth century. However, many returned over time.
Asia
Hong Kong
In the 2021 census of Hong Kong, 61,582 people identified as white, representing 0.8% of the total population.
Philippines
According to Spanish colonial statistics, 5% of the Philippine population in the 1700s had partial Spanish ancestry.
Australia and Oceania
Australia
The 2021 Australian census does not use the term "white" on its census form; instead, results showed 54.7% of the population identifying with a European ancestry.
From 1788, when the first British colony in Australia was founded, until the early nineteenth century, most immigrants to Australia were English, Scottish, Welsh and Irish convicts. These were augmented by small numbers of free settlers from the British Isles and other European countries. However, until the mid-nineteenth century, there were few restrictions on immigration, although members of ethnic minorities tended to be assimilated into the Anglo-Celtic populations.
People of many nationalities, including many non-White people, emigrated to Australia during the gold rushes of the 1850s. However, the vast majority were still White, and the gold rushes inspired the first racist activism and policy, directed mainly at Chinese immigrants.
From the late nineteenth century, the Colonial/State and later federal governments of Australia restricted all permanent immigration to the country by non-Europeans. These policies became known as the "White Australia policy", which was consolidated and enabled by the Immigration Restriction Act 1901, but was never universally applied. Immigration inspectors were empowered to ask immigrants to take dictation from any European language as a test for admittance, a test used in practice to exclude people from Asia, Africa, and some European and South American countries, depending on the political climate.
Although they were not the prime targets of the policy, large numbers of southern European and eastern European immigrants were not admitted until after World War II. Following this, the White Australia Policy was relaxed in stages: non-European nationals who could demonstrate European descent were admitted (e.g., descendants of European colonizers and settlers from Latin America or Africa), as were autochthonous inhabitants (such as Maronites, Assyrians and Mandeans) of various nations in the Middle East, most significantly Lebanon and, to a lesser degree, Iraq, Syria and Iran. In 1973, all immigration restrictions based on race and geographic origin were officially terminated.
Australia enumerated its population by race between 1911 and 1966, by racial origin in 1971 and 1976, and by self-declared ancestry alone since 1981, meaning no attempt is now made to classify people according to skin color. As of the 2016 census, the Australian Human Rights Commission estimated that around 58% of the Australian population were Anglo-Celtic Australians, with 18% of other European origins, for a total of 76% of European ancestry as a whole.
New Zealand
According to the 2023 New Zealand census 67.8% or 3,383,742 people identified with a European ethnic origin, down from 70.2% in 2018, and 90.6% in 1966.
In 1926, 95.0% of the population was of European descent.
The establishment of British colonies in Australia from 1788 and the boom in whaling and sealing in the Southern Ocean brought many Europeans to the vicinity of New Zealand. Whalers and sealers were often itinerant, and the first real settlers were missionaries and traders in the Bay of Islands area from 1809. Early visitors to New Zealand included whalers, sealers, missionaries, mariners, and merchants, attracted to natural resources in abundance. They came from the Australian colonies, Great Britain and Ireland, Germany (forming the next biggest immigrant group after the British and Irish), France, Portugal, the Netherlands, Denmark, the United States, and Canada.
In the 1860s, the discovery of gold started a gold rush in Otago. By 1860 more than 100,000 British and Irish settlers lived throughout New Zealand. The Otago Association actively recruited settlers from Scotland, creating a definite Scottish influence in that region, while the Canterbury Association recruited settlers from the south of England, creating a definite English influence over that region.
In the 1870s, MP Julius Vogel borrowed millions of pounds from Britain to help fund capital development such as a nationwide rail system, lighthouses, ports, and bridges, and encouraged mass migration from Britain. By 1870 the non-Māori population reached over 250,000. Other smaller groups of settlers came from Germany, Scandinavia, and other parts of Europe as well as from China and India, but British and Irish settlers made up the vast majority and did so for the next 150 years.
Other Oceania
Europe
France
White people in France are a broad social category in French society based on race or skin color.
In statistical terms, the French government banned the collection of racial or ethnic information in 1978, and the National Institute of Statistics and Economic Studies (INSEE) therefore does not provide census data on White residents or citizens in France. French courts have, however, heard cases and issued rulings that identify White people as a demographic group within the country.
White people in France are defined, or discussed, as a racial or social grouping from a diverse and often conflicting range of political and cultural perspectives, including anti-racism activism in France, right-wing political dialogue or propaganda, and other sources.
Background
Whites in France have been studied with regard to the group's historical involvement in French colonialism; how "whites in France have played a major international role in colonizing areas of the globe such as the African continent."
They have been described as a privileged social class within the country, comparatively sheltered from racism and poverty. It has been reported that "most white people in France only know the banlieues as a kind of caricature". Banlieues, outer-city areas across the country that are increasingly identified with minority groups, often have residents who are disproportionately affected by unemployment and poverty.
The lack of census data collected by the INED and INSEE on Whites in France has been analyzed, from some academic perspectives, as masking racial issues within the country, or as a form of false racial color blindness. Writing for Al Jazeera, French journalist Rokhaya Diallo suggests that "a large portion of White people in France are not used to having frank conversations about race and racism." According to political sociologist Eduardo Bonilla-Silva, "whites in France lie to themselves and the world by proclaiming that they do not have institutional racism in their nation." Sociologist Crystal Marie Fleming has written: "While many whites in France refuse to acknowledge institutionalized racism and white supremacy, there is widespread belief in the specter of 'anti-white racism'".
Use in right-wing politics
Accusations of anti-White racism, suggestions of the displacement of, or lack of representation for, the group, and rhetoric surrounding Whites in France experiencing poverty have been, at times, utilised by various right-wing political elements in the country. University of Lyon's political scientist Angéline Escafré-Dublet has written that "the equivalent to a White backlash in France can be traced through the debate over the purported neglect of the 'poor Whites' in France".
In 2006, French politician Jean-Marie Le Pen suggested there were too many "players of colour" in the France national football team, after suggesting that 7 of the 23-player squad were White. In 2020, French politician Nadine Morano stated that French actress Aïssa Maïga, who was born in Senegal, should "go back to Africa" if she "was not happy with seeing so many white people in France".
Republic of Ireland
According to the 2022 Irish census, 4,444,145 people, or 87.4% of the total population, declared themselves "White Irish" or Other White, a decline from 92.4% in 2016 and 94.24% in 2011.
People who identified as “White Irish” in 2022 were 3,893,056 or 76.5% of the total population, a decline from 87.4% in 2006.
Malta
As of the 2021 census, 89.1% of the population self-identified as being of Caucasian racial origin. Maltese-born natives make up the majority of the island, with 386,280 people out of a total population of 519,562. There are, however, minorities, the largest of which by European birthplace were the United Kingdom (15,082), Italy (13,361) and Serbia (5,935). Among the non-Maltese population, 58.1% identified as Caucasian.
United Kingdom
Historical White identities
Before the Industrial Revolution in Europe, whiteness may have been associated with social status. Aristocrats may have had less exposure to the sun, and therefore a pale complexion may have been associated with status and wealth. This may be the origin of "blue blood" as a description of royalty, the skin being so lightly pigmented that the blueness of the veins could be clearly seen. The change in the meaning of White that occurred in the colonies (see above), distinguishing Europeans from non-Europeans, did not apply to the 'home' countries (England, Ireland, Scotland and Wales). Whiteness therefore retained a meaning associated with social status, and, during the nineteenth century, when the British Empire was at its peak, many of the bourgeoisie and aristocracy developed extremely negative attitudes towards those of lower social rank.
Edward Lhuyd discovered that Welsh, Gaelic, Cornish and Breton are all part of the same language family, which he termed the "Celtic family", and was distinct from the Germanic English; this can be seen in context of the emerging romantic nationalism, which was also prevalent among those of Celtic descent.
Just as race reified whiteness in America, Africa, and Asia, capitalism without social welfare reified whiteness with regard to social class in nineteenth-century Britain and Ireland; this social distinction of whiteness became, over time, associated with racial difference. For example, George Sims, in his 1883 book How the Poor Live, wrote of "a dark continent that is within easy reach of the General Post Office ... the wild races who inhabit it will, I trust, gain public sympathy as easily as [other] savage tribes".
Modern and official use
From the early 1700s, Britain received small-scale immigration of black people owing to the transatlantic slave trade. The oldest Chinese community in Britain (and in Europe) dates from the nineteenth century. Since the end of World War II, substantial immigration from the African, Caribbean and South Asian (formerly the British Raj) colonies changed the picture more radically, while accession to the European Union brought with it heightened immigration from Central and Eastern Europe.
Today the Office for National Statistics uses the term White as an ethnic category, subdivided into White British, White Irish, White Scottish and White Other. These classifications rely on individuals' self-identification, since it is recognised that ethnic identity is not an objective category. Socially, in the UK, White usually refers only to people of native British, Irish and other European origin. According to the 2011 census, the White population stood at 85.5% in England (White British: 79.8%), 96% in Scotland (White British: 91.8%) and 95.6% in Wales (White British: 93.2%), while in Northern Ireland 98.28% identified themselves as White, amounting to a total White (or White British and Irish) population of 87.2%.
North America
Bermuda (U.K.)
At the 2016 census the number of Bermudians who identified as white was 19,466, or 31 percent of the total population. Whites made up the entirety of Bermuda's population from settlement (which began accidentally in 1609 with the wreck of the Sea Venture) until the middle of the seventeenth century, apart from one black and one Indian slave brought in for a very short-lived pearl fishery in 1616, and remained the majority until some point in the eighteenth century.
In 2010, census data found that White Bermudians accounted for 31% of the population, comprising 10% native Bermudians and 21% foreign-born residents.
Canada
Of the over 36 million Canadians enumerated in 2021, approximately 25 million reported being "White", representing 69.8 percent of the population.
In the 1995 Employment Equity Act, "'members of visible minorities' means persons, other than Aboriginal peoples, who are non-Caucasian in race or non-white in colour". In the 2001 Census, persons who selected Chinese, South Asian, African, Filipino, Latin American, Southeast Asian, Arab, West Asian, Middle Eastern, Japanese, or Korean were included in the visible minority population. A separate census question on "cultural or ethnic origin" (question 17) does not refer to skin color.
Costa Rica
The 2022 census counted a total population of 5,044,197 people. It also recorded ethnic or racial identity for all groups separately for the first time since the 1927 census, more than ninety-five years earlier. Options on section IV, question 7 included Indigenous, Black or Afro-descendant, Mulatto, Chinese, Mestizo, White and other.
White people (including mestizos) make up 94% of the population, 3% are black, 1% are Amerindian, and 1% are Chinese. White Costa Ricans are mostly of Spanish ancestry, but there are also significant numbers of Costa Ricans descended from British, Italian, German, English, Dutch, French, Irish, Portuguese and Polish families, as well as a sizable Jewish (namely Ashkenazi and Sephardic) community.
Cuba
White people in Cuba make up 64.1% of the total population according to the 2012 census, with the majority being of diverse Spanish descent. However, after the mass exodus resulting from the Cuban Revolution of 1959, the number of white Cubans actually residing in Cuba diminished. Today the various figures given for the percentage of Whites in Cuba are conflicting and uncertain; some reports (usually coming from Cuba) still give a figure close to the pre-1959 level of 65%, while others (usually from outside observers) report 40–45%. Despite most White Cubans being of Spanish descent, many others are of French, Portuguese, German, Italian and Russian descent.
During the eighteenth, nineteenth, and early part of the twentieth century, large waves of Canarians, Catalans, Andalusians, Castilians, and Galicians emigrated to Cuba. Many European Jews also immigrated, some of them Sephardic. Between 1901 and 1958, more than a million Spaniards arrived in Cuba from Spain; many of them and their descendants left after Castro's communist regime took power. Historically, Chinese descendants in Cuba were classified as White.
In 1953, it was estimated that 72.8% of Cubans were of European ancestry, mainly of Spanish origin, 12.4% of African ancestry, 14.5% of mixed African and European ancestry (mulattos), and 0.3% of Chinese or other East Asian descent (officially called "amarilla", or "yellow", in the census). However, after the Cuban Revolution, due to a combination of factors, mainly mass exodus to Miami in the United States, a drastic decrease in immigration, and interracial reproduction, Cuba's demography changed. As a result, those of complete European ancestry and those of pure African ancestry have decreased, the mixed population has increased, and the Chinese (or East Asian) population has, for all intents and purposes, disappeared.
The Institute for Cuban and Cuban American Studies at the University of Miami says the present Cuban population is 38% White and 62% Black/Mulatto. The Minority Rights Group International says that "An objective assessment of the situation of Afro-Cubans remains problematic due to scant records and a paucity of systematic studies both pre- and post-revolution. Estimates of the percentage of people of African descent in the Cuban population vary enormously, ranging from 33.9 per cent to 62 per cent".
Dominican Republic
White Dominicans make up 18.7% of the Dominican Republic's population, according to a 2022 survey by the United Nations Population Fund. The majority of white Dominicans have ancestry from the first European settlers to arrive in Hispaniola in 1492 and are descendants of the Spanish and Portuguese who settled the island during colonial times, as well as the French who settled there in the seventeenth and eighteenth centuries. About 9.2% of the Dominican population claims a European immigrant background, according to the 2021 Fondo de Población de las Naciones Unidas survey.
El Salvador
In 2013, White Salvadorans were a minority ethnic group in El Salvador, accounting for 12.7% of the country's population. An additional 86.3% of the population were mestizo, having mixed Amerindian and European ancestry.
Guatemala
In 2010, 18.5% of Guatemalans belonged to the White ethnic group, with 41.7% of the population being Mestizo and 39.8% belonging to the 23 Indigenous groups. It is difficult to make an accurate census of Whites in Guatemala because the country categorizes all non-indigenous people as mestizo or ladino, and a large majority of White Guatemalans consider themselves mestizos or ladinos. By the nineteenth century the majority of immigrants were Germans, many of whom were granted fincas and coffee plantations in Cobán, while others went to Quetzaltenango and Guatemala City. Many young Germans married mestiza and indigenous Q'eqchi' women, which caused a gradual whitening. There was also Belgian immigration to Santo Tomás, which contributed to intermixture with black and mestiza women in that region.
Honduras
As of 2013, Hondurans of solely White ancestry are a small minority in Honduras, accounting for 1% of the country's population. An additional 90% of the population is mestizo, having mixed indigenous and European ancestry.
Mexico
White Mexicans are individuals in Mexico who identify as white, often because of their physical appearance or their recognition of European or West Asian ancestry. The Mexican government conducts ethnic censuses that allow individuals to identify as "White", but the specific results of these censuses are not made public. Instead, the government releases data on the percentage of "light-skinned Mexicans" in the country, with nationwide surveys conducted by Mexico's National Institute of Statistics and the National Council to Prevent Discrimination reporting results of about one-third. The term "light-skinned Mexican" is preferred by both the government and the media to describe individuals in Mexico who possess European physical traits when discussing ethno-racial dynamics, although "White Mexican" is still used at times.
Europeans began arriving in Mexico during the Spanish conquest of the Aztec Empire, and while during the colonial period most European immigration was Spanish (mostly from northern provinces such as Cantabria, Navarra, Galicia and the Basque Country), in the nineteenth and twentieth centuries European and European-derived populations from North and South America also immigrated to the country. According to twentieth- and twenty-first-century academics, large-scale intermixing between European immigrants and the native Indigenous peoples produced a Mestizo group which would become the overwhelming majority of Mexico's population by the time of the Mexican Revolution. However, according to church and census registers from colonial times, the majority of Spanish men married Spanish women. These registers also call into question other narratives held by contemporary academics, such as the idea that European immigrants who arrived in Mexico were almost exclusively men, or that "pure Spanish" people were all part of a small powerful elite, as Spaniards were often the most numerous ethnic group in the colonial cities and there were menial workers and people in poverty who were of complete Spanish origin.
Another ethnic group in Mexico, the Mestizos, is composed of people with varying degrees of European and indigenous ancestry, with some showing a European genetic ancestry higher than 90%. However, the criteria for defining what constitutes a Mestizo varies from study to study, as in Mexico a large number of White people have been historically classified as Mestizos, because after the Mexican Revolution the Mexican government began defining ethnicity on cultural standards (mainly the language spoken) rather than racial ones in an effort to unite all Mexicans under the same racial identity.
Estimates of Mexico's White population differ greatly in both methodology and the percentages given. Unofficial sources such as the World Factbook, which use the 1921 census results as the basis of their estimates, put Mexico's White population at only 10% (the results of the 1921 census, however, have been contested by various historians and deemed inaccurate). Other sources suggest rather higher percentages: using the presence of blond hair as a criterion for classifying a Mexican as White, the Metropolitan Autonomous University of Mexico calculated the share of that group at 23% within the institution. With a similar methodology, the American Sociological Association obtained a figure of 18.8%. Another study, made by University College London in collaboration with Mexico's National Institute of Anthropology and History, found that the frequencies of blond hair and light eyes in Mexicans are 18% and 28% respectively.
A study performed in hospitals of Mexico City suggests that socioeconomic factors influence the frequency of Mongolian spots among newborns, as evidenced by a prevalence of 85% in newborns from a public institution, typically associated with lower socioeconomic status, compared with a 33% prevalence in newborns from private hospitals, which generally cater to families of higher socioeconomic status. The Mongolian spot appears with a very high frequency (85–95%) in Native American and African children, but can be present in some individuals from Mediterranean populations. The skin lesion reportedly almost always appears on South American and Mexican children who are racially Mestizo, while having a very low frequency (5–10%) in European children. According to the Mexican Social Security Institute (IMSS), around half of Mexican babies nationwide have the Mongolian spot.
Mexico's northern and western regions have the highest percentages of White population, with the majority of the people there having no native admixture or being of predominantly European ancestry. In the north and west of Mexico the indigenous tribes were substantially smaller and, unlike those found in central and southern Mexico, mostly nomadic, and thus remained isolated from colonial population centers, with hostilities between them and Mexican colonists often taking place. This eventually led the northeast region of the country to become the region with the highest proportion of Whites during the Spanish colonial period, although recent migration waves have been changing its demographic trends. A number of settlements in which European immigrants have maintained their original culture and language survive to this day and are spread across Mexican territory; among the most notable groups are the Mennonites, who have colonies in states as varied as Chihuahua and Campeche, and the town of Chipilo in the state of Puebla, inhabited almost entirely by descendants of Italian immigrants who still speak their Venetian-derived dialect.
Nicaragua
As of 2013, the White ethnic group in Nicaragua accounts for 17% of the country's population. An additional 69% of the population is Mestizo, having mixed indigenous and European ancestry. In the nineteenth century, Nicaragua received European and North American immigration, mostly from Germany, England and the United States, and the immigrants often married native Nicaraguan women. Some Germans were given land to grow coffee in Matagalpa, Jinotega and Esteli, although most Europeans settled in San Juan del Norte. In the late seventeenth century, pirates from England, France and Holland mixed with the indigenous population and started a settlement at Bluefields on the Mosquito Coast.
Puerto Rico (U.S.)
Puerto Rico had a small stream of predominantly European immigration. Puerto Ricans of Spanish, Italian and French descent comprise the majority.
According to the 2020 census, the number of people who identified as "White alone" was 536,044, with an additional 24,548 non-Hispanic Whites, for a total of 560,592, or 17.1% of the population.
Previously, in 1899, one year after the United States acquired the island, 61.8% or 589,426 people self-identified as White. One hundred years later (in 2000), the total increased to 80.5% or 3,064,862, owing to a change in race perceptions, driven mainly by efforts of Puerto Rican elites to portray Puerto Rico as the "White island of the Antilles", partly as a response to scientific racism.
Hundreds of immigrants came from Corsica, France, Italy, Portugal, Ireland, Scotland, and Germany, along with large numbers from Spain. This was the result of land granted by Spain under the Real Cedula de Gracias de 1815 (Royal Decree of Graces of 1815), which allowed European Catholics to settle on the island with a certain amount of free land.
Between 1960 and 1990, the census questionnaire in Puerto Rico did not ask about race or color. Racial categories therefore disappeared from the dominant discourse on the Puerto Rican nation. However, the 2000 census included a racial self-identification question in Puerto Rico and, for the first time since 1950, allowed respondents to choose more than one racial category to indicate mixed ancestry. (Only 4.2% chose two or more races.) With few variations, the census of Puerto Rico used the same questionnaire as in the U.S. mainland. According to census reports, most islanders responded to the new federally mandated categories on race and ethnicity by declaring themselves "White"; few declared themselves to be Black or some other race. However, it was estimated that 20% of White Puerto Ricans may have Black ancestry.
Trinidad and Tobago
United States
The cultural boundaries separating White Americans from other racial or ethnic categories are contested and always changing. Professor David R. Roediger of the University of Illinois suggests that the construction of the White race in the United States was an effort to mentally distance slave owners from slaves. By the eighteenth century, White had become well established as a racial term. Author John Tehranian has noted the changing classifications of immigrant ethnic groups in American history. At various times each of the following groups has allegedly been excluded from being considered White, despite generally having been considered legally White under the US census and US naturalization law: Germans, Greeks, White Hispanics, Arabs, Iranians, Afghans, Irish, Italians, Jews of European and Mizrahi descent, Slavs, and Spaniards. On several occasions Finns were "racially" discriminated against in their early years of immigration and considered not European but "Asian"; some believed that they were of Mongolian ancestry rather than of "native" European origin because the Finnish language belongs to the Uralic rather than the Indo-European language family.
During American history, the process of officially being defined as White by law often came about in court disputes over the pursuit of citizenship. The Immigration Act of 1790 offered naturalization only to "any alien, being a free white person". In at least 52 cases, people denied the status of White by immigration officials sued in court for status as White people. By 1923, courts had vindicated a "common-knowledge" standard, concluding that "scientific evidence" was incoherent. Legal scholar John Tehranian says that this was a "performance-based" standard, relating to religious practices, education, intermarriage, and a community's role in the United States.
In 1923, the Supreme Court decided in United States v. Bhagat Singh Thind that people of Indian descent were not White men, and thus not eligible for citizenship. While Thind was a high-caste Hindu born in the northern Punjab region and classified by certain scientific authorities as of the Aryan race, the court conceded that he was not White or Caucasian, since the word Aryan "has to do with linguistic and not at all with physical characteristics" and "the average man knows perfectly well that there are unmistakable and profound differences" between Indians and White people. In United States v. Cartozian (1925), an Armenian immigrant successfully argued (and the Supreme Court agreed) that his nationality was White in contradistinction to other people of the Near East, Kurds, Turks, and Arabs in particular, on the basis of their Christian religious traditions. In the conflicting rulings In re Hassan (1942) and Ex parte Mohriez, United States district courts found, respectively, that Arabs did not and did qualify as White under immigration law.
In the early twenty-first century, the relationship between some ethnic groups and whiteness remains complex. In particular, some Jewish and Arab individuals both self-identify and are considered as part of the White American racial category, but others with the same ancestry feel they are not White and may not always be perceived as White by American society. The United States Census Bureau proposed but withdrew plans to add a new category for Middle Eastern and North African peoples in the U.S. Census 2020. Specialists disputed whether this classification should be considered a White ethnicity or a race. According to Frank Sweet, "various sources agree that, on average, people with 12 percent or less admixture appear White to the average American and those with up to 25 percent look ambiguous (with a Mediterranean skin tone)".
The current U.S. Census definition includes as White "a person having origins in any of the original peoples of Europe, the Middle East, or North Africa." The U.S. Department of Justice's Federal Bureau of Investigation describes White people as "having origins in any of the original peoples of Europe, the Middle East, or North Africa through racial categories used in the Uniform Crime Reports Program adopted from the Statistical Policy Handbook (1978) and published by the Office of Federal Statistical Policy and Standards, U.S. Department of Commerce." The "White" category in the UCR includes non-black Hispanics.
White Americans made up nearly 90% of the population in 1950. A report from the Pew Research Center in 2008 projected that by 2050, non-Hispanic White Americans will make up 47% of the population, down from 67% in 2005. According to a study on the genetic ancestry of Americans, White Americans (termed "European Americans") are on average 98.6% European, 0.2% African and 0.2% Native American. Whites born in Southern states with higher proportions of African-American populations tend to have higher percentages of African ancestry. For instance, according to the 23andMe database, up to 13% of self-identified White American Southerners have greater than 1% African ancestry. White persons born in the Southern states with the highest African-American populations tended to have the highest percentages of hidden African ancestry. Robert P. Stuckert, member of the Department of Sociology and Anthropology at Ohio State University, has said that today the majority of the descendants of African slaves are White.
Black author Rich Benjamin, in his book, Searching for Whitopia: An Improbable Journey to the Heart of White America, reveals how racial divides and White decline, both real and perceived, shape democratic and economic urgencies in America. The book examines how White flight, and the fear of White decline, affects the country's political debates and policy-making, including housing, lifestyle, social psychology, gun control, and community. Benjamin says that such issues as fiscal policy or immigration or "Best Place to Live" lists, which might be considered race-neutral, are also defined by racial anxiety over perceived White decline.
One-drop rule
The "one-drop rule" – that a person with any amount of known black African ancestry (however small or invisible) is considered black – is a classification that was used in parts of the United States. It is a colloquial term for a set of laws passed by 18 U.S. states between 1910 and 1931. Such laws were declared unconstitutional in 1967 when the Supreme Court ruled on anti-miscegenation laws while hearing Loving v. Virginia; it also found that Virginia's Racial Integrity Act of 1924, based on enforcing the one-drop rule in classifying vital records, was unconstitutional. The one-drop rule attempted to create a binary system, classifying all persons as either Black or White regardless of a person's physical appearance. Previously persons had sometimes been classified as mulatto or mixed-race, including on censuses up to 1930. They were also recorded as Indian. Some people with a high proportion of European ancestry could pass as "White", as noted above. This binary approach contrasts with the more flexible social structures present in Latin America (derived from the Spanish colonial era system), where there were less clear-cut divisions between various ethnicities. People are often classified not only by their appearance but by their class.
As a result of centuries of having children with White people, the majority of African Americans have some European admixture, and many people long accepted as White also have some African ancestry. Among the most notable examples of the latter is President Barack Obama, who is believed to have been descended from an early African enslaved in America, recorded as "John Punch", through his mother's apparently White line.
In the twenty-first century, writer and editor Debra Dickerson renewed questions about the one-drop rule, saying that "easily one-third of black people have White DNA". She says that, in ignoring their European ancestry, African Americans are denying their full multi-racial identities. Singer Mariah Carey, who is multi-racial, was publicly described as "another White girl trying to sing black". But in an interview with Larry King, she said that, despite her physical appearance and having been raised primarily by her White mother, she did not "feel White".
Since the late twentieth century, genetic testing has provided many Americans, both those who identify as White and those who identify as black, with more nuanced and complex information about their genetic backgrounds.
Other Caribbean
South America
Argentina
Argentina, along with other areas of new settlement such as Canada, Australia, Brazil, New Zealand, the United States and Uruguay, is considered a country of immigrants in which the vast majority of the population originated from Europe. White people can be found in all areas of the country, but especially in the central-eastern region (Pampas), the central-western region (Cuyo), the southern region (Patagonia) and the north-eastern region (Litoral).
White Argentines are mainly descendants of immigrants who came from Europe and the Middle East in the late nineteenth and early twentieth centuries. After the original Spanish colonists, waves of European settlers came to Argentina from the late nineteenth to the mid-twentieth centuries. Major contributors included Italy (initially from Piedmont, Veneto and Lombardy, later from Campania, Calabria, and Sicily) and Spain (mostly Galicians and Basques, but also Asturians, Cantabrians, Catalans, and Andalusians). Smaller but significant numbers of immigrants included Germans, primarily Volga Germans from Russia but also Germans from Germany, Switzerland, and Austria; French, who came mainly from the Occitania region of France; Portuguese, who had already formed an important community since colonial times; Slavic groups, most of them Croats, Bosniaks and Poles, but also Ukrainians, Belarusians, Russians, Bulgarians, Serbs and Montenegrins; Britons, mainly from England and Wales; Irish, who migrated due to the Great Irish Famine or earlier famines; and Scandinavians from Sweden, Denmark, Finland, and Norway. Smaller waves of settlers from Australia, South Africa and the United States can be traced in Argentine immigration records.
By the 1910s, after immigration rates peaked, over 30 percent of the country's population was from outside Argentina, and over half of Buenos Aires' population was foreign-born.
However, the 1914 National Census revealed that around 80% of the national population were either European immigrants, their children or grandchildren. Among the remaining 20 percent (those descended from the population residing locally before this immigrant wave took shape in the 1870s), around a third were White. European immigration continued to account for over half the nation's population growth during the 1920s and was again significant (albeit in a smaller wave) following World War II. It is estimated that Argentina received over 6 million European immigrants during the period 1857–1940.
Since the 1960s, increasing immigration from bordering countries to the north (especially from Bolivia and Paraguay, which have Amerindian and Mestizo majorities) has lessened that majority somewhat.
Critics of the national census state that data has historically been collected using the category of national origin rather than race in Argentina, leading to an undercounting of Afro-Argentines and Mestizos. África Vive (Living Africa), a black rights group in Buenos Aires, worked with the support of the Organization of American States, financial aid from the World Bank and Argentina's census bureau to add an "Afro-descendants" category to the 2010 census. The 1887 national census was the last in which blacks were counted as a separate category before it was eliminated by the government.
Bolivia
There are no present-day data, as the Bolivian census does not record racial identity for white people. However, past census data showed that in 1900 people who self-identified as "blanco" (white) composed 12.7%, or 231,088, of the total population; this was the last time data on race was collected. At that time there were 529 Italians, 420 Spaniards, 295 Germans, 279 French, 177 Austrians, 141 English and 23 Belgians living in Bolivia.
Brazil
Recent censuses in Brazil are conducted on the basis of self-identification. According to the 2022 census, White Brazilians totaled 88,252,121 people and made up 43.5% of the Brazilian population.
As a term, "White" in Brazil is generally applied to people of European descent. The term may also encompass other people, such as Brazilians of West Asian descent, and in some contexts, East Asians. Though Brazilians of East Asian descent are, in other contexts, classified as "Yellow" (amarela). The census shows a trend of fewer Brazilians of a different descent (most likely mixed) identifying as White people as their social status increases. Nevertheless, light-skinned Mulattoes and Mestizos with European features were also historically deemed as more closely related to "whiteness" then unmixed Blacks.
Chile
Scholarly estimates of the White population in Chile vary dramatically, ranging from 20% to 52%. According to a study by the University of Chile about 30% of the Chilean population is Caucasian, while the 2011 Latinobarómetro survey shows that some 60% of Chileans consider themselves White.
During colonial times in the eighteenth century, an important flux of emigrants from Spain populated Chile, mostly Basques, who vitalized the Chilean economy and rose rapidly in the social hierarchy and became the political elite that still dominates the country. An estimated 1.6 million (10%) to 3.2 million (20%) Chileans have a surname (one or both) of Basque origin. The Basques liked Chile because of its great similarity to their native land: similar geography, cool climate, and the presence of fruits, seafood, and wine.
Chile was not an attractive place for European migrants in the nineteenth and twentieth centuries simply because it was far from Europe and difficult to reach. Chile experienced a tiny but steady arrival of Spanish, Italians, Irish, French, Greeks, Germans, English, Scots, Croats and Ashkenazi Jews, in addition to immigration from other Latin American countries.
The original arrival of the Spanish was the most radical demographic change brought by Europeans to Chile, since there was never a period of mass immigration, in contrast to neighboring nations such as Argentina and Uruguay. The facts about the amount of immigration do not coincide with a certain nationally chauvinistic discourse, which claims that Chile, like Argentina or Uruguay, should be considered one of the "White" Latin American countries, in contrast to the racial mixture that prevails in the rest of the continent. However, it is undeniable that immigrants have played a major role in Chilean society. Between 1851 and 1924 Chile received only 0.5% of the European immigration flow to Latin America, compared with 46% for Argentina, 33% for Brazil, 14% for Cuba, and 4% for Uruguay. This was because most of the migration occurred across the Atlantic before the construction of the Panama Canal, and Europeans preferred to stay in countries closer to their homelands instead of taking the long trip through the Straits of Magellan or across the Andes. In 1907, European-born immigrants composed 2.4% of the Chilean population, falling to 1.8% in 1920 and 1.5% in 1930.
After the failed liberal revolution of 1848 in the German states, a significant German immigration took place, laying the foundation for the German-Chilean community. Sponsored by the Chilean government to "civilize" and colonize the southern region, these Germans (including German-speaking Swiss, Silesians, Alsatians and Austrians) settled mainly in Valdivia, Llanquihue and Los Ángeles. The Chilean Embassy in Germany estimated 150,000 to 200,000 Chileans are of German origin.
Another historically significant immigrant group were the Croats. Their descendants, the Croatian Chileans, today number an estimated 380,000 persons, the equivalent of 2.4% of the population; other authors claim, on the other hand, that close to 4.6% of the Chilean population has some Croatian ancestry. Over 700,000 Chileans may have British (English, Scottish or Welsh) origin, 4.5% of Chile's population. Chileans of Greek descent are estimated at 90,000 to 120,000; most of them live either in the Santiago area or in the Antofagasta area, and Chile is one of the five countries with the most descendants of Greeks in the world. The descendants of the Swiss number about 90,000, and it is estimated that about 5% of the Chilean population has some French ancestry. Estimates of the number of descendants of Italians range from 184,000 to 800,000. Other groups of European descendants are found in smaller numbers.
Colombia
The Colombian government does not carry out official racial censuses, nor does it carry out racial self-identification censuses as is done in Argentina, so the figures shown are usually based on data for the population considered "non-ethnic", that is, Whites and Mestizos. According to the 2018 census, approximately 87.6% of the Colombian population is White or Mestizo.
Many Spaniards began their explorations searching for gold, while others established themselves as leaders of the native social organizations, teaching natives the Christian faith and the ways of their civilization. Catholic priests provided Native Americans with education that was otherwise unavailable. Within 100 years of the first Spanish settlement, 90 percent of all Native Americans in Colombia had died. The majority of these deaths were caused by diseases such as measles and smallpox, which were spread by European settlers; many Native Americans were also killed in armed conflicts with European settlers.
Between 1540 and 1559, 8.9 percent of the residents of Colombia were of Basque origin. It has been suggested that the present-day incidence of business entrepreneurship in the region of Antioquia is attributable to the Basque immigration and Basque character traits. Few Colombians of distant Basque descent are aware of their Basque ethnic heritage. In Bogota, there is a small colony of thirty to forty families who emigrated as a consequence of the Spanish Civil War or because of different opportunities. Basque priests were the ones who introduced handball into Colombia. Basque immigrants in Colombia were devoted to teaching and public administration. In the first years of the Andean multinational company, Basque sailors navigated as captains and pilots on the majority of the ships until the country was able to train its own crews.
In December 1941 the United States government estimated that there were 4,000 Germans living in Colombia. There were some Nazi agitators in Colombia, such as Barranquilla businessman Emil Prufurt. Colombia invited Germans who were on the U.S. blacklist to leave. SCADTA, a Colombian-German air transport corporation that was established by German expatriates in 1919, was the first commercial airline in the Western Hemisphere.
The Italians arrived on the Colombian coast and quickly moved towards the expanding agricultural areas, where some of them achieved success in the commercialization of livestock, agricultural products, and imported goods, which later led to the transfer of their lucrative activities to Barranquilla. Some important buildings were created by Italians in the nineteenth century, such as the famous Colón Theater in the capital, one of the most representative theatres of Colombia. Built in a neoclassical style by the Italian architect Pietro Cantini and opened in 1892, it has more than 2,400 square metres (26,000 sq ft) of space for 900 people. Cantini also contributed to the construction of the Capitolio Nacional in the capital. Oreste Sindici was an Italian-born Colombian musician and composer who wrote the music for the Colombian national anthem in 1887; he died in Bogotá on 12 January 1904 of severe arteriosclerosis, and in 1937 the Colombian government honored his memory. After the Second World War, Italian emigration to Colombia was directed primarily toward Bogotá, Cali and Medellín. There are Italian schools in Bogotá (the "Leonardo da Vinci" and "Alessandro Volta" institutes), Medellín ("Leonardo da Vinci") and Barranquilla ("Galileo Galilei"). The Italian government estimates that there are at least 2 million Colombians of Italian descent, making them the second largest European-descended group in the country after the Spanish.
The first and largest wave of immigration from the Middle East began around 1880 and continued through the first two decades of the twentieth century. The immigrants were mainly Maronite Christians from Greater Syria (Syria and Lebanon) and Palestine, fleeing territories then under Ottoman rule; Syrians, Palestinians, and Lebanese have continued to settle in Colombia since then. Because existing records are poor, it is impossible to know the exact number of Lebanese and Syrians who immigrated to Colombia; a figure of 5,000–10,000 from 1880 to 1930 may be reliable. Whatever the figure, Syrians and Lebanese are perhaps the biggest immigrant group after the Spanish since independence. Those who left their homelands in the Middle East did so for religious, economic, and political reasons, and some left simply to experience the adventure of migration. After Barranquilla and Cartagena, Bogotá and Cali had the largest Arabic-speaking communities in Colombia in 1945. The Arabs who went to Maicao were mostly Sunni Muslim, with some Druze and Shia, as well as Orthodox and Maronite Christians. The mosque of Maicao is the second largest mosque in Latin America. Middle Easterners in Colombia are commonly referred to as "Turks".
Ecuador
According to the most-recent 2022 national census, 2.2% of Ecuadorians self-identified as European Ecuadorian, a decrease from 6.1% in 2010.
Guyana
In 2016, 0.3% of Guyana's population was of European descent, predominantly Portuguese Guyanese.
Paraguay
Peru
According to the 2017 census 5.9% or 1.3 million (1,336,931) people 12 years of age and above self-identified as White. There were 619,402 (5.5%) males and 747,528 (6.3%) females. This was the first time a question for ethnic origins had been asked. The regions with the highest proportion of self-identified Whites were in La Libertad (10.5%), Tumbes and Lambayeque (9.0% each), Piura (8.1%), Callao (7.7%), Cajamarca (7.5%), Lima Province (7.2%) and Lima Region (6.0%).
Suriname
In 2012, 1,667 people, or 0.3% of the population, identified as White.
Many Dutch settlers left Suriname after independence in 1975 and this diminished Suriname's Dutch population. Currently there are around 1,000 boeroes left in Suriname, and 3,000 outside Suriname.
Uruguay
Different estimates state that Uruguay's population of 3.4 million is composed of 88% to 93% White Uruguayans. Though Uruguay has welcomed immigrants from around the world, its population largely consists of people of European origin, mainly Spaniards and Italians. Other European immigrants include Jews from Eastern and Central Europe.
According to the 2006 National Survey of Homes by the Uruguayan National Institute of Statistics: 94.6% self-identified as having a White background, 9.1% chose black ancestry, and 4.5% chose an Amerindian ancestry (people surveyed were allowed to choose more than one option).
Venezuela
According to the official Venezuelan census, the term "White" covers external traits such as light skin and the shape and color of hair and eyes, among other factors; its meaning and usage have varied with time period and region, leaving its precise definition somewhat imprecise. The 2011 Venezuelan Census states that "White" in Venezuela is used to describe Venezuelans of European origin. The 2011 National Population and Housing Census states that 43.6% of the Venezuelan population (approximately 13.1 million people) identify as White. Genetic research by the University of Brasília shows an average admixture of 60.6% European, 23.0% Amerindian and 16.3% African ancestry in Venezuelan populations. The majority of White Venezuelans are of Spanish, Italian, Portuguese and German descent. Nearly half a million European immigrants, mostly from Spain (as a consequence of the Spanish Civil War), Italy, and Portugal, entered the country during and after World War II, attracted by a prosperous, rapidly developing country where educated and skilled immigrants were welcomed.
Spaniards first arrived in Venezuela during the colonial period. Most of them were from Andalusia, Galicia, the Basque Country and the Canary Islands. Until the last years of World War II, a large part of the European immigrants to Venezuela came from the Canary Islands, and their cultural impact was significant, influencing the development of Castilian Spanish in the country, its gastronomy, and its customs. With the beginning of oil operations during the first decades of the twentieth century, citizens and companies from the United States, United Kingdom, and Netherlands established themselves in Venezuela. Later, in the middle of the century, there was a new wave of immigrants from Spain (mainly Galicia, Andalusia and the Basque Country), Italy (mainly southern Italy and Venice) and Portugal (Madeira), as well as new immigrants from Germany, France, England, Croatia, the Netherlands, and other European countries, encouraged in part by the program of immigration and colonization implemented by the government.
See also
Caucasoid
Criollo people
Demographics of Europe
Ethnic groups in Europe
Ethnic groups in West Asia
European diaspora
Westerners
White demographic decline
White flight
White identity
References
Further reading
External links
Race (human categorization)
White | White people | [
"Biology"
] | 14,087 | [
"Human skin color",
"Pigmentation"
] |
166,206 | https://en.wikipedia.org/wiki/Indirection | In computer programming, an indirection (also called a reference) is a way of referring to something using a name, reference, or container instead of the value itself. The most common form of indirection is manipulating a value through its memory address, for example by accessing a variable through a pointer. A stored pointer that exists to provide a reference to an object by double indirection is called an indirection node. In some older computer architectures, indirect words supported a variety of more-or-less complicated addressing modes.
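To make the pointer form of indirection concrete, here is a minimal C sketch (the variable names are illustrative, not taken from any particular system) showing direct access, access through a pointer, and access through a pointer to a pointer, i.e. an indirection node:

```c
#include <stdio.h>

int main(void) {
    int value = 42;          /* the value itself */
    int *ref = &value;       /* one level of indirection: a pointer to the value */
    int **node = &ref;       /* double indirection: a pointer to the pointer */

    printf("direct access:        %d\n", value);
    printf("via pointer:          %d\n", *ref);    /* dereference once */
    printf("via indirection node: %d\n", **node);  /* dereference twice */

    *ref = 7;                /* writing through the reference changes the value */
    printf("after *ref = 7:       %d\n", value);
    return 0;
}
```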
Another important example is the domain name system, which enables names such as en.wikipedia.org to be used in place of network addresses such as 208.80.154.224. The indirection from human-readable names to network addresses means that references to a web page are more memorable, and links do not need to change when a web site is relocated to a different server.
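As a rough illustration of this name-to-address indirection, the following POSIX C sketch resolves the hostname mentioned above with getaddrinfo and prints whatever addresses the name currently maps to; the exact addresses returned will vary over time, which is precisely the point of the indirection:

```c
#include <stdio.h>
#include <string.h>
#include <netdb.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void) {
    struct addrinfo hints, *res, *p;
    memset(&hints, 0, sizeof hints);
    hints.ai_family = AF_INET;        /* restrict to IPv4 for a readable result */
    hints.ai_socktype = SOCK_STREAM;

    /* The name is the stable reference; the addresses it maps to may change. */
    int status = getaddrinfo("en.wikipedia.org", NULL, &hints, &res);
    if (status != 0) {
        fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(status));
        return 1;
    }
    for (p = res; p != NULL; p = p->ai_next) {
        char ip[INET_ADDRSTRLEN];
        struct sockaddr_in *addr = (struct sockaddr_in *)p->ai_addr;
        inet_ntop(AF_INET, &addr->sin_addr, ip, sizeof ip);
        printf("en.wikipedia.org -> %s\n", ip);
    }
    freeaddrinfo(res);
    return 0;
}
```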
Overview
A famous aphorism of Butler Lampson that is attributed to David Wheeler goes: "All problems in computer science can be solved by another level of indirection" (the "fundamental theorem of software engineering").
This is often deliberately mis-quoted with "abstraction layer" substituted for "level of indirection". A corollary to this aphorism, and the original conclusion from Wheeler, is "...except for the problem of too many layers of indirection."
A humorous Internet memorandum makes a similar observation.
Object-oriented programming makes use of indirection extensively, a simple example being dynamic dispatch. Higher-level examples of indirection are the design patterns of the proxy and the proxy server. Delegation is another classic example of an indirection pattern. In strongly typed interpreted languages with dynamic data types, most variable references require a level of indirection: first the type of the variable is checked for safety, and then the pointer to the actual value is dereferenced and acted on.
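Dynamic dispatch is itself a layer of indirection: the call site goes through a function pointer that is bound at run time. A minimal hand-rolled C sketch of the idea (not how any particular compiler actually lays out its dispatch tables) might look like this:

```c
#include <stdio.h>

/* An "object" carries a pointer to the function that implements its behaviour. */
typedef struct Shape {
    double (*area)(const struct Shape *self);  /* indirection: resolved at run time */
    double width, height;
} Shape;

static double rect_area(const Shape *s)     { return s->width * s->height; }
static double triangle_area(const Shape *s) { return 0.5 * s->width * s->height; }

static void report(const Shape *s) {
    /* The caller never names the concrete function; it calls through the pointer. */
    printf("area = %.2f\n", s->area(s));
}

int main(void) {
    Shape r = { rect_area, 3.0, 4.0 };
    Shape t = { triangle_area, 3.0, 4.0 };
    report(&r);  /* prints area = 12.00 */
    report(&t);  /* prints area = 6.00  */
    return 0;
}
```

Object-oriented languages automate exactly this bookkeeping, and proxies and delegation add further pointer hops in the same spirit.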
Recursive data types are usually implemented using indirection: if a value of a data type could contain the entirety of another value of the same data type by value, there would be no bound on the size such a value might need.
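For instance, a singly linked list in C is only definable because each node stores a pointer to the next node rather than the next node itself; a small sketch:

```c
#include <stdio.h>
#include <stdlib.h>

/* The struct cannot contain another Node by value (its size would be unbounded),
   but it can contain a pointer to one: indirection makes the recursion finite. */
typedef struct Node {
    int value;
    struct Node *next;   /* fixed-size reference to an arbitrarily long tail */
} Node;

static Node *cons(int value, Node *tail) {
    Node *n = malloc(sizeof *n);
    n->value = value;
    n->next = tail;
    return n;
}

int main(void) {
    Node *list = cons(1, cons(2, cons(3, NULL)));
    for (Node *p = list; p != NULL; p = p->next)
        printf("%d ", p->value);        /* prints: 1 2 3 */
    printf("\n");

    while (list != NULL) {              /* free the nodes we allocated */
        Node *next = list->next;
        free(list);
        list = next;
    }
    return 0;
}
```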
When doing symbolic programming from a formal mathematical specification, the use of indirection can be quite helpful. To start with a simple example, the variables x, y and z in an equation can refer to any number: one could imagine objects for various numbers, with x, y and z pointing to the specific numbers used in a particular problem. This simple example has its limitation, as there are infinitely many real numbers, whereas in other parts of symbolic programming there are only finitely many symbols. To move to a more significant example, in logic the formula α can refer to any formula, so it could be β, γ, δ, ... or η→π, ς ∨ σ, ... When set-builder notation is employed, the statement Δ={α} means the set of all formulae, so although the reference is to α there are two levels of indirection here: the first to the set of all α, and the second to a specific formula for each occurrence of α in the set Δ.
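A hedged C sketch of this idea follows; the Formula type and its helper constructors are purely illustrative, not drawn from any existing library. A metavariable such as α is represented as a pointer that can be bound to any formula, and a collection of formulas adds a second level of indirection on top of that:

```c
#include <stdio.h>
#include <stdlib.h>

/* A formula is either an atomic proposition or a binary connective whose
   subformulas are reached through pointers (indirection). */
typedef enum { ATOM, IMPLIES, OR } Kind;

typedef struct Formula {
    Kind kind;
    const char *name;             /* used when kind == ATOM */
    struct Formula *left, *right; /* used for connectives */
} Formula;

static Formula *atom(const char *name) {
    Formula *f = malloc(sizeof *f);
    f->kind = ATOM; f->name = name; f->left = f->right = NULL;
    return f;
}

static Formula *binary(Kind k, Formula *l, Formula *r) {
    Formula *f = malloc(sizeof *f);
    f->kind = k; f->name = NULL; f->left = l; f->right = r;
    return f;
}

static void print_formula(const Formula *f) {
    if (f->kind == ATOM) { printf("%s", f->name); return; }
    printf("(");
    print_formula(f->left);
    printf(f->kind == IMPLIES ? " -> " : " v ");
    print_formula(f->right);
    printf(")");
}

int main(void) {
    /* The metavariable alpha is just a pointer: it can refer to any formula. */
    Formula *alpha = binary(IMPLIES, atom("eta"), atom("pi"));

    /* A "set" of formulas adds a second level of indirection: each slot points
       to a formula, which in turn points to its parts. */
    Formula *delta[] = { alpha, binary(OR, atom("sigma"), atom("tau")) };
    for (size_t i = 0; i < sizeof delta / sizeof delta[0]; i++) {
        print_formula(delta[i]);
        printf("\n");
    }
    return 0;   /* allocations are not freed: kept minimal for the sketch */
}
```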
See also
Handle
Delegation pattern
Pointer
Reference
Dereference operator
Law of Demeter
References
Data types
Programming constructs
Computing terminology
Unary operations | Indirection | [
"Mathematics",
"Technology"
] | 682 | [
"Functions and mappings",
"Computing terminology",
"Unary operations",
"Mathematical objects",
"Mathematical relations"
] |
166,283 | https://en.wikipedia.org/wiki/Dawn%20chorus%20%28birds%29 | The dawn chorus is the outbreak of birdsong at the start of a new day. In temperate countries this is most noticeable in spring when the birds are either defending a breeding territory, trying to attract a mate or calling in the flock. In a given location it is common for different species to do their dawn singing at different times.
In some territories where bird life is extensive and birds are vocal, the sound of a dawn chorus may make it difficult for humans to sleep in the early morning.
Timing
In a 2007 study of the Ecuadorian forest, it was determined that birds perching higher in the trees and birds with larger eyes tend to sing first. This may be due to differences in the amount of light perceived by the birds.
Moller used a play-back technique to investigate the effects of singing by the black wheatear (Oenanthe leucura) on the behaviour of both conspecifics and heterospecifics. It was found that singing increased in both groups in response to the wheatear. Moller suggested the dawn (and dusk) chorus of bird song may be augmented by social facilitation due to the singing of conspecifics as well as heterospecifics.
International Dawn Chorus Day
An annual International Dawn Chorus Day is held on the first Sunday in May when the public are encouraged to rise early to listen to bird song at organised events. The first ever was held at Moseley Bog in Birmingham, England, in 1987, organized by the Urban Wildlife Trust (now The Wildlife Trust for Birmingham and the Black Country).
New Zealand
Early explorers and European settlers noted that the New Zealand forest had a loud dawn chorus. The dawn chorus is no longer as loud as it once was, owing to extensive loss of forests, the introduction of bird predators and competing species such as wasps. The bellbird and the tūī are two of the birds that would have formed part of the dawn chorus since they have a vocal and melodious call.
United Kingdom
In the UK the dawn chorus may begin as early as 3am in early summer.
United States
The dawn chorus may also be heard in the United States.
See also
Bird song
Dawn chorus (electromagnetic)
Natural sounds
References
External links
International Dawn Chorus Day
The language of birds: The dawn chorus
Bird sounds
Ethology | Dawn chorus (birds) | [
"Biology"
] | 471 | [
"Behavioural sciences",
"Ethology",
"Behavior"
] |
166,346 | https://en.wikipedia.org/wiki/OPEC | The Organization of the Petroleum Exporting Countries (OPEC) is a cartel enabling the co-operation of leading oil-producing and oil-dependent countries in order to collectively influence the global oil market and maximize profit. It was founded on 14 September 1960 in Baghdad by its first five members: Iran, Iraq, Kuwait, Saudi Arabia, and Venezuela. The organization, which currently comprises 12 member countries, accounted for 38 percent of global oil production, according to a 2022 report. Additionally, it is estimated that 79.5 percent of the world's proven oil reserves are located within OPEC nations, with the Middle East alone accounting for 67.2 percent of OPEC's total reserves.
In a series of steps in the 1960s and 1970s, OPEC restructured the global system of oil production in favor of oil-producing states and away from an oligopoly of dominant Anglo-American oil firms (the "Seven Sisters"). In the 1970s, restrictions in oil production led to a dramatic rise in oil prices with long-lasting and far-reaching consequences for the global economy. Since the 1980s, OPEC has had a limited impact on world oil-supply and oil-price stability, as there is frequent cheating by members on their commitments to one another, and as member commitments reflect what they would do even in the absence of OPEC.
The formation of OPEC marked a turning point toward national sovereignty over natural resources, and OPEC decisions have come to play a prominent role in the global oil market and in international relations. Economists have characterized OPEC as a textbook example of a cartel (a group whose members cooperate to reduce market competition), but one whose consultations may be protected by the doctrine of state immunity under international law.
The current members are Algeria, Equatorial Guinea, Gabon, Iran, Iraq, Kuwait, Libya, Nigeria, the Republic of the Congo, Saudi Arabia, the United Arab Emirates and Venezuela, while Angola, Ecuador, Indonesia, and Qatar are former OPEC members. A larger group called OPEC+, consisting of OPEC members plus other oil-producing countries, formed in late 2016 to exert more control over the global crude-oil market. Canada, Egypt, Norway, and Oman are observer states.
Organization and structure
In a series of steps in the 1960s and 1970s, OPEC restructured the global system of oil production in favor of oil-producing states and away from an oligopoly of dominant Anglo-American oil firms (the Seven Sisters). Coordination among oil-producing states within OPEC made it easier for them to nationalize oil production and structure oil prices in their favor without incurring punishment by Western governments and firms. Prior to the creation of OPEC, individual oil-producing states were punished for taking steps to alter the governing arrangements of oil production within their borders. States were coerced militarily (e.g. in 1953 the US and UK sponsored a coup against Mohammad Mosaddegh after he nationalized Iran's oil production) or economically (e.g. the Seven Sisters slowed down oil production in one non-compliant state and ramped up oil production elsewhere) when they acted contrary to the interests of the Seven Sisters and their governments.
The organisational logic that underpins OPEC is that it is in the collective interest of its members to limit the world oil supply in order to reap higher prices. However, the main problem within OPEC is that it is individually rational for members to cheat on commitments and produce as much oil as possible.
Political scientist Jeff Colgan has argued that OPEC has since the 1980s largely failed to achieve its goals (limits on world oil supply, stabilized prices, and raising of long-term average revenues). He finds that members cheated on 96% of their commitments over the period 1982–2009. To the extent that member states do comply with their commitments, it is because the commitments reflect what they would do even if OPEC did not exist. One large reason for the frequent cheating is that OPEC does not punish members for non-compliance with commitments.
In June 2020, all countries participating in the OPEC+ framework collectively agreed to the introduction of a Compensation Mechanism aimed at ensuring full conformity with and adherence to the agreed-upon oil production cuts. This initiative aligns with one of OPEC's stated objectives: to maintain a stable oil market, which, notably, has been relatively more stable than other energy commodities.
Leadership and decision-making
The OPEC Conference is the supreme authority of the organisation, and consists of delegations normally headed by the oil ministers of member countries. The chief executive of the organisation is the OPEC secretary general. The conference ordinarily meets at the Vienna headquarters, at least twice a year and in additional extraordinary sessions when necessary. It generally operates on the principles of unanimity and "one member, one vote", with each country paying an equal membership fee into the annual budget. However, since Saudi Arabia is by far the largest and most-profitable oil exporter in the world, with enough capacity to function as the traditional swing producer to balance the global market, it serves as "OPEC's de facto leader".
International cartel
At various times, OPEC members have displayed apparent anti-competitive cartel behavior through the organisation's agreements about oil production and price levels. Economists often cite OPEC as a textbook example of a cartel that cooperates to reduce market competition, consistent with the definition given in the OECD's Glossary of Industrial Organisation Economics and Competition Law.
While OPEC is at times cited as a textbook example of a cartel, various authoritative and academic sources, such as the US Energy Information Administration's glossary and the Oxford Dictionary of Energy Science (2017), offer broader characterizations of the organization's role.
OPEC members strongly prefer to describe their organisation as a modest force for market stabilisation, rather than a powerful anti-competitive cartel. In its defense, the organisation was founded as a counterweight against the previous "Seven Sisters" cartel of multinational oil companies, and non-OPEC energy suppliers have maintained enough market share for a substantial degree of worldwide competition. Moreover, because of an economic "prisoner's dilemma" that encourages each member nation individually to discount its price and exceed its production quota, widespread cheating within OPEC often erodes its ability to influence global oil prices through collective action. Political scientist Jeff Colgan has challenged that OPEC is a cartel, pointing to endemic cheating in the organization: "A cartel needs to set tough goals and meet them; OPEC sets easy goals and fails to meet even those."
OPEC has not been involved in any disputes related to the competition rules of the World Trade Organization, even though the objectives, actions, and principles of the two organisations diverge considerably. A key US District Court decision held that OPEC consultations are protected as "governmental" acts of state by the Foreign Sovereign Immunities Act, and are therefore beyond the legal reach of US competition law governing "commercial" acts. Despite popular sentiment against OPEC, legislative proposals to limit the organisation's sovereign immunity, such as the NOPEC Act, have so far been unsuccessful.
Conflicts
OPEC often has difficulty agreeing on policy decisions because its member countries differ widely in their oil export capacities, production costs, reserves, geological features, population, economic development, budgetary situations, and political circumstances. Indeed, over the course of market cycles, oil reserves can themselves become a source of serious conflict, instability and imbalances, in what economists call the "natural resource curse". A further complication is that religion-linked conflicts in the Middle East are recurring features of the geopolitical landscape for this oil-rich region. Internationally important conflicts in OPEC's history have included the Six-Day War (1967), Yom Kippur War (1973), a hostage siege directed by Palestinian militants (1975), the Iranian Revolution (1979), Iran–Iraq War (1980–1988), Iraqi occupation of Kuwait (1990–1991), September 11 attacks (2001), American occupation of Iraq (2003–2011), Conflict in the Niger Delta (2004–present), Arab Spring (2010–2012), Libyan Crisis (2011–present), and international Embargo against Iran (2012–2016). Although events such as these can temporarily disrupt oil supplies and elevate prices, the frequent disputes and instabilities tend to limit OPEC's long-term cohesion and effectiveness.
History and impact
Post-WWII situation
In 1949, Venezuela initiated the move towards the establishment of what would become OPEC, by inviting Iran, Iraq, Kuwait and Saudi Arabia to exchange views and explore avenues for more regular and closer communication among petroleum-exporting nations as the world recovered from World War II. At the time, some of the world's largest oil fields were just entering production in the Middle East. The United States had established the Interstate Oil Compact Commission to join the Texas Railroad Commission in limiting overproduction. The US was simultaneously the world's largest producer and consumer of oil; the world market was dominated by a group of multinational companies known as the "Seven Sisters", five of which were headquartered in the US following the breakup of John D. Rockefeller's original Standard Oil monopoly. Oil-exporting countries were eventually motivated to form OPEC as a counterweight to this concentration of political and economic power.
1959–1960: Anger from exporting countries
In February 1959, as new supplies were becoming available, the multinational oil companies (MOCs) unilaterally reduced their posted prices for Venezuelan and Middle Eastern crude oil by 10 percent. Weeks later, the Arab League's first Arab Petroleum Congress convened in Cairo, Egypt, where the influential journalist Wanda Jablonski introduced Saudi Arabia's Abdullah Tariki to Venezuela's observer Juan Pablo Pérez Alfonzo, representing the two then-largest oil-producing nations outside the United States and the Soviet Union. Both oil ministers were angered by the price cuts, and the two led their fellow delegates to establish the Maadi Pact or Gentlemen's Agreement, calling for an "Oil Consultation Commission" of exporting countries, to which MOCs should present price-change plans. Jablonski reported a marked hostility toward the West and a growing outcry against "absentee landlordism" of the MOCs, which at the time controlled all oil operations within the exporting countries and wielded enormous political influence. In August 1960, ignoring the warnings, and with the US favoring Canadian and Mexican oil for strategic reasons, the MOCs again unilaterally announced significant cuts in their posted prices for Middle Eastern crude oil.
1960–1975: Founding and expansion
The following month, during 10–14 September 1960, the Baghdad Conference was held at the initiative of Tariki, Pérez Alfonzo, and Iraqi prime minister Abd al-Karim Qasim, whose country had skipped the 1959 congress. Government representatives from Iran, Iraq, Kuwait, Saudi Arabia and Venezuela met in Baghdad to discuss ways to increase the price of crude oil produced by their countries, and ways to respond to unilateral actions by the MOCs. Despite strong US opposition: "Together with Arab and non-Arab producers, Saudi Arabia formed the Organization of Petroleum Export Countries (OPEC) to secure the best price available from the major oil corporations." The Middle Eastern members originally called for OPEC headquarters to be in Baghdad or Beirut, but Venezuela argued for a neutral location, and so the organization chose Geneva, Switzerland. On 1 September 1965, OPEC moved to Vienna, Austria, after Switzerland declined to extend diplomatic privileges. At the time, Switzerland was attempting to reduce its foreign population, and OPEC was the first intergovernmental body to leave the country because of restrictions on foreigners. Austria was keen to attract international organizations and offered OPEC attractive terms.
During the early years of OPEC, the oil-producing countries had a 50/50 profit agreement with the oil companies. OPEC bargained with the dominant oil companies (the Seven Sisters), but OPEC faced coordination problems among its members. If one OPEC member demanded too much from the oil companies, then the oil companies could slow down production in that country and ramp up production elsewhere. The 50/50 agreements remained in place until 1970, when Libya negotiated a 58/42 agreement with the oil company Occidental, prompting other OPEC members to request better terms from the oil companies. In 1971, an accord called the Tripoli Agreement was signed between major oil companies and members of OPEC doing business in the Mediterranean Sea region; signed on 2 April 1971, it raised oil prices and increased producing countries' profit shares.
During 1961–1975, the five founding nations were joined by Qatar (1961), Indonesia (1962–2008, rejoined 2014–2016), Libya (1962), United Arab Emirates (originally just the Emirate of Abu Dhabi, 1967), Algeria (1969), Nigeria (1971), Ecuador (1973–1992, 2007–2020), and Gabon (1975–1994, rejoined 2016). By the early 1970s, OPEC's membership accounted for more than half of worldwide oil production. Indicating that OPEC is not averse to further expansion, Mohammed Barkindo, OPEC's acting secretary general in 2006, urged his African neighbors Angola and Sudan to join, and Angola did in 2007, followed by Equatorial Guinea in 2017. Since the 1980s, representatives from Canada, Egypt, Mexico, Norway, Oman, Russia, and other oil-exporting nations have attended many OPEC meetings as observers, as an informal mechanism for coordinating policies.
1973–1974: Oil embargo
The oil market was tight in the early 1970s, which reduced the risks for OPEC members in nationalising their oil production. One of the major fears for OPEC members was that nationalisation would cause a steep decline in the price of oil. This prompted a wave of nationalisations in countries such as Libya, Algeria, Iraq, Nigeria, Saudi Arabia and Venezuela. With greater control over oil production decisions and amid high oil prices, OPEC members unilaterally raised oil prices in 1973, prompting the 1973 oil crisis.
In October 1973, the Organisation of Arab Petroleum Exporting Countries (OAPEC, consisting of the Arab majority of OPEC plus Egypt and Syria) declared significant production cuts and an oil embargo against the United States and other industrialized nations that supported Israel in the Yom Kippur War. A previous embargo attempt was largely ineffective in response to the Six-Day War in 1967. However, in 1973, the result was a sharp rise in oil prices and OPEC revenues, from US$3/bbl to US$12/bbl, and an emergency period of energy rationing, intensified by panic reactions, a declining trend in US oil production, currency devaluations, and a lengthy UK coal-miners dispute. For a time, the UK imposed an emergency three-day workweek. Seven European nations banned non-essential Sunday driving. US gas stations limited the amount of petrol that could be dispensed, closed on Sundays, and restricted the days when petrol could be purchased, based on number plate numbers. Even after the embargo ended in March 1974, following intense diplomatic activity, prices continued to rise. The world experienced a global economic recession, with unemployment and inflation surging simultaneously, steep declines in stock and bond prices, major shifts in trade balances and petrodollar flows, and a dramatic end to the post-WWII economic boom.
The 1973–1974 oil embargo had lasting effects on the United States and other industrialized nations, which established the International Energy Agency in response, as well as national emergency stockpiles designed to withstand months of future supply disruptions. Oil conservation efforts included lower speed limits on highways, smaller and more energy-efficient cars and appliances, year-round daylight saving time, reduced usage of heating and air-conditioning, better building insulation, increased support of mass transit, and greater emphasis on coal, natural gas, ethanol, nuclear and other alternative energy sources. These long-term efforts became effective enough that US oil consumption rose only 11 percent during 1980–2014, while real GDP rose 150 percent. But in the 1970s, OPEC nations demonstrated convincingly that their oil could be used as both a political and economic weapon against other nations, at least in the short term.
The embargo also meant that a section of the Non-Aligned Movement saw power as a source of hope for their developing countries. The Algerian president Houari Boumédiène expressed this hope in a speech at the UN's sixth Special Session in April 1974.
1975–1980: Special Fund, now the OPEC Fund for International Development
OPEC's international aid activities date from well before the 1973–1974 oil price surge. For example, the Kuwait Fund for Arab Economic Development has operated since 1961.
In the years after 1973, as an example of so-called "checkbook diplomacy", certain Arab nations have been among the world's largest providers of foreign aid, and OPEC added to its goals the selling of oil for the socio-economic growth of poorer nations. The OPEC Special Fund was conceived in Algiers, Algeria, in March 1975, and was formally established the following January. "A Solemn Declaration 'reaffirmed the natural solidarity which unites OPEC countries with other developing countries in their struggle to overcome underdevelopment,' and called for measures to strengthen cooperation between these countries... [The OPEC Special Fund's] resources are additional to those already made available by OPEC states through a number of bilateral and multilateral channels." The Fund became an official international development agency in May 1980 and was renamed the OPEC Fund for International Development, with Permanent Observer status at the United Nations. In 2020, the institution ceased using the abbreviation OFID.
1975: Hostage siege
On 21 December 1975, Saudi Arabia's Ahmed Zaki Yamani, Iran's Jamshid Amuzegar, and the other OPEC oil ministers were taken hostage at their semi-annual conference in Vienna, Austria. The attack, which killed three non-ministers, was orchestrated by a six-person team led by Venezuelan terrorist "Carlos the Jackal", and which included Gabriele Kröcher-Tiedemann and Hans-Joachim Klein. The self-named "Arm of the Arab Revolution" group declared its goal to be the liberation of Palestine. Carlos planned to take over the conference by force and hold for ransom all eleven attending oil ministers, except for Yamani and Amuzegar who were to be executed.
Carlos arranged bus and plane travel for his team and 42 of the original 63 hostages, with stops in Algiers and Tripoli, planning to fly eventually to Baghdad, where Yamani and Amuzegar were to be killed. All 30 non-Arab hostages were released in Algiers, excluding Amuzegar. Additional hostages were released at another stop in Tripoli before returning to Algiers. With only 10 hostages remaining, Carlos held a phone conversation with Algerian president Houari Boumédiène, who informed Carlos that the oil ministers' deaths would result in an attack on the plane. Boumédienne must also have offered Carlos asylum at this time and possibly financial compensation for failing to complete his assignment. Carlos expressed his regret at not being able to murder Yamani and Amuzegar, then he and his comrades left the plane. All the hostages and terrorists walked away from the situation, two days after it began.
Sometime after the attack, Carlos's accomplices revealed that the operation was commanded by Wadie Haddad, a founder of the Popular Front for the Liberation of Palestine. They also claimed that the idea and funding came from an Arab president, widely thought to be Muammar Gaddafi of Libya, itself an OPEC member. Fellow militants Bassam Abu Sharif and Klein claimed that Carlos received and kept a ransom between 20 million and US$50 million from "an Arab president". Carlos claimed that Saudi Arabia paid ransom on behalf of Iran, but that the money was "diverted en route and lost by the Revolution". He was finally captured in 1994 and is serving life sentences for at least 16 other murders.
1979–1980: Oil crisis and 1980s oil glut
In response to a wave of oil nationalizations and the high prices of the 1970s, industrial nations took steps to reduce their dependence on OPEC oil, especially after prices reached new peaks approaching US$40/bbl in 1979–1980 when the Iranian Revolution and Iran–Iraq War disrupted regional stability and oil supplies. Electric utilities worldwide switched from oil to coal, natural gas, or nuclear power; national governments initiated multibillion-dollar research programs to develop alternatives to oil; and commercial exploration developed major non-OPEC oilfields in Siberia, Alaska, the North Sea, and the Gulf of Mexico. By 1986, daily worldwide demand for oil dropped by 5 million barrels, non-OPEC production rose by an even-larger amount, and OPEC's market share sank from approximately 50 percent in 1979 to less than 30 percent in 1985. Illustrating the volatile multi-year timeframes of typical market cycles for natural resources, the result was a six-year decline in the price of oil, which culminated by plunging more than half in 1986 alone. As one oil analyst summarized succinctly: "When the price of something as essential as oil spikes, humanity does two things: finds more of it and finds ways to use less of it."
To combat falling revenue from oil sales, in 1982 Saudi Arabia pressed OPEC for audited national production quotas in an attempt to limit output and boost prices. When other OPEC nations failed to comply, Saudi Arabia first slashed its own production from 10 million barrels daily in 1979–1981 to just one-third of that level in 1985. When even this proved ineffective, Saudi Arabia reversed course and flooded the market with cheap oil, causing prices to fall below US$10/bbl and higher-cost producers to become unprofitable.
These strategic measures by Saudi Arabia to regulate oil prices had profound economic repercussions. As the swing producer in that period, the Kingdom faced significant economic strain. Its revenues dramatically decreased from $119 billion in 1981 to $26 billion by 1985, leading to substantial budget deficits and a doubling of its debt, reaching 100% of the Gross Domestic Product.
Faced with increasing economic hardship (which ultimately contributed to the collapse of the Soviet bloc in 1989), the "free-riding" oil exporters that had previously failed to comply with OPEC agreements finally began to limit production to shore up prices, based on painstakingly negotiated national quotas that sought to balance oil-related and economic criteria since 1986. (Within their sovereign-controlled territories, the national governments of OPEC members are able to impose production limits on both government-owned and private oil companies.) Generally when OPEC production targets are reduced, oil prices increase.
1990–2003: Ample supply and modest disruptions
Leading up to his August 1990 Invasion of Kuwait, Iraqi President Saddam Hussein was pushing OPEC to end overproduction and to send oil prices higher, in order to help OPEC members financially and to accelerate rebuilding from the 1980–1988 Iran–Iraq War. But these two Iraqi wars against fellow OPEC founders marked a low point in the cohesion of the organization, and oil prices subsided quickly after the short-term supply disruptions. The September 2001 Al Qaeda attacks on the US and the March 2003 US invasion of Iraq had even milder short-term impacts on oil prices, as Saudi Arabia and other exporters again cooperated to keep the world adequately supplied.
In the 1990s, OPEC lost its two newest members, who had joined in the mid-1970s. Ecuador withdrew in December 1992, because it was unwilling to pay the annual US$2 million membership fee and felt that it needed to produce more oil than it was allowed under the OPEC quota, although it rejoined in October 2007. Similar concerns prompted Gabon to suspend membership in January 1995; it rejoined in July 2016. Iraq has remained a member of OPEC since the organization's founding, but Iraqi production was not a part of OPEC quota agreements from 1998 to 2016, due to the country's daunting political difficulties.
Lower demand triggered by the 1997–1998 Asian financial crisis saw the price of oil fall back to 1986 levels. After oil slumped to around US$10/bbl, joint diplomacy achieved a gradual slowing of oil production by OPEC, Mexico and Norway. After prices slumped again in Nov. 2001, OPEC, Norway, Mexico, Russia, Oman and Angola agreed to cut production on 1 January 2002 for 6 months. OPEC contributed 1.5 million barrels a day (mbpd) to the approximately 2 mbpd of cuts announced.
In June 2003, the International Energy Agency (IEA) and OPEC held their first joint workshop on energy issues. They have continued to meet regularly since then, "to collectively better understand trends, analysis and viewpoints and advance market transparency and predictability."
2003–2011: Volatility
Widespread insurgency and sabotage occurred during the 2003–2008 height of the American occupation of Iraq, coinciding with rapidly increasing oil demand from China and commodity-hungry investors, recurring violence against the Nigerian oil industry, and dwindling spare capacity as a cushion against potential shortages. This combination of forces prompted a sharp rise in oil prices to levels far higher than those previously targeted by OPEC. Price volatility reached an extreme in 2008, as WTI crude oil surged to a record US$147/bbl in July and then plunged back to US$32/bbl in December, during the worst global recession since World War II. OPEC's annual oil export revenue also set a new record in 2008, estimated around US$1 trillion, and reached similar annual rates in 2011–2014 (along with extensive petrodollar recycling activity) before plunging again. By the time of the 2011 Libyan Civil War and Arab Spring, OPEC started issuing explicit statements to counter "excessive speculation" in oil futures markets, blaming financial speculators for increasing volatility beyond market fundamentals.
In May 2008, Indonesia announced that it would leave OPEC when its membership expired at the end of that year, having become a net importer of oil and being unable to meet its production quota. A statement released by OPEC on 10 September 2008 confirmed Indonesia's withdrawal, noting that OPEC "regretfully accepted the wish of Indonesia to suspend its full membership in the organization, and recorded its hope that the country would be in a position to rejoin the organization in the not-too-distant future."
2008: Production dispute
The differing economic needs of OPEC member states often affect the internal debates behind OPEC production quotas. Poorer members have pushed for production cuts from fellow members, to increase the price of oil and thus their own revenues. These proposals conflict with Saudi Arabia's stated long-term strategy of being a partner with the world's economic powers to ensure a steady flow of oil that would support economic expansion. Part of the basis for this policy is the Saudi concern that overly expensive oil or unreliable supply will drive industrial nations to conserve energy and develop alternative fuels, curtailing the worldwide demand for oil and eventually leaving unneeded barrels in the ground. To this point, Saudi Oil Minister Yamani famously remarked in 1973: "The Stone Age didn't end because we ran out of stones." To elucidate Saudi Arabia's contemporary approach, in 2024, Saudi Energy Minister Prince Abdulaziz bin Salman articulated a stance that reflects how the kingdom has adapted to the evolving economic needs within OPEC and the broader international community. Emphasizing the need for a balanced and fair global energy transition, he highlighted the importance of diversifying energy sources and noted significant investments in natural gas, petrochemicals, and renewables. These efforts support economic development in emerging countries and align with global climate objectives. Additionally, he addressed shifting energy security concerns, stating, "Energy security in the 70s, 80s, and 90s was more dependent on oil. Now, you get what happened last year... It was gas. The future problem on energy security will not be oil. It will be renewables. And the materials, and the mines."
On 10 September 2008, with oil prices still near US$100/bbl, a production dispute occurred when the Saudis reportedly walked out of a negotiating session where rival members voted to reduce OPEC output. Although Saudi delegates officially endorsed the new quotas, they stated anonymously that they would not observe them. The New York Times quoted one such delegate as saying: "Saudi Arabia will meet the market's demand. We will see what the market requires and we will not leave a customer without oil. The policy has not changed." Over the next few months, oil prices plummeted into the $30s, and did not return to $100 until the Libyan Civil War in 2011.
2014–2017: Oil glut
During 2014–2015, OPEC members consistently exceeded their production ceiling, and China experienced a slowdown in economic growth. At the same time, US oil production nearly doubled from 2008 levels and approached the world-leading "swing producer" volumes of Saudi Arabia and Russia, due to the substantial long-term improvement and spread of shale "fracking" technology in response to the years of record oil prices. These developments led in turn to a plunge in US oil import requirements (moving closer to energy independence), a record volume of worldwide oil inventories, and a collapse in oil prices that continued into early 2016.
In spite of global oversupply, on 27 November 2014 in Vienna, Saudi oil minister Ali Al-Naimi blocked appeals from poorer OPEC members for production cuts to support prices. Naimi argued that the oil market should be left to rebalance itself competitively at lower price levels, strategically rebuilding OPEC's long-term market share by ending the profitability of high-cost US shale oil production. As he explained in an interview:
Is it reasonable for a highly efficient producer to reduce output, while the producer of poor efficiency continues to produce? That is crooked logic. If I reduce, what happens to my market share? The price will go up and the Russians, the Brazilians, US shale oil producers will take my share... We want to tell the world that high-efficiency producing countries are the ones that deserve market share. That is the operative principle in all capitalist countries... One thing is for sure: Current prices [roughly US$60/bbl] do not support all producers.
A year later, when OPEC met in Vienna on 4 December 2015, the organization had exceeded its production ceiling for 18 consecutive months, US oil production had declined only slightly from its peak, world markets appeared to be oversupplied by at least 2 million barrels per day despite war-torn Libya pumping 1 million barrels below capacity, oil producers were making major adjustments to withstand prices as low as $40, Indonesia was rejoining the export organization, Iraqi production had surged after years of disorder, Iranian output was poised to rebound with the lifting of international sanctions, hundreds of world leaders at the Paris Climate Agreement were committing to limit carbon emissions from fossil fuels, and solar technologies were becoming steadily more competitive and prevalent. In light of all these market pressures, OPEC decided to set aside its ineffective production ceiling until the next ministerial conference in June 2016. By 20 January 2016, the OPEC Reference Basket was down to US$22.48/bbl – less than one-fourth of its high from June 2014 ($110.48), less than one-sixth of its record from July 2008 ($140.73), and back below the April 2003 starting point ($23.27) of its historic run-up.
As 2016 continued, the oil glut was partially trimmed with significant production offline in the United States, Canada, Libya, Nigeria and China, and the basket price gradually rose back into the $40s. OPEC regained a modest percentage of market share, saw the cancellation of many competing drilling projects, maintained the status quo at its June conference, and endorsed "prices at levels that are suitable for both producers and consumers", although many producers were still experiencing serious economic difficulties.
2017–2020: Production cut and OPEC+
As OPEC members grew weary of a multi-year supply-contest with diminishing returns and shrinking financial reserves, the organization finally attempted its first production cut since 2008. Despite many political obstacles, a September 2016 decision to trim approximately 1 million barrels per day was codified by a new quota-agreement at the November 2016 OPEC conference. The agreement (which exempted disruption-ridden members Libya and Nigeria) covered the first half of 2017 – alongside promised reductions from Russia and ten other non-members, offset by expected increases in the US shale-sector, Libya, Nigeria, spare capacity, and surging late-2016 OPEC production before the cuts took effect. Indonesia announced another "temporary suspension" of its OPEC membership rather than accepting the organization's requested 5-percent production-cut. Prices fluctuated around US$50/bbl, and in May 2017 OPEC decided to extend the new quotas through March 2018, with the world waiting to see if and how the oil-inventory glut might be fully siphoned-off by then. Longtime oil analyst Daniel Yergin "described the relationship between OPEC and shale as 'mutual coexistence', with both sides learning to live with prices that are lower than they would like." These production cut deals with non-OPEC countries are generally referred to as OPEC+.
In December 2017, Russia and OPEC agreed to extend the production cut of 1.8 mbpd until the end of 2018.
Qatar announced it would withdraw from OPEC effective 1 January 2019. According to The New York Times, this was a strategic response to the ongoing diplomatic crisis between Qatar and Saudi Arabia, the United Arab Emirates, Bahrain, and Egypt.
On 29 June 2019, Russia again agreed with Saudi Arabia to extend by six to nine months the original production cuts of 2018.
In October 2019, Ecuador announced it would withdraw from OPEC on 1 January 2020 due to financial problems facing the country.
In December 2019, OPEC and Russia agreed to one of the deepest output cuts so far to prevent oversupply, in a deal that would last for the first three months of 2020.
2020: Saudi-Russian price war
In early March 2020, OPEC officials presented an ultimatum to Russia to cut production by 1.5% of world supply. Russia, which foresaw continuing cuts as American shale oil production increased, rejected the demand, ending the three-year partnership between OPEC and major non-OPEC providers. Another factor was weakening global demand resulting from the COVID-19 pandemic. This also resulted in 'OPEC plus' failing to extend the agreement cutting 2.1 million barrels per day that was set to expire at the end of March. Saudi Arabia, which had absorbed a disproportionate share of the cuts to convince Russia to stay in the agreement, notified its buyers on 7 March that it would raise output and discount its oil in April. This prompted a Brent crude price crash of more than 30% before a slight recovery, and widespread turmoil in financial markets.
Several pundits saw this as a Saudi-Russian price war, or a game of chicken in which each side waits for the other to "blink first". In March 2020 Saudi Arabia had $500 billion of foreign exchange reserves, while Russia's reserves stood at $580 billion. The debt-to-GDP ratio of the Saudis was 25%, while the Russian ratio was 15%. Another observer remarked that the Saudis can produce oil at as low a price as $3 per barrel, whereas Russia needs $30 per barrel to cover production costs. "To Russia, this price war is more than just about regaining market share for oil," one analyst claims. "It’s about assaulting the Western economy, especially America’s." To ward off an oil exporters' price war that could make shale oil production uneconomical, the US may protect its crude oil market share by passing the NOPEC bill. Meanwhile, Saudi Arabia, represented by Energy Minister Prince Abdulaziz bin Salman, maintains a conciliatory stance towards the U.S. shale industry. He clarified that harming this sector was never their intention, stating, "I made it clear that it was not on our radar or our intention to create any type of damage to their industry... they will rise again from the ashes and thrive and prosper." He also noted that Saudi Arabia is looking forward to a time when U.S. producers thrive once again in a market with higher oil demand.
In April 2020, OPEC and a group of other oil producers, including Russia, agreed to extend production cuts until the end of July. The cartel and its allies agreed to cut oil production in May and June by 9.7 million barrels a day, equal to around 10% of global output, in an effort to prop up prices, which had previously fallen to record lows.
2021: Saudi-Emirati dispute
In July 2021, OPEC+ member the United Arab Emirates rejected a Saudi-proposed eight-month extension to the oil output curbs that were in place because of COVID-19 and lower oil consumption. The previous year, OPEC+ had cut the equivalent of about 10% of demand at the time. The UAE asked for the production baseline the group recognizes for the country to be raised to 3.8 million barrels a day from the previous 3.2 million barrels. A compromise deal allowed the UAE to increase its maximum oil output to 3.65 million barrels a day.
Under the terms of the agreement, Russia would increase its production from 11 million barrels to 11.5 million by May 2022 as well. All members would increase output by 400,000 barrels per day each month starting in August to gradually offset the previous cuts made due to the COVID pandemic. This compromise, achieved where Saudi Arabia met the United Arab Emirates halfway, underscored OPEC+ unity. UAE Energy Minister Suhail Al-Mazrouei thanked Saudi Arabia and Russia for facilitating dialogue leading to an agreement. He stated, "The UAE is committed to this group and will always work with it." On the Saudi side, Energy Minister Prince Abdulaziz bin Salman emphasized consensus building and stated that the agreement strengthens OPEC+'s ties and ensures its continuity.
2021–present: Global energy crisis
The record-high energy prices were driven by a global surge in demand as the world emerged from the economic recession caused by COVID-19, particularly due to strong energy demand in Asia. In August 2021, U.S. President Joe Biden's national security adviser Jake Sullivan released a statement calling on OPEC+ to boost oil production to "offset previous production cuts that OPEC+ imposed during the pandemic until well into 2022." On 28 September 2021, Sullivan met in Saudi Arabia with Saudi Crown Prince Mohammed bin Salman to discuss the high oil prices. The price of oil was about US$80 by October 2021, the highest since 2014. President Joe Biden and U.S. Energy Secretary Jennifer Granholm blamed OPEC+ for rising oil and gas prices.
Russia's invasion of Ukraine in February 2022 has altered the global oil trade. EU leaders tried to ban the majority of Russian crude imports, but even prior to the official action imports to Northwest Europe were down. More Russian oil is now sold outside of Europe, more specifically to India and China.
In October 2022, key OPEC+ ministers agreed to oil production cuts of 2 million barrels per day, the first production cut since 2020. This led to renewed interest in the passage of NOPEC.
2022: Oil production cut
In October 2022, OPEC+, led by Saudi Arabia, announced a large cut to its oil output target in order to aid Russia. In response, US President Joe Biden vowed "consequences" and said the US government would "re-evaluate" the longstanding U.S. relationship with Saudi Arabia. Robert Menendez, the Democratic chairman of the U.S. Senate Foreign Relations Committee, called for a freeze on cooperation with and arms sales to Saudi Arabia, accusing the kingdom of helping Russia underwrite its war with Ukraine.
Saudi Arabia's foreign ministry stated that the OPEC+ decision was "purely economic" and taken unanimously by all members of the conglomerate, pushing back on pressure to change its stance on the Russo-Ukrainian War at the UN. In response, the White House accused Saudi Arabia of pressuring other OPEC nations into agreeing with the production cut, some of which felt coerced, saying the United States had presented the Saudi government with an analysis showing there was no market basis for the cut. United States National Security Council spokesman John Kirby said the Saudi government knew the decision will "increase Russian revenues and blunt the effectiveness of sanctions" against Moscow, rejecting the Saudi claim that the move was "purely economic". According to a report in The Intercept, sources and experts said that Saudi Arabia had sought even deeper cuts than Russia, saying Saudi Crown Prince Mohammed bin Salman wants to sway the 2022 United States elections in favor of the GOP and the 2024 United States presidential election in favor of Donald Trump. In contrast, Saudi officials maintain that their decision to reduce oil production was driven by concerns over the global economy, not political motivations. They state that the cuts were a response to the global economic situation and low inventories, which could trigger a rally in oil prices. Saudi Arabia affirms its actions by emphasizing its strategic partnership with the U.S., focusing on peace, security, and prosperity.
In 2023, the IEA predicted that demand for fossil fuels such as oil, natural gas and coal would reach an all-time high by 2030. OPEC rejected the IEA's forecast, saying "what makes such predictions so dangerous, is that they are often accompanied by calls to stop investing in new oil and gas projects."
In November 2024, S&P Global alleged that the UAE had ignored OPEC's oil production cuts and produced around 700,000 barrels per day more than its agreed quota of 2.91 million barrels per day. Analysts asserted that the Emirates' "quota busting" would undermine Saudi and Russian efforts to raise oil prices by cutting production: Russia was seeking to fund its war with Ukraine, while Saudi Arabia had its own plans to diversify its economy.
Membership
Current member countries
As of January 2024, OPEC has 12 member countries: five in the Middle East (West Asia), six in Africa, and one in South America. According to the U.S. Energy Information Administration (EIA), OPEC's combined rate of oil production (including gas condensate) represented 44% of the world's total in 2016, and OPEC accounted for 81.5% of the world's "proven" oil reserves. Subsequent reports from 2022 indicate that OPEC member countries were then responsible for about 38% of total world crude oil production. It is also estimated that these countries hold 79.5% of the globe's proven oil reserves, with the Middle East alone accounting for 67.2% of OPEC's reserves.
Approval of a new member country requires agreement by three-quarters of OPEC's existing members, including all five of the founders. In October 2015, Sudan formally submitted an application to join, but it is not yet a member.
OPEC+
A number of non-OPEC member countries also participate in the organisation's initiatives such as voluntary supply cuts in order to further bind policy objectives between OPEC and non-OPEC members. This loose grouping of countries, known as OPEC+, includes Azerbaijan, Bahrain, Brunei, Brazil, Kazakhstan, Malaysia, Mexico, Oman, Russia, South Sudan and Sudan.
The collaboration among OPEC+ member countries has led to the establishment of the Declaration of Cooperation (DoC) in 2017, which has been subsequently extended multiple times due to its remarkable success. The DoC serves as a framework for cooperation and coordination between OPEC and non-OPEC countries. Additionally, OPEC+ members engage in further cooperative efforts through the Charter of Cooperation (CoC), which provides a platform for long-term collaboration. The CoC facilitates dialogue and the exchange of views on global oil and energy market conditions, with the overarching goal of ensuring a secure energy supply and fostering lasting stability that benefits producers, consumers, investors, and the global economy.
Observers
Since the 1980s, representatives from Canada, Egypt, Mexico, Norway, Oman, Russia, and other oil-exporting nations have attended many OPEC meetings as observers. This arrangement serves as an informal mechanism for coordinating policies.
New members
Uganda and Somalia, both currently engaged in crude oil exploration, may join OPEC in the future, with new production likely to begin in 2024–2025. Expansion of OPEC, alongside OPEC+, to countries still in the exploration stage depends on their meeting the OPEC Charter's membership criterion of crude oil production.
Lapsed members
For countries that export petroleum at relatively low volume, their limited negotiating power as OPEC members would not necessarily justify the burdens imposed by OPEC production quotas and membership costs. Ecuador withdrew from OPEC in December 1992, because it was unwilling to pay the annual US$2 million membership fee and felt that it needed to produce more oil than it was allowed under its OPEC quota at the time. Ecuador then rejoined in October 2007 before leaving again in January 2020. Ecuador's Ministry of Energy and Non-Renewable Natural Resources released an official statement on 2 January 2020 which confirmed that Ecuador had left OPEC. Similar concerns prompted Gabon to suspend membership in January 1995; it rejoined in July 2016.
In May 2008, Indonesia announced that it would leave OPEC when its membership expired at the end of that year, having become a net importer of oil and being unable to meet its production quota. It rejoined the organization in January 2016, but announced another "temporary suspension" of its membership at year-end when OPEC requested a 5% production cut.
Qatar left OPEC on 1 January 2019, after joining the organization in 1961, to focus on natural gas production, of which it is the world's largest exporter in the form of liquified natural gas (LNG).
In an OPEC meeting in November 2023, Nigeria and Angola, the biggest oil producers in Sub-Saharan Africa, expressed their discontent over OPEC's quotas which, according to them, blocked their efforts to ramp up oil production and boost their foreign reserves. In December 2023, Angola announced it was leaving the OPEC because it disagreed with the organization's production quotas scheme.
Market information
As one area in which OPEC members have been able to cooperate productively over the decades, the organisation has significantly improved the quality and quantity of information available about the international oil market. This is especially helpful for a natural-resource industry whose smooth functioning requires months and years of careful planning.
Publications and research
In April 2001, OPEC collaborated with five other international organizations (APEC, Eurostat, IEA, OLADE, UNSD) to improve the availability and reliability of oil data. They launched the Joint Oil Data Exercise, which in 2005 was joined by IEF and renamed the Joint Organisations Data Initiative (JODI), covering more than 90% of the global oil market. GECF joined as an eighth partner in 2014, enabling JODI also to cover nearly 90% of the global market for natural gas.
Since 2007, OPEC has published the "World Oil Outlook" (WOO) annually, in which it presents a comprehensive analysis of the global oil industry including medium- and long-term projections for supply and demand. OPEC also produces an "Annual Statistical Bulletin" (ASB), and publishes more-frequent updates in its "Monthly Oil Market Report" (MOMR) and "OPEC Bulletin".
Crude oil benchmarks
A "crude oil benchmark" is a standardized petroleum product that serves as a convenient reference price for buyers and sellers of crude oil, including standardized contracts in major futures markets since 1983. Benchmarks are used because oil prices differ (usually by a few dollars per barrel) based on variety, grade, delivery date and location, and other legal requirements.
The OPEC Reference Basket of Crudes has been an important benchmark for oil prices since 2000. It is calculated as a weighted average of prices for petroleum blends from the OPEC member countries: Saharan Blend (Algeria), Girassol (Angola), Djeno (Republic of the Congo) Rabi Light (Gabon), Iran Heavy (Islamic Republic of Iran), Basra Light (Iraq), Kuwait Export (Kuwait), Es Sider (Libya), Bonny Light (Nigeria), Arab Light (Saudi Arabia), Murban (UAE), and Merey (Venezuela).
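As a rough illustration of how such a basket price is computed, the sketch below takes a weighted average of per-blend prices. The blend names come from the list above, while the prices and weights are entirely hypothetical placeholders, not OPEC data.

```python
# Hypothetical illustration of a weighted-average basket price.
# Prices (USD/barrel) and weights are made-up placeholders, not OPEC figures.
blends = {
    # name: (price, weight)
    "Saharan Blend": (82.0, 0.10),
    "Arab Light":    (84.5, 0.30),
    "Basra Light":   (80.0, 0.25),
    "Murban":        (85.0, 0.20),
    "Merey":         (70.0, 0.15),
}

total_weight = sum(w for _, w in blends.values())
basket_price = sum(p * w for p, w in blends.values()) / total_weight
print(f"basket price: {basket_price:.2f} USD/barrel")
```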
North Sea Brent Crude Oil is the leading benchmark for Atlantic basin crude oils and is used to price approximately two-thirds of the world's traded crude oil. Other well-known benchmarks are West Texas Intermediate (WTI), Dubai Crude, Oman Crude, and Urals oil.
Spare capacity
The US Energy Information Administration, the statistical arm of the US Department of Energy, defines spare capacity for crude oil market management "as the volume of production that can be brought on within 30 days and sustained for at least 90 days ... OPEC spare capacity provides an indicator of the world oil market's ability to respond to potential crises that reduce oil supplies."
In November 2014, the International Energy Agency (IEA) estimated that OPEC's "effective" spare capacity, adjusted for ongoing disruptions in countries like Libya and Nigeria, was and that this number would increase to a peak in 2017 of . By November 2015, the IEA changed its assessment "with OPEC's spare production buffer stretched thin, as Saudi Arabia – which holds the lion's share of excess capacity – and its [Persian] Gulf neighbours pump at near-record rates."
See also
Organization of Arab Petroleum Exporting Countries
Big Oil
Energy diplomacy
List of country groupings
List of intergovernmental organizations
Oligopoly
World oil market chronology from 2003
Gasoline
Peak oil
Peak gas
Arun gas field
Notes
References
Further reading
Ansari, Dawud. (2017) "OPEC, Saudi Arabia, and the shale revolution: Insights from equilibrium modelling and oil politics." Energy Policy 111 (2017): 166–178. online
Claes, Dag Harald, and Giuliano Garavini eds. (2019) Handbook of OPEC and the Global Energy Order: Past, Present and Future Challenges (Routledge 2019) excerpt
Colgan, Jeff D. (2014) "The emperor has no clothes: The limits of OPEC in the global oil market." International Organization 68.3 (2014): 599–632. online
Dudley, Bob. (2019) "BP energy outlook." Report–BP Energy Economics–London: UK 9 (2019) online.
Economou, Andreas, and Bassam Fattouh. (2021) "OPEC at 60: the world with and without OPEC." OPEC Energy Review 45.1 (2021): 3-28. online, a historical perspective from 1990 to 2018.
Evans, John (1986). OPEC, Its Member States and the World Energy Market.
Fesharaki, Fereidun (1983). OPEC, the Gulf, and the World Petroleum Market: A Study in Government Policy and Downstream Operations.
Garavini, Giuliano. (2019). The Rise and Fall of OPEC in the Twentieth Century. Oxford University Press.
Gately, Dermot. (1984) "A ten-year retrospective: OPEC and the world oil market." Journal of Economic Literature 22.3 (1984): 1100–1114. summary of scholarly literature online
Painter, David S (2014). "Oil and geopolitics: The oil crises of the 1970s and the cold war". Historical Social Research/Historische Sozialforschung. 186–208.
Pickl, Matthias J. (2019) "The renewable energy strategies of oil majors–From oil to energy?." Energy Strategy Reviews 26 (2019): 100370. online
Ratti, Ronald A., and Joaquin L. Vespignani. (2015) "OPEC and non-OPEC oil production and the global economy." Energy Economics 50 (2015): 364–378. online
Skeet, Ian (1988). OPEC: Twenty-five Years of Prices and Politics. Cambridge UP. online
Van de Graaf, Thijs. (2020) "Is OPEC dead? Oil exporters, the Paris agreement and the transition to a post-carbon world." in Beyond market assumptions: Oil price as a global institution (Springer, Cham, 2020) pp. 63–77. online
Wight, David M. Oil Money: Middle East Petrodollars and the Transformation of US Empire, 1967–1988 (Cornell University Press, 2021). Website: rjissf.org online reviews
Woolfson, Charles, and Matthias Beck. (2019) "Corporate social responsibility in the international oil industry." in Corporate social responsibility failures in the oil industry. (Routledge, 2019) pp. 1–14. online
Yergin, Daniel (1991). The Prize: The Epic Quest for Oil, Money, and Power. online
Yergin, Daniel (2011). The Quest: Energy, Security, and the Remaking of the Modern World. online
External links
The OPEC Fund for International Development official website
Cartels
History of the petroleum industry
Petroleum
International energy organizations
Petroleum economics
Petroleum organizations
Petroleum politics
Organisations based in Vienna
Organizations established in 1960
20th century in Baghdad | OPEC | [
"Chemistry",
"Engineering"
] | 11,110 | [
"Petroleum politics",
"Petroleum",
"Petroleum organizations",
"International energy organizations",
"Energy organizations"
] |
166,352 | https://en.wikipedia.org/wiki/Sides%20of%20an%20equation | In mathematics, LHS is informal shorthand for the left-hand side of an equation. Similarly, RHS is the right-hand side. The two sides have the same value, expressed differently, since equality is symmetric.
More generally, these terms may apply to an inequation or inequality; the right-hand side is everything on the right side of a test operator in an expression, with LHS defined similarly.
Example
The expression on the right side of the "=" sign is the right side of the equation and the expression on the left of the "=" is the left side of the equation.
For example, in
x + 5 = y + 8,
x + 5 is the left-hand side (LHS) and y + 8 is the right-hand side (RHS).
Homogeneous and inhomogeneous equations
In solving mathematical equations, particularly linear simultaneous equations, differential equations and integral equations, the terminology homogeneous is often used for equations with some linear operator L on the LHS and 0 on the RHS. In contrast, an equation with a non-zero RHS is called inhomogeneous or non-homogeneous, as exemplified by
Lf = g,
with g a fixed function, an equation that is to be solved for f. Then any solution of the inhomogeneous equation may have a solution of the homogeneous equation added to it and still remain a solution.
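This superposition property follows directly from the linearity of L; a one-line check:

L(f + h) = Lf + Lh = g + 0 = g \quad \text{whenever } Lf = g \text{ and } Lh = 0.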
For example in mathematical physics, the homogeneous equation may correspond to a physical theory formulated in empty space, while the inhomogeneous equation asks for more 'realistic' solutions with some matter, or charged particles.
Syntax
More abstractly, when using infix notation
T * U
the term T stands as the left-hand side and U as the right-hand side of the operator *. This usage is less common, though.
See also
Equals sign
References
Mathematical terminology | Sides of an equation | [
"Mathematics"
] | 369 | [
"nan"
] |
166,354 | https://en.wikipedia.org/wiki/Barotropic%20vorticity%20equation | The barotropic vorticity equation assumes the atmosphere is nearly barotropic, which means that the direction and speed of the geostrophic wind are independent of height. In other words, there is no vertical wind shear of the geostrophic wind. It also implies that thickness contours (a proxy for temperature) are parallel to upper level height contours. In this type of atmosphere, high and low pressure areas are centers of warm and cold temperature anomalies. Warm-core highs (such as the subtropical ridge and the Bermuda-Azores high) and cold-core lows have strengthening winds with height, with the reverse true for cold-core highs (shallow Arctic highs) and warm-core lows (such as tropical cyclones).
A simplified form of the vorticity equation for an inviscid, divergence-free flow (solenoidal velocity field), the barotropic vorticity equation can simply be stated as
\frac{D\eta}{Dt} = 0,
where \frac{D}{Dt} is the material derivative and
\eta = \zeta + f
is absolute vorticity, with ζ being relative vorticity, defined as the vertical component of the curl of the fluid velocity, and f is the Coriolis parameter
f = 2\Omega \sin\varphi,
where Ω is the angular frequency of the planet's rotation (Ω = 7.2921 × 10⁻⁵ rad/s for the Earth) and φ is latitude.
In terms of relative vorticity, the equation can be rewritten as
\frac{D\zeta}{Dt} = -\beta v,
where β = \partial f / \partial y is the variation of the Coriolis parameter with distance y in the north–south direction and v is the component of velocity in this direction.
In 1950, Charney, Fjørtoft, and von Neumann integrated this equation (with an added diffusion term on the right-hand side) on a computer for the first time, using an observed field of 500 hPa geopotential height for the first timestep. This was one of the first successful instances of numerical weather prediction.
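To make the idea of integrating this equation concrete, here is a minimal sketch (in Python, not the 1950 ENIAC procedure) of forward time-stepping Dζ/Dt = −βv on a doubly periodic beta-plane, with the streamfunction recovered from the vorticity by an FFT-based Poisson inversion. The grid size, β, the time step, and the forward-Euler scheme are illustrative assumptions, not values from the original experiment.

```python
# Minimal barotropic vorticity model sketch on a doubly periodic beta-plane.
import numpy as np

n = 64                      # grid points per side (assumed)
L = 2 * np.pi               # domain size (assumed)
beta = 1.0                  # df/dy on the beta-plane (assumed)
dt = 1e-3                   # time step (assumed)

k = np.fft.fftfreq(n, d=L / n) * 2 * np.pi
kx, ky = np.meshgrid(k, k, indexing="ij")
k2 = kx**2 + ky**2
k2[0, 0] = 1.0              # avoid division by zero for the mean mode

def ddx(f): return np.real(np.fft.ifft2(1j * kx * np.fft.fft2(f)))
def ddy(f): return np.real(np.fft.ifft2(1j * ky * np.fft.fft2(f)))

def tendency(zeta):
    # Invert zeta = laplacian(psi) for the streamfunction, then u = -dpsi/dy, v = dpsi/dx.
    psi = np.real(np.fft.ifft2(-np.fft.fft2(zeta) / k2))
    u, v = -ddy(psi), ddx(psi)
    # d(zeta)/dt = -u . grad(zeta) - beta * v
    return -(u * ddx(zeta) + v * ddy(zeta)) - beta * v

rng = np.random.default_rng(0)
zeta = rng.standard_normal((n, n)) * 1e-2   # small random initial vorticity
for _ in range(100):                        # forward-Euler steps (illustrative only)
    zeta = zeta + dt * tendency(zeta)
print("mean vorticity (should stay near 0):", zeta.mean())
```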
See also
Barotropic
References
External links
http://www.met.reading.ac.uk/~ross/Science/BarVor.html
Equations of fluid dynamics | Barotropic vorticity equation | [
"Physics",
"Chemistry"
] | 421 | [
"Equations of fluid dynamics",
"Equations of physics",
"Fluid dynamics stubs",
"Fluid dynamics"
] |
166,356 | https://en.wikipedia.org/wiki/Pseudovector | In physics and mathematics, a pseudovector (or axial vector) is a quantity that behaves like a vector in many situations, but its direction does not conform when the object is rigidly transformed by rotation, translation, reflection, etc. This can also happen when the orientation of the space is changed. For example, the angular momentum is a pseudovector because it is often described as a vector, but by just changing the position of reference (and changing the position vector), angular momentum can reverse direction, which is not supposed to happen with true vectors (also known as polar vectors).
One example of a pseudovector is the normal to an oriented plane. An oriented plane can be defined by two non-parallel vectors, a and b, that span the plane. The vector a × b is a normal to the plane (there are two normals, one on each side – the right-hand rule will determine which), and is a pseudovector. This has consequences in computer graphics, where it has to be considered when transforming surface normals.
In three dimensions, the curl of a polar vector field at a point and the cross product of two polar vectors are pseudovectors.
A number of quantities in physics behave as pseudovectors rather than polar vectors, including magnetic field and angular velocity. In mathematics, in three dimensions, pseudovectors are equivalent to bivectors, from which the transformation rules of pseudovectors can be derived. More generally, in n-dimensional geometric algebra, pseudovectors are the elements of the algebra with dimension , written ⋀n−1Rn. The label "pseudo-" can be further generalized to pseudoscalars and pseudotensors, both of which gain an extra sign-flip under improper rotations compared to a true scalar or tensor.
Physical examples
Physical examples of pseudovectors include torque, angular velocity, angular momentum, magnetic field, vorticity and magnetic dipole moment.
Consider the pseudovector angular momentum L = r × p. Driving in a car, and looking forward, each of the wheels has an angular momentum vector pointing to the left. If the world is reflected in a mirror which switches the left and right side of the car, the "reflection" of this angular momentum "vector" (viewed as an ordinary vector) points to the right, but the actual angular momentum vector of the wheel (which is still turning forward in the reflection) still points to the left, corresponding to the extra sign flip in the reflection of a pseudovector.
The distinction between polar vectors and pseudovectors becomes important in understanding the effect of symmetry on the solution to physical systems. Consider an electric current loop in the z = 0 plane that inside the loop generates a magnetic field oriented in the z direction. This system is symmetric (invariant) under mirror reflections through this plane, with the magnetic field unchanged by the reflection. But reflecting the magnetic field as a vector through that plane would be expected to reverse it; this expectation is corrected by realizing that the magnetic field is a pseudovector, with the extra sign flip leaving it unchanged.
In physics, pseudovectors are generally the result of taking the cross product of two polar vectors or the curl of a polar vector field. The cross product and curl are defined, by convention, according to the right hand rule, but could have been just as easily defined in terms of a left-hand rule. The entire body of physics that deals with (right-handed) pseudovectors and the right hand rule could be replaced by using (left-handed) pseudovectors and the left hand rule without issue. The (left) pseudovectors so defined would be opposite in direction to those defined by the right-hand rule.
While vector relationships in physics can be expressed in a coordinate-free manner, a coordinate system is required in order to express vectors and pseudovectors as numerical quantities. Vectors are represented as ordered triplets of numbers, e.g. (v1, v2, v3), and pseudovectors are represented in this form too. When transforming between left and right-handed coordinate systems, representations of pseudovectors do not transform as vectors, and treating them as vector representations will cause an incorrect sign change, so that care must be taken to keep track of which ordered triplets represent vectors, and which represent pseudovectors. This problem does not exist if the cross product of two vectors is replaced by the exterior product of the two vectors, which yields a bivector which is a 2nd rank tensor and is represented by a 3×3 matrix. This representation of the 2-tensor transforms correctly between any two coordinate systems, independently of their handedness.
Details
The definition of a "vector" in physics (including both polar vectors and pseudovectors) is more specific than the mathematical definition of "vector" (namely, any element of an abstract vector space). Under the physics definition, a "vector" is required to have components that "transform" in a certain way under a proper rotation: In particular, if everything in the universe were rotated, the vector would rotate in exactly the same way. (The coordinate system is fixed in this discussion; in other words this is the perspective of active transformations.) Mathematically, if everything in the universe undergoes a rotation described by a rotation matrix R, so that a displacement vector x is transformed to , then any "vector" v must be similarly transformed to . This important requirement is what distinguishes a vector (which might be composed of, for example, the x-, y-, and z-components of velocity) from any other triplet of physical quantities (For example, the length, width, and height of a rectangular box cannot be considered the three components of a vector, since rotating the box does not appropriately transform these three components.)
(In the language of differential geometry, this requirement is equivalent to defining a vector to be a tensor of contravariant rank one. In this more general framework, higher rank tensors can also have arbitrarily many and mixed covariant and contravariant ranks at the same time, denoted by raised and lowered indices within the Einstein summation convention.)
A basic and rather concrete example is that of row and column vectors under the usual matrix multiplication operator: in one order they yield the dot product, which is just a scalar and as such a rank zero tensor, while in the other they yield the dyadic product, which is a matrix representing a rank two mixed tensor, with one contravariant and one covariant index. As such, the noncommutativity of standard matrix algebra can be used to keep track of the distinction between covariant and contravariant vectors. This is in fact how the bookkeeping was done before the more formal and generalised tensor notation came to be. It still manifests itself in how the basis vectors of general tensor spaces are exhibited for practical manipulation.
The discussion so far only relates to proper rotations, i.e. rotations about an axis. However, one can also consider improper rotations, i.e. a mirror-reflection possibly followed by a proper rotation. (One example of an improper rotation is inversion through a point in 3-dimensional space.) Suppose everything in the universe undergoes an improper rotation described by the improper rotation matrix R, so that a position vector x is transformed to x′ = Rx. If the vector v is a polar vector, it will be transformed to v′ = Rv. If it is a pseudovector, it will be transformed to v′ = −Rv.
The transformation rules for polar vectors and pseudovectors can be compactly stated as
\mathbf{v}' = R\,\mathbf{v} \quad \text{(polar vector)},
\mathbf{v}' = (\det R)\,R\,\mathbf{v} \quad \text{(pseudovector)},
where the symbols are as described above, and the rotation matrix R can be either proper or improper. The symbol det denotes determinant; this formula works because the determinant of proper and improper rotation matrices are +1 and −1, respectively.
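A small numeric check of these rules (an illustrative sketch, not part of the original text): build a pseudovector as a cross product of two polar vectors, apply a reflection R, and confirm that it picks up the extra det(R) factor.

```python
# Polar vectors transform as R v; pseudovectors pick up an extra det(R) sign.
import numpy as np

R = np.diag([1.0, 1.0, -1.0])          # reflection through the xy-plane, det(R) = -1
a = np.array([1.0, 2.0, 3.0])          # polar vector
b = np.array([-2.0, 0.5, 1.0])         # polar vector
p = np.cross(a, b)                     # pseudovector built from a and b

# Transform the ingredients like polar vectors, then rebuild the cross product:
p_rebuilt = np.cross(R @ a, R @ b)

# The rebuilt pseudovector matches det(R) * R @ p, not R @ p:
assert np.allclose(p_rebuilt, np.linalg.det(R) * (R @ p))
assert not np.allclose(p_rebuilt, R @ p)
print("cross product transforms with the extra det(R) sign, as expected")
```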
Behavior under addition, subtraction, scalar multiplication
Suppose v1 and v2 are known pseudovectors, and v3 is defined to be their sum, v3 = v1 + v2. If the universe is transformed by a rotation matrix R, then v3 is transformed to
\mathbf{v}_3' = (\det R)\,R\,\mathbf{v}_1 + (\det R)\,R\,\mathbf{v}_2 = (\det R)\,R\,(\mathbf{v}_1 + \mathbf{v}_2) = (\det R)\,R\,\mathbf{v}_3.
So v3 is also a pseudovector. Similarly one can show that the difference between two pseudovectors is a pseudovector, that the sum or difference of two polar vectors is a polar vector, that multiplying a polar vector by any real number yields another polar vector, and that multiplying a pseudovector by any real number yields another pseudovector.
On the other hand, suppose v1 is known to be a polar vector, v2 is known to be a pseudovector, and v3 is defined to be their sum, v3 = v1 + v2. If the universe is transformed by an improper rotation matrix R, then v3 is transformed to
\mathbf{v}_3' = R\,\mathbf{v}_1 + (\det R)\,R\,\mathbf{v}_2 = R\,\mathbf{v}_1 - R\,\mathbf{v}_2 = R\,(\mathbf{v}_1 - \mathbf{v}_2).
Therefore, v3 is neither a polar vector nor a pseudovector (although it is still a vector, by the physics definition). For an improper rotation, v3 does not in general even keep the same magnitude:
\left|\mathbf{v}_3'\right| = \left|\mathbf{v}_1 - \mathbf{v}_2\right| \neq \left|\mathbf{v}_1 + \mathbf{v}_2\right| = \left|\mathbf{v}_3\right|.
If the magnitude of v3 were to describe a measurable physical quantity, that would mean that the laws of physics would not appear the same if the universe was viewed in a mirror. In fact, this is exactly what happens in the weak interaction: Certain radioactive decays treat "left" and "right" differently, a phenomenon which can be traced to the summation of a polar vector with a pseudovector in the underlying theory. (See parity violation.)
Behavior under cross products
For a rotation matrix R, either proper or improper, the following mathematical equation is always true:
(R\mathbf{v}_1) \times (R\mathbf{v}_2) = (\det R)\,R\,(\mathbf{v}_1 \times \mathbf{v}_2),
where v1 and v2 are any three-dimensional vectors. (This equation can be proven either through a geometric argument or through an algebraic calculation.)
Suppose v1 and v2 are known polar vectors, and v3 is defined to be their cross product, v3 = v1 × v2. If the universe is transformed by a rotation matrix R, then v3 is transformed to
\mathbf{v}_3' = (R\mathbf{v}_1) \times (R\mathbf{v}_2) = (\det R)\,R\,(\mathbf{v}_1 \times \mathbf{v}_2) = (\det R)\,R\,\mathbf{v}_3.
So v3 is a pseudovector. Similarly, one can show:
polar vector × polar vector = pseudovector
pseudovector × pseudovector = pseudovector
polar vector × pseudovector = polar vector
pseudovector × polar vector = polar vector
This is isomorphic to addition modulo 2, where "polar" corresponds to 1 and "pseudo" to 0.
Examples
From the definition, it is clear that a displacement vector is a polar vector. The velocity vector is a displacement vector (a polar vector) divided by time (a scalar), so is also a polar vector. Likewise, the momentum vector is the velocity vector (a polar vector) times mass (a scalar), so is a polar vector. Angular momentum is the cross product of a displacement (a polar vector) and momentum (a polar vector), and is therefore a pseudovector. Torque is angular momentum (a pseudovector) divided by time (a scalar), so is also a pseudovector. Continuing this way, it is straightforward to classify any of the common vectors in physics as either a pseudovector or polar vector. (There are the parity-violating vectors in the theory of weak-interactions, which are neither polar vectors nor pseudovectors. However, these occur very rarely in physics.)
The right-hand rule
Above, pseudovectors have been discussed using active transformations. An alternate approach, more along the lines of passive transformations, is to keep the universe fixed, but switch "right-hand rule" with "left-hand rule" everywhere in math and physics, including in the definition of the cross product and the curl. Any polar vector (e.g., a translation vector) would be unchanged, but pseudovectors (e.g., the magnetic field vector at a point) would switch signs. Nevertheless, there would be no physical consequences, apart from in the parity-violating phenomena such as certain radioactive decays.
Formalization
One way to formalize pseudovectors is as follows: if V is an n-dimensional vector space, then a pseudovector of V is an element of the (n − 1)-th exterior power of V: ⋀n−1(V). The pseudovectors of V form a vector space with the same dimension as V.
This definition is not equivalent to that requiring a sign flip under improper rotations, but it is general to all vector spaces. In particular, when n is even, such a pseudovector does not experience a sign flip, and when the characteristic of the underlying field of V is 2, a sign flip has no effect. Otherwise, the definitions are equivalent, though it should be borne in mind that without additional structure (specifically, either a volume form or an orientation), there is no natural identification of ⋀n−1(V) with V.
Another way to formalize them is by considering them as elements of a representation space for O(n). Vectors transform in the fundamental representation of O(n), with data given by (Rⁿ, ρ_fund), so that for any matrix R in O(n), one has ρ_fund(R) v = R v. Pseudovectors transform in a pseudofundamental representation ρ_pseudo, with ρ_pseudo(R) v = det(R) R v. Another way to view this homomorphism for odd n is that in this case O(n) ≅ SO(n) × {±I}. Then ρ_pseudo is a direct product of group homomorphisms; it is the direct product of the fundamental homomorphism on SO(n) with the trivial homomorphism on {±I}.
Geometric algebra
In geometric algebra the basic elements are vectors, and these are used to build a hierarchy of elements using the definitions of products in this algebra. In particular, the algebra builds pseudovectors from vectors.
The basic multiplication in the geometric algebra is the geometric product, denoted by simply juxtaposing two vectors as in ab. This product is expressed as:
\mathbf{a}\mathbf{b} = \mathbf{a} \cdot \mathbf{b} + \mathbf{a} \wedge \mathbf{b},
where the leading term is the customary vector dot product and the second term is called the wedge product or exterior product. Using the postulates of the algebra, all combinations of dot and wedge products can be evaluated. A terminology to describe the various combinations is provided. For example, a multivector is a summation of k-fold wedge products of various k-values. A k-fold wedge product also is referred to as a k-blade.
In the present context the pseudovector is one of these combinations. This term is attached to a different multivector depending upon the dimensions of the space (that is, the number of linearly independent vectors in the space). In three dimensions, the most general 2-blade or bivector can be expressed as the wedge product of two vectors and is a pseudovector. In four dimensions, however, the pseudovectors are trivectors. In general, it is a -blade, where n is the dimension of the space and algebra. An n-dimensional space has n basis vectors and also n basis pseudovectors. Each basis pseudovector is formed from the outer (wedge) product of all but one of the n basis vectors. For instance, in four dimensions where the basis vectors are taken to be {e1, e2, e3, e4}, the pseudovectors can be written as: {e234, e134, e124, e123}.
Transformations in three dimensions
The transformation properties of the pseudovector in three dimensions have been compared to those of the vector cross product by Baylis. He says: "The terms axial vector and pseudovector are often treated as synonymous, but it is quite useful to be able to distinguish a bivector from its dual." To paraphrase Baylis: Given two polar vectors (that is, true vectors) a and b in three dimensions, the cross product composed from a and b is the vector normal to their plane given by c = a × b. Given a set of right-handed orthonormal basis vectors {e₁, e₂, e₃}, the cross product is expressed in terms of its components as:
\mathbf{a} \times \mathbf{b} = (a^2 b^3 - a^3 b^2)\,\mathbf{e}_1 + (a^3 b^1 - a^1 b^3)\,\mathbf{e}_2 + (a^1 b^2 - a^2 b^1)\,\mathbf{e}_3,
where superscripts label vector components. On the other hand, the plane of the two vectors is represented by the exterior product or wedge product, denoted by a ∧ b. In this context of geometric algebra, this bivector is called a pseudovector, and is the Hodge dual of the cross product. The dual of e₁ is introduced as e₂₃ ≡ e₂e₃ = e₂ ∧ e₃, and so forth. That is, the dual of e₁ is the subspace perpendicular to e₁, namely the subspace spanned by e₂ and e₃. With this understanding,
\mathbf{a} \wedge \mathbf{b} = (a^2 b^3 - a^3 b^2)\,\mathbf{e}_{23} + (a^3 b^1 - a^1 b^3)\,\mathbf{e}_{31} + (a^1 b^2 - a^2 b^1)\,\mathbf{e}_{12}.
For details, see Hodge dual. The cross product and wedge product are related by:
\mathbf{a} \wedge \mathbf{b} = i\,(\mathbf{a} \times \mathbf{b}),
where i = e₁e₂e₃ is called the unit pseudoscalar. It has the property:
i^2 = -1.
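As a quick worked check of this relation, take a = e₁ and b = e₂:
\mathbf{e}_1 \wedge \mathbf{e}_2 = \mathbf{e}_1\mathbf{e}_2, \qquad i\,(\mathbf{e}_1 \times \mathbf{e}_2) = \mathbf{e}_1\mathbf{e}_2\mathbf{e}_3\,\mathbf{e}_3 = \mathbf{e}_1\mathbf{e}_2,
so the bivector e₁ ∧ e₂ and the dual of the cross-product vector e₃ coincide, as the formula requires.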
Using the above relations, it is seen that if the vectors a and b are inverted by changing the signs of their components while leaving the basis vectors fixed, both the pseudovector and the cross product are invariant. On the other hand, if the components are fixed and the basis vectors eℓ are inverted, then the pseudovector is invariant, but the cross product changes sign. This behavior of cross products is consistent with their definition as vector-like elements that change sign under transformation from a right-handed to a left-handed coordinate system, unlike polar vectors.
Note on usage
As an aside, it may be noted that not all authors in the field of geometric algebra use the term pseudovector, and some authors follow the terminology that does not distinguish between the pseudovector and the cross product. However, because the cross product does not generalize to other than three dimensions,
the notion of pseudovector based upon the cross product also cannot be extended to a space of any other number of dimensions. The pseudovector as a -blade in an n-dimensional space is not restricted in this way.
Another important note is that pseudovectors, despite their name, are "vectors" in the sense of being elements of a vector space. The idea that "a pseudovector is different from a vector" is only true with a different and more specific definition of the term "vector" as discussed above.
See also
Exterior algebra
Clifford algebra
Antivector, a generalization of pseudovector in Clifford algebra
Orientability — discussion about non-orientable spaces.
Tensor density
Notes
References
Axial vector at Encyclopaedia of Mathematics
The dual of the wedge product is the cross product.
Linear algebra
Vector calculus
Vectors (mathematics and physics) | Pseudovector | [
"Mathematics"
] | 3,679 | [
"Linear algebra",
"Algebra"
] |
166,365 | https://en.wikipedia.org/wiki/Vorticity%20equation | The vorticity equation of fluid dynamics describes the evolution of the vorticity of a particle of a fluid as it moves with its flow; that is, the local rotation of the fluid (in terms of vector calculus this is the curl of the flow velocity). The governing equation is:where is the material derivative operator, is the flow velocity, is the local fluid density, is the local pressure, is the viscous stress tensor and represents the sum of the external body forces. The first source term on the right hand side represents vortex stretching.
The equation is valid in the absence of any concentrated torques and line forces for a compressible, Newtonian fluid. In the case of incompressible flow (i.e., low Mach number) and isotropic fluids, with conservative body forces, the equation simplifies to the vorticity transport equation:
where is the kinematic viscosity and is the Laplace operator. Under the further assumption of two-dimensional flow, the equation simplifies to:
Physical interpretation
The term on the left-hand side is the material derivative of the vorticity vector ω. It describes the rate of change of vorticity of the moving fluid particle. This change can be attributed to unsteadiness in the flow (∂ω/∂t, the unsteady term) or due to the motion of the fluid particle as it moves from one point to another ((u · ∇)ω, the convection term).
The term (ω · ∇)u on the right-hand side describes the stretching or tilting of vorticity due to the flow velocity gradients. Note that (ω · ∇)u is a vector quantity, as ω · ∇ is a scalar differential operator, while ∇u is a nine-element tensor quantity.
The term ω(∇ · u) describes stretching of vorticity due to flow compressibility. It follows from the continuity equation of the Navier–Stokes system, namely
\nabla \cdot \mathbf{u} = \frac{1}{v}\frac{Dv}{Dt},
where v = 1/ρ is the specific volume of the fluid element. One can think of ∇ · u as a measure of flow compressibility. Sometimes the negative sign is included in the term.
The term (1/ρ²) ∇ρ × ∇p is the baroclinic term. It accounts for the changes in the vorticity due to the intersection of density and pressure surfaces.
The term ∇ × (∇ · τ/ρ) accounts for the diffusion of vorticity due to the viscous effects.
The term ∇ × (B/ρ) provides for changes due to external body forces. These are forces that are spread over a three-dimensional region of the fluid, such as gravity or electromagnetic forces (as opposed to forces that act only over a surface, like drag on a wall, or along a line, like surface tension around a meniscus).
Simplifications
In case of conservative body forces, ∇ × (B/ρ) = 0.
For a barotropic fluid, ∇ρ × ∇p = 0. This is also true for a constant density fluid (including incompressible fluid) where ∇ρ = 0. Note that this is not the same as an incompressible flow, for which the baroclinic term cannot be neglected.
This note refers to the fact that conservation of mass (the continuity equation) reads ∂ρ/∂t + ∇ · (ρu) = 0, and there is a difference between assuming that ρ = constant (the 'incompressible fluid' option above) and that ∇ · u = 0 (the 'incompressible flow' option above). With the first assumption, continuity implies (for non-zero density) that ∇ · u = 0; whereas the second assumption does not necessarily imply that ρ is constant. It only requires that the local rate of change of the density is compensated by advection down the density gradient, as in ∂ρ/∂t + u · ∇ρ = 0. One can make sense of this by considering the ideal gas law: even for an adiabatic, chemically homogeneous fluid, the density can vary when the pressure changes, e.g. via Bernoulli's principle (which applies when the Reynolds number is large enough that viscous friction becomes unimportant).
For inviscid fluids, the viscous stress tensor τ is zero.
Thus for an inviscid, barotropic fluid with conservative body forces, the vorticity equation simplifies to
\frac{D\boldsymbol{\omega}}{Dt} = (\boldsymbol{\omega} \cdot \nabla)\mathbf{u} - \boldsymbol{\omega}(\nabla \cdot \mathbf{u}).
Alternately, in case of incompressible, inviscid fluid with conservative body forces,
\frac{D\boldsymbol{\omega}}{Dt} = (\boldsymbol{\omega} \cdot \nabla)\mathbf{u}.
For a brief review of additional cases and simplifications, and for the vorticity equation as used in turbulence theory in the context of flows in the oceans and atmosphere, see the cited literature.
Derivation
The vorticity equation can be derived from the Navier–Stokes equation for the conservation of momentum. In the absence of any concentrated torques and line forces, one obtains:
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\mathbf{u} = -\frac{1}{\rho}\nabla p + \frac{\nabla \cdot \boldsymbol{\tau}}{\rho} + \frac{\mathbf{B}}{\rho}.
Now, vorticity is defined as the curl of the flow velocity vector; taking the curl of the momentum equation yields the desired equation. The following identities are useful in derivation of the equation:
(\mathbf{u} \cdot \nabla)\mathbf{u} = \nabla\!\left(\tfrac{1}{2}|\mathbf{u}|^{2}\right) - \mathbf{u} \times \boldsymbol{\omega}, \qquad \nabla \times (\nabla \phi) = 0,
where φ is any scalar field.
Tensor notation
The vorticity equation can be expressed in tensor notation using Einstein's summation convention and the Levi-Civita symbol ε_ijk:
\frac{D\omega_i}{Dt} = \omega_j \frac{\partial u_i}{\partial x_j} - \omega_i \frac{\partial u_j}{\partial x_j} + \frac{\epsilon_{ijk}}{\rho^{2}} \frac{\partial \rho}{\partial x_j} \frac{\partial p}{\partial x_k} + \epsilon_{ijk} \frac{\partial}{\partial x_j}\!\left(\frac{1}{\rho}\frac{\partial \tau_{km}}{\partial x_m}\right) + \epsilon_{ijk} \frac{\partial}{\partial x_j}\!\left(\frac{B_k}{\rho}\right).
In specific sciences
Atmospheric sciences
In the atmospheric sciences, the vorticity equation can be stated in terms of the absolute vorticity of air with respect to an inertial frame, or of the vorticity with respect to the rotation of the Earth. The absolute version is
Here, η is the polar (z) component of the vorticity, ρ is the atmospheric density, u, v, and w are the components of wind velocity, and ∇_h is the 2-dimensional (i.e. horizontal-component-only) del.
See also
Vorticity
Barotropic vorticity equation
Vortex stretching
Burgers vortex
References
Further reading
Equations of fluid dynamics
Transport phenomena | Vorticity equation | [
"Physics",
"Chemistry",
"Engineering"
] | 1,115 | [
"Transport phenomena",
"Physical phenomena",
"Equations of fluid dynamics",
"Equations of physics",
"Chemical engineering",
"Fluid dynamics"
] |
166,371 | https://en.wikipedia.org/wiki/Baroclinity | In fluid dynamics, the baroclinity (often called baroclinicity) of a stratified fluid is a measure of how misaligned the gradient of pressure is from the gradient of density in a fluid. In meteorology a baroclinic flow is one in which the density depends on both temperature and pressure (the fully general case). A simpler case, barotropic flow, allows for density dependence only on pressure, so that the curl of the pressure-gradient force vanishes.
Baroclinity is proportional to:
\nabla p \times \nabla \rho,
which is proportional to the sine of the angle between surfaces of constant pressure and surfaces of constant density. Thus, in a barotropic fluid (which is defined by zero baroclinity), these surfaces are parallel.
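The sine dependence is just the magnitude of the cross product of the two gradients:
\left|\nabla p \times \nabla \rho\right| = \left|\nabla p\right| \left|\nabla \rho\right| \sin\theta,
so the baroclinity vanishes exactly when the isobaric and isopycnic surfaces (whose normals are ∇p and ∇ρ) are parallel, i.e. θ = 0.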
In Earth's atmosphere, barotropic flow is a better approximation in the tropics, where density surfaces and pressure surfaces are both nearly level, whereas in higher latitudes the flow is more baroclinic. These midlatitude belts of high atmospheric baroclinity are characterized by the frequent formation of synoptic-scale cyclones, although these are not really dependent on the baroclinity term per se: for instance, they are commonly studied on pressure coordinate iso-surfaces where that term has no contribution to vorticity production.
Baroclinic instability
Baroclinic instability is a fluid dynamical instability of fundamental importance in the atmosphere and in the oceans. In the atmosphere it is the principal mechanism shaping the cyclones and anticyclones that dominate weather in mid-latitudes. In the ocean it generates a field of mesoscale eddies (100 km or smaller) that play various roles in oceanic dynamics and the transport of tracers.
Whether a fluid counts as rapidly rotating is determined in this context by the Rossby number, which is a measure of how close the flow is to solid body rotation. More precisely, a flow in solid body rotation has vorticity that is proportional to its angular velocity. The Rossby number is a measure of the departure of the vorticity from that of solid body rotation. The Rossby number must be small for the concept of baroclinic instability to be relevant. When the Rossby number is large, other kinds of instabilities, often referred to as inertial, become more relevant.
The simplest example of a stably stratified flow is an incompressible flow with density decreasing with height.
In a compressible gas such as the atmosphere, the relevant measure is the vertical gradient of the entropy, which must increase with height for the flow to be stably stratified.
The strength of the stratification is measured by asking how large the vertical shear of the horizontal winds has to be in order to destabilize the flow and produce the classic Kelvin–Helmholtz instability. This measure is called the Richardson number. When the Richardson number is large, the stratification is strong enough to prevent this shear instability.
Before the classic work of Jule Charney and Eric Eady on baroclinic instability in the late 1940s, most theories trying to explain the structure of mid-latitude eddies took as their starting points the high Rossby number or small Richardson number instabilities familiar to fluid dynamicists at that time. The most important feature of baroclinic instability is that it exists even in the situation of rapid rotation (small Rossby number) and strong stable stratification (large Richardson's number) typically observed in the atmosphere.
The energy source for baroclinic instability is the potential energy in the environmental flow. As the instability grows, the center of mass of the fluid is lowered.
In growing waves in the atmosphere, cold air moving downwards and equatorwards displaces the warmer air moving polewards and upwards.
Baroclinic instability can be investigated in the laboratory using a rotating, fluid filled annulus. The annulus is heated at the outer wall and cooled at the inner wall, and the resulting fluid flows give rise to baroclinically unstable waves.
The term "baroclinic" refers to the mechanism by which vorticity is generated. Vorticity is the curl of the velocity field. In general, the evolution of vorticity can be broken into contributions from advection (as vortex tubes move with the flow), stretching and twisting (as vortex tubes are pulled or twisted by the flow) and baroclinic vorticity generation, which occurs whenever there is a density gradient along surfaces of constant pressure. Baroclinic flows can be contrasted with barotropic flows in which density and pressure surfaces coincide and there is no baroclinic generation of vorticity.
The study of the evolution of these baroclinic instabilities as they grow and then decay is a crucial part of developing theories for the fundamental characteristics of midlatitude weather.
Baroclinic vector
Beginning with the equation of motion for a frictionless fluid (the Euler equations) and taking the curl, one arrives at the equation of motion for the curl of the fluid velocity, that is to say, the vorticity.
In a fluid that is not all of the same density, a source term appears in the vorticity equation whenever surfaces of constant density (isopycnic surfaces) and surfaces
of constant pressure (isobaric surfaces) are not aligned. The material derivative of the local vorticity is given by:
(where is the velocity and is the vorticity, is the pressure, and is the density). The baroclinic contribution is the vector:
This vector, sometimes called the solenoidal vector, is of interest both in compressible fluids and in incompressible (but inhomogeneous) fluids. Internal gravity waves as well as unstable Rayleigh–Taylor modes can be analyzed from the perspective of the baroclinic vector. It is also of interest in the creation of vorticity by the passage of shocks through inhomogeneous media, such as in the Richtmyer–Meshkov instability.
Experienced divers are familiar with the very slow waves that can be excited at a thermocline or a halocline, which are known as internal waves. Similar waves can be generated between a layer of water and a layer of oil. When the interface between these two surfaces is not horizontal and the system is close to hydrostatic equilibrium, the gradient of the pressure is vertical but the gradient of the density is not. Therefore the baroclinic vector is nonzero, and the sense of the baroclinic vector is to create vorticity to make the interface level out. In the process, the interface overshoots, and the result is an oscillation which is an internal gravity wave. Unlike surface gravity waves, internal gravity waves do not require a sharp interface. For example, in bodies of water, a gradual gradient in temperature or salinity is sufficient to support internal gravity waves driven by the baroclinic vector.
References
Bibliography
External links
Fluid dynamics
Atmospheric dynamics | Baroclinity | [
"Chemistry",
"Engineering"
] | 1,452 | [
"Piping",
"Chemical engineering",
"Atmospheric dynamics",
"Fluid dynamics"
] |
166,377 | https://en.wikipedia.org/wiki/Advection | In the field of physics, engineering, and earth sciences, advection is the transport of a substance or quantity by bulk motion of a fluid. The properties of that substance are carried with it. Generally the majority of the advected substance is also a fluid. The properties that are carried with the advected substance are conserved properties such as energy. An example of advection is the transport of pollutants or silt in a river by bulk water flow downstream. Another commonly advected quantity is energy or enthalpy. Here the fluid may be any material that contains thermal energy, such as water or air. In general, any substance or conserved, extensive quantity can be advected by a fluid that can hold or contain the quantity or substance.
During advection, a fluid transports some conserved quantity or material via bulk motion. The fluid's motion is described mathematically as a vector field, and the transported material is described by a scalar field showing its distribution over space. Advection requires currents in the fluid, and so cannot happen in rigid solids. It does not include transport of substances by molecular diffusion.
Advection is sometimes confused with the more encompassing process of convection, which is the combination of advective transport and diffusive transport.
In meteorology and physical oceanography, advection often refers to the transport of some property of the atmosphere or ocean, such as heat, humidity (see moisture) or salinity.
Advection is important for the formation of orographic clouds and the precipitation of water from clouds, as part of the hydrological cycle.
Mathematical description
The advection equation is a first-order hyperbolic partial differential equation that governs the motion of a conserved scalar field as it is advected by a known velocity vector field. It is derived using the scalar field's conservation law, together with Gauss's theorem, and taking the infinitesimal limit.
One easily visualized example of advection is the transport of ink dumped into a river. As the river flows, ink will move downstream in a "pulse" via advection, as the water's movement itself transports the ink. If added to a lake without significant bulk water flow, the ink would simply disperse outwards from its source in a diffusive manner, which is not advection. Note that as it moves downstream, the "pulse" of ink will also spread via diffusion. The sum of these processes is called convection.
The advection equation
The advection equation for a conserved quantity described by a scalar field ψ is expressed by a continuity equation:
\frac{\partial \psi}{\partial t} + \nabla \cdot (\psi\,\mathbf{u}) = 0,
where the vector field u is the flow velocity and ∇ is the del operator. If the flow is assumed to be incompressible then u is solenoidal, that is, the divergence is zero:
\nabla \cdot \mathbf{u} = 0,
and the above equation reduces to
\frac{\partial \psi}{\partial t} + \mathbf{u} \cdot \nabla \psi = 0.
In particular, if the flow is steady, then
\mathbf{u} \cdot \nabla \psi = 0,
which shows that ψ is constant along a streamline.
If a vector quantity a (such as a magnetic field) is being advected by the solenoidal velocity field u, the advection equation above becomes:
\frac{\partial \mathbf{a}}{\partial t} + (\mathbf{u} \cdot \nabla)\mathbf{a} = 0.
Here, a is a vector field instead of the scalar field ψ.
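For the simplest case of a constant velocity c in one dimension, the reduced equation above has the travelling-wave solution (a standard result, stated here as an added illustration):
\psi(x, t) = \psi_0(x - ct),
which is just the initial profile ψ₀ carried downstream at speed c, consistent with the "pulse of ink" picture described above.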
Solution
Solutions to the advection equation can be approximated using numerical methods, where interest typically centers on discontinuous "shock" solutions and necessary conditions for convergence (e.g. the CFL condition).
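As a minimal illustration of such a method (an added sketch, not from the original text), the following first-order upwind scheme advects a square pulse at constant speed on a periodic 1-D grid while keeping the CFL number c·Δt/Δx below 1; the grid size, speed, and CFL value are arbitrary choices.

```python
# First-order upwind scheme for  dpsi/dt + c dpsi/dx = 0  on a periodic grid.
import numpy as np

nx, c = 200, 1.0                 # grid points and advection speed (assumed)
dx = 1.0 / nx
dt = 0.8 * dx / c                # CFL number 0.8 (assumed)

x = np.arange(nx) * dx
psi = np.where(np.abs(x - 0.3) < 0.1, 1.0, 0.0)   # a square "pulse" of tracer

for _ in range(round(0.5 / (c * dt))):            # advect the pulse a distance of 0.5
    # upwind difference: information travels from the left when c > 0
    psi = psi - c * dt / dx * (psi - np.roll(psi, 1))

print("pulse centre is now near x =", x[np.argmax(psi)])
```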
Numerical simulation can be aided by considering the skew-symmetric form of advection
\tfrac{1}{2}\,\mathbf{u} \cdot \nabla \psi + \tfrac{1}{2}\,\nabla \cdot (\mathbf{u}\,\psi),
where the two terms are equal whenever the velocity field u is solenoidal (divergence-free), so that the advection operator is skew-symmetric.
Since skew symmetry implies only imaginary eigenvalues, this form reduces the "blow up" and "spectral blocking" often experienced in numerical solutions with sharp discontinuities.
Distinction between advection and convection
The term advection often serves as a synonym for convection, and this correspondence of terms is used in the literature. More technically, convection applies to the movement of a fluid (often due to density gradients created by thermal gradients), whereas advection is the movement of some material by the velocity of the fluid. Thus, although it might seem confusing, it is technically correct to think of momentum being advected by the velocity field in the Navier-Stokes equations, although the resulting motion would be considered to be convection. Because of the specific use of the term convection to indicate transport in association with thermal gradients, it is probably safer to use the term advection if one is uncertain about which terminology best describes their particular system.
Meteorology
In meteorology and physical oceanography, advection often refers to the horizontal transport of some property of the atmosphere or ocean, such as heat, humidity or salinity, and convection generally refers to vertical transport (vertical advection). Advection is important for the formation of orographic clouds (terrain-forced convection) and the precipitation of water from clouds, as part of the hydrological cycle.
Other quantities
The advection equation also applies if the quantity being advected is represented by a probability density function at each point, although accounting for diffusion is more difficult.
See also
Advection-diffusion equation
Atmosphere of Earth
Conservation equation
Courant–Friedrichs–Lewy condition
Kinematic wave
Overshoot (signal)
Péclet number
Radiation
Notes
References
Vector calculus
Atmospheric dynamics
Conservation equations
Equations of fluid dynamics
Oceanography
Convection
Heat transfer
Transport phenomena | Advection | [
"Physics",
"Chemistry",
"Mathematics",
"Engineering",
"Environmental_science"
] | 1,086 | [
"Physical phenomena",
"Chemical engineering",
"Mathematical objects",
"Convection",
"Thermodynamics",
"Transport phenomena",
"Fluid dynamics",
"Heat transfer",
"Equations of physics",
"Atmospheric dynamics",
"Symmetry",
"Physics theorems",
"Equations of fluid dynamics",
"Hydrology",
"App... |
166,380 | https://en.wikipedia.org/wiki/Natural%20history | Natural history is a domain of inquiry involving organisms, including animals, fungi, and plants, in their natural environment, leaning more towards observational than experimental methods of study. A person who studies natural history is called a naturalist or natural historian.
Natural history encompasses scientific research but is not limited to it. It involves the systematic study of any category of natural objects or organisms, so while it dates from studies in the ancient Greco-Roman world and the mediaeval Arabic world, through to European Renaissance naturalists working in near isolation, today's natural history is a cross-discipline umbrella of many specialty sciences; e.g., geobiology has a strong multidisciplinary nature.
Definitions
Before 1900
The meaning of the English term "natural history" (a calque of the Latin historia naturalis) has narrowed progressively with time, while, by contrast, the meaning of the related term "nature" has widened (see also History below).
In antiquity, "natural history" covered essentially anything connected with nature, or used materials drawn from nature, such as Pliny the Elder's encyclopedia of this title, published , which covers astronomy, geography, humans and their technology, medicine, and superstition, as well as animals and plants.
Medieval European academics considered knowledge to have two main divisions: the humanities (primarily what is now known as classics) and divinity, with science studied largely through texts rather than observation or experiment. The study of nature revived in the Renaissance, and quickly became a third branch of academic knowledge, itself divided into descriptive natural history and natural philosophy, the analytical study of nature. In modern terms, natural philosophy roughly corresponded to modern physics and chemistry, while natural history included the biological and geological sciences. The two were strongly associated. During the heyday of the gentleman scientists, many people contributed to both fields, and early papers in both were commonly read at professional science society meetings such as the Royal Society and the French Academy of Sciences—both founded during the 17th century.
Natural history had been encouraged by practical motives, such as Linnaeus' aspiration to improve the economic condition of Sweden. Similarly, the Industrial Revolution prompted the development of geology to help find useful mineral deposits.
Since 1900
Modern definitions of natural history come from a variety of fields and sources, and many of the modern definitions emphasize a particular aspect of the field, creating a plurality of definitions with a number of common themes among them. For example, while natural history is most often defined as a type of observation and a subject of study, it can also be defined as a body of knowledge, and as a craft or a practice, in which the emphasis is placed more on the observer than on the observed.
Definitions from biologists often focus on the scientific study of individual organisms in their environment, as seen in this definition by Marston Bates: "Natural history is the study of animals and plants—of organisms. ... I like to think, then, of natural history as the study of life at the level of the individual—of what plants and animals do, how they react to each other and their environment, how they are organized into larger groupings like populations and communities" and this more recent definition by D.S. Wilcove and T. Eisner: "The close observation of organisms—their origins, their evolution, their behavior, and their relationships with other species".
This focus on organisms in their environment is also echoed by H.W. Greene and J.B. Losos: "Natural history focuses on where organisms are and what they do in their environment, including interactions with other organisms. It encompasses changes in internal states insofar as they pertain to what organisms do".
Some definitions go further, focusing on direct observation of organisms in their environments, both past and present, such as this one by G.A. Bartholomew: "A student of natural history, or a naturalist, studies the world by observing plants and animals directly. Because organisms are functionally inseparable from the environment in which they live and because their structure and function cannot be adequately interpreted without knowing some of their evolutionary history, the study of natural history embraces the study of fossils as well as physiographic and other aspects of the physical environment".
A common thread in many definitions of natural history is the inclusion of a descriptive component, as seen in a recent definition by H.W. Greene: "Descriptive ecology and ethology". Several authors have argued for a more expansive view of natural history, including S. Herman, who defines the field as "the scientific study of plants and animals in their natural environments. It is concerned with levels of organization from the individual organism to the ecosystem, and stresses identification, life history, distribution, abundance, and inter-relationships. It often and appropriately includes an esthetic component", and T. Fleischner, who defines the field even more broadly, as "A practice of intentional, focused attentiveness and receptivity to the more-than-human world, guided by honesty and accuracy". These definitions explicitly include the arts in the field of natural history, and are aligned with the broad definition outlined by B. Lopez, who defines the field as the "Patient interrogation of a landscape" while referring to the natural history knowledge of the Eskimo (Inuit).
A slightly different framework for natural history, covering a similar range of themes, is also implied in the scope of work encompassed by many leading natural history museums, which often include elements of anthropology, geology, paleontology, and astronomy along with botany and zoology, or include both cultural and natural components of the world.
The plurality of definitions for this field has been recognized as both a weakness and a strength, and a range of definitions has been offered by practitioners in a recent collection of views on natural history.
History
Prehistory
Prior to the advent of Western science humans were engaged and highly competent in indigenous ways of understanding the more-than-human world that are now referred to as traditional ecological knowledge. 21st century definitions of natural history are inclusive of this understanding, such as this by Thomas Fleischner of the Natural History Institute (Prescott, Arizona):Natural history – a practice of intentional focused attentiveness and receptivity to the more-than-human world, guided by honesty and accuracy – is the oldest continuous human endeavor. In the evolutionary past of our species, the practice of natural history was essential for our survival, imparting critical information on habits and chronologies of plants and animals that we could eat or that could eat us. Natural history continues to be critical to human survival and thriving. It contributes to our fundamental understanding of how the world works by providing the empirical foundation of natural sciences, and it contributes directly and indirectly to human emotional and physical health, thereby fostering healthier human communities. It also serves as the basis for all conservation efforts, with natural history both informing the science and inspiring the values that drive these.
Ancient
As a precursor to Western science, natural history began with Aristotle and other ancient philosophers who analyzed the diversity of the natural world. Natural history was understood by Pliny the Elder to cover anything that could be found in the world, including living things, geology, astronomy, technology, art, and humanity.
De materia medica was written between 50 and 70 AD by Pedanius Dioscorides, a Roman physician of Greek origin. It was widely read for more than 1,500 years until supplanted in the Renaissance, making it one of the longest-lasting of all natural history books.
From the ancient Greeks until the work of Carl Linnaeus and other 18th-century naturalists, a major concept of natural history was the scala naturae or Great Chain of Being, an arrangement of minerals, vegetables, more primitive forms of animals, and more complex life forms on a linear scale of supposedly increasing perfection, culminating in our species.
Medieval
Natural history was basically static through the Middle Ages in Europe—although in the Arabic and Oriental world, it proceeded at a much brisker pace. From the 13th century, the work of Aristotle was adapted rather rigidly into Christian philosophy, particularly by Thomas Aquinas, forming the basis for natural theology. During the Renaissance, scholars (herbalists and humanists, particularly) returned to direct observation of plants and animals for natural history, and many began to accumulate large collections of exotic specimens and unusual monsters. Leonhart Fuchs was one of the three founding fathers of botany, along with Otto Brunfels and Hieronymus Bock. Other important contributors to the field were Valerius Cordus, Konrad Gesner, Frederik Ruysch, and Gaspard Bauhin. The rapid increase in the number of known organisms prompted many attempts at classifying and organizing species into taxonomic groups, culminating in the system of the Swedish naturalist Carl Linnaeus.
The British historian of Chinese science Joseph Needham calls Li Shizhen "the 'uncrowned king' of Chinese naturalists", and his Bencao gangmu "undoubtedly the greatest scientific achievement of the Ming". His works, translated into many languages, directly influenced many scholars and researchers.
Modern
A significant contribution to English natural history was made by parson-naturalists such as Gilbert White, William Kirby, John George Wood, and John Ray, who wrote about plants, animals, and other aspects of nature. Many of these men wrote about nature to make the natural theology argument for the existence or goodness of God. Since early modern times, however, a great number of women made contributions to natural history, particularly in the field of botany, be it as authors, collectors, or illustrators.
In modern Europe, professional disciplines such as botany, geology, mycology, palaeontology, physiology, and zoology were formed. Natural history, formerly the main subject taught by college science professors, was increasingly scorned by more specialized scientists and relegated to an "amateur" activity, rather than a part of science proper. In Victorian Scotland, the study of natural history was believed to contribute to good mental health. Particularly in Britain and the United States, this grew into specialist hobbies such as the study of birds, butterflies, seashells (malacology/conchology), beetles, and wildflowers; meanwhile, scientists tried to define a unified discipline of biology (though with only partial success, at least until the modern evolutionary synthesis). Still, the traditions of natural history continue to play a part in the study of biology, especially ecology (the study of natural systems involving living organisms and the inorganic components of the Earth's biosphere that support them), ethology (the scientific study of animal behavior), and evolutionary biology (the study of the relationships between life forms over very long periods of time), and re-emerges today as integrative organismal biology.
Amateur collectors and natural history entrepreneurs played an important role in building the world's large natural history collections, such as the Natural History Museum, London, and the National Museum of Natural History in Washington, DC.
Three of the greatest English naturalists of the 19th century, Henry Walter Bates, Charles Darwin, and Alfred Russel Wallace—who knew each other—each made natural history travels that took years, collected thousands of specimens, many of them new to science, and by their writings both advanced knowledge of "remote" parts of the world—the Amazon basin, the Galápagos Islands, and the Indonesian Archipelago, among others—and in so doing helped to transform biology from a descriptive to a theory-based science.
The understanding of "Nature" as "an organism and not as a mechanism" can be traced to the writings of Alexander von Humboldt (Prussia, 1769–1859). Humboldt's copious writings and research were seminal influences for Charles Darwin, Simón Bolívar, Henry David Thoreau, Ernst Haeckel, and John Muir.
Museums
Natural history museums, which evolved from cabinets of curiosities, played an important role in the emergence of professional biological disciplines and research programs. Particularly in the 19th century, scientists began to use their natural history collections as teaching tools for advanced students and the basis for their own morphological research.
Societies
The term "natural history" alone, or sometimes together with archaeology, forms the name of many national, regional, and local natural history societies that maintain records for animals (including birds (ornithology), insects (entomology) and mammals (mammalogy)), fungi (mycology), plants (botany), and other organisms. They may also have geological and microscopical sections.
Examples of these societies in Britain include the Natural History Society of Northumbria founded in 1829, London Natural History Society (1858), Birmingham Natural History Society (1859), British Entomological and Natural History Society founded in 1872, Glasgow Natural History Society, Manchester Microscopical and Natural History Society established in 1880, Whitby Naturalists' Club founded in 1913, Scarborough Field Naturalists' Society and the Sorby Natural History Society, Sheffield, founded in 1918. The growth of natural history societies was also spurred due to the growth of British colonies in tropical regions with numerous new species to be discovered. Many civil servants took an interest in their new surroundings, sending specimens back to museums in Britain. (See also: Indian natural history)
Societies in other countries include the American Society of Naturalists and Polish Copernicus Society of Naturalists.
Professional societies have recognized the importance of natural history and have initiated new sections in their journals specifically for natural history observations to support the discipline. These include "Natural History Field Notes" of Biotropica, "The Scientific Naturalist" of Ecology, "From the Field" of Waterbirds, and the "Natural History Miscellany section" of the American Naturalist.
Benefits of natural history
Natural history observations have contributed to scientific questioning and theory formation. In recent times such observations contribute to how conservation priorities are determined. Mental health benefits can ensue, as well, from regular and active observation of chosen components of nature, and these reach beyond the benefits derived from passively walking through natural areas.
See also
Evolutionary history of life
History of evolutionary thought
Naturalism (philosophy)
Nature documentary
Nature study
Nature writing
Russian naturalists
Timeline of natural history
Natural science
References
Further reading
Peter Anstey (2011), Two Forms of Natural History, Early Modern Experimental Philosophy.
Farber, Paul Lawrence (2000), Finding Order in Nature: The Naturalist Tradition from Linnaeus to E. O. Wilson. Johns Hopkins University Press: Baltimore.
Kohler, Robert E. (2002), Landscapes and Labscapes: Exploring the Lab-Field Border in Biology. University of Chicago Press: Chicago.
Mayr, Ernst. (1982), The Growth of Biological Thought: Diversity, Evolution, and Inheritance. The Belknap Press of Harvard University Press: Cambridge, Massachusetts.
Rainger, Ronald; Keith R. Benson; and Jane Maienschein (eds) (1988), The American Development of Biology. University of Pennsylvania Press: Philadelphia.
External links
A History of the Ecological Sciences by Frank N. Egerton
The Cambridge natural history, Vol. 07 (of 10), London: Macmillan and Co., 1904
History of biology
History of Earth science
History of science | Natural history | [
"Technology"
] | 3,102 | [
"History of science",
"History of science and technology"
] |
166,404 | https://en.wikipedia.org/wiki/First%20law%20of%20thermodynamics | The first law of thermodynamics is a formulation of the law of conservation of energy in the context of thermodynamic processes. The law distinguishes two principal forms of energy transfer, heat and thermodynamic work, that modify a thermodynamic system containing a constant amount of matter. The law also defines the internal energy of a system, an extensive property for taking account of the balance of heat and work in the system. Energy cannot be created or destroyed, but it can be transformed from one form to another. In an isolated system the sum of all forms of energy is constant.
An equivalent statement is that perpetual motion machines of the first kind are impossible; work done by a system on its surroundings requires that the system's internal energy be consumed, so that the amount of internal energy lost by that work must be resupplied as heat by an external energy source or as work by an external machine acting on the system to sustain the work of the system continuously.
The ideal isolated system, of which the entire universe is an example, is often only used as a model. Many systems in practical applications require the consideration of internal chemical or nuclear reactions, as well as transfers of matter into or out of the system. For such considerations, thermodynamics also defines the concept of open systems, closed systems, and other types.
Definition
For thermodynamic processes of energy transfer without transfer of matter, the first law of thermodynamics is often expressed by the algebraic sum of contributions to the internal energy, U, from all work, W, done on or by the system, and the quantity of heat, Q, supplied or withdrawn from the system. The historical sign convention for the terms has been that heat supplied to the system is positive, but work done by the system is subtracted. This was the convention of Rudolf Clausius, so that a change in the internal energy, ΔU, is written
ΔU = Q − W.
Modern formulations, such as by Max Planck, and by IUPAC, often replace the subtraction with addition, and consider all net energy transfers to the system as positive and all net energy transfers from the system as negative, irrespective of the use of the system, for example as an engine.
When a system expands in an isobaric process, the thermodynamic work, W, done by the system on the surroundings is the product, P ΔV, of system pressure, P, and system volume change, ΔV, whereas −P ΔV is said to be the thermodynamic work done on the system by the surroundings. The change in internal energy of the system is:
ΔU = Q − P ΔV,
where Q denotes the quantity of heat supplied to the system from its surroundings.
Work and heat express physical processes of supply or removal of energy, while the internal energy U is a mathematical abstraction that keeps account of the changes of energy that befall the system. The term Q is the quantity of energy added or removed as heat in the thermodynamic sense, not referring to a form of energy within the system. Likewise, W denotes the quantity of energy gained or lost through thermodynamic work. Internal energy is a property of the system, while work and heat describe the process, not the system. Thus, a given internal energy change, ΔU, can be achieved by different combinations of heat and work. Heat and work are said to be path dependent, while change in internal energy depends only on the initial and final states of the system, not on the path between. Thermodynamic work is measured by change in the system, and is not necessarily the same as work measured by forces and distances in the surroundings, though, ideally, such can sometimes be arranged; this distinction is noted in the term 'isochoric work', at constant system volume, with ΔV = 0, which is not a form of thermodynamic work.
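As a minimal numerical sketch of these conventions (the figures and variable names below are illustrative assumptions, not drawn from the sources of this article), consider a gas that absorbs 500 J of heat while expanding by 0.001 m3 against a constant pressure of 100 kPa; both sign conventions give the same change in internal energy, about 400 J:

```python
# Minimal sketch of the two sign conventions for the first law.
# All numbers (heat input, pressure, volume change) are illustrative assumptions.

Q = 500.0             # J, heat supplied to the system
P = 100e3             # Pa, constant pressure during the isobaric expansion
dV = 1.0e-3           # m^3, increase in system volume

W_by_system = P * dV  # J, thermodynamic work done by the system on the surroundings

# Clausius convention: work done by the system is subtracted.
dU_clausius = Q - W_by_system

# IUPAC convention: every net transfer to the system counts as positive,
# so the work term is the work done on the system, the negative of the above.
dU_iupac = Q + (-W_by_system)

print(dU_clausius, dU_iupac)  # both print 400.0 (J); only the bookkeeping differs
```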
History
In the first half of the eighteenth century, French philosopher and mathematician Émilie du Châtelet made notable contributions to the emerging theoretical framework of energy, for example by emphasising Leibniz's concept of vis viva, mv², as distinct from Newton's momentum, mv.
Empirical developments of the early ideas, in the century following, wrestled with contravening concepts such as the caloric theory of heat.
In the few years of his life (1796–1832) after the 1824 publication of his book Reflections on the Motive Power of Fire, Sadi Carnot came to understand that the caloric theory of heat was restricted to mere calorimetry, and that heat and "motive power" are interconvertible. This is known only from his posthumously published notes. He wrote:
At that time, the concept of mechanical work had not been formulated. Carnot was aware that heat could be produced by friction and by percussion, as forms of dissipation of "motive power". As late as 1847, Lord Kelvin believed in the caloric theory of heat, being unaware of Carnot's notes.
In 1840, Germain Hess stated a conservation law (Hess's law) for the heat of reaction during chemical transformations. This law was later recognized as a consequence of the first law of thermodynamics, but Hess's statement was not explicitly concerned with the relation between energy exchanges by heat and work.
In 1842, Julius Robert von Mayer made a statement that was rendered by Clifford Truesdell (1980) as "in a process at constant pressure, the heat used to produce expansion is universally interconvertible with work", but this is not a general statement of the first law, for it does not express the concept of the thermodynamic state variable, the internal energy. Also in 1842, Mayer measured a temperature rise caused by friction in a body of paper pulp. This was near the time of the 1842–1845 work of James Prescott Joule, measuring the mechanical equivalent of heat. In 1845, Joule published a paper entitled The Mechanical Equivalent of Heat, in which he specified a numerical value for the amount of mechanical work required to "produce a unit of heat", based on heat production by friction in the passage of electricity through a resistor and in the rotation of a paddle in a vat of water.
The first full statements of the law came in 1850 from Rudolf Clausius, and from William Rankine. Some scholars consider Rankine's statement less distinct than that of Clausius.
Original statements: the "thermodynamic approach"
The original 19th-century statements of the first law appeared in a conceptual framework in which transfer of energy as heat was taken as a primitive notion, defined by calorimetry. It was presupposed as logically prior to the theoretical development of thermodynamics. Jointly primitive with this notion of heat were the notions of empirical temperature and thermal equilibrium. This framework also took as primitive the notion of transfer of energy as work. This framework did not presume a concept of energy in general, but regarded it as derived or synthesized from the prior notions of heat and work. By one author, this framework has been called the "thermodynamic" approach.
The first explicit statement of the first law of thermodynamics, by Rudolf Clausius in 1850, referred to cyclic thermodynamic processes, and to the existence of a function of state of the system, the internal energy. He expressed it in terms of a differential equation for the increments of a thermodynamic process. This equation may be described as follows:
Reflecting the experimental work of Mayer and of Joule, Clausius wrote:
Because of its definition in terms of increments, the value of the internal energy of a system is not uniquely defined. It is defined only up to an arbitrary additive constant of integration, which can be adjusted to give arbitrary reference zero levels. This non-uniqueness is in keeping with the abstract mathematical nature of the internal energy. The internal energy is customarily stated relative to a conventionally chosen standard reference state of the system.
The concept of internal energy is considered by Bailyn to be of "enormous interest". Its quantity cannot be immediately measured, but can only be inferred, by differencing actual immediate measurements. Bailyn likens it to the energy states of an atom, that were revealed by Bohr's energy relation hν = En − Em. In each case, an unmeasurable quantity (the internal energy, the atomic energy level) is revealed by considering the difference of measured quantities (increments of internal energy, quantities of emitted or absorbed radiative energy).
Conceptual revision: the "mechanical approach"
In 1907, George H. Bryan wrote about systems between which there is no transfer of matter (closed systems): "Definition. When energy flows from one system or part of a system to another otherwise than by the performance of mechanical work, the energy so transferred is called heat." This definition may be regarded as expressing a conceptual revision, as follows. This reinterpretation was systematically expounded in 1909 by Constantin Carathéodory, whose attention had been drawn to it by Max Born. Largely through Born's influence, this revised conceptual approach to the definition of heat came to be preferred by many twentieth-century writers. It might be called the "mechanical approach".
Energy can also be transferred from one thermodynamic system to another in association with transfer of matter. Born points out that in general such energy transfer is not resolvable uniquely into work and heat moieties. In general, when there is transfer of energy associated with matter transfer, work and heat transfers can be distinguished only when they pass through walls physically separate from those for matter transfer.
The "mechanical" approach postulates the law of conservation of energy. It also postulates that energy can be transferred from one thermodynamic system to another adiabatically as work, and that energy can be held as the internal energy of a thermodynamic system. It also postulates that energy can be transferred from one thermodynamic system to another by a path that is non-adiabatic, and is unaccompanied by matter transfer. Initially, it "cleverly" (according to Martin Bailyn) refrains from labelling as 'heat' such non-adiabatic, unaccompanied transfer of energy. It rests on the primitive notion of walls, especially adiabatic walls and non-adiabatic walls, defined as follows. Temporarily, only for purpose of this definition, one can prohibit transfer of energy as work across a wall of interest. Then walls of interest fall into two classes, (a) those such that arbitrary systems separated by them remain independently in their own previously established respective states of internal thermodynamic equilibrium; they are defined as adiabatic; and (b) those without such independence; they are defined as non-adiabatic.
This approach derives the notions of transfer of energy as heat, and of temperature, as theoretical developments, not taking them as primitives. It regards calorimetry as a derived theory. It has an early origin in the nineteenth century, for example in the work of Hermann von Helmholtz, but also in the work of many others.
Conceptually revised statement, according to the mechanical approach
The revised statement of the first law postulates that a change in the internal energy of a system due to any arbitrary process, that takes the system from a given initial thermodynamic state to a given final equilibrium thermodynamic state, can be determined through the physical existence, for those given states, of a reference process that occurs purely through stages of adiabatic work.
The revised statement is then
For a closed system, in any arbitrary process of interest that takes it from an initial to a final state of internal thermodynamic equilibrium, the change of internal energy is the same as that for a reference adiabatic work process that links those two states. This is so regardless of the path of the process of interest, and regardless of whether it is an adiabatic or a non-adiabatic process. The reference adiabatic work process may be chosen arbitrarily from amongst the class of all such processes.
This statement is much less close to the empirical basis than are the original statements, but is often regarded as conceptually parsimonious in that it rests only on the concepts of adiabatic work and of non-adiabatic processes, not on the concepts of transfer of energy as heat and of empirical temperature that are presupposed by the original statements. Largely through the influence of Max Born, it is often regarded as theoretically preferable because of this conceptual parsimony. Born particularly observes that the revised approach avoids thinking in terms of what he calls the "imported engineering" concept of heat engines.
Basing his thinking on the mechanical approach, Born in 1921, and again in 1949, proposed to revise the definition of heat. In particular, he referred to the work of Constantin Carathéodory, who had in 1909 stated the first law without defining quantity of heat. Born's definition was specifically for transfers of energy without transfer of matter, and it has been widely followed in textbooks. Born observes that a transfer of matter between two systems is accompanied by a transfer of internal energy that cannot be resolved into heat and work components. There can be pathways to other systems, spatially separate from that of the matter transfer, that allow heat and work transfer independent of and simultaneous with the matter transfer. Energy is conserved in such transfers.
Description
Cyclic processes
The first law of thermodynamics for a closed system was expressed in two ways by Clausius. One way referred to cyclic processes and the inputs and outputs of the system, but did not refer to increments in the internal state of the system. The other way referred to an incremental change in the internal state of the system, and did not expect the process to be cyclic.
A cyclic process is one that can be repeated indefinitely often, returning the system to its initial state. Of particular interest for a single cycle of a cyclic process are the net work done, and the net heat taken in (or 'consumed', in Clausius' statement), by the system.
In a cyclic process in which the system does net work on its surroundings, it is observed to be physically necessary not only that heat be taken into the system, but also, importantly, that some heat leave the system. The difference is the heat converted by the cycle into work. In each repetition of a cyclic process, the net work done by the system, measured in mechanical units, is proportional to the heat consumed, measured in calorimetric units.
The constant of proportionality is universal and independent of the system and in 1845 and 1847 was measured by James Joule, who described it as the mechanical equivalent of heat.
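As a rough arithmetic illustration of this proportionality (the figures are invented for illustration and are not Joule's data): if, in one cycle, a system takes in 1000 cal of heat and rejects 800 cal, the heat consumed is 200 cal; with the modern value of the mechanical equivalent of heat, about 4.186 J/cal, the net work done by the system in that cycle is about 837 J.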
Various statements of the law for closed systems
The law is of great importance and generality and is consequently thought of from several points of view. Most careful textbook statements of the law express it for closed systems. It is stated in several ways, sometimes even by the same author.
For the thermodynamics of closed systems, the distinction between transfers of energy as work and as heat is central and is within the scope of the present article. For the thermodynamics of open systems, such a distinction is beyond the scope of the present article, but some limited comments are made on it in the section below headed 'First law of thermodynamics for open systems'.
There are two main ways of stating a law of thermodynamics, physically or mathematically. They should be logically coherent and consistent with one another.
An example of a physical statement is that of Planck (1897/1903):
It is in no way possible, either by mechanical, thermal, chemical, or other devices, to obtain perpetual motion, i.e. it is impossible to construct an engine which will work in a cycle and produce continuous work, or kinetic energy, from nothing.
This physical statement is restricted neither to closed systems nor to systems with states that are strictly defined only for thermodynamic equilibrium; it has meaning also for open systems and for systems with states that are not in thermodynamic equilibrium.
An example of a mathematical statement is that of Crawford (1963):
For a given system we let Ekin denote large-scale mechanical (kinetic) energy, Epot large-scale potential energy, and Etot total energy. The first two quantities are specifiable in terms of appropriate mechanical variables, and by definition
Etot = Ekin + Epot + U.
For any finite process, whether reversible or irreversible,
ΔEtot = ΔEkin + ΔEpot + ΔU.
The first law in a form that involves the principle of conservation of energy more generally is
ΔEtot = Q + W.
Here Q and W are heat and work added, with no restrictions as to whether the process is reversible, quasistatic, or irreversible.[Warner, Am. J. Phys., 29, 124 (1961)]
This statement by Crawford, for W, uses the sign convention of IUPAC, not that of Clausius. Though it does not explicitly say so, this statement refers to closed systems. Internal energy is evaluated for bodies in states of thermodynamic equilibrium, which possess well-defined temperatures, relative to a reference state.
The history of statements of the law for closed systems has two main periods, before and after the work of George H. Bryan (1907), of Carathéodory (1909), and the approval of Carathéodory's work given by Born (1921). The earlier traditional versions of the law for closed systems are nowadays often considered to be out of date.
Carathéodory's celebrated presentation of equilibrium thermodynamics refers to closed systems, which are allowed to contain several phases connected by internal walls of various kinds of impermeability and permeability (explicitly including walls that are permeable only to heat). Carathéodory's 1909 version of the first law of thermodynamics was stated in an axiom which refrained from defining or mentioning temperature or quantity of heat transferred. That axiom stated that the internal energy of a phase in equilibrium is a function of state, that the sum of the internal energies of the phases is the total internal energy of the system, and that the value of the total internal energy of the system is changed by the amount of work done adiabatically on it, considering work as a form of energy. That article considered this statement to be an expression of the law of conservation of energy for such systems. This version is nowadays widely accepted as authoritative, but is stated in slightly varied ways by different authors.
Such statements of the first law for closed systems assert the existence of internal energy as a function of state defined in terms of adiabatic work. Thus heat is not defined calorimetrically or as due to temperature difference. It is defined as a residual difference between change of internal energy and work done on the system, when that work does not account for the whole of the change of internal energy and the system is not adiabatically isolated.
The 1909 Carathéodory statement of the law in axiomatic form does not mention heat or temperature, but the equilibrium states to which it refers are explicitly defined by variable sets that necessarily include "non-deformation variables", such as pressures, which, within reasonable restrictions, can be rightly interpreted as empirical temperatures, and the walls connecting the phases of the system are explicitly defined as possibly impermeable to heat or permeable only to heat.
According to A. Münster (1970), "A somewhat unsatisfactory aspect of Carathéodory's theory is that a consequence of the Second Law must be considered at this point [in the statement of the first law], i.e. that it is not always possible to reach any state 2 from any other state 1 by means of an adiabatic process." Münster instances that no adiabatic process can reduce the internal energy of a system at constant volume. Carathéodory's paper asserts that its statement of the first law corresponds exactly to Joule's experimental arrangement, regarded as an instance of adiabatic work. It does not point out that Joule's experimental arrangement performed essentially irreversible work, through friction of paddles in a liquid, or passage of electric current through a resistance inside the system, driven by motion of a coil and inductive heating, or by an external current source, which can access the system only by the passage of electrons, and so is not strictly adiabatic, because electrons are a form of matter, which cannot penetrate adiabatic walls. The paper goes on to base its main argument on the possibility of quasi-static adiabatic work, which is essentially reversible. The paper asserts that it will avoid reference to Carnot cycles, and then proceeds to base its argument on cycles of forward and backward quasi-static adiabatic stages, with isothermal stages of zero magnitude.
Sometimes the concept of internal energy is not made explicit in the statement.
Sometimes the existence of the internal energy is made explicit but work is not explicitly mentioned in the statement of the first postulate of thermodynamics. Heat supplied is then defined as the residual change in internal energy after work has been taken into account, in a non-adiabatic process.
A respected modern author states the first law of thermodynamics as "Heat is a form of energy", which explicitly mentions neither internal energy nor adiabatic work. Heat is defined as energy transferred by thermal contact with a reservoir, which has a temperature, and is generally so large that addition and removal of heat do not alter its temperature. A current student text on chemistry defines heat thus: "heat is the exchange of thermal energy between a system and its surroundings caused by a temperature difference." The author then explains how heat is defined or measured by calorimetry, in terms of heat capacity, specific heat capacity, molar heat capacity, and temperature.
A respected text disregards Carathéodory's exclusion of mention of heat from the statement of the first law for closed systems, and admits heat calorimetrically defined along with work and internal energy. Another respected text defines heat exchange as determined by temperature difference, but also mentions that the Born (1921) version is "completely rigorous". These versions follow the traditional approach that is now considered out of date, exemplified by that of Planck (1897/1903).
Evidence for the first law of thermodynamics for closed systems
The first law of thermodynamics for closed systems was originally induced from empirically observed evidence, including calorimetric evidence. It is nowadays, however, taken to provide the definition of heat via the law of conservation of energy and the definition of work in terms of changes in the external parameters of a system. The original discovery of the law was gradual over a period of perhaps half a century or more, and some early studies were in terms of cyclic processes.
The following is an account in terms of changes of state of a closed system through compound processes that are not necessarily cyclic. This account first considers processes for which the first law is easily verified because of their simplicity, namely adiabatic processes (in which there is no transfer as heat) and adynamic processes (in which there is no transfer as work).
Adiabatic processes
In an adiabatic process, there is transfer of energy as work but not as heat. For every adiabatic process that takes a system from a given initial state to a given final state, irrespective of how the work is done, the respective eventual total quantities of energy transferred as work are one and the same, determined just by the given initial and final states. The work done on the system is defined and measured by changes in mechanical or quasi-mechanical variables external to the system. Physically, adiabatic transfer of energy as work requires the existence of adiabatic enclosures.
For instance, in Joule's experiment, the initial system is a tank of water with a paddle wheel inside. If we isolate the tank thermally, and move the paddle wheel with a pulley and a weight, we can relate the increase in temperature with the distance descended by the mass. Next, the system is returned to its initial state, isolated again, and the same amount of work is done on the tank using different devices (an electric motor, a chemical battery, a spring,...). In every case, the amount of work can be measured independently. The return to the initial state is not conducted by doing adiabatic work on the system. The evidence shows that the final state of the water (in particular, its temperature and volume) is the same in every case. It is irrelevant if the work is electrical, mechanical, chemical,... or if done suddenly or slowly, as long as it is performed in an adiabatic way, that is to say, without heat transfer into or out of the system.
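To put rough numbers on such an experiment (the figures below are assumptions chosen for illustration, not Joule's measured values), a sketch of the energy balance might look like this:

```python
# Rough sketch of the paddle-wheel energy balance in a thermally isolated tank.
# The mass, drop height, water mass, and heat capacity are illustrative assumptions.
g = 9.81            # m/s^2, gravitational acceleration
m_weight = 10.0     # kg, descending mass that drives the paddle wheel
h = 5.0             # m, distance descended by the mass
m_water = 1.0       # kg of water in the tank
c_water = 4186.0    # J/(kg K), specific heat capacity of liquid water

W_adiabatic = m_weight * g * h          # J, work done on the water (~490 J)
dT = W_adiabatic / (m_water * c_water)  # K, resulting temperature rise (~0.12 K)
print(W_adiabatic, dT)
```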
Evidence of this kind shows that to increase the temperature of the water in the tank, the qualitative kind of adiabatically performed work does not matter. No qualitative kind of adiabatic work has ever been observed to decrease the temperature of the water in the tank.
A change from one state to another, for example an increase of both temperature and volume, may be conducted in several stages, for example by externally supplied electrical work on a resistor in the body, and adiabatic expansion allowing the body to do work on the surroundings. It needs to be shown that the time order of the stages, and their relative magnitudes, does not affect the amount of adiabatic work that needs to be done for the change of state. According to one respected scholar: "Unfortunately, it does not seem that experiments of this kind have ever been carried out carefully. ... We must therefore admit that the statement which we have enunciated here, and which is equivalent to the first law of thermodynamics, is not well founded on direct experimental evidence." Another expression of this view is "no systematic precise experiments to verify this generalization directly have ever been attempted".
This kind of evidence, of independence of sequence of stages, combined with the above-mentioned evidence, of independence of qualitative kind of work, would show the existence of an important state variable that corresponds with adiabatic work, but not that such a state variable represented a conserved quantity. For the latter, another step of evidence is needed, which may be related to the concept of reversibility, as mentioned below.
That important state variable was first recognized and denoted U by Clausius in 1850, but he did not then name it, and he defined it in terms not only of work but also of heat transfer in the same process. It was also independently recognized in 1850 by Rankine, who also denoted it U; and in 1851 by Kelvin who then called it "mechanical energy", and later "intrinsic energy". In 1865, after some hesitation, Clausius began calling his state function "energy". In 1882 it was named as the internal energy by Helmholtz. If only adiabatic processes were of interest, and heat could be ignored, the concept of internal energy would hardly arise or be needed. The relevant physics would be largely covered by the concept of potential energy, as was intended in the 1847 paper of Helmholtz on the principle of conservation of energy, though that did not deal with forces that cannot be described by a potential, and thus did not fully justify the principle. Moreover, that paper was critical of the early work of Joule that had by then been performed. A great merit of the internal energy concept is that it frees thermodynamics from a restriction to cyclic processes, and allows a treatment in terms of thermodynamic states.
In an adiabatic process, adiabatic work takes the system either from a reference state O with internal energy U(O) to an arbitrary one A with internal energy U(A), or from the state A to the state O:
U(A) = U(O) + W(O→A) or U(O) = U(A) + W(A→O),
where W(X→Y) denotes the work done adiabatically on the system in taking it from the state X to the state Y.
Except under the special, and strictly speaking, fictional, condition of reversibility, only one of the processes, O→A or A→O, is empirically feasible by a simple application of externally supplied work. The reason for this is given as the second law of thermodynamics and is not considered in the present article.
The fact of such irreversibility may be dealt with in two main ways, according to different points of view:
Since the work of Bryan (1907), the most accepted way to deal with it nowadays, followed by Carathéodory, is to rely on the previously established concept of quasi-static processes (Planck, M. (1897/1903), Section 71, p. 52), as follows. Actual physical processes of transfer of energy as work are always at least to some degree irreversible. The irreversibility is often due to mechanisms known as dissipative, that transform bulk kinetic energy into internal energy. Examples are friction and viscosity. If the process is performed more slowly, the frictional or viscous dissipation is less. In the limit of infinitely slow performance, the dissipation tends to zero and then the limiting process, though fictional rather than actual, is notionally reversible, and is called quasi-static. Throughout the course of the fictional limiting quasi-static process, the internal intensive variables of the system are equal to the external intensive variables, those that describe the reactive forces exerted by the surroundings. This can be taken to justify the formula
W(A→B) = U(B) − U(A),   (1)
where W(A→B) denotes the quasi-static adiabatic work done on the system in taking it from the state A to the state B.
Another way to deal with it is to allow that experiments with processes of heat transfer to or from the system may be used to justify the formula (1) above. Moreover, it deals to some extent with the problem of lack of direct experimental evidence that the time order of stages of a process does not matter in the determination of internal energy. This way does not provide theoretical purity in terms of adiabatic work processes, but is empirically feasible, and is in accord with experiments actually done, such as the Joule experiments mentioned just above, and with older traditions.
The formula (1) above allows that, to go by processes of quasi-static adiabatic work from the state A to the state B, we can take a path that goes through the reference state O, since the quasi-static adiabatic work is independent of the path:
W(A→B) = W(A→O) + W(O→B) = U(B) − U(A),
where each W denotes quasi-static adiabatic work done on the system.
This kind of empirical evidence, coupled with theory of this kind, largely justifies the following statement:
For all adiabatic processes between two specified states of a closed system of any nature, the net work done is the same regardless of the details of the process, and determines a state function called internal energy, U.
Adynamic processes
A complementary observable aspect of the first law is about heat transfer. Adynamic transfer of energy as heat can be measured empirically by changes in the surroundings of the system of interest by calorimetry. This again requires the existence of adiabatic enclosure of the entire process, system and surroundings, though the separating wall between the surroundings and the system is thermally conductive or radiatively permeable, not adiabatic. A calorimeter can rely on measurement of sensible heat, which requires the existence of thermometers and measurement of temperature change in bodies of known sensible heat capacity under specified conditions; or it can rely on the measurement of latent heat, through measurement of masses of material that change phase, at temperatures fixed by the occurrence of phase changes under specified conditions in bodies of known latent heat of phase change. The calorimeter can be calibrated by transferring an externally determined amount of heat into it, for instance from a resistive electrical heater inside the calorimeter through which a precisely known electric current is passed at a precisely known voltage for a precisely measured period of time. The calibration allows comparison of calorimetric measurement of quantity of heat transferred with quantity of energy transferred as (surroundings-based) work. According to one textbook, "The most common device for measuring ΔU is an adiabatic bomb calorimeter." According to another textbook, "Calorimetry is widely used in present day laboratories." According to one opinion, "Most thermodynamic data come from calorimetry...".
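A sketch of such an electrical calibration (the current, voltage, duration, and heat capacity below are illustrative assumptions, not values from the cited textbooks):

```python
# Sketch of calibrating a calorimeter with a resistive electrical heater.
I = 0.5          # A, measured current through the heater
V = 12.0         # V, measured voltage across the heater
t = 120.0        # s, duration for which the current is passed

Q_delivered = V * I * t        # J, externally determined energy delivered (720 J)

C_cal = 600.0                  # J/K, assumed heat capacity of the calorimeter and contents
dT_expected = Q_delivered / C_cal   # K, expected temperature rise (1.2 K)
print(Q_delivered, dT_expected)
```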
When the system evolves with transfer of energy as heat, without energy being transferred as work, in an adynamic process, the heat transferred to the system is equal to the increase in its internal energy:
ΔU = Q.
General case for reversible processes
Heat transfer is practically reversible when it is driven by practically negligibly small temperature gradients. Work transfer is practically reversible when it occurs so slowly that there are no frictional effects within the system; frictional effects outside the system should also be zero if the process is to be reversible in the strict thermodynamic sense. For a particular reversible process in general, the work done reversibly on the system, W, and the heat transferred reversibly to the system, Q, are not required to occur respectively adiabatically or adynamically, but they must belong to the same particular process defined by its particular reversible path through the space of thermodynamic states. Then the work and heat transfers can occur and be calculated simultaneously.
Putting the two complementary aspects together, the first law for a particular reversible process can be written
ΔU = Q + W,
with Q the heat transferred reversibly to the system and W the work done reversibly on the system along that path.
This combined statement is the expression of the first law of thermodynamics for reversible processes for closed systems.
In particular, if no work is done on a thermally isolated closed system we have
ΔU = 0.
This is one aspect of the law of conservation of energy and can be stated:
The internal energy of an isolated system remains constant.
General case for irreversible processes
If, in a process of change of state of a closed system, the energy transfer is not under a practically zero temperature gradient, practically frictionless, and with nearly balanced forces, then the process is irreversible. Then the heat and work transfers may be difficult to calculate with high accuracy, although the simple equations for reversible processes still hold to a good approximation in the absence of composition changes. Importantly, the first law still holds and provides a check on the measurements and calculations of the work done irreversibly on the system, W, and the heat transferred irreversibly to the system, Q, which belong to the same particular process defined by its particular irreversible path through the space of thermodynamic states.
This means that the internal energy is a function of state and that the internal energy change between two states is a function only of the two states.
Overview of the weight of evidence for the law
The first law of thermodynamics is so general that its predictions cannot all be directly tested. In many properly conducted experiments it has been precisely supported, and never violated. Indeed, within its scope of applicability, the law is so reliably established, that, nowadays, rather than experiment being considered as testing the accuracy of the law, it is more practical and realistic to think of the law as testing the accuracy of experiment. An experimental result that seems to violate the law may be assumed to be inaccurate or wrongly conceived, for example due to failure to account for an important physical factor. Thus, some may regard it as a principle more abstract than a law.
State functional formulation for infinitesimal processes
When the heat and work transfers in the equations above are infinitesimal in magnitude, they are often denoted by the inexact differentials δQ and δW, rather than by exact differentials, denoted dU, as a reminder that heat and work do not describe the state of any system. The integral of an inexact differential depends upon the particular path taken through the space of thermodynamic parameters while the integral of an exact differential depends only upon the initial and final states. If the initial and final states are the same, then the integral of an inexact differential may or may not be zero, but the integral of an exact differential is always zero. The path taken by a thermodynamic system through a chemical or physical change is known as a thermodynamic process.
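The path dependence of heat and work, and the path independence of the change in internal energy, can be checked numerically in a simple model. The sketch below assumes one mole of a monatomic ideal gas and two quasi-static paths between the same pair of end states; the states, paths, and numbers are illustrative assumptions, not taken from the sources of this article:

```python
# Path dependence of Q and W versus path independence of the change in internal energy,
# for one mole of a monatomic ideal gas (illustrative assumption).
import math

R = 8.314          # J/(mol K), gas constant
Cv = 1.5 * R       # J/(mol K), molar heat capacity at constant volume (monatomic ideal gas)
n = 1.0            # mol

T1, V1 = 300.0, 0.010   # initial state: 300 K, 10 L
T2, V2 = 400.0, 0.020   # final state: 400 K, 20 L

dU = n * Cv * (T2 - T1)  # J; for an ideal gas, U depends on temperature only

# Path A: isothermal expansion at T1 from V1 to V2, then isochoric heating to T2.
W_A = n * R * T1 * math.log(V2 / V1)   # J, work done by the gas (isothermal step only)
Q_A = dU + W_A                         # J, heat taken in (Clausius convention: Q = dU + W)

# Path B: isochoric heating to T2 first, then isothermal expansion at T2.
W_B = n * R * T2 * math.log(V2 / V1)
Q_B = dU + W_B

print(dU)        # same on both paths (~1247 J)
print(W_A, Q_A)  # ~1729 J, ~2976 J
print(W_B, Q_B)  # ~2305 J, ~3552 J
```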
The first law for a closed homogeneous system may be stated in terms that include concepts that are established in the second law. The internal energy may then be expressed as a function of the system's defining state variables S, entropy, and V, volume: U = U(S, V). In these terms, T, the system's temperature, and P, its pressure, are partial derivatives of U with respect to S and V. These variables are important throughout thermodynamics, though not necessary for the statement of the first law. Rigorously, they are defined only when the system is in its own state of internal thermodynamic equilibrium. For some purposes, the concepts provide good approximations for scenarios sufficiently near to the system's internal thermodynamic equilibrium.
The first law requires that:
dU = δQ + δW,
with δQ the heat supplied to, and δW the work done on, the system in the process.
Then, for the fictive case of a reversible process, dU can be written in terms of exact differentials. One may imagine reversible changes, such that there is at each instant negligible departure from thermodynamic equilibrium within the system and between system and surroundings. Then, mechanical work is given by δW = −P dV and the quantity of heat added can be expressed as δQ = T dS. For these conditions
dU = T dS − P dV.
While this has been shown here for reversible changes, it is valid more generally in the absence of chemical reactions or phase transitions, as U can be considered as a thermodynamic state function of the defining state variables S and V:
dU = T dS − P dV.   (2)
Equation (2) is known as the fundamental thermodynamic relation for a closed system in the energy representation, for which the defining state variables are S and V, with respect to which T and −P are partial derivatives of U. It is only in the reversible case or for a quasistatic process without composition change that the work done and heat transferred are given by −P dV and T dS.
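Written out explicitly (in standard notation, not quoted from any particular source), the coefficients in equation (2) are the partial derivatives T = (∂U/∂S) at constant V and −P = (∂U/∂V) at constant S; equation (2) simply collects these two first-order responses of U to its natural variables.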
In the case of a closed system in which the particles of the system are of different types and, because chemical reactions may occur, their respective numbers are not necessarily constant, the fundamental thermodynamic relation for dU becomes:
dU = T dS − P dV + Σi μi dNi,
where dNi is the (small) increase in number of type-i particles in the reaction, and μi is known as the chemical potential of the type-i particles in the system. If dNi is expressed in mol then μi is expressed in J/mol. If the system has more external mechanical variables than just the volume that can change, the fundamental thermodynamic relation further generalizes to:
dU = T dS − Σi Xi dxi + Σj μj dNj.
Here the Xi are the generalized forces corresponding to the external variables xi. The parameters Xi are independent of the size of the system and are called intensive parameters and the xi are proportional to the size and called extensive parameters.
For an open system, there can be transfers of particles as well as energy into or out of the system during a process. For this case, the first law of thermodynamics still holds, in the form that the internal energy is a function of state and the change of internal energy in a process is a function only of its initial and final states, as noted in the section below headed First law of thermodynamics for open systems.
A useful idea from mechanics is that the energy gained by a particle is equal to the force applied to the particle multiplied by the displacement of the particle while that force is applied. Now consider the first law without the heating term: dU = −P dV. The pressure P can be viewed as a force (and in fact has units of force per unit area) while dV is the displacement (with units of distance times area). We may say, with respect to this work term, that a pressure difference forces a transfer of volume, and that the product of the two (work) is the amount of energy transferred out of the system as a result of the process. If one were to make this term negative then this would be the work done on the system.
It is useful to view the T dS term in the same light: here the temperature is known as a "generalized" force (rather than an actual mechanical force) and the entropy is a generalized displacement.
Similarly, a difference in chemical potential between groups of particles in the system drives a chemical reaction that changes the numbers of particles, and the corresponding product is the amount of chemical potential energy transformed in process. For example, consider a system consisting of two phases: liquid water and water vapor. There is a generalized "force" of evaporation that drives water molecules out of the liquid. There is a generalized "force" of condensation that drives vapor molecules out of the vapor. Only when these two "forces" (or chemical potentials) are equal is there equilibrium, and the net rate of transfer zero.
The two thermodynamic parameters that form a generalized force-displacement pair are called "conjugate variables". The two most familiar pairs are, of course, pressure-volume, and temperature-entropy.
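For example (an illustration not drawn from the sources cited in this article), for a stretched elastic band the conjugate pair is tension and length: with tension f and length L, the reversible fundamental relation becomes dU = T dS + f dL, with (f, L) playing the role that (−P, V) plays for a simple compressible fluid.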
Fluid dynamics
In fluid dynamics, the first law of thermodynamics takes the form of an energy equation, expressing the balance between the rate of change of the energy of a fluid element and the rates at which heat is added to it and work is done on it.
Spatially inhomogeneous systems
Classical thermodynamics is initially focused on closed homogeneous systems (e.g. Planck 1897/1903), which might be regarded as 'zero-dimensional' in the sense that they have no spatial variation. But it is desired to study also systems with distinct internal motion and spatial inhomogeneity. For such systems, the principle of conservation of energy is expressed in terms not only of internal energy as defined for homogeneous systems, but also in terms of kinetic energy and potential energies of parts of the inhomogeneous system with respect to each other and with respect to long-range external forces. How the total energy of a system is allocated between these three more specific kinds of energy varies according to the purposes of different writers; this is because these components of energy are to some extent mathematical artefacts rather than actually measured physical quantities. For any closed homogeneous component of an inhomogeneous closed system, if E denotes the total energy of that component system, one may write
E = Ekin + Epot + U,
where Ekin and Epot denote respectively the total kinetic energy and the total potential energy of the component closed homogeneous system, and U denotes its internal energy.
Potential energy can be exchanged with the surroundings of the system when the surroundings impose a force field, such as gravitational or electromagnetic, on the system.
A compound system consisting of two interacting closed homogeneous component subsystems has a potential energy of interaction E12 between the subsystems. Thus, in an obvious notation, one may write
E = Ekin,1 + Epot,1 + U1 + Ekin,2 + Epot,2 + U2 + E12.
The quantity E12 in general lacks an assignment to either subsystem in a way that is not arbitrary, and this stands in the way of a general non-arbitrary definition of transfer of energy as work. On occasions, authors make their various respective arbitrary assignments.
The distinction between internal and kinetic energy is hard to make in the presence of turbulent motion within the system, as friction gradually dissipates macroscopic kinetic energy of localised bulk flow into molecular random motion of molecules that is classified as internal energy. The rate of dissipation by friction of kinetic energy of localised bulk flow into internal energy, whether in turbulent or in streamlined flow, is an important quantity in non-equilibrium thermodynamics. This is a serious difficulty for attempts to define entropy for time-varying spatially inhomogeneous systems.
First law of thermodynamics for open systems
For the first law of thermodynamics, there is no trivial passage of physical conception from the closed system view to an open system view. For closed systems, the concepts of an adiabatic enclosure and of an adiabatic wall are fundamental. Matter and internal energy cannot permeate or penetrate such a wall. For an open system, there is a wall that allows penetration by matter. In general, matter in diffusive motion carries with it some internal energy, and some microscopic potential energy changes accompany the motion. An open system is not adiabatically enclosed.
There are some cases in which a process for an open system can, for particular purposes, be considered as if it were for a closed system. In an open system, by definition hypothetically or potentially, matter can pass between the system and its surroundings. But when, in a particular case, the process of interest involves only hypothetical or potential but no actual passage of matter, the process can be considered as if it were for a closed system.
Internal energy for an open system
Since the revised and more rigorous definition of the internal energy of a closed system rests upon the possibility of processes by which adiabatic work takes the system from one state to another, this leaves a problem for the definition of internal energy for an open system, for which adiabatic work is not in general possible. According to Max Born, the transfer of matter and energy across an open connection "cannot be reduced to mechanics". In contrast to the case of closed systems, for open systems, in the presence of diffusion, there is no unconstrained and unconditional physical distinction between convective transfer of internal energy by bulk flow of matter, the transfer of internal energy without transfer of matter (usually called heat conduction and work transfer), and change of various potential energies. The older traditional way and the conceptually revised (Carathéodory) way agree that there is no physically unique definition of heat and work transfer processes between open systems.
In particular, between two otherwise isolated open systems an adiabatic wall is by definition impossible. This problem is solved by recourse to the principle of conservation of energy. This principle allows a composite isolated system to be derived from two other component non-interacting isolated systems, in such a way that the total energy of the composite isolated system is equal to the sum of the total energies of the two component isolated systems. Two previously isolated systems can be subjected to the thermodynamic operation of placement between them of a wall permeable to matter and energy, followed by a time for establishment of a new thermodynamic state of internal equilibrium in the new single unpartitioned system. The internal energies of the initial two systems and of the final new system, considered respectively as closed systems as above, can be measured. Then the law of conservation of energy requires that
ΔUs + ΔUo = 0,
where ΔUs and ΔUo denote the changes in internal energy of the system and of its surroundings respectively. This is a statement of the first law of thermodynamics for a transfer between two otherwise isolated open systems, that fits well with the conceptually revised and rigorous statement of the law stated above.
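A simplified numerical sketch of this bookkeeping, for the special case in which the wall passes energy but, for simplicity, no net matter (all masses, temperatures, and the heat capacity are illustrative assumptions):

```python
# Two bodies of water, initially isolated, are joined through a wall that passes energy;
# conservation of energy (dU_s + dU_o = 0) fixes the final common temperature.
c = 4186.0                # J/(kg K), specific heat capacity of water (assumed constant)
m1, T1 = 1.0, 353.15      # kg, K: the "system", initially hot
m2, T2 = 2.0, 293.15      # kg, K: the "surroundings", initially cold

# m1*c*(Tf - T1) + m2*c*(Tf - T2) = 0  =>
Tf = (m1 * T1 + m2 * T2) / (m1 + m2)

dU_s = m1 * c * (Tf - T1)   # J, change in internal energy of the system (negative)
dU_o = m2 * c * (Tf - T2)   # J, change in internal energy of the surroundings (positive)
print(Tf, dU_s + dU_o)      # 313.15 K, and the sum is 0
```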
For the thermodynamic operation of adding two systems with internal energies U1 and U2, to produce a new system with internal energy U, one may write U = U1 + U2; the reference states for U, U1 and U2 should be specified accordingly, maintaining also that the internal energy of a system be proportional to its mass, so that the internal energies are extensive variables.
There is a sense in which this kind of additivity expresses a fundamental postulate that goes beyond the simplest ideas of classical closed system thermodynamics; the extensivity of some variables is not obvious, and needs explicit expression; indeed one author goes so far as to say that it could be recognized as a fourth law of thermodynamics, though this is not repeated by other authors.
Also of course
ΔNs + ΔNo = 0,
where ΔNs and ΔNo denote the changes in mole number of a component substance of the system and of its surroundings respectively. This is a statement of the law of conservation of mass.
Process of transfer of matter between an open system and its surroundings
A system connected to its surroundings only through contact by a single permeable wall, but otherwise isolated, is an open system. If it is initially in a state of contact equilibrium with a surrounding subsystem, a thermodynamic process of transfer of matter can be made to occur between them if the surrounding subsystem is subjected to some thermodynamic operation, for example, removal of a partition between it and some further surrounding subsystem. The removal of the partition in the surroundings initiates a process of exchange between the system and its contiguous surrounding subsystem.
An example is evaporation. One may consider an open system consisting of a collection of liquid, enclosed except where it is allowed to evaporate into or to receive condensate from its vapor above it, which may be considered as its contiguous surrounding subsystem, and subject to control of its volume and temperature.
A thermodynamic process might be initiated by a thermodynamic operation in the surroundings, that mechanically increases the controlled volume of the vapor. Some mechanical work will be done within the surroundings by the vapor, but also some of the parent liquid will evaporate and enter the vapor collection which is the contiguous surrounding subsystem. Some internal energy will accompany the vapor that leaves the system, but it will not make sense to try to uniquely identify part of that internal energy as heat and part of it as work. Consequently, the energy transfer that accompanies the transfer of matter between the system and its surrounding subsystem cannot be uniquely split into heat and work transfers to or from the open system. The component of total energy transfer that accompanies the transfer of vapor into the surrounding subsystem is customarily called 'latent heat of evaporation', but this use of the word heat is a quirk of customary historical language, not in strict compliance with the thermodynamic definition of transfer of energy as heat. In this example, kinetic energy of bulk flow and potential energy with respect to long-range external forces such as gravity are both considered to be zero. The first law of thermodynamics refers to the change of internal energy of the open system, between its initial and final states of internal equilibrium.
Open system with multiple contacts
An open system can be in contact equilibrium with several other systems at once.
This includes cases in which there is contact equilibrium between the system, and several subsystems in its surroundings, including separate connections with subsystems through walls that are permeable to the transfer of matter and internal energy as heat and allowing friction of passage of the transferred matter, but immovable, and separate connections through adiabatic walls with others, and separate connections through diathermic walls impermeable to matter with yet others. Because there are physically separate connections that are permeable to energy but impermeable to matter, between the system and its surroundings, energy transfers between them can occur with definite heat and work characters. Conceptually essential here is that the internal energy transferred with the transfer of matter is measured by a variable that is mathematically independent of the variables that measure heat and work.
With such independence of variables, the total increase of internal energy in the process is then determined as the sum of the internal energy transferred from the surroundings with the transfer of matter through the walls that are permeable to it, and of the internal energy transferred to the system as heat through the diathermic walls, and of the energy transferred to the system as work through the adiabatic walls, including the energy transferred to the system by long-range forces. These simultaneously transferred quantities of energy are defined by events in the surroundings of the system. Because the internal energy transferred with matter is not in general uniquely resolvable into heat and work components, the total energy transfer cannot in general be uniquely resolved into heat and work components. Under these conditions, the following formula can describe the process in terms of externally defined thermodynamic variables, as a statement of the first law of thermodynamics:
ΔU0 = Q − W − Σi ΔUi,
where ΔU0 denotes the change of internal energy of the system, ΔUi denotes the change of internal energy of the i-th of the surrounding subsystems that are in open contact with the system, due to transfer between the system and that surrounding subsystem, Q denotes the internal energy transferred as heat from the heat reservoir of the surroundings to the system, and W denotes the energy transferred from the system to the surrounding subsystems that are in adiabatic connection with it. The case of a wall that is permeable to matter and can move so as to allow transfer of energy as work is not considered here.
Combination of first and second laws
If the system is described by the energetic fundamental equation, U0 = U0(S, V, Nj), and if the process can be described in the quasi-static formalism, in terms of the internal state variables of the system, then the process can also be described by a combination of the first and second laws of thermodynamics, by the formula
ΔU0 = T ΔS − P ΔV + Σj μj ΔNj,
where there are n chemical constituents of the system and permeably connected surrounding subsystems, and where T, S, P, V, Nj, and μj are defined as above.
For a general natural process, there is no immediate term-wise correspondence between these two equations, because they describe the process in different conceptual frames.
Nevertheless, a conditional correspondence exists. There are three relevant kinds of wall here: purely diathermal, adiabatic, and permeable to matter. If two of those kinds of wall are sealed off, leaving only one that permits transfers of energy, as work, as heat, or with matter, then the remaining permitted terms correspond precisely. If two of the kinds of wall are left unsealed, then energy transfer can be shared between them, so that the two remaining permitted terms do not correspond precisely.
For the special fictive case of quasi-static transfers, there is a simple correspondence. For this, it is supposed that the system has multiple areas of contact with its surroundings. There are pistons that allow adiabatic work, purely diathermal walls, and open connections with surrounding subsystems of completely controllable chemical potential (or equivalent controls for charged species). Then, for a suitable fictive quasi-static transfer, one can write
δQ = T ΔS − Σi T si ΔNi    (4)
where ΔNi is the added amount of species i and si is the corresponding molar entropy.
For fictive quasi-static transfers for which the chemical potentials in the connected surrounding subsystems are suitably controlled, these can be put into equation (4) to yield
δQ = T ΔS − Σi (hi − μi) ΔNi,
where hi is the molar enthalpy of species i.
Non-equilibrium transfers
The transfer of energy between an open system and a single contiguous subsystem of its surroundings is considered also in non-equilibrium thermodynamics. The problem of definition arises also in this case. It may be allowed that the wall between the system and the subsystem is not only permeable to matter and to internal energy, but also may be movable so as to allow work to be done when the two systems have different pressures. In this case, the transfer of energy as heat is not defined.
The first law of thermodynamics for any process on the specification of equation (3) can be defined as
ΔU = Q − W + Σi hi ΔNi    (6)
where ΔU denotes the change of internal energy of the system, Q denotes the internal energy transferred as heat from the heat reservoir of the surroundings to the system, W denotes the work done by the system, and hi is the molar enthalpy of species i coming into the system from the surroundings that are in contact with the system.
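As a small illustration of the bookkeeping expressed by formula (6), the following Python sketch adds up the heat, work, and enthalpy-carrying matter contributions to the change in internal energy; the function name and all numerical values are invented for the example.

```python
# Minimal sketch of the open-system energy balance  dU = Q - W + sum_i h_i * dN_i,
# assuming SI units throughout; all numbers are invented for illustration.

def internal_energy_change(Q, W, molar_enthalpies, mole_changes):
    """Return dU from heat Q received, work W done by the system,
    and the enthalpy carried in by each transferred species."""
    convected = sum(h * dN for h, dN in zip(molar_enthalpies, mole_changes))
    return Q - W + convected

# Example: 500 J of heat in, 200 J of work out, 0.01 mol of a species
# entering with a molar enthalpy of 40 kJ/mol.
dU = internal_energy_change(Q=500.0, W=200.0,
                            molar_enthalpies=[40_000.0],
                            mole_changes=[0.01])
print(f"dU = {dU:.1f} J")   # 500 - 200 + 400 = 700 J
```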
Formula (6) is valid in the general case, both for quasi-static and for irreversible processes. The situation of the quasi-static process is considered in the previous section, which in our terms defines the heat Q for that special case.
To describe deviation of the thermodynamic system from equilibrium, in addition to the fundamental variables that are used to fix the equilibrium state, as described above, a set of variables called internal variables has been introduced, which allows the law to be formulated for the general case as well.
Methods for study of non-equilibrium processes mostly deal with spatially continuous flow systems. In this case, the open connection between system and surroundings is usually taken to fully surround the system, so that there are no separate connections impermeable to matter but permeable to heat. Except for the special case mentioned above when there is no actual transfer of matter, which can be treated as if for a closed system, in strictly defined thermodynamic terms, it follows that transfer of energy as heat is not defined. In this sense, there is no such thing as 'heat flow' for a continuous-flow open system. Properly, for closed systems, one speaks of transfer of internal energy as heat, but in general, for open systems, one can speak safely only of transfer of internal energy. A factor here is that there are often cross-effects between distinct transfers, for example that transfer of one substance may cause transfer of another even when the latter has zero chemical potential gradient.
Usually transfer between a system and its surroundings applies to transfer of a state variable, and obeys a balance law, that the amount lost by the donor system is equal to the amount gained by the receptor system. Heat is not a state variable. For his 1947 definition of "heat transfer" for discrete open systems, the author Prigogine carefully explains at some length that his definition of it does not obey a balance law. He describes this as paradoxical.
The situation is clarified by Gyarmati, who shows that his definition of "heat transfer", for continuous-flow systems, really refers not specifically to heat, but rather to transfer of internal energy, as follows. He considers a conceptual small cell in a situation of continuous-flow as a system defined in the so-called Lagrangian way, moving with the local center of mass. The flow of matter across the boundary is zero when considered as a flow of total mass. Nevertheless, if the material constitution is of several chemically distinct components that can diffuse with respect to one another, the system is considered to be open, the diffusive flows of the components being defined with respect to the center of mass of the system, and balancing one another as to mass transfer. Still there can be a distinction between bulk flow of internal energy and diffusive flow of internal energy in this case, because the internal energy density does not have to be constant per unit mass of material, and allowing for non-conservation of internal energy because of local conversion of kinetic energy of bulk flow to internal energy by viscosity.
Gyarmati shows that his definition of "the heat flow vector" is strictly speaking a definition of flow of internal energy, not specifically of heat, and so it turns out that his use here of the word heat is contrary to the strict thermodynamic definition of heat, though it is more or less compatible with historical custom, that often enough did not clearly distinguish between heat and internal energy; he writes "that this relation must be considered to be the exact definition of the concept of heat flow, fairly loosely used in experimental physics and heat technics". Apparently in a different frame of thinking from that of the above-mentioned paradoxical usage in the earlier sections of the historic 1947 work by Prigogine, about discrete systems, this usage of Gyarmati is consistent with the later sections of the same 1947 work by Prigogine, about continuous-flow systems, which use the term "heat flux" in just this way. This usage is also followed by Glansdorff and Prigogine in their 1971 text about continuous-flow systems. They write: "Again the flow of internal energy may be split into a convection flow and a conduction flow. This conduction flow is by definition the heat flow . Therefore: where denotes the [internal] energy per unit mass. [These authors actually use the symbols and to denote internal energy but their notation has been changed here to accord with the notation of the present article. These authors actually use the symbol to refer to total energy, including kinetic energy of bulk flow.]" This usage is followed also by other writers on non-equilibrium thermodynamics such as Lebon, Jou, and Casas-Vásquez, and de Groot and Mazur. This usage is described by Bailyn as stating the non-convective flow of internal energy, and is listed as his definition number 1, according to the first law of thermodynamics. This usage is also followed by workers in the kinetic theory of gases. This is not the ad hoc definition of "reduced heat flux" of Rolf Haase.
In the case of a flowing system of only one chemical constituent, in the Lagrangian representation, there is no distinction between bulk flow and diffusion of matter. Moreover, the flow of matter is zero into or out of the cell that moves with the local center of mass. In effect, in this description, one is dealing with a system effectively closed to the transfer of matter. But still one can validly talk of a distinction between bulk flow and diffusive flow of internal energy, the latter driven by a temperature gradient within the flowing material, and being defined with respect to the local center of mass of the bulk flow. In this case of a virtually closed system, because of the zero matter transfer, as noted above, one can safely distinguish between transfer of energy as work, and transfer of internal energy as heat.
See also
Laws of thermodynamics
Perpetual motion
Microstate (statistical mechanics) – includes microscopic definitions of internal energy, heat and work
Entropy production
Relativistic heat conduction
References
Cited sources
Adkins, C. J. (1968/1983). Equilibrium Thermodynamics, (first edition 1968), third edition 1983, Cambridge University Press, .
Aston, J. G., Fritz, J. J. (1959). Thermodynamics and Statistical Thermodynamics, John Wiley & Sons, New York.
Balian, R. (1991/2007). From Microphysics to Macrophysics: Methods and Applications of Statistical Physics, volume 1, translated by D. ter Haar, J.F. Gregg, Springer, Berlin, .
Bailyn, M. (1994). A Survey of Thermodynamics, American Institute of Physics Press, New York, .
Born, M. (1949). Natural Philosophy of Cause and Chance, Oxford University Press, London.
Bryan, G. H. (1907). Thermodynamics. An Introductory Treatise dealing mainly with First Principles and their Direct Applications, B. G. Teubner, Leipzig.
Balescu, R. (1997). Statistical Dynamics; Matter out of Equilibrium, Imperial College Press, London, .
Buchdahl, H. A. (1966), The Concepts of Classical Thermodynamics, Cambridge University Press, London.
Callen, H. B. (1960/1985), Thermodynamics and an Introduction to Thermostatistics, (first edition 1960), second edition 1985, John Wiley & Sons, New York, .
A translation may be found here. Also a mostly reliable translation is to be found at Kestin, J. (1976). The Second Law of Thermodynamics, Dowden, Hutchinson & Ross, Stroudsburg PA.
. See English Translation: On the Moving Force of Heat, and the Laws regarding the Nature of Heat itself which are deducible therefrom. Phil. Mag. (1851), series 4, 2, 1–21, 102–119. Also available on Google Books.
Crawford, F. H. (1963). Heat, Thermodynamics, and Statistical Physics, Rupert Hart-Davis, London, Harcourt, Brace & World, Inc.
de Groot, S. R., Mazur, P. (1962). Non-equilibrium Thermodynamics, North-Holland, Amsterdam. Reprinted (1984), Dover Publications Inc., New York, .
Denbigh, K. G. (1951). The Thermodynamics of the Steady State, Methuen, London, Wiley, New York.
Denbigh, K. (1954/1981). The Principles of Chemical Equilibrium. With Applications in Chemistry and Chemical Engineering, fourth edition, Cambridge University Press, Cambridge UK, .
Eckart, C. (1940). The thermodynamics of irreversible processes. The simple fluid, Phys. Rev. 58: 267–269.
Fitts, D. D. (1962). Nonequilibrium Thermodynamics. Phenomenological Theory of Irreversible Processes in Fluid Systems, McGraw-Hill, New York.
Glansdorff, P., Prigogine, I., (1971). Thermodynamic Theory of Structure, Stability and Fluctuations, Wiley, London, .
Gyarmati, I. (1967/1970). Non-equilibrium Thermodynamics. Field Theory and Variational Principles, translated from the 1967 Hungarian by E. Gyarmati and W. F. Heinz, Springer-Verlag, New York.
Haase, R. (1963/1969). Thermodynamics of Irreversible Processes, English translation, Addison-Wesley Publishing, Reading MA.
Haase, R. (1971). Survey of Fundamental Laws, chapter 1 of Thermodynamics, pages 1–97 of volume 1, ed. W. Jost, of Physical Chemistry. An Advanced Treatise, ed. H. Eyring, D. Henderson, W. Jost, Academic Press, New York, lcn 73–117081.
Helmholtz, H. (1847). Ueber die Erhaltung der Kraft. Eine physikalische Abhandlung, G. Reimer (publisher), Berlin, read on 23 July in a session of the Physikalischen Gesellschaft zu Berlin. Reprinted in Helmholtz, H. von (1882), Wissenschaftliche Abhandlungen, Band 1, J. A. Barth, Leipzig. Translated and edited by J. Tyndall, in Scientific Memoirs, Selected from the Transactions of Foreign Academies of Science and from Foreign Journals. Natural Philosophy (1853), volume 7, edited by J. Tyndall, W. Francis, published by Taylor and Francis, London, pp. 114–162, reprinted as volume 7 of Series 7, The Sources of Science, edited by H. Woolf, (1966), Johnson Reprint Corporation, New York, and again in Brush, S. G., The Kinetic Theory of Gases. An Anthology of Classic Papers with Historical Commentary, volume 1 of History of Modern Physical Sciences, edited by N. S. Hall, Imperial College Press, London, , pp. 89–110.
Kestin, J. (1966). A Course in Thermodynamics, Blaisdell Publishing Company, Waltham MA.
Kirkwood, J. G., Oppenheim, I. (1961). Chemical Thermodynamics, McGraw-Hill Book Company, New York.
Landsberg, P. T. (1961). Thermodynamics with Quantum Statistical Illustrations, Interscience, New York.
Landsberg, P. T. (1978). Thermodynamics and Statistical Mechanics, Oxford University Press, Oxford UK, .
Lebon, G., Jou, D., Casas-Vázquez, J. (2008). Understanding Non-equilibrium Thermodynamics, Springer, Berlin, .
Münster, A. (1970), Classical Thermodynamics, translated by E. S. Halberstadt, Wiley–Interscience, London, .
Partington, J.R. (1949). An Advanced Treatise on Physical Chemistry, volume 1, Fundamental Principles. The Properties of Gases, Longmans, Green and Co., London.
Pippard, A. B. (1957/1966). Elements of Classical Thermodynamics for Advanced Students of Physics, original publication 1957, reprint 1966, Cambridge University Press, Cambridge UK.
Planck, M.(1897/1903). Treatise on Thermodynamics, translated by A. Ogg, Longmans, Green & Co., London.
Prigogine, I. (1947). Étude Thermodynamique des Phénomènes irréversibles, Dunod, Paris, and Desoers, Liège.
Prigogine, I., (1955/1967). Introduction to Thermodynamics of Irreversible Processes, third edition, Interscience Publishers, New York.
Reif, F. (1965). Fundamentals of Statistical and Thermal Physics, McGraw-Hill Book Company, New York.
Tisza, L. (1966). Generalized Thermodynamics, M.I.T. Press, Cambridge MA.
Truesdell, C. A. (1980). The Tragicomical History of Thermodynamics, 1822–1854, Springer, New York, .
Truesdell, C. A., Muncaster, R. G. (1980). Fundamentals of Maxwell's Kinetic Theory of a Simple Monatomic Gas, Treated as a branch of Rational Mechanics, Academic Press, New York, .
Tschoegl, N. W. (2000). Fundamentals of Equilibrium and Steady-State Thermodynamics, Elsevier, Amsterdam, .
External links
MISN-0-158, The First Law of Thermodynamics (PDF file) by Jerzy Borysowicz for Project PHYSNET.
First law of thermodynamics in the MIT Course Unified Thermodynamics and Propulsion from Prof. Z. S. Spakovszky
Equations of physics
1
de:Thermodynamik#Erster Hauptsatz | First law of thermodynamics | [
"Physics",
"Chemistry",
"Mathematics"
] | 14,298 | [
"Equations of physics",
"Mathematical objects",
"Equations",
"Thermodynamics",
"Laws of thermodynamics"
] |
166,415 | https://en.wikipedia.org/wiki/Heat%20equation | In mathematics and physics, the heat equation is a parabolic partial differential equation. The theory of the heat equation was first developed by Joseph Fourier in 1822 for the purpose of modeling how a quantity such as heat diffuses through a given region. Since then, the heat equation and its variants have been found to be fundamental in many parts of both pure and applied mathematics.
Definition
Given an open subset U of Rn and a subinterval I of R, one says that a function u : U × I → R is a solution of the heat equation if
∂u/∂t = ∂²u/∂x1² + ⋯ + ∂²u/∂xn²,
where (x1, …, xn, t) denotes a general point of the domain. It is typical to refer to t as time and x1, …, xn as spatial variables, even in abstract contexts where these phrases fail to have their intuitive meaning. The collection of spatial variables is often referred to simply as x. For any given value of t, the right-hand side of the equation is the Laplacian of the function u(⋅, t) : U → R. As such, the heat equation is often written more compactly as
∂u/∂t = Δu.
In physics and engineering contexts, especially in the context of diffusion through a medium, it is more common to fix a Cartesian coordinate system and then to consider the specific case of a function u(x, y, z, t) of three spatial variables (x, y, z) and time variable t. One then says that u is a solution of the heat equation if
∂u/∂t = α (∂²u/∂x² + ∂²u/∂y² + ∂²u/∂z²),
in which α is a positive coefficient called the thermal diffusivity of the medium. In addition to other physical phenomena, this equation describes the flow of heat in a homogeneous and isotropic medium, with u(x, y, z, t) being the temperature at the point (x, y, z) and time t. If the medium is not homogeneous and isotropic, then α would not be a fixed coefficient, and would instead depend on (x, y, z); the equation would also have a slightly different form. In the physics and engineering literature, it is common to use ∇² to denote the Laplacian, rather than Δ.
In mathematics as well as in physics and engineering, it is common to use Newton's notation for time derivatives, so that u̇ is used to denote ∂u/∂t, and the equation can be written
u̇ = Δu.
Note also that the ability to use either Δ or ∇² to denote the Laplacian, without explicit reference to the spatial variables, is a reflection of the fact that the Laplacian is independent of the choice of coordinate system. In mathematical terms, one would say that the Laplacian is translationally and rotationally invariant. In fact, it is (loosely speaking) the simplest differential operator which has these symmetries. This can be taken as a significant (and purely mathematical) justification of the use of the Laplacian and of the heat equation in modeling any physical phenomena which are homogeneous and isotropic, of which heat diffusion is a principal example.
The diffusivity constant, α, is often not present in mathematical studies of the heat equation, while its value can be very important in engineering. This is not a major difference, for the following reason. Let u be a function with
∂u/∂t = α Δu.
Define a new function v(t, x) = u(t/α, x). Then, according to the chain rule, one has
∂v/∂t (t, x) = (1/α) (∂u/∂t)(t/α, x) = Δu(t/α, x) = Δv(t, x).
Thus, there is a straightforward way of translating between solutions of the heat equation with a general value of α and solutions of the heat equation with α = 1. As such, for the sake of mathematical analysis, it is often sufficient to only consider the case α = 1.
Since α > 0, there is another option to define a function v satisfying ∂v/∂t = Δv as above, by setting v(t, x) = u(t, √α x). Note that the two possible means of defining the new function v discussed here amount, in physical terms, to changing the unit of measure of time or the unit of measure of length.
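A quick numerical sanity check of this rescaling, using the one-dimensional Gaussian solution as a test case, is sketched below in Python; the value of α and the evaluation point are arbitrary.

```python
import numpy as np

# A known solution of u_t = alpha * u_xx: the one-dimensional Gaussian kernel.
alpha = 3.0
u = lambda x, t: np.exp(-x**2 / (4 * alpha * t)) / np.sqrt(4 * np.pi * alpha * t)
# The rescaled function v(x, t) = u(x, t/alpha) should satisfy v_t = v_xx.
v = lambda x, t: u(x, t / alpha)

x0, t0, h = 0.7, 1.3, 1e-4
v_t  = (v(x0, t0 + h) - v(x0, t0 - h)) / (2 * h)                 # d v / d t
v_xx = (v(x0 + h, t0) - 2 * v(x0, t0) + v(x0 - h, t0)) / h**2    # d^2 v / d x^2
print(v_t, v_xx)    # the two values agree to several decimal places
```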
Steady-state equation
The steady-state heat equation is by definition not dependent on time. In other words, it is assumed conditions exist such that:
∂u/∂t = 0.
This condition depends on the time constant and the amount of time passed since boundary conditions have been imposed. Thus, the condition is fulfilled in situations in which the equilibration time constant is short enough that the more complex time-dependent heat equation can be approximated by the steady-state case. Equivalently, the steady-state condition exists for all cases in which enough time has passed that the thermal field u no longer evolves in time.
In the steady-state case, a spatial thermal gradient may (or may not) exist, but if it does, it does not change in time. This equation therefore describes the end result in all thermal problems in which a source is switched on (for example, an engine started in an automobile), and enough time has passed for all permanent temperature gradients to establish themselves in space, after which these spatial gradients no longer change in time (as again, with an automobile in which the engine has been running for long enough). The other (trivial) solution is for all spatial temperature gradients to disappear as well, in which case the temperature becomes uniform in space.
The steady-state equation is much simpler, and it can help in understanding the physics of a material without the added complication of the dynamics of the heat transport process. It is widely used for simple engineering problems in which the temperature field and the heat transport can be assumed to have reached equilibrium with time.
Steady-state condition:
∂u/∂t = 0.
The steady-state heat equation for a volume that contains a heat source (the inhomogeneous case) is Poisson's equation:
−k ∇²u = q,
where u is the temperature, k is the thermal conductivity and q is the rate of heat generation per unit volume.
In electrostatics, this is equivalent to the case where the space under consideration contains an electrical charge.
The steady-state heat equation without a heat source within the volume (the homogeneous case) is the equation in electrostatics for a volume of free space that does not contain a charge. It is described by Laplace's equation:
∇²u = 0.
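As a sketch of how the inhomogeneous steady-state equation is used in practice, the following Python snippet solves −k u″ = q on a rod with fixed end temperatures by a central-difference discretization; the conductivity, source strength and boundary values are invented, and a realistic code would use a sparse tridiagonal solver rather than a dense matrix.

```python
import numpy as np

# Steady-state 1-D Poisson problem  -k u'' = q  on [0, 1] with fixed end temperatures.
# Discretised with second-order central differences; all values are illustrative.
n, k, q = 101, 1.5, 1000.0            # grid points, conductivity, uniform source
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
A = np.zeros((n, n)); b = np.full(n, q)
for i in range(1, n - 1):             # interior rows:  -k (u[i-1] - 2 u[i] + u[i+1]) / dx^2 = q
    A[i, i - 1] = A[i, i + 1] = -k / dx**2
    A[i, i] = 2 * k / dx**2
A[0, 0] = A[-1, -1] = 1.0             # Dirichlet rows
b[0], b[-1] = 300.0, 320.0            # boundary temperatures (K)
u = np.linalg.solve(A, b)
print(u.max())                        # peak temperature in the rod
```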
Interpretation
Informally, the Laplacian operator Δ gives the difference between the average value of a function in the neighborhood of a point, and its value at that point. Thus, if u is the temperature, Δu conveys if (and by how much) the material surrounding each point is hotter or colder, on the average, than the material at that point.
By the second law of thermodynamics, heat will flow from hotter bodies to adjacent colder bodies, in proportion to the difference of temperature and of the thermal conductivity of the material between them. When heat flows into (respectively, out of) a material, its temperature increases (respectively, decreases), in proportion to the amount of heat divided by the amount (mass) of material, with a proportionality factor called the specific heat capacity of the material.
By the combination of these observations, the heat equation says the rate at which the material at a point will heat up (or cool down) is proportional to how much hotter (or cooler) the surrounding material is. The coefficient in the equation takes into account the thermal conductivity, specific heat, and density of the material.
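This neighborhood-average picture can be checked numerically: on a grid, the standard five-point Laplacian is exactly 4/h² times the difference between the average of the four neighbours and the centre value, as the short Python sketch below illustrates with an arbitrary smooth test function.

```python
import numpy as np

# Discrete check: the 5-point Laplacian of a 2-D grid equals (4 / h^2) times
# (average of the four neighbours minus the centre value).
h = 0.01
x, y = np.meshgrid(np.arange(0, 1, h), np.arange(0, 1, h), indexing="ij")
u = np.sin(3 * x) * np.cos(2 * y)          # any smooth test field

centre      = u[1:-1, 1:-1]
nbr_average = (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:]) / 4
laplacian   = 4 * (nbr_average - centre) / h**2

exact = -(3**2 + 2**2) * centre            # Laplacian of sin(3x)cos(2y) is -13 u
print(np.max(np.abs(laplacian - exact)))   # small: the two expressions agree
```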
Interpretation of the equation
The first half of the above physical thinking can be put into a mathematical form. The key is that, for any fixed x, one has
Δu(x) = lim (r → 0) (2n / r²) ( ū(x; r) − u(x) ),
where ū(x; r) is the single-variable function denoting the average value of u over the surface of the sphere of radius r centered at x; it can be defined by
ū(x; r) = (1 / (ωn−1 rⁿ⁻¹)) ∫∂B(x; r) u(y) dS(y),
in which ωn−1 denotes the surface area of the unit ball in n-dimensional Euclidean space. This formalizes the above statement that the value of Δu at a point x measures the difference between the value of u(x) and the value of u at points nearby to x, in the sense that the latter is encoded by the values of ū(x; r) for small positive values of r.
Following this observation, one may interpret the heat equation as imposing an infinitesimal averaging of a function. Given a solution of the heat equation, the value of u(x, t + τ) for a small positive value of τ may be approximated in terms of the average value of the function u(⋅, t) over a sphere of very small radius centered at x.
Character of the solutions
The heat equation implies that peaks (local maxima) of u will be gradually eroded down, while depressions (local minima) will be filled in. The value at some point will remain stable only as long as it is equal to the average value in its immediate surroundings. In particular, if the values in a neighborhood are very close to a linear function Ax + B, then the value at the center of that neighborhood will not be changing at that time (that is, the derivative ∂u/∂t will be zero).
A more subtle consequence is the maximum principle, that says that the maximum value of u in any region R of the medium will not exceed the maximum value that previously occurred in R, unless it is on the boundary of R. That is, the maximum temperature in a region R can increase only if heat comes in from outside R. This is a property of parabolic partial differential equations and is not difficult to prove mathematically (see below).
Another interesting property is that even if u initially has a sharp jump (discontinuity) of value across some surface inside the medium, the jump is immediately smoothed out by a momentary, infinitesimally short but infinitely large rate of flow of heat through that surface. For example, if two isolated bodies, initially at uniform but different temperatures u0 and u1, are made to touch each other, the temperature at the point of contact will immediately assume some intermediate value, and a zone will develop around that point where u will gradually vary between u0 and u1.
If a certain amount of heat is suddenly applied to a point in the medium, it will spread out in all directions in the form of a diffusion wave. Unlike the elastic and electromagnetic waves, the speed of a diffusion wave drops with time: as it spreads over a larger region, the temperature gradient decreases, and therefore the heat flow decreases too.
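The smoothing behaviour is easy to see in a simple simulation. The Python sketch below advances a step-shaped initial temperature with an explicit finite-difference (FTCS) scheme, an elementary method chosen here only for illustration; grid, diffusivity and step counts are arbitrary.

```python
import numpy as np

# Explicit finite-difference (FTCS) simulation showing how a temperature jump
# is immediately smoothed; grid, diffusivity and times are illustrative.
alpha, L, n = 1.0, 1.0, 201
x = np.linspace(0, L, n); dx = x[1] - x[0]
dt = 0.4 * dx**2 / alpha                    # respects the stability limit dt <= dx^2 / (2 alpha)
u = np.where(x < 0.5, 1.0, 0.0)             # two bodies at different temperatures

for step in range(2000):
    u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
# The discontinuity at x = 0.5 has been replaced by a smooth transition zone.
print(u[95:106].round(3))
```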
Specific examples
Heat flow in a uniform rod
For heat flow, the heat equation follows from the physical laws of conduction of heat and conservation of energy.
By Fourier's law for an isotropic medium, the rate of flow of heat energy per unit area through a surface is proportional to the negative temperature gradient across it:
q = −k ∇u,
where k is the thermal conductivity of the material, u = u(x, t) is the temperature, and q = q(x, t) is a vector field that represents the magnitude and direction of the heat flow at the point x of space and time t.
If the medium is a thin rod of uniform section and material, the position x is a single coordinate and the heat flow towards increasing x is a scalar field q = q(x, t). The equation becomes
q = −k ∂u/∂x.
Let Q = Q(x, t) be the internal energy (heat) per unit volume of the bar at each point and time. The rate of change in heat per unit volume in the material, ∂Q/∂t, is proportional to the rate of change of its temperature, ∂u/∂t. That is,
∂Q/∂t = c ρ ∂u/∂t,
where c is the specific heat capacity (at constant pressure, in case of a gas) and ρ is the density (mass per unit volume) of the material. This derivation assumes that the material has constant mass density and heat capacity through space as well as time.
Applying the law of conservation of energy to a small element of the medium centred at x, one concludes that the rate at which heat accumulates at a given point is equal to minus the spatial derivative of the heat flow at that point (the difference between the heat flows on either side of the element). That is,
∂Q/∂t = −∂q/∂x.
From the above equations it follows that
∂u/∂t = (k / (c ρ)) ∂²u/∂x²,
which is the heat equation in one dimension, with diffusivity coefficient
α = k / (c ρ).
This quantity is called the thermal diffusivity of the medium.
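For a sense of scale, the short Python sketch below evaluates α = k/(cρ) for two everyday materials; the property values are rough textbook figures quoted only for illustration.

```python
# Thermal diffusivity alpha = k / (c * rho); the property values below are
# rough textbook figures used only for illustration.
materials = {
    #          k [W/(m K)],  c [J/(kg K)],  rho [kg/m^3]
    "copper":   (400.0,        385.0,        8960.0),
    "water":    (0.6,          4180.0,       1000.0),
}
for name, (k, c, rho) in materials.items():
    alpha = k / (c * rho)
    print(f"{name}: alpha ≈ {alpha:.2e} m^2/s")
```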
Accounting for radiative loss
An additional term may be introduced into the equation to account for radiative loss of heat. According to the Stefan–Boltzmann law, this term is μ (u⁴ − v⁴), where v is the temperature of the surroundings, and μ is a coefficient that depends on the Stefan-Boltzmann constant and the emissivity of the material. The rate of change in internal energy becomes
∂Q/∂t = −∂q/∂x − μ (u⁴ − v⁴),
and the equation for the evolution of u becomes
∂u/∂t = (k / (c ρ)) ∂²u/∂x² − (μ / (c ρ)) (u⁴ − v⁴).
Non-uniform isotropic medium
Note that the state equation, given by the first law of thermodynamics (i.e. conservation of energy), is written in the following form (assuming no mass transfer or radiation). This form is more general and particularly useful to recognize which property (e.g. cp, ρ or k) influences which term.
ρ cp ∂T/∂t − ∇ ⋅ (k ∇T) = q̇V,
where q̇V is the volumetric heat source.
Heat flow in non-homogeneous anisotropic media
In general, the study of heat conduction is based on several principles. Heat flow is a form of energy flow, and as such it is meaningful to speak of the time rate of flow of heat into a region of space.
The time rate of heat flow into a region V is given by a time-dependent quantity qt(V). We assume q has a density Q, so that
qt(V) = ∫V Q(x, t) dx.
Heat flow is a time-dependent vector function H(x) characterized as follows: the time rate of heat flowing through an infinitesimal surface element with area dS and with unit normal vector n is −H ⋅ n dS. Thus the rate of heat flow into V is also given by the surface integral
qt(V) = −∮∂V H ⋅ n dS,
where n(x) is the outward pointing normal vector at x.
The Fourier law states that heat energy flow has the following linear dependence on the temperature gradient
H(x) = −A(x) ∇u(x),
where A(x) is a 3 × 3 real matrix that is symmetric and positive definite.
By the divergence theorem, the previous surface integral for heat flow into V can be transformed into the volume integral
qt(V) = ∫V ∇ ⋅ (A(x) ∇u(x, t)) dx.
The time rate of temperature change at x is proportional to the heat flowing into an infinitesimal volume element, where the constant of proportionality is dependent on a constant κ:
∂u/∂t (x, t) = κ(x) Q(x, t).
Putting these equations together gives the general equation of heat flow:
∂u/∂t = κ(x) ∇ ⋅ (A(x) ∇u).
Remarks
The coefficient κ(x) is the inverse of specific heat of the substance at x × density of the substance at x: κ(x) = 1/(c(x) ρ(x)).
In the case of an isotropic medium, the matrix A is a scalar matrix equal to thermal conductivity k.
In the anisotropic case where the coefficient matrix A is not scalar and/or if it depends on x, then an explicit formula for the solution of the heat equation can seldom be written down, though it is usually possible to consider the associated abstract Cauchy problem and show that it is a well-posed problem and/or to show some qualitative properties (like preservation of positive initial data, infinite speed of propagation, convergence toward an equilibrium, smoothing properties). This is usually done by one-parameter semigroups theory: for instance, if A is a symmetric matrix, then the elliptic operator defined by is self-adjoint and dissipative, thus by the spectral theorem it generates a one-parameter semigroup.
Three-dimensional problem
In the special cases of propagation of heat in an isotropic and homogeneous medium in a 3-dimensional space, this equation is
∂u/∂t = α (∂²u/∂x² + ∂²u/∂y² + ∂²u/∂z²),
where:
u = u(x, y, z, t) is temperature as a function of space and time;
∂u/∂t is the rate of change of temperature at a point over time;
∂²u/∂x², ∂²u/∂y², and ∂²u/∂z² are the second spatial derivatives (thermal conductions) of temperature in the x, y, and z directions, respectively;
α = k/(cp ρ) is the thermal diffusivity, a material-specific quantity depending on the thermal conductivity k, the specific heat capacity cp, and the mass density ρ.
The heat equation is a consequence of Fourier's law of conduction (see heat conduction).
If the medium is not the whole space, in order to solve the heat equation uniquely we also need to specify boundary conditions for u. To determine uniqueness of solutions in the whole space it is necessary to assume additional conditions, for example an exponential bound on the growth of solutions or a sign condition (nonnegative solutions are unique by a result of David Widder).
Solutions of the heat equation are characterized by a gradual smoothing of the initial temperature distribution by the flow of heat from warmer to colder areas of an object. Generally, many different states and starting conditions will tend toward the same stable equilibrium. As a consequence, to reverse the solution and conclude something about earlier times or initial conditions from the present heat distribution is very inaccurate except over the shortest of time periods.
The heat equation is the prototypical example of a parabolic partial differential equation.
Using the Laplace operator, the heat equation can be simplified, and generalized to similar equations over spaces of arbitrary number of dimensions, as
∂u/∂t = α Δu = α ∇²u,
where the Laplace operator, Δ or ∇², the divergence of the gradient, is taken in the spatial variables.
The heat equation governs heat diffusion, as well as other diffusive processes, such as particle diffusion or the propagation of action potential in nerve cells. Although they are not diffusive in nature, some quantum mechanics problems are also governed by a mathematical analog of the heat equation (see below). It also can be used to model some phenomena arising in finance, like the Black–Scholes or the Ornstein-Uhlenbeck processes. The equation, and various non-linear analogues, has also been used in image analysis.
The heat equation is, technically, in violation of special relativity, because its solutions involve instantaneous propagation of a disturbance. The part of the disturbance outside the forward light cone can usually be safely neglected, but if it is necessary to develop a reasonable speed for the transmission of heat, a hyperbolic problem should be considered instead – like a partial differential equation involving a second-order time derivative. Some models of nonlinear heat conduction (which are also parabolic equations) have solutions with finite heat transmission speed.
Internal heat generation
The function u above represents temperature of a body. Alternatively, it is sometimes convenient to change units and represent u as the heat density of a medium. Since heat density is proportional to temperature in a homogeneous medium, the heat equation is still obeyed in the new units.
Suppose that a body obeys the heat equation and, in addition, generates its own heat per unit volume (e.g., in watts/litre - W/L) at a rate given by a known function q varying in space and time. Then the heat per unit volume u satisfies an equation
∂u/∂t = α ∇²u + q.
For example, a tungsten light bulb filament generates heat, so it would have a positive nonzero value for q when turned on. While the light is turned off, the value of q for the tungsten filament would be zero.
Solving the heat equation using Fourier series
The following solution technique for the heat equation was proposed by Joseph Fourier in his treatise Théorie analytique de la chaleur, published in 1822. Consider the heat equation for one space variable. This could be used to model heat conduction in a rod. The equation is
∂u/∂t = α ∂²u/∂x²,
where u = u(x, t) is a function of two variables x and t. Here
x is the space variable, so x ∈ [0, L], where L is the length of the rod.
t is the time variable, so t ≥ 0.
We assume the initial condition
u(x, 0) = f(x)  for all x ∈ [0, L],
where the function f is given, and the boundary conditions
u(0, t) = 0 = u(L, t)  for all t > 0.
Let us attempt to find a solution of the heat equation that is not identically zero satisfying the boundary conditions but with the following property: u is a product in which the dependence of u on x, t is separated, that is:
u(x, t) = X(x) T(t).
This solution technique is called separation of variables. Substituting u back into the heat equation,
T′(t) / (α T(t)) = X″(x) / X(x).
Since the right hand side depends only on x and the left hand side only on t, both sides are equal to some constant value −λ. Thus:
T′(t) = −λ α T(t),
and
X″(x) = −λ X(x).
We will now show that nontrivial solutions for X for values of λ ≤ 0 cannot occur:
1. Suppose that λ < 0. Then there exist real numbers B, C such that X(x) = B e^(√(−λ) x) + C e^(−√(−λ) x). From the boundary conditions we get X(0) = 0 = X(L), and therefore B = 0 = C, which implies u is identically 0.
2. Suppose that λ = 0. Then there exist real numbers B, C such that X(x) = Bx + C. From the boundary conditions we conclude in the same manner as in 1 that u is identically 0.
3. Therefore, it must be the case that λ > 0. Then there exist real numbers A, B, C such that T(t) = A e^(−λ α t) and X(x) = B sin(√λ x) + C cos(√λ x). From the boundary conditions we get C = 0 and that, for some positive integer n, √λ = n π / L.
This solves the heat equation in the special case that the dependence of u has the special form u(x, t) = X(x) T(t).
In general, the sum of solutions of the heat equation that satisfy the boundary conditions also satisfies the equation and the boundary conditions. We can show that the solution of the problem with the initial and boundary conditions above is given by
u(x, t) = Σ (n = 1 to ∞) Dn sin(n π x / L) exp(−n² π² α t / L²),
where
Dn = (2 / L) ∫ (0 to L) f(x) sin(n π x / L) dx.
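A direct way to see the series at work is to evaluate it numerically. The Python sketch below computes the coefficients Dn by numerical integration for an arbitrary initial profile f and sums a finite number of terms; the profile and all parameters are invented.

```python
import numpy as np

# Fourier-series solution u(x,t) = sum_n Dn sin(n pi x / L) exp(-n^2 pi^2 alpha t / L^2)
# for u(x,0) = f(x) and u(0,t) = u(L,t) = 0; f and all parameters are illustrative.
L, alpha, n_terms = 1.0, 0.5, 50
x = np.linspace(0.0, L, 401)
dx = x[1] - x[0]
f = x * (L - x)                         # example initial temperature profile

def coefficient(n):
    # Dn = (2/L) * integral_0^L f(x) sin(n pi x / L) dx, here by a simple Riemann sum
    return 2.0 / L * np.sum(f * np.sin(n * np.pi * x / L)) * dx

def u(t):
    total = np.zeros_like(x)
    for n in range(1, n_terms + 1):
        total += (coefficient(n) * np.sin(n * np.pi * x / L)
                  * np.exp(-n**2 * np.pi**2 * alpha * t / L**2))
    return total

print(u(0.0).max(), u(0.05).max())      # the peak decays as heat leaks out at the ends
```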
Generalizing the solution technique
The solution technique used above can be greatly extended to many other types of equations. The idea is that the operator uxx with the zero boundary conditions can be represented in terms of its eigenfunctions. This leads naturally to one of the basic ideas of the spectral theory of linear self-adjoint operators.
Consider the linear operator Δu = uxx. The infinite sequence of functions
en(x) = √(2/L) sin(n π x / L)
for n ≥ 1 are eigenfunctions of Δ. Indeed,
Δen = −(n² π² / L²) en.
Moreover, any eigenfunction f of Δ with the boundary conditions f(0) = f(L) = 0 is of the form en for some n ≥ 1. The functions en for n ≥ 1 form an orthonormal sequence with respect to a certain inner product on the space of real-valued functions on [0, L]. This means
⟨en, em⟩ = ∫ (0 to L) en(x) em(x) dx = 1 if n = m, and 0 otherwise.
Finally, the sequence {en}n ∈ N spans a dense linear subspace of L2((0, L)). This shows that in effect we have diagonalized the operator Δ.
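The orthonormality and eigenfunction claims can be verified numerically, as in the following Python sketch; the interval length and the indices checked are arbitrary.

```python
import numpy as np

# Numerical check that en(x) = sqrt(2/L) sin(n pi x / L) are orthonormal on [0, L]
# and are eigenfunctions of d^2/dx^2 with eigenvalue -(n pi / L)^2; L is arbitrary.
L = 2.0
x = np.linspace(0.0, L, 20001)
dx = x[1] - x[0]
e = lambda n: np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

print(np.sum(e(3) * e(3)) * dx)    # ≈ 1.0  (normalisation)
print(np.sum(e(3) * e(5)) * dx)    # ≈ 0.0  (orthogonality)

# Second derivative of e2 at an interior grid point versus -(2 pi / L)^2 * e2
d2 = (e(2)[:-2] - 2 * e(2)[1:-1] + e(2)[2:]) / dx**2
i = len(x) // 3
print(d2[i - 1], -(2 * np.pi / L)**2 * e(2)[i])   # the two values agree closely
```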
Mean-value property
Solutions of the heat equations
satisfy a mean-value property analogous to the mean-value properties of harmonic functions, solutions of
though a bit more complicated. Precisely, if u solves
and
then
where Eλ is a heat-ball, that is a super-level set of the fundamental solution of the heat equation:
Notice that
as λ → ∞ so the above formula holds for any (x, t) in the (open) set dom(u) for λ large enough.
Fundamental solutions
A fundamental solution of the heat equation is a solution that corresponds to the initial condition of an initial point source of heat at a known position. These can be used to find a general solution of the heat equation over certain domains.
In one variable, the Green's function is a solution of the initial value problem (by Duhamel's principle, equivalent to the definition of Green's function as one with a delta function as solution to the first equation)
∂u/∂t = α ∂²u/∂x²,    u(x, 0) = δ(x),
where δ is the Dirac delta function. The fundamental solution to this problem is given by the heat kernel
Φ(x, t) = (1 / √(4 π α t)) exp(−x² / (4 α t)).
One can obtain the general solution of the one variable heat equation with initial condition u(x, 0) = g(x) for −∞ < x < ∞ and 0 < t < ∞ by applying a convolution:
u(x, t) = ∫ Φ(x − y, t) g(y) dy.
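Numerically, the convolution formula can be applied directly to gridded initial data, as in the following Python sketch with a box-shaped initial temperature; the grid and the value of α are arbitrary, and the integral is approximated by a discrete convolution.

```python
import numpy as np

# Solving the initial value problem on the whole line by convolving the initial
# data g with the heat kernel Phi(x, t); alpha and g are illustrative.
alpha = 1.0
x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]
g = np.where(np.abs(x) < 1.0, 1.0, 0.0)            # box-shaped initial temperature

def solve(t):
    kernel = np.exp(-x**2 / (4 * alpha * t)) / np.sqrt(4 * np.pi * alpha * t)
    # discretised integral of Phi(x - y, t) g(y) dy
    return np.convolve(g, kernel, mode="same") * dx

u = solve(0.5)
print(u[1000], u[1200])   # centre value has dropped below 1; the box has spread out
```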
In several spatial variables, the fundamental solution solves the analogous problem
∂u/∂t = α (∂²u/∂x1² + ⋯ + ∂²u/∂xn²),    u(x, 0) = δ(x).
The n-variable fundamental solution is the product of the fundamental solutions in each variable; i.e.,
Φ(x, t) = Φ(x1, t) Φ(x2, t) ⋯ Φ(xn, t) = (4 π α t)^(−n/2) exp(−|x|² / (4 α t)).
The general solution of the heat equation on Rn is then obtained by a convolution, so that to solve the initial value problem with u(x, 0) = g(x), one has
u(x, t) = ∫Rn Φ(x − y, t) g(y) dy.
The general problem on a domain Ω in Rn is
∂u/∂t = α Δu  for x ∈ Ω, t > 0,   u(x, 0) = g(x)  for x ∈ Ω,
with either Dirichlet or Neumann boundary data. A Green's function always exists, but unless the domain Ω can be readily decomposed into one-variable problems (see below), it may not be possible to write it down explicitly. Other methods for obtaining Green's functions include the method of images, separation of variables, and Laplace transforms (Cole, 2011).
Some Green's function solutions in 1D
A variety of elementary Green's function solutions in one-dimension are recorded here; many others are available elsewhere. In some of these, the spatial domain is (−∞,∞). In others, it is the semi-infinite interval (0,∞) with either Neumann or Dirichlet boundary conditions. One further variation is that some of these solve the inhomogeneous equation
∂u/∂t = α ∂²u/∂x² + f(x, t),
where f is some given function of x and t.
Homogeneous heat equation
Initial value problem on (−∞,∞)
Comment. This solution is the convolution with respect to the variable x of the fundamental solution
Φ(x, t) = (1 / √(4 π α t)) exp(−x² / (4 α t))
and the function g(x). (The Green's function number of the fundamental solution is X00.)
Therefore, according to the general properties of the convolution with respect to differentiation, u = g ∗ Φ is a solution of the same heat equation, for t > 0. Moreover,
∫ Φ(x, t) dx = 1  for every t > 0,
so that, by general facts about approximation to the identity, Φ(⋅, t) ∗ g → g as t → 0 in various senses, according to the specific g. For instance, if g is assumed bounded and continuous on R then Φ(⋅, t) ∗ g converges uniformly to g as t → 0, meaning that u(x, t) is continuous on R × [0, ∞), with u(x, 0) = g(x).
Initial value problem on (0,∞) with homogeneous Dirichlet boundary conditions
Comment. This solution is obtained from the preceding formula as applied to the data g(x) suitably extended to R, so as to be an odd function, that is, letting g(−x) := −g(x) for all x. Correspondingly, the solution of the initial value problem on (−∞,∞) is an odd function with respect to the variable x for all values of t, and in particular it satisfies the homogeneous Dirichlet boundary conditions u(0, t) = 0.
The Green's function number of this solution is X10.
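The odd-extension (method of images) recipe translates directly into a few lines of code, sketched below in Python for an arbitrary initial profile supported on x > 0; the data and parameters are invented, and the convolution integral is approximated by a simple Riemann sum.

```python
import numpy as np

# Method-of-images sketch for the half-line problem with u(0, t) = 0: extend the
# initial data g as an odd function and convolve with the heat kernel.
alpha = 1.0
y = np.linspace(-20, 20, 4001); dy = y[1] - y[0]
g = np.where(y > 0, np.exp(-(y - 3.0)**2), 0.0)     # illustrative data supported on x > 0
g_odd = g - g[::-1]                                  # odd extension: g(-y) := -g(y)

def u(x, t):
    kernel = np.exp(-(x - y)**2 / (4 * alpha * t)) / np.sqrt(4 * np.pi * alpha * t)
    return np.sum(kernel * g_odd) * dy

print(u(0.0, 0.5))    # ~0: the Dirichlet condition at the origin is respected
print(u(3.0, 0.5))    # positive: heat remains near the initial hot spot
```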
Initial value problem on (0,∞) with homogeneous Neumann boundary conditions
Comment. This solution is obtained from the first solution formula as applied to the data g(x) suitably extended to R so as to be an even function, that is, letting g(−x) := g(x) for all x. Correspondingly, the solution of the initial value problem on R is an even function with respect to the variable x for all values of t > 0, and in particular, being smooth, it satisfies the homogeneous Neumann boundary conditions ux(0, t) = 0. The Green's function number of this solution is X20.
Problem on (0,∞) with homogeneous initial conditions and non-homogeneous Dirichlet boundary conditions
Comment. This solution is the convolution with respect to the variable t of
and the function h(t). Since Φ(x, t) is the fundamental solution of
the function ψ(x, t) is also a solution of the same heat equation, and so is u := ψ ∗ h, thanks to general properties of the convolution with respect to differentiation. Moreover,
so that, by general facts about approximation to the identity, ψ(x, ⋅) ∗ h → h as x → 0 in various senses, according to the specific h. For instance, if h is assumed continuous on R with support in [0, ∞) then ψ(x, ⋅) ∗ h converges uniformly on compacta to h as x → 0, meaning that u(x, t) is continuous on with
Inhomogeneous heat equation
Problem on (-∞,∞) homogeneous initial conditions
Comment. This solution is the convolution in R2, that is with respect to both the variables x and t, of the fundamental solution
and the function f(x, t), both meant as defined on the whole R2 and identically 0 for all t → 0. One verifies that
which expressed in the language of distributions becomes
where the distribution δ is the Dirac's delta function, that is the evaluation at 0.
Problem on (0,∞) with homogeneous Dirichlet boundary conditions and initial conditions
Comment. This solution is obtained from the preceding formula as applied to the data f(x, t) suitably extended to R × [0,∞), so as to be an odd function of the variable x, that is, letting f(−x, t) := −f(x, t) for all x and t. Correspondingly, the solution of the inhomogeneous problem on (−∞,∞) is an odd function with respect to the variable x for all values of t, and in particular it satisfies the homogeneous Dirichlet boundary conditions u(0, t) = 0.
Problem on (0,∞) with homogeneous Neumann boundary conditions and initial conditions
Comment. This solution is obtained from the first formula as applied to the data f(x, t) suitably extended to R × [0,∞), so as to be an even function of the variable x, that is, letting f(−x, t) := f(x, t) for all x and t. Correspondingly, the solution of the inhomogeneous problem on (−∞,∞) is an even function with respect to the variable x for all values of t, and in particular, being a smooth function, it satisfies the homogeneous Neumann boundary conditions ux(0, t) = 0.
Examples
Since the heat equation is linear, solutions of other combinations of boundary conditions, inhomogeneous term, and initial conditions can be found by taking an appropriate linear combination of the above Green's function solutions.
For example, to solve
let u = w + v where w and v solve the problems
Similarly, to solve
let u = w + v + r where w, v, and r solve the problems
Applications
As the prototypical parabolic partial differential equation, the heat equation is among the most widely studied topics in pure mathematics, and its analysis is regarded as fundamental to the broader field of partial differential equations. The heat equation can also be considered on Riemannian manifolds, leading to many geometric applications. Following work of Subbaramiah Minakshisundaram and Åke Pleijel, the heat equation is closely related with spectral geometry. A seminal nonlinear variant of the heat equation was introduced to differential geometry by James Eells and Joseph Sampson in 1964, inspiring the introduction of the Ricci flow by Richard Hamilton in 1982 and culminating in the proof of the Poincaré conjecture by Grigori Perelman in 2003. Certain solutions of the heat equation known as heat kernels provide subtle information about the region on which they are defined, as exemplified through their application to the Atiyah–Singer index theorem.
The heat equation, along with variants thereof, is also important in many fields of science and applied mathematics. In probability theory, the heat equation is connected with the study of random walks and Brownian motion via the Fokker–Planck equation. The Black–Scholes equation of financial mathematics is a small variant of the heat equation, and the Schrödinger equation of quantum mechanics can be regarded as a heat equation in imaginary time. In image analysis, the heat equation is sometimes used to resolve pixelation and to identify edges. Following Robert Richtmyer and John von Neumann's introduction of artificial viscosity methods, solutions of heat equations have been useful in the mathematical formulation of hydrodynamical shocks. Solutions of the heat equation have also been given much attention in the numerical analysis literature, beginning in the 1950s with work of Jim Douglas, D.W. Peaceman, and Henry Rachford Jr.
Particle diffusion
One can model particle diffusion by an equation involving either:
the volumetric concentration of particles, denoted c, in the case of collective diffusion of a large number of particles, or
the probability density function associated with the position of a single particle, denoted P.
In either case, one uses the heat equation
∂c/∂t = D Δc,
or
∂P/∂t = D ΔP.
Both c and P are functions of position and time. D is the diffusion coefficient that controls the speed of the diffusive process, and is typically expressed in square meters per second. If the diffusion coefficient D is not constant, but depends on the concentration c (or P in the second case), then one gets the nonlinear diffusion equation.
Brownian motion
Let the stochastic process X be the solution to the stochastic differential equation
dXt = √(2D) dBt,
where B is the Wiener process (standard Brownian motion). The probability density function of Xt is given at any time t by
u(x, t) = (1 / √(4 π D t)) exp(−x² / (4 D t)),
which is the solution to the initial value problem
∂u/∂t = D ∂²u/∂x²,    u(x, 0) = δ(x),
where δ is the Dirac delta function.
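To make the connection concrete, the following Python sketch samples a large number of Brownian particles at a single time and compares their empirical density with the heat kernel; the diffusivity D, time and sample size are arbitrary illustration values, and the exact normal law of Xt is sampled directly rather than by simulating increments.

```python
import numpy as np

# Monte-Carlo check that Brownian particles started at the origin are distributed
# like the heat kernel; D, t and the sample size are illustrative.
rng = np.random.default_rng(0)
D, t, n = 1.0, 2.0, 200_000
# With dX_t = sqrt(2 D) dB_t and X_0 = 0, X_t is normal with mean 0 and variance 2 D t.
samples = np.sqrt(2 * D * t) * rng.standard_normal(n)

print(samples.var())                      # ≈ 2 D t = 4.0
x = 1.0
density = np.exp(-x**2 / (4 * D * t)) / np.sqrt(4 * np.pi * D * t)
empirical = np.mean(np.abs(samples - x) < 0.05) / 0.1
print(density, empirical)                 # the two agree to a few per cent
```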
Schrödinger equation for a free particle
With a simple division, the Schrödinger equation for a single particle of mass m in the absence of any applied force field can be rewritten in the following way:
∂ψ/∂t = (iħ / (2m)) Δψ,
where i is the imaginary unit, ħ is the reduced Planck constant, and ψ is the wave function of the particle.
This equation is formally similar to the particle diffusion equation, which one obtains through the following transformation:
c(x, t) → ψ(x, t),   D → iħ / (2m).
Applying this transformation to the expressions of the Green functions determined in the case of particle diffusion yields the Green functions of the Schrödinger equation, which in turn can be used to obtain the wave function at any time through an integral on the wave function at t = 0:
ψ(x, t) = ∫ G(x − y, t) ψ(y, 0) dy,
with
G(x, t) = √(m / (2 π i ħ t)) exp(−m x² / (2 i ħ t)).
Remark: this analogy between quantum mechanics and diffusion is a purely formal one. Physically, the evolution of the wave function satisfying Schrödinger's equation might have an origin other than diffusion.
Thermal diffusivity in polymers
A direct practical application of the heat equation, in conjunction with Fourier theory, in spherical coordinates, is the prediction of thermal transfer profiles and the measurement of the thermal diffusivity in polymers (Unsworth and Duarte). This dual theoretical-experimental method is applicable to rubber, various other polymeric materials of practical interest, and microfluids. These authors derived an expression for the temperature at the center of a sphere
where is the initial temperature of the sphere and the temperature at the surface of the sphere, of radius . This equation has also found applications in protein energy transfer and thermal modeling in biophysics.
Financial Mathematics
The heat equation arises in a number of phenomena and is often used in financial mathematics in the modeling of options. The Black–Scholes option pricing model's differential equation can be transformed into the heat equation allowing relatively easy solutions from a familiar body of mathematics. Many of the extensions to the simple option models do not have closed form solutions and thus must be solved numerically to obtain a modeled option price. The equation describing pressure diffusion in a porous medium is identical in form with the heat equation. Diffusion problems dealing with Dirichlet, Neumann and Robin boundary conditions have closed form analytic solutions.
Image Analysis
The heat equation is also widely used in image analysis and in machine learning as the driving theory behind scale-space or graph Laplacian methods. The heat equation can be efficiently solved numerically using the implicit Crank–Nicolson method. This method can be extended to many of the models with no closed form solution.
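As a rough illustration of the scheme, here is a minimal Crank–Nicolson sketch in Python for the one-dimensional equation ∂u/∂t = α ∂²u/∂x² with fixed end temperatures; the grid, time step and initial profile are invented, and a production code would use a sparse or banded solver rather than dense matrices.

```python
import numpy as np

# Minimal Crank–Nicolson stepping for u_t = alpha * u_xx with fixed (Dirichlet) ends;
# the grid, time step and initial profile are illustrative.
alpha, L, n, dt = 1.0, 1.0, 101, 1e-3
x = np.linspace(0, L, n); dx = x[1] - x[0]
r = alpha * dt / (2 * dx**2)

# Build the implicit (A) and explicit (B) matrices of the scheme
lap = np.zeros((n, n))
for i in range(1, n - 1):
    lap[i, i - 1] = lap[i, i + 1] = 1.0
    lap[i, i] = -2.0
A = np.eye(n) - r * lap
B = np.eye(n) + r * lap

u = np.sin(np.pi * x)                   # initial temperature, zero at both ends
for _ in range(200):
    u = np.linalg.solve(A, B @ u)       # one Crank–Nicolson time step

exact = np.exp(-np.pi**2 * alpha * 200 * dt) * np.sin(np.pi * x)
print(np.max(np.abs(u - exact)))        # small: the scheme tracks the exact decay
```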
Riemannian geometry
An abstract form of heat equation on manifolds provides a major approach to the Atiyah–Singer index theorem, and has led to much further work on heat equations in Riemannian geometry.
See also
Caloric polynomial
Curve-shortening flow
Diffusion equation
Relativistic heat conduction
Schrödinger equation
Weierstrass transform
Notes
References
Further reading
External links
Derivation of the heat equation
Linear heat equations: Particular solutions and boundary value problems - from EqWorld
Heat equation
Equation
Parabolic partial differential equations
Heat transfer | Heat equation | [
"Physics",
"Chemistry"
] | 7,049 | [
"Transport phenomena",
"Physical phenomena",
"Heat transfer",
"Diffusion",
"Thermodynamics",
"Heat conduction"
] |
166,428 | https://en.wikipedia.org/wiki/Ultra-wideband | Ultra-wideband (UWB, ultra wideband, ultra-wide band and ultraband) is a radio technology that can use a very low energy level for short-range, high-bandwidth communications over a large portion of the radio spectrum. UWB has traditional applications in non-cooperative radar imaging. Most recent applications target sensor data collection, precise locating, and tracking. UWB support started to appear in high-end smartphones in 2019.
Characteristics
Ultra-wideband is a technology for transmitting information across a wide bandwidth (>500 MHz). This allows for the transmission of a large amount of signal energy without interfering with conventional narrowband and carrier wave transmission in the same frequency band. Regulatory limits in many countries allow for this efficient use of radio bandwidth, and enable high-data-rate personal area network (PAN) wireless connectivity, longer-range low-data-rate applications, and the transparent co-existence of radar and imaging systems with existing communications systems.
Ultra-wideband was formerly known as pulse radio, but the FCC and the International Telecommunication Union Radiocommunication Sector (ITU-R) currently define UWB as an antenna transmission for which emitted signal bandwidth exceeds the lesser of 500 MHz or 20% of the arithmetic center frequency. Thus, pulse-based systems—where each transmitted pulse occupies the UWB bandwidth (or an aggregate of at least 500 MHz of a narrow-band carrier; for example, orthogonal frequency-division multiplexing (OFDM))—can access the UWB spectrum under the rules.
Theory
A significant difference between conventional radio transmissions and UWB is that conventional systems transmit information by varying the power level, frequency, or phase (or a combination of these) of a sinusoidal wave. UWB transmissions transmit information by generating radio energy at specific time intervals and occupying a large bandwidth, thus enabling pulse-position or time modulation. The information can also be modulated on UWB signals (pulses) by encoding the polarity of the pulse, its amplitude and/or by using orthogonal pulses. UWB pulses can be sent sporadically at relatively low pulse rates to support time or position modulation, but can also be sent at rates up to the inverse of the UWB pulse bandwidth. Pulse-UWB systems have been demonstrated at channel pulse rates in excess of 1.3 billion pulses per second using a continuous stream of UWB pulses (Continuous Pulse UWB or C-UWB), while supporting forward error-correction encoded data rates in excess of 675 Mbit/s.
A UWB radio system can be used to determine the "time of flight" of the transmission at various frequencies. This helps overcome multipath propagation, since some of the frequencies have a line-of-sight trajectory, while other indirect paths have longer delays. With a cooperative symmetric two-way metering technique, distances can be measured to high resolution and accuracy.
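As a rough illustration of the two-way metering idea, the Python sketch below converts a measured round-trip time and a known responder turn-around delay into a distance; the timestamps are invented, and a real UWB ranging exchange additionally corrects for clock drift and antenna delays.

```python
# Sketch of symmetric two-way ranging (TWR) as used for UWB distance estimates.
# The timestamps below are invented; a real system also corrects for clock drift
# and antenna delays.

C = 299_792_458.0          # speed of light, m/s

def two_way_range(t_round, t_reply):
    """Distance from the initiator's round-trip time and the responder's
    known reply (turn-around) delay: time of flight = (t_round - t_reply) / 2."""
    tof = (t_round - t_reply) / 2.0
    return C * tof

# Example: 120 ns measured round trip, 100 ns spent inside the responder
print(f"{two_way_range(120e-9, 100e-9):.2f} m")   # ≈ 3.00 m
```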
Applications
Real-time location
Ultra-wideband (UWB) technology is utilised for real-time locationing due to its precision and reliability. It plays a role in various industries such as logistics, healthcare, manufacturing, and transportation. UWB's centimeter-level accuracy is valuable in applications in which using traditional methods may be unsuitable, such as in indoor environments, where GPS precision may be hindered. Its low power consumption ensures minimal interference and allows for coexistence with existing infrastructure. UWB performs well in challenging environments with its immunity to multipath interference, providing consistent and accurate positioning. In logistics, UWB increases inventory tracking efficiency, reducing losses and optimizing operations. Healthcare makes use of UWB in asset tracking, patient flow optimization, and in improving care coordination. In manufacturing, UWB is used for streamlining inventory management and enhancing production efficiency through accurate tracking of materials and tools. UWB supports route planning, fleet management, and vehicle security in transportation systems.
UWB uses multiple techniques for location detection:
Time of flight (ToF)
Time difference of arrival (TDoA)
Two-way ranging (TWR)
Mobile devices with UWB capability
Apple launched the first three phones with ultra-wideband capabilities in September 2019, namely, the iPhone 11, iPhone 11 Pro, and iPhone 11 Pro Max. Apple also launched Series 6 of Apple Watch in September 2020, which features UWB, and their AirTags featuring this technology were revealed at a press event on April 20, 2021. The Samsung Galaxy Note 20 Ultra, Galaxy S21+, and Galaxy S21 Ultra also began supporting UWB, along with the Samsung Galaxy SmartTag+.
The Xiaomi MIX 4 released in August 2021 supports UWB, and offers the capability of connecting to select AIoT devices.
The FiRa Consortium was founded in August 2019 to develop interoperable UWB ecosystems including mobile phones. Samsung, Xiaomi, and Oppo are currently members of the FiRa Consortium. In November 2020, the Android Open Source Project received its first patches related to an upcoming UWB API; "feature-complete" UWB support (limited to ranging between supported devices) was released in version 13 of Android.
Industrial applications
Automation and robotics: Its high data rate and low latency enable real-time communication and control between machines and systems. UWB-based communication protocols ensure reliable and secure data transmission, enabling precise coordination and synchronization of automated processes. This enhances manufacturing efficiency, reduces errors, and improves overall productivity. UWB can also be integrated into robotic systems to enable precise localization, object detection, and collision avoidance, further enhancing the safety and efficiency of industrial automation.
Worker safety and proximity sensing: Worker safety is a concern in industrial settings. UWB technology provides effective proximity sensing and worker safety solutions. By equipping workers with UWB-enabled devices or badges, companies can monitor their location and movement in real-time. UWB-based systems can detect potential collisions between workers and machinery, issuing timely warnings to prevent accidents. Moreover, UWB technology allows for the creation of safety zones and controlled access areas, ensuring the safe interaction of workers with hazardous equipment or restricted zones. This helps enhance workplace safety, reduce accidents, and protect employees from potential hazards.
Asset tracking and management: Efficient asset tracking and management are crucial for industrial operations. UWB enables precise and real-time tracking of assets within industrial facilities. By attaching UWB tags to equipment, tools, and inventory, companies can monitor their location, movement, and utilization. This enhances inventory management, reduces asset loss, minimizes downtime, and streamlines maintenance processes. UWB-based asset tracking systems provide accurate and reliable data, empowering businesses to optimize their resource allocation and improve overall operational efficiency.
Radar
Ultra-wideband gained widespread attention for its implementation in synthetic aperture radar (SAR) technology. Because it achieves high resolution at relatively low frequencies, UWB SAR was heavily researched for its object-penetration ability. Starting in the early 1990s, the U.S. Army Research Laboratory (ARL) developed various stationary and mobile ground-, foliage-, and wall-penetrating radar platforms that served to detect and identify buried IEDs and hidden adversaries at a safe distance. Examples include the railSAR, the boomSAR, the SIRE radar, and the SAFIRE radar. ARL has also investigated whether UWB radar technology can incorporate Doppler processing to estimate the velocity of a moving target when the platform is stationary. While a 2013 report highlighted problems with UWB waveforms caused by target range migration during the integration interval, more recent studies have suggested that UWB waveforms can demonstrate better performance compared to conventional Doppler processing as long as a correct matched filter is used.
Ultra-wideband pulse Doppler radars have also been used to monitor vital signs of the human body, such as heart rate and respiration, as well as for human gait analysis and fall detection. This approach is a potential alternative to continuous-wave radar systems since it consumes less power and provides a high-resolution range profile. However, its low signal-to-noise ratio makes it vulnerable to errors. A commercial example of this application is RayBaby, a baby monitor that detects breathing and heart rate to determine whether a baby is asleep or awake. RayBaby has a detection range of five meters and can detect fine movements of less than a millimeter.
Ultra-wideband is also used in "see-through-the-wall" precision radar-imaging technology, precision locating and tracking (using distance measurements between radios), and precision time-of-arrival-based localization approaches. UWB radar has been proposed as the active sensor component in an Automatic Target Recognition application, designed to detect humans or objects that have fallen onto subway tracks.
Data transfer
Ultra-wideband characteristics are well-suited to short-range applications, such as PC peripherals, wireless monitors, camcorders, wireless printing, and file transfers to portable media players. UWB was proposed for use in personal area networks, and appeared in the IEEE 802.15.3a draft PAN standard. However, after several years of deadlock, the IEEE 802.15.3a task group was dissolved in 2006. The work was completed by the WiMedia Alliance and the USB Implementers Forum. Slow progress in UWB standards development, the cost of initial implementation, and performance significantly lower than initially expected are several reasons for the limited use of UWB in consumer products (which caused several UWB vendors to cease operations in 2008 and 2009).
Autonomous vehicles
UWB's precise positioning and ranging capabilities enable collision avoidance and centimeter-level localization accuracy, surpassing traditional GPS systems. Moreover, its high data rate and low latency facilitate seamless vehicle-to-vehicle communication, promoting real-time information exchange and coordinated actions. UWB also enables effective vehicle-to-infrastructure communication, integrating with infrastructure elements for optimized behavior based on precise timing and synchronized data. Additionally, UWB's versatility supports innovative applications such as high-resolution radar imaging for advanced driver assistance systems, secure keyless entry via biometrics or device pairing, and occupant monitoring systems, potentially enhancing convenience, security, and passenger safety.
UWB products/chips
Regulation
In the U.S., ultra-wideband refers to radio technology with a bandwidth exceeding the lesser of 500 MHz or 20% of the arithmetic center frequency, according to the U.S. Federal Communications Commission (FCC). A February 14, 2002 FCC Report and Order authorized the unlicensed use of UWB in the frequency range from 3.1 to 10.6 GHz. The FCC power spectral density (PSD) emission limit for UWB transmitters is −41.3 dBm/MHz. This limit also applies to unintentional emitters in the UWB band (the "Part 15" limit). However, the emission limit for UWB emitters may be significantly lower (as low as −75 dBm/MHz) in other segments of the spectrum.
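To put the −41.3 dBm/MHz figure in perspective, integrating it over the full 3.1–10.6 GHz band gives a total radiated power on the order of half a milliwatt. A small illustrative calculation (variable names are arbitrary):

```python
def dbm_to_mw(dbm):
    return 10.0 ** (dbm / 10.0)

psd_limit_dbm_per_mhz = -41.3   # FCC Part 15 UWB emission limit
bandwidth_mhz = 7500.0          # 3.1 GHz to 10.6 GHz
total_mw = dbm_to_mw(psd_limit_dbm_per_mhz) * bandwidth_mhz
print(round(total_mw, 3))       # ~0.556 mW across the whole band
```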
Deliberations in the International Telecommunication Union Radiocommunication Sector (ITU-R) resulted in a Report and Recommendation on UWB in November 2005. UK regulator Ofcom announced a similar decision on 9 August 2007.
There has been concern over interference between narrowband and UWB signals that share the same spectrum. Earlier, the only radio technology that used pulses was spark-gap transmitters, which international treaties banned because they interfere with medium-wave receivers. However, UWB uses much lower levels of power. The subject was extensively covered in the proceedings that led to the adoption of the FCC rules in the US, and in the meetings of the ITU-R leading to its Report and Recommendations on UWB technology. Commonly-used electrical appliances emit impulsive noise (for example, hair dryers), and proponents successfully argued that the noise floor would not be raised excessively by wider deployment of low power wideband transmitters.
Coexistence with other standards
In February 2002, the Federal Communications Commission (FCC) released an amendment (Part 15) that specifies the rules of UWB transmission and reception. According to this release, any signal with fractional bandwidth greater than 20% or with a bandwidth greater than 500 MHz is considered a UWB signal. The FCC ruling also makes 7.5 GHz of unlicensed spectrum between 3.1 and 10.6 GHz available for communication and measurement systems.
Narrowband signals that exist in the UWB range, such as IEEE 802.11a transmissions, may exhibit high PSD levels compared to UWB signals as seen by a UWB receiver. As a result, one would expect a degradation of UWB bit error rate performance.
Technology groups
See also
References
External links
IEEE 802.15.4a Includes a C-UWB physical layer, may be obtained from
Standard ECMA-368 High Rate Ultra Wideband PHY and MAC Standard
Standard ECMA-369 MAC-PHY Interface for ECMA-368
Standard ISO/IEC 26907:2009
Standard ISO/IEC 26908:2009
ITU-R Recommendations – SM series See: RECOMMENDATION ITU R SM.1757 Impact of devices using ultra-wideband technology on systems operating within radiocommunication services.
FCC (GPO) Title 47, Section 15 of the Code of Federal Regulations SubPart F: Ultra-wideband
Use of MIMO techniques for UWB
Numerous useful links and resources regarding Ultra-Wideband and UWB testbeds – WCSP Group – University of South Florida (USF)
The Ultra-Wideband Radio Laboratory at the University of Southern California
Data transmission
Radio communications
Radio technology | Ultra-wideband | [
"Technology",
"Engineering"
] | 2,859 | [
"Information and communications technology",
"Radio communications",
"Telecommunications engineering",
"Radio technology"
] |
166,441 | https://en.wikipedia.org/wiki/Lyapunov%20exponent | In mathematics, the Lyapunov exponent or Lyapunov characteristic exponent of a dynamical system is a quantity that characterizes the rate of separation of infinitesimally close trajectories. Quantitatively, two trajectories in phase space with initial separation vector $\delta\mathbf{Z}_0$ diverge (provided that the divergence can be treated within the linearized approximation) at a rate given by
$|\delta\mathbf{Z}(t)| \approx e^{\lambda t}\,|\delta\mathbf{Z}_0|,$
where $\lambda$ is the Lyapunov exponent.
The rate of separation can be different for different orientations of initial separation vector. Thus, there is a spectrum of Lyapunov exponents—equal in number to the dimensionality of the phase space. It is common to refer to the largest one as the maximal Lyapunov exponent (MLE), because it determines a notion of predictability for a dynamical system. A positive MLE is usually taken as an indication that the system is chaotic (provided some other conditions are met, e.g., phase space compactness). Note that an arbitrary initial separation vector will typically contain some component in the direction associated with the MLE, and because of the exponential growth rate, the effect of the other exponents will be obliterated over time.
The exponent is named after Aleksandr Lyapunov.
Definition of the maximal Lyapunov exponent
The maximal Lyapunov exponent can be defined as follows:
$\lambda = \lim_{t\to\infty} \lim_{|\delta\mathbf{Z}_0|\to 0} \frac{1}{t} \ln\frac{|\delta\mathbf{Z}(t)|}{|\delta\mathbf{Z}_0|}.$
The limit $|\delta\mathbf{Z}_0|\to 0$ ensures the validity of the linear approximation
at any time.
For a discrete-time system (maps or fixed point iterations) $x_{n+1} = f(x_n)$, for an orbit starting with $x_0$ this translates into:
$\lambda(x_0) = \lim_{n\to\infty} \frac{1}{n} \sum_{i=0}^{n-1} \ln\left|f'(x_i)\right|.$
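As a concrete check of the discrete-time formula, the logistic map $x_{n+1} = r x_n (1 - x_n)$ at $r = 4$ has Lyapunov exponent $\ln 2 \approx 0.693$. A minimal Python estimate (iteration count and seed are arbitrary choices):

```python
import math

def logistic_lyapunov(r=4.0, n=200_000, x0=0.3):
    """Orbit average of log|f'(x)| = log|r*(1 - 2x)| for the logistic map."""
    x, total = x0, 0.0
    for _ in range(n):
        total += math.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    return total / n

print(logistic_lyapunov())  # approximately 0.693 = ln 2 at r = 4
```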
Definition of the Lyapunov spectrum
For a dynamical system with evolution equation $\dot{x}_i = f_i(x)$ in an n–dimensional phase space, the spectrum of Lyapunov exponents
$\{\lambda_1, \lambda_2, \ldots, \lambda_n\},$
in general, depends on the starting point $x_0$. However, we will usually be interested in the attractor (or attractors) of a dynamical system, and there will normally be one set of exponents associated with each attractor. The choice of starting point may determine which attractor the system ends up on, if there is more than one. (For Hamiltonian systems, which do not have attractors, this is not a concern.) The Lyapunov exponents describe the behavior of vectors in the tangent space of the phase space and are defined from the Jacobian matrix $J_{ij}(t) = \partial f_i(x(t))/\partial x_j$;
this Jacobian defines the evolution of the tangent vectors, given by the matrix $Y$, via the equation
$\dot{Y} = J\,Y,$
with the initial condition $Y_{ij}(0) = \delta_{ij}$. The matrix $Y$ describes how a small change at the point $x(0)$ propagates to the final point $x(t)$. The limit
$\Lambda = \lim_{t\to\infty} \frac{1}{2t} \log\!\left(Y(t)\,Y^{\mathsf{T}}(t)\right)$
defines a matrix $\Lambda$ (the conditions for the existence of the limit are given by the Oseledets theorem). The Lyapunov exponents $\lambda_i$ are defined by the eigenvalues of $\Lambda$.
The set of Lyapunov exponents will be the same for almost all starting points of an ergodic component of the dynamical system.
Lyapunov exponent for time-varying linearization
To introduce the Lyapunov exponent, consider a fundamental matrix $X(t)$ (e.g., for linearization along a stationary solution in a continuous system), consisting of the linearly independent solutions of the first-order approximation of the system. The singular values of the matrix $X(t)$ are the square roots of the eigenvalues of the matrix $X(t)^{\mathsf{T}} X(t)$.
The largest Lyapunov exponent is
$\lambda_{\max} = \limsup_{t\to\infty} \frac{1}{t} \ln \alpha_1\!\left(X(t)\right),$
where $\alpha_1(X(t))$ denotes the largest singular value.
Lyapunov proved that if the system of the first approximation is regular (e.g., all systems with constant and periodic coefficients are regular) and its largest Lyapunov exponent is negative, then the solution of the original system is asymptotically Lyapunov stable.
Later, it was stated by O. Perron that the requirement of regularity of the first approximation is substantial.
Perron effects of largest Lyapunov exponent sign inversion
In 1930 O. Perron constructed an example of a second-order system, where the first approximation has negative Lyapunov exponents along a zero solution of the original system but, at the same time, this zero solution of the original nonlinear system is Lyapunov unstable. Furthermore, in a certain neighborhood of this zero solution almost all solutions of the original system have positive Lyapunov exponents. Also, it is possible to construct a reverse example in which the first approximation has positive Lyapunov exponents along a zero solution of the original system but, at the same time, this zero solution of the original nonlinear system
is Lyapunov stable.
The effect of sign inversion of Lyapunov exponents of solutions of the original system and the system of first approximation with the same initial data was subsequently
called the Perron effect.
Perron's counterexample shows that a negative largest Lyapunov exponent does not, in general, indicate stability, and that
a positive largest Lyapunov exponent does not, in general, indicate chaos.
Therefore, time-varying linearization requires additional justification.
Basic properties
If the system is conservative (i.e., there is no dissipation), a volume element of the phase space will stay the same along a trajectory. Thus the sum of all Lyapunov exponents must be zero. If the system is dissipative, the sum of Lyapunov exponents is negative.
If the system is a flow and the trajectory does not converge to a single point, one exponent is always zero—the Lyapunov exponent corresponding to the eigenvalue of $\Lambda$ with an eigenvector in the direction of the flow.
Significance of the Lyapunov spectrum
The Lyapunov spectrum can be used to give an estimate of the rate of entropy production,
of the fractal dimension, and of the Hausdorff dimension of the considered dynamical system. In particular, from the knowledge of the Lyapunov spectrum it is possible to obtain the so-called Lyapunov dimension (or Kaplan–Yorke dimension) $D_{KY}$, which is defined as follows:
$D_{KY} = k + \frac{\sum_{i=1}^{k} \lambda_i}{|\lambda_{k+1}|},$
where $k$ is the maximum integer such that the sum of the $k$ largest exponents is still non-negative. $D_{KY}$ represents an upper bound for the information dimension of the system. Moreover, the sum of all the positive Lyapunov exponents gives an estimate of the Kolmogorov–Sinai entropy, according to Pesin's theorem.
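A small Python sketch of the Kaplan–Yorke formula, applied to a commonly quoted approximate spectrum for the classical Lorenz attractor (the numerical values are illustrative):

```python
def kaplan_yorke_dimension(exponents):
    """Lyapunov (Kaplan-Yorke) dimension from a list of Lyapunov exponents."""
    lam = sorted(exponents, reverse=True)
    partial, k = 0.0, 0
    for value in lam:
        if partial + value >= 0.0:   # keep adding while the partial sum stays non-negative
            partial += value
            k += 1
        else:
            break
    if k == len(lam):                # degenerate case: every partial sum is non-negative
        return float(k)
    return k + partial / abs(lam[k])

# Approximate spectrum often quoted for the classical Lorenz system:
print(kaplan_yorke_dimension([0.906, 0.0, -14.57]))  # ~2.06
```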
Along with widely used numerical methods for estimating and computing the Lyapunov dimension there is an effective analytical approach, which is based on the direct Lyapunov method with special Lyapunov-like functions.
The Lyapunov exponents of bounded trajectory and the Lyapunov dimension of attractor are invariant under diffeomorphism of the phase space.
The multiplicative inverse of the largest Lyapunov exponent is sometimes referred to in the literature as the Lyapunov time, and defines the characteristic e-folding time. For chaotic orbits, the Lyapunov time will be finite, whereas for regular orbits it will be infinite.
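A rough illustration of how the Lyapunov time bounds predictability, assuming purely exponential error growth at the maximal exponent (a simplification; the numbers below are arbitrary):

```python
import math

def predictability_horizon(lyapunov_exponent, initial_error, tolerance):
    """Time for an initial error to grow to the tolerance under e-folding
    at the given (positive) maximal Lyapunov exponent."""
    return math.log(tolerance / initial_error) / lyapunov_exponent

# With lambda = 0.9 per time unit (Lyapunov time ~1.1), an initial error of
# 1e-6 reaches order one after only ~15 time units.
print(predictability_horizon(0.9, 1e-6, 1.0))
```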
Numerical calculation
Generally the calculation of Lyapunov exponents, as defined above, cannot be carried out analytically, and in most cases one must resort to numerical techniques. An early example, which also constituted the first demonstration of the exponential divergence of chaotic trajectories, was carried out by R. H. Miller in 1964. Currently, the most commonly used numerical procedure estimates the matrix $\Lambda$ based on averaging several finite-time approximations of the limit defining $\Lambda$.
One of the most used and effective numerical techniques to calculate the Lyapunov spectrum for a smooth dynamical system relies on periodic Gram–Schmidt orthonormalization of the Lyapunov vectors to avoid a misalignment of all the vectors along the direction of maximal expansion. The Lyapunov spectrum of various models are described. Source codes for nonlinear systems such as the Hénon map, the Lorenz equations, a delay differential equation and so on are introduced.
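The re-orthonormalization procedure can be sketched in a few lines for a map whose Jacobian is known analytically. The following Python example, using QR decomposition in place of explicit Gram–Schmidt steps, estimates the spectrum of the Hénon map; parameter values, seeds, and iteration counts are illustrative choices.

```python
import numpy as np

def henon_lyapunov_spectrum(a=1.4, b=0.3, n_steps=100_000, n_transient=1_000):
    """Estimate both Lyapunov exponents of the Henon map by evolving an
    orthonormal tangent-space frame and re-orthonormalizing it with QR."""
    x, y = 0.1, 0.1
    for _ in range(n_transient):            # let the orbit settle onto the attractor
        x, y = 1.0 - a * x * x + y, b * x
    frame = np.eye(2)
    log_sums = np.zeros(2)
    for _ in range(n_steps):
        jac = np.array([[-2.0 * a * x, 1.0],
                        [b, 0.0]])          # Jacobian of the map at (x, y)
        x, y = 1.0 - a * x * x + y, b * x
        frame, r = np.linalg.qr(jac @ frame)
        log_sums += np.log(np.abs(np.diag(r)))
    return log_sums / n_steps

print(henon_lyapunov_spectrum())            # roughly [0.42, -1.62] for a=1.4, b=0.3
```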
For the calculation of Lyapunov exponents from limited experimental data, various methods have been proposed. However, there are many difficulties with applying these methods and such problems should be approached with care. The main difficulty is that the data does not fully explore the phase space, rather it is confined to the attractor which has very limited (if any) extension along certain directions. These thinner or more singular directions within the data set are the ones associated with the more negative exponents. The use of nonlinear mappings to model the evolution of small displacements from the attractor has been shown to dramatically improve the ability to recover the Lyapunov spectrum, provided the data has a very low level of noise. The singular nature of the data and its connection to the more negative exponents has also been explored.
Local Lyapunov exponent
Whereas the (global) Lyapunov exponent gives a measure for the total predictability of a system, it is sometimes of interest to estimate the local predictability around a point $x_0$ in phase space. This may be done through the eigenvalues of the Jacobian matrix $J(x_0)$. These eigenvalues are also called local Lyapunov exponents. Local exponents are not invariant under a nonlinear change of coordinates.
Conditional Lyapunov exponent
This term is normally used regarding synchronization of chaos, in which there are two systems that are coupled, usually in a unidirectional manner so that there is a drive (or master) system and a response (or slave) system. The conditional exponents are those of the response system with the drive system treated as simply the source of a (chaotic) drive signal. Synchronization occurs when all of the conditional exponents are negative.
See also
Chaos Theory
Chaotic mixing for an alternative derivation
Eden's conjecture on the Lyapunov dimension
Floquet theory
Liouville's theorem (Hamiltonian)
Lyapunov dimension
Lyapunov time
Recurrence quantification analysis
Oseledets theorem
Butterfly effect
References
Further reading
Cvitanović P., Artuso R., Mainieri R., Tanner G. and Vattay G., Chaos: Classical and Quantum, Niels Bohr Institute, Copenhagen 2005 – textbook about chaos available under Free Documentation License
Software
R. Hegger, H. Kantz, and T. Schreiber, Nonlinear Time Series Analysis, TISEAN 3.0.1 (March 2007).
Scientio's ChaosKit product calculates Lyapunov exponents amongst other Chaotic measures. Access is provided online via a web service and Silverlight demo.
Dr. Ronald Joe Record's mathematical recreations software laboratory includes an X11 graphical client, lyap, for graphically exploring the Lyapunov exponents of a forced logistic map and other maps of the unit interval. The contents and manual pages of the mathrec software laboratory are also available.
Software on this page was developed specifically for the efficient and accurate calculation of the full spectrum of exponents. This includes LyapOde for cases where the equations of motion are known and also Lyap for cases involving experimental time series data. LyapOde, which includes source code written in "C", can also calculate the conditional Lyapunov exponents for coupled identical systems. It is intended to allow the user to provide their own set of model equations or to use one of the ones included. There are no inherent limitations on the number of variables, parameters etc. Lyap which includes source code written in Fortran, can also calculate the Lyapunov direction vectors and can characterize the singularity of the attractor, which is the main reason for difficulties in calculating the more negative exponents from time series data. In both cases there is extensive documentation and sample input files. The software can be compiled for running on Windows, Mac, or Linux/Unix systems. The software runs in a text window and has no graphics capabilities, but can generate output files that could easily be plotted with a program like excel.
External links
Perron effects of Lyapunov exponent sign inversions
Dynamical systems | Lyapunov exponent | [
"Physics",
"Mathematics"
] | 2,496 | [
"Mechanics",
"Dynamical systems"
] |
166,470 | https://en.wikipedia.org/wiki/Ricin | Ricin ( ) is a lectin (a carbohydrate-binding protein) and a highly potent toxin produced in the seeds of the castor oil plant, Ricinus communis. The median lethal dose (LD50) of ricin for mice is around 22 micrograms per kilogram of body weight via intraperitoneal injection. Oral exposure to ricin is far less toxic. An estimated lethal oral dose in humans is approximately one milligram per kilogram of body weight.
Ricin is a toxalbumin and was first described by Peter Hermann Stillmark, the founder of lectinology. Ricin is chemically similar to robin.
Biochemistry
Ricin is classified as a type 2 ribosome-inactivating protein (RIP). Whereas type 1 RIPs are composed of a single protein chain that possesses catalytic activity, type 2 RIPs, also known as holotoxins, are composed of two different protein chains that form a heterodimeric complex. Type 2 RIPs consist of an A chain that is functionally equivalent to a type 1 RIP, covalently connected by a single disulfide bond to a B chain that is catalytically inactive, but serves to mediate transport of the A-B protein complex from the cell surface, via vesicle carriers, to the lumen of the endoplasmic reticulum (ER). Both type 1 and type 2 RIPs are functionally active against ribosomes in vitro; however, only type 2 RIPs display cytotoxicity due to the lectin-like properties of the B chain. To display its ribosome-inactivating function, the ricin disulfide bond must be reductively cleaved.
Biosynthesis
Ricin is synthesized in the endosperm of castor oil plant seeds. The ricin precursor protein is 576 amino acid residues in length and contains a signal peptide (residues 1–35), the ricin A chain (36–302), a linker peptide (303–314), and the ricin B chain (315–576). The N-terminal signal sequence delivers the prepropolypeptide to the endoplasmic reticulum (ER) and then the signal peptide is cleaved off. Within the lumen of the ER the propolypeptide is glycosylated and a protein disulfide isomerase catalyzes disulfide bond formation between cysteines 294 and 318. The propolypeptide is further glycosylated within the Golgi apparatus and transported to protein storage bodies. The propolypeptide is cleaved within protein bodies by an endopeptidase to produce the mature ricin protein that is composed of a 267 residue A chain and a 262 residue B chain that are covalently linked by a single disulfide bond.
Structure
In terms of structure, ricin closely resembles abrin-a, an isomer of abrin. The quaternary structure of ricin is a globular, glycosylated heterodimer of approximately 60–65 kDa. Ricin toxin A chain and ricin toxin B chain are of similar molecular weights, approximately 32 kDa and 34 kDa, respectively.
Ricin toxin A chain (RTA) is an N-glycoside hydrolase composed of 267 amino acids. It has three structural domains with approximately 50% of the polypeptide arranged into alpha-helices and beta-sheets. The three domains form a pronounced cleft that is the active site of RTA.
Ricin toxin B chain (RTB) is a lectin composed of 262 amino acids that is able to bind terminal galactose residues on cell surfaces. RTB forms a bilobal, barbell-like structure lacking alpha-helices or beta-sheets where individual lobes contain three subdomains. At least one of these three subdomains in each homologous lobe possesses a sugar-binding pocket that gives RTB its functional character.
While other plants contain the protein chains found in ricin, both protein chains must be present to produce toxic effects. For example, plants that contain only protein chain A, such as barley, are not toxic because without the link to protein chain B, protein chain A cannot enter the cell and do damage to ribosomes.
Entry into the cytoplasm
Ricin B chain binds complex carbohydrates on the surface of eukaryotic cells containing either terminal N-acetylgalactosamine or beta-1,4-linked galactose residues. In addition, the mannose-type glycans of ricin are able to bind to cells that express mannose receptors. RTB has been shown to bind to the cell surface on the order of 106–108 ricin molecules per cell surface.
The profuse binding of ricin to surface membranes allows internalization with all types of membrane invaginations. The holotoxin can be taken up by clathrin-coated pits, as well as by clathrin-independent pathways including caveolae and macropinocytosis. Intracellular vesicles shuttle ricin to endosomes that are delivered to the Golgi apparatus. The active acidification of endosomes is thought to have little effect on the functional properties of ricin. Because ricin is stable over a wide pH range, degradation in endosomes or lysosomes offers little or no protection against ricin. Ricin molecules are thought to follow retrograde transport via early endosomes, the trans-Golgi network, and the Golgi to enter the lumen of the endoplasmic reticulum (ER).
For ricin to function cytotoxically, RTA must be reductively cleaved from RTB to release a steric block of the RTA active site. This process is catalysed by the protein PDI (protein disulphide isomerase) that resides in the lumen of the ER. Free RTA in the ER lumen then partially unfolds and partially buries into the ER membrane, where it is thought to mimic a misfolded membrane-associated protein. Roles for the ER chaperones GRP94, EDEM and BiP have been proposed prior to the 'dislocation' of RTA from the ER lumen to the cytosol in a manner that uses components of the endoplasmic reticulum-associated protein degradation (ERAD) pathway. ERAD normally removes misfolded ER proteins to the cytosol for their destruction by cytosolic proteasomes. Dislocation of RTA requires ER membrane-integral E3 ubiquitin ligase complexes, but RTA avoids the ubiquitination that usually occurs with ERAD substrates because of its low content of lysine residues, which are the usual attachment sites for ubiquitin. Thus, RTA avoids the usual fate of dislocated proteins (destruction that is mediated by targeting ubiquitinylated proteins to the cytosolic proteasomes). In the mammalian cell cytosol, RTA then undergoes triage by the cytosolic molecular chaperones Hsc70 and Hsp90 and their co-chaperones, as well as by one subunit (RPT5) of the proteasome itself, that results in its folding to a catalytic conformation, which de-purinates ribosomes, thus halting protein synthesis.
Ribosome inactivation
RTA has rRNA N-glycosylase activity that is responsible for the cleavage of a glycosidic bond within the large rRNA of the 60S subunit of eukaryotic ribosomes. RTA specifically and irreversibly hydrolyses the N-glycosidic bond of the adenine residue at position 4324 (A4324) within the 28S rRNA, but leaves the phosphodiester backbone of the RNA intact. The ricin targets A4324 that is contained in a highly conserved sequence of 12 nucleotides universally found in eukaryotic ribosomes. The sequence, 5'-AGUACGAGAGGA-3', termed the sarcin-ricin loop, is important in binding elongation factors during protein synthesis. The depurination event rapidly and completely inactivates the ribosome, resulting in toxicity from inhibited protein synthesis. A single RTA molecule in the cytosol is capable of depurinating approximately 1500 ribosomes per minute.
Depurination reaction
Within the active site of RTA, there exist several invariant amino acid residues involved in the depurination of ribosomal RNA. Although the exact mechanism of the event is unknown, key amino acid residues identified include tyrosine at positions 80 and 123, glutamic acid at position 177, and arginine at position 180. In particular, Arg180 and Glu177 have been shown to be involved in the catalytic mechanism, and not substrate binding, with enzyme kinetic studies involving RTA mutants. The model proposed by Mozingo and Robertus, based on X-ray structures, is as follows:
Sarcin-ricin loop substrate binds RTA active site with target adenine stacking against tyr80 and tyr123.
Arg180 is positioned such that it can protonate N-3 of adenine and break the bond between N-9 of the adenine ring and C-1' of the ribose.
Bond cleavage results in an oxycarbonium ion on the ribose, stabilized by Glu177.
N-3 protonation of adenine by Arg180 allows deprotonation of a nearby water molecule.
Resulting hydroxyl attacks ribose carbonium ion.
Depurination of adenine results in a neutral ribose on an intact phosphodiester RNA backbone.
Toxicity
Ricin is very toxic if inhaled, injected, or ingested. It can also be toxic if dust contacts the eyes or if it is absorbed through damaged skin. It acts as a toxin by inhibiting protein synthesis. Ricin is resistant, but not impervious, to digestion by peptidases. By ingestion, the pathology of ricin is largely restricted to the gastrointestinal tract, where it may cause mucosal injuries. With appropriate treatment, most patients will make a good recovery.
Symptoms
Because the symptoms are caused by failure to make protein, they may take anywhere from hours to days to appear, depending on the route of exposure and the dose. When ingested, gastrointestinal symptoms can manifest within six hours; these symptoms do not always become apparent. Within two to five days of exposure to ricin, its effects on the central nervous system, adrenal glands, kidneys, and liver appear.
Ingestion of ricin causes pain, inflammation, and hemorrhage in the mucosal membranes of the gastrointestinal system. Gastrointestinal symptoms quickly progress to severe nausea, vomiting, diarrhea, and difficulty swallowing (dysphagia). Haemorrhage causes bloody feces (melena) and vomiting blood (hematemesis). The low blood volume (hypovolemia) caused by gastrointestinal fluid loss can lead to organ failure in the pancreas, kidney, liver, and GI tract and progress to shock. Shock and organ failure are indicated by disorientation, stupor, weakness, drowsiness, excessive thirst (polydipsia), low urine production (oliguria), and bloody urine (hematuria).
Symptoms of ricin inhalation are different from those caused by ingestion. Early symptoms include a cough and fever.
When skin or inhalation exposure occur, ricin can cause an allergic reaction to develop. This is indicated by swelling (edema) of the eyes and lips; asthma; bronchial irritation; dry, sore throat; congestion; skin redness (erythema); skin blisters (vesication); wheezing; itchy, watery eyes; chest tightness; and skin irritation.
Treatment
An antidote has been developed by the UK military, although as of 2006 it has not yet been tested on humans. As of 2005 another antidote developed by the US military has been shown to be safe and effective in lab mice injected with antibody-rich blood mixed with ricin, and has had some human testing. Monoclonal antibodies are under scientific investigation as a possible treatment for ricin poisoning.
Symptomatic and supportive treatments are available for ricin poisoning. Existing treatments emphasize minimizing the effects of the poison. Possible treatments include intravenous fluids or electrolytes, airway management, assisted ventilation, or giving medications to remedy seizures and low blood pressure. If the ricin has been ingested recently, the stomach can be flushed by ingesting activated charcoal or by performing gastric lavage. Survivors often develop long-term organ damage. Ricin causes severe diarrhea and vomiting, and victims can die of circulatory shock or organ failure; inhaled ricin can cause fatal pulmonary edema or respiratory failure. Death typically occurs within 3–5 days after oral ingestion.
Prevention
Vaccination is possible by injecting an inactive form of protein chain A. This vaccination is effective for several months due to the body's production of antibodies to the foreign protein. In 1978 Bulgarian defector Vladimir Kostov survived a ricin attack similar to the one on Georgi Markov, probably due to his body's production of antibodies. When a ricin-laced pellet was removed from the small of his back it was found that some of the original wax coating was still attached. For this reason only small amounts of ricin had leaked out of the pellet, producing some symptoms but allowing his body to develop immunity to further poisoning.
Sources
The seeds of Ricinus communis are commonly crushed to extract castor oil. As ricin is not oil-soluble, little is found in the extracted castor oil. The extracted oil is also heated to more than to denature any ricin that may be present. The remaining spent crushed seeds, called variously the "cake", "oil cake", and "press cake", can contain up to 5% ricin. While the oil cake from coconut, peanuts, and sometimes cotton seeds can be used as cattle feed or fertilizer, the toxic nature of castor beans precludes their oil cake from being used as feed unless the ricin is first deactivated by autoclaving. Accidental ingestion of Ricinus communis cake intended for fertilizer has been reported to be responsible for fatal ricin poisoning in animals.
Deaths from ingesting castor plant seeds are rare, partly because of their indigestible seed coat, and because some of the ricin is deactivated in the stomach. The pulp from eight beans is considered dangerous to an adult. Rauber and Heard have written that close examination of early 20th century case reports indicates that public and professional perceptions of ricin toxicity "do not accurately reflect the capabilities of modern medical management".
Most acute poisoning episodes in humans are the result of oral ingestion of castor beans, 5–20 of which could prove fatal to an adult. Swallowing castor beans rarely proves to be fatal unless the bean is thoroughly chewed. The survival rate of castor bean ingestion is 98%. In 2013 a 37-year-old woman in the United States survived after ingesting 30 beans. In another case, a man ingested 200 castor beans mixed with juice in a blender and survived. Victims often manifest nausea, diarrhea, fast heart rate, low blood pressure, and seizures persisting for up to a week. Blood, plasma, or urine ricin or ricinine concentrations may be measured to confirm diagnosis. The laboratory testing usually involves immunoassay or liquid chromatography-mass spectrometry.
Therapeutic applications
Although no approved therapeutics are currently based on ricin, it does have the potential to be used in the treatment of tumors, as a "magic bullet" to destroy targeted cells. Because ricin is a protein, it can be linked to a monoclonal antibody to target cancerous cells recognized by the antibody. The major problem with ricin is that its native internalization sequences are distributed throughout the protein. If any of these native internalization sequences are present in a therapeutic agent, the drug will be internalized by, and kill, untargeted non-tumorous cells as well as targeted cancerous cells.
Modifying ricin may sufficiently lessen the likelihood that the ricin component of these immunotoxins will cause the wrong cells to internalize it, while still retaining its cell-killing activity when it is internalized by the targeted cells. However, bacterial toxins, such as diphtheria toxin, which is used in denileukin diftitox, an FDA-approved treatment for leukemia and lymphoma, have proven to be more practical. A promising approach for ricin is to use the non-toxic B subunit (a lectin) as a vehicle for delivering antigens into cells, thus greatly increasing their immunogenicity. Use of ricin as an adjuvant has potential implications for developing mucosal vaccines.
Regulation
In the US, ricin appears on the select agents list of the Department of Health and Human Services, and scientists must register with HHS to use ricin in their research. However, investigators possessing less than 1000 mg are exempt from regulation.
Ricin is classified as an extremely hazardous substance in the United States as defined in Section 302 of the US Emergency Planning and Community Right-to-Know Act (42 U.S.C. 11002), and is subject to strict reporting requirements by facilities that produce, store, or use it in significant quantities.
Chemical or biological warfare agent
History
The United States investigated ricin for its military potential during World War I. At that time it was being considered for use either as a toxic dust or as a coating for bullets and shrapnel. The dust cloud concept could not be adequately developed, and the coated bullet/shrapnel concept would violate the Hague Convention of 1899 (adopted in U.S. law at 32 Stat. 1903), specifically Annex §2, Ch.1, Article 23, stating "... it is especially prohibited ... [t]o employ poison or poisoned arms".
During World War II the United States and Canada studied ricin in cluster bombs. Though there were plans for mass production and several field trials with different bomblet concepts, the end conclusion was that it was no more economical than using phosgene. This conclusion was based on comparison of the final weapons, rather than ricin's toxicity (LCt50 ~10 mg/min·m3). Ricin was given the military symbol W or later WA. Interest in it continued for a short period after World War II, but soon subsided when the US Army Chemical Corps began a program to weaponize sarin.
The Soviet Union possessed weaponized ricin. The KGB developed weapons using ricin which were used outside the Soviet bloc, most famously in the Markov assassination.
Control
In spite of ricin's extreme toxicity and utility as an agent of chemical/biological warfare, production of the toxin is difficult to limit. The castor bean plant from which ricin is derived is a common ornamental and can be grown at home without any special care.
Under both the 1972 Biological Weapons Convention and the 1997 Chemical Weapons Convention, ricin is listed as a schedule 1 controlled substance. Despite this, more than of castor beans are processed each year, and approximately 5% of the total is rendered into a waste containing negligible concentrations of undenatured ricin toxin.
Ricin is several orders of magnitude less toxic than botulinum or tetanus toxin, but the latter are harder to come by. Compared to botulinum or anthrax as biological weapons or chemical weapons, the quantity of ricin required to achieve LD50 over a large geographic area is significantly more than an agent such as anthrax (tons of ricin vs. only kilogram quantities of anthrax). Ricin is easy to produce, but is not as practical or likely to cause as many casualties as other agents. Ricin is easily denatured by temperatures over meaning many methods of deploying ricin would generate enough heat to denature it. Once deployed, an area contaminated with ricin remains dangerous until the bonds between chain A or B have been broken, a process that takes two or three days. In contrast, anthrax spores may remain lethal for decades. Jan van Aken, a German expert on biological weapons, explained in a report for The Sunshine Project that Al Qaeda's experiments with ricin suggest their inability to produce botulinum or anthrax.
Vaccination
Ricin toxin vaccines have emerged as a focus in biodefense research. Two recombinant A subunit (RTA)-based vaccines, RiVax and RVEc (also known as RTA1-33/44-198), have completed Phase I clinical trials, and were found to be safe. These vaccines are based on modified versions of the ricin toxin A-chain, designed to reduce toxicity while maintaining immunogenicity.
Developments
A biopharmaceutical company called Soligenix, Inc. licensed an anti-ricin vaccine called RiVax from Vitetta et al. at UT Southwestern. The vaccine was found safe and immunogenic in mice, rabbits, and humans. Two successful clinical trials were completed. Soligenix was issued a US patent for Rivax. The ricin vaccine candidate was granted orphan drug status in the US and the EEC and, as of 2019, was in clinical trials in the US. Grants from the National Institute of Allergy and Infectious Diseases and the US Food and Drug Administration supported development of the vaccine candidate.
Synthesis
The first isolation of ricin is attributed to the Baltic-German microbiologist Peter Hermann Stillmark (1860–1923) in 1888.
Terrorist use
Ricin has been involved in a number of actual or planned attacks on individuals. In 1978, the Bulgarian dissident Georgi Markov was assassinated by Bulgarian secret police who surreptitiously shot him on a London street with what was later found to have been a modified umbrella using compressed gas to fire a tiny pellet containing ricin into his leg. He died in a hospital a few days later; his body was passed to a special poison branch of the British Ministry of Defence that discovered the pellet during an autopsy. The prime suspects were the Bulgarian secret police: Georgi Markov had defected from Bulgaria some years previously and had subsequently written books and made radio broadcasts that were highly critical of the Bulgarian communist regime. However, it was believed at the time that Bulgaria would not have been able to produce the pellet, and it was also believed that the KGB had supplied it. The KGB denied any involvement, although high-profile KGB defectors Oleg Kalugin and Oleg Gordievsky later confirmed the KGB's involvement. Soviet dissident Aleksandr Solzhenitsyn had (but survived) ricin-like symptoms after an encounter in 1971 with KGB agents.
Ten days before the attack on Georgi Markov another Bulgarian defector, Vladimir Kostov, survived a similar attack. Kostov was standing on an escalator of the Paris metro when he felt a sting in his lower back above the belt of his trousers. He developed a fever, but recovered. After Markov's death the wound on Kostov's back was examined and a ricin-laced pellet identical to the one used against Markov was removed.
Several terrorist individuals and groups have experimented with ricin or planned to use it. There have been incidents of the poison being mailed to US politicians. For example, on 29 May 2013 two anonymous letters sent to New York City Mayor Michael Bloomberg contained traces of it. Another was sent to the offices of Mayors Against Illegal Guns in Washington, D.C. A letter containing ricin was also reported to have been sent to American President Barack Obama at the same time. Shannon Richardson, an actress, was later charged with the crime, and pleaded guilty that December; she was sentenced to 18 years in prison plus a restitution fine of US$367,000. On 2 October 2018, two letters suspected of containing ricin were sent to The Pentagon, one addressed to Secretary of Defense James Mattis, and the other to Chief of Naval Operations, Admiral John Richardson. A letter was received on 23 July 2019 at Pelican Bay State Prison in California which claimed to contain a suspicious substance. Authorities later confirmed it contained ricin; no detrimental exposures were identified.
In 2020, some media in the Czech Republic reported, based on intelligence information, that a person carrying a Russian diplomatic passport and ricin had arrived in Prague with the intention of assassinating three politicians. Russian president Vladimir Putin denied the reports. The targets were said to have been Zdeněk Hřib, the mayor of Prague (capital of the Czech Republic), who was involved in renaming a square in Prague, "Pod Kaštany", where the Russian embassy is situated, to the Square of Boris Nemtsov, an opposition politician assassinated near the Kremlin in 2015; Ondřej Kolář, the mayor of Prague 6 municipal district, who was involved in removing the controversial statue to the Soviet-era Marshal Konev; and Pavel Novotný, the mayor of Prague's southwestern Řeporyje district. They all received police protection. Czech president Miloš Zeman later described the police protection of Zdeněk Hřib as an attempt by an insignificant politician to gain attention. Zeman also confused ricin with the non-poisonous laxative castor oil.
In 2018 and 2023 German police thwarted attempted ricin attacks, after tip-offs believed to have come from the US FBI.
In popular culture
Ricin has been used as a plot device, such as in the television series Breaking Bad.
The popularity of Breaking Bad inspired several real-life criminal cases involving ricin or similar substances. Kuntal Patel from London attempted to poison her mother with abrin after the latter interfered with her marriage plans. Daniel Milzman, a 19-year-old former Georgetown University student, was charged with manufacturing ricin in his dorm room, as well as the intent of "[using] the ricin on another undergraduate student with whom he had a relationship". Mohammed Ali from Liverpool, England, was convicted after attempting to purchase 500 mg of ricin over the dark web from an undercover FBI agent. He was sentenced on 18 September 2015 to eight years imprisonment.
In Agatha Christie's novel Partners in Crime, ricin was used as a plot device.
In the final season of Walker, Texas Ranger, ricin was used by Emil Lavocat to murder the titular Texas Ranger's best friend, mentor and former partner, C.D. Parker, out of revenge against them and all the Rangers in their company for busting up his organized crime ring and imprisoning his lieutenants. Though it was under the guise of a heart attack near the end of the episode "The Avenging Angel", the truth about C.D.'s death comes out in the series finale, "The Final Show/Down", when Walker and Trivette have his body exhumed and autopsied.
In the 2013 movie The Good Mother, a mother injects and feeds her daughters with ricin in a case of Munchausen by proxy; she is caught after a daughter dies.
In the 2014 movie The Interview, a transdermal strip carrying ricin is used in a CIA plot to assassinate North Korean dictator Kim Jong-un via handshake.
See also
List of poisonous plants
References
External links
Studies showing lack of toxicity of castor oil from the US Public Health Service
Castor bean information at Purdue University
Plants Poisonous to Livestock – Ricin information at Cornell University
Ricin cancer therapy tested at BBC
Ricin – Emergency Preparations at CDC
Emergency Response Card – Ricin at CDC
Biological toxin weapons
Castor oil plant
Lectins
Legume lectins
Plant toxins
Proteins
Ribosome-inactivating proteins
Toxins | Ricin | [
"Chemistry",
"Environmental_science"
] | 5,913 | [
"Biomolecules by chemical classification",
"Toxicology",
"Chemical ecology",
"Chemical weapons",
"Plant toxins",
"Molecular biology",
"Toxins",
"Proteins",
"Biological toxin weapons"
] |
166,554 | https://en.wikipedia.org/wiki/Project%20plan | A project plan is a series of structured tasks, objectives, and a schedule to complete a desired outcome, according to a project manager's designs and purpose. According to the Project Management Body of Knowledge (PMBOK), a project plan is:
"...a formal, approved document used to guide both project execution and project control. The primary uses of the project plan are to document planning assumptions and decisions, facilitate communication among project stakeholders, and document approved scope, cost, and schedule baselines. A project plan may be sumarized or detailed."
The latest edition of the PMBOK (v6) uses the term project charter to refer to the contract that the project sponsor and project manager use to agree on the initial vision of the project (scope, baseline, resources, objectives, etc.) at a high level. In the PMI methodology described in the PMBOK v5, the project charter and the project management plan are the two most important documents for describing a project during the initiation and planning phases.
PRINCE2 defines a project plan as:
"...a statement of how and when a project's objectives are to be achieved, by showing the major products, milestones, activities and resources required on the project."
The project manager creates the project management plan following input from the project team and key project stakeholders. The plan should be agreed and approved by at least the project team and its key stakeholders.
Many project management processes are mentioned in PMBOK® Guide, but determining which processes need to be used based on the needs of the project which is called Tailoring is part of developing the project management plan.
Purpose
The objective of a project plan is to define the approach to be used by the project team to deliver the intended project management scope of the project.
At a minimum, a project plan answers basic questions about the project:
Why? - What is the problem or value proposition addressed by the project? Why is it being sponsored?
What? - What is the work that will be performed on the project? What are the major products/deliverables?
Who? - Who will be involved and what will be their responsibilities within the project? How will they be organized?
When? - What is the project timeline and when will particularly meaningful points, referred to as milestones, be complete?
Plan contents
To be a complete project plan according to industry standards such as the PMBOK or PRINCE2, the project plan must also describe the execution, management and control of the project. This information can be provided by referencing other documents that will be produced, such as a procurement plan or construction plan, or it may be detailed in the project plan itself.
The project plan typically covers topics used in the project execution system and includes the following main aspects:
Scope management
Requirements management
Schedule management
Financial management
Quality management
Resource management
Stakeholder management – New from PMBOK 5
Communications management
Project change management
Risk management
Details
The project plan document may include the following sections:
Introduction: A high-level overview of the project.
Project management approach: The roles and authority of team members. It represents the executive summary of the project management plan.
Project scope: The scope statement from the Project charter should be used as a starting point with more details about what the project includes and what it does not include (in-scope and out-of-scope).
Milestone list: A list of the project milestones (the stop points that helps evaluating the progress of the project). This list includes the milestone name, a description about the milestone, and the date expected.
Schedule baseline and work breakdown structure: The WBS which consists of work packages and WBS dictionary, which defines these work packages, as well as schedule baseline, which is the reference point for managing project progress, are included here.
Project management plans: This section contains all management plans of all project aspects.
Change management plan
Communication management plan
Cost management plan
Procurement management plan
Project scope management plan
Schedule management plan
Quality management plan
Risk management plan
HR or staffing management plan
Resource calendar: Identify key resources needed for the project and their times and durations of need.
Cost baseline: This section includes the budgeted total of each phase of the project and comments about the cost.
Quality baseline: Acceptable levels of quality.
Sponsor acceptance: Some space for the project sponsor to sign off the document.
See also
Project planning
References
Schedule (project management)
Planning | Project plan | [
"Physics"
] | 886 | [
"Spacetime",
"Physical quantities",
"Time",
"Schedule (project management)"
] |
166,570 | https://en.wikipedia.org/wiki/List%20of%20tallest%20structures | The tallest structure in the world is the Burj Khalifa skyscraper at . Listed are guyed masts (such as telecommunication masts), self-supporting towers (such as the CN Tower), skyscrapers (such as the Willis Tower), oil platforms, electricity transmission towers, and bridge support towers. This list is organized by absolute height. See List of tallest buildings and structures, List of tallest freestanding structures and List of tallest buildings and List of tallest towers for additional information about these types of structures.
Terminology
Terminological and listing criteria follow Council on Tall Buildings and Urban Habitat definitions. Guyed masts are differentiated from towers – the latter not featuring any guy wires or other support structures; and buildings are differentiated from towers – the former having at least 50% of occupiable floor space although both are self-supporting structures.
Lists by height
These lists include structures with a minimum height of 500 metres (1640 feet). The lists of tallest structures from 400 to 500 metres and from 300 to 400 metres include shorter structures.
For all structures, the pinnacle height is given, so the height of skyscrapers may differ from the values at List of skyscrapers. Tension-leg platforms are not included.
Structures (past or present) 600 m (1,969 ft) or taller
Structures (past or present) between 550 and 600 m (1,804 ft and 1,969 ft)
Structures (past or present) between 500 and 550 m (1,640 and 1,804 ft)
On hold
Structures that are on hold or have been cancelled.
List by continent
Current
The following table is a list of the current tallest structures by each continent (listed by geographic size):
All time
The following table is a list of the all time tallest structures by each continent (listed by geographic size):
See also
List of tallest buildings and structures
List of tallest structures – 400 to 500 metres
List of tallest structures – 300 to 400 metres
List of tallest structures built before the 20th century
List of transmission sites
List of European medium wave transmitters
List of tallest buildings
List of future tallest buildings
List of tallest freestanding structures
List of tallest bridges
List of tallest dams
List of cities with most skyscrapers
List of tallest statues
References
External links
World Federation of Great Towers
Skyscrapers diagrams and forum
Search for Radio Masts and Towers in the U.S.
Lists of buildings and structures
Towers
Guyed masts
Lists of construction records
Lists of tallest structures by region | List of tallest structures | [
"Engineering"
] | 496 | [
"Structural engineering",
"Towers"
] |
166,589 | https://en.wikipedia.org/wiki/On%20Numbers%20and%20Games | On Numbers and Games is a mathematics book by John Horton Conway first published in 1976. The book is written by a pre-eminent mathematician, and is directed at other mathematicians. The material is, however, developed in a playful and unpretentious manner and many chapters are accessible to non-mathematicians. Martin Gardner discussed the book at length, particularly Conway's construction of surreal numbers, in his Mathematical Games column in Scientific American in September 1976.
The book is roughly divided into two sections: the first half (or Zeroth Part), on numbers, the second half (or First Part), on games. In the Zeroth Part, Conway provides axioms for arithmetic: addition, subtraction, multiplication, division and inequality. This allows an axiomatic construction of numbers and ordinal arithmetic, namely, the integers, reals, the countable infinity, and entire towers of infinite ordinals. The object to which these axioms apply takes the form {L|R}, which can be interpreted as a specialized kind of set; a kind of two-sided set. By insisting that L<R, this two-sided set resembles the Dedekind cut. The resulting construction yields a field, now called the surreal numbers. The ordinals are embedded in this field. The construction is rooted in axiomatic set theory, and is closely related to the Zermelo–Fraenkel axioms. In the original book, Conway simply refers to this field as "the numbers". The term "surreal numbers" is adopted later, at the suggestion of Donald Knuth.
In the First Part, Conway notes that, by dropping the constraint that L<R, the axioms still apply and the construction goes through, but the resulting objects can no longer be interpreted as numbers. They can be interpreted as the class of all two-player games. The axioms for greater than and less than are seen to be a natural ordering on games, corresponding to which of the two players may win. The remainder of the book is devoted to exploring a number of different (non-traditional, mathematically inspired) two-player games, such as nim, hackenbush, and the map-coloring games col and snort. The development includes their scoring, a review of the Sprague–Grundy theorem, and the inter-relationships to numbers, including their relationship to infinitesimals.
The book was first published by Academic Press in 1976, and a second edition was released by A K Peters in 2001.
Zeroth Part ... On Numbers
In the Zeroth Part, Chapter 0, Conway introduces a specialized form of set notation, having the form {L|R}, where L and R are again of this form, built recursively, terminating in {|}, which is to be read as an analog of the empty set. Given this object, axiomatic definitions for addition, subtraction, multiplication, division and inequality may be given. As long as one insists that L<R (with this holding vacuously true when L or R are the empty set), then the resulting class of objects can be interpreted as numbers, the surreal numbers. The {L|R} notation then resembles the Dedekind cut.
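For concreteness, the recursive definitions on these forms can be written out as follows (a compact restatement in standard notation rather than a quotation from the book; here x_L ranges over the left options of x and x_R over its right options):
x \le y \iff \text{there is no } x_L \text{ with } y \le x_L \text{ and no } y_R \text{ with } y_R \le x
-x = \{\, -x_R \mid -x_L \,\}
x + y = \{\, x_L + y,\; x + y_L \mid x_R + y,\; x + y_R \,\}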
The ordinal ω is built by transfinite induction. As with conventional ordinals, ω + 1 can be defined. Thanks to the axiomatic definition of subtraction, ω − 1 can also be coherently defined: it is strictly less than ω, and obeys the "obvious" equality (ω − 1) + 1 = ω. Yet it is still larger than any natural number.
The construction enables an entire zoo of peculiar numbers, the surreals, which form a field. Examples include infinite numbers such as ω, infinitesimals such as 1/ω, and similar.
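A few standard examples of forms and the numbers they represent (illustrative examples in the usual notation, not quoted from the book):
0 = \{\ \mid\ \}, \quad 1 = \{0 \mid\ \}, \quad -1 = \{\ \mid 0\}, \quad \tfrac{1}{2} = \{0 \mid 1\}
\omega = \{0, 1, 2, 3, \ldots \mid\ \}, \quad \omega - 1 = \{0, 1, 2, \ldots \mid \omega\}, \quad \tfrac{1}{\omega} = \{0 \mid 1, \tfrac{1}{2}, \tfrac{1}{4}, \tfrac{1}{8}, \ldots\}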
First Part ... and Games
In the First Part, Conway abandons the constraint that L<R, and then interprets the form {L|R} as a two-player game: a position in a contest between two players, Left and Right. Each player has a set of games called options to choose from in turn. Games are written {L|R} where L is the set of Left's options and R is the set of Right's options. At the start there are no games at all, so the empty set (i.e., the set with no members) is the only set of options we can provide to the players. This defines the game {|}, which is called 0. We consider a player who must play a turn but has no options to have lost the game. Given this game 0 there are now two possible sets of options, the empty set and the set whose only element is zero. The game {0|} is called 1, and the game {|0} is called -1. The game {0|0} is called * (star), and is the first game we find that is not a number.
All numbers are positive, negative, or zero, and we say that a game is positive if Left has a winning strategy regardless of who moves first, negative if Right has such a strategy, or zero if the second player to move has a winning strategy. Games that are not numbers have a fourth possibility: they may be fuzzy, meaning that the first player to move has a winning strategy. * is a fuzzy game.
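The four outcome classes can be summarized as follows (a standard summary of the classification described above, not a quotation from the book):
G > 0: \text{Left wins, whoever moves first}
G < 0: \text{Right wins, whoever moves first}
G = 0: \text{the second player to move wins}
G \parallel 0: \text{the first player to move wins (} G \text{ is fuzzy)}
For example, * = {0|0} is fuzzy, since whoever moves first moves to 0 and leaves the opponent without a move; yet * + * = 0, because the second player can mirror the first player's move in the other component.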
See also
Winning Ways for Your Mathematical Plays
References
1976 non-fiction books
Combinatorial game theory
Mathematics books
Systems of set theory
John Horton Conway
Academic Press books | On Numbers and Games | [
"Mathematics"
] | 1,114 | [
"Recreational mathematics",
"Game theory",
"Combinatorial game theory",
"Combinatorics"
] |
166,622 | https://en.wikipedia.org/wiki/Doll | A doll is a model typically of a human or humanoid character, often used as a toy for children. Dolls have also been used in traditional religious rituals throughout the world. Traditional dolls made of materials such as clay and wood are found in the Americas, Asia, Africa and Europe. The earliest documented dolls go back to the ancient civilizations of Egypt, Greece, and Rome. They have been made as crude, rudimentary playthings as well as elaborate art. Modern doll manufacturing has its roots in Germany, from the 15th century. With industrialization and new materials such as porcelain and plastic, dolls were increasingly mass-produced. During the 20th century, dolls became increasingly popular as collectibles.
History, types and materials
Early history and traditional dolls
The earliest dolls were made from available materials such as clay, stone, wood, bone, ivory, leather, or wax. Archaeological evidence places dolls as the foremost candidate for the oldest known toy. Wooden paddle dolls have been found in Egyptian tombs dating to as early as the 21st century BC. Dolls with movable limbs and removable clothing date back to at least 200 BC. Archaeologists have discovered Greek dolls made of clay and articulated at the hips and shoulders. Rag dolls and stuffed animals were probably also popular, but no known examples of these have survived to the present day. Stories from ancient Greece around 100 AD show that dolls were used by little girls as playthings. Greeks called a doll κόρη, literally meaning "little girl", and a wax doll was called δάγυνον, δαγύς or πλαγγών. Dolls often had movable limbs; such dolls were called νευρόσπαστα and were worked by strings or wires.
In ancient Rome, dolls were made of clay, wood or ivory. Dolls have been found in the graves of Roman children. Like children today, the younger members of Roman civilization would have dressed their dolls according to the latest fashions. In Greece and Rome, it was customary for boys to dedicate their toys to the gods when they reached puberty and for girls to dedicate their toys to the goddesses when they married. At marriage the Greek girls dedicated their dolls to Artemis and the Roman girls to Venus, but if they died before marriage their dolls were buried with them.
Rag dolls are traditionally home-made from spare scraps of cloth material. Roman rag dolls have been found dating back to 300 BC.
Traditional dolls are sometimes used as children's playthings, but they may also have spiritual, magical and ritual value. There is no defined line between spiritual dolls and toys. In some cultures dolls that had been used in rituals were given to children. They were also used in children's education and as carriers of cultural heritage. In other cultures dolls were considered too laden with magical powers to allow children to play with them.
African dolls are used to teach and entertain; they are supernatural intermediaries, and they are manipulated for ritual purposes. Their shape and costume vary according to region and custom. Dolls are frequently handed down from mother to daughter. Akuaba are wooden ritual fertility dolls from Ghana and nearby areas. The best known akuaba are those of the Ashanti people, whose akuaba have large, disc-like heads. Other tribes in the region have their own distinctive style of akuaba.
There is a rich history of Japanese dolls dating back to the Dogū figures (8000–200 BCE) and Haniwa funerary figures (300–600 AD). By the eleventh century, dolls were used as playthings as well as for protection and in religious ceremonies. During Hinamatsuri, the doll festival, hina dolls are displayed. These are made of straw and wood, painted, and dressed in elaborate, many-layered textiles. Daruma dolls are spherical dolls with red bodies and white faces without pupils. They represent Bodhidharma, the East Indian who founded Zen, and are used as good luck charms. Wooden Kokeshi dolls have no arms or legs, but a large head and cylindrical body, representing little girls.
The use of an effigy to perform a spell on someone is documented in African, Native American, and European cultures. Examples of such magical devices include the European poppet and the nkisi or bocio of West and Central Africa. In European folk magic and witchcraft, poppet dolls are used to represent a person for casting spells on that person. The intention is that whatever actions are performed upon the effigy will be transferred to the subject through sympathetic magic. The practice of sticking pins in voodoo dolls has been associated with African-American Hoodoo folk magic. Voodoo dolls are not a feature of Haitian Vodou religion, but have been portrayed as such in popular culture, and stereotypical voodoo dolls are sold to tourists in Haiti. The voodoo doll concept in popular culture is likely influenced by the European poppet. A kitchen witch is a poppet originating in Northern Europe. It resembles a stereotypical witch or crone and is displayed in residential kitchens as a means to provide good luck and ward off bad spirits.
Hopi Kachina dolls are effigies made of cottonwood that embody the characteristics of the ceremonial Kachina, the masked spirits of the Hopi Native American tribe. Kachina dolls are objects meant to be treasured and studied in order to learn the characteristics of each Kachina. Inuit dolls are made out of soapstone and bone, materials common to the Inuit. Many are clothed with animal fur or skin. Their clothing articulates the traditional style of dress necessary to survive cold winters, wind, and snow. The tea dolls of the Innu people were filled with tea for young girls to carry on long journeys. Apple dolls are traditional North American dolls with a head made from dried apples. In Inca mythology, Sara Mama was the goddess of grain. She was associated with maize that grew in multiples or was similarly strange. These strange plants were sometimes dressed as dolls of Sara Mama. Corn husk dolls are traditional Native American dolls made out of the dried leaves or husk of a corncob. Traditionally, they do not have a face. The making of corn husk dolls was adopted by early European settlers in the United States. Early settlers also made rag dolls and carved wooden dolls, called Pennywoods. La última muñeca, or "the last doll", is a tradition of the Quinceañera, the celebration of a girl's fifteenth birthday in parts of Latin America. During this ritual the quinceañera relinquishes a doll from her childhood to signify that she is no longer in need of such a toy. In the United States, dollmaking became an industry in the 1860s, after the Civil War.
Matryoshka dolls are traditional Russian dolls, consisting of a set of hollow wooden figures that open up and nest inside each other. They typically portray traditional peasants and the first set was carved and painted in 1890. In Germany, clay dolls have been documented as far back as the 13th century, and wooden doll making from the 15th century. Beginning about the 15th century, increasingly elaborate dolls were made for Nativity scene displays, chiefly in Italy. Dolls with detailed, fashionable clothes were sold in France in the 16th century, though their bodies were often crudely constructed. The German and Dutch peg wooden dolls were cheap and simply made and were popular toys for poorer children in Europe from the 16th century. Wood continued to be the dominant material for dolls in Europe until the 19th century. Through the 18th and 19th centuries, wood was increasingly combined with other materials, such as leather, wax and porcelain and the bodies made more articulate. It is unknown when dolls' glass eyes first appeared, but brown was the dominant eye color for dolls up until the Victorian era when blue eyes became more popular, inspired by Queen Victoria.
Dolls, puppets and masks allow ordinary people to say what would be impossible to say openly in real life; in Iran, for example, during the Qajar era, people criticised the politics and social conditions of Ahmad-Shah's reign via puppetry without any fear of punishment. According to Islamic rules, the act of dancing in public, especially for women, is taboo. But dolls and puppets have free and independent identities and are able to do what is not feasible for a real person. Layli is a hinged dancing doll popular among the Lur people of Iran. The name Layli originates from the Middle Eastern folklore love story Layla and Majnun. Layli is the symbol of the beloved, who is spiritually beautiful. Layli also represents and maintains a cultural tradition that is gradually vanishing in urban life.
Industrial era
During the 19th century, dolls' heads were often made of porcelain and combined with a body of leather, cloth, wood, or composite materials, such as papier-mâché or composition, a mix of pulp, sawdust, glue and similar materials. With the advent of polymer and plastic materials in the 20th century, doll making largely shifted to these materials. The low cost, ease of manufacture, and durability of plastic materials meant new types of dolls could be mass-produced at a lower price. The earliest materials were rubber and celluloid. From the mid-20th century, soft vinyl became the dominant material, in particular for children's dolls. Beginning in the 20th century, both porcelain and plastic dolls are made directly for the adult collectors market. Synthetic resins such as polyurethane resemble porcelain in texture and are used for collectible dolls.
Colloquially the terms porcelain doll, bisque doll and china doll are sometimes used interchangeably. But collectors make a distinction between china dolls, made of glazed porcelain, and bisque dolls, made of unglazed bisque or biscuit porcelain. A typical antique china doll has a white glazed porcelain head with painted molded hair and a body made of cloth or leather. The name comes from china being used to refer to the material porcelain. They were mass-produced in Germany, peaking in popularity between 1840 and 1890 and selling in the millions. Parian dolls were also made in Germany, from around 1860 to 1880. They are made of white porcelain similar to china dolls but the head is not dipped in glaze and has a matte finish. Bisque dolls are characterized by their realistic, skin-like matte finish. They had their peak of popularity between 1860 and 1900 with French and German dolls. Antique German and French bisque dolls from the 19th century were often made as children's playthings, but contemporary bisque dolls are predominantly made directly for the collectors market. Realistic, lifelike wax dolls were popular in Victorian England.
Up through the middle of the 19th century, European dolls were predominantly made to represent grown-ups. Childlike dolls and the later ubiquitous baby doll did not appear until around 1850. But, by the late 19th century, baby and childlike dolls had overtaken the market. By about 1920, baby dolls typically were made of composition with a cloth body. The hair, eyes, and mouth were painted. A voice box was sewn into the body that cried ma-ma when the doll was tilted, giving them the name Mama dolls. During 1923, 80% of all dolls sold to children in the United States were Mama dolls.
Paper dolls are cut out of paper, with separate clothes that are usually held onto the dolls by folding tabs. They often reflect contemporary styles, and 19th century ballerina paper dolls were among the earliest celebrity dolls. The 1930s Shirley Temple doll sold millions and was one of the most successful celebrity dolls. Small celluloid Kewpie dolls, based on illustrations by Rose O'Neill, were popular in the early 20th century. Madame Alexander created the first collectible doll based on a licensed character – Scarlett O'Hara from Gone with the Wind.
Contemporary dollhouses have their roots in European baby house display cases from the 17th century. Early dollhouses were all handmade, but, following the Industrial Revolution and World War II, they were increasingly mass-produced and became more affordable. Children's dollhouses during the 20th century have been made of tin litho, plastic, and wood. Contemporary houses for adult collectors are typically made of wood.
The earliest modern stuffed toys were made in 1880. They differ from earlier rag dolls in that they are made of plush fur-like fabric and commonly portray animals rather than humans. Teddy bears first appeared in 1902–1903.
Black dolls have been designed to resemble dark-skinned persons varying from stereotypical to more accurate portrayals. Rag dolls made by American slaves served as playthings for slave children. Golliwogg was a children's book rag doll character in the late 19th century that was widely reproduced as a toy. The doll has very black skin, eyes rimmed in white, clown lips, and frizzy hair, and has been described as an anti-black caricature. Early mass-produced black dolls were typically dark versions of their white counterparts. The earliest American black dolls with realistic African facial features were made in the 1960s.
Fashion dolls are primarily designed to be dressed to reflect fashion trends and are usually modeled after teen girls or adult women. The earliest fashion dolls were French bisque dolls from the mid-19th century. Contemporary fashion dolls are typically made of vinyl. Barbie, from the American toy company Mattel, dominated the market from her inception in 1959. Bratz was the first doll to challenge Barbie's dominance, reaching forty percent of the market in 2006.
Plastic action figures, often representing superheroes, are primarily marketed to boys. Fashion dolls and action figures are often part of a media franchise that may include films, TV, video games and other related merchandise. Bobblehead dolls are collectible plastic dolls with heads connected to the body by a spring or hook in such a way that the head bobbles. They often portray baseball players or other athletes.
Modern era
With the introduction of computers and the Internet, virtual and online dolls appeared. These are often similar to traditional paper dolls and enable users to design virtual dolls and drag and drop clothes onto dolls or images of actual people to play dress up. These include KiSS, Stardoll and Dollz.
Also with the advent of the Internet, collectible dolls are customized and sold or displayed online. Reborn dolls are vinyl dolls that have been customized to resemble a human baby with as much realism as possible. They are often sold online through sites such as eBay. Asian ball-jointed dolls (BJDs) are cast in polyurethane synthetic resin in a style that has been described as both realistic and influenced by anime. Asian BJDs and Asian fashion dolls such as Pullip and Blythe are often customized and photographed. The photos are shared in online communities.
Uses, appearances and issues
Since ancient times, dolls have played a central role in magic and religious rituals and have been used as representations of deities. Dolls have also traditionally been toys for children. Dolls are also collected by adults, for their nostalgic value, beauty, historical importance or financial value. Antique dolls originally made as children's playthings have become collector's items. Nineteenth-century bisque dolls made by French manufacturers such as Bru and Jumeau may be worth almost $22,000 today.
Dolls have traditionally been made as crude, rudimentary playthings as well as with elaborate, artful design. They have been created as folk art in cultures around the globe, and, in the 20th century, art dolls began to be seen as high art. Artist Hans Bellmer made surrealistic dolls that had interchangeable limbs in 1930s and 1940s Germany as opposition to the Nazi party's idolization of a perfect Aryan body. East Village artist Greer Lankton became famous in the 1980s for her theatrical window displays of drug addicted, anorexic and mutant dolls.
Lifelike or anatomically correct dolls are used by health professionals, medical schools and social workers to train doctors and nurses in various health procedures or to investigate cases of sexual abuse of children. Artists sometimes use jointed wooden mannequins in drawing the human figure. Many ordinary doll brands are also anatomically correct, although most types of dolls are degenitalized.
Egli-Figuren are a type of doll that originated in Switzerland in 1964 for telling Bible stories.
In Western society, a gender difference in the selection of toys has been observed and studied. Action figures that represent traditional masculine traits are popular with boys, who are more likely to choose toys that have some link to tools, transportation, garages, machines and military equipment. Dolls for girls tend to represent feminine traits and come with such accessories as clothing, kitchen appliances, utensils, furniture and jewelry.
Pediophobia is a fear of dolls or similar objects. Psychologist Ernst Jentsch theorized that uncanny feelings arise when there is an intellectual uncertainty about whether an object is alive or not. Sigmund Freud further developed these theories. Japanese roboticist Masahiro Mori expanded on them to develop the uncanny valley hypothesis: if an object is clearly non-human, its human characteristics will stand out and be endearing; however, if that object reaches a certain threshold of human-like appearance, its non-human characteristics will stand out and be disturbing.
Doll hospitals
A doll hospital is a workshop that specializes in the restoration or repair of dolls. Doll hospitals can be found in countries around the world. One of the oldest doll hospitals was established in Lisbon, Portugal in 1830, and another in Melbourne, reputedly the first such establishment in Australia, was founded in 1888. There is a Doll Doctors Association in the United States. Henri Launay, who has been repairing dolls at his shop in northeast Paris for 43 years, says he has restored over 30,000 dolls in the course of his career. Most of the clients are not children, but adults in their 50s and 60s. Some doll brands, such as American Girl and Madame Alexander, also offer doll hospital services for their own dolls.
Dolls and children's tales
Many books tell tales about dolls, including Wilhelmina: The Adventures of a Dutch Doll, by Nora Pitt-Taylor, illustrated by Gladys Hall. Rag dolls have featured in a number of children's stories, such as the 19th century character Golliwogg in The Adventures of Two Dutch Dolls and a Golliwogg by Bertha Upton and Florence K. Upton and Raggedy Ann in the books by Johnny Gruelle, first published in 1918. The Lonely Doll is a 1957 children's book by Canadian author Dare Wright. The story, told through text and photographs, is about a doll named Edith and two teddy bears.
References
Works cited
External links
Dolls at the V&A Museum of Childhood
The Canadian Museum of Civilization – The Story of Dolls in Canada
Play (activity)
Toy figurines
Traditional toys | Doll | [
"Biology"
] | 3,852 | [
"Play (activity)",
"Behavior",
"Human behavior"
] |
13,400,191 | https://en.wikipedia.org/wiki/Mechanical%20Engineering%20Heritage%20%28Japan%29 | The Mechanical Engineering Heritage (Japan) is a list of sites, landmarks, machines, and documents that made significant contributions to the development of mechanical engineering in Japan. Items in the list are certified by the Japan Society of Mechanical Engineers (JSME).
Overview
The Mechanical Engineering Heritage program was inaugurated in June 2007 in connection with the 110th anniversary of the founding of the JSME. The program recognizes machines, related systems, factories, specification documents, textbooks, and other items that had a significant impact on the development of mechanical engineering. When a certified item can no longer be maintained by its current owner, the JSME acts to prevent its loss by arranging a transfer to the National Science Museum of Japan or to a local government institution.
The JSME plans to certify several items of high heritage value over the coming years.
Categories
Items in the Mechanical Engineering Heritage (Japan) are classified into four categories:
Sites: Historical sites that contain heritage items.
Landmarks: Representative buildings, structures, and machinery.
Collections: Collections of machinery, or individual machines.
Documents: Machinery-related documents of historical significance.
Each item is assigned a Mechanical Engineering Heritage number.
Items certified in 2007
Sites
No. 1: Steam engines and hauling machinery at the Kosuge Ship Repair Dock, (built in 1868). – Nagasaki Prefecture
Landmarks
No. 2: Memorial workshop and machine tools at Kumamoto University, (built in 1908). – Kumamoto Prefecture
Collections
No. 3: Forged iron treadle lathe (made in 1875 by Kaheiji Ito). – Aichi Prefecture
No. 4: Industrial Steam Turbine (Parsons steam turbine), (made in 1908). – Nagasaki Prefecture
No. 5: 10A Mazda Wankel engine (made in 1967). – Hiroshima Prefecture
No. 6: Honda CVCC engine (first engine to meet emission standards of Clean Air Act (1970)). – Tochigi Prefecture
No. 7: FJR710 Jet Engine (made in 1971). – Tokyo
No. 8: Yanmar small horizontal Diesel Engine, Model HB (made in 1933). – Shiga Prefecture
No. 9: Prof. Inokuchi's centrifugal pump, (made in 1912). – Aichi Prefecture
No. 10: High frequency generator (made in 1929 by German AEG). – Aichi Prefecture
No. 11: 0-Series Tōkaidō Shinkansen electric multiple units (operated 1964–1978). – Osaka Prefecture
No. 12: Class 230 No. 233 2-4-2 steam tank locomotive (made 1902–1909). – Osaka Prefecture
No. 13: YS11 passenger airplane (flown 1964–2009). – Tokyo
No. 14: Cub Type F, Honda bicycle engine (1952). – Tochigi Prefecture
No. 15: Chain stitch sewing machine for the production of straw hats (made in 1928). – Aichi Prefecture
No. 16: Non-stop shuttle change automatic loom, Toyoda Type G (made in 1924). – Aichi Prefecture
No. 17: Hand operated letterpress printing machine (made in 1885). – Tokyo
No. 18: Komatsu bulldozer G40 (made in 1943). – Shizuoka Prefecture
No. 19: Olympus gastrocamera GT-I (made in 1950). – Tokyo
No. 20: Buckton universal testing machine (installed in 1908). – Hyōgo Prefecture
No. 21: Mutoh Drafter manual drafting machine, MH-I (made in 1953). – Tokyo
No. 22: Myriad year clock, (made in 1851). – Tokyo
No. 23: The Chikugo River Lift Bridge (opened in 1935). – Between Fukuoka and Saga Prefecture
Documents
No. 24: JSME publications from the early days of the society, (published in 1897, 1901 and 1934). – Tokyo
No. 25: "Hydraulics and Hydraulic machinery", lecture notes by Professors Bunji Mano and Ariya Inokuchi at Imperial University of Tokyo (1905). – Tokyo
Items certified in 2008
Sites
No. 26: Sankyozawa hydroelectric power station and related objects, (operating since 1888). – Miyagi Prefecture
No. 27: Hydraulic lock (made in United Kingdom, operating since 1908) and floating steam crane (operated 1905–2008), Miike Port. – Fukuoka Prefecture
Collections
No. 28: "Entaro" bus (Ford TT type), (1923, adapted from chassis imported from United States). – Saitama Prefecture
No. 29: Mechanical telecommunication devices (made in 1947 by Shinko Seisakusho Co.). – Iwate Prefecture
No. 30: Mechanical calculator, (Yazu Arithmometer, patented in 1903). – Fukuoka Prefecture
No. 31: Induction motor and design sheet (made in 1910, in the earliest days of the Japanese electrical machinery industry). – Ibaraki Prefecture
Items certified in 2009
Sites
No. 32: Mechanical Device of Sapporo Clock Tower, (clock mechanism imported/installed from E. Howard & Co. in 1881, moved in 1906). – Hokkaidō
Landmarks
No. 33: Minegishi Watermill, (installed in 1808, in operation till 1965). – Tokyo
Collections
No. 34: The Master Worm Wheel of the Hobbing Machine HRS-500 (machined on a hobbing machine from Rhein-Neckar of Germany in 1943). – Shizuoka Prefecture
No. 35: Locomobile, the oldest private steam automobile in Japan (one of eight imported from the Locomobile Company of America in 1902; it failed in 1908, was rediscovered in 1978, and after only the boiler was replaced it became operable again in 1980). – Hokkaidō
No. 36: Arrow-Gou, the oldest Japanese-made car (an example of fundamental Japanese vehicle technology, made in 1916). – Fukuoka Prefecture
No. 37: British-made 50 ft turntable (made by Ransomes & Rapier in 1897; its original installation location is unknown. It was moved in 1941 and moved again to the Ōigawa Railway in 1980, where it remains in operation. Two others are believed to have also been imported and are still in operation at other locations, but their histories are not known). – Shizuoka Prefecture
Items certified in 2010
Landmarks
NO. 38: Carousel El Dorado of Toshimaen, the oldest in Japan and among the oldest in the world, produced by Hugo Haase (German, 1857–1933) in 1907. It travelled in Europe, was moved to Steeplechase Park on Coney Island, New York in 1911, operated there until 1964, and was then purchased and refurbished; it has operated at Toshimaen since 1971. – Tokyo
No. 39: Revolving stage and its slewing mechanism of old Konpira Grand Theatre. – Kagawa Prefecture
Collections
No. 40: Electric vehicle TAMA (E4S-47 I), produced by Tachikawa Aircraft Company Ltd in 1947 to cope with the oil shortage after World War II. The car has a single 36 V, 120 A motor, a range of 65 km on a single charge, and a maximum speed of 35 km/h. The second model, in 1949, had a range of 200 km. It was used as a taxi in Tokyo. Production stopped by the time of the Korean War because of the cost of batteries. – Kanagawa Prefecture
No. 41: The first Japanese-made forklift truck with an internal combustion engine, maximum load 6,000 pounds, built in 1949 and modelled on Clark Material Handling Company's 4,000-pound type. – Shiga Prefecture
No. 42: Takasago and Ebara type Centrifugal Refrigerating machine. – Kanagawa Prefecture
No. 43: Automated Ticket Gate (turnstile); OMRON and Kintetsu studied it jointly from 1964, and model PG-D120 entered service in 1973 after prototype evaluation beginning in 1967. – Kyoto Prefecture
Items certified in 2011
Landmarks
NO. 44: Seikan Train Ferry and Moving Rail Bridges. Ferry service between Aomori Station on Honshu and Hakodate Station on Hokkaido started in 1908 and operated as a train ferry from 1925 until the Seikan Tunnel opened in 1988. The landmark comprises the moving rail bridge at Aomori Station and the moving rail bridge at Hakodate Station. – Aomori Prefecture and Hokkaidō
Collections
NO. 45: Type ED15 Electric Locomotive. This direct-current locomotive, the first made in Japan, was built in 1924 and operated until 1960. It was functionally equal to imported electric locomotives, with a maximum speed of 65 km/h and 820 kW from four main motors. – Ibaraki Prefecture
NO. 46: Silk reeling machines, several types of which are preserved: 2 of the 300 machines imported by a French engineer for the Tomioka silk mill, which operated from 1872; a Japanese-made machine based on French and Italian technologies; and other improved and innovative Japanese-made machines. – Nagano Prefecture
NO. 47: Toyoda Power Loom. Looms powered by steam engine and by electric motor, invented by Sakichi Toyoda in 1897 and patented the following year. The machine's productivity was 20 times higher, and its cost 1/20th, compared with imported machines, and it was widely used throughout Japan. – Aichi Prefecture
NO. 48: Hydraulic Excavator UH03, made in Japan in 1965, the first evolved type with twin hydraulic pumps and twin valves, a bucket size of 0.35 m3 and an engine output of 58 hp. Excavators made in Japan before the UH03 were single-pump, single-valve types built under technical tie-ups with European firms. – Ibaraki Prefecture
NO. 49: Zipper chain machine (YKK-CM6), the YKK Group's first machine made in Japan (1953), evolved from a machine imported from the US in 1950. – Toyama Prefecture
NO. 50: Ticket Vending Machine, the first train ticket vending machine. Developed in 1962, it consists of approximately 250 relays and can print train tickets for various destinations. It accepts coins, checks them for authenticity, sorts and stores them, and makes change. The improved type made in 1969 was installed for Expo '70 in Suita, Osaka. – Nagano Prefecture
Items certified in 2012
Landmarks
NO. 51: The Tokyu 5200 series, made in 1958, was the first railcar with a stainless steel exterior, aimed at eliminating the periodic repainting otherwise required for maintenance. The Tokyu 7000 series railcar, made by Tokyu Car Corporation in 1965, was the first all-stainless-steel railcar, including the framing. The framing technology was learned and improved under a technical tie-up with the Budd Company. – Kanagawa Prefecture
NO. 52: Yoshino Ropeway, opened on March 12, 1929. The oldest surviving aerial lift line in Japan and among the oldest in the world. – Nara Prefecture
Collections
NO. 53: The oldest English-style 9-foot lathe in Japan, made in 1889 by Ikegai Corp., Japan's first machine tool manufacturer, for its own use. – Tokyo
NO. 54: The Ricoh desktop copier model 101, made in 1955, was the first Japanese document reproduction machine using the diazo chemical process. With newly developed photographic paper, the copier needed no rinsing in water and operated without odor. – Shizuoka Prefecture
NO. 55: Washlet G, released in 1980, was the first type developed by Toto. The original model, intended for hemorrhoid therapy, was imported from the American Bidet Company in 1964 for the Japanese market. Toto opened a new market for electric toilet seats for general use. – Fukuoka Prefecture
Items certified in 2013
Landmarks
No. 56: Mechanical Car Parking System ROTOPARK, made by the Swiss company Bajulaz S.A., was imported in 1976 and installed as an underground parking system at the south exit of Shinjuku Station. The system is controlled by mechanical relays and DC motors. – Tokyo
Collections
NO. 57: Dawn of Japanese Home Electric Appliances made by Toshiba. In the early Shōwa period, 1930 to 1931, a refrigerator and a vacuum cleaner were made based on General Electric models, and a washing machine was produced with technology introduced from the Thor washing machine of the Chicago-based Hurley Electric Laundry Equipment Company. – Kanagawa Prefecture
NO. 58: Former Yokosuka Arsenal's steam hammers. Six hammers were imported from the Netherlands in 1865 (Keiō era). After the Meiji Restoration they were used by the Imperial Japanese Navy, and after World War II by United States Fleet Activities Yokosuka; the 0.5-ton work load capacity type was used until 1971 and the 3-ton type until 1996. In 2002 the hammers were returned as the property of Japan, and they are displayed in the Léonce Verny Memorial House. – Kanagawa Prefecture
NO. 59: Okuma non-round plain bearing and GPB cylindrical grinder, developed by Okuma Corporation in 1954; 700 units were produced by 1969, contributing to Japan's precision machinery industries. – Aichi Prefecture
NO. 60: Japan's First 16mm Film Projector. A hand-cranked projector, developed by studying an imported model, was made in 1927, and a motor-driven type was developed in 1930 by the Elmo Company Limited. – Aichi Prefecture
NO. 61: Japanese Automata YUMIHIKI-DOJI, Karakuri ningyō (lit: a boy bending a bow), created by Tanaka Hisashige. – Fukuoka Prefecture
Items certified in 2014
Landmarks
No. 62: Soil and Tractor Museum of Hokkaido. It displays tractors and agricultural machinery innovations in Hokkaido, mostly from after World War II, along with the resulting artificial soil improvement technologies and agricultural management philosophy. – Hokkaido
No. 63: Museum of Agricultural Technology Progress. 250 imported and Japanese-made agricultural machines powered by human labor, animal labor, and later by prime movers or engines, from the late Meiji period to the late 1950s and early 1960s. The display includes the Japanese-originated rice transplanter and straw rope maker. – Saitama Prefecture
No. 64: Telpher of the Port of Shimizu, operating from 1928 to 1971; height 8.4 m, total rail length 189.4 m, lifting capacity 2 to 3 tonnes, driven by an electric motor and used to unload imported timber. – Shizuoka Prefecture
Collections
No. 65: Japan-made Snow Vehicles (KD604 & KD605) which reached the South Pole in 1968. Three snow vehicles took part in a 5,200 km round trip lasting 5 months, but one vehicle, KD503, suffered engine trouble and was abandoned on the outward leg. The prototype KD501 was not used for the trip, and KD502 is preserved at Showa Station. The trip contributed to the discovery of the first meteorite in Antarctica. – KD604 is in Tokyo and KD605 is in Akita Prefecture
No. 66: Japan-made Wristwatches which Showed Remarkable Technological Innovations. Japan adopted the Western timekeeping system, replacing the traditional Japanese time system, in 1873. The founder of Seiko, Kintaro Hattori, started watch manufacturing in 1892 and produced a pocket watch in 1895, the first Japanese wrist watch, the Seiko Laurel, in 1913, and the Grand Seiko (グランドセイコー) in 1960, which was as accurate as Swiss chronometer watches; the world's first quartz wristwatch, the Seiko Quartz-Astron 35SQ, followed in 1969. – Tokyo
No. 67: Double Housing Planing Machine, made by the Akabane Engineering Works of the Ministry of Industry. A 6-foot planing machine bearing three Chrysanthemum Flower Seal emblems, made in 1879. The Ministry of Industry produced various Japanese-made machine tools for industrial innovation aimed at modernization. – Aichi Prefecture
No. 68: Fuji Automatic Massage Machine, mass production type invented by Fuji (フジ医療器) in 1954. – Osaka Prefecture
Documents
No. 69: The Collection of Drawings for Japanese Machines. 288 drawings, published in the early Shōwa period (first edition 1932, revised 1937), used to show engineers that Japanese machines were equal to, or at least not inferior to, imported machines. The drawings cover 16 industrial fields of machinery, such as measuring devices, steam boilers, steam engines, steam turbines, internal combustion engines, automobiles, rolling stock, water wheels, pumps, mechanical fans, gas compressors, cryocoolers, machine tools, cranes, haulage, and spinning and weaving machines. – Tokyo
Items certified in 2015
Landmarks
No. 70: Railway bascule bridge "Suehiro Kyoryo". The bridge was constructed in December 1931 and was still in function as of 2015. Its dimensions are: length 58 m, width 4 m, balance weight 24 tons, movable girder length 18 m, girder weight 48 tons. – Mie Prefecture
Collection
No. 71: Automatic Encrusting Machine Model 105. High-viscosity materials such as dough, used for Japanese manjū and wagashi and for bread worldwide, were traditionally encrusted by hand. The automatic encrusting machine was invented as model 101 in 1963 and improved as model 105 in 1966; 1,838 sets were sold in 8 years, contributing to efficient production in food cultures around the world. – Tochigi Prefecture
No. 72: Automatic Transmission of "MIKASA". The first Japanese automatic transmission with a torque converter was developed in 1951, and more than 500 MIKASA front-wheel-drive cars were produced from 1957 to 1960. – Tokyo
No. 73: Japan-Made First Coin Counter. The coin counter was requested by the mint, produced in 1949, and delivered in February 1950. Large imported coin counters had been used before this improved type, which was smaller, simpler in structure, and more accurate in counting. A commercial type was put on the market in 1953. Its selectable coin sizes and counting ability helped reduce coin-counting work at banks and supported Japanese-made full-line vending machines. – Hyogo Prefecture
No. 74: KOBAS Stationary Suction Gas Engine and Charcoal Gas Producer Unit. Development of this wood gas engine with a magneto ignition system began in 1928, and it was produced from 1936. With petroleum scarce in Japan during and after World War II, wood gas engines were widely used until about 1955. – Hiroshima Prefecture
No. 75: Small Once-through Steam Boiler Type ZP. This once-through boiler, operating at less than 10 atmospheres of pressure and with a heating surface under 10 m2, became usable without a license after a change to the Industrial Safety and Health Act in 1959, and took a 70% share of the small boiler market. – Ehime Prefecture
No. 76: All-Electric Industrial Robot "MOTOMAN-L10". MOTOMAN-L10, developed in 1977, was the first all-electric-drive industrial robot. Before this, hydraulically driven robots were used, with less accurate positioning and more limited moving range and speed. – Fukuoka Prefecture
Items certified in 2016
Landmarks
No. 77: Matsukawa Geothermal Power Plant. Starting operation in 1966, it was the first commercial geothermal power plant in Japan. To avoid erosion and corrosion of the steam turbine blades by sulfur, the turbine is made of chromium-molybdenum-vanadium steel without nickel. Initial output was 9,500 kW, increased to 23,500 kW in 1993. – Iwate Prefecture
Collection
No. 78: Subaru 360-K111. The Japanese government proposed the "national car" concept in 1955, and the car was produced in 1958. Its nickname, tentoumushi, means ladybird (Coccinellidae), comparable to the Volkswagen Beetle. – Gunma Prefecture
No. 79: Double Expansion Marine Steam Engine. The 97-horsepower main engine of the small wooden guard ship Tachibana Maru (たちばな丸), in the port of Kobe since 1911. The ship was used as a training ship by Kobe University (formerly the Kobe University of Mercantile Marine) until 1964. – Saitama prefecture
No. 80: Simple Cash Register Zeni-ai-ki. Produced in 1916 as an alternative to expensive imported cash registers. The appealing name Zeni-ai-ki literally means money-matching machine; used in place of traditional calculation with the soroban, it sold more than 10,000 units by 1927 and remained widely used until more advanced types appeared after the war ended in 1945. – Tokyo
No. 81: Tatsuno's Patent Gasoline Measuring Equipment Type No. 25. The first Japanese-made fuel dispenser, from 1919. Its patented safety mechanism worked well, and no fire broke out at the time of the 1923 Great Kantō earthquake. – Kanagawa Prefecture
No. 82: Gate-type Car Wash Machine. Japan's first car wash machine, gate-shaped with one horizontal and two side vertical rotating brushes and able to wash a car in three minutes, developed in 1962. Before this, in the 1950s, car washing was done by manual brushing with a water jet. – Aichi Prefecture
No. 83: Optical Instruments of the Kashinozaki Lighthouse. One of Japan's first eight Western-style rotating flashing-light lighthouses, built with technical advice from Richard Henry Brunton and operating from 1870. It was the first stone building among the 26 lighthouses built under his advice and guidance. – Wakayama Prefecture
Items certified in 2017
Site
No. 84: Full set of mechanical equipment in the bascule bridge at Kachidoki Bridge. Kachidoki Bridge (勝鬨橋), a bascule bridge over the Sumida River, has a span of 51.6 m between its two pivot axes, the longest in Japan, and a total length of 246.0 m. It was constructed in 1940, movable operation ended in 1970, and it was classified as an Important Cultural Property in 2007. Each movable leaf weighs 1,000 tonnes and has a 1,000-tonne counterweight; the movable parts on both sides of the river total 4,000 tonnes in symmetry. The opening and closing speed is controlled by Ward Leonard control, combining an AC motor and a DC motor. – Tokyo
No. 85: The longitudinal flow ventilation system with jet fans (booster fans) of Okuda Tunnel. The first eight jet fan units, with an inner diameter of 630 mm and a length of 4.7 m, were imported from Voith of Germany, thoroughly tested and evaluated, then applied in the Okuda Tunnel (奥田トンネル) of the Kitakyusyu Expressway in 1966, and used until the tunnel was widened and changed to one-way traffic in 1975. The jet fans ventilate along the length of the tunnel, and the technology established by this application has since been applied to more than eighty percent of mountain tunnels in Japan. Two units are preserved. – Osaka Prefecture.
Collection
No. 86: Electric car of Japan's first subway. The Tokyo Metro Ginza Line, from Ueno to Asakusa, opened in 1927. The electric car (length 16 m × width 2.6 m × height 3.5 m, weight 35.5 tonnes) was constructed with imported major parts and used the mechanical ATS systems employed overseas. – Tokyo
No. 87: Deep Submergence Research Vehicle SHINKAI 2000. Shinkai 2000 is the manned successor to Japan's first manned submersible, SHINKAI (1970–1976). – Kanagawa Prefecture
No. 88: Green Sand Molding Machine Type C-11. The first Japanese sand casting molding machine, capable of making a 450 mm × 300 mm × 200 mm high mold, replacing traditional handmade molds. The machine was developed in-house in 1927, with reference to a machine imported from the United States. – Aichi Prefecture
No. 89: Multihead Weigher ACW-M-1. Japan's first patented weighing machine, invented in 1973: bell peppers of various weights are sorted and grouped by a CPU into thirty single selling units of 150 ± 2 g per minute, without damaging the peppers. The Multihead Weigher series that followed sold more than 30,000 units and is widely used, with a major market share, for packing snacks, agricultural products, sausages, frozen food, pharmaceutical drugs, machine components, and more. – Shiga Prefecture
No. 90: Full Automatic Glove Knitting Machine (Square Fingertip Type). Knitted gloves for the army, known in Japanese as gun-te (軍手, literally: army hand), have been used to protect the hands since the Meiji period and were produced by hand knitting or semi-automatically. A fully automatic machine using the sinker knitting method was invented in 1964, producing a single glove (half a pair) in 2 minutes and 15 seconds, with a single worker monitoring and controlling 30 machines. – Wakayama Prefecture
Items certified in 2018
Collection
No. 91: Historical Machine Tools collected by Nippon Institute of Technology. 232 machine tools are displayed in the museum. They illustrate the historical transition of machine tools in Japan, from imports, to replicas, to machines made under technical license agreements, over the period from the mid-Meiji era to the Shōwa 50s (1975–1984). – Saitama Prefecture
No. 92: Airless Spray Painting Equipment. The first equipment of this kind made in Japan, produced under a United States patent license with some improvements, was put on the market in 1959. – Aichi Prefecture
No. 93: CRT Funnel Pressing Machine. Production of cathode ray tubes for television in Japan started under technical license from the United States. The front face and the centrifugally cast rear funnel were at first produced separately and then welded together; after the funnel press machine appeared as a new method, production time, welding accuracy, quality, and productivity improved. The market share of Japanese-made cathode ray tubes of 24 inches and over was almost 100 percent at the end of the 1980s. – Shiga Prefecture
No. 94: Type Casting Machine of the Newspaper Museum. The Museum of the Kumamoto Daily News displays various newspaper publishing machines; one of them is Japan's first Man-nen jidou katsuji cyuzoki (万年自働活字鋳造機, lit.: ten-thousand-year-life automatic type casting machine), reflecting a number of patents, put on the market in 1934, capable of casting 10.5-point type at a speed of 90 Japanese letter types per minute, and used until 1982. – Kumamoto Prefecture
Items certified in 2019
Landmarks
No. 95: Conduit Gate of Tase Dam. Japan communicated frequently and in detail with the US, learning from it and adding its own technology to improve the four US-made high-pressure slide gates (conduit gates), which were then installed at a world-record depth near the bottom of the reservoir of the dam, completed in 1945. The water discharge system from the reservoir (discharge volume per gate: 120 m3/s) became the foundation for technologies applied to other dams thereafter. – Iwate Prefecture
No. 96: Oil Mining and Refining System at Kanazu Oilfield. Crude oil drilling and extraction were attempted before the Meiji era, but not commercialized because of collapsible strata. Kanichi Nakano (中野貫一) succeeded with manual drilling, extraction, and refining, reaching a production volume of 150,000 kiloliters per year in 1916, and was called the oil king of Japan. Mechanical methods were later deployed, but the field closed in 1998, and in 2008 a museum opened to display the machines and materials used. – Niigata Prefecture
No. 97: Steam Locomotives Preserved at Kyoto Railway Museum and Related Objects. 23 steam locomotives used until 1984, their maintenance facility, and related records are preserved. Eight of the 23 locomotives, the railway roundhouse, and the railway turntable are still operational. – Kyoto Prefecture
Collection
No. 98: Dawn of the Japanese Passenger Elevator. An elevator imported from the US, with its basic elements of cage, guide rails, and emergency stop system, was studied further; a fully push-button automatic elevator was developed in 1915 and deployed in Japan. The exhibit displays an elevator together with records of Japan's own process of technological study and improvement. – Fukui Prefecture
No. 99: Monorail for Steep Slopes MONORACK M-1. Simple cable transport systems were widely used on the steep hills and mountains where mikan orchards are cultivated in the Seto Inland Sea area. In 1966 a newly developed monorail system became capable of transporting crops on slopes of up to 40 degrees and of curving flexibly to the left and right. The monorail allows freer installation layouts and saves labor. – Okayama Prefecture
Items certified in 2020
Collection and Documents
No. 100: Educational Equipment for Mechanical Engineering of the Imperial College of Engineering / Related Documents of C. D. West. The dawn of modern engineering in Japan coincided with the period in which engineering textbooks were being published in Western Europe. The Imperial College of Engineering in Tokyo is believed to be the first university worldwide bearing the name "engineering"; it was succeeded by Tokyo Imperial University (東京帝國大學, Tōkyō teikoku daigaku). A number of technical drawings, tools, mechanisms, models, lecture notes, and educational materials used by Charles Dickinson West, Henry Dyer, and others are preserved and displayed. – Tokyo
Collection
No. 101: ASAHIFLEX I・IIB, MIRANDA T, ZUNOW, NIKON F Single-lens reflex (SLR) cameras, which advanced Japanese cameras to the world standard. These five Japanese single-lens reflex camera models of the 1950s, the Asahiflex I and IIB, Miranda T, Zunow, and Nikon F, offered greater convenience and robustness and opened a revolutionary new era, changing the common saying from "cameras are German-made" to "cameras are Japanese-made". – Tokyo
No. 102: NARA Jiyu Mill (High-speed Impact Mill, First Milling Machine Manufactured in Japan). A laboratory of the Furukawa Group asked for an improved pulverizer for casein, a material with the physical properties of elasticity and thermostability; referring to a German-made pulverizer, the first NARA Jiyu milling machine was commercialized, with a utility model, in 1928. Using strong impact and shearing force without generating heat, the pulverizer is used to produce granular material from minerals, medicinal plants, food, dyes, fodder, medicines, and more. – Tokyo
No. 103: Electric Arc Spray Gun in the early era of thermal spraying. M. U. Schoop of the University of Zurich obtained a patent for metal thermal spraying in 1909. The jewellery shop TENSHODO in Ginza obtained the exclusive right to use this patented technology in 1919, but its attempt to apply gas thermal spraying to jewellery did not succeed; a Japanese patent for spraying by electric melting was obtained in 1921. Industrial use began in 1935, with further improvements in 1955 and 1963; surface finishing to prevent rust on railroads, water containers, and other items began, and the process was later applied widely in industry for thermostability, abrasion resistance, and chemical resistance. – Shiga Prefecture
No. 104: Continuously Variable Transmission / Ring-Cone Type. The ring-cone (RC) continuously variable transmission (CVT) was invented in 1952. Power is transmitted through the oil fluid, without solid contact between the input cone and the output cone, so the cones do not wear. The CVT is widely used in conveyor belts, machine tools, and other areas because of its simple structure and low cost. It operates without slip and with a rotation rate fluctuation of 2 to 3% or less, thanks to an automatic mechanism that regulates the contact pressure of each cone. – Kyoto
Items certified in 2021
Collection
No. 105: Existing example of Japan's first electric milking machine, DK-5 II. The first Japanese-made electric milking machine was developed in 1957 by referencing the structure of an imported milking machine and adding an in-house vacuum mechanism. The machine was cheaper yet better specified, relieved the hard physical labor of hand milking in dairy farming, and contributed to better health for the Japanese people. – Nagano Prefecture
No. 106: Spur Gear Grinding Machine Type ASG-2. Gears are essential machine elements. For a machine to operate with low noise and vibration, its gears must be finished on a grinding machine. Even into the Shōwa era there were only a few such machines in the world, made by the Swiss company MAAG and others, and none in Japan. The Kure Naval Arsenal placed an order for a grinding machine to make precision gears; after trial and error the first Japanese-made spur gear grinding machine was completed in 1930, and a total of 13 machines were made by 1945. One machine is preserved at the museum of the Nippon Institute of Technology. A unique feature of this grinding machine is that, by changing its constituent gears, it can produce a variety of gears of different sizes and numbers of teeth. – Saitama prefecture
No. 107: Sushi Machine. A machine that automatically grasps and molds sushi rice (nigirizushi) was developed in 1981 after close study of sushi artisans' technique. Suzumo Machinery aimed to recover and increase Japan's total rice consumption, which had been decreasing as rice production was adjusted under Japan's set-aside policy; letting people eat sushi more cheaply in sushi shops was one solution on the consumer side. The machine produced 1,200 molded sushi pieces per hour and opened the way for conveyor belt sushi systems. – Saitama Prefecture
No. 108: Rolling Stock Test Stand for Shinkansen. Feasibility testing of Shinkansen rolling stock targeting a maximum speed of 250 km/h, and of bogies at 350 km/h, was impracticable in a real environment, so a stationary simulation test stand was created by Hitachi and installed by JNR in 1959. Tests simulating various rolling stock conditions are carried out from the control and monitoring room. After the test stand was completed, testing of the prototype Tokaido Shinkansen bogie started in 1960 and contributed to determining feasible vehicle specifications. A new test stand for a maximum speed of 500 km/h was developed in 1990, but this test stand still operates and remains in use. – Tokyo
No. 109: Japan's oldest pitching machines, catapult type: KS-P/AR. The catapult-type pitching machine was designed by a lecturer at Kanto Gakuin University. Type KS-P was produced in 1958 and is preserved in the Baseball Hall of Fame and Museum of Japan in Tokyo Dome; type AR was produced at the same time and is preserved at the Chunichi batting center. The mechanism pitches 12 balls per minute, throwing fastballs and breaking balls with spin imparted by a hook, driven by the reaction of a compressed spring; it is equivalent to 15 pitchers and worked well in place of a batting practice pitcher. Arm-type and wheel-type pitching machines were produced as successors, and batting centers became popular places of amusement. – Tokyo (KS-P) and Gifu prefecture (AR)
No. 110: Electric Hand Planer Model 1000. In 1958 Makita produced the electric hand planer as its first consumer product, referring to a United States-made electric hand planer and making it suitable in terms of light weight, Japanese building material sizes, and easy handling by carpenters. Until then, planing was physically heavy work that required expertise. The electric planer Model 1000 consequently opened the way for carpenters' other electric power tools. The model has two 120 mm wide blades rotating at 13,000 revolutions per minute (26,000 cuts per minute) on 100-volt household mains electricity, making it easy to work hardwood and softwood, even against the grain. – Aichi Prefecture
No. 111: The Coining Presses during the Founding Period of the Japan Mint; Uhlhorn Münzprägemaschine and Presse Monétaire de Thonnelier. In 1871, the newly founded Japan Mint was the largest metal processing factory, melting bullion, casting, rolling, and die-stamping coins with the power of a steam engine. The final stamping machine had been invented by Diedrich Uhlhorn in 1817; 10 units imported between 1871 and 1873 produced 40 coins per minute. Another 8 units of the French machine developed by Nicolas Thonnelier, made in 1857 and capable of producing 50 coins per minute, were purchased from the closed Hong Kong mint, and one of the 8 is preserved. Both preserved machines are of historical value for Japanese coin production, and only a few such machines survive worldwide. – Osaka Prefecture
No. 112: Conveyor belt sushi machine, origin of a new food culture. A number of small dish-sized plates, shaped like scales or crescents, are linked together to form a circulating conveyor. The conveyor belt sushi mechanism was inspired by a brewery bottling line seen in 1948, and the first such sushi shop opened in Higashi Osaka in 1958. The machine certified here was made in 1985 and is still operating. – Osaka Prefecture
No. 113: Hydraulic Pile Press-in and Extraction Machinery Silent Piler KGK-100A. This is the first environmentally friendly hydraulic pile press-in machine, named SILENT PILER, free of pollution such as loud noise and vibration, developed jointly in 1975. As an initial step, two or three foundation piles are pressed in beforehand by, contrary to ordinary usage, placing a heavy load on the SILENT PILER to hold it down on the ground while it presses them in; in ordinary use, the machine then rides on those piles and grips them with clamps underneath. The hydraulic static load required to press in the next pile, and the opposing reaction force it produces, is smaller than the pull-out resistance of the two or three gripped piles, so the machine stands firmly and presses in piles one after another, moving forward to ride on and grip each newly pressed-in pile. Ordinary heavy equipment uses hydraulic pressures of 14 to 17 MPa, but pressing in or pulling out a pile required a newly designed hydraulic device delivering 70 MPa to produce 100 tonnes of drive force on a pile. The noise of a hammer-type pile driver is approximately 100 dB, while the Silent Piler produces only 55 dB. – Kochi Prefecture
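The working principle can be stated as a simple force balance (a schematic restatement of the description above; the symbols F_press, R_pull,i and the number of gripped piles n are illustrative and do not appear in the source):
F_{\text{press}} < \sum_{i=1}^{n} R_{\text{pull},i}
As long as the static press-in force on the new pile stays below the combined pull-out resistance of the n gripped piles, the machine remains anchored and can advance pile by pile.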
Items certified in 2022
Collection
No. 114: Surface Grinding Machine PSG-6B. A surface grinder gives machine elements their final surface finish. This machine combines a horizontally traversing rectangular table with a grinding head that moves up and down with high precision. The maker applied a self-developed hydraulic pump and cylinder to drive the table, and four precision ball bearings in the grinding head, so that the workpiece could be advanced in 0.001 mm steps; in 1953 it became the first machine to achieve surface finishing to a precision of 1/1,000 mm. – Gunma Prefecture
No. 115: Timber pre-cut system MPS-1. Fifty-seven percent of Japanese houses are built of timber, and 43 of those 57 percentage points use the traditional wooden post-and-beam construction method, in which a skilled carpenter designs each house and works the timber on the construction site. A manufacturer of timber-processing machinery sought to improve on this by pre-cutting the columns and beams at the factory instead of on site, but carpenters did not accept the system until the housing market and a labour shortage changed conditions in 1978. CAD and CAM were added to the system in 1985, and the Timber pre-cut system MPS-1 transformed construction: the pre-cut method accounts for 93 percent of such building today. – Aichi Prefecture
No. 116: Hand-cranked Garabo Spinning Machine. The Meiji government sought to raise cotton-yarn output with imported spinning machinery, but the imported machines were very expensive. A simple, low-cost, hand-cranked cotton-spinning machine of the Gaun method (the garabō) was invented and shown the following year at the first National Industrial Exhibition in 1877. The machine was well received and was applied to spinning thicker yarn, in practice driven by water wheels; Mikawa Province, already a leading textile district, became a top producing area, and the machine spread throughout the country. After World War II lifestyles changed and Western machines came back into use, because the garabō was specialised for thick rather than fine yarn; the number of machines in operation peaked in 1960, and a few are still working today. The certified machine, made in the 1880s, is displayed in Osaka. The garabō contributed to the spinning industry, to yarn exports from Japan, and to the country's earnings of foreign currency. A precise replica is demonstrated at the Toyota Commemorative Museum of Industry and Technology in Nagoya. – Osaka Prefecture
Items certified in 2023
Collection
No. 117: Goto Planetarium Type M-1. After a German-made projector was introduced to Japan in 1937, many astronomers tried to build prototype planetariums, opening the instrument's development history in Japan. The first Japanese lens-projection planetarium, Type M-1, was completed in 1959, then mass-produced, with 19 units marketed in Japan. Type M-1 became well recognised worldwide, laid the foundations of the Japanese planetarium industry, whose products went on to take roughly 70% of the world market, and realised up-to-date functional elements such as lens projection and annual-motion projection. The Type M-1 installed at the Tokyo University of Marine Science and Technology in 1965 is still operable and is maintained by students, serving both as study material and as a way of passing the technology on. – Tokyo
No. 118: Odakyu Limited Express Romancecar SE3000. Odakyu Electric Railway introduced its express train, named Romancecar, as rolling stock with state-of-the-art technology in 1957. The SE3000 adopted a monocoque front end shaped through extensive wind-tunnel experiments, cardan-joint bogie drive, and Jacobs bogies. During its development the former railway research institute, the present Railway Technical Research Institute, worked with the Odakyu team to obtain technical data for its own higher-speed train; the valuable data and information gathered later contributed to the 0 Series Shinkansen. In high-speed tests on the Tōkaidō Main Line in 1957 the SE3000 set a world record of 145 km/h for narrow-gauge (3 ft 6 in) railways, and these activities led on to the successful opening of the Tōkaidō Shinkansen. The name Romancecar has been given to all Odakyu express trains. The SE3000 was retired in 1992 and is preserved in the Romancecar Museum. – Kanagawa Prefecture
No. 119: A Pharmaceutical Millstone Driven by a Treadwheel at the former Wachūsan Honpo. The head shop of the pharmacy Wachūsan Honpo stood along the Tōkaidō and produced Chinese herbal medicines, such as the stomach remedy Wuling San Wan, from crude drugs using a machine called the Jinsha Seiyaku-ki (人車製薬機, literally "man-wheel-powered drug-making machine"), installed in 1831 (Tenpō 2 of the Edo period). Two men walking inside a large wooden wheel 4,280 mm in diameter generate the rotation, which is transmitted through four wooden gears that step up the speed and turn a quern-stone that mills the medicinal plants. The rotation ratio of the large wheel to the millstone is 3:10, so three turns of the treadwheel produce ten turns of the quern-stone. The mechanism for converting human power into machine power is thought to have been transmitted from mainland China and illustrates the history of machine technology. The drug-making process could be watched by travellers and served as good advertising. – Shiga Prefecture
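A minimal sketch of the speed step-up described above; only the 3:10 ratio and the 4,280 mm wheel diameter come from the entry, and the walking cadence inside the wheel is an assumed figure for illustration.

```python
import math

# From the entry: treadwheel diameter and 3:10 step-up ratio.
wheel_diameter_m = 4.280
ratio = 10 / 3                 # quern-stone turns per treadwheel turn

# Assumed cadence of the two men inside the wheel (not stated in the source).
treadwheel_rpm = 2.0

quern_rpm = treadwheel_rpm * ratio
rim_speed_mps = math.pi * wheel_diameter_m * treadwheel_rpm / 60  # walking speed at the rim

print(f"quern-stone speed    ~ {quern_rpm:.1f} rpm")     # ~6.7 rpm
print(f"walking speed at rim ~ {rim_speed_mps:.2f} m/s")  # ~0.45 m/s
```

Even at a gentle walking pace the gearing yields a usable milling speed, which is presumably why a train of four wooden gears was worth its complexity.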
No. 120: Historical Machine Tools of The SANKYO Machine Tools Museum. Sankyo Seisakusho opened the museum on its factory premises in 2021 (Reiwa 3). Of the 137 machine tools on display, 134 were made outside Japan, mainly in the United States, with others from Italy, Switzerland, France, the United Kingdom and Germany. They include lathes, boring machines, milling machines, drilling machines, shaping machines, gear-cutting machines and grinding machines, produced and used between the 17th century, before and after the Industrial Revolution, and the 20th century. The tools have been collected and sorted by era and by function, so that their evolution and progress can be studied. Measuring hand tools, cutting tools, various machine elements of the early Shōwa era and mechanism models are also preserved and displayed, together with a Ford Model T produced with machine tools and other early historic motor vehicles. – Shizuoka Prefecture
Items certified in 2024
Collection
No. 121: ARAI Lottery Wheel. This Japanese hand-cranked lottery machine, a cylinder with an attached handle, was developed and patented by Takuya Arai (新井卓也, Arai Takuya) in 1930. Called the ancestor of the lottery machine and nicknamed Gara-Pon after its sound, it spread widely through shōtengai (shopping streets) and elsewhere throughout Japan. The machine incorporates a receiving tray for its celluloid raffle balls and a bell that sounds when a ball comes out, and it is cleverly devised so that exactly one ball, never two, emerges at each draw. The machine fundamentally changed lottery and raffle culture and practice in Japan. – Aomori Prefecture
No. 122: High-Pressure Triple Plunger Pump, Sugino's First Machine. The former Sugino Cleaner Seisakusho, the present Sugino Machine, has manufactured and sold tube cleaners for boiler and heat-exchanger tubes since 1936. After World War II the company sought a high-pressure water pump to drive its tube cleaners, but no such pump was available on the market, so it decided to develop one itself. In 1964 it at last completed the High-Pressure Triple Plunger Pump, Model JCE-2550, with a discharge pressure of 30 MPa and a flow rate of 60 L/min. The pump was delivered to a petroleum-refining plant and came back to Sugino Machine seven years later; it is now displayed at the head office. About 14,000 units of this model have been produced in total. Such super-high-pressure water jets came to be used to cut hard materials such as metal and to remove spalling concrete, and their application later expanded to cutting in the food and health-care fields. – Toyama Prefecture
No. 123: Macadam Roller SAKAI R1. Imported three-point macadam road rollers were in use until the 1920s. Sakai Seisakusho built the first Japanese macadam-type road roller with an internal combustion engine in 1930 and put it on the market; the innovative new model R1 followed in 1968. Earlier three-point rollers, with one front roller and two rear rollers, tended to push or drag the road-surface material; the R1, a three-point roller with two front rollers and one rear roller, solved this problem. Its three rollers, all of the same diameter, are driven hydraulically in synchronisation and apply the same consistent pressure to the road surface. The machine articulates like a rudder, giving a small turning radius and even roller pressure on straight and curved roads without unevenness, with a rolling width of 2.3 metres. The driver's seat has two steering wheels, left and right, and the driver chooses whichever is more effective for the work. The R1's layout became the industry's de facto standard, and in 1974 the somewhat smaller successor R2 was produced to suit Japanese road conditions. – Saitama Prefecture
No. 124: Strain Gage Type K-1. The founder of Kyowa Musen Kenkyujo, the present Kyowa Electronic Instruments, produced and began to sell the first Japanese-made strain gauge, type K-1, covered with red felt, in 1951. The ship-structure section of the Ministry of Transport's technology laboratory had asked him to prototype a strain gauge at a time when imported gauges from the United States and elsewhere were widely used but expensive, and domestic gauges, still at the research stage, were intended for railway use. Having worked at the Army Ministry's aviation laboratory during the war, he recalled his survey of a shot-down B-29 strategic bomber and, after much trial and error, produced a usable gauge. The gauge is 20.5 mm long, with fine resistance wire 25 μm in diameter and a resistance of 120 Ω, protected by red felt; it was used for the first stress tests on ships, whose results supported the change from riveted to welded joints where strength was required. An imported American strain gauge cost more than 1,000 yen, while the new one cost only 86 yen, so it came to be used in a wide range of measuring instruments, contributed especially to mechanical engineering, and became an indispensable component. – Tokyo
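The entry quotes only the gauge's nominal 120 Ω resistance. The relation between resistance change and strain can be sketched as follows; the gauge factor of 2.0 is a typical assumed value for metal-wire gauges, not a figure from the source.

```python
# Minimal sketch of how a bonded resistance strain gauge is read out.
# R0 = 120 ohm comes from the entry; the gauge factor is an assumption.
R0 = 120.0           # unstrained resistance, ohms
GAUGE_FACTOR = 2.0   # typical for metal-wire gauges (assumed)

def strain_from_resistance(r_measured: float) -> float:
    """Return strain from the measured resistance, using dR/R0 = GF * strain."""
    return (r_measured - R0) / (R0 * GAUGE_FACTOR)

# Example: a 0.12 ohm increase corresponds to 500 microstrain.
print(strain_from_resistance(120.12))  # -> 0.0005
```

In practice such small resistance changes are read with a bridge circuit rather than measured directly, but the proportionality above is the principle a bonded wire gauge like the K-1 relies on.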
No. 125: ISHIKAWA - Marinoni Type Rotary Press with Folding Mechanism. In the Meiji era, newspapers and other printing used Marinoni-type rotary presses invented by Hippolyte Auguste Marinoni. Referring to the Marinoni design, Kakuzo Ishikawa of the former Mita Seisakusho produced a small machine suited to the Japanese market, the Ishikawa-Marinoni Type Rotary Press (first model), in 1906. As newspaper print runs grew, folding the printed sheets by hand came to demand much time and labour by the Taishō era, so a new machine with a folding mechanism, the Ishikawa-Marinoni Type Rotary Press with Folding Mechanism (second model), was developed in 1922. The new rotary press delivered excellent working efficiency, printing 24,000 copies per hour of a four-page, double-sided newspaper. – Kanagawa Prefecture
No. 126: Pioneer of Export to USA, NC Lathe MAZAK Turning Center 2500R. In the 1960s a number of machine-tool manufacturers were working to add numerical control to general-purpose machine tools. The former Yamazaki Tekkosho (the present Yamazaki Mazak) succeeded in giving a general-purpose lathe a numerical-control function, named the MTC series (Mazak Turning Center), manufactured and sold it on the Japanese market, and displayed it at JIMTOF in 1968; the next year, 1969, the MTC series was exhibited at a trade show in the United States, and 578 units in total were produced by 1976. The series was upgraded, in combination with the FANUC 240 numerical controller, to the Mazak Turning Center 2500R and exported to the United States in 1970 as the first Japanese-made numerical-control system. Both the X and Z axes are driven by electro-hydraulic pulse motors with a minimum increment of 0.01 mm and contouring capability, the control program being read in from EIA/ISO-coded punched paper tape. The Mazak Turning Center 2500R returned from the United States to Japan in 2008, is displayed in the Yamazaki Mazak museum, and is also used as instructional material. – Gifu Prefecture
See also
List of historic mechanical engineering landmarks
List of historic civil engineering landmarks
References
External links
The Japan Society of Mechanical Engineers, JSME
The Mechanical Engineering Heritage
2007 establishments in Japan
Archives in Japan
History of science and technology in Japan
Cultural history of Japan
Science and technology in Japan
History of mechanical engineering
Japan history-related lists
Japanese cultural heritage protection system | Mechanical Engineering Heritage (Japan) | ["Engineering"] | 10,234 | ["History of mechanical engineering", "Mechanical engineering"] |
13,400,209 | https://en.wikipedia.org/wiki/Sullivan%20conjecture | In mathematics, Sullivan conjecture or Sullivan's conjecture on maps from classifying spaces can refer to any of several results and conjectures prompted by homotopy theory work of Dennis Sullivan. A basic theme and motivation concerns the fixed point set in group actions of a finite group G. The most elementary formulation, however, is in terms of the classifying space BG of such a group. Roughly speaking, it is difficult to map such a space continuously into a finite CW complex X in a non-trivial manner. Such a version of the Sullivan conjecture was first proved by Haynes Miller. Specifically, in 1984, Miller proved that the function space, carrying the compact-open topology, of base-point-preserving mappings from BG to X is weakly contractible.
This is equivalent to the statement that the natural map X → F(BG, X) from X to the function space of maps BG → X, not necessarily preserving the base point, given by sending a point x of X to the constant map whose image is x, is a weak equivalence. The mapping space F(BG, X) is an example of a homotopy fixed point set. Specifically, F(BG, X) is the homotopy fixed point set of the group G acting by the trivial action on X. In general, for a group G acting on a space X, the homotopy fixed points are the fixed points F(EG, X)^G of the mapping space F(EG, X) of maps from the universal cover EG of BG to X, under the G-action on F(EG, X) in which g in G acts on a map f by sending it to the map taking e to g·f(g⁻¹e). The G-equivariant map from EG to a single point induces a natural map η: X^G → F(EG, X)^G from the fixed points to the homotopy fixed points of G acting on X. Miller's theorem is that η is a weak equivalence for trivial G-actions on finite-dimensional CW complexes. An important ingredient and motivation for his proof is a result of Gunnar Carlsson on the homology of BG as an unstable module over the Steenrod algebra.
Miller's theorem generalizes to a version of Sullivan's conjecture in which the action on X is allowed to be non-trivial. Sullivan had conjectured that η becomes a weak equivalence after a certain p-completion procedure due to A. Bousfield and D. Kan is applied. This conjecture was incorrect as stated, but a correct version was given by Miller, and proven independently by Dwyer-Miller-Neisendorfer, Carlsson, and Jean Lannes, showing that the natural map from the p-completed fixed points (X^G)^∧_p to the homotopy fixed points of the p-completion X^∧_p is a weak equivalence when the order of G is a power of a prime p, where (−)^∧_p denotes the Bousfield-Kan p-completion. Miller's proof involves an unstable Adams spectral sequence, Carlsson's proof uses his affirmative solution of the Segal conjecture and also provides information about the homotopy fixed points before completion, and Lannes's proof involves his T-functor.
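In symbols, one common formulation of the completed statement runs as follows; this is a hedged restatement in standard notation, not a quotation from any particular source.

```latex
% For a finite p-group G acting on a finite-dimensional G-CW complex X,
% the comparison map from completed fixed points to homotopy fixed points
% is a weak equivalence, where (-)^\wedge_p is Bousfield-Kan p-completion and
% X^{hG} denotes the homotopy fixed points Map^G(EG, X).
\eta \colon \bigl(X^{G}\bigr)^{\wedge}_{p} \xrightarrow{\;\simeq\;} \bigl(X^{\wedge}_{p}\bigr)^{hG},
\qquad X^{hG} := \operatorname{Map}^{G}(EG,\, X).
```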
References
External links
Book extract
J. Lurie's course notes
Conjectures that have been proved
Fixed points (mathematics)
Homotopy theory | Sullivan conjecture | ["Mathematics"] | 581 | ["Mathematical analysis", "Fixed points (mathematics)", "Topology", "Conjectures that have been proved", "Mathematical problems", "Mathematical theorems", "Dynamical systems"] |
13,401,263 | https://en.wikipedia.org/wiki/Copper%28I%29%20fluoride | Copper(I) fluoride or cuprous fluoride is an inorganic compound with the chemical formula CuF. Its existence is uncertain. It was reported in 1933 to have a sphalerite-type crystal structure. Modern textbooks state that CuF is not known, since fluorine is so electronegative that it will always oxidise copper to its +2 oxidation state. Complexes of CuF such as [(Ph3P)3CuF] are, however, known and well characterised.
Synthesis and reactivity
Unlike other copper(I) halides like copper(I) chloride, copper(I) fluoride tends to disproportionate into copper(II) fluoride and copper in a one-to-one ratio at ambient conditions, unless it is stabilised through complexation as in the example of [Cu(N2)F].
2CuF → Cu + CuF2
See also
Copper(II) fluoride, the other simple fluoride of copper
References
Fluorides
Metal halides
Copper(I) compounds
Zincblende crystal structure
Hypothetical chemical compounds | Copper(I) fluoride | ["Chemistry"] | 233 | ["Inorganic compounds", "Hypotheses in chemistry", "Salts", "Theoretical chemistry", "Metal halides", "Hypothetical chemical compounds", "Fluorides"] |
13,401,632 | https://en.wikipedia.org/wiki/Selenographia%2C%20sive%20Lunae%20descriptio | Selenographia, sive Lunae descriptio (Selenography, or A Description of The Moon) was printed in 1647 and is a milestone work by Johannes Hevelius. It includes the first detailed map of the Moon, created from Hevelius's personal observations. In his treatise, Hevelius reflected on the difference between his own work and that of Galileo Galilei. Hevelius remarked that the quality of Galileo's representations of the Moon in Sidereus nuncius (1610) left something to be desired. Selenography was dedicated to King Ladislaus IV of Poland and along with Riccioli/Grimaldi's Almagestum Novum became the standard work on the Moon for over a century. There are many copies that have survived, including those in Bibliothèque nationale de France, in the library of Polish Academy of Sciences, in the Stillman Drake Collection at the Thomas Fisher Rare Books Library at the University of Toronto, and in the Gunnerus Library at the Norwegian University of Science and Technology in Trondheim.
Notes
External links
Selenographia, sive Lunae descriptio
1647 books
Lunar science
Astronomy books | Selenographia, sive Lunae descriptio | ["Astronomy"] | 243 | ["Astronomy books", "Astronomy book stubs", "Works about astronomy", "Astronomy stubs"] |
13,401,676 | https://en.wikipedia.org/wiki/Domine%20Database | DOMINE is a database of known and predicted protein domain interactions (or domain-domain interactions). It contains interactions observed in PDB crystal structures, and those predicted by several computational approaches. DOMINE uses Pfam HMM profiles for protein domain definitions. The DOMINE database contains 26,219 interactions among 5,410 domains, which include 6,634 known interactions inferred from PDB structure data.
References
See also
DOMINE database
Protein databases
Protein structure
Protein domains | Domine Database | ["Chemistry", "Biology"] | 98 | ["Protein structure", "Protein domains", "Structural biology", "Protein classification"] |
13,402,061 | https://en.wikipedia.org/wiki/Oleochemistry | Oleochemistry is the study of vegetable oils and animal oils and fats, and of the oleochemicals derived from these fats and oils. The resulting products are called oleochemicals (from Latin: oleum, "olive oil"). The major product of this industry is soap, approximately 8.9×10⁶ tons of which were produced in 1990. Other major oleochemicals include fatty acids, fatty acid methyl esters, fatty alcohols and fatty amines. Glycerol is a side product of all of these processes. Intermediate chemical substances produced from these basic oleochemical substances include alcohol ethoxylates, alcohol sulfates, alcohol ether sulfates, quaternary ammonium salts, monoacylglycerols (MAG), diacylglycerols (DAG), structured triacylglycerols (TAG), sugar esters, and other oleochemical products.
As the price of crude oil rose in the late 1970s, manufacturers switched from petrochemicals to oleochemicals because plant-based lauric oils processed from palm kernel oil were cheaper. Since then, palm kernel oil is predominantly used in the production of laundry detergent and personal care items like toothpaste, soap bars, shower cream and shampoo.
Processes
Important processes in oleochemical manufacturing include hydrolysis and transesterification, among others.
Hydrolysis
The splitting (or hydrolysis) of triglycerides produces fatty acids and glycerol, following this equation:
RCO2CH2–CHO2CR–CH2O2CR + 3 H2O → 3 RCOOH + HOCH2–CHOH–CH2OH
To this end, hydrolysis is conducted in water at 250 °C. The cleavage of triglycerides with base proceeds more quickly than hydrolysis, that process being saponification. Saponification, however, produces soap, whereas the desired products of hydrolysis are the fatty acids.
Transesterification
In a process called transesterification, fats react with alcohols (R'OH) instead of with water as in hydrolysis. Glycerol is produced together with the fatty acid esters. Most typically, the reaction entails the use of methanol (MeOH) to give fatty acid methyl esters (FAMEs):
RCO2CH2–CHO2CR–CH2O2CR + 3 MeOH → 3 RCO2Me + HOCH2–CHOH–CH2OH
FAMEs are less viscous than the precursor fats and can be purified to give the individual fatty acid esters, e.g. methyl oleate vs methyl palmitate.
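As a rough illustration of the stoichiometry of the equation above: the sketch below assumes triolein as a representative triglyceride and uses rounded molar masses, none of which are specified in the text.

```python
# Hypothetical stoichiometry sketch for transesterification of a triglyceride with methanol.
# Triolein is used as a stand-in fat; molar masses are rounded (g/mol).
M_TRIOLEIN = 885.4       # C57H104O6
M_METHANOL = 32.04
M_METHYL_OLEATE = 296.5  # the corresponding FAME
M_GLYCEROL = 92.09

def transesterify(mass_fat_kg: float) -> dict:
    """Masses of methanol consumed and products formed, per the 1:3 equation above."""
    mol_fat = mass_fat_kg * 1000 / M_TRIOLEIN
    return {
        "methanol_kg": 3 * mol_fat * M_METHANOL / 1000,
        "fame_kg": 3 * mol_fat * M_METHYL_OLEATE / 1000,
        "glycerol_kg": mol_fat * M_GLYCEROL / 1000,
    }

print(transesterify(1.0))
# per 1 kg of fat: ~0.11 kg methanol in, ~1.00 kg FAME and ~0.10 kg glycerol out
```

The near 1:1 mass conversion of fat to ester, with roughly a tenth of the mass emerging as glycerol, is why glycerol is described above as a side product of all of these processes.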
Hydrogenation
Fatty acids and fatty esters are susceptible to hydrogenation, which converts unsaturated fatty acids into saturated ones. The acids or esters can also be reduced to fatty alcohols. For some applications, fatty acids are converted to fatty nitriles; hydrogenation of these nitriles gives fatty amines, which have a variety of applications.
Gelation
Liquid oil can also be immobilized in a 3D-network provided by various molecules called oleogelators.
Applications
The largest application for oleochemicals, accounting for about 30% of the market for fatty acids and 55% of that for fatty alcohols, is the making of soaps and detergents. Lauric acid is used to produce sodium lauryl sulfate and related compounds, which are used to make soaps and other personal care products.
Other applications of oleochemicals include the production of lubricants, solvents, biodiesel and bioplastics. Because methyl esters are used in biodiesel production, they have been the fastest-growing sub-sector of oleochemical production in recent years.
Oleochemical industry development
Europe
Through the 1996 creation of Novance and the 2008 acquisition of Oleon, Avril Group has dominated the European market of oleochemistry.
Southeast Asia
Rapid growth in the production of palm oil and palm kernel oil in Southeast Asia in the 1980s spurred the oleochemical industry in Malaysia, Indonesia and Thailand, and many oleochemical plants were built. Though a nascent and small industry compared with the big detergent makers of the US and Europe, oleochemical companies in Southeast Asia had a competitive edge in cheap ingredients. The US fatty-chemical industry found it difficult to maintain acceptable profit levels consistently; competition was intense, with market share divided among many companies, and neither imports nor exports played a significant role there. By the late 1990s, giants like Henkel, Unilever and Petrofina had sold their oleochemical factories to focus on higher-profit activities such as the retail of consumer goods. Since the European outbreak of 'mad cow disease' (bovine spongiform encephalopathy) in 2000, tallow has been replaced for many uses by vegetable oleic fatty acids, such as those from palm kernel and coconut oils.
References
External links
Oils
Lipids | Oleochemistry | ["Chemistry"] | 1,016 | ["Biomolecules by chemical classification", "Carbohydrates", "Oils", "Organic compounds", "Lipids"] |
13,402,281 | https://en.wikipedia.org/wiki/Medicon%20Valley | Medicon Valley is a leading international life-sciences cluster in Europe, spanning the Øresund Region of eastern Denmark and southern Sweden. It is one of Europe's strongest life science clusters, with many life science companies and research institutions located within a relatively small geographical area. The name has officially been in use since 1997.
Major life science sectors of the Medicon Valley cluster includes pharmacology, biotechnology, health tech and medical technology. It is specifically known for its research strengths in the areas of neurological disorders, inflammatory diseases, cancer and diabetes.
Background and activities
The population of Medicon Valley reaches close to 4 million inhabitants. In 2008, 60% of Scandinavian pharmaceutical companies were located in the region. The area includes 17 universities, 32 hospitals, and more than 400 life science companies. 20 are large pharmaceutical or medical technology firms and 170 are dedicated biotechnology firms. Between 1998 and 2008, 100 new biotechnology and medical technology companies were created here. The biotechnology industry alone employs around 41,000 people in the region, 7,000 of whom are academic researchers.
International companies with major research centres in the region include Novo Nordisk, Baxter, Lundbeck, LEO Pharma, HemoCue and Ferring Pharmaceuticals. There are more than 7 science parks in the region, all with a significant focus on life science, including the Medicon Village in Lund, established in 2010. Companies within Medicon Valley account for more than 20% of the total GDP of Denmark and Sweden combined.
Medicon Valley is promoted by Invest in Skåne and Copenhagen Capacity.
Many of the region's universities have a strong heritage in biological and medical research and have produced several Nobel Prize winners. The almost century-long presence of a number of research-intensive and fully integrated pharmaceutical companies, such as Novo Nordisk, H. Lundbeck and LEO Pharma, has also contributed significantly to the medical research and business development of the region by strengthening abilities within applied research, attracting suppliers and producing spin-offs.
In 2011, the organisations MVA and Invest in Skåne launched the concept of Beacons. Beacons are projects for creating world-leading cross-disciplinary research environments, focused on areas where there is considerable potential for synergies specifically within Medicon Valley. After a long and thorough selection process, the four Beacons of systems biology, structural biology, immune regulation and drug delivery were approved.
Medicon Valley issues the quarterly life science magazine "Medicon Valley".
Science parks
Science parks in Medicon Valley includes:
Copenhagen Bio Science Park (COBIS)
Ideon Science Park (largest of its kind in Scandinavia)
Symbion
Krinova Science Park
Medicon Village
Medeon Science Park
Scion DTU
References
See also
Zealand Pharma
Sources and further reading
A case study.
External links
Medicon Valley Swedish-Danish life-sciences cluster
Medicon Valley Alliance
Invest in Skåne Swedish business organisation
Copenhagen Capacity Danish business organisation
Biotechnology
High-technology business districts
Science and technology in Denmark
Science and technology in Sweden
Venture capital firms of Denmark
Economy of the Øresund Region | Medicon Valley | ["Biology"] | 621 | ["nan", "Biotechnology"] |
13,402,672 | https://en.wikipedia.org/wiki/Volari%20Duo | On September 15, 2003, XGI Technology Inc introduced the Volari Duo V8 Ultra and the Volari Duo V5 Ultra. These dual-GPU graphics cards, while impressive looking, failed to compete with the single-GPU cards put out by NVIDIA and ATI and disappeared from the market.
References
XGI Volari Duo V8 Ultra 256MB Video Card Review
Club3D Volari Duo V8 Ultra Review
A New Graphics Kid on the Block: XGI Volari
External links
XGI Technology Inc
XGI Volari Duo Graphic card
Graphics cards | Volari Duo | ["Technology"] | 109 | ["Computing stubs", "Computer hardware stubs"] |
13,403,260 | https://en.wikipedia.org/wiki/Generalised%20metric | In mathematics, the concept of a generalised metric is a generalisation of that of a metric, in which the distance is not a real number but taken from an arbitrary ordered field.
In general, when we define a metric space, the distance function is taken to be a real-valued function. The real numbers form an ordered field which is Archimedean and order complete. These metric spaces have some nice properties: for example, in a metric space compactness, sequential compactness and countable compactness are all equivalent. These properties may not, however, hold so easily if the distance function takes values in an arbitrary ordered field instead of in the field of real numbers.
Preliminary definition
Let F be an arbitrary ordered field and M a nonempty set; a function d : M × M → F is called a metric on M if, for all x, y, z in M, the following conditions hold:
d(x, y) ≥ 0, and d(x, y) = 0 if and only if x = y;
d(x, y) = d(y, x) (symmetry);
d(x, z) ≤ d(x, y) + d(y, z) (triangle inequality).
It is not difficult to verify that the open balls B(x, ε) = {y : d(x, y) < ε}, for ε > 0 in F, form a basis for a suitable topology, the latter called the metric topology on M, with the metric d taking values in F.
In view of the fact that F in its order topology is monotonically normal, we would expect M to be at least regular.
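A minimal sketch of the definition using the rationals as the ordered field (the rationals are Archimedean, so this only illustrates the axioms, not the non-Archimedean phenomena discussed below); the point set and the distance values are made-up examples.

```python
from fractions import Fraction
from itertools import product

# Made-up finite point set with distances valued in the ordered field Q (Fraction).
points = ["a", "b", "c"]
given = {
    ("a", "b"): Fraction(1, 2),
    ("a", "c"): Fraction(2, 3),
    ("b", "c"): Fraction(1, 3),
}

def d(x, y):
    """Symmetric extension of the table above, with d(x, x) = 0."""
    if x == y:
        return Fraction(0)
    return given.get((x, y)) or given.get((y, x))

# Check the metric axioms over all triples of points.
for x, y, z in product(points, repeat=3):
    assert (d(x, y) == 0) == (x == y)      # d(x, y) = 0 iff x = y
    assert d(x, y) == d(y, x)              # symmetry
    assert d(x, z) <= d(x, y) + d(y, z)    # triangle inequality
print("all three axioms hold on this sample")
```

For a genuinely non-Archimedean example one would replace Fraction by elements of, say, a field of rational functions ordered by their behaviour at infinity; the axioms to check are exactly the same.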
Further properties
However, under the axiom of choice, every general metric is monotonically normal, for, given x ∈ U where U is open, there is an open ball B(x, ε) with B(x, ε) ⊆ U. Take μ(x, U) = B(x, ε/2) and verify the conditions for monotone normality.
What is remarkable is that, even without choice, general metrics are monotonically normal.
Proof.
Case I: F is an Archimedean field.
Now, if x ∈ A with A open, we may take μ(x, A) = B(x, 1/(2n)), where n is the least positive integer such that B(x, 1/n) ⊆ A, and the trick is done without choice.
Case II: F is a non-Archimedean field.
For given x ∈ A, where A is open, consider the set A(x) of all ε > 0 in F such that B(x, nε) ⊆ A for every natural number n.
The set A(x) is non-empty. For, as A is open, there is an open ball B(x, δ) within A. Now, as F is non-Archimedean, the natural numbers are bounded above in F, hence there is some c with n < c for all natural n. Putting ε = δ/c, we see that ε is in A(x).
Now define μ(x, A) as the union of the balls B(x, ε) over ε in A(x). We shall show that, with respect to this mu operator, the space is monotonically normal. Note that μ(x, A) is an open set containing x and contained in A.
If y is not in A (the open set containing x) and x is not in B (the open set containing y), then we show that μ(x, A) ∩ μ(y, B) is empty. If not, say z is in the intersection. Then d(x, y) ≤ d(x, z) + d(z, y) < ε + δ ≤ 2·max(ε, δ) for some ε in A(x) and δ in B(y).
From the above we get that y ∈ B(x, 2ε) or x ∈ B(y, 2δ), which is impossible since this would imply that either y belongs to A or x belongs to B.
This completes the proof.
See also
References
External links
Metric geometry
Norms (mathematics)
Topology | Generalised metric | ["Physics", "Mathematics"] | 491 | ["Mathematical analysis", "Topology", "Space", "Norms (mathematics)", "Geometry", "Spacetime"] |
13,403,445 | https://en.wikipedia.org/wiki/Rate%20of%20reinforcement | In behaviorism, the rate of reinforcement is the number of reinforcements per unit of time, usually per minute. The symbol for this rate is usually Rf. Its first major exponent was B.F. Skinner (1938). It is used in the matching law.
Rf = # of reinforcements/unit of time = SR+/t
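A small sketch of the calculation, together with the matching-law prediction it feeds into; the session figures below are invented for illustration.

```python
# Hypothetical session data: reinforcements earned on two concurrent schedules
# during a 30-minute session (figures invented for illustration).
session_minutes = 30
reinforcements = {"schedule_A": 45, "schedule_B": 15}

# Rate of reinforcement, Rf = number of reinforcements / unit of time (per minute).
rf = {k: n / session_minutes for k, n in reinforcements.items()}
print(rf)  # {'schedule_A': 1.5, 'schedule_B': 0.5}

# Strict matching law (Herrnstein, 1961): relative response rate matches
# relative reinforcement rate, B_A / (B_A + B_B) = Rf_A / (Rf_A + Rf_B).
share_a = rf["schedule_A"] / (rf["schedule_A"] + rf["schedule_B"])
print(f"predicted share of responses on schedule A: {share_a:.0%}")  # 75%
```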
See also
Rate of response
References
Herrnstein, R.J. (1961). Relative and absolute strength of responses as a function of frequency of reinforcement. Journal of the Experimental Analysis of Behavior, 4, 267–272.
Herrnstein, R.J. (1970). On the law of effect. Journal of the Experimental Analysis of Behavior, 13, 243–266.
Skinner, B.F. (1938). The behavior of organisms: An experimental analysis.
Behaviorism
Quantitative analysis of behavior
Reinforcement | Rate of reinforcement | ["Physics", "Biology"] | 169 | ["Temporal quantities", "Behavior", "Physical quantities", "Temporal rates", "Quantitative analysis of behavior", "Behaviorism"] |