id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
182,301 | https://en.wikipedia.org/wiki/Screaming%20jelly%20babies | "Screaming Jelly Babies" (British English), also known as "Growling Gummy Bears" (American and Canadian English), is a classroom chemistry demonstration in which a piece of candy bursts loudly into flame when dropped into molten potassium chlorate. The experiment is practised in schools around the world and is often used at open evenings to show the more engaging and entertaining aspects of science in secondary education settings.
The experiment demonstrates the amount of chemical energy stored in a piece of candy. Jelly babies or gummy bears are often used for theatrics. Potassium chlorate, a strong oxidising agent, rapidly oxidises the sugar in the candy, causing it to burst into flames. The reaction produces a "screaming" sound as rapidly expanding gases are emitted from the test tube. The aroma of caramel is given off. Other carbohydrate- or hydrocarbon-containing substances can be dropped into test tubes of molten chlorate to produce similar results.
Net reaction
4 KClO3 (s) + C12H22O11 (s) + 6 O2 (g) → 4 KCl + 12 CO2 (g) + 11 H2O (g)
Mechanism
The solid potassium chlorate is melted into a liquid.
KClO3 (s) + energy → K+ClO3− (l)
The liquid potassium chlorate decomposes into potassium perchlorate and potassium chloride.
4 KClO3 → KCl + 3 KClO4
The potassium perchlorate decomposes into potassium chloride and oxygen.
KClO4 → KCl + 2 O2
The sugar in the candy reacts with oxygen, forming water and carbon dioxide. The reaction is exothermic and produces heat, smoke, and fire.
C12H22O11 (s) + 12 O2 (g) → 12 CO2 (g) + 11 H2O (g) + energy.
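The stoichiometry above can be verified mechanically. The following Python sketch (our illustration, not part of the original demonstration write-up) tallies the atoms on each side of the net reaction and asserts that they balance:

from collections import Counter

def total(side):
    # Sum element counts over (coefficient, composition) pairs.
    out = Counter()
    for coeff, comp in side:
        for element, n in comp.items():
            out[element] += coeff * n
    return out

KClO3 = {"K": 1, "Cl": 1, "O": 3}
sucrose = {"C": 12, "H": 22, "O": 11}   # C12H22O11
O2 = {"O": 2}
KCl = {"K": 1, "Cl": 1}
CO2 = {"C": 1, "O": 2}
H2O = {"H": 2, "O": 1}

# 4 KClO3 + C12H22O11 + 6 O2 -> 4 KCl + 12 CO2 + 11 H2O
lhs = total([(4, KClO3), (1, sucrose), (6, O2)])
rhs = total([(4, KCl), (12, CO2), (11, H2O)])
assert lhs == rhs  # 12 C, 22 H, 35 O, 4 K, 4 Cl on each side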
Safety measures
Care should be taken in performing this experiment, which should only be attempted by a professional. Potassium chlorate is a strong oxidizer and can cause fire or explosions. It is toxic by inhalation or ingestion and is hazardous to aquatic environments. Reagent grade potassium chlorate should be used. Upon completion of the demonstration, all chemicals should be disposed of in designated chemical waste containers to prevent harm to people or the environment.
All participants in the experiment should wear personal protective equipment, including eye protection, and should stand a safe distance away from the demonstration. A face-shield and heat resistant gloves should be worn by the person adding the jelly baby to the molten potassium chlorate.
Variations
Deviation from the experiment is not recommended, and has been linked with accidents. Candy with low moisture content or high surface area may cause explosions.
References
Further reading
External links
Jelly Babies - From The University of Nottingham's Periodic Table of Videos
Chemical reactions
Chemistry classroom experiments
Articles containing video clips
Gummi candies | Screaming jelly babies | [
"Chemistry"
] | 596 | [
"Chemistry classroom experiments",
"nan"
] |
182,303 | https://en.wikipedia.org/wiki/Food%20energy | Food energy is chemical energy that animals (including humans) derive from their food to sustain their metabolism, including their muscular activity.
Most animals derive most of their energy from aerobic respiration, namely combining the carbohydrates, fats, and proteins with oxygen from air or dissolved in water. Other smaller components of the diet, such as organic acids, polyols, and ethanol (drinking alcohol) may contribute to the energy input. Some diet components that provide little or no food energy, such as water, minerals, vitamins, cholesterol, and fiber, may still be necessary to health and survival for other reasons. Some organisms have instead anaerobic respiration, which extracts energy from food by reactions that do not require oxygen.
The energy content of a given mass of food is usually expressed in the metric (SI) unit of energy, the joule (J), and its multiple the kilojoule (kJ); or in the traditional unit of heat energy, the calorie (cal). In nutritional contexts, the latter is often (especially in the US) the "large" variant of the unit, also written "Calorie" (with symbol Cal, both with capital "C") or "kilocalorie" (kcal), and equivalent to 4184 J or 4.184 kJ. Thus, for example, fats and ethanol have the greatest amount of food energy per unit mass, about 37 and 29 kJ/g (9 and 7 kcal/g), respectively. Proteins and most carbohydrates have about 17 kJ/g (4 kcal/g), though there are differences between different kinds. For example, the values for glucose, sucrose, and starch are about 15.6, 16.5, and 17.5 kJ/g, respectively. The differing energy density of foods (fat, alcohols, carbohydrates and proteins) lies mainly in their varying proportions of carbon, hydrogen, and oxygen atoms. Carbohydrates that are not easily absorbed, such as fibre, or lactose in lactose-intolerant individuals, contribute less food energy. Polyols (including sugar alcohols) and organic acids contribute about 10 kJ/g (2.4 kcal/g) and 13 kJ/g (3 kcal/g), respectively.
The energy contents of a complex dish or meal can be approximated by adding the energy contents of its components.
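As a minimal sketch of this additivity rule, the following Python function (ours; the component names and the sample dish are hypothetical) multiplies the mass of each component by the per-gram conversion factors quoted above:

FACTORS_KJ_PER_G = {"fat": 37, "ethanol": 29, "protein": 17,
                    "carbohydrate": 17, "polyols": 10, "organic_acids": 13}

def meal_energy_kj(grams_by_component):
    # Approximate a dish's energy as the sum over its components.
    return sum(FACTORS_KJ_PER_G[c] * g for c, g in grams_by_component.items())

energy = meal_energy_kj({"fat": 30, "carbohydrate": 60, "protein": 20})
print(energy, "kJ =", round(energy / 4.184), "kcal")  # 2470 kJ ≈ 590 kcal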
History and methods of measurement
Direct calorimetry of combustion
The first determinations of the energy content of food were made by burning a dried sample in a bomb calorimeter and measuring the temperature change in the water surrounding the apparatus, a method known as direct calorimetry.
The Atwater system
However, the direct calorimetric method generally overestimates the actual energy that the body can obtain from the food, because it also counts the energy contents of dietary fiber and other indigestible components, and does not allow for partial absorption and/or incomplete metabolism of certain substances. For this reason, today the energy content of food is instead obtained indirectly, by using chemical analysis to determine the amount of each digestible dietary component (such as protein, carbohydrates, and fats), and adding the respective food energy contents, previously obtained by measurement of metabolic heat released by the body. In particular, the fibre content is excluded. This method is known as the Modified Atwater system, after Wilbur Atwater who pioneered these measurements in the late 19th century.
The system was later improved by Annabel Merrill and Bernice Watt of the USDA, who derived a system whereby specific calorie conversion factors for different foods were proposed.
Dietary sources of energy
The typical human diet consists chiefly of carbohydrates, fats, proteins, water, ethanol, and indigestible components such as bones, seeds, and fibre (mostly cellulose). Carbohydrates, fats, and proteins typically comprise ninety percent of the dry weight of food. Ruminants can extract food energy from the respiration of cellulose because of bacteria in their rumens that decompose it into digestible carbohydrates.
Other minor components of the human diet that contribute to its energy content are organic acids such as citric and tartaric, and polyols such as glycerol, xylitol, inositol, and sorbitol.
Some nutrients have regulatory roles affected by cell signaling, in addition to providing energy for the body. For example, leucine plays an important role in the regulation of protein metabolism and suppresses an individual's appetite. Small amounts of essential fatty acids, constituents of some fats that cannot be synthesized by the human body, are used (and necessary) for other biochemical processes.
The approximate food energy contents of various human diet components, to be used in package labeling according to the EU regulations and UK regulations, are:
Fat: 37 kJ/g (9 kcal/g)
Ethanol (alcohol): 29 kJ/g (7 kcal/g)
Salatrims: 25 kJ/g (6 kcal/g)
Protein: 17 kJ/g (4 kcal/g)
Carbohydrate (except polyols): 17 kJ/g (4 kcal/g)
Organic acids: 13 kJ/g (3 kcal/g)
Polyols (1): 10 kJ/g (2.4 kcal/g)
Fibre (2): 8 kJ/g (2 kcal/g)
(1) Some polyols, like erythritol, are not digested and should be excluded from the count.
(2) This entry exists in the EU regulations of 2008, but not in the UK regulations, according to which fibre shall not be counted.
More detailed tables for specific foods have been published by many organizations; for example, the United Nations Food and Agriculture Organization has published a similar table.
Other components of the human diet are either noncaloric, or are usually consumed in such small amounts that they can be neglected.
Energy usage in the human body
The food energy actually obtained by respiration is used by the human body for a wide range of purposes, including basal metabolism of various organs and tissues, maintaining the internal body temperature, and exerting muscular force to maintain posture and produce motion. About 20% is used for brain metabolism.
The conversion efficiency of energy from respiration into muscular (physical) power depends on the type of food and on the type of physical energy usage (e.g., which muscles are used, whether the muscle is used aerobically or anaerobically). In general, the efficiency of muscles is rather low: only 18 to 26% of the energy available from respiration is converted into mechanical energy. This low efficiency is the result of about 40% efficiency of generating ATP from the respiration of food, losses in converting energy from ATP into mechanical work inside the muscle, and mechanical losses inside the body. The latter two losses are dependent on the type of exercise and the type of muscle fibers being used (fast-twitch or slow-twitch). For an overall efficiency of 20%, one watt of mechanical power is equivalent to 4.3 kcal (18 kJ) per hour. For example, a manufacturer of rowing equipment shows calories released from "burning" food as four times the actual mechanical work, plus 300 kcal (1,300 kJ) per hour, which amounts to about 20% efficiency at 250 watts of mechanical output. It can take up to 20 hours of little physical output (e.g., walking) to "burn off" more than a body would otherwise consume. For reference, each kilogram of body fat is roughly equivalent to 32,300 kilojoules (7,700 kilocalories) of food energy.
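The 20% figure implies a simple rule of thumb, sketched below in Python (an illustration with assumed round numbers, not a physiological model): divide mechanical work by the efficiency to get food energy.

EFFICIENCY = 0.20  # assumed fraction of food energy converted to mechanical work

def food_kcal_per_hour(mech_watts, efficiency=EFFICIENCY):
    # Food energy (kcal/h) needed to sustain a given mechanical output.
    joules_per_hour = mech_watts * 3600 / efficiency
    return joules_per_hour / 4184.0  # J per kcal

print(round(food_kcal_per_hour(1), 1))  # ≈ 4.3 kcal/h per watt, as above
print(round(food_kcal_per_hour(250)))   # ≈ 1076 kcal/h at 250 W of rowing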
Recommended daily intake
Many countries and health organizations have published recommendations for healthy levels of daily intake of food energy. For example, the United States government estimates the daily needs of women and men, respectively, between ages 26 and 45, whose total physical activity is equivalent to some walking each day in addition to the activities of sedentary living. These estimates are based on a "reference woman" and a "reference man" of specified height and weight. Because caloric requirements vary by height, activity, age, pregnancy status, and other factors, the USDA created the DRI Calculator for Healthcare Professionals in order to determine individual caloric needs.
According to the Food and Agriculture Organization of the United Nations, the average minimum energy requirement per person per day is about 7,500 kJ (1,800 kcal). Although the American food supply has changed over time with population growth and the spread of processed foods, Americans today have available roughly the same level of calories as earlier generations.
Older people and those with sedentary lifestyles require less energy; children and physically active people require more. Recognizing these factors, Australia's National Health and Medical Research Council recommends different daily energy intakes for each age and gender group. Notwithstanding, nutrition labels on Australian food products typically cite an average daily energy intake of 8,700 kJ (2,100 kcal).
The minimum food energy intake is also higher in cold environments. Increased mental activity has been linked with moderately increased brain energy consumption.
Nutrition labels
Many governments require food manufacturers to label the energy content of their products, to help consumers control their energy intake. To facilitate evaluation by consumers, food energy values (and other nutritional properties) in package labels or tables are often quoted for convenient amounts of the food, rather than per gram or kilogram; such as in "calories per serving" or "kcal per 100 g", or "kJ per package". The units vary depending on country:
See also
Atwater system
Basal metabolic rate
Calorie
Chemical energy
Food chain
Food composition
Heat of combustion
Nutrition facts label
Satiety value
Table of food nutrients
List of countries by food energy intake
References
External links
Is a calorie a calorie?
DRI Calculator for Healthcare Professionals
Nutrition
Food analysis
Nutritional physiology | Food energy | [
"Chemistry"
] | 1,851 | [
"Food analysis",
"Food chemistry"
] |
182,353 | https://en.wikipedia.org/wiki/Carte%20du%20Ciel | The Carte du Ciel (literally, 'Map of the Sky') and the Astrographic Catalogue (or Astrographic Chart) were two distinct but connected components of a massive international astronomical project, initiated in the late 19th century, to catalogue and map the positions of millions of stars as faint as 11th or 12th magnitude. Twenty observatories from around the world participated in exposing and measuring more than 22,000 (glass) photographic plates in an enormous observing programme extending over several decades. Despite, or because of, its vast scale, the project was only ever partially successful – the Carte du Ciel component was never completed, and for almost half a century the Astrographic Catalogue part was largely ignored. However, the appearance of the Hipparcos Catalogue in 1997 has led to an important development in the use of this historical plate material.
Origins and goals
A vast and unprecedented international star-mapping project was initiated in 1887 by Paris Observatory director Amédée Mouchez, who realized the potential of the new dry plate photographic process to revolutionize the process of making maps of the stars. As a result of the Astrographic Congress of more than 50 astronomers held in Paris in April 1887, 20 observatories from around the world agreed to participate in the project, and two goals were established:
For the first, the Astrographic Catalogue, the entire sky was to be photographed to 11 mag to provide a reference catalogue of star positions that would fill the magnitude gap between the stars previously measured down to 8 mag by transit and meridian circle instruments – this would provide a reasonably dense network of star positions which could in turn be used as a reference system for the fainter survey component (the Carte du Ciel). Different observatories around the world were charged with surveying specific declination zones. The Astrographic Catalogue plates, of typically 6 minutes exposure, were in due course photographed, measured, and published in their entirety. They yielded a catalogue of positions and magnitudes down to about 11.5 mag, and the programme was largely completed during the first quarter of the 20th century.
For the second goal, a second set of plates, with longer exposures but minimal overlap, was to photograph all stars to 14 mag. These plates were to be reproduced and distributed as a set of charts, the Carte du Ciel, in contrast to previous sky charts which had been constructed from the celestial coordinates of stars observed by transit instruments. Most of the Carte du Ciel plates used three exposures of 20 minutes duration, displaced to form an equilateral triangle with sides of 10 arcsec, making it easy to distinguish stars from plate flaws, and asteroids from stars.
A contemporary account of this vast international astronomical collaboration, published in 1912, was given by Herbert Hall Turner, then Savilian Professor of Astronomy at Oxford University. Other aspects are covered in various papers in the Proceedings of IAU Symposium Number 133 held in 1988.
The Astrographic Catalogue
For the Astrographic Catalogue, 20 observatories from around the world participated in exposing and measuring more than 22,000 glass plates. Around half of the observatories ordered telescopes from the Henry brothers (Paul and Prosper) in France, with others coming from the factory of Howard Grubb of Dublin. These telescopes, termed normal astrographs, had an aperture of around 13 inches (33 cm) and a focal length of 11 feet (3.4 m), designed to create images with a uniform scale of approximately 60 arcsec/mm on the photographic plate while covering a 2° × 2° field of view. Each observatory was assigned a specific declination zone to photograph. The first such plate was taken in August 1891 at the Vatican Observatory (where the exposures took more than 27 years to complete), and the last in December 1950 at the Royal Observatory of Belgium (Brussels), with most observations being made between 1895 and 1920. To compensate for plate defects, each area of the sky was photographed twice, using a two-fold, corner-to-centre overlap pattern, extended at the zone boundaries, such that each observatory's plates would overlap with those of the adjacent zones. The measurable areas of the plates were 2.1° × 2.1° (13 cm × 13 cm), so the overlap pattern consisted of plates that were centred on every degree band in declination, but offset in right ascension by two degrees. Many factors, such as reference catalogue, reduction technique and print formats, were left up to the individual institutions. The positional accuracy goal was 0.5 arcsec per image.
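The quoted instrument numbers are mutually consistent, as the following back-of-envelope Python check (ours, with the focal length rounded) shows; the small-angle plate scale is about 206265/f arcseconds per millimetre of plate:

RAD_TO_ARCSEC = 206265.0   # arcseconds per radian

focal_length_mm = 3400.0   # ≈ 11 ft
scale = RAD_TO_ARCSEC / focal_length_mm
print(round(scale, 1), "arcsec/mm")  # ≈ 60.7, i.e. the ~60 arcsec/mm standard

plate_mm = 130.0           # 13 cm measurable width
print(round(plate_mm * scale / 3600, 1), "deg")  # ≈ 2.2°, matching the ~2.1° field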
Plate measurement was a protracted affair, with measuring done by eye and recorded by hand. The plates were turned over to a large number of people working as computers to determine the positions of the stars on each plate. (Before its modern meaning, the word "computer" meant a person who performs calculations). These human computers would manually measure each star with respect to the dozen or so reference stars within that particular plate, and then perform calculations to determine the star's right ascension and declination. The original goal of 11 mag for the limiting magnitude was generally surpassed, however, with some observatories routinely measuring stars as faint as 13 mag. In total, some 4.6 million stars (8.6 million images) were observed. The brightest stars were over-exposed on the plates, not measured, and therefore missing in the resulting catalogues. The plate measurements (as rectangular coordinates), as well as the formulae to transform them to equatorial coordinates, were published in the original volumes of the Astrographic Catalogue, although the accompanying equatorial coordinates are now of only historical interest. Publication of the measurements proceeded from 1902 to 1964, and resulted in 254 printed volumes of raw data.
For decades the Astrographic Catalogue was largely ignored. The data were difficult to work with because they were available neither in machine-readable form nor in equatorial coordinates. Decades of labour were expended internationally before the project was superseded by modern astronomical techniques. One problem was that the work took much longer than expected; as originally envisaged, the project was meant to have taken only 10 to 15 years. A more serious problem was that while many European astronomers were preoccupied with this project, which required steady, methodical labour rather than creativity, in other parts of the world, notably the United States, astrophysics was becoming far more important than astrometry. As a result, French astronomy in particular fell behind and lagged for decades.
The Carte du Ciel
The still-more-ambitious Carte du Ciel component of the programme was undertaken by some of the participating observatories, but neither completed nor even started by others. The charts proved to be excessively expensive to photograph and reproduce, generally via engraved copper plates (photogravure), and many zones were either not completed or properly published. The plates which were taken generally still exist, but cover only half of the sky. They are typically archived at their original observatories. A very few plates have recently been re-measured and re-analysed with the availability of the Hipparcos Catalogue data (see below).
Combination of the Astrographic Catalogue with Hipparcos
The vast amount of work invested in the Astrographic Catalogue (taking plates, measuring, and publishing) was long regarded as having yielded only a marginal scientific return. But today, astronomers are much indebted to this great effort because of the possibility of combining these century-old star positions with the results from ESA's Hipparcos space astrometry satellite, allowing high-accuracy proper motions to be derived for 2.5 million stars.
Specifically, the Astrographic Catalogue positions were transferred from the decades-old printed catalogues into machine readable form (undertaken at the Sternberg Astronomical Institute in Moscow under the leadership of A. Kuzmin) between 1987 and 1994. The data was then reduced anew (at the US Naval Observatory in Washington under the leadership of Sean Urban), using the reference stars measured by the Hipparcos astrometry satellite.
The stars from the Hipparcos Catalogue were used to establish a detailed reference framework at the various epochs of the Astrographic Catalogue plates, while the 2.5 million stars in the Tycho-2 Catalogue provided a dense reference framework to allow the plate distortions to be accurately calibrated and corrected. The proper motions of all the Tycho Catalogue stars could then be derived especially thanks to the Astrographic Catalogue, but additionally using star positions from more than 140 other ground-based catalogues.
Aside from the 120,000 stars of the Hipparcos Catalogue itself, the resulting Tycho-2 Catalogue (compiled at the Copenhagen University Observatory under the leadership of Erik Høg) became the largest, most accurate and most complete star catalogue of the brightest stars on the sky. It was the basis for deriving positions for all fainter stars on the sky until the Gaia Data Release 2 catalogue became available in 2018. Sean Urban of the US Naval Observatory wrote in 1998: "The history of the Astrographic Catalogue endeavour is one of dedicated individuals devoting tedious decades of their careers to a single goal. Some believe it is also the story of how the best European observatories of the 19th century lost their leadership in astronomical research by committing so many resources to this one undertaking. Long portrayed as an object lesson in overambition, the Astrographic Catalogue has more recently turned into a lesson in the way that old data can find new uses."
See also
Henry Chamberlain Russell
References
External links
IAU Commission 8 Working Group, includes a photograph of nuns measuring the Vatican plate collection (1910–1921)
Histoire de l'Observatoire de Toulouse (with a section on the Carte du Ciel)
A compilation of historical material by the Palermo Observatory
History of astronomy
Astronomical catalogues
Astronomical catalogues of stars
Astronomical surveys
Astrometry | Carte du Ciel | [
"Astronomy"
] | 2,050 | [
"History of astronomy",
"Astronomical surveys",
"Works about astronomy",
"Astrometry",
"Astronomical catalogues",
"Astronomical objects",
"Astronomical sub-disciplines"
] |
182,358 | https://en.wikipedia.org/wiki/Cubit | The cubit is an ancient unit of length based on the distance from the elbow to the tip of the middle finger. It was primarily associated with the Sumerians, Egyptians, and Israelites. The term cubit is found in the Bible regarding Noah's Ark, the Ark of the Covenant, the Tabernacle, and Solomon's Temple. The common cubit was divided into 6 palms × 4 fingers = 24 digits. Royal cubits added a palm for 7 palms × 4 fingers = 28 digits. These lengths typically ranged from about 44 to 53 cm, with an ancient Roman cubit being as long as 120 cm.
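As an illustration of these subdivisions, the Python sketch below converts palms and fingers to digits and to an approximate length; the digit length used (1.9 cm) is an assumed round value for illustration, not a historical standard.

DIGIT_CM = 1.9  # assumed digit length, for illustration only

def cubit_cm(palms, fingers_per_palm=4, digit_cm=DIGIT_CM):
    # A cubit of `palms` palms, each of `fingers_per_palm` digits.
    return palms * fingers_per_palm * digit_cm

print(round(cubit_cm(6), 1))  # common cubit: 24 digits ≈ 45.6 cm
print(round(cubit_cm(7), 1))  # royal cubit: 28 digits ≈ 53.2 cm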
Cubits of various lengths were employed in many parts of the world in antiquity, during the Middle Ages and as recently as early modern times. The term is still used in hedgelaying, the length of the forearm being frequently used to determine the interval between stakes placed within the hedge.
Etymology
The English word "cubit" comes from the Latin noun cubitum ("elbow"), from the verb cubare ("to lie down"), from which also comes the adjective "recumbent".
Ancient Egyptian royal cubit
The ancient Egyptian royal cubit is the earliest attested standard measure. Cubit rods were used for the measurement of length. A number of these rods have survived: two are known from the tomb of Maya, the treasurer of the 18th dynasty pharaoh Tutankhamun, in Saqqara; another was found in the tomb of Kha (TT8) in Thebes. Fourteen such rods, including one double cubit rod, were described and compared by Lepsius in 1865. These cubit rods range from about 52.3 to 52.9 cm in length and are divided into seven palms; each palm is divided into four fingers, and the fingers are further subdivided.
Early evidence for the use of this royal cubit comes from the Early Dynastic Period: on the Palermo Stone, the flood level of the Nile river during the reign of the Pharaoh Djer is given as measuring 6 cubits and 1 palm. Use of the royal cubit is also known from Old Kingdom architecture, from at least as early as the construction of the Step Pyramid of Djoser designed by Imhotep in around 2700 BC.
Ancient Mesopotamian units of measurement
Ancient Mesopotamian units of measurement originated in the loosely organized city-states of Early Dynastic Sumer. Each city, kingdom and trade guild had its own standards until the formation of the Akkadian Empire when Sargon of Akkad issued a common standard. This standard was improved by Naram-Sin, but fell into disuse after the Akkadian Empire dissolved. The standard of Naram-Sin was readopted in the Ur III period by the Nanše Hymn which reduced a plethora of multiple standards to a few agreed upon common groupings. Successors to Sumerian civilization including the Babylonians, Assyrians, and Persians continued to use these groupings.
The Classical Mesopotamian system formed the basis for Elamite, Hebrew, Urartian, Hurrian, Hittite, Ugaritic, Phoenician, Babylonian, Assyrian, Persian, Arabic, and Islamic metrologies. The Classical Mesopotamian System also has a proportional relationship, by virtue of standardized commerce, to Bronze Age Harappan and Egyptian metrologies.
In 1916, during the last years of the Ottoman Empire and in the middle of World War I, the German assyriologist Eckhard Unger found a copper-alloy bar while excavating at Nippur. The bar dates from about 2650 BC, and Unger claimed it was used as a measurement standard. This irregularly formed and irregularly marked graduated rule supposedly defined the Sumerian cubit as about 51.9 cm.
There is some evidence that cubits were used to measure angular separation. The Babylonian Astronomical Diary for 568–567 BCE refers to Jupiter being one cubit behind the elbow of Sagittarius. One cubit measures about 2 degrees.
Biblical cubit
The standard of the cubit in different countries and in different ages has varied. This realization led the rabbis of the 2nd century CE to clarify the length of their cubit, saying that the measure of the cubit of which they have spoken "applies to the cubit of middle-size". In this case, the requirement is to make use of a standard 6 handbreadths to each cubit, and which handbreadth was not to be confused with an outstretched palm, but rather one that was clenched and which handbreadth has the standard width of 4 fingerbreadths (each fingerbreadth being equivalent to the width of a thumb, about 2.25 cm). This puts the handbreadth at roughly 9 cm, and 6 handbreadths (1 cubit) at 54 cm. Epiphanius of Salamis, in his treatise On Weights and Measures, describes how it was customary, in his day, to take the measurement of the biblical cubit: "The cubit is a measure, but it is taken from the measure of the forearm. For the part from the elbow to the wrist and the palm of the hand is called the cubit, the middle finger of the cubit measure being also extended at the same time and there being added below (it) the span, that is, of the hand, taken all together."
Rabbi Avraham Chaim Naeh put the linear measurement of a cubit at 48 cm. Avrohom Yeshaya Karelitz (the "Chazon Ish"), dissenting, put the length of a cubit at 57.6 cm.
Rabbi and philosopher Maimonides, following the Talmud, makes a distinction between the cubit of 6 handbreadths used in ordinary measurements, and the cubit of 5 handbreadths used in measuring the Golden Altar, the base of the altar of burnt offerings, its circuit and the horns of the altar.
Ancient Greece
In ancient Greek units of measurement, the standard forearm cubit measured approximately 46 cm. The short forearm cubit, from the knuckle of the middle finger (i.e., fist clenched) to the elbow, measured approximately 35 cm.
Ancient Rome
In ancient Rome, according to Vitruvius, a cubit was equal to 1½ Roman feet or 6 palm widths (approximately 44 cm). A 120-centimetre cubit (approximately four feet long), called the Roman ulna, was common in the Roman Empire; this cubit was measured from the fingers of the outstretched arm opposite the man's hip.
Islamic world
In the Islamic world, the cubit (dhirāʿ) had a similar origin, being originally defined as the arm from the elbow to the tip of the middle finger. Several different cubit lengths were current in the medieval Islamic world; the dhirāʿ was commonly subdivided into six handsbreadths, and each handsbreadth into four fingerbreadths. The most commonly used definitions were:
the legal cubit, also known as the hand cubit, the cubit of Yusuf (named after the 8th-century Abu Yusuf), the postal cubit, the "freed" cubit and the thread cubit. Its length was slightly different in the Abbasid Caliphate, possibly as a result of reforms of Caliph al-Ma'mun (r. 813–833).
the black cubit, adopted in the Abbasid period and fixed by the measure used in the Nilometer on Rawda Island. It is also known as the common cubit or the sack-cloth cubit, and was the most commonly used cubit in the Maghreb and Islamic Spain.
the king's cubit, inherited from the Sassanid Persians, which measured eight handsbreadths on average. It was this measure that Ziyad ibn Abihi used for his survey of Iraq, and it is hence also known as the Ziyadi cubit or survey cubit. From the reign of Caliph al-Mansur (r. 754–775) it was also known as the Hashemite cubit; the work cubit was an identical measure.
the cloth cubit, whose length fluctuated widely according to region, with distinct local standards in Egypt, Damascus, Aleppo, Baghdad, and Istanbul.
A variety of more local or specific cubit measures developed over time: the "small" Hashemite cubit, also known as the cubit of Bilal (named after the 8th-century Basran Bilal ibn Abi Burda); the Egyptian carpenter's cubit or architect's cubit, reduced and standardized in the 19th century; the house cubit, introduced by the Abbasid-era Ibn Abi Layla; and the cubit of Umar and its double, the scale cubit, established by al-Ma'mun and used mainly for measuring canals.
In medieval and early modern Persia, the cubit was either the legal cubit or the Isfahan cubit. A royal cubit appeared in the 17th century, while a "shortened" cubit (likely derived from the widely used cloth cubit of Aleppo) was used for cloth. The measure survived into the 20th century. Mughal India also had its own royal cubit.
Other systems
Other measurements based on the length of the forearm include some lengths of ell, the Russian lokot, the Indian hasta, the Thai sok, the Malay hasta, the Tamil muzham, the Telugu mura, the Khmer hat, and the Tibetan khru.
Cubit arm in heraldry
A cubit arm in heraldry may be dexter or sinister. It may be vested (with a sleeve) and may be shown in various positions, most commonly erect, but also fesswise (horizontal), bendwise (diagonal) and is often shown grasping objects. It is most often used erect as a crest, for example by the families of Poyntz of Iron Acton, Rolle of Stevenstone and Turton.
See also
History of measurement
List of obsolete units of measurement
System of measurement
Unit of measurement
References
Bibliography
Petrie, Sir Flinders (1881). Pyramids and Temples of Gizeh.
Stone, Mark H., "The Cubit: A History and Measurement Commentary", Journal of Anthropology, 2014
External links
Obsolete units of measurement
Units of length
Human-based units of measurement | Cubit | [
"Mathematics"
] | 2,149 | [
"Obsolete units of measurement",
"Quantity",
"Units of measurement",
"Units of length"
] |
182,369 | https://en.wikipedia.org/wiki/Whitelist | A whitelist or allowlist is a list or register of entities that are being provided a particular privilege, service, mobility, access or recognition. Entities on the list will be accepted, approved and/or recognized. Whitelisting is the reverse of blacklisting, the practice of identifying entities that are denied, unrecognized, or ostracized.
Email whitelists
Spam filters often include the ability to "whitelist" certain sender IP addresses, email addresses or domain names to protect their email from being rejected or sent to a junk mail folder. These can be manually maintained by the user or system administrator, but the term can also refer to externally maintained whitelist services.
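A minimal Python sketch of such a whitelist check (the addresses are hypothetical; real spam filters combine this with many other signals):

ALLOWED_ADDRESSES = {"alice@example.org"}  # hypothetical entries
ALLOWED_DOMAINS = {"example.com"}

def is_whitelisted(sender):
    # Accept if the exact address or its domain is on the whitelist.
    sender = sender.lower()
    domain = sender.rsplit("@", 1)[-1]
    return sender in ALLOWED_ADDRESSES or domain in ALLOWED_DOMAINS

print(is_whitelisted("Bob@example.com"))    # True: bypasses the junk folder
print(is_whitelisted("spam@junk.example"))  # False: normal filtering applies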
Non-commercial whitelists
Non-commercial whitelists are operated by various non-profit organizations, ISPs, and others interested in blocking spam. Rather than paying fees, the sender must pass a series of tests; for example, their email server must not be an open relay and have a static IP address. The operator of the whitelist may remove a server from the list if complaints are received.
Commercial whitelists
Commercial whitelists are a system by which an Internet service provider allows someone to bypass spam filters when sending email messages to its subscribers, in return for a pre-paid fee, either an annual or a per-message fee. A sender can then be more confident that their messages have reached recipients without being blocked, or having links or images stripped out of them, by spam filters. The purpose of commercial whitelists is to allow companies to reliably reach their customers by email.
Advertising whitelists
Many websites rely on ads as a source of revenue, but the use of ad blockers is increasingly common. Websites that detect an ad blocker in use often ask for it to be disabled, or for their site to be "added to the whitelist", a standard feature of most ad blockers.
Network whitelists
LAN whitelists
A common use for whitelists is in local area network (LAN) security. Many network administrators set up MAC address whitelists, or MAC address filtering, to control which devices are allowed on their networks. This is used when encryption is not a practical solution or in tandem with encryption. However, the protection is sometimes ineffective because a MAC address can be spoofed.
IP whitelist
Firewalls can usually be configured to allow data traffic only from, or to, certain ranges of IP addresses.
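For example, a range-based allowlist can be expressed with Python's standard ipaddress module (a sketch; the networks shown are documentation placeholders):

import ipaddress

ALLOWED_NETS = [ipaddress.ip_network(n)
                for n in ("192.0.2.0/24", "203.0.113.7/32")]

def is_allowed(addr):
    # True if the address falls inside any allowed network.
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in ALLOWED_NETS)

print(is_allowed("192.0.2.42"))    # True
print(is_allowed("198.51.100.1"))  # False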
Application whitelists
One approach in combating viruses and malware is to whitelist software which is considered safe to run, blocking all others. This is particularly attractive in a corporate environment, where there are typically already restrictions on what software is approved.
Leading providers of application whitelisting technology include Bit9, Velox, McAfee, Lumension, ThreatLocker, Airlock Digital and SMAC.
On Microsoft Windows, recent versions include AppLocker, which allows administrators to control which executable files are denied or allowed to execute. With AppLocker, administrators are able to create rules based on file names, publishers or file location that will allow certain files to execute. Rules can apply to individuals or groups. Policies are used to group users into different enforcement levels. For example, some users can be added to a report-only policy that will allow administrators to understand the impact before moving that user to a higher enforcement level.
Linux systems typically have AppArmor and SE Linux features available which can be used to effectively block all applications which are not explicitly whitelisted, and commercial products are also available.
HP-UX introduced a feature called "HP-UX Whitelisting" in the 11i v3 release.
See also
Blacklisting
Blacklist (computing)
Blackballing
Closed platform
DNSWL, whitelisting based on DNS
Opt-in
References
Antivirus software
Blacklisting
Databases
Malware
Social privilege
Social status
Spamming | Whitelist | [
"Technology"
] | 792 | [
"Malware",
"Computer security exploits"
] |
182,445 | https://en.wikipedia.org/wiki/Fermi%20liquid%20theory | Fermi liquid theory (also known as Landau's Fermi-liquid theory) is a theoretical model of interacting fermions that describes the normal state of the conduction electrons in most metals at sufficiently low temperatures. The theory describes the behavior of many-body systems of particles in which the interactions between particles may be strong. The phenomenological theory of Fermi liquids was introduced by the Soviet physicist Lev Davidovich Landau in 1956, and later developed by Alexei Abrikosov and Isaak Khalatnikov using diagrammatic perturbation theory. The theory explains why some of the properties of an interacting fermion system are very similar to those of the ideal Fermi gas (collection of non-interacting fermions), and why other properties differ.
Fermi liquid theory applies most notably to conduction electrons in normal (non-superconducting) metals, and to liquid helium-3. Liquid helium-3 is a Fermi liquid at low temperatures (but not low enough to be in its superfluid phase). An atom of helium-3 has two protons, one neutron and two electrons, giving an odd number of fermions, so the atom itself is a fermion. Fermi liquid theory also describes the low-temperature behavior of electrons in heavy fermion materials, which are metallic rare-earth alloys having partially filled f orbitals. The effective mass of electrons in these materials is much larger than the free-electron mass because of interactions with other electrons, so these systems are known as heavy Fermi liquids. Strontium ruthenate displays some key properties of Fermi liquids, despite being a strongly correlated material that is similar to high temperature superconductors such as the cuprates. The low-momentum interactions of nucleons (protons and neutrons) in atomic nuclei are also described by Fermi liquid theory.
Description
The key ideas behind Landau's theory are the notion of adiabaticity and the Pauli exclusion principle. Consider a non-interacting fermion system (a Fermi gas), and suppose we "turn on" the interaction slowly. Landau argued that in this situation, the ground state of the Fermi gas would adiabatically transform into the ground state of the interacting system.
By Pauli's exclusion principle, the ground state of a Fermi gas consists of fermions occupying all momentum states corresponding to momentum with all higher momentum states unoccupied. As the interaction is turned on, the spin, charge and momentum of the fermions corresponding to the occupied states remain unchanged, while their dynamical properties, such as their mass, magnetic moment etc. are renormalized to new values. Thus, there is a one-to-one correspondence between the elementary excitations of a Fermi gas system and a Fermi liquid system. In the context of Fermi liquids, these excitations are called "quasiparticles".
Landau quasiparticles are long-lived excitations with a lifetime τ that satisfies ℏ/τ ≪ ε, where ε is the quasiparticle energy (measured from the Fermi energy). At finite temperature, ε is on the order of the thermal energy k_BT, and the condition for Landau quasiparticles can be reformulated as ℏ/τ ≪ k_BT.
For this system, the many-body Green's function can be written (near its poles) in the form
G(ω, p) ≈ Z / (ω + μ − ε(p))
where μ is the chemical potential, ε(p) is the energy corresponding to the given momentum state, and Z is called the quasiparticle residue or renormalisation constant, which is very characteristic of Fermi liquid theory. The spectral function for the system can be directly observed via angle-resolved photoemission spectroscopy (ARPES), and can be written (in the limit of low-lying excitations) in the form
A(k, ω) = Z δ(ω − v_F k_∥)
where v_F is the Fermi velocity and k_∥ is the momentum measured relative to the Fermi surface.
Physically, we can say that a propagating fermion interacts with its surrounding in such a way that the net effect of the interactions is to make the fermion behave as a "dressed" fermion, altering its effective mass and other dynamical properties. These "dressed" fermions are what we think of as "quasiparticles".
Another important property of Fermi liquids is related to the scattering cross section for electrons. Suppose we have an electron with energy ε₁ above the Fermi surface, and suppose it scatters with a particle in the Fermi sea with energy ε₂. By Pauli's exclusion principle, both the particles after scattering have to lie above the Fermi surface, with energies ε₃, ε₄ > ε_F. Now, suppose the initial electron has energy very close to the Fermi surface, ε₁ ≈ ε_F. Then ε₂, ε₃, ε₄ also have to be very close to the Fermi surface. This reduces the phase space volume of the possible states after scattering, and hence, by Fermi's golden rule, the scattering cross section goes to zero. Thus we can say that the lifetime of particles at the Fermi surface goes to infinity.
Similarities to Fermi gas
The Fermi liquid is qualitatively analogous to the non-interacting Fermi gas, in the following sense: The system's dynamics and thermodynamics at low excitation energies and temperatures may be described by substituting the non-interacting fermions with interacting quasiparticles, each of which carries the same spin, charge and momentum as the original particles. Physically these may be thought of as being particles whose motion is disturbed by the surrounding particles and which themselves perturb the particles in their vicinity. Each many-particle excited state of the interacting system may be described by listing all occupied momentum states, just as in the non-interacting system. As a consequence, quantities such as the heat capacity of the Fermi liquid behave qualitatively in the same way as in the Fermi gas (e.g. the heat capacity rises linearly with temperature).
Differences from Fermi gas
The following differences to the non-interacting Fermi gas arise:
Energy
The energy of a many-particle state is not simply a sum of the single-particle energies of all occupied states. Instead, the change in energy for a given change δn_k in the occupation of states contains terms both linear and quadratic in δn_k (for the Fermi gas, it would only be linear, δE = Σ_k ε_k δn_k, where ε_k denotes the single-particle energies). The linear contribution corresponds to renormalized single-particle energies, which involve, e.g., a change in the effective mass of particles. The quadratic terms correspond to a sort of "mean-field" interaction between quasiparticles, which is parametrized by so-called Landau Fermi liquid parameters and determines the behaviour of density oscillations (and spin-density oscillations) in the Fermi liquid. Still, these mean-field interactions do not lead to a scattering of quasi-particles with a transfer of particles between different momentum states.
The renormalization of the mass of a fluid of interacting fermions can be calculated from first principles using many-body computational techniques. For the two-dimensional homogeneous electron gas, GW calculations and quantum Monte Carlo methods have been used to calculate renormalized quasiparticle effective masses.
Specific heat and compressibility
Specific heat, compressibility, spin-susceptibility and other quantities show the same qualitative behaviour (e.g. dependence on temperature) as in the Fermi gas, but the magnitude is (sometimes strongly) changed.
Interactions
In addition to the mean-field interactions, some weak interactions between quasiparticles remain, which lead to scattering of quasiparticles off each other. Therefore, quasiparticles acquire a finite lifetime. However, at low enough energies above the Fermi surface, this lifetime becomes very long, such that the product of excitation energy (expressed in frequency) and lifetime is much larger than one. In this sense, the quasiparticle energy is still well-defined (in the opposite limit, Heisenberg's uncertainty relation would prevent an accurate definition of the energy).
Structure
The structure of the "bare" particle (as opposed to quasiparticle) many-body Green's function is similar to that in the Fermi gas (where, for a given momentum, the Green's function in frequency space is a delta peak at the respective single-particle energy). The delta peak in the density-of-states is broadened (with a width given by the quasiparticle lifetime). In addition (and in contrast to the quasiparticle Green's function), its weight (integral over frequency) is suppressed by the quasiparticle weight factor Z. The remainder of the total weight is in a broad "incoherent background", corresponding to the strong effects of interactions on the fermions at short time scales.
Distribution
The distribution of particles (as opposed to quasiparticles) over momentum states at zero temperature still shows a discontinuous jump at the Fermi surface (as in the Fermi gas), but it does not drop from 1 to 0: the step is only of size Z.
Electrical resistivity
In a metal the resistivity at low temperatures is dominated by electron–electron scattering in combination with umklapp scattering. For a Fermi liquid, the resistivity from this mechanism varies as T², which is often taken as an experimental check for Fermi liquid behaviour (in addition to the linear temperature-dependence of the specific heat), although it only arises in combination with the lattice. In certain cases, umklapp scattering is not required. For example, the resistivity of compensated semimetals scales as T² because of mutual scattering of electron and hole. This is known as the Baber mechanism.
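In practice the check is a straight-line fit of resistivity against T², as in this Python sketch with synthetic data (illustrative only, not measured values):

import numpy as np

T = np.linspace(2.0, 20.0, 50)   # temperature in kelvin
rho = 1.0e-8 + 2.0e-11 * T**2    # synthetic Fermi-liquid resistivity

A, rho0 = np.polyfit(T**2, rho, 1)  # linear fit in T**2
print(A, rho0)  # recovers ≈ 2.0e-11 and ≈ 1.0e-8, i.e. rho = rho0 + A*T^2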
Optical response
Fermi liquid theory predicts that the scattering rate, which governs the optical response of metals, not only depends quadratically on temperature (thus causing the T² dependence of the DC resistance), but also depends quadratically on frequency. This is in contrast to the Drude prediction for non-interacting metallic electrons, where the scattering rate is a constant as a function of frequency.
One material in which optical Fermi liquid behavior was experimentally observed is the low-temperature metallic phase of Sr2RuO4.
Instabilities
The experimental observation of exotic phases in strongly correlated systems has triggered an enormous effort from the theoretical community to try to understand their microscopic origin. One possible route to detect instabilities of a Fermi liquid is precisely the analysis done by Isaak Pomeranchuk. Due to that, the Pomeranchuk instability has been studied by several authors with different techniques in the last few years and in particular, the instability of the Fermi liquid towards the nematic phase was investigated for several models.
Non-Fermi liquids
Non-Fermi liquids are systems in which the Fermi-liquid behaviour breaks down. The simplest example is a system of interacting fermions in one dimension, called the Luttinger liquid. Although Luttinger liquids are physically similar to Fermi liquids, the restriction to one dimension gives rise to several qualitative differences, such as the absence of a quasiparticle peak in the momentum-dependent spectral function, and the presence of spin-charge separation and of spin-density waves. One cannot ignore the existence of interactions in one dimension, and the problem must be described with a non-Fermi theory, of which the Luttinger liquid is one example. At small finite temperatures in one dimension, the ground state of the system is described by the spin-incoherent Luttinger liquid (SILL).
Another example of non-Fermi-liquid behaviour is observed at quantum critical points of certain second-order phase transitions, such as heavy fermion criticality, Mott criticality and high-Tc cuprate phase transitions. The ground state of such transitions is characterized by the presence of a sharp Fermi surface, although there may not be well-defined quasiparticles. That is, on approaching the critical point, it is observed that the quasiparticle residue Z → 0.
In optimally doped cuprates and iron-based superconductors, the normal state above the critical temperature shows signs of non-Fermi liquid behaviour, and is often called a strange metal. In this region of phase diagram, resistivity increases linearly in temperature and the Hall coefficient is found to depend on temperature.
Understanding the behaviour of non-Fermi liquids is an important problem in condensed matter physics. Approaches towards explaining these phenomena include the treatment of marginal Fermi liquids; attempts to understand critical points and derive scaling relations; and descriptions using emergent gauge theories with techniques of holographic gauge/gravity duality.
See also
Classical fluid
Fermionic condensate
Luttinger liquid
Luttinger's theorem
Strongly correlated quantum spin liquid
References
Further reading
Condensed matter physics
Fermions
Electronic band structures
Lev Landau | Fermi liquid theory | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 2,624 | [
"Electron",
"Matter",
"Fermions",
"Phases of matter",
"Materials science",
"Electronic band structures",
"Condensed matter physics",
"Subatomic particles"
] |
182,451 | https://en.wikipedia.org/wiki/Ethylenediaminetetraacetic%20acid | Ethylenediaminetetraacetic acid (EDTA), also called EDTA acid, is an aminopolycarboxylic acid with the formula [CH2N(CH2CO2H)2]2. This white, slightly water-soluble solid is widely used to bind to iron (Fe2+/Fe3+) and calcium ions (Ca2+), forming water-soluble complexes even at neutral pH. It is thus used to dissolve Fe- and Ca-containing scale as well as to deliver iron ions under conditions where its oxides are insoluble. EDTA is available as several salts, notably disodium EDTA, sodium calcium edetate, and tetrasodium EDTA, but these all function similarly.
Uses
EDTA is widely used in industry. It also has applications in food preservation, medicine, cosmetics, water softening, laboratories, and other fields.
Industrial
EDTA is mainly used to sequester (bind or confine) metal ions in aqueous solution. In the textile industry, it prevents metal ion impurities from modifying colours of dyed products. In the pulp and paper industry, EDTA inhibits the ability of metal ions, especially Mn2+, from catalysing the disproportionation of hydrogen peroxide, which is used in chlorine-free bleaching.
Gas scrubbing
Aqueous [Fe(EDTA)]− is used for removing ("scrubbing") hydrogen sulfide from gas streams. This conversion is achieved by oxidising the hydrogen sulfide to elemental sulfur, which is non-volatile:
2 [Fe(EDTA)]− + H2S → 2 [Fe(EDTA)]2− + S + 2 H+
In this application, the iron(III) centre is reduced to its iron(II) derivative, which can then be reoxidised by air. In a similar manner, nitrogen oxides are removed from gas streams using [Fe(EDTA)]2−.
Food
In a similar manner, EDTA is added to some food as a preservative or stabiliser to prevent catalytic oxidative decolouration, which is catalysed by metal ions.
Water softener
The reduction of water hardness in laundry applications and the dissolution of scale in boilers both rely on EDTA and related complexants to bind Ca2+, Mg2+, as well as other metal ions. Once bound to EDTA, these metal complexes are less likely to form precipitates or to interfere with the action of the soaps and detergents. For similar reasons, cleaning solutions often contain EDTA. In a similar manner EDTA is used in the cement industry for the determination of free lime and free magnesia in cement and clinkers.
The solubilisation of Fe3+ ions at or below near neutral pH can be accomplished using EDTA. This property is useful in agriculture including hydroponics. However, given the pH dependence of ligand formation, EDTA is not helpful for improving iron solubility in above neutral soils. Otherwise, at near-neutral pH and above, iron(III) forms insoluble salts, which are less bioavailable to susceptible plant species.
Ion-exchange chromatography
EDTA was used in separation of the lanthanide metals by ion-exchange chromatography. Perfected by F. H. Spedding et al. in 1954, the method relies on the steady increase in stability constant of the lanthanide EDTA complexes with atomic number. Using sulfonated polystyrene beads and Cu2+ as a retaining ion, EDTA causes the lanthanides to migrate down the column of resin while separating into bands of pure lanthanides. The lanthanides elute in order of decreasing atomic number. Due to the expense of this method, relative to countercurrent solvent extraction, ion exchange is now used only to obtain the highest purities of lanthanides (typically greater than 99.99%).
Medicine
Sodium calcium edetate, an EDTA derivative, is used to bind metal ions in the practice of chelation therapy, such as for treating mercury and lead poisoning. It is used in a similar manner to remove excess iron from the body. This therapy is used to treat the complication of repeated blood transfusions, as would be applied to treat thalassaemia.
In testing
In medical diagnosis and organ function tests (here, kidney function test), the chromium(III) complex [Cr(EDTA)]− (as radioactive chromium-51 (51Cr)) is administered intravenously and its filtration into the urine is monitored. This method is useful for evaluating glomerular filtration rate (GFR) in nuclear medicine.
EDTA is used extensively in the analysis of blood. It is an anticoagulant for blood samples for CBC/FBCs, where the EDTA chelates the calcium present in the blood specimen, arresting the coagulation process and preserving blood cell morphology. Tubes containing EDTA are marked with lavender (purple) or pink tops. EDTA is also in tan top tubes for lead testing and can be used in royal blue top tubes for trace metal testing.
EDTA is a slime dispersant, and has been found to be highly effective in reducing bacterial growth during implantation of intraocular lenses (IOLs).
Dentistry
Dentists and endodontists use EDTA solutions to remove inorganic debris (smear layer) and lubricate the root canals in endodontics. This procedure helps prepare root canals for obturation. Furthermore, EDTA solutions with the addition of a surfactant loosen up calcifications inside a root canal and allow instrumentation (canal shaping) and facilitate apical advancement of a file in a tight or calcified root canal towards the apex.
Eyedrops
It serves as a preservative (usually to enhance the action of another preservative such as benzalkonium chloride or thiomersal) in ocular preparations and eyedrops.
Alternative medicine
Some alternative practitioners believe EDTA acts as an antioxidant, preventing free radicals from injuring blood vessel walls, therefore reducing atherosclerosis. These ideas are unsupported by scientific studies, and seem to contradict some currently accepted principles. The U.S. FDA has not approved it for the treatment of atherosclerosis.
Cosmetics
In shampoos, cleaners, and other personal care products, EDTA salts are used as a sequestering agent to improve their stability in air.
Laboratory applications
In the laboratory, EDTA is widely used for scavenging metal ions: In biochemistry and molecular biology, ion depletion is commonly used to deactivate metal-dependent enzymes, either as an assay for their reactivity or to suppress damage to DNA, proteins, and polysaccharides. EDTA also acts as a selective inhibitor against dNTP hydrolyzing enzymes (Taq polymerase, dUTPase, MutT), liver arginase and horseradish peroxidase independently of metal ion chelation. These findings urge a rethinking of the use of EDTA as a biochemically inactive metal ion scavenger in enzymatic experiments. In analytical chemistry, EDTA is used in complexometric titrations and the analysis of water hardness, or as a masking agent to sequester metal ions that would interfere with the analyses.
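As a worked illustration of a complexometric hardness titration (a Python sketch with assumed numbers throughout; EDTA binds Ca2+ and Mg2+ in a 1:1 ratio):

M_EDTA = 0.0100      # mol/L titrant (assumed)
V_EDTA_ML = 12.5     # titrant volume at the endpoint (assumed)
V_SAMPLE_ML = 50.0   # water sample volume (assumed)
MW_CACO3 = 100.09    # g/mol; hardness is reported "as CaCO3" by convention

mol_metal = M_EDTA * V_EDTA_ML / 1000  # 1:1 EDTA:metal complexation
hardness = mol_metal * MW_CACO3 * 1e6 / V_SAMPLE_ML  # µg/mL = mg/L as CaCO3
print(round(hardness))  # ≈ 250 mg CaCO3 per litre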
EDTA finds many specialised uses in the biomedical labs, such as in veterinary ophthalmology as an anticollagenase to prevent the worsening of corneal ulcers in animals. In tissue culture, EDTA is used as a chelating agent that binds to calcium and prevents joining of cadherins between cells, preventing clumping of cells grown in liquid suspension, or detaching adherent cells for passaging. In histopathology, EDTA can be used as a decalcifying agent making it possible to cut sections using a microtome once the tissue sample is demineralised.
EDTA is also known to inhibit a range of metallopeptidases, the method of inhibition occurs via the chelation of the metal ion required for catalytic activity. EDTA can also be used to test for bioavailability of heavy metals in sediments. However, it may influence the bioavailability of metals in solution, which may pose concerns regarding its effects in the environment, especially given its widespread uses and applications.
Other
The oxidising properties of [Fe(EDTA)]− are used in photography to solubilise silver particles.
EDTA is also used to remove crud (corroded metals) from fuel rods in nuclear reactors.
Side effects
EDTA exhibits low acute toxicity, with an LD50 (rat) of 2.0 g/kg to 2.2 g/kg. It has been found to be both cytotoxic and weakly genotoxic in laboratory animals. Oral exposures have been noted to cause reproductive and developmental effects. The same review also found that both dermal exposure to EDTA in most cosmetic formulations and inhalation exposure to EDTA in aerosolised cosmetic formulations would produce exposure levels below those seen to be toxic in oral dosing studies.
Synthesis
The compound was first described in 1935 by Ferdinand Münz, who prepared the compound from ethylenediamine and chloroacetic acid. Today, EDTA is mainly synthesised from ethylenediamine (1,2-diaminoethane), formaldehyde, and sodium cyanide. This route yields the tetrasodium EDTA, which is converted in a subsequent step into the acid forms:
H2NCH2CH2NH2 + 4 CH2O + 4 NaCN + 4 H2O → (NaO2CCH2)2NCH2CH2N(CH2CO2Na)2 + 4 NH3
(NaO2CCH2)2NCH2CH2N(CH2CO2Na)2 + 4 HCl → (HO2CCH2)2NCH2CH2N(CH2CO2H)2 + 4 NaCl
This process is used to produce about 80,000 tonnes of EDTA each year. Impurities cogenerated by this route include glycine and nitrilotriacetic acid; they arise from reactions of the ammonia coproduct.
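As a worked aside, the atom balance of the two equations above, and the cyanide demand they imply per kilogram of product, can be checked with a short script. This is a minimal sketch, not part of the source text; the molar masses are rounded standard values and the species names are just labels.

```python
# Check the atom balance of the tetrasodium EDTA synthesis and estimate
# the NaCN demand per kg of product (a sketch, not a process model).
from collections import Counter

# Element counts for each species in the two equations above.
species = {
    "ethylenediamine": {"C": 2, "H": 8, "N": 2},
    "formaldehyde":    {"C": 1, "H": 2, "O": 1},
    "NaCN":            {"Na": 1, "C": 1, "N": 1},
    "H2O":             {"H": 2, "O": 1},
    "Na4EDTA":         {"C": 10, "H": 12, "N": 2, "O": 8, "Na": 4},
    "NH3":             {"N": 1, "H": 3},
}

def side(stoich):
    """Sum element counts over (coefficient, species) pairs."""
    total = Counter()
    for coeff, name in stoich:
        for el, n in species[name].items():
            total[el] += coeff * n
    return total

lhs = side([(1, "ethylenediamine"), (4, "formaldehyde"), (4, "NaCN"), (4, "H2O")])
rhs = side([(1, "Na4EDTA"), (4, "NH3")])
assert lhs == rhs  # the first equation is atom-balanced as written

# Rounded molar masses (g/mol), adequate for an order-of-magnitude estimate.
mass = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999, "Na": 22.990}
molar = lambda f: sum(mass[el] * n for el, n in f.items())
ratio = 4 * molar(species["NaCN"]) / molar(species["Na4EDTA"])
print(f"{ratio:.2f} kg NaCN per kg Na4EDTA")  # ~0.52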
Nomenclature
To describe EDTA and its various protonated forms, chemists distinguish between EDTA4−, the conjugate base that is the ligand, and H4EDTA, the precursor to that ligand. At very low pH (very acidic conditions) the fully protonated H6EDTA2+ form predominates, whereas at very high pH (very basic conditions) the fully deprotonated EDTA4− form is prevalent. In this article, the term EDTA is used to mean H4−xEDTAx−, whereas in its complexes EDTA4− stands for the tetraanion ligand.
Coordination chemistry principles
In coordination chemistry, EDTA4− is a member of the aminopolycarboxylic acid family of ligands. EDTA4− usually binds to a metal cation through its two amines and four carboxylates, i.e., it is a hexadentate ("six-toothed") chelating agent. Many of the resulting coordination compounds adopt octahedral geometry. Although of little consequence for its applications, these octahedral complexes are chiral. The cobalt(III) anion [Co(EDTA)]− has been resolved into enantiomers. Many complexes of EDTA4− adopt more complex structures due to either the formation of an additional bond to water, i.e. seven-coordinate complexes, or the displacement of one carboxylate arm by water. The iron(III) complex of EDTA is seven-coordinate. Early work on the development of EDTA was undertaken by Gerold Schwarzenbach in the 1940s. EDTA forms especially strong complexes with Mn(II), Cu(II), Fe(III), Pb(II) and Co(III).
Several features of EDTA's complexes are relevant to its applications. First, because of its high denticity, this ligand has a high affinity for metal cations:
[Fe(H2O)6]3+ + H4EDTA ⇌ [Fe(EDTA)]− + 6 H2O + 4 H+   Keq = 10^25.1
Written in this way, the equilibrium quotient shows that metal ions compete with protons for binding to EDTA. Because metal ions are extensively enveloped by EDTA, their catalytic properties are often suppressed. Finally, since complexes of EDTA4− are anionic, they tend to be highly soluble in water. For this reason, EDTA is able to dissolve deposits of metal oxides and carbonates.
The pKa values of free EDTA are 0, 1.5, 2, 2.66 (deprotonation of the four carboxyl groups) and 6.16, 10.24 (deprotonation of the two amino groups).
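A minimal sketch (assumptions flagged in comments) showing how these pKa values, together with the formation constant quoted above, determine EDTA's effective binding strength as a function of pH, via the standard conditional-constant calculation from analytical chemistry:

```python
# Fraction of total EDTA present as the fully deprotonated EDTA4- ligand,
# and the resulting conditional formation constant K' = alpha * Kf.
import math

pKas = [0.0, 1.5, 2.0, 2.66, 6.16, 10.24]   # values quoted in the text
Kas = [10.0 ** -p for p in pKas]

def alpha_y4(pH):
    """Fraction of total EDTA in the EDTA4- form at a given pH."""
    H = 10.0 ** -pH
    # Denominator terms: [H]^6, [H]^5*Ka1, ..., Ka1*Ka2*...*Ka6
    terms, prod = [H ** 6], 1.0
    for i, Ka in enumerate(Kas):
        prod *= Ka
        terms.append(H ** (5 - i) * prod)
    return terms[-1] / sum(terms)

# Assumption: the quoted 10^25.1 is treated here as the formation constant
# of [Fe(EDTA)]- from Fe3+ and EDTA4-.
log_Kf_Fe3 = 25.1
for pH in (2, 5, 8, 11):
    log_K_cond = log_Kf_Fe3 + math.log10(alpha_y4(pH))
    print(f"pH {pH:>2}: alpha(Y4-) = {alpha_y4(pH):.2e}, log K' = {log_K_cond:.1f}")
```

The effective constant collapses at low pH, which is the quantitative form of the statement above that metal ions compete with protons for binding to EDTA.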
Environmental concerns
Abiotic degradation
EDTA is in such widespread use that questions have been raised whether it is a persistent organic pollutant. While EDTA serves many positive functions in different industrial, pharmaceutical and other avenues, the longevity of EDTA can pose serious issues in the environment. The degradation of EDTA is slow. It mainly occurs abiotically in the presence of sunlight.
The most important process for the elimination of EDTA from surface waters is direct photolysis at wavelengths below 400 nm. Depending on the light conditions, the photolysis half-lives of iron(III) EDTA in surface waters range from as little as 11.3 minutes to more than 100 hours. Degradation of FeEDTA, but not EDTA itself, produces iron complexes of the triacetate (ED3A), diacetate (EDDA), and monoacetate (EDMA) – 92% of EDDA and EDMA biodegrades in 20 hours while ED3A displays significantly higher resistance. EDTA species abundant in the environment, such as the Mg2+ and Ca2+ complexes, are more persistent.
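Assuming simple first-order photolysis kinetics (an assumption, not stated in the text), the quoted half-lives translate into persistence as follows in this minimal sketch:

```python
# First-order decay: fraction remaining after time t given a half-life.
def fraction_remaining(t_hours, half_life_hours):
    return 0.5 ** (t_hours / half_life_hours)

# Fast and slow ends of the quoted range for Fe(III)EDTA photolysis.
for hl in (11.3 / 60, 100.0):
    print(f"half-life {hl:g} h: {fraction_remaining(24, hl):.2e} left after 1 day")
```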
Biodegradation
In many industrial wastewater treatment plants, EDTA elimination can be achieved at about 80% using microorganisms. Resulting byproducts are ED3A and iminodiacetic acid (IDA) – suggesting that both the backbone and acetyl groups were attacked. Some microorganisms have even been discovered to form nitrates out of EDTA, but they function optimally at moderately alkaline conditions of pH 9.0–9.5.
Several bacterial strains isolated from sewage treatment plants efficiently degrade EDTA. Specific strains include Agrobacterium radiobacter ATCC 55002 and the sub-branches of Pseudomonadota like BNC1, BNC2, and strain DSM 9103. The three strains share similar properties of aerobic respiration and are classified as gram-negative bacteria. Unlike photolysis, biodegradation is not restricted to the iron(III) complex; rather, each strain uniquely consumes varying metal–EDTA complexes through several enzymatic pathways. Agrobacterium radiobacter only degrades Fe(III) EDTA, while BNC1 and DSM 9103 are not capable of degrading iron(III) EDTA and are better suited for calcium, barium, magnesium and manganese(II) complexes. EDTA complexes require dissociation before degradation.
Alternatives to EDTA
Interest in environmental safety has raised concerns about biodegradability of aminopolycarboxylates such as EDTA. These concerns incentivize the investigation of alternative aminopolycarboxylates. Candidate chelating agents include nitrilotriacetic acid (NTA), iminodisuccinic acid (IDS), polyaspartic acid, S,S-ethylenediamine-N,N′-disuccinic acid (EDDS), methylglycinediacetic acid (MGDA), and L-Glutamic acid N,N-diacetic acid, tetrasodium salt (GLDA).
Iminodisuccinic acid (IDS)
Commercially used since 1998, iminodisuccinic acid (IDS) biodegrades by about 80% after only 7 days. IDS binds to calcium exceptionally well and forms stable compounds with other heavy metal ions. In addition to having a lower toxicity after chelation, IDS is degraded by Agrobacterium tumefaciens (BY6), which can be harvested on a large scale. The enzymes involved, IDS epimerase and C−N lyase, do not require any cofactors.
Polyaspartic acid
Polyaspartic acid, like IDS, binds to calcium and other heavy metal ions. It has many practical applications, including corrosion inhibitors, wastewater additives, and agricultural polymers. A polyaspartic acid-based laundry detergent was the first laundry detergent in the world to receive the EU flower ecolabel. The calcium-binding ability of polyaspartic acid has been exploited for targeting drug-loaded nanocarriers to bone. Preparation of hydrogels based on polyaspartic acid, in a variety of physical forms ranging from fibre to particle, can potentially enable facile separation of the chelated ions from a solution. Therefore, despite being a weaker chelator than EDTA, polyaspartic acid can still be regarded as a viable alternative due to these features as well as its biocompatibility and biodegradability.
S,S-Ethylenediamine-N,N′-disuccinic acid (EDDS)
A structural isomer of EDTA, ethylenediamine-N,N′-disuccinic acid (EDDS) is readily and rapidly biodegradable in its S,S form.
Methylglycinediacetic acid (MGDA)
Trisodium dicarboxymethyl alaninate, also known as methylglycinediacetic acid (MGDA), has a high rate of biodegradation at over 68%, but unlike many other chelating agents it can degrade without the assistance of adapted bacteria. Additionally, unlike EDDS or IDS, MGDA withstands higher temperatures while remaining highly stable across the entire pH range. MGDA has been shown to be an effective chelating agent, with a capacity for mobilization comparable with that of nitrilotriacetic acid (NTA), with application to water for industrial use and for the removal of calcium oxalate from urine from patients with kidney stones.
Methods of detection and analysis
The most sensitive method of detecting and measuring EDTA in biological samples is selected reaction monitoring capillary electrophoresis mass spectrometry (SRM-CE/MS), which has a detection limit of 7.3 ng/mL in human plasma and a quantitation limit of 15 ng/mL. This method works with sample volumes as small as 7–8 nL.
EDTA has also been measured in non-alcoholic beverages using high performance liquid chromatography (HPLC) at a level of 2.0 μg/mL.
In popular culture
In the movie Blade (1998), EDTA is used as a weapon to kill vampires, exploding when in contact with vampire blood.
References
External links
EDTA: Molecule of the Month
EDTA Determination of Total Water Hardness
Acetic acids
Diamines
Antidotes
Chelating agents
Photographic chemicals
Preservatives
E-number additives
Hexadentate ligands
Ophthalmology drugs | Ethylenediaminetetraacetic acid | [
"Chemistry"
] | 4,138 | [
"Chelating agents",
"Process chemicals"
] |
182,455 | https://en.wikipedia.org/wiki/Corrin | Corrin is a heterocyclic compound. Although not known to exist on its own, the molecule is of interest as the parent macrocycle related to the cofactor and chromophore in vitamin B12. Its name reflects that it is the "core" of vitamin B12 (cobalamins). Compounds with a corrin core are known as "corrins".
There are two chiral centres, which in natural compounds like cobalamin have the same stereochemistry.
Coordination chemistry
Upon deprotonation, the corrinoid ring is capable of binding cobalt. In vitamin B12, the resulting complex also features a benzimidazole-derived ligand, and the sixth site on the octahedron serves as the catalytic center.
The corrin ring resembles the porphyrin ring. Both feature four pyrrole-like subunits organized into rings. Corrins have a central 15-membered ring whereas porphyrins have an interior 16-membered ring. In porphyrins, all four nitrogen centers are linked through a conjugated structure of alternating double and single bonds. In contrast, corrins lack one of the carbon groups that link the pyrrole-like units into a fully conjugated structure. With a conjugated system that extends only 3/4 of the way around the ring, and does not include any of the outer edge carbons, corrins have a number of non-conjugated sp3 carbons, making them more flexible than porphyrins and not as flat. A third closely related biological structure, the chlorin ring system found in chlorophyll, is intermediate between porphyrin and corrin, having 20 carbons like the porphyrins and a conjugated structure extending all the way around the central atom, but with only 6 of the 8 edge carbons participating.
Corroles (octadehydrocorrins) are fully aromatic derivatives of corrins.
References
Further reading
Biomolecules
Tetrapyrroles
Metabolism
Macrocycles
Schiff bases | Corrin | [
"Chemistry",
"Biology"
] | 441 | [
"Natural products",
"Biochemistry",
"Organic compounds",
"Macrocycles",
"Cellular processes",
"Biomolecules",
"Molecular biology",
"Structural biology",
"Metabolism"
] |
182,499 | https://en.wikipedia.org/wiki/Porphyrin | Porphyrins ( ) are a group of heterocyclic, macrocyclic, organic compounds, composed of four modified pyrrole subunits interconnected at their α carbon atoms via methine bridges (). In vertebrates, an essential member of the porphyrin group is heme, which is a component of hemoproteins, whose functions include carrying oxygen in the bloodstream. In plants, an essential porphyrin derivative is chlorophyll, which is involved in light harvesting and electron transfer in photosynthesis.
The parent of porphyrins is porphine, a rare chemical compound of exclusively theoretical interest. Substituted porphines are called porphyrins. With a total of 26 π-electrons, of which 18 π-electrons form a planar, continuous cycle, the porphyrin ring structure is often described as aromatic. One result of the large conjugated system is that porphyrins typically absorb strongly in the visible region of the electromagnetic spectrum, i.e. they are deeply colored. The name "porphyrin" derives from the Greek word πορφύρα (porphyra), meaning "purple".
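As a worked aside (not part of the source text), the aromaticity claim can be checked against Hückel's rule, which requires a planar conjugated cycle to carry 4n + 2 π-electrons for some non-negative integer n:

```latex
% Hückel's rule applied to the porphyrin inner circuit:
4n + 2 = 18 \quad\Longrightarrow\quad n = 4
```

The 18-electron planar circuit therefore satisfies the rule; the remaining 8 of the 26 π-electrons sit in positions outside this aromatic pathway.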
Structure
Porphyrin complexes consist of a square planar MN4 core. The periphery of the porphyrins, consisting of sp2-hybridized carbons, generally displays small deviations from planarity. "Ruffled" or saddle-shaped distortion of porphyrins is attributed to interactions of the π system with its environment. Additionally, the metal is often not centered in the N4 plane. For free porphyrins, the two pyrrole protons are mutually trans and project out of the N4 plane. These nonplanar distortions are associated with altered chemical and physical properties. Chlorophyll rings are more distinctly nonplanar, but they are also more saturated than porphyrins.
Complexes of porphyrins
Concomitant with the displacement of two N-H protons, porphyrins bind metal ions in the N4 "pocket". The metal ion usually has a charge of 2+ or 3+. A schematic equation for these syntheses, where M = metal ion and L = a ligand, is: H2porphyrin + [MLn] → M(porphyrinate)Ln−4 + 2 HL + (n − 4) L
Ancient porphyrins
A geoporphyrin, also known as a petroporphyrin, is a porphyrin of geologic origin. They can occur in crude oil, oil shale, coal, or sedimentary rocks. Abelsonite is possibly the only geoporphyrin mineral, as it is rare for porphyrins to occur in isolation and form crystals.
The field of organic geochemistry had its origins in the isolation of porphyrins from petroleum. This finding helped establish the biological origins of petroleum. Petroleum is sometimes "fingerprinted" by analysis of trace amounts of nickel and vanadyl porphyrins.
Biosynthesis
In non-photosynthetic eukaryotes such as animals, insects, fungi, and protozoa, as well as the α-proteobacteria group of bacteria, the committed step for porphyrin biosynthesis is the formation of δ-aminolevulinic acid (δ-ALA, 5-ALA or dALA) by the reaction of the amino acid glycine with succinyl-CoA from the citric acid cycle. In plants, algae, bacteria (except for the α-proteobacteria group) and archaea, it is produced from glutamic acid via glutamyl-tRNA and glutamate-1-semialdehyde. The enzymes involved in this pathway are glutamyl-tRNA synthetase, glutamyl-tRNA reductase, and glutamate-1-semialdehyde 2,1-aminomutase. This pathway is known as the C5 or Beale pathway.
Two molecules of dALA are then combined by porphobilinogen synthase to give porphobilinogen (PBG), which contains a pyrrole ring. Four PBGs are then combined through deamination into hydroxymethyl bilane (HMB), which is hydrolysed to form the circular tetrapyrrole uroporphyrinogen III. This molecule undergoes a number of further modifications. Intermediates are used in different species to form particular substances, but, in humans, the main end-product protoporphyrin IX is combined with iron to form heme. Bile pigments are the breakdown products of heme.
The following scheme summarizes the biosynthesis of porphyrins, with references by EC number and the OMIM database. The porphyria associated with the deficiency of each enzyme is also shown:
Laboratory synthesis
A common synthesis for porphyrins is the Rothemund reaction, first reported in 1936, which is also the basis for more recent methods described by Adler and Longo. The general scheme is a condensation and oxidation process starting with pyrrole and an aldehyde.
Potential applications
Photodynamic therapy
Porphyrins have been evaluated in the context of photodynamic therapy (PDT) since they strongly absorb light, which is then converted to heat in the illuminated areas. This technique has been applied in macular degeneration using verteporfin.
PDT is considered a noninvasive cancer treatment, involving the interaction between light of a determined frequency, a photosensitizer, and oxygen. This interaction produces highly reactive oxygen species (ROS), usually singlet oxygen, as well as superoxide anion, free hydroxyl radical, or hydrogen peroxide. These highly reactive oxygen species react with susceptible cellular organic biomolecules such as lipids, aromatic amino acids, and nucleic acid heterocyclic bases to produce oxidative radicals that damage the cell, possibly inducing apoptosis or even necrosis.
Molecular electronics and sensors
Porphyrin-based compounds are of interest as possible components of molecular electronics and photonics. Synthetic porphyrin dyes have been incorporated in prototype dye-sensitized solar cells.
Biological applications
Porphyrins have been investigated as possible anti-inflammatory agents and evaluated on their anti-cancer and anti-oxidant activity. Several porphyrin-peptide conjugates were found to have antiviral activity against HIV in vitro.
Toxicology
Heme biosynthesis is used as a biomarker in environmental toxicology studies. While excess production of porphyrins indicates organochlorine exposure, lead inhibits the ALA dehydratase enzyme.
Gallery
Related species
In nature
Several heterocycles related to porphyrins are found in nature, almost always bound to metal ions. These include
Synthetic
A benzoporphyrin is a porphyrin with a benzene ring fused to one of the pyrrole units. e.g. verteporfin is a benzoporphyrin derivative.
Non-natural porphyrin isomers
The first synthetic porphyrin isomer was reported by Emanuel Vogel and coworkers in 1986. This isomer, [18]porphyrin-(2.0.2.0), is named porphycene, and its central N4 cavity forms a rectangular shape. Porphycenes show interesting photophysical behavior and have proved versatile compounds for photodynamic therapy. This inspired Vogel and Sessler to take up the challenge of preparing [18]porphyrin-(2.1.0.1), named corrphycene or porphycerin. The third isomer, [18]porphyrin-(2.1.1.0), was reported by Callot and by Vogel and Sessler. Vogel and coworkers reported the successful isolation of [18]porphyrin-(3.0.1.0), or isoporphycene. The Japanese scientist Furuta and the Polish scientist Latos-Grażyński almost simultaneously reported the N-confused porphyrins, in which the inversion of one of the pyrrolic subunits in the macrocyclic ring results in one of the nitrogen atoms facing outwards from the core of the macrocycle.
See also
A porphyrin-related disease: porphyria
Porphyrin coordinated to iron: heme
A heme-containing group of enzymes: Cytochrome P450
Porphyrin coordinated to magnesium: chlorophyll
The one-carbon-shorter analogues: corroles, including vitamin B12, which is coordinated to a cobalt
Corphins, the highly reduced porphyrin coordinated to nickel that binds the Cofactor F430 active site in methyl coenzyme M reductase (MCR)
Nitrogen-substituted porphyrins: phthalocyanine
References
External links
Journal of Porphyrins and Phthalocyanines
Handbook of Porphyrin Science
Porphynet – an informative site about porphyrins and related structures
Biomolecules
Metabolism
Photosynthetic pigments
Chelating agents | Porphyrin | [
"Chemistry",
"Biology"
] | 1,889 | [
"Photosynthetic pigments",
"Natural products",
"Photosynthesis",
"Organic compounds",
"Cellular processes",
"Structural biology",
"Biomolecules",
"Biochemistry",
"Chelating agents",
"Porphyrins",
"Metabolism",
"Process chemicals",
"Molecular biology"
] |
182,544 | https://en.wikipedia.org/wiki/Anthracite | Anthracite, also known as hard coal and black coal, is a hard, compact variety of coal that has a submetallic lustre. It has the highest carbon content, the fewest impurities, and the highest energy density of all types of coal and is the highest ranking of coals.
The Coal Region of Northeastern Pennsylvania in the United States has the largest known deposits of anthracite coal in the world, with an estimated reserve of seven billion short tons (Thomas Carpenito, "The State of Coal and Renewable Energy in Schuylkill County", 2019, https://medium.com/@thomascarpenito3/state-of-coal-and-renewable-energy-in-schuylkill-f8850fec3fa6). China accounts for the majority of global production; other producers include Russia, Ukraine, North Korea, South Africa, Vietnam, Australia, Canada, and the United States. Total production in 2020 was 615 million tons.
Anthracite is the most metamorphosed type of coal, but still represents low-grade metamorphism, in which the carbon content is between 86% and 97%. The term is applied to those varieties of coal which do not give off tarry or other hydrocarbon vapours when heated below their point of ignition. Anthracite is difficult to ignite, and burns with a short, blue, and smokeless flame.
Anthracite is categorized into several grades. Standard grade is used predominantly in power generation, and high grade (HG) and ultra high grade (UHG), are used predominantly in the metallurgy sector. Anthracite accounts for about 1% of global coal reserves, and is mined in only a few countries around the world.
Names
Anthracite derives from the Greek anthrakítēs (), literally "coal-like". Other terms which refer to anthracite are black coal, hard coal, stone coal, dark coal, coffee coal, blind coal (in Scotland), Kilkenny coal (in Ireland), crow coal or craw coal, and black diamond. "Blue Coal" is the term for a once-popular and trademarked brand of anthracite, mined by the Glen Alden Coal Company in Pennsylvania, and sprayed with a blue dye at the mine before shipping to its Northeastern U.S. markets to distinguish it from its competitors.
Culm has different meanings in British and American English. In British English, culm is the imperfect anthracite, located predominantly in north Devon and Cornwall, which was used as a pigment. The term is also used to refer to some carboniferous rock strata found both in Britain and in the Rhenish hill countries, also known as the Culm Measures. In Britain, it may also refer to coal exported from Britain during the 19th century. In American English, "culm" refers to the waste or slack from anthracite mining, mostly dust and small pieces not suitable for use in home furnaces.
Properties
Anthracite is similar in appearance to the mineraloid jet and is sometimes used as a jet imitation.
Anthracite differs from ordinary bituminous coal by its greater hardness (2.75–3 on the Mohs scale), its higher relative density of 1.3–1.4, and its luster, which is often semi-metallic with a mildly green reflection. It contains a high percentage of fixed carbon and a low percentage of volatile matter. It is also free from included soft or fibrous notches and does not soil the fingers when rubbed. Anthracitization is the transformation of bituminous coal into anthracite.
The moisture content of fresh-mined anthracite generally is less than 15 percent. The heat content of anthracite ranges from 26 to 33 MJ/kg (22 to 28 million Btu/short ton) on a moist, mineral-matter-free basis. The heat content of anthracite coal consumed in the United States averages 29 MJ/kg (25 million Btu/ton), on the as-received basis, containing both inherent moisture and mineral matter.
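A minimal sketch of the unit conversion behind the figures quoted above (MJ/kg to million Btu per short ton); the conversion constants are standard values, not taken from the text:

```python
# Convert heat content in MJ/kg to million Btu per short ton.
BTU_J = 1055.06          # one British thermal unit, in joules
SHORT_TON_KG = 907.185   # one short ton (2000 lb), in kilograms

def mj_per_kg_to_mmbtu_per_ton(mj_per_kg):
    joules_per_ton = mj_per_kg * 1e6 * SHORT_TON_KG
    return joules_per_ton / BTU_J / 1e6

for hc in (26, 29, 33):
    print(f"{hc} MJ/kg ~= {mj_per_kg_to_mmbtu_per_ton(hc):.1f} million Btu/short ton")
# 26 -> ~22.4, 29 -> ~24.9, 33 -> ~28.4, matching the 22-28 range quoted above
```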
Since the 1980s, anthracite refuse or mine waste has been used for coal power generation in a form of recycling. The practice known as reclamation is being applied to culm piles antedating laws requiring mine owners to restore lands to their approximate original condition.
Chemically, anthracite may be considered as a transition stage between ordinary bituminous coal and graphite, produced by the more or less complete elimination of the volatile constituents of the former, and it is found most abundantly in areas that have been subjected to considerable stresses and pressures, such as the flanks of great mountain ranges. Anthracite is associated with strongly deformed sedimentary rocks that were subjected to higher pressures and temperatures (but short of metamorphic conditions) just as bituminous coal is generally associated with less deformed or flat-lying sedimentary rocks. The compressed layers of anthracite that are deep mined in the folded Ridge and Valley Province of the Appalachian Mountains of the Coal Region of East-central Pennsylvania are extensions of the same layers of bituminous coal that are mined on the generally flat lying and undeformed sedimentary rocks further west on the Allegheny Plateau of Kentucky and West Virginia, Eastern Ohio, and Western Pennsylvania.
In the same way the anthracite region of South Wales is confined to the contorted portion west of Swansea and Llanelli, the central and eastern portions producing steam coal, coking coal and domestic house coals.
Anthracite shows some alteration by the development of secondary divisional planes and fissures so that the original stratification lines are not always easily seen. The thermal conductivity is also higher; a lump of anthracite feels perceptibly colder when held in the warm hand than a similar lump of bituminous coal at the same temperature.
Anthracite has a history of use in blast furnaces for iron smelting; however, it lacked the pore space of metallurgical coke, which eventually replaced anthracite.
History of mining and use
In southwest Wales, anthracite has been burned as a domestic fuel since at least medieval times, when it was mined near Saundersfoot. More recently, large-scale mining of anthracite took place across the western part of the South Wales Coalfield until the late 20th century.
In the United States, anthracite coal history began in 1790 in Pottsville, Pennsylvania, with the discovery of coal made by the hunter Necho Allen in what is now known as the Coal Region. Legend has it that Allen fell asleep at the base of Broad Mountain and woke to the sight of a large fire because his campfire had ignited an outcrop of anthracite coal. By 1795, an anthracite-fired iron furnace had been built on the Schuylkill River.
Anthracite was first experimentally burned as a residential heating fuel in the US on 11 February 1808, by Judge Jesse Fell in Wilkes-Barre, Pennsylvania, on an open grate in a fireplace. Anthracite differs from wood in that it needs a draft from the bottom, and Judge Fell proved with his grate design that it was a viable heating fuel. In spring 1808, John and Abijah Smith shipped the first commercially mined load of anthracite down the Susquehanna River from Plymouth, Pennsylvania, marking the birth of commercial anthracite mining in the United States. From that first mine, production rose to an all-time high of over 100 million tons in 1917.
The difficulty of igniting anthracite inhibited its early use, especially in blast furnaces for smelting iron. With the development of the hot blast in 1828, which used waste heat to preheat combustion air, anthracite became a preferred fuel, accounting for 45% of US pig iron production within 15 years. Anthracite iron smelting was later displaced by coke.
From the late 19th century until the 1950s, anthracite was the most popular fuel for heating homes and other buildings in the northern US, until it was supplanted by oil-burning systems, and more recently natural gas systems. Many large public buildings, such as schools, were heated with anthracite-burning furnaces through the 1980s.
During the American Civil War, Confederate blockade runners used anthracite as a smokeless fuel for their boilers to avoid revealing their position to the blockaders.
The invention of the Wootten firebox enabled locomotives to directly burn anthracite efficiently, particularly waste culm. In the early 20th century US, the Delaware, Lackawanna and Western Railroad started using only the more expensive anthracite coal in its passenger locomotives, dubbed themselves "The Road of Anthracite", and advertised widely that travelers on their line could make railway journeys without getting their clothing stained with soot. The advertisements featured a white-clad woman named Phoebe Snow and poems containing lines like "My gown stays white / From morn till night / Upon the road of Anthracite". Similarly, the Great Western Railway in the UK was able to use its access to anthracite (it dominated the anthracite region) to earn a reputation for efficiency and cleanliness unmatched by other UK companies.
Internal combustion motors driven by the so-called "mixed", "poor", "semi-water" or "Dowson gas" produced by the gasification of anthracite with air (and a small proportion of steam) were at one time the most economical method of obtaining power, requiring only , or less. Large quantities of anthracite for power purposes were formerly exported from South Wales to France, Switzerland and parts of Germany. , widespread commercial anthracite mining in Wales has now ceased, although a few large open cast sites remain, along with some relatively small drift mining operations.
Anthracite today
Anthracite generally costs two to six times as much as regular coal. In June 2008, the wholesale cost of anthracite was US$150/short ton, falling to $107/ton in 2021; it makes up 1% of U.S. coal production.
The principal use of anthracite today is for a domestic fuel in either hand-fired stoves or automatic stoker furnaces. It delivers high energy per its weight and burns cleanly with little soot, making it ideal for this purpose. Its high value makes it prohibitively expensive for power plant use. Other uses include the fine particles used as filter media, and as an ingredient in charcoal briquettes. Anthracite was an authorised fuel in terms of the United Kingdom's Clean Air Act 1993, meaning that it could be used within a designated Smoke Control Area such as the central London boroughs.
Mining
China today mines by far the largest share of global anthracite production, accounting for more than three-quarters of global output. Most Chinese production is of standard-grade anthracite, which is used in power generation. Increased demand in China has made that country into a net importer of the fuel, mostly from Vietnam, another major producer of anthracite for power generation, although increasing domestic consumption in Vietnam means that exports may be scaled back.
Current U.S. anthracite production averages around five million tons per year. Of that, about 1.8 million tons were mined in the state of Pennsylvania. Mining of anthracite coal continues to this day in eastern Pennsylvania, and contributes up to 1% to the gross state product. More than 2,000 people were employed in the mining of anthracite coal in 1995. Most of the mining as of that date involved reclaiming coal from slag heaps (waste piles from past coal mining) at nearby closed mines. Some underground anthracite coal is also being mined.
Countries producing HG and UHG anthracite include Russia and South Africa. HG and UHG anthracite are used as a coke or coal substitute in various metallurgical coal applications (sintering, PCI, direct BF charge, pelletizing). It plays an important role in cost reduction in the steel making process and is also used in production of ferroalloys, silicomanganese, calcium carbide and silicon carbide. South Africa exports lower-quality, higher-ash anthracite to Brazil to be used in steel-making.
Sizing and grading
Anthracite is processed into different sizes by what is commonly referred to as a breaker. The large coal is raised from the mine and passed through breakers with toothed rolls to reduce the lumps to smaller pieces. The smaller pieces are separated into different sizes by a system of graduated sieves, placed in descending order. Sizing is necessary for different types of stoves and furnaces.
Anthracite is classified into three grades, depending on its carbon content. Standard grade is used as a domestic fuel and in industrial power-generation. The rarer higher grades of anthracite are purer – i.e., they have a higher carbon content – and are used in steel-making and other segments of the metallurgical industries. Technical characteristics of the various grades of anthracite are as follows:
Anthracite is divided by size mainly into applications that need lumps (typically larger than 10 mm) – various industrial processes where it replaces metallurgical coke, and domestic fuel – and those that need fines (less than 10 mm), such as sintering and pelletising.
The common American classification by size is as follows:
Lump, steamboat, egg and stove coals, the latter in two or three sizes, all three being above 1½ in (38 mm) in size on round-hole screens.
High grade
High grade (HG) and ultra high grade (UHG) anthracite are the highest grades of anthracite coal. They are the purest forms of coal, having the highest degree of coalification, the highest carbon count and energy content and the fewest impurities (moisture, ash and volatiles).
High grade and ultra high grade anthracite are harder than standard grade anthracite, and have a higher relative density. An example of a chemical formula for high-grade anthracite would be C240H90O4NS, representing 94% carbon. UHG anthracite typically has a minimum carbon content of 95%.
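As a quick check (not from the source text), the carbon mass fraction implied by that example formula can be computed directly:

```python
# Verify the ~94% carbon content of the example formula C240H90O4NS.
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "O": 15.999, "N": 14.007, "S": 32.06}
FORMULA = {"C": 240, "H": 90, "O": 4, "N": 1, "S": 1}

total = sum(ATOMIC_MASS[el] * n for el, n in FORMULA.items())
carbon_fraction = ATOMIC_MASS["C"] * FORMULA["C"] / total
print(f"carbon mass fraction: {carbon_fraction:.1%}")   # ~93.5%, i.e. about 94%
```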
They also differ in usage from standard grade anthracite (used mainly for power generation), being employed mainly in metallurgy as a cost-efficient substitute for coke in processes such as sintering and pelletising, as well as pulverised coal injection (PCI) and direct injection into blast furnaces. They can also be used in water purification and domestically as a smokeless fuel.
HG and UHG anthracite account for a small percentage of the total anthracite market. The major producing countries are Russia, Ukraine, Vietnam, South Africa and the US.
The primary sizes used in the United States for domestic heating are Chestnut, Pea, Buckwheat and Rice, with Chestnut and Rice being the most popular. Chestnut and Pea are used in hand fired furnaces while the smaller Rice and Buckwheat are used in automatic stoker furnaces. Rice is currently the most sought-after size due to the ease of use and popularity of that type of furnace.
In South Wales, a less elaborate classification is adopted, but great care is exercised in hand-picking and cleaning the coal from particles of pyrites in the higher qualities known as best malting coals, which are used for kiln-drying malt.
Anthracite dust can be made into briquettes and is sold in the United Kingdom under trade names such as Phurnacite, Ancit and Taybrite.
Semianthracite
On the opposite end from high-grade anthracite coal, semianthracite coal is defined as a coal which is intermediate between anthracite coal and bituminous coal, and particularly a coal which approaches anthracite in nonvolatile character.
Underground fires
From time to time, underground seams of coal have caught fire, often from careless or unfortunate mining activities. The pocket of ignited coal is fed oxygen by vent paths that have not yet been discovered, and can smolder for years. Commonly, exhaust vents in populated areas are soon sensed and sealed, while vents in uninhabited areas remain undiscovered. Occasionally, vents are discovered via fumes sensed by passers-by, often in forested areas. Attempts to extinguish those remaining have at times been futile, and several such combustion areas exist today. The existence of an underground combustion site can sometimes be identified in winter where fallen snow is seen to be melted by the warmth conducted from below. Proposals for harnessing this heat as geothermal energy have not been successful.
A vein of anthracite that caught fire in Centralia, Pennsylvania, in 1962 has been burning ever since, turning the once-thriving borough into a ghost town.
Major reserves
Geologically, the largest and most concentrated anthracite deposit in the world is found in the Lackawanna Coal Mine in northeastern Pennsylvania, United States, in and around Scranton, Pennsylvania. Locally called the Coal Region, the deposit of coal-bearing rock originally held 22.8 billion short tons (20.68 billion tonnes) of anthracite. The geographic region is roughly 100 miles (161 km) in length and 30 miles (48 km) in width. Because of historical mining and development of the lands overlying the coal, it is estimated that 7 billion short tons (6.3 billion tonnes) of minable reserves remain. Other areas of the United States also contain several smaller deposits of anthracite, such as those historically mined in Crested Butte, Colorado.
Among current producers, Russia, China, Poland, and Ukraine have the largest estimated recoverable reserves of anthracite. Other countries with substantial reserves include Vietnam and North Korea.
The Groundhog Anthracite Deposit in British Columbia, Canada, is the world's largest previously undeveloped anthracite deposit. It is owned by the Australian publicly-traded company Atrum Coal and has 1.57 billion tonnes of high grade anthracite.
Anthracites of newer Tertiary or Cretaceous age are found in the Crowsnest Pass part of the Rocky Mountains in Canada and at various places in the Andes in Peru.
See also
, named after a large supply of anthracite found there
, a softer coal
Explanatory notes
References
Further reading
– Useful overview of the industry in the 20th century; fair-minded, with an operator's perspective
Primary sources
Report of the United States Coal Commission (5 vol. in 3; 1925). Official US government investigation; online vol. 1–2
Tryon, Frederick Gale, and Joseph Henry Willits, eds. What the Coal Commission Found: An Authoritative Summary by the Staff (1925).
General policies committee of anthracite operators. The anthracite coal strike of 1922: A statement of its causes and underlying purposes'' (1923); Official statement by the operators. online
External links
"What are the types of coal?" at U.S. Geological Survey
Pennsylvania Anthracite Museum in Scranton, Pennsylvania
Coal
Coal mining
Metamorphic rocks
Organic minerals | Anthracite | [
"Chemistry"
] | 3,956 | [
"Organic compounds",
"Organic minerals"
] |
182,607 | https://en.wikipedia.org/wiki/Site-specific%20art | Site-specific art is artwork created to exist in a certain place. Typically, the artist takes the location into account while planning and creating the artwork. Site-specific art is produced both by commercial artists, and independently, and can include some instances of work such as sculpture, stencil graffiti, rock balancing, and other art forms. Installations can be in urban areas, remote natural settings, or underwater.
History
The term "site-specific art" was promoted and refined by Californian artist Robert Irwin but it was actually first used in the mid-1970s by young sculptors, such as Patricia Johanson, Dennis Oppenheim, and Athena Tacha, who had started executing public commissions for large urban sites. For Two Jumps for Dead Dog Creek (1970), Oppenheim attempted a series of standing jumps at a selected site in Idaho, where "the width of the creek became a specific goal to which I geared a bodily activity," with his two successful jumps being "dictated by a land form." Site specific environmental art was first described as a movement by architectural critic Catherine Howett and art critic Lucy Lippard. Emerging out of minimalism, site-specific art opposed the Modernist program of subtracting from the artwork all cues that interfere with the fact that it is "art",
Modernist art objects were transportable and nomadic: they could exist only in the museum space and were objects of the market and commodification. From the 1960s, artists tried to find a way out of this situation, and thus drew attention to the site and the context around it. The work of art was created in the site and could exist only in those circumstances; it cannot be moved or changed. The notion of "site" precisely references the current location, which comprises a unique combination of physical elements: depth, length, weight, height, shape, walls, temperature. Works of art began to emerge from the walls of the museum and galleries (Daniel Buren, Within and Beyond the Frame, John Weber Gallery, New York, 1973), and were created specifically for museums and galleries (Michael Asher, untitled installation at Claire Copley Gallery, Los Angeles, 1974; Hans Haacke, Condensation Cube, 1963–65; Mierle Laderman Ukeles, Hartford Wash: Washing Tracks, Maintenance Outside, Wadsworth Atheneum, Hartford, 1973), thus criticizing the museum as an institution that sets the rules for artists and viewers.
Jean-Max Albert created Sculptures Bachelard in the Parc de la Villette, related to the site, and Carlotta's Smile, a trellis construction related to the architecture of Ar.Co in Lisbon and to a choreography in collaboration with Michala Marcus and Carlos Zingaro, 1979.
When the public debate over Tilted Arc (1981) resulted in its removal in 1989, its author Richard Serra reacted with what can be considered a definition of site-specific art: "To move the work is to destroy the work."
Examples
Outdoor site-specific artworks often include landscaping combined with permanently sited sculptural elements; it is sometimes linked with environmental art. Outdoor site-specific artworks can also include dance performances created especially for the site. More broadly, the term is sometimes used for any work that is more or less permanently attached to a particular location. In this sense, a building with interesting architecture could also be considered a piece of site-specific art.
In Geneva, Switzerland, the Contemporary Art Funds have been looking for original ways to integrate art into architecture and the public space since 1980. The Neon Parallax project, initiated in 2004, was conceived specifically for the Plaine de Plainpalais, a public square of 95,000 square metres in the heart of the city. The concept consists of commissioning luminous artistic works for the rooftops of the buildings bordering the plaza, in the same way that advertisements are installed on the city's glamorous lakefront. The 14 invited artists had to respect the same legal size limits as luminous advertisements in Geneva. The project thus creates a parallax both between locations and messages, and in the way one interprets neon signs in the public realm.
Site-specific performance art, site-specific visual art and interventions are commissioned for the annual Infecting the City Festival in Cape Town, South Africa. The site-specific nature of the work allows artists to interrogate the contemporary and historic reality of the Central Business District and create work that allows the city's users to engage and interact with public spaces in new and memorable ways.
Gallery
See also
Ecological art
Environmental art
Environmental sculpture
Independent public art
Land art
Land Arts of the American West
Rock balancing
Street Installations
Public art
References
External links
Visual arts genres
Artistic techniques
Site-specific
Contemporary art
Sculpture
Site-specific
Landscape design history
Landscape architecture
Art
"Engineering"
] | 981 | [
"Landscape architecture",
"Architecture"
] |
182,649 | https://en.wikipedia.org/wiki/Euston%20railway%20station | Euston railway station ( ; or London Euston) is a major central London railway terminus managed by Network Rail in the London Borough of Camden. It is the southern terminus of the West Coast Main Line, the UK's busiest inter-city railway. Euston is the tenth-busiest station in Britain and the country's busiest inter-city passenger terminal, being the gateway from London to the West Midlands, North West England, North Wales and Scotland.
Intercity express passenger services to the major cities of Birmingham, Manchester, Liverpool, Glasgow and Edinburgh, and through services to for connecting ferries to Dublin are operated by Avanti West Coast. Overnight sleeper services to Scotland are provided by the Caledonian Sleeper. London Northwestern Railway provide commuter and regional services to the West Midlands, whilst the Lioness line of the London Overground provides local suburban services in the London area via the Watford DC Line which runs parallel to the West Coast Main Line as far as . Euston tube station is connected to the main concourse and Euston Square tube station is nearby. King's Cross and St Pancras railway stations are about east along Euston Road.
Euston, the first inter-city railway terminal in London, was planned by George and Robert Stephenson. It was designed by Philip Hardwick and built by William Cubitt, with a distinctive arch over the station entrance. The station opened as the terminus of the London and Birmingham Railway (L&BR) on 20 July 1837. Euston was expanded after the L&BR was amalgamated with other companies to form the London and North Western Railway, and the original sheds were replaced by the Great Hall in 1849. Capacity was increased throughout the 19th century from two platforms to fifteen. The station was controversially rebuilt in the mid-1960s when the Arch and the Great Hall were demolished to accommodate the electrified West Coast Main Line, and the revamped station still attracts criticism over its architecture. Euston is to be the London terminus for the planned High Speed 2 railway and the station is being redeveloped to accommodate it.
Name and location
The station is named after Euston Hall in Suffolk, the ancestral home of the Dukes of Grafton, the main landowners in the area during the mid-19th century. It is set back from Euston Square and Euston Road on the London Inner Ring Road, between Cardington Street and Eversholt Street in the London Borough of Camden. It is one of 20 stations managed by Network Rail. As of the 2022–23 estimates of station usage, it is the tenth-busiest station in Britain and the eighth-busiest terminus in London by entries and exits. Euston bus station is in front of the main entrance.
History
Euston was the first inter-city railway station in London. It opened on 20 July 1837 as the terminus of the London and Birmingham Railway (L&BR). It was demolished in the 1960s and replaced with the present building in the international modern style.
The site was chosen in 1831 by George and Robert Stephenson, engineers of the L&BR. The area was mostly farmland at the edge of the expanding city, and adjacent to the New Road (now Euston Road), which had caused urban development. The name Euston came from Euston Hall, the seat of the duke of Grafton, who owned the locality.
The station and railway have been owned by the L&BR (1837–1846), the London and North Western Railway (LNWR) (1846–1923), the London, Midland and Scottish Railway (LMS) (1923–1948), British Railways (1948–1994), Railtrack (1994–2002) and Network Rail (2002–present).
Old station
The plan was to construct a station near the Regent's Canal in Islington to provide a connection for London dock traffic. An alternative site at Marble Arch, proposed by Robert Stephenson, was rejected by a provisional committee, and a proposal to end the line at Maiden Lane was rejected by the House of Lords in 1832. A terminus at Camden Town, announced by Stephenson the following year, received royal assent on 6 May, before an extension was approved in 1834, allowing the line to reach Euston Grove where the original station was built by William Cubitt.
Initial services were three trains in each direction, with journeys taking just over an hour. On 9 April 1838, they were extended to a temporary halt near Bletchley, from which an onward coach service was provided. The line to Curzon Street station in Birmingham opened on 17 September 1838; the journey took several hours.
The incline from Camden Town to Euston involved crossing the Regent's Canal on a gradient of more than 1 in 68. Because steam trains at the time could not climb such an ascent, they were cable-hauled on the down line towards Camden until 1844, after which bank engines were used. The L&BR's act of Parliament prohibited the use of locomotives in the Euston area, following concerns of residents about noise and smoke from locomotives toiling up the incline.
The station was built with space left vacant for extra platforms, as it was originally planned for the Great Western Railway (GWR) to use Euston, as the terminus of the Great Western Main Line. In the event, the GWR chose to build their own terminus at Paddington. The spare land was instead used for more platforms for ever expanding services as the railway network grew.
The station building, designed by the classically trained architect Philip Hardwick, had a trainshed by the structural engineer Charles Fox. It had two platforms, one each for departures and arrivals. The main entrance portico, the Euston Arch, also by Hardwick, symbolised the arrival of a major new transport system and was "the gateway to the north". It stood on four hollow Doric propylaeum columns of Bramley Fall stone, the largest ever built. It was completed in May 1838 and cost £35,000. The old station building was probably the first one in the world with all-wrought iron roof trusses.
The first railway hotels in London were built at Euston. Two hotels designed by Hardwick opened in 1839 on either side of the Arch; the Victoria on the west had basic facilities while the Euston on the east was designed for first-class passengers.
Between 1838 and 1841, parcel handling grew from 2,700 parcels a month to 52,000. By 1845, 140 staff were employed but trains began to run late because of a lack of capacity. The following year, two platforms (later 9 and 10) were constructed on vacant land to the west of the station that had been reserved for Great Western Railway services. The L&BR amalgamated with the Manchester & Birmingham Railway and the Grand Junction Railway in 1846 to form the LNWR. The company headquarters were established at Euston requiring a block of offices to be built between the Arch and the platforms.
The station's facilities were expanded with the opening of the Great Hall on 27 May 1849 replacing the original sheds. The Great Hall was designed by Hardwick's son Philip Charles Hardwick in classical style. It was long, wide, and high with a coffered ceiling and a sweeping double flight of stairs leading to offices at its northern end. Architectural sculptor John Thomas contributed eight allegorical statues representing the cities served by the line. The station faced Drummond Street, further back from Euston Road than the front of the modern complex; Drummond Street now terminates at the side of the station but then ran across its front. A short road, Euston Grove, ran from Euston Square towards the arch.
A bay platform (later platform 7) for local services to Kensington (Addison Road) opened in 1863. Two new platforms (1 and 2) were added in 1873 along with an entrance for cabs from Seymour Street. At the same time, the station roof was raised by to accommodate smoke from the engines.
The continued growth of long-distance railway traffic led to major expansion along the station's west side starting in 1887. It involved rerouting Cardington Street over part of the burial ground (later St James's Gardens) of St James's Church, Piccadilly, which was located some way from the church. To avoid public outcry, the remains were reinterred at St Pancras Cemetery. Two more platforms (4 and 5) opened in 1891. Four departure platforms (now platforms 12–15), bringing the total to 15, and a booking office on Drummond Street opened on 1 July 1892.
The line between Euston and Camden was doubled between 1901 and 1906. A new booking hall opened in 1914 on part of the cab yard. The Great Hall was redecorated and refurbished between 1915 and 1916 and again in 1927. The station's ownership was transferred to the London, Midland and Scottish Railway (LMS) in the 1923 grouping.
Apart from the lodges on Euston Road and statues now on the forecourt, few relics of the old station survive. The National Railway Museum's collection at York includes Edward Hodges Baily's statue of George Stephenson from the Great Hall; the entrance gates; and a turntable from 1846 discovered during demolition.
London, Midland and Scottish Railway redevelopment
By the 1930s Euston was again congested and the LMS considered rebuilding it. In 1931 it was reported that a site for a new station was being sought, the most likely option was behind the existing station in the direction of Camden Town. The LMS announced in 1935 that the station (including the hotel and offices) would be rebuilt using a government loan guarantee.
In 1937 it appointed the architect Percy Thomas to produce designs. He proposed an American-inspired station that would involve removing or resiting the arch, and included office frontages along Euston Road and a helicopter pad on the roof. Redevelopment began on 12 July 1938, when of limestone was extracted for the building and new flats were constructed to rehouse people displaced by the works. The project was shelved indefinitely because of World War II.
The station was damaged several times during the Blitz in 1940. Part of the Great Hall's roof was destroyed, and a bomb landed between platforms 2 and 3, destroying offices and part of the hotel.
New station
Passengers considered Euston to be squalid and covered in soot and it was restored and redecorated in 1953, when an enquiry kiosk in the middle of the Great Hall was removed. Ticket machines were modernised. By this time the Arch was surrounded by property development and kiosks and in need of restoration.
British Railways announced that Euston would be rebuilt to accommodate the electrification of the West Coast Main Line in 1959. Because of the restricted layout of track and tunnels at the northern end, enlargement could only be accomplished by expanding southwards over the area occupied by the Great Hall and the Arch. Permission to demolish the Arch and Great Hall was sought from London County Council and was granted on condition that the Arch would be restored and re-sited. BR estimated this would cost at least £190,000 and was not viable.
The Arch's demolition, announced by the Minister of Transport, Ernest Marples in July 1961, drew objections from the Earl of Euston, the Earl of Rosse and John Betjeman. Experts did not believe the work would cost £190,000 and speculated it could be done more cheaply by foreign labour. On 16 October 1961, 75 architects and students staged a demonstration against its demolition inside the Great Hall and a week later Sir Charles Wheeler led a deputation to speak with the Prime Minister Harold Macmillan. Macmillan replied that as well as the cost, there was nowhere large enough to relocate the Arch in keeping with its surroundings.
Demolition began on 6 November and was completed within four months. The station was rebuilt by Taylor Woodrow Construction to a design by the London Midland Region architects of British Railways, William Robert Headley and Ray Moorcroft, in consultation with Richard Seifert & Partners. Redevelopment began in summer 1962 and progressed from east to west; the Great Hall was demolished and a temporary building housed ticket offices and essential facilities. Euston worked to 80% capacity during the works, with at least 11 platforms in operation at any time. Services were diverted elsewhere where practical and the station remained operational throughout the works.
The first phase of construction involved building 18 platforms with two track bays to handle parcels above them, a signal and communications building and various staff offices. The parcel deck was reinforced using 5,500 tons of structural steelwork. Signalling on the routes leading out of the station was reworked along with the electrification of the lines, including the British Rail Automatic Warning System. Fifteen platforms had been completed by 1966, and the electric service began on 3 January. An automated parcel depot above platforms 3 to 18 opened on 7 August 1966. The station was opened by Queen Elizabeth II on 14 October 1968.
The station is a long, low structure, wide and deep under a high roof. It opened with integrated automatic ticket facilities and a range of shops; the first of its kind for any British station. The plan to construct offices above the station whose rents would help fund the cost of the rebuilding was scrapped after a government White Paper was released in 1963 that restricted the rate of commercial office development in London.
In 1966, a "Whites only" recruitment policy for guards at the station was dropped after the case of Asquith Xavier, a migrant from Dominica, who had been refused promotion on those grounds, was raised in Parliament and taken up by the Secretary of State for Transport, Barbara Castle.
A second development phase by Richard Seifert & Partners began in 1979, adding of office space along the station frontage in the form of three low-rise towers overlooking Melton Street and Eversholt Street. The offices were occupied by British Rail, then by Railtrack, and by Network Rail which has now vacated all but a small portion of one of the towers. The offices are in a functional style; the main facing material is polished dark stone, complemented by white tiles, exposed concrete and plain glazing.
The station has a large concourse separate from the train shed. Originally no seats were installed there, to deter vagrants and crime, but some were added after complaints from passengers. Few remnants of the older station remain: two Portland stone entrance lodges, the London and North Western Railway War Memorial, and a statue of Robert Stephenson by Carlo Marochetti from the old ticket hall, which now stands in the forecourt.
A large statue by Eduardo Paolozzi named Piscator, dedicated to the German theatre director Erwin Piscator, is sited at the front of the courtyard; as of 2016 the statue was reported to be deteriorating. Other pieces of public art, including low stone benches by Paul de Monchaux around the courtyard, were commissioned by Network Rail in 1990. The station has catering units and shops, a large ticket hall and an enclosed car park with over 200 spaces. The lack of daylight on the platforms compares unfavourably with the glazed trainshed roofs of traditional Victorian railway stations, but the use of the space above as a parcels depot released the maximum space at ground level for platforms and passenger facilities.
Since 1996, proposals have been formulated to reconstruct the Arch as part of the redevelopment of the station, and its use as the terminus of the High Speed 2 line.
Privatisation
Ownership of the station transferred from British Rail to Railtrack in 1994, passing to Network Rail in 2002 following the collapse of Railtrack. In 2005 Network Rail was reported to have long-term aspirations to redevelop the station, removing the 1960s buildings and providing more commercial space by using the "air rights" above the platforms.
In 2007, British Land announced that it had won the tender to demolish and rebuild the station, spending some £250 million of its overall redevelopment budget of £1 billion for the area. The number of platforms would increase from 18 to 21. In 2008, it was reported that the Arch could be rebuilt. In September 2011, the demolition plans were cancelled, and Aedas was appointed to give the station a makeover.
In July 2014 a statue of the navigator and cartographer Matthew Flinders, who circumnavigated the globe and charted Australia, was unveiled at Euston. His grave was rumoured to lie under platform 15, but it had been relocated during the original station construction and was found behind the station in 2019 during excavation work for the HS2 line.
High Speed 2
In March 2010 the Secretary of State for Transport, Andrew Adonis announced that Euston was the preferred southern terminus of the planned High Speed 2 line, which would connect to a newly built station near Curzon Street and Fazeley Street in Birmingham. This would require expansion to the south and west to create new sufficiently long platforms. These plans involved a complete reconstruction, involving the demolition of 220 Camden Council flats, with half the station providing conventional train services and the new half high-speed trains. The Command Paper suggested rebuilding the Arch, and included an artist's impression.
The station is to have seven new platforms, reduced from an originally planned eight, taking the total to 23, with 10 dedicated to HS2 services and 13 to conventional lines at a low level. The flats demolished for the extension would be replaced by significant building work above. The Underground station would be rebuilt and connected to adjacent Euston Square station. As part of the extension beyond Birmingham, the Mayor of London's office believed it would be necessary to build the proposed Crossrail 2 line via Euston to relieve the 10,000 extra passengers forecast to arrive during an average day.
To relieve pressure on Euston during and after rebuilding for High Speed 2, HS2 Ltd has proposed the diversion of some services to (for Crossrail). This would include eight commuter trains per hour originating/terminating between and inclusive. In 2016, Mayor Sadiq Khan endorsed the plans and suggested that all services should terminate at Old Oak Common until a more appropriate solution was found for Euston.
The current scheme does not provide any direct access between High Speed 2 at Euston and the existing High Speed 1 at St Pancras. In 2015, plans were announced to link the two stations via a travelator service. Platforms 17 and 18 closed in May and June 2019 for High Speed 2 preparation work.
The Euston Downside Carriage Maintenance Depot was demolished in 2018 in preparation for the start of tunnelling. The two office towers in front of the station were demolished between January 2019 and December 2020. The third tower at 1 Eversholt Street is not part of these plans. Two hotels on Cardington Street adjacent to the west of the station were also demolished. The cemetery in adjacent St James's Gardens was controversially excavated in 2018–19: an estimated 60,000 graves had to be exhumed and the entire site cleared of all human remains, the largest exhumation in British history, with the remains reburied in Brookwood Cemetery in Brookwood, Surrey.
In August 2019, the Department for Transport (DfT) ordered an independent review of the project, chaired by the British civil engineer Douglas Oakervee. The Oakervee Review was published by the Department for Transport the following February, alongside a statement from the Prime Minister Boris Johnson confirming that HS2 would go ahead in full, with reservations. The review said the rebuild was "not satisfactory", called the management "muddled" and recommended a change of governance. In summer 2020, the government asked Network Rail's chairman, Sir Peter Hendy, to lead an oversight board; in October 2020, the Architects' Journal reported that more than £100m had already been spent on engineering and architectural design fees.
In October 2023, the Prime Minister Rishi Sunak announced that construction of the Euston terminus and approach tunnel would not be government funded and that it could only go ahead with private sector investment. Transport for London commissioner Andy Lord was sceptical that the private sector would pay for the link to Old Oak Common.
Criticism
Demolition of original station
The demolition of the original buildings in 1962 was described by the Royal Institute of British Architects as "one of the greatest acts of Post-War architectural vandalism in Britain" and was approved directly by Harold Macmillan. The attempts made to preserve the earlier building, championed by Sir John Betjeman, led to the formation of the Victorian Society and heralded the modern conservation movement. This movement saved the nearby high Gothic St Pancras station when threatened with demolition in 1966, ultimately leading to its renovation in 2007 as the terminus of HS1 to the Continent.
Architecture
Euston's 1960s style of architecture has been described as "a dingy, grey, horizontal nothingness" and a reflection of "the tawdry glamour of its time", entirely lacking in "the sense of occasion, of adventure, that the great Victorian termini gave to the traveller". Writing in The Times, Richard Morrison stated that "even by the bleak standards of Sixties architecture, Euston is one of the nastiest concrete boxes in London: devoid of any decorative merit; seemingly concocted to induce maximum angst among passengers; and a blight on surrounding streets. The design should never have left the drawing-board – if, indeed, it was ever on a drawing-board. It gives the impression of having been scribbled on the back of a soiled paper bag by a thuggish android with a grudge against humanity and a vampiric loathing of sunlight".
Passenger experience
Michael Palin, explorer and travel writer, in his contribution to Great Railway Journeys titled "Confessions of a Trainspotter" in 1980, likened it to "a great bath, full of smooth, slippery surfaces where people can be sloshed about efficiently". Journalist Barney Ronay described the station as "easily, easily the worst main station in Western Europe" and that using it is "like being taken away to be machine gunned in the woods by various mobile phone and soft drinks companies".
Access to parts of the station is difficult for people with physical disabilities. The introduction of lifts in 2010 made the taxi rank and underground station accessible from the concourse, though some customers found them unreliable and frequently out of service. Wayfindr technology was introduced to the station in 2015 to help people with visual impairment navigate the station.
In September 2023, the Office of Rail and Road issued Network Rail with an improvement notice in relation to its failure to put in place effective measures to tackle overcrowding. Network Rail admitted that the station was designed for a different era and that "the passenger experience at Euston remains uncomfortable at times". The Office of Rail and Road declared in December 2023 that Network Rail had complied with the notice and implemented measures to better manage passenger traffic flows and overcrowding. In October 2024, London TravelWatch warned that passengers at Euston are being put in danger when the station becomes severely overcrowded during periods of disruption to services. Transport Secretary Louise Haigh subsequently asked Network Rail to declutter the station concourse and improve how it handles train announcements. Network Rail responded by switching off the advertising board that had been installed in January 2024 after the removal of the main departure boards, and issued a five-point improvement plan.
Incidents
On 26 April 1924, an electric multiple unit collided with the rear of an excursion train carrying passengers from the FA Cup Final in Coventry. Five passengers were killed. The crash was blamed on poor visibility owing to smoke and steam under the Park Street Bridge.
On 27 August 1928, a passenger train collided with the buffer stops. Thirty people were injured.
On 10 November 1938, a suburban service collided with empty coaches after a signal was misinterpreted. 23 people were injured.
On 6 August 1949, an empty train was accidentally routed towards a service for Manchester, colliding with it at about . The crash was blamed on a lack of track circuiting and no proper indication of when platforms were occupied.
1973 IRA attack
Extensive but superficial damage was caused by an IRA bomb that exploded close to a snack bar at approximately 1:10 pm on 10 September 1973, injuring eight people. A similar explosive had detonated 50 minutes earlier at King's Cross. The Metropolitan Police had received a three-minute warning, and were unable to evacuate the station completely, but British Transport Police managed to clear much of the area just before the explosion. In 1974, the mentally ill Judith Ward confessed to the bombing and was convicted of this and other crimes, despite the evidence against her being highly suspect and Ward retracting her confessions. She was acquitted in 1992; the true culprit has yet to be identified.
Cultural references
The station has been the backdrop for a musical film clip as well as the subject of songs since the 1960s. Barbara Ruskin wrote and recorded the song "Euston Station", released in 1967. In 1969, rock group Ambrose Slade shot a promo film at the station for their Beginnings album. Craig Davies recorded the song "Euston Railway Station Blues", released in the late 1980s. Jane Kitto's 2002 song "Busdriver" is about getting on the no. 73 bus from Euston station to Stoke Newington. Continuing the travel theme, The Smiths referenced the station in their song "London" as a way to get to the city from Manchester.
National Rail services
Euston has services from four different train operators:
Avanti West Coast operates InterCity West Coast services:
2 tph (trains per hour) to via , extended to/from (at peak hours), of which:
1 tph extends to via and
2 trains per day (tpd) run further to only, with 1 train every 2 hours running to and 4 tpd running to . Services to Scotland run via .
1 tph to via , with certain trains extended along the North Wales Main Line to or for the ferries to Ireland, such as Irish Ferries as well as Stena Line to Dublin Port, one train on Mon-Fri to
3 tph to via , of which:
2 tph operate via
1 tph operate via , and
1 tph to via Crewe and
1 tph to via . Additional services operate to/from Preston, Lancaster and Carlisle during peak times.
London Northwestern Railway operates regional and commuter services.
2 tph to via
1 tph to
2 tph to via
1 tph to via and
London Overground operates local commuter services.
4 tph to via the Lioness line (Watford DC line)
Caledonian Sleeper operates two nightly services to Scotland from Sunday to Friday inclusive.
Highland sleeper to via and , via , via and via Stirling and Perth
Lowland sleeper to Glasgow Central and Edinburgh Waverley via
Great Western Railway will operate InterCity Greater Western services on occasional days from .
1 tph to via and , of which:
1 tp2h (train per 2 hours) extends to via and
1 tph to via , of which:
1 tpd extends to
Night Riviera will operate a nightly service to Cornwall from Sunday to Friday inclusive, on occasional days from .
sleeper to Penzance via Taunton, Plymouth and St Austell.
London Underground
Euston was poorly served by the early London Underground network. The nearest station on the Metropolitan line was Gower Street, around five minutes' walk away. A permanent connection did not appear until 12 May 1907, when the City & South London Railway opened an extension west from Angel. The Charing Cross, Euston & Hampstead Railway opened an adjacent station on 22 June in the same year; these two stations are now part of the Northern line. Gower Street station was quickly renamed Euston Square in response. A connection to the Victoria line opened on 1 December 1968.
The underground network around Euston is planned to change depending on the construction of High Speed 2. Transport for London (TfL) plans to change the safeguarded route for the proposed Chelsea–Hackney line to include Euston between Tottenham Court Road and King's Cross St Pancras. As part of the rebuilding work for High Speed 2, it is proposed to integrate Euston and Euston Square into a single tube station.
See also
Birmingham Curzon Street railway station (1838–1966) – the original Birmingham counterpart to the original Euston station
Pennsylvania Station (1910–1963) – a similarly demolished and rebuilt station
References
Notes
Citations
Sources
External links
Station information on Euston railway station from Network Rail
Euston Station and railway works – information about the old station from the Survey of London online.
Euston Station Panorama
Euston London Guide
Railway stations in the London Borough of Camden
DfT Category A stations
Former London and Birmingham Railway stations
Railway stations in Great Britain opened in 1837
Railway stations served by Avanti West Coast
Railway stations served by Caledonian Sleeper
Railway stations served by London Overground
Railway stations served by West Midlands Trains
Network Rail managed stations
Railway termini in London
Architectural controversies
Richard Seifert buildings
London station group
Stations on the West Coast Main Line
Lioness line stations | Euston railway station | [
"Engineering"
] | 5,884 | [
"Architectural controversies",
"Architecture"
] |
182,650 | https://en.wikipedia.org/wiki/Winkler%20titration | The Winkler test is used to determine the concentration of dissolved oxygen in water samples. Dissolved oxygen (D.O.) is widely used in water quality studies and routine operation of water reclamation facilities to analyze its level of oxygen saturation.
In the test, an excess of manganese(II) salt, iodide (I−) and hydroxide (OH−) ions are added to a water sample causing a white precipitate of Mn(OH)2 to form. This precipitate is then oxidized by the oxygen that is present in the water sample into a brown manganese-containing precipitate with manganese in a more highly oxidized state (either Mn(III) or Mn(IV)).
In the next step, a strong acid (either hydrochloric acid or sulfuric acid) is added to acidify the solution. The brown precipitate then converts the iodide ion (I−) to iodine. The amount of iodine produced is directly proportional to the amount of dissolved oxygen, and is determined by titrating the iodine with a thiosulfate solution. Today, the method is often applied in its colorimetric modification, in which the trivalent manganese produced on acidifying the brown suspension is reacted directly with ethylenediaminetetraacetic acid to give a pink color. As manganese is the only common metal giving a color reaction with ethylenediaminetetraacetic acid, this has the added effect of masking other metals as colorless complexes.
History
The test was originally developed by Ludwig Wilhelm Winkler, in later literature referred to as Lajos Winkler, while working at Budapest University on his doctoral dissertation in 1888. The amount of dissolved oxygen is a measure of the biological activity of a water mass. Phytoplankton and macroalgae present in the water produce oxygen by way of photosynthesis, while bacteria and eukaryotic organisms (zooplankton, fish) consume it through cellular respiration. The balance of these two mechanisms determines the concentration of dissolved oxygen, which in turn indicates the production of biomass. The difference between the physical concentration of oxygen in the water (or the theoretical concentration if there were no living organisms) and the actual concentration of oxygen is called the biochemical oxygen demand. Results of the Winkler test are sometimes disputed, as it is not perfectly accurate: measured oxygen levels may fluctuate from test to test even when the same sample is analysed repeatedly.
Chemical processes
In the first step, manganese(II) sulphate (at 48% of the total volume) is added to an environmental water sample. Next, potassium iodide (15% in potassium hydroxide 70%) is added to create a pinkish-brown precipitate. In the alkaline solution, dissolved oxygen will oxidize manganese(II) ions to the tetravalent state.
2 Mn2+ + O2 + 4 OH− → 2 MnO(OH)2
Mn has been oxidised to 4+, and MnO(OH)2 appears as a brown precipitate. There is some uncertainty about whether the oxidised manganese is tetravalent or trivalent. Some sources claim that Mn(OH)3 is the brown precipitate, but hydrated MnO2 may also give the brown colour.
4 Mn(OH)2 + O2 + 2 H2O → 4 Mn(OH)3
The second part of the Winkler test acidifies the solution. The precipitate dissolves back into solution as the H+ ions react with the O2− and OH− ions to form water.
MnO(OH)2 + 4 H+ → Mn4+ + 3 H2O
In the acidified solution, the brown manganese-containing precipitate converts the iodide ion into elemental iodine.
The Mn(SO4)2 formed by the acid converts the iodide ions into iodine, itself being reduced back to manganese(II) ions in an acidic medium.
Mn(SO4)2 + 2 I− → Mn2+ + I2 + 2 SO42−
Thiosulfate is used, with a starch indicator, to titrate the iodine.
2 S2O32− + I2 → S4O62− + 2 I−
Analysis
From the above stoichiometric equations, we can find that:
1 mole of O2 → 2 moles of MnO(OH)2 → 2 moles of I2 → 4 moles of S2O32−
Therefore, after determining the number of moles of iodine produced, we can work out the number of moles of oxygen molecules present in the original water sample. The oxygen content is usually presented in milligrams per liter (mg/L).
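As an illustration of this arithmetic, the following minimal Python sketch converts a titration result into a dissolved-oxygen concentration. The titrant molarity and sample volume in the example are invented figures, not values prescribed by the method.

```python
# Dissolved oxygen from a Winkler titration (illustrative sketch).
# Stoichiometry from above: 4 mol thiosulfate correspond to 1 mol O2.

MOLAR_MASS_O2 = 32.00  # g/mol

def dissolved_oxygen_mg_per_l(titrant_ml, titrant_molarity, sample_ml):
    """Return dissolved oxygen (mg/L) from a thiosulfate titration."""
    mol_thiosulfate = (titrant_ml / 1000.0) * titrant_molarity
    mol_o2 = mol_thiosulfate / 4.0          # 4 S2O3(2-) per O2
    mg_o2 = mol_o2 * MOLAR_MASS_O2 * 1000.0
    return mg_o2 / (sample_ml / 1000.0)

# Example: 7.5 mL of 0.025 M thiosulfate used for a 200 mL sample
print(round(dissolved_oxygen_mg_per_l(7.5, 0.025, 200.0), 2))  # 7.5 mg/L
```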
Limitations
The success of this method is critically dependent upon the manner in which the sample is manipulated. At all stages, steps must be taken to ensure that oxygen is neither introduced to nor lost from the sample. Furthermore, the water sample must be free of any solutes that will oxidize or reduce iodine.
Instrumental methods for measurement of dissolved oxygen have widely supplanted the routine use of the Winkler test, although the test is still used to check instrument calibration.
BOD5
To determine five-day biochemical oxygen demand (BOD5), several dilutions of a sample are analyzed for dissolved oxygen before and after a five-day incubation period at 20 °C in the dark. In some cases, bacteria are added to provide a standardized community to take up oxygen while consuming the organic matter in the sample; these bacteria are known as "seed". The difference in DO and the dilution factor are used to calculate BOD5. The resulting number (usually reported in parts per million or milligrams per liter) is useful in determining the relative organic strength of sewage or other polluted waters.
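A minimal sketch of the unseeded dilution calculation follows; the dissolved-oxygen readings and dilution factor are invented, and a seeded analysis would additionally subtract the oxygen uptake of a seed blank.

```python
# Five-day biochemical oxygen demand for an unseeded dilution.
# d1, d2: dissolved oxygen (mg/L) before and after incubation;
# sample_fraction: decimal fraction of sample in the dilution bottle.

def bod5_unseeded(d1_mg_l, d2_mg_l, sample_fraction):
    return (d1_mg_l - d2_mg_l) / sample_fraction

# Example: a 1:50 dilution (fraction 0.02) falls from 8.8 to 4.6 mg/L DO
print(round(bod5_unseeded(8.8, 4.6, 0.02), 1))  # 210.0 mg/L
```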
The BOD5 test is an example of analysis that determines classes of materials in a sample.
Winkler bottle
A Winkler bottle is a piece of laboratory glassware specifically made for carrying out the Winkler test. These bottles have conical tops and a close fitting stopper to aid in the exclusion of air bubbles when the top is sealed. This is important because oxygen in trapped air would be included in the measurement and would affect the accuracy of the test.
References
Further reading
Moran, Joseph M.; Morgan, Michael D., & Wiersma, James H. (1980). Introduction to Environmental Science (2nd ed.). W.H. Freeman and Company, New York, NY
Y.C. Wong & C.T. Wong. New Way Chemistry for Hong Kong A-Level Volume 4, p. 248.
Aquatic ecology
Water quality indicators
Oxygen | Winkler titration | [
"Chemistry",
"Biology",
"Environmental_science"
] | 1,420 | [
"Aquatic ecology",
"Water quality indicators",
"Ecosystems",
"Water pollution"
] |
182,655 | https://en.wikipedia.org/wiki/Inventory | Inventory (American English) or stock (British English) refers to the goods and materials that a business holds for the ultimate goal of resale, production or utilisation.
Inventory management is a discipline primarily about specifying the shape and placement of stocked goods. It is required at different locations within a facility or within many locations of a supply network to precede the regular and planned course of production and stock of materials.
The concept of inventory, stock or work in process (or work in progress) has been extended from manufacturing systems to service businesses and projects, by generalizing the definition to be "all work within the process of production—all work that is or has occurred prior to the completion of production". In the context of a manufacturing production system, inventory refers to all work that has occurred: raw materials, partially finished products, and finished products prior to sale and departure from the manufacturing system. In the context of services, inventory refers to all work done prior to sale, including partially processed information.
Business inventory
Reasons for keeping stock
There are five basic reasons for keeping an inventory:
Time: The time lags present in the supply chain, from supplier to user at every stage, require that certain amounts of inventory be maintained for use during this lead time. In practice, however, inventory is maintained for consumption during variations in lead time; the lead time itself can be addressed by ordering that many days in advance.
Seasonal demand: Demand varies periodically, but production capacity is fixed. This can lead to stock accumulation; consider, for example, how goods consumed only at holidays can lead to the accumulation of large stocks in anticipation of future consumption.
Uncertainty: Inventories are maintained as buffers to meet uncertainties in demand, supply and movements of goods.
Economies of scale: The ideal condition of "one unit at a time, at a place where a user needs it, when he needs it" tends to incur high costs in terms of logistics. Bulk buying, movement and storing bring economies of scale, and thus inventory.
Appreciation in value: In some situations, some stock gains the required value when it is kept for some time to allow it reach the desired standard for consumption, or for production. For example, beer in the brewing industry.
All these stock reasons can apply to any owner or product.
Special terms used in dealing with inventory management
Stock Keeping Unit (SKU) SKUs are clear, internal identification numbers assigned to each of the products and their variants. SKUs can be any combination of letters and numbers chosen, just as long as the system is consistent and used for all the products in the inventory. An SKU code may also be referred to as product code, barcode, part number or MPN (Manufacturer's Part Number).
"New old stock" (sometimes abbreviated NOS) is a term used in business to refer to merchandise being offered for sale that was manufactured long ago but that has never been used. Such merchandise may not be produced anymore, and the new old stock may represent the only market source of a particular item at the present time.
ABC analysis (also known as Pareto analysis) is a method of classifying inventory items based on their contribution to total sales revenue. This can be used to prioritize inventory management efforts and ensure that businesses are focusing on the most important items.
Typology
Buffer/safety stock: Safety stock is the additional inventory that a company keeps on hand to mitigate the risk of stockouts or delays in supply chain. It is the extra stock that is kept in reserve above and beyond the regular inventory levels. The purpose of safety stock is to provide a buffer against fluctuations in demand or supply that could otherwise result in stockouts.
Reorder level: The reorder level refers to the point at which a company places an order to refill its stocks. The reorder point depends on a company's inventory policy: some companies place orders when the inventory level falls below a certain quantity, while others place orders periodically (see the sketch after this list).
Cycle stock: Used in batch processes, cycle stock is the available inventory, excluding buffer stock.
De-coupling: Buffer stock held between the machines in a single process, which serves as a buffer for the next one, allowing a smooth flow of work instead of waiting for the previous or next machine in the same process.
Anticipation stock: Building up extra stock for periods of increased demand—e.g., ice cream for summer.
Pipeline stock: Goods still in transit or in the process of distribution; e.g., they have left the factory but have not yet arrived at the customer. Often calculated as: average daily/weekly usage quantity × lead time in days + safety stock.
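The sketch below illustrates the reorder-level and pipeline-stock conventions above. The service-level formula for safety stock (a z-score times the demand standard deviation over lead time) is one common textbook approach, not the only one, and all figures are invented.

```python
import math

def safety_stock(z, demand_std_per_day, lead_time_days):
    """Buffer stock for a desired service level (z ~ 1.65 for ~95%)."""
    return z * demand_std_per_day * math.sqrt(lead_time_days)

def reorder_point(avg_daily_demand, lead_time_days, buffer):
    """Inventory level at which a replenishment order is placed."""
    return avg_daily_demand * lead_time_days + buffer

def pipeline_stock(avg_daily_usage, lead_time_days, buffer):
    """Goods in transit: average usage x lead time + safety stock."""
    return avg_daily_usage * lead_time_days + buffer

buf = safety_stock(1.65, demand_std_per_day=12, lead_time_days=7)
print(round(reorder_point(100, 7, buf)))   # order when stock falls to ~752
print(round(pipeline_stock(100, 7, buf)))  # ~752 units typically in transit
```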
Inventory examples
While accountants often discuss inventory in terms of goods for sale, organizations—manufacturers, service-providers and not-for-profits—also have inventories (fixtures, equipment, furniture, supplies, parts, etc.) that they do not intend to sell. Manufacturers', distributors', and wholesalers' inventory tends to cluster in warehouses. Retailers' inventory may exist in a warehouse or in a shop or store accessible to customers. Inventories not intended for sale to customers or to clients may be held in any premises an organization uses. Stock ties up cash and, if uncontrolled, it will be impossible to know the actual level of stocks and therefore difficult to keep the costs associated with holding too much or too little inventory under control.
While the reasons for holding stock were covered earlier, most manufacturing organizations usually divide their "goods for sale" inventory into:
Raw materials: Materials and components scheduled for use in making a product.
Work in process (WIP): Materials and components that have begun their transformation to finished goods. These are used in process of manufacture and as such these are neither raw material nor finished goods.
Finished goods: Goods ready for sale to customers.
Goods for resale: Returned goods that are salable.
Stocks in transit: The materials which are not at the seller's location or buyers' location but in between are "stocks in transit". Or we could say, the stocks which left the seller's plant but have not reached the buyer, and are in transit.
Consignment stocks: The inventories where goods are with the buyer, but the actual ownership of goods remains with the seller until the goods are sold. Though the goods were transported to the buyer, payment of goods is done once the goods are sold. Hence such stocks are known as consignment stocks.
Maintenance supply.
For example:
Manufacturing
A canned food manufacturer's materials inventory includes the ingredients to form the foods to be canned, empty cans and their lids (or coils of steel or aluminum for constructing those components), labels, and anything else (solder, glue, etc.) that will form part of a finished can. The firm's work in process includes those materials from the time of release to the work floor until they become complete and ready for sale to wholesale or retail customers. This may be vats of prepared food, filled cans not yet labeled or sub-assemblies of food components. It may also include finished cans that are not yet packaged into cartons or pallets. Its finished good inventory consists of all the filled and labeled cans of food in its warehouse that it has manufactured and wishes to sell to food distributors (wholesalers), to grocery stores (retailers), and even perhaps to consumers through arrangements like factory stores and outlet centers.
Capital projects
The partially completed work (or work in process) is a measure of inventory built during the work execution of a capital project, such as encountered in civilian infrastructure construction or oil and gas. Inventory may not only reflect physical items (such as materials, parts, partially-finished sub-assemblies) but also knowledge work-in-process (such as partially completed engineering designs of components and assemblies to be fabricated).
Virtual inventory
A "virtual inventory" (also known as a "bank inventory") enables a group of users to share common parts, especially where their availability at short notice may be critical but they are unlikely to required by more than a few bank members at any one time. Virtual inventory also allows distributors and fulfilment houses to ship goods to retailers direct from stock, regardless of whether the stock is held in a retail store, stock room or warehouse. Virtual inventories allow participants to access a wider mix of products and to reduce the risks involved in carrying inventory for which expected demand does not materialise.
Costs associated with inventory
There are several costs associated with inventory:
Ordering cost
Setup cost
Holding cost
Shortage costs (the costs arising out of inability to supply, including lost revenue, reputational damage, and potential loss of customer loyalty).
Principle of inventory proportionality
Purpose
Inventory proportionality is the goal of demand-driven inventory management. The primary optimal outcome is to have the same number of days' (or hours', etc.) worth of inventory on hand across all products so that the time of runout of all products would be simultaneous. In such a case, there is no "excess inventory", that is, inventory that would be left over of another product when the first product runs out. Holding excess inventory is sub-optimal because the money spent to obtain and the cost of holding it could have been utilized better elsewhere, i.e. to the product that just ran out.
The secondary goal of inventory proportionality is inventory minimization. By integrating accurate demand forecasting with inventory management, rather than only looking at past averages, a much more accurate and optimal outcome is expected. Integrating demand forecasting into inventory management in this way also allows for the prediction of the "can fit" point when inventory storage is limited on a per-product basis.
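The toy calculation below makes the equal-runout idea concrete: days of cover are computed per product, and every product is topped up to the longest cover. Product names and quantities are invented.

```python
# Inventory proportionality: aim for equal days of cover across products.

stock_on_hand = {"regular": 9000, "midgrade": 1500, "premium": 2400}
daily_demand  = {"regular": 1500, "midgrade": 500,  "premium": 400}

days_of_cover = {p: stock_on_hand[p] / daily_demand[p] for p in stock_on_hand}
print(days_of_cover)  # regular: 6.0, midgrade: 3.0, premium: 6.0

# Top every product up to the same runout horizon.
target_days = max(days_of_cover.values())
top_up = {p: max(0.0, target_days * daily_demand[p] - stock_on_hand[p])
          for p in stock_on_hand}
print(top_up)  # only midgrade needs replenishing (1500 units)
```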
Applications
The technique of inventory proportionality is most appropriate for inventories that remain unseen by the consumer, as opposed to "keep full" systems where a retail consumer would like to see full shelves of the product they are buying so as not to think they are buying something old, unwanted or stale; and differentiated from the "trigger point" systems where product is reordered when it hits a certain level; inventory proportionality is used effectively by just-in-time manufacturing processes and retail applications where the product is hidden from view.
One early example of inventory proportionality used in a retail application in the United States was for motor fuel. Motor fuel (e.g. gasoline) is generally stored in underground storage tanks. The motorists do not know whether they are buying gasoline off the top or bottom of the tank, nor need they care. Additionally, these storage tanks have a maximum capacity and cannot be overfilled. Finally, the product is expensive. Inventory proportionality is used to balance the inventories of the different grades of motor fuel, each stored in dedicated tanks, in proportion to the sales of each grade. Excess inventory is not seen or valued by the consumer, so it is simply cash sunk (literally) into the ground. Inventory proportionality minimizes the amount of excess inventory carried in underground storage tanks. This application for motor fuel was first developed and implemented by Petrolsoft Corporation in 1990 for Chevron Products Company. Most major oil companies use such systems today.
Roots
The use of inventory proportionality in the United States is thought to have been inspired by Japanese just-in-time parts inventory management made famous by Toyota Motors in the 1980s.
High-level inventory management
It seems that around 1880 there was a change in manufacturing practice from companies with relatively homogeneous lines of products to horizontally integrated companies with unprecedented diversity in processes and products. Those companies (especially in metalworking) attempted to achieve success through economies of scope (the gains of jointly producing two or more products in one facility). The managers now needed information on the effect of product-mix decisions on overall profits and therefore needed accurate product-cost information. A variety of attempts to achieve this were unsuccessful due to the huge overhead of the information processing of the time. However, the burgeoning need for financial reporting after 1900 created unavoidable pressure for financial accounting of stock, and the management need to manage product costs became overshadowed. In particular, it was the need for audited accounts that sealed the fate of managerial cost accounting. The dominance of financial reporting accounting over management accounting remains to this day with few exceptions, and the financial reporting definitions of 'cost' have distorted effective management 'cost' accounting since that time. This is particularly true of inventory.
Hence, high-level financial inventory has these two basic formulas, which relate to the accounting period:
Cost of Beginning Inventory at the start of the period + inventory purchases within the period + cost of production within the period = cost of goods available
Cost of goods available − cost of ending inventory at the end of the period = cost of goods sold
The benefit of these formulas is that the first absorbs all overheads of production and raw material costs into a value of inventory for reporting. The second formula then creates the new start point for the next period and gives a figure to be subtracted from the sales price to determine some form of sales-margin figure.
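In code, the two period formulas reduce to two lines; the figures below are invented for a single accounting period.

```python
beginning_inventory = 120_000
purchases = 45_000
production_cost = 30_000
ending_inventory = 110_000

cost_of_goods_available = beginning_inventory + purchases + production_cost
cost_of_goods_sold = cost_of_goods_available - ending_inventory
print(cost_of_goods_available, cost_of_goods_sold)  # 195000 85000
```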
Manufacturing management is more interested in inventory turnover ratio or average days to sell inventory since it tells them something about relative inventory levels.
Inventory turnover ratio (also known as inventory turns) = cost of goods sold / Average Inventory = Cost of Goods Sold / ((Beginning Inventory + Ending Inventory) / 2)
and its inverse
Average Days to Sell Inventory = Number of Days a Year / Inventory Turnover Ratio = 365 days a year / Inventory Turnover Ratio
This ratio estimates how many times the inventory turns over a year. This number tells how much cash/goods are tied up waiting for the process and is a critical measure of process reliability and effectiveness. So a factory with two inventory turns has six months stock on hand, which is generally not a good figure (depending upon the industry), whereas a factory that moves from six turns to twelve turns has probably improved effectiveness by 100%. This improvement will have some negative results in the financial reporting, since the 'value' now stored in the factory as inventory is reduced.
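A direct transcription of the turnover formulas, with invented figures:

```python
def inventory_turnover(cogs, beginning_inventory, ending_inventory):
    """Inventory turns per year from cost of goods sold and average stock."""
    average_inventory = (beginning_inventory + ending_inventory) / 2
    return cogs / average_inventory

turns = inventory_turnover(cogs=600_000,
                           beginning_inventory=90_000,
                           ending_inventory=110_000)
print(turns)        # 6.0 inventory turns per year
print(365 / turns)  # ~60.8 average days to sell inventory
```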
While these accounting measures of inventory are very useful because of their simplicity, they are also fraught with the danger of their own assumptions. There are, in fact, so many things that can vary hidden under this appearance of simplicity that a variety of 'adjusting' assumptions may be used. These include:
Specific Identification
Lower of cost or market
Weighted Average Cost
Moving-Average Cost
FIFO and LIFO.
Queueing theory.
Inventory Turn is a financial accounting tool for evaluating inventory; it is not necessarily a management tool. Inventory management should be forward-looking. Because the methodology applied is based on the historical cost of goods sold, the ratio may not reflect future production or customer demand.
Business models, including Just in Time (JIT) Inventory, Vendor Managed Inventory (VMI) and Customer Managed Inventory (CMI), attempt to minimize on-hand inventory and increase inventory turns. VMI and CMI have gained considerable attention due to the success of third-party vendors who offer added expertise and knowledge that organizations may not possess.
Inventory management also involves risk which varies depending upon a firm's position in the distribution channel. Some typical measures of inventory exposure are width of commitment, time of duration and depth.
Inventory management today is increasingly online and digital. This type of dynamic order management requires end-to-end visibility, collaboration across fulfillment processes, real-time data automation among different companies, and integration among multiple systems.
Accounting for inventory
Each country has its own rules about accounting for inventory that fit with their financial-reporting rules.
For example, organizations in the U.S. define inventory to suit their needs within US Generally Accepted Accounting Practices (GAAP), the rules defined by the Financial Accounting Standards Board (FASB) (and others) and enforced by the U.S. Securities and Exchange Commission (SEC) and other federal and state agencies. Other countries often have similar arrangements but with their own accounting standards and national agencies instead.
It is intentional that financial accounting uses standards that allow the public to compare firms' performance, cost accounting functions internally to an organization and potentially with much greater flexibility. A discussion of inventory from standard and Theory of Constraints-based (throughput) cost accounting perspective follows some examples and a discussion of inventory from a financial accounting perspective.
The internal costing/valuation of inventory can be complex. Whereas in the past most enterprises ran simple, one-process factories, such enterprises are quite probably in the minority in the 21st century. Where 'one process' factories exist, there is a market for the goods created, which establishes an independent market value for the good. Today, with multistage-process companies, there is much inventory that would once have been finished goods which is now held as 'work in process' (WIP). This needs to be valued in the accounts, but the valuation is a management decision since there is no market for the partially finished product. This somewhat arbitrary 'valuation' of WIP combined with the allocation of overheads to it has led to some unintended and undesirable results.
Financial accounting
An organization's inventory can appear a mixed blessing, since it counts as an asset on the balance sheet, but it also ties up money that could serve for other purposes and requires additional expense for its protection. Inventory may also cause significant tax expenses, depending on particular countries' laws regarding depreciation of inventory, as in Thor Power Tool Company v. Commissioner.
Inventory appears as a current asset on an organization's balance sheet because the organization can, in principle, turn it into cash by selling it. Some organizations hold larger inventories than their operations require in order to inflate their apparent asset value and their perceived profitability.
In addition to the money tied up by acquiring inventory, inventory also brings associated costs for warehouse space, for utilities, and for insurance to cover staff to handle and protect it from fire and other disasters, obsolescence, shrinkage (theft and errors), and others. Such holding costs can mount up: between a third and a half of its acquisition value per year.
Businesses that stock too little inventory cannot take advantage of large orders from customers if they cannot deliver. The conflicting objectives of cost control and customer service often pit an organization's financial and operating managers against its sales and marketing departments. Salespeople, in particular, often receive sales-commission payments, so unavailable goods may reduce their potential personal income. This conflict can be minimised by reducing production time to near or below customers' expected delivery time. This effort, known as "lean production", will significantly reduce working capital tied up in inventory and reduce manufacturing costs (see the Toyota Production System).
Role of inventory accounting
By helping the organization to make better decisions, accountants can help the public sector to change in a very positive way that delivers increased value for the taxpayer's investment. They can also help to incentivise progress and to ensure that reforms are sustainable and effective in the long term, by ensuring that success is appropriately recognized in both the formal and informal reward systems of the organization.
To say that they have a key role to play is an understatement. Finance is connected to most, if not all, of the key business processes within the organization. It should be steering the stewardship and accountability systems that ensure that the organization is conducting its business in an appropriate, ethical manner. It is critical that these foundations are firmly laid. So often they are the litmus test by which public confidence in the institution is either won or lost.
Finance should also be providing the information, analysis and advice to enable the organizations' service managers to operate effectively. This goes beyond the traditional preoccupation with budgets—how much have we spent so far, how much do we have left to spend? It is about helping the organization to better understand its own performance. That means making the connections and understanding the relationships between given inputs—the resources brought to bear—and the outputs and outcomes that they achieve. It is also about understanding and actively managing risks within the organization and its activities.
FIFO vs. LIFO accounting
When a merchant buys goods from inventory, the value of the inventory account is reduced by the cost of goods sold (COGS). This is simple where the cost has not varied across those held in stock; but where it has, then an agreed method must be derived to evaluate it. For commodity items that one cannot track individually, accountants must choose a method that fits the nature of the sale. Two popular methods in use are: FIFO (first in, first out) and LIFO (last in, first out).
FIFO treats the first unit that arrived in inventory as the first one sold. LIFO considers the last unit arriving in inventory as the first one sold. Which method an accountant selects can have a significant effect on net income and book value and, in turn, on taxation. Using LIFO accounting for inventory, a company generally reports lower net income and lower book value, due to the effects of inflation. This generally results in lower taxation. Due to LIFO's potential to skew inventory value, UK GAAP and IAS have effectively banned LIFO inventory accounting. LIFO accounting is permitted in the United States subject to section 472 of the Internal Revenue Code.
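A minimal sketch of the two costing methods follows. The purchase lots and sale quantity are invented, and real inventory accounting must also handle returns, write-downs and layer tracking across periods; with the rising prices assumed here, LIFO yields the higher cost of goods sold and therefore the lower reported income, as described above.

```python
from collections import deque

# Purchase lots as (units, unit cost), oldest first; prices are rising.
lots = [(100, 10.00), (100, 12.00), (100, 14.00)]

def cogs(lots, units_sold, lifo=False):
    """Cost of goods sold for a single sale under FIFO or LIFO."""
    layers = deque(reversed(lots)) if lifo else deque(lots)
    total, remaining = 0.0, units_sold
    while remaining > 0:
        qty, cost = layers.popleft()
        take = min(qty, remaining)
        total += take * cost
        remaining -= take
        if qty > take:                 # return the unused part of the lot
            layers.appendleft((qty - take, cost))
    return total

print(cogs(lots, 150))             # FIFO: 100*10 + 50*12 = 1600.0
print(cogs(lots, 150, lifo=True))  # LIFO: 100*14 + 50*12 = 2000.0
```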
Standard cost accounting
Standard cost accounting uses ratios called efficiencies that compare the labour and materials actually used to produce a good with those that the same goods would have required under "standard" conditions. As long as actual and standard conditions are similar, few problems arise. Unfortunately, standard cost accounting methods developed about 100 years ago, when labor comprised the most important cost in manufactured goods. Standard methods continue to emphasize labor efficiency even though that resource now constitutes a (very) small part of cost in most cases.
Standard cost accounting can hurt managers, workers, and firms in several ways. For example, a policy decision to increase inventory can harm a manufacturing manager's performance evaluation. Increasing inventory requires increased production, which means that processes must operate at higher rates. When (not if) something goes wrong, the process takes longer and uses more than the standard labor time. The manager appears responsible for the excess, even though s/he has no control over the production requirement or the problem.
In adverse economic times, firms use the same efficiencies to downsize, rightsize, or otherwise reduce their labor force. Workers laid off under those circumstances have even less control over excess inventory and cost efficiencies than their managers.
Many financial and cost accountants have agreed for many years on the desirability of replacing standard cost accounting. They have not, however, found a successor.
Theory of constraints cost accounting
Eliyahu M. Goldratt developed the Theory of Constraints in part to address the cost-accounting problems in what he calls the "cost world". He offers a substitute, called throughput accounting, that uses throughput (money for goods sold to customers) in place of output (goods produced that may sell or may boost inventory) and considers labor as a fixed rather than as a variable cost. He defines inventory simply as everything the organization owns that it plans to sell, including buildings, machinery, and many other things in addition to the categories listed here. Throughput accounting recognizes only one class of variable costs: the truly variable costs, like materials and components, which vary directly with the quantity produced.
Finished goods inventories remain balance-sheet assets, but labor-efficiency ratios no longer evaluate managers and workers. Instead of an incentive to reduce labor cost, throughput accounting focuses attention on the relationships between throughput (revenue or income) on one hand and controllable operating expenses and changes in inventory on the other.
National accounts
Inventories also play an important role in national accounts and the analysis of the business cycle. Some short-term macroeconomic fluctuations are attributed to the inventory cycle.
Distressed inventory
Also known as distressed or expired stock, distressed inventory is inventory whose potential to be sold at a normal cost has passed or will soon pass. In certain industries it could also mean that the stock is or will soon be impossible to sell. Examples of distressed inventory include products which have reached their expiry date, or have reached a date in advance of expiry at which the planned market will no longer purchase them (e.g. 3 months left to expiry), clothing which is out of fashion, music which is no longer popular and old newspapers or magazines. It also includes computer or consumer-electronic equipment which is obsolete or discontinued and whose manufacturer is unable to support it, along with products which use that type of equipment e.g. VHS format equipment and videos.
In 2001, Cisco wrote off inventory worth US$2.25 billion due to duplicate orders. This is considered one of the biggest inventory write-offs in business history.
Stock rotation
Stock rotation is the practice of changing the way inventory is displayed on a regular basis. It is most commonly used in hospitality and retail, particularly where food products are sold. For example, in the case of supermarkets that a customer frequents on a regular basis, the customer may know exactly what they want and where it is. This results in many customers going straight to the product they seek and not looking at other items on sale. To discourage this practice, stores rotate the location of stock to encourage customers to look through the entire store, in the hope that they will pick up items they would not normally see.
Inventory credit
Inventory credit refers to the use of stock, or inventory, as collateral to raise finance. Where banks may be reluctant to accept traditional collateral, for example in developing countries where land title may be lacking, inventory credit is a potentially important way of overcoming financing constraints. This is not a new concept; archaeological evidence suggests that it was practiced in Ancient Rome. Obtaining finance against stocks of a wide range of products held in a bonded warehouse is common in much of the world. It is, for example, used with Parmesan cheese in Italy. Inventory credit on the basis of stored agricultural produce is widely used in Latin American countries and in some Asian countries. A precondition for such credit is that banks must be confident that the stored product will be available if they need to call on the collateral; this implies the existence of a reliable network of certified warehouses. Banks also face problems in valuing the inventory. The possibility of sudden falls in commodity prices means that they are usually reluctant to lend more than about 60% of the value of the inventory at the time of the loan.
Journal
International Journal of Inventory Research
Omega - The International Journal of Management Science
See also
Cash conversion cycle
Consignment stock
Cost of goods sold
Economic order quantity
Inventory investment
Inventory management software
Logistics
Operations research
Pinch point (economics)
Project production management
Service level
Spare part
Stock management
Notes
References
Further reading
Cannella S., Ciancimino E. (2010) Up-to-date Supply Chain Management: the Coordinated (S, R). In "Advanced Manufacturing and Sustainable Logistics". Dangelmaier W. et al. (Eds.) 175–185. Springer-Verlag Berlin Heidelberg, Germany.
Supply chain management
Inventory optimization
National accounts
Lean manufacturing
"Engineering"
] | 5,527 | [
"Lean manufacturing"
] |
182,679 | https://en.wikipedia.org/wiki/Time%20signal | A time signal is a visible, audible, mechanical, or electronic signal used as a reference to determine the time of day.
Church bells or voices announcing hours of prayer gave way to automatically operated chimes on public clocks; however, audible signals (even signal guns) have limited range. Busy seaports used a visual signal, the dropping of a ball, to allow mariners to check the chronometers used for navigation. The advent of electrical telegraphs allowed widespread and precise distribution of time signals from central observatories. Railways were among the first customers for time signals, which allowed synchronization of their operations over wide geographic areas. Dedicated radio time signal stations transmit a signal that allows automatic synchronization of clocks, and commercial broadcasters still include time signals in their programming.
Today, global navigation satellite system (GNSS) radio signals are used to precisely distribute time signals over much of the world. Many commercially available radio-controlled clocks can accurately indicate the local time, for both business and residential use. Computers often set their time from an Internet atomic clock source. Where this is not available, a locally connected GNSS receiver can precisely set the time using one of several software applications.
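As a sketch of the Internet route, the snippet below queries a public Network Time Protocol server pool using the third-party ntplib package (an assumption: it must be installed separately, e.g. with pip). Error handling and actual clock-setting, which require operating-system privileges, are omitted.

```python
import ntplib
from datetime import datetime, timezone

client = ntplib.NTPClient()
response = client.request("pool.ntp.org", version=3)  # public NTP pool

# tx_time is the server's transmit timestamp (Unix epoch seconds);
# offset estimates how far the local clock deviates from the server's.
print(datetime.fromtimestamp(response.tx_time, tz=timezone.utc))
print(f"local clock offset: {response.offset:+.3f} s")
```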
Audible and visible time signals
One sort of public time signal is a striking clock. These clocks are only as good as the clockwork that activates them, but they have improved substantially since the first clocks from the 14th century. Until modern times, a public clock such as Big Ben was the only time standard the general public needed.
Accurate knowledge of time of day is essential for navigation, and ships carried the most accurate marine chronometers available, although they did not keep perfect time. A number of accurate audible or visible time signals were established in many seaport cities to enable navigators to set their chronometers.
Signal guns
In Vancouver, British Columbia, a "9 O'Clock Gun" is still fired every night at 9 pm. (This gun was brought to Stanley Park in 1894 by the Department of Fisheries, originally to warn fishermen of the 6:00 pm Sunday closing of fishing.) The 9:00 pm firing was later established as a time signal for the general population. Until a time gun was installed, the nearby Brockton Point lighthouse keeper detonated a stick of dynamite. Elsewhere in Canada, a "Noon Gun" is fired daily from the citadels in Halifax and Quebec City and from Signal Hill in St. John's, Newfoundland and Labrador.
In the same manner, a Noon Gun has been fired in Cape Town, since 1806. The gun is fired daily from the Lion Battery at Signal Hill.
The Noonday Gun serves a similar purpose in Hong Kong. The tradition, which started in the 1860s under British colonial rule, has become a tourist attraction in recent times.
A cannon was fired at one o'clock every weekday at Liverpool, at the Castle in Edinburgh, and also at Perth to establish the time. The Edinburgh "One O'Clock Gun" is still in operation. A cannon located at the top of Santa Lucia Hill, in Santiago, is fired every day at noon.
In Rome, on the Janiculum, a hill west of the Tiber, a cannon has been fired daily at noon towards the river as a time signal since 1904. The practice was introduced in 1847 by Pope Pius IX to synchronise all the church bells of Rome. The cannon was situated at Castel Sant'Angelo until 1903, when it was moved to Monte Mario for a few months before being placed in its current position. It was silenced from the start of WWII for about twenty years, until 21 April 1959, the 2712th anniversary of Rome's founding, and has been in use since then.
For many years an old cannon was fired "about noon" from a mountain near Kabul.
Sirens, whistles, and other audible signals
In many Midwestern US cities where tornadoes are a common hazard, the emergency sirens are tested regularly at a specified time (say, noon each Saturday); while not primarily intended to mark the time, local people often check their watches when they hear this signal. In many non-seafaring communities, loud factory whistles served as public time signals before radio made them obsolete. Sometimes, the tradition of a factory whistle becomes so deeply entrenched in a community that the whistle is maintained long after its original function as a time keeper became obsolete. For example, the University of Iowa's power plant whistle has been reinstated several times by popular demand after numerous attempts to silence it.
Visual signals
In 1861 and 1862, the Edinburgh Post Office Directory published time gun maps relating the number of seconds required for the report of the time gun to reach various locations in the city. Because light travels much faster than sound, visible signals enabled greater precision than audible ones, although audible signals could operate better under conditions of reduced visibility. The first time ball was erected at Portsmouth, England in 1829 by its inventor Robert Wauchope. One was installed in 1833 on the roof of the Royal Observatory in Greenwich, London, and the time ball has dropped at 1:00 pm every day since then. The first American time ball went into service in 1845. In New York City, the ceremonial Times Square Ball drop on New Year's Eve in Times Square is a vestige of a visual time signal.
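The advantage is easy to quantify: a back-of-envelope sketch, assuming nominal propagation speeds, shows sound lagging by whole seconds over distances where light's delay is negligible.

```python
SPEED_OF_SOUND = 343.0         # m/s in air at about 20 degrees C
SPEED_OF_LIGHT = 299_792_458   # m/s

for distance_m in (500, 1000, 3000):
    sound_ms = distance_m / SPEED_OF_SOUND * 1000
    light_ms = distance_m / SPEED_OF_LIGHT * 1000
    print(f"{distance_m:>5} m: sound {sound_ms:8.1f} ms, light {light_ms:.5f} ms")
```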
Electrical time signals
United Kingdom
The first telegraphic distribution of a time signal in the United Kingdom, and indeed in the world, was initiated in 1852 by the Electric Telegraph Company in collaboration with the Astronomer Royal. Greenwich Mean Time was distributed by telegraph from the Greenwich Observatory. This included a system for synchronising the drop of the time ball at Greenwich with other time balls around the country, one of which was atop the Electric's offices in the Strand.
Other synchronised time balls were atop the Nelson Monument, Edinburgh; the sailors' home Broomielaw, Glasgow; Liverpool and one at Deal, Kent, installed by the Admiralty.
United States
Telegraph signals were used regularly for time coordination by the United States Naval Observatory starting in 1865. By the late 1800s, many U.S. observatories were selling accurate time by offering a regional time signal service.
Sandford Fleming proposed a single 24-hour clock for the entire world. At a meeting of the Royal Canadian Institute on 8 February 1879 he linked it to the anti-meridian of Greenwich (now 180°). He suggested that standard time zones could be used locally, but they were subordinate to his single world time.
Standard time came into existence in the United States on 18 November 1883. Earlier, on 11 October 1883, the General Time Convention, forerunner to the American Railway Association, approved a plan that divided the United States into several time zones. On that November day, the US Naval Observatory telegraphed a signal that coordinated noon at Eastern standard time with 11 am Central, 10 am Mountain, and 9 am Pacific standard time.
A March 1905 issue of The Technical World describes the role of the United States Naval Observatory as a source of time signals:
One of the most important functions of the Naval Observatory is found in the daily distribution of the correct time to every portion of the United States. This is effected by means of telegraphic signals, which are sent out from Washington at noon daily, except Sundays. The original object of this time service was to furnish mariners in the seaboard cities with the means of regulating their chronometers; but, like many another governmental activity, its scope has gradually broadened until it has become of general usefulness. The electrical impulse which goes forth from the Observatory at noon each day, now sets or regulates automatically more than 70,000 clocks located in all parts of the United States, and also serves, in each of the larger cities of the country, to release a time-ball located on some lofty building of central location. The dropping of the time-ball – accompanied, at some points, with the simultaneous firing of a cannon – is the signal for the regulation by hand of hundreds of other clocks and watches in the vicinity.
Radio time sources
Dedicated time signal broadcasts
The telegraphic distribution of time signals was made obsolete by the use of AM, FM, shortwave radio, Internet Network Time Protocol servers as well as atomic clocks in satellite navigation systems. Time signals have been transmitted by radio since 1905. There are dedicated radio time signal stations around the world.
Time stations operating in the longwave radio band have highly predictable radio propagation characteristics, which gives low uncertainty in the received time signals. Stations operating in the shortwave band can cover wider areas with relatively low-power transmitters, but the varying distance that the signal travels increases the uncertainty of the time signal on a scale of milliseconds.
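The millisecond figure can be made concrete with an assumed example: a shortwave signal reflected from the ionosphere travels a longer, and varying, path than the great-circle distance, and each extra few hundred kilometres of path adds about a millisecond. A sketch in Python, with invented path lengths:

SPEED_OF_LIGHT_M_S = 299_792_458.0

for extra_path_km in (100, 300, 1000):
    added_delay_ms = extra_path_km * 1000.0 / SPEED_OF_LIGHT_M_S * 1000.0
    print(f"{extra_path_km:5d} km of extra skywave path adds {added_delay_ms:4.2f} ms")

Since the reflection height changes with time of day and ionospheric conditions, this added delay is not constant, which is why shortwave time signals carry an uncertainty on the scale of milliseconds.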
Radio time signal stations broadcast the time in both audible and machine-readable time code form that can be used as references for radio clocks and radio-controlled watches. Typically, they use a national or regional longwave digital signal; for example, station WWVB in the United States.
The audio portions of the shortwave WWV and WWVH broadcasts can also be heard by telephone. The time announcements are normally delayed by less than 30 ms when using land lines from within the continental United States, and the delay variation is generally small. However, when mobile phones are used, the delays are often more than 100 ms, due to the multiple access methods used to share cell channels. In rare instances when the telephone connection is made by satellite, the time is delayed by 250–500 ms.
The audio from the broadcasts is available by telephone by dialling U.S. numbers for WWV (Colorado), and for WWVH (Hawaii). Calls (which are not toll-free) are disconnected after 2 minutes.
Loran-C time signals formerly were also used for radio clock synchronization, by augmenting their highly accurate frequency transmissions with external measurements of the offsets of LORAN navigation signals against time standards.
General broadcasters
As radio receivers became more widely available, broadcasters included time information in the form of voice announcements or automated tones to accurately indicate the hour. The BBC has included time "pips" in its broadcasts from 1922.
In the United States many information-based radio stations (full-service, all-news and news/talk) also broadcast time signals at the beginning of the hour. In New York, WCBS and WINS have distinctive beginning-of-the-hour tones, though the WINS signal is only approximate (with an error of several seconds). WINS also has a tone at 30 minutes past the hour for those setting their clocks. WTIC has broadcast the Morse code "V for Victory", matching the rhythm of the opening of Beethoven's Fifth Symphony, at the beginning of the hour continuously since 1943.
Broadcast stations using iBiquity Digital's "HD Radio" system are contractually required to delay their analog broadcast by about eight seconds so that it remains in sync with the digital stream; network-generated time signals and service cues are therefore also delayed by about eight seconds. (Because of this delay, when WBEN-AM in Buffalo, New York was broadcasting time markers while being simulcast on an FM station that broadcast in HD, the FM signal did not carry the time signal; WBEN itself does not broadcast in HD.) Local signals may also be delayed.
The all-news radio stations of the CBS Radio Network, of which WCBS is the flagship, air a "bong" (at a frequency of 440 Hz – the international standard for the musical note A above middle C) that immediately precedes each top-of-the-hour network newscast. (The same bong could be heard on the CBS Television Network, at the top of the hour immediately before the beginning of any televised program, in the 1960s and 1970s.) An automated "chirp" at one second before the hour signals a switch to the radio network broadcast. As an example, KNX, the CBS Radio Network all-news station in Los Angeles, broadcasts this "bong" sound on the hour. However, due to buffering of the digital broadcast on some computers, this signal may be delayed by as much as 20 seconds from the actual start of the hour (this is presumably the same situation for all CBS Radio stations, as each station's digital stream is produced and distributed in a similar manner). Unlike program content, which is on a broadcast delay for content concerns, the time signal airs as-is over the air, meaning it can sometimes be talked over during a live news event or sports play-by-play. KYW-AM in Philadelphia broadcasts a time signal at the top of the hour along with its jingle.
Bonneville International-owned news/talk station KSL (AM-FM) in Salt Lake City uses a "clang" that originates from the Nauvoo Bell on Temple Square, in Salt Lake City, which has been a staple on the station since the early 1960s.
In Canada, the national English-language non-commercial CBC Radio One network broadcast the daily National Research Council Time Signal from 5 November 1939 until 9 October 2023. The signal aired daily at 1 pm Eastern Time. Its French-language counterpart, Radio-Canada, broadcasts a similar signal at noon. Vancouver radio station CKNW also broadcasts time signals, using a chime every half-hour. Time signals on CBC broadcasts may be delayed up to 3 seconds due to network processing delays between the local radio transmitter and the time signal origin in Ottawa. The CBC's predecessor, the Canadian National Railways Radio network, broadcast the time signal over its Ottawa station, CNRO (originally CKCH), at 9 pm daily and also on its Moncton station, CNRA, beginning in 1923. CNRA closed in 1931 but the broadcasts continued on CNRO when the station was acquired by the Canadian Radio Broadcasting Commission in 1933 and by the CBC in 1936 before going national in 1939.
In Australia, many information-based radio stations broadcast time signals at the beginning of the hour, and a speaking clock service was also available until October 2019. However, the VNG dedicated time signal service has been discontinued.
In Cuba, Radio Reloj (Spanish for "Clock Radio") is a radio station which broadcasts a time signal over continuous news.
Digital delay
Program material, including time signals, that is transmitted digitally (e.g. DAB, Internet radio) can be delayed by tens of seconds due to buffering and error correction, making time signals received on a digital radio unreliable when accuracy is needed.
See also
Car radio, Radio Data System (RDS)
Clock signal
Extended Data Services and PBS
Greenwich Time Signal
Low frequency, (LF)
Smart Personal Objects Technology, (SPOT)
Speaking clock
Synchronization
VBI, VITC
WWVB
Time synchronization in North America
Time transfer
Time and frequency transfer
Time synchronization
References
Further reading
External links
— mentions the time cannon at Perth
— mentions the time cannon at Cape Town
— The VNG Users' Consortium was working on ways to solve the problem of the lack of accurate time signals in Australia.
The Technical World magazine was published from 1904 to 1905 by the Armour Institute of Technology.
Horology | Time signal | [
"Physics"
] | 3,076 | [
"Spacetime",
"Horology",
"Physical quantities",
"Time"
] |
182,693 | https://en.wikipedia.org/wiki/Clock%20signal | In electronics and especially synchronous digital circuits, a clock signal (historically also known as logic beat) is an electronic logic signal (voltage or current) which oscillates between a high and a low state at a constant frequency and is used like a metronome to synchronize actions of digital circuits. In a synchronous logic circuit, the most common type of digital circuit, the clock signal is applied to all storage devices, flip-flops and latches, and causes them all to change state simultaneously, preventing race conditions.
A clock signal is produced by an electronic oscillator called a clock generator. The most common clock signal is in the form of a square wave with a 50% duty cycle. Circuits using the clock signal for synchronization may become active at either the rising edge, falling edge, or, in the case of double data rate, both in the rising and in the falling edges of the clock cycle.
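A minimal sketch in Python of such a clock signal (the function and parameter names are invented for illustration), modelling a 50% duty-cycle square wave and reporting the rising edges on which an edge-triggered circuit would act:

def clock_level(t, freq_hz=1.0, duty=0.5):
    """Return 1 (high) or 0 (low) for a square-wave clock at time t."""
    phase = (t * freq_hz) % 1.0          # position within the current cycle
    return 1 if phase < duty else 0

prev = 0
for step in range(20):
    t = step * 0.1                       # sample a 1 Hz clock every 0.1 s
    level = clock_level(t)
    if level == 1 and prev == 0:
        print(f"t={t:.1f} s: rising edge -> storage elements update")
    prev = level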
Digital circuits
Most integrated circuits (ICs) of sufficient complexity use a clock signal in order to synchronize different parts of the circuit, cycling at a rate slower than the worst-case internal propagation delays. In some cases, more than one clock cycle is required to perform a predictable action. As ICs become more complex, the problem of supplying accurate and synchronized clocks to all the circuits becomes increasingly difficult. The preeminent example of such complex chips is the microprocessor, the central component of modern computers, which relies on a clock from a crystal oscillator. The only exceptions are asynchronous circuits such as asynchronous CPUs.
A clock signal might also be gated, that is, combined with a controlling signal that enables or disables the clock signal for a certain part of a circuit. This technique is often used to save power by effectively shutting down portions of a digital circuit when they are not in use, but comes at a cost of increased complexity in timing analysis.
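Logically, a gated clock is simply the AND of the free-running clock with an enable signal: the gated portion of the circuit sees no edges, and so does no switching, while the enable is low. A hedged sketch in Python (invented names, with the key hardware caveat as a comment):

def gated_clock(clk: int, enable: int) -> int:
    # In real hardware the enable must only change while clk is low,
    # otherwise the AND gate can emit glitch pulses; this is part of the
    # "increased complexity in timing analysis" mentioned above.
    return clk & enable

clk_samples = [0, 1, 0, 1, 0, 1, 0, 1]   # free-running clock
enable      = [0, 0, 0, 1, 1, 1, 1, 0]   # gate opens, then closes
print([gated_clock(c, e) for c, e in zip(clk_samples, enable)])
# -> [0, 0, 0, 1, 0, 1, 0, 0]: no clock edges reach the gated block while disabled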
Single-phase clock
Most modern synchronous circuits use only a "single phase clock" – in other words, all clock signals are (effectively) transmitted on 1 wire.
Two-phase clock
In synchronous circuits, a "two-phase clock" refers to clock signals distributed on 2 wires, each with non-overlapping pulses. Traditionally one wire is called "phase 1" or "φ1" (phi1), the other wire carries the "phase 2" or "φ2" signal. Because the two phases are guaranteed non-overlapping, gated latches rather than edge-triggered flip-flops can be used to store state information so long as the inputs to latches on one phase only depend on outputs from latches on the other phase. Since a gated latch uses only four gates versus six gates for an edge-triggered flip-flop, a two phase clock can lead to a design with a smaller overall gate count but usually at some penalty in design difficulty and performance.
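A minimal sketch in Python (assumed, illustrative timing) of generating the two phases: each is high for less than half the period, with dead time in between, so that φ1 and φ2 are never high at the same time:

def two_phase(t, period=1.0, high_fraction=0.4):
    # Each phase is high for 40% of the period; the two 10% gaps
    # guarantee the non-overlap property described above.
    phase = (t / period) % 1.0
    phi1 = 1 if phase < high_fraction else 0                # high in [0.0, 0.4)
    phi2 = 1 if 0.5 <= phase < 0.5 + high_fraction else 0   # high in [0.5, 0.9)
    return phi1, phi2

for step in range(10):
    t = step * 0.1
    phi1, phi2 = two_phase(t)
    assert not (phi1 and phi2), "phases must never overlap"
    print(f"t={t:.1f}: phi1={phi1} phi2={phi2}")

Latches clocked on φ1 feed logic whose results are captured by latches clocked on φ2, so state can never race through two latch stages within one cycle.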
Metal oxide semiconductor (MOS) ICs typically used dual clock signals (a two-phase clock) in the 1970s. These were generated externally for both the Motorola 6800 and Intel 8080 microprocessors. The next generation of microprocessors incorporated the clock generation on chip. The 8080 uses a 2 MHz clock but the processing throughput is similar to the 1 MHz 6800. The 8080 requires more clock cycles to execute a processor instruction. Due to their dynamic logic, the 6800 has a minimum clock rate of 100 kHz and the 8080 has a minimum clock rate of 500 kHz. Higher speed versions of both microprocessors were released by 1976.
The 6501 requires an external 2-phase clock generator.
The MOS Technology 6502 uses the same 2-phase logic internally, but also includes a two-phase clock generator on-chip, so it only needs a single phase clock input, simplifying system design.
4-phase clock
Some early integrated circuits use four-phase logic, requiring a four phase clock input consisting of four separate, non-overlapping clock signals.
This was particularly common among early microprocessors such as the National Semiconductor IMP-16, Texas Instruments TMS9900, and the Western Digital MCP-1600 chipset used in the DEC LSI-11.
Four-phase clocks have only rarely been used in newer CMOS processors, such as the DEC WRL MultiTitan microprocessor and Intrinsity's Fast14 technology. Most modern microprocessors and microcontrollers use a single-phase clock.
Clock multiplier
Many modern microcomputers use a "clock multiplier" which multiplies a lower frequency external clock to the appropriate clock rate of the microprocessor. This allows the CPU to operate at a much higher frequency than the rest of the computer, which affords performance gains in situations where the CPU does not need to wait on an external factor (like memory or input/output).
Dynamic frequency change
The vast majority of digital devices do not require a clock at a fixed, constant frequency. As long as the minimum and maximum clock periods are respected, the time between clock edges can vary widely from one edge to the next and back again. Such digital devices work just as well with a clock generator that dynamically changes its frequency, such as spread-spectrum clock generation, dynamic frequency scaling, etc. Devices that use static logic do not even have a maximum clock period (in other words, a minimum clock frequency); such devices can be slowed and paused indefinitely, then resumed at full clock speed at any later time.
Other circuits
Some sensitive mixed-signal circuits, such as precision analog-to-digital converters, use sine waves rather than square waves as their clock signals, because square waves contain high-frequency harmonics that can interfere with the analog circuitry and cause noise. Such sine wave clocks are often differential signals, because this type of signal has twice the slew rate, and therefore half the timing uncertainty, of a single-ended signal with the same voltage range. Differential signals radiate less strongly than a single line. Alternatively, a single line shielded by power and ground lines can be used.
In CMOS circuits, gate capacitances are charged and discharged continually. A capacitor does not dissipate energy, but energy is wasted in the driving transistors. In reversible computing, inductors can be used to store this energy and reduce the energy loss, but they tend to be quite large. Alternatively, using a sine wave clock, CMOS transmission gates and energy-saving techniques, the power requirements can be reduced.
Distribution
The most effective way to get the clock signal to every part of a chip that needs it, with the lowest skew, is a metal grid. In a large microprocessor, the power used to drive the clock signal can be over 30% of the total power used by the entire chip. The whole structure with the gates at the ends and all amplifiers in between have to be loaded and unloaded every cycle. To save energy, clock gating temporarily shuts off part of the tree.
The clock distribution network (or clock tree, when this network forms a tree such as an H-tree) distributes the clock signal(s) from a common point to all the elements that need it. Since this function is vital to the operation of a synchronous system, much attention has been given to the characteristics of these clock signals and the electrical networks used in their distribution. Clock signals are often regarded as simple control signals; however, these signals have some very special characteristics and attributes.
Clock signals are typically loaded with the greatest fanout and operate at the highest speeds of any signal within the synchronous system. Since the data signals are provided with a temporal reference by the clock signals, the clock waveforms must be particularly clean and sharp. Furthermore, these clock signals are particularly affected by technology scaling (see Moore's law), in that long global interconnect lines become significantly more resistive as line dimensions are decreased. This increased line resistance is one of the primary reasons for the increasing significance of clock distribution on synchronous performance. Finally, the control of any differences and uncertainty in the arrival times of the clock signals can severely limit the maximum performance of the entire system and create catastrophic race conditions in which an incorrect data signal may latch within a register.
Most synchronous digital systems consist of cascaded banks of sequential registers with combinational logic between each set of registers. The functional requirements of the digital system are satisfied by the logic stages. Each logic stage introduces delay that affects timing performance, and the timing performance of the digital design can be evaluated relative to the timing requirements by a timing analysis. Often special consideration must be made to meet the timing requirements. For example, the global performance and local timing requirements may be satisfied by the careful insertion of pipeline registers into equally spaced time windows to satisfy critical worst-case timing constraints. The proper design of the clock distribution network helps ensure that critical timing requirements are satisfied and that no race conditions exist (see also clock skew).
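As a hedged illustration of what such a timing analysis checks on each register-to-register path (all delay values below are invented): the clock period must cover the launching register's clock-to-output delay, the worst-case combinational delay of the stage, the setup time of the capturing register, and any unfavourable clock skew between the two registers. In Python:

# Setup check for one register-to-register path (illustrative values, in ns)
t_clk_to_q = 0.20   # launch register clock-to-output delay
t_logic    = 1.50   # worst-case combinational delay of the stage
t_setup    = 0.15   # setup time of the capture register
t_skew     = 0.10   # unfavourable clock skew between the two registers

clock_period_ns = 2.00                     # i.e. a 500 MHz clock
required = t_clk_to_q + t_logic + t_setup + t_skew
slack = clock_period_ns - required
print(f"required {required:.2f} ns, slack {slack:+.2f} ns ->",
      "timing met" if slack >= 0 else "timing VIOLATED")

A negative slack on any path means the clock must slow down, the logic must be pipelined into shorter stages, or the clock distribution network must be rebalanced to reduce skew.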
The delay components that make up a general synchronous system are composed of the following three individual subsystems: the memory storage elements, the logic elements, and the clocking circuitry and distribution network.
Novel structures are currently under development to ameliorate these issues and provide effective solutions. Important areas of research include resonant clocking techniques ("resonant clock mesh"), on-chip optical interconnect, and local synchronization methodologies.
See also
References
Further reading
Eby G. Friedman (Ed.), Clock Distribution Networks in VLSI Circuits and Systems, IEEE Press, 1995.
Eby G. Friedman, Proceedings of the IEEE, Vol. 89, No. 5, pp. 665–692, May 2001.
"ISPD 2010 High Performance Clock Network Synthesis Contest", International Symposium on Physical Design, Intel, IBM, 2010.
D.-J. Lee, "High-performance and Low-power Clock Network Synthesis in the Presence of Variation", Ph.D. dissertation, University of Michigan, 2011.
I. L. Markov, D.-J. Lee, "Algorithmic Tuning of Clock Trees and Derived Non-Tree Structures", in Proc. Int'l. Conf. Comp.-Aided Design (ICCAD), 2011.
V. G. Oklobdzija, V. M. Stojanovic, D. M. Markovic, and N. M. Nedovic, Digital System Clocking: High-Performance and Low-Power Aspects, IEEE Press/Wiley-Interscience, 2003.
Mitch Dale, "The power of RTL Clock-gating", Electronic Systems Design Engineering Incorporating Chip Design, January 20, 2007.
Adapted from Eby Friedman's column in the ACM SIGDA e-newsletter by Igor Markov
Original text is available at https://web.archive.org/web/20100711135550/http://www.sigda.org/newsletter/2005/eNews_051201.html
Synchronization | Clock signal | [
"Engineering"
] | 2,324 | [
"Telecommunications engineering",
"Synchronization"
] |
182,695 | https://en.wikipedia.org/wiki/Chlorin | In organic chemistry, chlorins are tetrapyrrole pigments that are partially hydrogenated porphyrins. The parent chlorin is an unstable compound which undergoes air oxidation to porphine. The name chlorin derives from chlorophyll. Chlorophylls are magnesium-containing chlorins and occur as photosynthetic pigments in chloroplasts. The term "chlorin" strictly speaking refers to only compounds with the same ring oxidation state as chlorophyll.
Chlorins are excellent photosensitizing agents. Various synthetic chlorins analogues such as m-tetrahydroxyphenylchlorin (mTHPC) and mono-L-aspartyl chlorin e6 are effectively employed in experimental photodynamic therapy as photosensitizer.
Chlorophylls
The most abundant chlorin is the photosynthetic pigment chlorophyll. Unlike chlorins, chlorophylls have a fifth, ketone-containing ring. Diverse chlorophylls exist, such as chlorophyll a, chlorophyll b, chlorophyll d, chlorophyll e, chlorophyll f, and chlorophyll g. Chlorophylls usually feature magnesium as a central metal atom, replacing the two NH centers in the parent.
Variation
Microbes produce two reduced variants of chlorin, bacteriochlorins and isobacteriochlorins. Bacteriochlorins are found in some bacteriochlorophylls; the ring structure is produced by chlorophyllide a reductase (COR) reducing a chlorin ring at the C7–C8 double bond. Isobacteriochlorins are found in nature mostly as sirohydrochlorin, a biosynthetic intermediate of vitamin B12, produced without going through a chlorin. In living organisms, both are ultimately derived from uroporphyrinogen III, a near-universal intermediate in tetrapyrrole biosynthesis.
Synthetic chlorins
Numerous synthetic chlorins with different functional groups and/or ring modifications have been examined.
Contracted chlorins can be synthesised by reduction of a B(III)subporphyrin or by oxidation of the corresponding B(III)subbacteriochlorin. B(III)subchlorins have been directly synthesized as meso-ester B(III)subchlorins from a meso-diester tripyrromethane; this class of compounds shows very good fluorescence quantum yields and singlet-oxygen-producing efficiency.
See also
Corrin
Photodynamic therapy
Further reading
References
Biomolecules
Metabolism
Tetrapyrroles | Chlorin | [
"Chemistry",
"Biology"
] | 591 | [
"Natural products",
"Organic compounds",
"Cellular processes",
"Biomolecules",
"Structural biology",
"Biochemistry",
"Metabolism",
"Molecular biology"
] |
182,705 | https://en.wikipedia.org/wiki/Petroleum%20geology | Petroleum geology is the study of the origins, occurrence, movement, accumulation, and exploration of hydrocarbon fuels. It refers to the specific set of geological disciplines that are applied to the search for hydrocarbons (oil exploration).
Sedimentary basin analysis
Petroleum geology is principally concerned with the evaluation of seven key elements in sedimentary basins:
Source
Reservoir
Seal
Trap
Timing
Maturation
Migration
In general, all these elements must be assessed via a limited 'window' into the subsurface world, provided by one (or possibly more) exploration wells. These wells present only a one-dimensional segment through the Earth, and the skill of inferring three-dimensional characteristics from them is one of the most fundamental in petroleum geology. Recently, the availability of inexpensive, high-quality 3D seismic data (from reflection seismology) and data from various electromagnetic geophysical techniques (such as magnetotellurics) has greatly aided the accuracy of such interpretation. The following section discusses these elements in brief. For a more in-depth treatise, see the second half of this article below.
Evaluation of the source uses the methods of geochemistry to quantify the nature of organic-rich rocks which contain the precursors to hydrocarbons, such that the type and quality of expelled hydrocarbon can be assessed.
The reservoir is a porous and permeable lithological unit or set of units that holds the hydrocarbon reserves. Analysis of reservoirs at the simplest level requires an assessment of their porosity (to calculate the volume of in situ hydrocarbons) and their permeability (to calculate how easily hydrocarbons will flow out of them). Some of the key disciplines used in reservoir analysis are the fields of structural analysis, stratigraphy, sedimentology, and reservoir engineering.
The seal, or cap rock, is a unit with low permeability that impedes the escape of hydrocarbons from the reservoir rock. Common seals include evaporites, chalks and shales. Analysis of seals involves assessment of their thickness and extent, such that their effectiveness can be quantified.
The geological trap is the stratigraphic or structural feature that ensures the juxtaposition of reservoir and seal such that hydrocarbons remain trapped in the subsurface, rather than escaping (due to their natural buoyancy) and being lost.
Analysis of maturation involves assessing the thermal history of the source rock in order to make predictions of the amount and timing of hydrocarbon generation and expulsion.
Finally, careful studies of migration reveal information on how hydrocarbons move from source to reservoir and help quantify the source (or kitchen) of hydrocarbons in a particular area.
Major subdisciplines in petroleum geology
Several major subdisciplines exist in petroleum geology specifically to study the seven key elements discussed above.
Critical moment
The critical moment is the time of the generation, migration, and accumulation of most hydrocarbons in their primary traps. The migration and accumulation of hydrocarbons occur over a short period in relation to geologic time. These processes (generation, migration, and accumulation) occur near the end of the duration of a petroleum system, the duration being the span of time over which the crucial elements of the petroleum system accumulate.
The critical moment is crucial because it is identified from the burial history of the source rock at maximum burial depth, which is when most of the hydrocarbons are generated: approximately 50–90% of petroleum is generated and expelled at this point. The hydrocarbons then enter the oil window, the interval in which the source rock has the appropriate maturity and lies at the right depth for oil exploration. Geoscientists need this stratigraphic data on the petroleum system for analysis.
Source rock analysis
In terms of source rock analysis, several facts need to be established. Firstly, the question of whether there actually is any source rock in the area must be answered. Delineation and identification of potential source rocks depends on studies of the local stratigraphy, palaeogeography and sedimentology to determine the likelihood of organic-rich sediments having been deposited in the past.
If the likelihood of there being a source rock is thought to be high, the next matter to address is the state of thermal maturity of the source, and the timing of maturation. Maturation of source rocks (see diagenesis and fossil fuels) depends strongly on temperature, such that the majority of oil generation occurs within a limited temperature window; gas generation starts at similar temperatures but may continue to higher temperatures beyond that window. In order to determine the likelihood of oil/gas generation, therefore, the thermal history of the source rock must be calculated. This is performed with a combination of geochemical analysis of the source rock (to determine the type of kerogens present and their maturation characteristics) and basin modelling methods, such as back-stripping, to model the thermal gradient in the sedimentary column.
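A toy sketch in Python of the simplest such thermal model (the gradient and window temperatures are assumed, illustrative figures, not universal values): with a linear geothermal gradient, the burial depths at which a source rock enters and leaves an oil-generating temperature range follow directly.

surface_temp_c    = 15.0            # assumed mean surface temperature, degrees C
gradient_c_per_km = 30.0            # assumed linear geothermal gradient
oil_window_c      = (60.0, 120.0)   # an often-quoted oil-generation range

def depth_for_temp(temp_c):
    """Burial depth (km) at which the given temperature is reached."""
    return (temp_c - surface_temp_c) / gradient_c_per_km

top_km  = depth_for_temp(oil_window_c[0])
base_km = depth_for_temp(oil_window_c[1])
print(f"oil window spans roughly {top_km:.1f} to {base_km:.1f} km of burial")
# -> about 1.5 to 3.5 km with these assumptions

Real basin models replace the constant gradient with a reconstructed burial and heat-flow history, which is what back-stripping provides.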
Geochemical analysis
Scientists began to study petroleum geochemistry seriously in the mid-twentieth century. Geochemistry was originally utilized for surface prospecting for subsurface hydrocarbons. Today geochemistry serves the petroleum industry by helping to identify effective petroleum systems. Geochemical methods are relatively cost-effective and allow geologists to assess reservoir-related issues. Once an oil-to-source-rock correlation is found, petroleum geologists use this information to render a 3D model of the basin. They can then assess the timing of generation, migration, and accumulation relative to trap formation. This aids in the decision-making process on whether further exploration is necessary. Additionally, this can increase recoveries of the petroleum remaining in reservoirs that were initially deemed unrecoverable.
Basin analysis
A full scale basin analysis is usually carried out prior to defining leads and prospects for future drilling. This study tackles the petroleum system and studies source rock (presence and quality); burial history; maturation (timing and volumes); migration and focus; and potential regional seals and major reservoir units (that define carrier beds). All these elements are used to investigate where potential hydrocarbons might migrate towards. Traps and potential leads and prospects are then defined in the area that is likely to have received hydrocarbons.
Exploration stage
Although a basin analysis is usually part of the first study a company conducts prior to moving into an area for future exploration, it is also sometimes conducted during the exploration phase. Exploration geology comprises all the activities and studies necessary for finding new hydrocarbon occurrences. Usually seismic (or 3D seismic) studies are shot, and old exploration data (seismic lines, well logs, reports) are used to expand upon the new studies. Sometimes gravity and magnetic studies are conducted, and oil seeps and spills are mapped to find potential areas for hydrocarbon occurrences. As soon as a significant hydrocarbon occurrence is found by an exploration or wildcat well, the appraisal stage starts.
Appraisal stage
The appraisal stage is used to delineate the extent of the discovery. Hydrocarbon reservoir properties, connectivity, hydrocarbon type and gas-oil and oil-water contacts are determined to calculate potential recoverable volumes. This is usually done by drilling more appraisal wells around the initial exploration well. Production tests may also give insight in reservoir pressures and connectivity. Geochemical and petrophysical analysis gives information on the type (viscosity, chemistry, API, carbon content, etc.) of the hydrocarbon and the nature of the reservoir (porosity, permeability, etc.).
Production stage
After a hydrocarbon occurrence has been discovered and appraisal has indicated it is a commercial find, the production stage is initiated. This stage focuses on extracting the hydrocarbons in a controlled way (without damaging the formation, within commercial favorable volumes, etc.). Production wells are drilled and completed in strategic positions. 3D seismic is usually available by this stage to target wells precisely for optimal recovery. Sometimes enhanced recovery (steam injection, pumps, etc.) is used to extract more hydrocarbons or to redevelop abandoned fields.
Reservoir analysis
The existence of a reservoir rock (typically, sandstones and fractured limestones) is determined through a combination of regional studies (i.e. analysis of other wells in the area), stratigraphy and sedimentology (to quantify the pattern and extent of sedimentation) and seismic interpretation. Once a possible hydrocarbon reservoir is identified, the key physical characteristics of a reservoir that are of interest to a hydrocarbon explorationist are its bulk rock volume, net-to-gross ratio, porosity and permeability.
Bulk rock volume, or the gross rock volume of rock above any hydrocarbon-water contact, is determined by mapping and correlating sedimentary packages. The net-to-gross ratio, typically estimated from analogues and wireline logs, is used to calculate the proportion of the sedimentary packages that contains reservoir rocks. The bulk rock volume multiplied by the net-to-gross ratio gives the net rock volume of the reservoir. The net rock volume multiplied by porosity gives the total hydrocarbon pore volume, i.e. the volume within the sedimentary package that fluids (importantly, hydrocarbons and water) can occupy. The summation of these volumes (see STOIIP and GIIP) for a given exploration prospect will allow explorers and commercial analysts to determine whether a prospect is financially viable.
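A hedged sketch in Python of the volumetric chain this paragraph describes (every input value is invented; the water-saturation and formation-volume-factor terms are not discussed above but belong to the standard STOIIP formula):

gross_rock_volume_m3 = 2.0e9  # bulk rock volume above the hydrocarbon-water contact
net_to_gross         = 0.6    # fraction of the package that is reservoir rock
porosity             = 0.22   # pore fraction of the reservoir rock
water_saturation     = 0.30   # pore fraction occupied by water, not hydrocarbons
formation_vol_factor = 1.2    # reservoir-to-surface oil shrinkage (Bo)

net_rock_volume = gross_rock_volume_m3 * net_to_gross
pore_volume     = net_rock_volume * porosity
hc_pore_volume  = pore_volume * (1.0 - water_saturation)
stoiip_m3       = hc_pore_volume / formation_vol_factor  # stock-tank oil in place

print(f"net rock volume:         {net_rock_volume:.2e} m^3")
print(f"hydrocarbon pore volume: {hc_pore_volume:.2e} m^3")
print(f"STOIIP:                  {stoiip_m3:.2e} m^3")

Commercial analysts then multiply such in-place volumes by an expected recovery factor to judge whether the prospect is financially viable.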
Traditionally, porosity and permeability were determined through the study of drilling samples, analysis of cores obtained from the wellbore, examination of contiguous parts of the reservoir that outcrop at the surface (see e.g. Guerriero et al., 2009, 2011, in references below) and by the technique of formation evaluation using wireline tools passed down the well itself. Modern advances in seismic data acquisition and processing have meant that seismic attributes of subsurface rocks are readily available and can be used to infer physical/sedimentary properties of the rocks themselves.
See also
Bituminous rocks
Controlled source electro-magnetic
References
Further reading
Brian Frehner. Finding Oil: The Nature of Petroleum Geology, 1859–1920 (University of Nebraska Press; 2011) 232 pages
External links
Petroleum Geology — A forum dedicated to all aspects of petroleum geology from exploration to production
Oil On My Shoes — Website devoted to the science and practical application of petroleum geology
AAPG — American Association of Petroleum Geologists
PetroleumGeology.org — Website about the history and technology of petroleum geology
Economic geology | Petroleum geology | [
"Chemistry"
] | 2,136 | [
"Petroleum",
"Petroleum geology"
] |
182,727 | https://en.wikipedia.org/wiki/Mach%27s%20principle | In theoretical physics, particularly in discussions of gravitation theories, Mach's principle (or Mach's conjecture) is the name given by Albert Einstein to an imprecise hypothesis often credited to the physicist and philosopher Ernst Mach. The hypothesis attempted to explain how rotating objects, such as gyroscopes and spinning celestial bodies, maintain a frame of reference.
The proposition is that the existence of absolute rotation (the distinction of local inertial frames vs. rotating reference frames) is determined by the large-scale distribution of matter, as exemplified by this anecdote:
You are standing in a field looking at the stars. Your arms are resting freely at your side, and you see that the distant stars are not moving. Now start spinning. The stars are whirling around you and your arms are pulled away from your body. Why should your arms be pulled away when the stars are whirling? Why should they be dangling freely when the stars don't move?
Mach's principle says that this is not a coincidence—that there is a physical law that relates the motion of the distant stars to the local inertial frame. If you see all the stars whirling around you, Mach suggests that there is some physical law which would make it so you would feel a centrifugal force. There are a number of rival formulations of the principle, often stated in vague ways like "mass out there influences inertia here". A very general statement of Mach's principle is "local physical laws are determined by the large-scale structure of the universe".
Mach's concept was a guiding factor in Einstein's development of the general theory of relativity. Einstein realized that the overall distribution of matter would determine the metric tensor which indicates which frame is stationary with respect to rotation. Frame-dragging and conservation of gravitational angular momentum makes this into a true statement in the general theory in certain solutions. But because the principle is so vague, many distinct statements have been made which would qualify as a Mach principle, some of which are false. The Gödel rotating universe is a solution of the field equations that is designed to disobey Mach's principle in the worst possible way. In this example, the distant stars seem to be revolving faster and faster as one moves further away. This example does not completely settle the question of the physical relevance of the principle because it has closed timelike curves.
History
Mach put forth the idea in his book The Science of Mechanics (1883 in German, 1893 in English). Before Mach's time, the basic idea also appears in the writings of George Berkeley. After Mach, the book Absolute or Relative Motion? (1896) by Benedict Friedlaender and his brother Immanuel contained ideas similar to Mach's principle.
Einstein's use of the principle
There is a fundamental issue in relativity theory: if all motion is relative, how can we measure the inertia of a body? We must measure the inertia with respect to something else. But what if we imagine a particle completely on its own in the universe? We might hope to still have some notion of its state of motion. Mach's principle is sometimes interpreted as the statement that such a particle's state of motion has no meaning in that case.
The principle is embodied in Mach's critique of Newton's absolute space rather than stated by him as a single explicit formula. Albert Einstein seemed to view Mach's principle as the statement that inertia originates in a kind of interaction between bodies.
In this sense, at least some of Mach's principles are related to philosophical holism. Mach's suggestion can be taken as the injunction that gravitation theories should be relational theories. Einstein brought the principle into mainstream physics while working on general relativity. Indeed, it was Einstein who first coined the phrase Mach's principle. There is much debate as to whether Mach really intended to suggest a new physical law since he never states it explicitly.
The writing in which Einstein found inspiration was Mach's book The Science of Mechanics (1883, tr. 1893), where the philosopher criticized Newton's idea of absolute space, in particular the argument that Newton gave sustaining the existence of an advantaged reference system: what is commonly called "Newton's bucket argument".
In his Philosophiae Naturalis Principia Mathematica, Newton tried to demonstrate that one can always decide if one is rotating with respect to the absolute space, by measuring the apparent forces that arise only when an absolute rotation is performed. If a bucket is filled with water and made to rotate, initially the water remains still, but then, gradually, the walls of the vessel communicate their motion to the water, making its surface curve and climb up the sides of the bucket, because of the centrifugal forces produced by the rotation. This experiment demonstrates that the centrifugal forces arise only when the water is in rotation with respect to the absolute space (represented here by the earth's reference frame, or better, the distant stars); when, instead, the bucket was rotating with respect to the water, no centrifugal forces were produced, indicating that the water was still with respect to the absolute space.
Mach, in his book, says that the bucket experiment only demonstrates that when the water is in rotation with respect to the bucket no centrifugal forces are produced, and that we cannot know how the water would behave if in the experiment the bucket's walls were increased in depth and width until they became leagues big. In Mach's idea this concept of absolute motion should be substituted with a total relativism in which every motion, uniform or accelerated, has sense only in reference to other bodies (i.e., one cannot simply say that the water is rotating, but must specify if it's rotating with respect to the vessel or to the earth). In this view, the apparent forces that seem to permit discrimination between relative and "absolute" motions should only be considered as an effect of the particular asymmetry that there is in our reference system between the bodies which we consider in motion, that are small (like buckets), and the bodies that we believe are still (the earth and distant stars), that are overwhelmingly bigger and heavier than the former.
This same thought had been expressed by the philosopher George Berkeley in his De Motu. It is then not clear, in the passages from Mach just mentioned, if the philosopher intended to formulate a new kind of physical action between heavy bodies. This physical mechanism should determine the inertia of bodies, in a way that the heavy and distant bodies of our universe should contribute the most to the inertial forces. More likely, Mach only suggested a mere "redescription of motion in space as experiences that do not invoke the term space". What is certain is that Einstein interpreted Mach's passage in the former way, originating a long-lasting debate.
Most physicists believe Mach's principle was never developed into a quantitative physical theory that would explain a mechanism by which the stars can have such an effect. Mach himself never made his principle exactly clear. Although Einstein was intrigued and inspired by Mach's principle, Einstein's formulation of the principle is not a fundamental assumption of general relativity, although the principle of equivalence of gravitational and inertial mass is most certainly fundamental.
Mach's principle in general relativity
Because intuitive notions of distance and time no longer apply, what exactly is meant by "Mach's principle" in general relativity is even less clear than in Newtonian physics and at least 21 formulations of Mach's principle are possible, some being considered more strongly Machian than others. A relatively weak formulation is the assertion that the motion of matter in one place should affect which frames are inertial in another.
Einstein, before completing his development of the general theory of relativity, found an effect which he interpreted as being evidence of Mach's principle. We assume a fixed background for conceptual simplicity, construct a large spherical shell of mass, and set it spinning in that background. The reference frame in the interior of this shell will precess with respect to the fixed background. This effect is known as the Lense–Thirring effect. Einstein was so satisfied with this manifestation of Mach's principle that he wrote a letter to Mach expressing his enthusiasm.
The Lense–Thirring effect certainly satisfies the very basic and broad notion that "matter there influences inertia here". The plane of the pendulum would not be dragged around if the shell of matter were not present, or if it were not spinning. As for the statement that "inertia originates in a kind of interaction between bodies", this, too, could be interpreted as true in the context of the effect.
More fundamental to the problem, however, is the very existence of a fixed background, which Einstein describes as "the fixed stars". Modern relativists see the imprints of Mach's principle in the initial-value problem. Essentially, we humans seem to wish to separate spacetime into slices of constant time. When we do this, Einstein's equations can be decomposed into one set of equations, which must be satisfied on each slice, and another set, which describe how to move between slices. The equations for an individual slice are elliptic partial differential equations. In general, this means that only part of the geometry of the slice can be given by the scientist, while the geometry everywhere else will then be dictated by Einstein's equations on the slice.
In the context of an asymptotically flat spacetime, the boundary conditions are given at infinity. Heuristically, the boundary conditions for an asymptotically flat universe define a frame with respect to which inertia has meaning. By performing a Lorentz transformation on the distant universe, of course, this inertia can also be transformed.
A stronger form of Mach's principle applies in Wheeler–Mach–Einstein spacetimes, which require spacetime to be spatially compact and globally hyperbolic. In such universes Mach's principle can be stated as the distribution of matter and field energy-momentum (and possibly other information) at a particular moment in the universe determines the inertial frame at each point in the universe (where "a particular moment in the universe" refers to a chosen Cauchy surface).
There have been other attempts to formulate a theory that is more fully Machian, such as the Brans–Dicke theory and the Hoyle–Narlikar theory of gravity, but most physicists argue that none have been fully successful. At an exit poll of experts, held in Tübingen in 1993, when asked the question "Is general relativity perfectly Machian?", 3 respondents replied "yes", and 22 replied "no". To the question "Is general relativity with appropriate boundary conditions of closure of some kind very Machian?" the result was 14 "yes" and 7 "no".
However, Einstein was convinced that a valid theory of gravity would necessarily have to include the relativity of inertia.
Inertial induction
In 1953, in order to express Mach's principle in quantitative terms, the Cambridge University physicist Dennis W. Sciama proposed the addition of an acceleration-dependent term to the Newtonian gravitation equation. Sciama's acceleration-dependent term took the form Gm₁m₂a/(c²r), where r is the distance between the particles, m₁ and m₂ are their masses, G is the gravitational constant, a is the relative acceleration, and c represents the speed of light in vacuum. Sciama referred to the effect of the acceleration-dependent term as inertial induction.
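A hedged sketch (a schematic reading of Sciama's argument, not his full derivation) shows how such a term ties inertia to the distant universe. Summing the acceleration-dependent field over every mass mᵢ at distance rᵢ in the universe gives a reaction force on a unit test mass of

f_ind = (a/c²) Σᵢ Gmᵢ/rᵢ = (|Φ|/c²) a

where Φ is the Newtonian gravitational potential of the whole universe at the particle. If |Φ| ≈ c², the reaction reproduces the familiar inertial resistance f = a per unit mass, so that inertia here would indeed be induced by the matter out there.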
Variations in the statement of the principle
The broad notion that "mass there influences inertia here" has been expressed in several forms.
Hermann Bondi and Joseph Samuel have listed eleven distinct statements that can be called Mach principles, labelled Mach0 through Mach10 (taking inspiration from the Mach number). Though their list is not necessarily exhaustive, it does give a flavor for the variety possible.
The universe, as represented by the average motion of distant galaxies, does not appear to rotate relative to local inertial frames.
Newton's gravitational constant G is a dynamical field.
An isolated body in otherwise empty space has no inertia.
Local inertial frames are affected by the cosmic motion and distribution of matter.
The universe is spatially closed.
The total energy, angular and linear momentum of the universe are zero.
Inertial mass is affected by the global distribution of matter.
If you take away all matter, there is no more space.
Ω ≡ 4πρGT²/3 is a definite number, of order unity, where ρ is the mean density of matter in the universe, and T is the Hubble time.
The theory contains no absolute elements.
Overall rigid rotations and translations of a system are unobservable.
See also
Notes
References
Further reading
This textbook, among other writings by Sciama, helped revive interest in Mach's principle.
External links
Ernst Mach, The Science of Mechanics (tr. 1893) at Archive.org
"Mach's Principle" (1995) from Einstein Studies vol. 6 (13MB PDF)
(originally published in Italian as Gasco E. "Il contributo di mach sull'origine dell'inerzia." Quaderni di Storia della Fisica, 2004.)
Ernst Mach
Theories of gravity
Principles
Rotation
Philosophy of astronomy
Thought experiments in physics | Mach's principle | [
"Physics",
"Astronomy"
] | 2,714 | [
"Physical phenomena",
"Philosophy of astronomy",
"Theoretical physics",
"Classical mechanics",
"Rotation",
"Motion (physics)",
"Theories of gravity"
] |
182,734 | https://en.wikipedia.org/wiki/Magnetohydrodynamic%20drive | A magnetohydrodynamic drive or MHD accelerator is a method for propelling vehicles using only electric and magnetic fields with no moving parts, accelerating an electrically conductive propellant (liquid or gas) with magnetohydrodynamics. The fluid is directed to the rear and as a reaction, the vehicle accelerates forward.
Studies examining MHD in the field of marine propulsion began in the late 1950s.
Few large-scale marine prototypes have been built, limited by the low electrical conductivity of seawater. Increasing current density is limited by Joule heating and water electrolysis in the vicinity of electrodes, and increasing the magnetic field strength is limited by the cost, size and weight (as well as technological limitations) of electromagnets and the power available to feed them. In 2023 DARPA launched the PUMP program to build a marine engine using superconducting magnets expected to reach a field strength of 20 Tesla.
Stronger technical limitations apply to air-breathing MHD propulsion (where ambient air is ionized) that is still limited to theoretical concepts and early experiments.
Plasma propulsion engines using magnetohydrodynamics for space exploration have also been actively studied as such electromagnetic propulsion offers high thrust and high specific impulse at the same time, and the propellant would last much longer than in chemical rockets.
Principle
The working principle involves the acceleration of an electrically conductive fluid (which can be a liquid or an ionized gas called a plasma) by the Lorentz force, resulting from the cross product of an electric current (motion of charge carriers accelerated by an electric field applied between two electrodes) with a perpendicular magnetic field. The Lorentz force accelerates all charged particles, positive and negative species (in opposite directions). If either positive or negative species dominate the vehicle is put in motion in the opposite direction from the net charge.
This is the same working principle as an electric motor (more exactly a linear motor) except that in an MHD drive, the solid moving rotor is replaced by the fluid acting directly as the propellant. As with all electromagnetic devices, an MHD accelerator is reversible: if the ambient working fluid is moving relatively to the magnetic field, charge separation induces an electric potential difference that can be harnessed with electrodes: the device then acts as a power source with no moving parts, transforming the kinetic energy of the incoming fluid into electricity, called an MHD generator.
As the Lorentz force in an MHD converter does not act on a single isolated charged particle nor on electrons in a solid electrical wire, but on a continuous charge distribution in motion, it is a "volumetric" (body) force, a force per unit volume:

f = ρE + J × B

where f is the force density (force per unit volume), ρ the charge density (charge per unit volume), E the electric field, J the current density (current per unit area) and B the magnetic field.
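A rough sketch in Python of what this force law implies for a seawater thruster (every figure below is assumed for illustration): Ohm's law gives the current density driven between the electrodes, and the J × B product gives the thrust per unit volume of duct.

sigma_seawater = 5.0      # electrical conductivity of seawater, S/m (approximate)
E_field        = 100.0    # assumed applied electric field, V/m
B_field        = 4.0      # assumed magnetic flux density, tesla
duct_volume    = 1.0      # assumed active duct volume, m^3

J = sigma_seawater * E_field           # current density, A/m^2 (fluid initially at rest)
force_density = J * B_field            # Lorentz body force, N/m^3 (J perpendicular to B)
thrust_n = force_density * duct_volume
print(f"J = {J:.0f} A/m^2, f = {force_density:.0f} N/m^3, thrust ~ {thrust_n:.0f} N")
# -> about 2,000 N from a cubic metre of duct

The modest result, despite a strong superconducting-class field, illustrates why the low electrical conductivity of seawater is the limiting factor noted above, and why raising the current density runs into Joule heating and electrolysis at the electrodes.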
Typology
MHD thrusters are classified in two categories according to the way the electromagnetic fields operate:
Conduction devices when a direct current flows in the fluid due to an applied voltage between pairs of electrodes, the magnetic field being steady.
Induction devices when alternating currents are induced by a rapidly varying magnetic field, as eddy currents. No electrodes are required in this case.
As induction MHD accelerators are electrodeless, they do not exhibit the common issues related to conduction systems (especially Joule heating, bubbles and redox from electrolysis) but need much more intense peak magnetic fields to operate. Since one of the biggest issues with such thrusters is the limited energy available on-board, induction MHD drives have not been developed out of the laboratory.
Both systems can put the working fluid in motion according to two main designs:
Internal flow when the fluid is accelerated within and propelled back out of a nozzle of tubular or ring-shaped cross-section, the MHD interaction being concentrated within the pipe (similarly to rocket or jet engines).
External flow when the fluid is accelerated around the whole wetted area of the vehicle, the electromagnetic fields extending around the body of the vehicle. The propulsion force results from the pressure distribution on the shell (as lift on a wing, or how ciliate microorganisms such as Paramecium move water around them).
Internal flow systems concentrate the MHD interaction in a limited volume, preserving stealth characteristics. External field systems on the contrary have the ability to act on a very large expanse of surrounding water volume with higher efficiency and the ability to decrease drag, increasing the efficiency even further.
Marine propulsion
MHD has no moving parts, which means that a good design might be silent, reliable, and efficient. Additionally, the MHD design eliminates many of the wear and friction pieces of the drivetrain with a directly driven propeller by an engine. Problems with current technologies include expense and slow speed compared to a propeller driven by an engine. The extra expense is from the large generator that must be driven by an engine. Such a large generator is not required when an engine directly drives a propeller.
The first prototype, a 3-meter (10-feet) long submarine called EMS-1, was designed and tested in 1966 by Stewart Way, a professor of mechanical engineering at the University of California, Santa Barbara. Way, on leave from his job at Westinghouse Electric, assigned his senior year undergraduate students to build the operational unit. This MHD submarine operated on batteries delivering power to electrodes and electromagnets, which produced a magnetic field of 0.015 tesla. The cruise speed was about 0.4 meter per second (15 inches per second) during the test in the bay of Santa Barbara, California, in accordance with theoretical predictions.
Later, a Japanese prototype, the 3.6-meter long "ST-500", achieved speeds of up to 0.6 m/s in 1979.
In 1991, the world's first full-size prototype, Yamato 1, was completed in Japan after six years of research and development (R&D) by the Ship & Ocean Foundation (later known as the Ocean Policy Research Foundation). The ship successfully carried a crew of ten plus passengers in Kobe Harbour in June 1992.
Small-scale ship models were later built and studied extensively in the laboratory, leading to successful comparisons between the measurements and the theoretical prediction of ship terminal speeds.
Military research about underwater MHD propulsion included high-speed torpedoes, remotely operated underwater vehicles (ROV), autonomous underwater vehicles (AUV), up to larger ones such as submarines.
Aircraft propulsion
Passive flow control
First studies of the interaction of plasmas with hypersonic flows around vehicles date back to the late 1950s, with the concept of a new kind of thermal protection system for space capsules during high-speed reentry. As low-pressure air is naturally ionized at such very high velocities and altitude, it was thought to use the effect of a magnetic field produced by an electromagnet to replace thermal ablative shields by a "magnetic shield". Hypersonic ionized flow interacts with the magnetic field, inducing eddy currents in the plasma. The current combines with the magnetic field to give Lorentz forces that oppose the flow and detach the bow shock wave further ahead of the vehicle, lowering the heat flux which is due to the brutal recompression of air behind the stagnation point. Such passive flow control studies are still ongoing, but a large-scale demonstrator has yet to be built.
Active flow control
Active flow control by MHD force fields on the contrary involves a direct and imperious action of forces to locally accelerate or slow down the airflow, modifying its velocity, direction, pressure, friction, heat flux parameters, in order to preserve materials and engines from stress, allowing hypersonic flight. It is a field of magnetohydrodynamics also called magnetogasdynamics, magnetoaerodynamics or magnetoplasma aerodynamics, as the working fluid is the air (a gas instead of a liquid) ionized to become electrically conductive (a plasma).
Air ionization is achieved at high altitude (electrical conductivity of air increases as atmospheric pressure reduces according to Paschen's law) using various techniques: high voltage electric arc discharge, RF (microwaves) electromagnetic glow discharge, laser, e-beam or betatron, radioactive source… with or without seeding of low ionization potential alkali substances (like caesium) into the flow.
MHD studies applied to aeronautics try to extend the domain of hypersonic planes to higher Mach regimes:
Action on the boundary layer to prevent laminar flow from becoming turbulent.
Shock wave mitigation for thermal control and reduction of the wave drag and form drag. Some theoretical studies suggest the flow velocity could be controlled everywhere on the wetted area of an aircraft, so shock waves could be totally cancelled when using enough power.
Inlet flow control.
Airflow velocity reduction upstream to feed a scramjet by the use of an MHD generator section combined with an MHD accelerator downstream at the exhaust nozzle, powered by the generator through an MHD bypass system.
The Russian project Ayaks (Ajax) is an example of an MHD-controlled hypersonic aircraft concept. A US program also exists to design a hypersonic MHD bypass system, the Hypersonic Vehicle Electric Power System (HVEPS). A working prototype was completed in 2017 under development by General Atomics and the University of Tennessee Space Institute, sponsored by the US Air Force Research Laboratory. These projects aim to develop MHD generators feeding MHD accelerators for a new generation of high-speed vehicles. Such MHD bypass systems are often designed around a scramjet engine, but turbojets, which are easier to design, are also considered, as well as subsonic ramjets.
Such studies cover a field of resistive MHD with magnetic Reynolds number ≪ 1 using nonthermal weakly ionized gases, making the development of demonstrators much more difficult to realize than for MHD in liquids. "Cold plasmas" with magnetic fields are subject to the electrothermal instability occurring at a critical Hall parameter, which makes full-scale developments difficult.
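The magnetic Reynolds number mentioned above compares magnetic induction to resistive diffusion. A minimal Python sketch, with illustrative values for conductivity, speed, and length scale that are assumptions rather than figures from this article, shows why both seawater drives and weakly ionized air fall in the resistive (Rm ≪ 1) regime:

import math

MU0 = 4e-7 * math.pi  # vacuum permeability, H/m

def magnetic_reynolds(sigma, v, L):
    # Rm = mu0 * sigma * v * L; Rm << 1 means the induced field is
    # negligible (resistive MHD), so the applied field dominates.
    return MU0 * sigma * v * L

# Assumed example values, for illustration only:
print(magnetic_reynolds(sigma=5.0, v=10.0, L=10.0))     # seawater ship: ~6e-4
print(magnetic_reynolds(sigma=100.0, v=2000.0, L=1.0))  # weakly ionized air: ~0.25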
Prospects
MHD propulsion has been considered as the main propulsion system for both marine and space vessels, since there is no need to produce lift to counter the gravity of Earth in water (due to buoyancy) or in space (due to weightlessness); this condition does not hold for flight in the atmosphere.
Nonetheless, assuming the problem of the electric power source is solved (for example, with the availability of a still-missing multi-megawatt compact fusion reactor), one could imagine future aircraft of a new kind, silently powered by MHD accelerators and able to ionize and direct enough air downward to lift several tonnes. As external flow systems can control the flow over the whole wetted area, limiting thermal issues at high speeds, ambient air would be ionized and radially accelerated by Lorentz forces around an axisymmetric body (shaped as a cylinder, a cone, a sphere…), the entire airframe being the engine. Lift and thrust would arise as a consequence of a pressure difference between the upper and lower surfaces, induced by the Coandă effect. In order to maximize this pressure difference between the two opposite sides, and since the most efficient MHD converters (with a high Hall effect) are disk-shaped, such an MHD aircraft would preferably be flattened to take the shape of a biconvex lens. Having no wings or airbreathing jet engines, it would share no similarities with conventional aircraft, but it would behave like a helicopter whose rotor blades had been replaced by a "purely electromagnetic rotor" with no moving parts, sucking the air downward. Such concepts of flying MHD disks have been developed in the peer-reviewed literature since the mid-1970s, mainly by the physicists Leik Myrabo with the Lightcraft and Subrata Roy with the Wingless Electromagnetic Air Vehicle (WEAV).
These futuristic visions have been advertised in the media although they still remain beyond the reach of modern technology.
Spacecraft propulsion
A number of experimental methods of spacecraft propulsion are based on magnetohydrodynamics. As this kind of MHD propulsion involves compressible fluids in the form of plasmas (ionized gases) it is also referred to as magnetogasdynamics or magnetoplasmadynamics.
In such electromagnetic thrusters, the working fluid is usually ionized hydrazine, xenon, or lithium. Depending on the propellant used, it can be seeded with an alkali metal such as potassium or caesium to improve its electrical conductivity. All charged species within the plasma, from positive and negative ions to free electrons, as well as neutral atoms through collisions, are accelerated in the same direction by the Lorentz "body" force, which results from the combination of a magnetic field with an orthogonal electric field (hence the name "cross-field accelerator"), these fields not being in the direction of the acceleration. This is a fundamental difference from ion thrusters, which rely on electrostatics to accelerate only positive ions using the Coulomb force along a high-voltage electric field.
First experimental studies involving cross-field plasma accelerators (square channels and rocket nozzles) date back to the late 1950s. Such systems provide greater thrust and higher specific impulse than conventional chemical rockets and even modern ion drives, at the cost of a higher required energy density.
Other devices studied nowadays besides cross-field accelerators include the magnetoplasmadynamic thruster, sometimes referred to as the Lorentz force accelerator (LFA), and the electrodeless pulsed inductive thruster (PIT).
Even today, these systems are not ready to be launched in space, as they still lack a suitable compact power source offering enough energy density (such as hypothetical fusion reactors) to feed the power-hungry electromagnets, especially pulsed inductive ones. The rapid ablation of electrodes under the intense thermal flux is also a concern. For these reasons, studies remain largely theoretical and experiments are still conducted in the laboratory, although over 60 years have passed since the first research on this kind of thruster.
Fiction
Oregon, a ship in the Oregon Files series of books by author Clive Cussler, has a magnetohydrodynamic drive. This allows the ship to turn very sharply and brake instantly, instead of gliding for a few miles. In Valhalla Rising, Clive Cussler writes the same drive into the powering of Captain Nemo's Nautilus.
The film adaptation of The Hunt for Red October popularized the magnetohydrodynamic drive as a "caterpillar drive" for submarines, a nearly undetectable "silent drive" intended to achieve stealth in submarine warfare. In reality, the current traveling through the water would create gases and noise, and the magnetic fields would induce a detectable magnetic signature. In the film, it was suggested that this sound could be confused with geological activity. In the novel from which the film was adapted, the caterpillar that Red October used was actually a pump-jet of the so-called "tunnel drive" type (the tunnels provided acoustic camouflage for the cavitation from the propellers).
In the Ben Bova novel The Precipice, the ship where some of the action took place, Starpower 1, built to prove that exploration and mining of the Asteroid Belt was feasible and potentially profitable, had a magnetohydrodynamic drive mated to a fusion power plant.
See also
Electrohydrodynamics
Lorentz force, relates electric and magnetic fields to propulsion force
References
External links
Demonstrate Magnetohydrodynamic Propulsion in a Minute
Marine propulsion
Fluid dynamics
Plasma technology and applications
Magnetic propulsion devices | Magnetohydrodynamic drive | [
"Physics",
"Chemistry",
"Engineering"
] | 3,219 | [
"Plasma physics",
"Plasma technology and applications",
"Chemical engineering",
"Marine engineering",
"Piping",
"Marine propulsion",
"Fluid dynamics"
] |
182,745 | https://en.wikipedia.org/wiki/Johnson%E2%80%93Nyquist%20noise | Johnson–Nyquist noise (thermal noise, Johnson noise, or Nyquist noise) is the electronic noise generated by the thermal agitation of the charge carriers (usually the electrons) inside an electrical conductor at equilibrium, which happens regardless of any applied voltage. Thermal noise is present in all electrical circuits, and in sensitive electronic equipment (such as radio receivers) can drown out weak signals, and can be the limiting factor on sensitivity of electrical measuring instruments. Thermal noise is proportional to absolute temperature, so some sensitive electronic equipment such as radio telescope receivers are cooled to cryogenic temperatures to improve their signal-to-noise ratio. The generic, statistical physical derivation of this noise is called the fluctuation-dissipation theorem, where generalized impedance or generalized susceptibility is used to characterize the medium.
Thermal noise in an ideal resistor is approximately white, meaning that its power spectral density is nearly constant throughout the frequency spectrum (Figure 2). When limited to a finite bandwidth and viewed in the time domain (as sketched in Figure 1), thermal noise has a nearly Gaussian amplitude distribution.
For the general case, this definition applies to charge carriers in any type of conducting medium (e.g. ions in an electrolyte), not just resistors. Thermal noise is distinct from shot noise, which consists of additional current fluctuations that occur when a voltage is applied and a macroscopic current starts to flow.
History of thermal noise
In 1905, in one of Albert Einstein's Annus mirabilis papers, the theory of Brownian motion was first solved in terms of thermal fluctuations. The following year, in a second paper about Brownian motion, Einstein suggested that the same phenomena could be applied to derive thermally agitated currents, but did not carry out the calculation as he considered it to be untestable.
Geertruida de Haas-Lorentz, daughter of Hendrik Lorentz, in her doctoral thesis of 1912, expanded on Einstein's stochastic theory and first applied it to the study of electrons, deriving a formula for the mean-squared value of the thermal current.
Walter H. Schottky studied the problem in 1918; while studying thermal noise using Einstein's theories, he experimentally discovered another kind of noise, shot noise.
Frits Zernike, working in electrical metrology, found unusual random deflections while working with highly sensitive galvanometers. He rejected the idea that the noise was mechanical, and concluded that it was of thermal nature. In 1927, he introduced the idea of autocorrelations to electrical measurements and calculated the time detection limit. His work coincided with de Haas-Lorentz's prediction.
The same year, working independently without any knowledge of Zernike's work, John B. Johnson working in Bell Labs found the same kind of noise in communication systems, but described it in terms of frequencies. He described his findings to Harry Nyquist, also at Bell Labs, who used principles of thermodynamics and statistical mechanics to explain the results, published in 1928.
Noise of ideal resistors for moderate frequencies
Johnson's experiment (Figure 1) found that the thermal noise from a resistance R at temperature T (in kelvins), bandlimited to a frequency band of bandwidth Δf (Figure 3), has a mean square voltage of:

⟨v_n²⟩ = 4 k_B T R Δf

where k_B is the Boltzmann constant (1.380649 × 10⁻²³ joules per kelvin). While this equation applies to ideal resistors (i.e. pure resistances without any frequency dependence) at non-extreme frequencies and temperatures, a more accurate general form accounts for complex impedances and quantum effects. Conventional electronics generally operate over a more limited bandwidth, so Johnson's equation is often satisfactory.
Power spectral density
The mean square voltage per hertz of bandwidth is 4 k_B T R and may be called the power spectral density (Figure 2). Its square root at room temperature (around 300 K) approximates to 0.13 √R in units of nanovolts per square-root hertz. A 10 kΩ resistor, for example, would have approximately 13 nV/√Hz at room temperature.
RMS noise voltage
The square root of the mean square voltage yields the root mean square (RMS) voltage observed over the bandwidth Δf:

v_n = √(4 k_B T R Δf)
A resistor with thermal noise can be represented by its Thévenin equivalent circuit (Figure 4B) consisting of a noiseless resistor in series with a gaussian noise voltage source with the above RMS voltage.
Around room temperature, 3 kΩ provides almost one microvolt of RMS noise over 20 kHz (the human hearing range), and a resistance–bandwidth product R Δf of 60 Ω·Hz corresponds to almost one nanovolt of RMS noise.
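These figures are easy to verify numerically; a minimal Python check of the formula above, using only the standard library:

from math import sqrt

K_B = 1.380649e-23  # Boltzmann constant, J/K

def johnson_vrms(R, delta_f, T=300.0):
    # RMS thermal noise voltage of a resistance R (ohms)
    # over bandwidth delta_f (Hz) at temperature T (K).
    return sqrt(4 * K_B * T * R * delta_f)

print(johnson_vrms(3e3, 20e3))  # ~1.0e-6 V: the microvolt figure quoted above
print(johnson_vrms(10e3, 1.0))  # ~1.3e-8 V: 13 nV/sqrt(Hz) for a 10 kOhm resistor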
RMS noise current
A resistor with thermal noise can also be converted into its Norton equivalent circuit (Figure 4C) consisting of a noise-free resistor in parallel with a gaussian noise current source with the following RMS current:

i_n = √(4 k_B T Δf / R)
Thermal noise on capacitors
Ideal capacitors, as lossless devices, do not have thermal noise. However, the combination of a resistor and a capacitor (an RC circuit, a common low-pass filter) has what is called kTC noise. The noise bandwidth of an RC circuit is Δf = 1/(4RC). When this is substituted into the thermal noise equation, the result has an unusually simple form, as the value of the resistance (R) drops out of the equation. This is because higher R decreases the bandwidth as much as it increases the noise.
The mean-square and RMS noise voltage generated in such a filter are:

⟨v_n²⟩ = k_B T / C
v_n = √(k_B T / C)
The noise charge Q_n is the capacitance times the voltage:

Q_n = C v_n = √(k_B T C)
This charge noise is the origin of the term "kTC noise". Although independent of the resistor's value, 100% of the kTC noise arises in the resistor. Therefore, it would be incorrect to double-count both a resistor's thermal noise and its associated kTC noise, and the temperature of the resistor alone should be used, even if the resistor and the capacitor are at different temperatures. For example, a 1 pF capacitance carries about 64 µV of RMS kTC noise (roughly 400 electrons) at room temperature.
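A minimal Python sketch evaluating both quantities from the formulas above (the 1 pF example value is an assumed illustration):

from math import sqrt

K_B = 1.380649e-23     # Boltzmann constant, J/K
Q_E = 1.602176634e-19  # elementary charge, C

def ktc_noise(C, T=300.0):
    # Returns (RMS voltage, RMS charge in electrons) on capacitance C (farads).
    v_rms = sqrt(K_B * T / C)
    n_electrons = sqrt(K_B * T * C) / Q_E
    return v_rms, n_electrons

print(ktc_noise(1e-12))  # ~(6.4e-5 V, ~400 electrons) for 1 pF at 300 K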
Reset noise
An extreme case is the zero bandwidth limit called the reset noise left on a capacitor by opening an ideal switch. Though an ideal switch's open resistance is infinite, the formula still applies. However, now the RMS voltage must be interpreted not as a time average, but as an average over many such reset events, since the voltage is constant when the bandwidth is zero. In this sense, the Johnson noise of an RC circuit can be seen to be inherent, an effect of the thermodynamic distribution of the number of electrons on the capacitor, even without the involvement of a resistor.
The noise is not caused by the capacitor itself, but by the thermodynamic fluctuations of the amount of charge on the capacitor. Once the capacitor is disconnected from a conducting circuit, the thermodynamic fluctuation is frozen at a random value with standard deviation as given above. The reset noise of capacitive sensors is often a limiting noise source, for example in image sensors.
Any system in thermal equilibrium has state variables with a mean energy of k_B T/2 per degree of freedom. Using the formula for the energy stored on a capacitor (E = ½CV²), the mean noise energy on a capacitor can be seen to also be ½C(k_B T/C) = k_B T/2. Thermal noise on a capacitor can be derived from this relationship, without consideration of resistance.
Thermometry
The Johnson–Nyquist noise has applications in precision measurements, in which it is typically called "Johnson noise thermometry".
For example, the NIST in 2017 used Johnson noise thermometry to measure the Boltzmann constant with an uncertainty of less than 3 ppm. It accomplished this by using a Josephson voltage standard and a quantum Hall resistor, held at the triple-point temperature of water. The voltage was measured over a period of 100 days and integrated.
This was done in 2017, when the triple point of water's temperature was 273.16 K by definition, and the Boltzmann constant was experimentally measurable. Because the acoustic gas thermometry reached 0.2 ppm in uncertainty, and Johnson noise 2.8 ppm, this fulfilled the preconditions for a redefinition. After the 2019 redefinition, the kelvin was defined so that the Boltzmann constant is 1.380649×10−23 J⋅K−1, and the triple point of water became experimentally measurable.
Thermal noise on inductors
Inductors are the dual of capacitors. Analogous to kTC noise, a resistor combined with an inductor results in a noise current that is independent of resistance:

⟨i_n²⟩ = k_B T / L
Maximum transfer of noise power
The noise generated at a resistor can transfer to the remaining circuit. The maximum power transfer happens when the Thévenin equivalent resistance of the remaining circuit matches R. In this case, each of the two resistors dissipates noise in both itself and in the other resistor. Since only half of the source voltage drops across any one of these resistors, this maximum noise power transfer is:

P = k_B T Δf
This maximum is independent of the resistance and is called the available noise power from a resistor.
Available noise power in decibel-milliwatts
Signal power is often measured in dBm (decibels relative to 1 milliwatt). At room temperature (300 K), the available noise power can be easily approximated as 10 log₁₀(Δf) − 173.8 dBm for a bandwidth Δf in hertz; for example, about −173.8 dBm in a 1 Hz bandwidth and about −113.8 dBm in a 1 MHz bandwidth.
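A minimal Python check of this dBm approximation (the bandwidth values are chosen only as examples):

from math import log10

K_B = 1.380649e-23  # Boltzmann constant, J/K

def noise_power_dbm(delta_f, T=300.0):
    # Available thermal noise power k_B*T*delta_f, expressed in dBm.
    p_watts = K_B * T * delta_f
    return 10 * log10(p_watts / 1e-3)

print(noise_power_dbm(1.0))  # ~ -173.8 dBm in a 1 Hz bandwidth
print(noise_power_dbm(1e6))  # ~ -113.8 dBm in a 1 MHz bandwidth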
Nyquist's derivation of ideal resistor noise
Nyquist's 1928 paper "Thermal Agitation of Electric Charge in Conductors" used concepts about potential energy and harmonic oscillators from the equipartition law of Boltzmann and Maxwell to explain Johnson's experimental result. Nyquist's thought experiment summed the energy contribution of each standing-wave mode of oscillation on a long lossless transmission line between two equal resistors (R₁ = R₂ = R). According to the conclusion of Figure 5, the total average power transferred over bandwidth Δf from R₁ and absorbed by R₂ was determined to be:

P = k_B T Δf
Simple application of Ohm's law says the current from v₁ (the thermal voltage noise of only R₁) through the combined resistance is i = v₁/(R₁ + R₂) = v₁/(2R), so the power transferred from R₁ to R₂ is the square of this current multiplied by R₂, which simplifies to:

P = ⟨v₁²⟩/(4R)
Setting this equal to the earlier average power expression allows solving for the average of v₁² over that bandwidth:

⟨v₁²⟩ = 4 k_B T R Δf
Nyquist used similar reasoning to provide a generalized expression that applies to non-equal and complex impedances too. And while Nyquist above used the classical equipartition energy k_B T, he concluded his paper by attempting to use a more involved expression that incorporated the Planck constant (from the new theory of quantum mechanics).
Generalized forms
The voltage noise described above is a special case for a purely resistive component for low to moderate frequencies. In general, the thermal electrical noise continues to be related to resistive response in many more generalized electrical cases, as a consequence of the fluctuation-dissipation theorem. Below a variety of generalizations are noted. All of these generalizations share a common limitation, that they only apply in cases where the electrical component under consideration is purely passive and linear.
Complex impedances
Nyquist's original paper also provided the generalized noise for components having a partly reactive response, e.g., sources that contain capacitors or inductors. Such a component can be described by a frequency-dependent complex electrical impedance Z(f). The formula for the power spectral density of the series noise voltage is:

S_v(f) = 4 k_B T η(f) Re[Z(f)]

The function η(f) is approximately 1, except at very high frequencies or near absolute zero (see below).
The real part of impedance, Re[Z(f)], is in general frequency dependent, and so the Johnson–Nyquist noise is not white noise. The RMS noise voltage over a span of frequencies f₁ to f₂ can be found by taking the square root of the integral of the power spectral density:

v_n = √( ∫_{f₁}^{f₂} S_v(f) df )
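As a concrete check, for a resistor R in parallel with a capacitor C, Re[Z(f)] = R / (1 + (2πfRC)²), and integrating the power spectral density over all frequencies recovers the kTC result derived earlier. A minimal Python sketch (example component values assumed; requires numpy and scipy):

import numpy as np
from scipy.integrate import quad

K_B = 1.380649e-23
T, R, C = 300.0, 10e3, 1e-9  # assumed example values

def psd(f):
    # 4*k_B*T*Re[Z(f)] for a resistor R in parallel with a capacitor C
    re_z = R / (1 + (2 * np.pi * f * R * C) ** 2)
    return 4 * K_B * T * re_z

integral, _ = quad(psd, 0, np.inf)
print(np.sqrt(integral))     # total RMS noise voltage: ~2.0e-6 V
print(np.sqrt(K_B * T / C))  # sqrt(kT/C) gives the same ~2.0e-6 V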
Alternatively, a parallel noise current can be used to describe Johnson noise, its power spectral density being:

S_i(f) = 4 k_B T η(f) Re[Y(f)]

where Y(f) = 1/Z(f) is the electrical admittance; note that Re[Y(f)] = Re[Z(f)] / |Z(f)|².
Quantum effects at high frequencies or low temperatures
With proper consideration of quantum effects (which are relevant for very high frequencies or very low temperatures near absolute zero), the multiplying factor η(f) mentioned earlier is in general given by:

η(f) = (hf / k_B T) / (exp(hf / k_B T) − 1)
At very high frequencies (f ≳ k_B T / h), the function η(f) starts to decrease exponentially to zero. At room temperature this transition occurs in the terahertz range, far beyond the capabilities of conventional electronics, and so it is valid to set η(f) = 1 for conventional electronics work.
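A minimal Python evaluation of η(f) illustrating both regimes (the frequencies are chosen as examples):

from math import exp

H = 6.62607015e-34  # Planck constant, J*s
K_B = 1.380649e-23  # Boltzmann constant, J/K

def eta(f, T=300.0):
    # Quantum correction factor x / (exp(x) - 1), with x = h*f / (k_B*T).
    x = H * f / (K_B * T)
    return x / (exp(x) - 1) if x > 1e-12 else 1.0

print(eta(1e9))   # ~1.000 at 1 GHz: the classical formula holds
print(eta(6e12))  # ~0.60 at 6 THz: quantum suppression is significant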
Relation to Planck's law
Nyquist's formula is essentially the same as that derived by Planck in 1901 for electromagnetic radiation of a blackbody in one dimension—i.e., it is the one-dimensional version of Planck's law of blackbody radiation. In other words, a hot resistor will create electromagnetic waves on a transmission line just as a hot object will create electromagnetic waves in free space.
In 1946, Robert H. Dicke elaborated on the relationship, and further connected it to properties of antennas, particularly the fact that the average antenna aperture over all different directions cannot be larger than λ²/(4π), where λ is the wavelength. This comes from the different frequency dependence of 3D versus 1D Planck's law.
Multiport electrical networks
Richard Q. Twiss extended Nyquist's formulas to multi-port passive electrical networks, including non-reciprocal devices such as circulators and isolators.
Thermal noise appears at every port, and can be described as random series voltage sources in series with each port. The random voltages at different ports may be correlated, and their amplitudes and correlations are fully described by a set of cross-spectral density functions relating the different noise voltages:

S_{v_m v_n}(f) = 2 k_B T ( Z_{mn}(f) + Z_{nm}(f)* )

where the Z_{mn} are the elements of the impedance matrix Z.
Again, an alternative description of the noise is instead in terms of parallel current sources applied at each port. Their cross-spectral density is given by:

S_{i_m i_n}(f) = 2 k_B T ( Y_{mn}(f) + Y_{nm}(f)* )

where Y = Z⁻¹ is the admittance matrix.
Notes
See also
Fluctuation-dissipation theorem
Shot noise
Pink noise
Langevin equation
Rise over thermal
References
External links
Amplifier noise in RF systems
Thermal noise (undergraduate) with detailed math
Noise (electronics)
Electrical engineering
Electronic engineering
Electrical parameters
Radar signal processing | Johnson–Nyquist noise | [
"Technology",
"Engineering"
] | 2,888 | [
"Electrical engineering",
"Electronic engineering",
"Computer engineering",
"Electrical parameters"
] |
182,756 | https://en.wikipedia.org/wiki/Basin%20modelling | Basin modelling is the term broadly applied to a group of geological disciplines that can be used to analyse the formation and evolution of sedimentary basins, often but not exclusively to aid evaluation of potential hydrocarbon reserves.
At its most basic, a basin modelling exercise must assess:
The burial history of the basin (see back-stripping).
The thermal history of the basin (see thermal history modelling).
The maturity history of the source rocks.
The expulsion, migration and trapping of hydrocarbons.
By doing so, valuable inferences can be made about such matters as hydrocarbon generation and timing, maturity of potential source rocks and migration paths of expelled hydrocarbons.
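The burial-history step commonly uses isostatic backstripping to separate tectonic subsidence from the loading effect of the sediment itself. A minimal Python sketch of the standard Airy backstripping relation from textbook basin analysis, with illustrative densities and thickness that are assumptions rather than values from this article:

RHO_MANTLE = 3300.0  # kg/m^3
RHO_WATER = 1000.0   # kg/m^3

def tectonic_subsidence(S, rho_sediment):
    # Y = S * (rho_m - rho_s) / (rho_m - rho_w): water-loaded tectonic
    # subsidence from decompacted sediment thickness S, ignoring
    # paleobathymetry and sea-level correction terms.
    return S * (RHO_MANTLE - rho_sediment) / (RHO_MANTLE - RHO_WATER)

print(tectonic_subsidence(S=3000.0, rho_sediment=2500.0))  # ~1043 m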
Basin modelling software
Software packages have been designed for 1D/2D/3D basin modelling purposes to simulate the burial and thermal history of a basin as well as petroleum migration modelling.
Basin Modeling software programs:
Permedia (Halliburton),
PBM-Pars Basin Modeler(Research Institute of Petroleum Industry),
PetroMod (Schlumberger),
BasinMod (Platte River Associates, Inc.),
Genex, Temis 2D, 3D (Beicip/IFP),
Migri, MigriX (Migris),
Sigma2D (JNOC/TRC),
Novva (Sirius Exploration Geochemistry Inc.),
Genesis-Trinity(Zetaware), and
WinBury software.
Basin modelling publications
Basin Modeling Publications
References
Duppenbecker S. J. and Eliffe J. E., Basin Modelling: Practice and Progress, Geological Society Special Publication, (1998).
Lerche I., Basin Analysis: Quantitative Methods v.2, Academic Press (1990).
Hantschel, T. and Kauerauf, A.I., Fundamentals of Basin and Petroleum Systems Modeling, Springer (2009).
Lee, E.Y., Novotny, J., Wagreich, M., Subsidence analysis and visualization - for sedimentary basin analysis and modelling. Springer (2019). https://www.springer.com/gp/book/9783319764238
Sedimentology
Petroleum geology | Basin modelling | [
"Chemistry"
] | 444 | [
"Petroleum",
"Petroleum geology"
] |
182,765 | https://en.wikipedia.org/wiki/Prussian%20blue | Prussian blue (also known as Berlin blue, Brandenburg blue, Parisian and Paris blue) is a dark blue pigment produced by oxidation of ferrous ferrocyanide salts. It has the chemical formula . Turnbull's blue is essentially identical chemically, excepting that it has different impurities and particle sizes—because it is made from different reagents—and thus it has a slightly different color.
Prussian blue was created in the early 18th century and is the first modern synthetic pigment. It is prepared as a very fine colloidal dispersion, because the compound is not soluble in water. It contains variable amounts of other ions and its appearance depends sensitively on the size of the colloidal particles. The pigment is used in paints; it became prominent in 19th-century Japanese woodblock prints, and it is the traditional "blue" in technical blueprints.
In medicine, orally administered Prussian blue is used as an antidote for certain kinds of heavy metal poisoning, e.g., by thallium(I) and radioactive isotopes of cesium. The therapy exploits Prussian blue's ion-exchange properties and high affinity for certain "soft" metal cations. It is on the World Health Organization's List of Essential Medicines, the most important medications needed in a basic health system.
Prussian blue lent its name to prussic acid (hydrogen cyanide), which is derived from it. In German, hydrogen cyanide is called Blausäure ('blue acid'). Cyanide also acquired its name from this relationship.
History
Prussian blue pigment is significant since it was the first stable and relatively lightfast blue pigment to be widely used since the loss of knowledge regarding the synthesis of Egyptian blue. European painters had previously used a number of pigments such as indigo dye, smalt, and Tyrian purple, and the extremely expensive ultramarine made from lapis lazuli. Japanese painters and woodblock print artists, likewise, did not have access to a long-lasting blue pigment until they began to import Prussian blue from Europe.
Prussian blue was probably synthesized for the first time by the paint maker Johann Jacob Diesbach in Berlin around 1706. The pigment is believed to have been accidentally created when Diesbach used potash tainted with blood to create some red cochineal dye. The original dye required potash, ferric sulfate, and dried cochineal. Instead, the blood, potash, and iron sulfate reacted to create a compound known as iron ferrocyanide, which, unlike the desired red pigment, has a very distinct blue hue. It received its first names from its first trader in 1709.
The pigment readily replaced the expensive lapis lazuli-derived ultramarine and was an important topic in the letters exchanged between Johann Leonhard Frisch and the president of the Prussian Academy of Sciences, Gottfried Wilhelm Leibniz, between 1708 and 1716. It is first mentioned in a letter written by Frisch to Leibniz on March 31, 1708. Not later than 1708, Frisch began to promote and sell the pigment across Europe. By August 1709, the pigment had been given its first name; by November 1709, Frisch had used its German name for the first time. Frisch himself is the author of the first known publication on Prussian blue, a paper of 1710, as can be deduced from his letters. Diesbach had been working for Frisch since about 1701.
To date, the Entombment of Christ, dated 1709 by Pieter van der Werff (Picture Gallery, Sanssouci, Potsdam) is the oldest known painting where Prussian blue was used. Around 1710, painters at the Prussian court were already using the pigment. At around the same time, Prussian blue arrived in Paris, where Antoine Watteau and later his successors Nicolas Lancret and Jean-Baptiste Pater used it in their paintings. François Boucher used the pigment extensively for both blues and greens.
In 1731, Georg Ernst Stahl published an account of the first synthesis of Prussian blue. The story involves not only Diesbach, but also Johann Konrad Dippel. Diesbach was attempting to create a red lake pigment from cochineal, but obtained the blue instead as a result of the contaminated potash he was using. He borrowed the potash from Dippel, who had used it to produce his animal oil. No other known historical source mentions Dippel in this context. It is, therefore, difficult to judge the reliability of this story today. In 1724, the recipe was finally published by John Woodward.
In 1752, French chemist Pierre J. Macquer made the important step of showing Prussian blue could be reduced to a salt of iron and a new acid, which could be used to reconstitute the dye. The new acid, hydrogen cyanide, first isolated from Prussian blue in pure form and characterized in 1782 by Swedish chemist Carl Wilhelm Scheele, was eventually given the name Blausäure (literally "blue acid") because of its derivation from Prussian blue, and in English became known popularly as prussic acid. Cyanide, a colorless anion that forms in the process of making Prussian blue, derives its name from the Greek word for dark blue.
In the late 1800s, Rabbi Gershon Henoch Leiner, the Hasidic Rebbe of Radzin, dyed tzitziyot with Prussian blue made with sepia, believing that this was the true techeiles dye. Even though some have questioned its identity as techeiles because of its artificial production, and claimed that had Rabbi Leiner been aware of this he would have retracted his position that his dye was techeiles, others have disputed this and claimed that Rabbi Leiner would not have retracted.
Military symbol
From the beginning of the 18th century, Prussian blue was the predominant uniform coat color worn by the infantry and artillery regiments of the Prussian Army. As dunkelblau (dark blue), this shade achieved a symbolic importance and continued to be worn by most German soldiers for ceremonial and off-duty occasions until the outbreak of World War I, when it was superseded by the greenish-gray field gray (Feldgrau).
Synthesis
Prussian blue is produced by oxidation of ferrous ferrocyanide salts. These white solids have the formula M2Fe[Fe(CN)6], where M = Na+ or K+. The iron in this material is all ferrous, hence the absence of the deep color associated with mixed valency. Oxidation of this white solid with hydrogen peroxide or sodium chlorate produces ferricyanide and affords Prussian blue.
A "soluble" form, KFe[Fe(CN)6], which is really colloidal, can be made from potassium ferrocyanide and iron(III):

K+ + Fe3+ + [Fe(CN)6]4− → KFe[Fe(CN)6]
The similar reaction of potassium ferricyanide and iron(II) results in the same colloidal solution, because the ferricyanide is converted into ferrocyanide.
The "insoluble" Prussian blue is obtained if, in the reactions above, an excess of Fe3+ is added:

4 Fe3+ + 3 [Fe(CN)6]4− → Fe4[Fe(CN)6]3
Despite the fact that it is prepared from cyanide salts, Prussian blue is not toxic because the cyanide groups are tightly bound to iron. Both ferrocyanide ([Fe(CN)6]4−) and ferricyanide ([Fe(CN)6]3−) are particularly stable and non-toxic polymeric cyanometalates due to the strong iron coordination to cyanide ions. Although cyanide bonds well with transition metals in general, such as chromium, these non-iron coordination compounds are not as stable as iron cyanides, therefore increasing the risk of releasing CN− ions, and hence their comparative toxicity.
Turnbull's blue
In former times, the addition of iron(II) salts to a solution of ferricyanide was thought to afford a material different from Prussian blue. The product was traditionally named Turnbull's blue (TB). X-ray diffraction and electron diffraction methods have shown, though, that the structures of PB and TB are identical. The differences in the colors for TB and PB reflect subtle differences in the methods of precipitation, which strongly affect particle size and impurity content.
Prussian white
Prussian white, also known as Berlin white or Everett's salt, is the sodium end-member of the totally reduced form of the Prussian blue in which all iron is present as Fe(II). It is a sodium hexacyanoferrate of Fe(II) with the formula Na2Fe[Fe(CN)6] and a molecular weight of about 314 g/mol.
A more generic formula, allowing for the substitution of one alkali cation by another, is AxB2−xFe[Fe(CN)6] (in which A or B = Na+ or K+).
The Prussian white is closely related to the Prussian blue, but it differs significantly in its crystallographic structure, molecular framework pore size, and color. The cubic sodium Prussian white, Na2Fe[Fe(CN)6], and potassium Prussian white, K2Fe[Fe(CN)6], are candidates as cathode materials for Na-ion batteries. The insertion of Na+ and K+ cations in the framework of potassium Prussian white provides favorable synergistic effects, improving long-term battery stability and increasing the number of possible recharge cycles, lengthening its service life. The large-size framework of Prussian white, easily accommodating Na+ and K+ cations, facilitates their intercalation and subsequent extraction during the charge/discharge cycles. The spacious and rigid host crystal structure contributes to its volumetric stability against the internal swelling stress and strain developing in sodium batteries after many cycles. The material also offers perspectives of high energy densities (Ah/kg) while providing a high recharge rate, even at low temperature.
Properties
Prussian blue is a microcrystalline blue powder. It is insoluble, but the crystallites tend to form a colloid. Such colloids can pass through fine filters. Despite being one of the oldest known synthetic compounds, the composition of Prussian blue remained uncertain for many years. Its precise identification was complicated by three factors:
Prussian blue is extremely insoluble, but also tends to form colloids
Traditional syntheses tend to afford impure compositions
Even pure Prussian blue is structurally complex, defying routine crystallographic analysis
Crystal structure
The chemical formula of insoluble Prussian blue is Fe4[Fe(CN)6]3·xH2O, where x = 14–16. The structure was determined by using IR spectroscopy, Mössbauer spectroscopy, X-ray crystallography, and neutron crystallography. Since X-ray diffraction cannot easily distinguish carbon from nitrogen in the presence of heavier elements such as iron, the location of these lighter elements is deduced by spectroscopic means, as well as by observing the distances from the iron atom centers. Neutron diffraction can easily distinguish N and C atoms, and it has been used to determine the detailed structure of Prussian blue and its analogs.
PB has a face-centered cubic lattice structure, with four iron(III) ions per unit cell. "Soluble" PB crystals contain interstitial K+ ions; insoluble PB has interstitial water instead. In ideal insoluble PB crystals, the cubic framework is built from Fe(II)–C–N–Fe(III) sequences, with Fe(II)–carbon distances of 1.92 Å and Fe(III)–nitrogen distances of 2.03 Å. One-fourth of the sites of the [Fe(CN)6]4− subunits (supposedly at random) are vacant (empty), leaving three such groups on average per unit cell. The empty nitrogen sites are filled with water molecules instead, which are coordinated to Fe(III).
The Fe(II) centers, which are low spin, are surrounded by six carbon ligands in an octahedral configuration. The Fe(III) centers, which are high spin, are octahedrally surrounded on average by 4.5 nitrogen atoms and 1.5 oxygen atoms (the oxygen from the six coordinated water molecules). Around eight (interstitial) water molecules are present in the unit cell, either as isolated molecules or hydrogen bonded to the coordinated water. It is worth noting that in soluble hexacyanoferrates Fe(II or III) is always coordinated to the carbon atom of a cyanide, whereas in crystalline Prussian blue Fe ions are coordinated to both C and N.
The composition is notoriously variable due to the presence of lattice defects, allowing it to be hydrated to various degrees as water molecules are incorporated into the structure to occupy cation vacancies. The variability of Prussian blue's composition is attributable to its low solubility, which leads to its rapid precipitation without the time to achieve full equilibrium between solid and liquid.
Color
Prussian blue is strongly colored and tends towards black and dark blue when mixed into oil paints. The exact hue depends on the method of preparation, which dictates the particle size. The intense blue color of Prussian blue is associated with the energy of the transfer of electrons from Fe(II) to Fe(III). Many such mixed-valence compounds absorb certain wavelengths of visible light resulting from intervalence charge transfer. In this case, orange-red light around 680 nanometers in wavelength is absorbed, and the reflected light appears blue as a result.
Like most high-chroma pigments, Prussian blue cannot be accurately displayed on a computer display. Prussian blue is electrochromic—changing from blue to colorless upon reduction. This change is caused by reduction of the Fe(III) to Fe(II), eliminating the intervalence charge transfer that causes Prussian blue's color.
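The connection between the absorbed wavelength and the intervalence charge-transfer energy is simple arithmetic; a minimal Python sketch, taking the 680 nm figure quoted above:

H = 6.62607015e-34      # Planck constant, J*s
C_LIGHT = 2.99792458e8  # speed of light, m/s
EV = 1.602176634e-19    # joules per electronvolt

wavelength = 680e-9  # absorbed orange-red light, per the text
energy_ev = H * C_LIGHT / (wavelength * EV)
print(energy_ev)  # ~1.8 eV: the Fe(II) -> Fe(III) charge-transfer energy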
Use
Pigment
Because it is easily made, cheap, nontoxic, and intensely colored, Prussian blue has attracted many applications. It was adopted as a pigment very soon after its invention and was almost immediately widely used in oil paints, watercolor, and dyeing. The dominant uses are as pigments: about 12,000 tonnes of Prussian blue are produced annually for use in black and bluish inks. A variety of other pigments also contain the material, including engineer's blue and the pigment formed on cyanotypes, which gives them their common name, blueprints. Certain crayons were once colored with Prussian blue (later relabeled midnight blue). Similarly, Prussian blue is the basis for laundry bluing.
Nanoparticles of Prussian blue are used as pigments in some cosmetics ingredients, according to the European Union Observatory for Nanomaterials.
Medicine
Prussian blue's ability to incorporate monovalent metallic cations (such as Tl+ and Cs+) makes it useful as a sequestering agent for certain toxic heavy metals. Pharmaceutical-grade Prussian blue in particular is used for people who have ingested thallium (Tl+) or radioactive caesium (137Cs+). According to the International Atomic Energy Agency (IAEA), an adult male can eat at least 10 g of Prussian blue per day without serious harm. The U.S. Food and Drug Administration (FDA) has determined the "500-mg Prussian blue capsules, when manufactured under the conditions of an approved New Drug Application, can be found safe and effective therapy" in certain poisoning cases. Radiogardase (Prussian blue insoluble capsules) is a commercial product for the removal of caesium-137 from the intestine, and so indirectly from the bloodstream by intervening in the enterohepatic circulation of caesium-137, reducing the internal residency time (and exposure) by about two-thirds. In particular, it was used to adsorb and remove 137Cs+ from those poisoned in the Goiânia accident in Brazil.
Stain for iron
Prussian blue is a common histopathology stain used by pathologists to detect the presence of iron in biopsy specimens, such as in bone marrow samples. The original stain formula, known historically (1867) as "Perls Prussian blue" after its inventor, German pathologist Max Perls (1843–1881), used separate solutions of potassium ferrocyanide and acid to stain tissue (these are now used combined, just before staining). Iron deposits in tissue then form the purple Prussian blue dye in place, and are visualized as blue or purple deposits.
By machinists and toolmakers
Engineer's blue, Prussian blue in an oily base, is the traditional material used for spotting metal surfaces such as surface plates and bearings for hand scraping. A thin layer of nondrying paste is applied to a reference surface and transfers to the high spots of the workpiece. The toolmaker then scrapes, stones, or otherwise removes the marked high spots. Prussian blue is preferable because it will not abrade the extremely precise reference surfaces as many ground pigments may. Other uses include marking gear teeth during assembly to determine their interface characteristics.
In analytical chemistry
Prussian blue is formed in the Prussian blue assay for total phenols. Samples and phenolic standards are given acidic ferric chloride and ferricyanide, which is reduced to ferrocyanide by the phenols. The ferric chloride and ferrocyanide react to form Prussian blue. Comparing the absorbance at 700 nm of the samples to the standards allows for the determination of total phenols or polyphenols.
Household use
Prussian blue is present in some preparations of laundry bluing, such as Mrs. Stewart's Bluing.
Research
Battery materials
Prussian blue (PB) has been studied for its applications in electrochemical energy storage since 1978. Prussian blue proper (the Fe–Fe solid) shows two well-defined reversible redox transitions in K+ solutions. Weakly solvated potassium ions (as well as Rb+ and Cs+) have solvated radii that fit the framework of Prussian blue. On the other hand, the solvated Na+ and Li+ ions are too large for the PB cavity, and the intercalation of these ions is hindered and much slower. The low- and high-voltage sets of peaks in cyclic voltammetry correspond to 1 and ⅔ electron per Fe atom, respectively. The high-voltage set is due to the transition at the low-spin Fe ions coordinated to C atoms. The low-voltage set is due to the high-spin Fe ions coordinated to N atoms.
It is possible to replace the Fe metal centers in PB with other metal ions such as Mn, Co, Ni, Zn, etc. to form electrochemically active Prussian blue analogues (PBAs). PB/PBAs and their derivatives have also been evaluated as electrode materials for reversible alkali-ion insertion and extraction in lithium-ion battery, sodium-ion battery, and potassium-ion battery.
See also
Blue pigments
References
External links
The FDA's page on Prussian blue
The CDC's page on Prussian blue
National Pollutant Inventory – Cyanide compounds fact sheet
Heyltex Corporation distributors of Radiogardase (Prussian blue insoluble capsules)
Sarah Lowengard, "Prussian Blue" in The Creation of Color in Eighteenth Century Europe Columbia University Press, 2006
Prussian blue, ColourLex
Cyanides
Iron(II,III) compounds
Inorganic pigments
Iron complexes
Shades of blue
Photographic chemicals
World Health Organization essential medicines | Prussian blue | [
"Chemistry"
] | 3,890 | [
"Inorganic pigments",
"Inorganic compounds"
] |
182,774 | https://en.wikipedia.org/wiki/Accounting%20standard | Publicly traded companies typically are subject to rigorous standards. Small and midsized businesses often follow more simplified standards, plus any specific disclosures required by their specific lenders and shareholders. Some firms operate on the cash method of accounting which can often be simple and straightforward. Larger firms most often operate on an accrual basis. Accrual basis is one of the fundamental accounting assumptions and if it is followed by the company while preparing the Financial statements then no further disclosure is required. Accounting standards prescribe in considerable detail what accruals must be made, how the financial statements are to be presented, and what additional disclosures are required.
Some important elements that accounting standards cover include identifying the exact entity which is reporting, discussing any "going concern" questions, specifying monetary units, and reporting time frames.
In the public sector, 30% of 165 governments surveyed used accrual accounting, rather than cash accounting, in 2020.
Benefits
The lack of transparent accounting standards in some nations has been cited as increasing the difficulty of doing business in them. In particular, the Asian financial meltdown in the late 1990s has been partially attributed to the lack of detailed accounting standards. Giant firms in some Asian countries were able to take advantage of their poorly devised accounting standards to cover up immense debts and losses, which yielded a collective effect that eventually led the whole region into financial crisis.
Limitations
The notable limitations of accounting standards are their inflexibility, the time-consuming process of creating them, the difficulty of choosing between alternative treatments, and their restrictive scope. Accounting standards were largely written in the early 21st century. Accounting scandals such as WorldCom and Enron illustrate that, despite all these efforts, widespread fraud can still occur, and even be missed by the outside auditors.
Accounting standards by nation
Canada – Generally Accepted Accounting Principles ( Canada)
China – Chinese Accounting Standards (Zhōngguó qǐyè kuàijì zhǔnzé 中国企业会计准则)
France – Generally Accepted Accounting Practice (Plan Comptable Général)
Germany – Generally Accepted Accounting Practice (Grundsätze ordnungsmäßiger Buchführung)
India – Indian Accounting Standards (Ind AS) may be used by any company under the rules and regulations of the Companies Act, 2013; foreign and multinational companies in India also report under the Generally Accepted Accounting Principles (USA)
Italy – Principi contabili nazionali
Luxembourg - Luxembourg Generally Accepted Accounting Principles (Lux GAAP)
Nepal – Nepal Financial Reporting Standards
Russia – Russian GAAP
Sweden – BAS (accounting)
Switzerland – Swiss GAAP FER (Fachempfehlungen zur Rechnungslegung)
Turkey – Uniform Accounting Plan (Turkey)
United Kingdom – Generally Accepted Accounting Practice (UK)
United States – Generally Accepted Accounting Principles (United States) Domestic firms typically report in this format. Foreign firms that trade in the U.S. typically report in IFRS format (below). Statutory accounting principles are for insurance companies in the US.
Global standardization and IFRS
Many countries use or are converging on the International Financial Reporting Standards (IFRS) that were established and are maintained by the International Accounting Standards Board. In some countries, local accounting principles are applied for regular companies but listed or large companies must conform to IFRS, so statutory reporting is comparable internationally.
All listed and grouped EU companies have been required to use IFRS since 2005; Canada moved in 2009, Taiwan in 2013, and other countries are adopting local versions.
In the United States, while "... the SEC published a statement of continued support for a single set of high-quality, globally accepted accounting standards, and acknowledged that IFRS is best positioned to serve this role..." progress is less evident.
See also
Constant item purchasing power accounting
Convention of consistency
Convergence of accounting standards
Creative accounting
Forensic accounting
Philosophy of accounting
References
Further reading
Meeks, Geoff, and GM Peter Swann. "Accounting standards and the economics of standards." Accounting and Business Research 39.3 (2009): 191-210m
External links
FASAB Handbook of Federal Accounting Standards (2014)
Accounting systems | Accounting standard | [
"Technology"
] | 833 | [
"Information systems",
"Accounting systems"
] |
182,822 | https://en.wikipedia.org/wiki/Teegarden%27s%20Star | Teegarden's Star (SO J025300.5+165258, 2MASS J02530084+1652532, LSPM J0253+1652) is an M-type red dwarf star in the constellation Aries, from the Solar System. Although it is near Earth it is a dim magnitude 15 and can only be seen through large telescopes. This star was found to have a very large proper motion of about 5 arcseconds per year. Only seven stars with such large proper motions are currently known. Teegarden's Star hosts a planetary system with at least three planets.
Discovery
Teegarden's Star was discovered in 2003 using asteroid-tracking data that had been collected years earlier. This data set is a digital archive created from optical images taken over a five-year period by the Near-Earth Asteroid Tracking (NEAT) program using two 1 m telescopes on Maui, Hawaii. The star is named after the discovery team leader, Bonnard J. Teegarden, an astrophysicist at NASA's Goddard Space Flight Center.
Astronomers have long thought it was quite likely that many undiscovered dwarf stars exist within 20 light-years of Earth, because stellar-population surveys show the count of known nearby dwarf stars to be lower than otherwise expected and these stars are dim and easily overlooked. Teegarden's team thought that these dim stars might be found by data mining some of the huge optical sky survey data sets taken by various programs for other purposes in previous years. So they reexamined the NEAT asteroid tracking data set and found this star. The star was then precovered on photographic plates from the Palomar Sky Survey taken in 1951. This discovery is significant as the team did not have direct access to any telescopes and did not include professional astronomers at the time of the discovery.
Properties
Teegarden's Star is classified as a red dwarf as its approximate calculated mass of just over 0.09 times that of the Sun is narrowly above the limit of brown dwarfs. The inherently low temperature of such objects explains why it was not discovered earlier, since it has an apparent magnitude of only 15.1 (and an absolute magnitude of 17.22). Like most red and brown dwarfs it emits most of its energy in the infrared spectrum.
The parallax was initially measured as 0.43 ± 0.13 arcseconds. This would have placed its distance at only 7.50 light-years, making Teegarden's Star only the third star system in order of distance from the Sun, ranking between Barnard's Star and Wolf 359. However, even at that time the anomalously low luminosity (the absolute magnitude would have been 18.5) and high uncertainty in the parallax suggested that it was in fact somewhat farther away, still one of the Sun's nearest neighbors but not nearly as high in the ranking in order of distance. A more accurate parallax measurement of 0.2593 arcseconds was made by George Gatewood in 2009, yielding a distance of 12.578 light-years, very close to the value now accepted.
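The parallax-to-distance conversion used above is d(pc) = 1/p(arcsec); a minimal Python sketch reproducing both figures quoted in this section:

LY_PER_PARSEC = 3.26156

def parallax_to_lightyears(p_arcsec):
    # Distance in light-years from an annual parallax in arcseconds.
    return (1.0 / p_arcsec) * LY_PER_PARSEC

print(parallax_to_lightyears(0.43))    # ~7.6 ly: roughly the erroneous early estimate
print(parallax_to_lightyears(0.2593))  # ~12.58 ly: Gatewood's 2009 measurement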
Planetary system
Observations by the ROPS survey in 2010, published in 2012, showed variation in the radial velocity of Teegarden's Star, though there was insufficient data to make claims of planet detection at that time.
In June 2019, scientists conducting the CARMENES survey at the Calar Alto Observatory announced evidence of two Earth-mass exoplanets orbiting the star within its habitable zone; Teegarden's Star b orbits inside the optimistic habitable zone—the equivalent in the Solar System would be in between Earth and Venus—whereas Teegarden's Star c orbits on the outer edge of the conservative habitable zone, similarly to Mars.
A 2024 study refined the parameters of the two previously known planets, and detected a third planet orbiting farther out, with a period of 26 days and a minimum mass slightly less than Earth's mass. This third planet, Teegarden's Star d, orbits beyond the habitable zone and would have temperatures similar to the icy moons of Jupiter. Two longer-period radial velocity signals were also detected; a 96-day signal corresponds to the star's rotation, while the origin of a 172-day signal is uncertain.
According to one group of researchers, who were specifically studying this star, both habitable-zone planets could have maintained a dense atmosphere and so therefore there would be a high likelihood that at least one may harbour liquid water. However, another group of scientists, looking at Earth-sized planets in general in the habitable zones of stars, specifically in a likely tidally locked scenario, give Teegarden's Star b a 3% chance, and Teegarden's Star c only a 2% chance, of having even retained an atmosphere.
While TESS observations confirm that the planets of Teegarden's Star do not transit their star as seen from Earth, during the period from 2044 to 2496, Earth would transit the Sun as seen from Teegarden's Star.
See also
References
External links
SolStation.com
Image Teegarden's Star
Aries (constellation)
Local Bubble
M-type main-sequence stars
Planetary systems with three confirmed planets
J02530084+1652532
Astronomical objects discovered in 2003 | Teegarden's Star | [
"Astronomy"
] | 1,104 | [
"Aries (constellation)",
"Constellations"
] |
182,837 | https://en.wikipedia.org/wiki/Pattern%20language | A pattern language is an organized and coherent set of patterns, each of which describes a problem and the core of a solution that can be used in many ways within a specific field of expertise. The term was coined by architect Christopher Alexander and popularized by his 1977 book A Pattern Language.
A pattern language can also be an attempt to express the deeper wisdom of what brings aliveness within a particular field of human endeavor, through a set of interconnected patterns. Aliveness is one placeholder term for "the quality that has no name": a sense of wholeness, spirit, or grace, that while of varying form, is precise and empirically verifiable. Alexander claims that ordinary people can use this design approach to successfully solve very large, complex design problems.
What is a pattern?
When a designer designs something – whether a house, computer program, or lamp – they must make many decisions about how to solve problems. A single problem is documented with its typical place (the syntax) and use (the grammar), together with the most common and recognized good solution seen in the wild, much like the examples found in dictionaries. Each such entry is a single design pattern. Each pattern has a name, a descriptive entry, and some cross-references, much like a dictionary entry. A documented pattern should explain why that solution is good in the pattern's contexts.
Elemental or universal patterns such as "door" or "partnership" are versatile ideals of design, either as found in experience or for use as components in practice, explicitly described as holistic resolutions of the forces in recurrent contexts and circumstances, whether in architecture, medicine, software development or governance, etc. Patterns might be invented or found and studied, such as the naturally occurring patterns of design that characterize human environments.
Like all languages, a pattern language has vocabulary, syntax, and grammar – but a pattern language applies to some complex activity other than communication. In pattern languages for design, the parts break down in this way:
The language description – the vocabulary – is a collection of named, described solutions to problems in a field of interest. These are called design patterns. So, for example, the language for architecture describes items like: settlements, buildings, rooms, windows, latches, etc.
Each solution includes syntax, a description that shows where the solution fits in a larger, more comprehensive or more abstract design. This automatically links the solution into a web of other needed solutions. For example, rooms have ways to get light, and ways to get people in and out.
The solution includes grammar that describes how the solution solves a problem or produces a benefit. So, if the benefit is unneeded, the solution is not used. Perhaps that part of the design can be left empty to save money or other resources; if people do not need to wait to enter a room, a simple doorway can replace a waiting room.
In the language description, grammar and syntax cross index (often with a literal alphabetic index of pattern names) to other named solutions, so the designer can quickly think from one solution to related, needed solutions, and document them in a logical way. In Christopher Alexander's book A Pattern Language, the patterns are in decreasing order by size, with a separate alphabetic index.
The web of relationships in the index of the language provides many paths through the design process.
This simplifies the design work because designers can start the process from any part of the problem they understand and work toward the unknown parts. At the same time, if the pattern language has worked well for many projects, there is reason to believe that even a designer who does not completely understand the design problem at first will complete the design process, and the result will be usable. For example, skiers coming inside must shed snow and store equipment. The messy snow and boot cleaners should stay outside. The equipment needs care, so the racks should be inside.
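To make the vocabulary/syntax/grammar breakdown concrete, here is a minimal Python sketch of a pattern entry with cross-references to larger and smaller patterns; the field names and the example wording are illustrative, not quoted from Alexander:

from dataclasses import dataclass, field

@dataclass
class Pattern:
    # One entry in a pattern language: a named problem/solution pair,
    # cross-referenced to the larger patterns it completes and the
    # smaller patterns it needs.
    name: str
    context: str
    problem: str
    solution: str
    larger: list = field(default_factory=list)
    smaller: list = field(default_factory=list)

room_light = Pattern(
    name="LIGHT ON TWO SIDES OF EVERY ROOM",
    context="Any inhabited room",
    problem="Rooms lit from only one side feel gloomy and glare-prone.",
    solution="Give each room natural light on at least two walls.",
    larger=["WINGS OF LIGHT"],
    smaller=["WINDOW PLACE", "DEEP REVEALS"],
)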
Many patterns form a language
Just as words must have grammatical and semantic relationships to each other in order to make a spoken language useful, design patterns must be related to each other in position and utility in order to form a pattern language. Christopher Alexander's work describes a process of decomposition, in which the designer has a problem (perhaps a commercial assignment), selects a solution, then discovers new, smaller problems resulting from the larger solution. Occasionally, the smaller problems have no solution, and a different larger solution must be selected. Eventually all of the remaining design problems are small enough or routine enough to be solved by improvisation by the builders, and the "design" is done.
The actual organizational structure (hierarchical, iterative, etc.) is left to the discretion of the designer, depending on the problem. This explicitly lets a designer explore a design, starting from some small part. When this happens, it's common for a designer to realize that the problem is actually part of a larger solution. At this point, the design almost always becomes a better design.
In the language, therefore, each pattern has to indicate its relationships to other patterns and to the language as a whole. This gives the designer using the language a great deal of guidance about the related problems that must be solved.
The most difficult part of having an outside expert apply a pattern language is in fact to get a reliable, complete list of the problems to be solved. Of course, the people most familiar with the problems are the people that need a design. So, Alexander famously advocated on-site improvisation by concerned, empowered users, as a powerful way to form very workable large-scale initial solutions, maximizing the utility of a design, and minimizing the design rework. The desire to empower users of architecture was, in fact, what led Alexander to undertake a pattern language project for architecture in the first place.
Design problems in a context
An important aspect of design patterns is to identify and document the key ideas that make a good system different from a poor system (that may be a house, a computer program or an object of daily use), and to assist in the design of future systems. The idea expressed in a pattern should be general enough to be applied in very different systems within its context, but still specific enough to give constructive guidance.
The range of situations in which the problems and solutions addressed in a pattern apply is called its context. An important part of each pattern is the description of this context. Examples can further illustrate how the pattern applies to very different situations.
For instance, Alexander's pattern "A PLACE TO WAIT" addresses bus stops in the same way as waiting rooms in a surgery, while still proposing helpful and constructive solutions. The "Gang-of-Four" book Design Patterns by Gamma et al. proposes solutions that are independent of the programming language, and the program's application domain.
Still, the problems and solutions described in a pattern can vary in their level of abstraction and generality on the one side, and specificity on the other side. In the end this depends on the author's preferences. However, even a very abstract pattern will usually contain examples that are, by nature, absolutely concrete and specific.
Patterns can also vary in how far they are proven in the real world. Alexander gives each pattern a rating of zero, one or two stars, indicating how well it is proven in real-world examples. It is generally claimed that all patterns need at least some existing real-world examples. It is, however, conceivable to document yet unimplemented ideas in a pattern-like format.
The patterns in Alexander's book also vary in their level of scale – some describing how to build a town or neighbourhood, others dealing with individual buildings and the interior of rooms. Alexander sees the low-scale artifacts as constructive elements of the large-scale world, so they can be connected to a hierarchic network.
Balancing of forces
A pattern must characterize the problems that it is meant to solve, the context or situation where these problems arise, and the conditions under which the proposed solutions can be recommended.
Often these problems arise from a conflict of different interests or "forces". A pattern emerges as a dialogue that will then help to balance the forces and finally make a decision.
For instance, there could be a pattern suggesting a wireless telephone. The forces would be the need to communicate, and the need to get other things done at the same time (cooking, inspecting the bookshelf). A very specific pattern would be just "WIRELESS TELEPHONE". More general patterns would be "WIRELESS DEVICE" or "SECONDARY ACTIVITY", suggesting that a secondary activity (such as talking on the phone, or inspecting the pockets of your jeans) should not interfere with other activities.
Though quite unspecific in its context, the forces in the "SECONDARY ACTIVITY" pattern are very similar to those in "WIRELESS TELEPHONE". Thus, the competing forces can be seen as part of the essence of a design concept expressed in a pattern.
Patterns contain their own rationale
Usually a pattern contains a rationale referring to some given values. For Christopher Alexander, it is most important to think about the people who will come in contact with a piece of architecture. One of his key values is making these people feel more alive. He talks about the "quality without a name" (QWAN).
More generally, we could say that a good system should be accepted, welcomed and happily embraced as an enrichment of daily life by those who are meant to use it, or – even better – by all people it affects. For instance, when discussing a street café, Alexander discusses the possible desires of a guest, but also mentions people who just walk by.
The same thinking can be applied to technical devices such as telephones and cars, to social structures like a team working on a project, or to the user interface of a computer program. The qualities of a software system, for instance, could be rated by observing whether users spend their time enjoying or struggling with the system.
By focusing on the impacts on human life, we can identify patterns that are independent from changing technology, and thus find "timeless quality" (Alexander).
Generic structure and layout
Usually the author of a pattern language or collection chooses a generic structure for all the patterns it contains, breaking each into generic sections like context, problem statement, solution etc.
Christopher Alexander's patterns, for instance, each consist of a short name, a rating (up to two '*' symbols), a sensitizing picture, the context description, the problem statement, a longer part of text with examples and explanations, a solution statement, a sketch and further references. This structure and layout is sometimes referred to as the "Alexandrian form".
Alexander uses a special text layout to mark the different sections of his patterns. For instance, the problem statement and the solution statement are printed in bold font, the latter is always preceded by the "Therefore:" keyword. Some authors instead use explicit labels, which creates some degree of redundancy.
Meaningful names
When design is done by a team, pattern names will form a vocabulary they can share. This makes it necessary for pattern names to be easy to remember and highly descriptive. Some examples from Alexander's works are WINDOW PLACE (helps define where windows should go in a room) and A PLACE TO WAIT (helps define the characteristics of bus stops and hospital waiting rooms, for example).
Aggregation in an associative network (pattern language)
A pattern language, as conceived by Alexander, contains links from one pattern to another, so when trying to apply one pattern in a project, a designer is pushed to other patterns that are considered helpful in its context.
In Alexander's book, such links are collected in the "references" part, and echoed in the linked pattern's "context" part – thus the overall structure is a directed graph. A pattern that is linked to in the "references" usually addresses a problem of lower scale, that is suggested as a part of the higher-scale problem. For instance, the "PUBLIC OUTDOOR ROOM" pattern has a reference to "STAIR SEATS".
Even without the pattern description, these links, along with meaningful names, carry a message: When building a place outside where people can spend time ("PUBLIC OUTDOOR ROOM"), consider surrounding it with stairs where people can sit ("STAIR SEATS"). If you are planning an office ("WORKSHOPS AND OFFICES"), consider arranging workspaces in small groups ("SMALL WORKING GROUPS"). Alexander argues that the connections in the network can be considered even more meaningful than the text of the patterns themselves.
The links in Alexander's book clearly result in a hierarchic network. Alexander draws a parallel to the hierarchy of a grammar – that is one argument for him to speak of a pattern language.
The idea of linking is generally accepted among pattern authors, though the semantic rationale behind the links may vary. Some authors, however, like Gamma et al. in Design Patterns, make little use of pattern linking – possibly because it did not make that much sense for their collection of patterns. In such a case we would speak of a pattern catalogue rather than a pattern language.
Usage
Alexander encouraged people who used his system to expand his language with patterns of their own. In order to enable this, his books do not focus strictly on architecture or civil engineering; he also explains the general method of pattern languages. The original concept for the book A Pattern Language was that it would be published in the form of a 3-ring binder, so that pages could easily be added later; this proved impractical in publishing. The pattern language approach has been used to document expertise in diverse fields. Some examples are architectural patterns, computer science patterns, interaction design patterns, pedagogical patterns, pattern gardening, social action patterns, and group facilitation patterns. The pattern language approach has also been recommended as a way to promote civic intelligence by helping to coordinate actions for diverse people and communities who are working together on significant shared problems. Alexander's specifications for using pattern languages as well as creating new ones remain influential, and his books are referenced for style by experts in unrelated fields.
It is important to note that notations such as UML or the flowchart symbol collection are not pattern languages. They could more closely be compared to an alphabet: their symbols could be used to document a pattern language, but they are not a language by themselves. A recipe or other sequential set of steps to be followed, with only one correct path from start to finish, is also not a pattern language. However, the process of designing a new recipe might benefit from the use of a pattern language.
Simple example of a pattern
Name: ChocolateChipRatio
Context: You are baking chocolate chip cookies in small batches for family and friends
Consider these patterns first: SugarRatio, FlourRatio, EggRatio
Problem: Determine the optimum ratio of chocolate chips to cookie dough
Solution: Observe that most people consider chocolate to be the best part of the chocolate chip cookie. Also observe that too much chocolate may prevent the cookie from holding together, decreasing its appeal. Since you are cooking in small batches, cost is not a consideration. Therefore, use the maximum amount of chocolate chips that results in a really sturdy cookie.
Consider next: NutRatio or CookingTime or FreezingMethod
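The cookie example above maps naturally onto a small data structure. The following minimal Python sketch shows one way a pattern and its links to larger- and smaller-scale patterns could be recorded; the class and field names are illustrative and are not part of Alexander's terminology:

```python
from dataclasses import dataclass, field

@dataclass
class Pattern:
    """One named solution in a pattern language (illustrative field names)."""
    name: str
    context: str                     # where the pattern applies
    problem: str                     # the recurring problem it resolves
    solution: str                    # the core of the resolution
    consider_first: list = field(default_factory=list)  # larger-scale patterns
    consider_next: list = field(default_factory=list)   # smaller-scale patterns

chocolate_chip_ratio = Pattern(
    name="ChocolateChipRatio",
    context="Baking chocolate chip cookies in small batches for family and friends",
    problem="Determine the optimum ratio of chocolate chips to cookie dough",
    solution="Use the maximum amount of chips that still yields a sturdy cookie",
    consider_first=["SugarRatio", "FlourRatio", "EggRatio"],
    consider_next=["NutRatio", "CookingTime", "FreezingMethod"],
)

print(chocolate_chip_ratio.consider_next)  # the outgoing links in the pattern graph
```

Following the links in `consider_first` and `consider_next` across many such records reproduces the directed-graph structure described earlier.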
Origin
Christopher Alexander, an architect and author, coined the term pattern language. He used it to refer to common problems of the design and construction of buildings and towns and how they should be solved. The solutions proposed in the book include suggestions ranging from how cities and towns should be structured to where windows should be placed in a room.
The framework and philosophy of the "pattern language" approach was initially popularized in the book A Pattern Language that was written by Christopher Alexander and five colleagues at the Center for Environmental Structure in Berkeley, California in the late 1970s. While A Pattern Language contains 253 "patterns" from the first pattern, "Independent Regions" (the most general) to the last, "Things from Your Life", Alexander's book The Timeless Way of Building goes into more depth about the motivation and purpose of the work. The following definitions of "pattern" and "pattern language" are paraphrased from A Pattern Language:
"A pattern is a careful description of a perennial solution to a recurring problem within a building context, describing one of the configurations that brings life to a building. Each pattern describes a problem that occurs over and over again in our environment, and then describes the core solution to that problem, in such a way that you can use the solution a million times over, without ever doing it the same way twice."
A pattern language is a network of patterns that call upon one another. Patterns help us remember insights and knowledge about design and can be used in combination to create solutions.
Application domains
Christopher Alexander's idea has been adopted in other disciplines, often much more heavily than the original application of patterns to architecture as depicted in the book A Pattern Language. Examples since the 1990s include software design patterns in software engineering and, more generally, architectural patterns in computer science, as well as interaction design patterns. Since the late 1990s, pedagogical patterns have been used to document good practices in teaching. Since at least the mid-2000s, the idea of pattern language has been introduced into systems architecture design and design science (methodology) patterns in a book authored by Vijay Vaishnavi and William Kuechler with 66 patterns; the second revised and expanded edition of this book, with 84 patterns, was published in 2015. The book Liberating Voices: A Pattern Language for Communication Revolution, containing 136 patterns for using information and communication to promote sustainability, democracy and positive social change, was published in 2008 along with a website containing even more patterns. The deck "Group Works: A Pattern Language for Bringing Life to Meetings and Other Gatherings" was published in 2011. The idea of a pattern language has also been applied in permaculture design.
Ward Cunningham, the inventor of wiki, coauthored a paper with Michael Mehaffy arguing that there are deep relationships between wikis and pattern languages, and that wikis "were in fact developed as tools to facilitate efficient sharing and modifying of patterns".
See also
References
Further reading
Christopher Alexander, Sara Ishikawa & Murray Silverstein (1974). 'A Collection of Patterns which Generate Multi-Service Centres' in Declan and Margrit Kennedy (eds.): The Inner City. Architects Year Book 14, Elek, London.
Alexander, C., Ishikawa, S., Silverstein, M., Jacobson, M., Fiksdahl-King, I. & Angel, S. (1977). A Pattern Language: Towns, Buildings, Construction. Oxford University Press.
Alexander, C. (1979). The Timeless Way of Building. USA: Oxford University Press.
Schuler, D. (2008). Liberating Voices: A Pattern Language for Communication Revolution. USA: MIT Press.
Leitner, Helmut (2015): Pattern Theory: Introduction and Perspectives on the Tracks of Christopher Alexander.
External links
About patterns in general
A Pattern Language for Pattern Writing by Gerard Meszaros and Jim Doble
Use of patterns for scenario development for large scale aerospace projects
Lean Startup Business Model Pattern
What Is a Quality Use Case? from the book Patterns for Effective Use Cases
Online pattern collections
patternlanguage.com, by the Center for Environmental Structure
Fused Grid – A Contemporary Urban Pattern "a collection and synthesis of neighbourhood patterns"
hcipatterns.org – Patterns for HCI
The Portland Pattern Repository
Group Works: A Pattern Language for Bringing Life to Meetings and Other Gatherings – A pattern language of group process
The Core Protocols – A set of team communication patterns
Liberating Voices! Pattern Language Project — Short versions of patterns available in Arabic, Chinese, and Spanish
Architectural theory
Cybernetics
Design
Knowledge representation | Pattern language | [
"Engineering"
] | 4,069 | [
"Design",
"Architectural theory",
"Architecture"
] |
182,839 | https://en.wikipedia.org/wiki/Filter%20paper | Filter paper is a semi-permeable paper barrier placed perpendicular to a liquid or air flow. It is used to separate fine solid particles from liquids or gases.
The raw materials are typically different paper pulps. The pulp may be made from softwood, hardwood, fiber crops, or mineral fibers.
Properties
Filter paper has various properties. The important parameters are wet strength, porosity, particle retention, volumetric flow rate, compatibility, efficiency and capacity.
There are two mechanisms of filtration with paper: volume and surface. In volume filtration, the particles are caught in the bulk of the filter paper. In surface filtration, the particles are caught on the paper surface. Filter paper is widely used because even a small piece can absorb a significant volume of liquid.
Manufacture
The raw materials are different paper pulps. The pulp may be made from softwood, hardwood, fiber crops, or mineral fibers. For high-quality filters, dissolving pulp and mercerised pulp are used. Most filter papers are made using small paper machines. For laboratory filters, the machines may be as small as 50 cm in width. The paper is often crêped to improve porosity. The filter papers may also be treated with reagents or impregnated to obtain the right properties.
Types
Air filters
The main application for air filter paper is filtering combustion air for engines. The filter paper is transformed into filter cartridges, which are then fitted to a holder. The construction of the cartridges mostly requires that the paper is stiff enough to be self-supporting. A paper for air filters needs to be very porous and have a weight of 100–200 g/m2. Normally a particularly long-fibered pulp that is mercerised is used to obtain these properties. The paper is normally impregnated to improve its resistance to moisture. Some heavy-duty qualities are made to be rinsed, thereby extending the life of the filter.
Coffee and tea
Historically, blotting paper or cloth were used to extract filter coffee.
Modern coffee filters of paper are made from about 100 g/m2 crêped paper. The crêping allows the coffee to flow freely between the filter and the filtration funnel. The raw materials (pulp) for the filter paper are coarse long fibers, often from fast-growing trees. For example, Melitta has used up to 60% bamboo in its filters since 1998. Both bleached and unbleached qualities are made. Coffee filters are made in different shapes and sizes to fit into different holders.
Most notable are the (paper) coffee filter systems introduced by Melitta (1908, 1932, 1936, 1965), Chemex (1941) and Hario (2004).
Important parameters are strength, compatibility, efficiency and capacity.
Tea bags also work as a kind of paper filter. They are made from abacá (manila hemp), a very thin and long fiber. Often the paper is augmented with a minor portion of synthetic fibers. The bag paper is very porous and thin and has high wet strength.
Fuel filters
The paper used for fuel filters is a crêped paper with controlled porosity, which is pleated and wound into cartridges. The raw material for filter paper used in fuel filters is a mixture of hardwood and softwood fibres. The basis weight of the paper is 50–80 g/m2.
Horizontal plate filters
Horizontal plate filter paper is commonly utilized in industrial processing. The filter paper is typically designed to fit the manufacturer's specifications. Absolute micron retention can range from 1–100 microns, but diatomaceous earth is commonly used with filter paper to obtain sub-micron filtration. Activated carbon or other filter aids can be used with the filter paper to form a filter cake to achieve specific results. Filter paper can also be impregnated with diatomaceous earth or activated carbon.
Oil filters
Engine oil is filtered to remove impurities. Filtration of oil is normally done with volume filtration. Filter papers for lubrication oils are impregnated to resist high temperatures.
Laboratory-grade paper filters
Filter papers are widely used in laboratory experiments across many different fields, from biology to chemistry. The type of filter used will differ according to the purpose of the procedure and the chemicals involved. Generally, filter papers are used with laboratory techniques such as gravity or vacuum filtration.
Historically, a type of soft, porous paper called charta emporetica was used in pharmacy as a filter and as packing paper.
Qualitative filter paper
Qualitative filter paper is used in qualitative analytical techniques to identify materials. Qualitative filter paper comes in different grades according to pore size; there are 13 grades in total. The largest pore size is grade 4 and the smallest is grade 602 h; the most commonly used grades are grades 1 to 4.
Grade 1 qualitative filter paper has a pore size of 11 μm. This grade is widely used in many fields, including agricultural analysis and air pollution monitoring.
Grade 2 qualitative filter paper has a pore size of 8 μm. This grade requires more filtration time than grade 1 and is used for monitoring specific contaminants in the atmosphere and for soil testing.
Grade 3 qualitative filter paper has a pore size of 6 μm. This grade is well suited to carrying samples after filtration.
Grade 4 qualitative filter paper has a pore size of 20–25 μm, the largest among the standard qualitative filter papers. It is useful as a rapid filter for the cleanup of geological fluids or organic extracts.
Grade 602 h qualitative filter paper has a pore size of 2 μm, the smallest among the standard qualitative filter papers. It is used for collecting or removing fine particles.
Quantitative filter paper
Quantitative filter paper, also called ash-free filter paper, is used for quantitative and gravimetric analysis. During the manufacturing, producers use acid to make the paper ash-less and achieve high purity.
Chromatography papers
Chromatography is a method chemists use to separate compounds. This type of filter paper has a specific water flow rate and absorption speed to maximize the result of paper chromatography. The absorption speed of this type of filter paper ranges from 6 cm to 18 cm, and the thickness ranges from 0.17 mm to 0.93 mm.
Extraction thimbles
Extraction thimbles are rod-shaped filter papers often used in Soxhlet extractors or atomized extractors. They are ideal for very sensitive detection; performance depends on the thickness and inner diameter. They are commonly used in food control and environmental monitoring.
Glass fiber filters
Glass fiber filters have a pore size of 1 μm and are useful for filtering highly contaminated or difficult-to-filter solutions. Glass fiber filters also offer extended filter life, handle a wide range of particulate loads, and can prevent sample contamination. Different types of glass fiber filter suit different filtration situations; there are 7 types, and the major difference between them is thickness.
Quartz fiber filter
Quartz fiber filter paper has a high resistance to chemicals, does not absorb NOx or SOx gases, is unaffected by humidity, and is easily sterilized. Thus, it is mostly used for air pollution analysis.
PTFE filter
Polytetrafluoroethylene (PTFE) filters have a wide operating temperature range (−120 °C to 260 °C) with high air permeability. The resistance to high temperature makes PTFE filter paper suitable for use in autoclaves. It is often used to filter hot oils and strong solvents and to collect airborne particulates.
See also
Filter
References
Paper
Filters
Analytical chemistry
Laboratory equipment
Liquid-solid separation
Solid-gas separation | Filter paper | [
"Chemistry",
"Engineering"
] | 1,638 | [
"Separation processes by phases",
"Solid-gas separation",
"Chemical equipment",
"Filters",
"Filtration",
"nan",
"Liquid-solid separation"
] |
182,842 | https://en.wikipedia.org/wiki/Nernst%20heat%20theorem | The Nernst heat theorem was formulated by Walther Nernst early in the twentieth century and was used in the development of the third law of thermodynamics.
The theorem
The Nernst heat theorem says that as absolute zero is approached, the entropy change ΔS for a chemical or physical transformation approaches 0. This can be expressed mathematically as follows:

$$\lim_{T \to 0} \Delta S = 0$$
The above equation is a modern statement of the theorem. Nernst often used a form that avoided the concept of entropy.
Another way of looking at the theorem is to start with the definition of the Gibbs free energy (G), G = H − TS, where H stands for enthalpy. For a change from reactants to products at constant temperature and pressure the equation becomes $\Delta G = \Delta H - T\,\Delta S$.
In the limit T → 0 the equation reduces to just ΔG = ΔH, which is supported by experimental data. However, it is known from thermodynamics that the slope of the ΔG versus T curve is −ΔS. Since the slope reaches the horizontal limit of 0 as T → 0, the implication is that ΔS → 0, which is the Nernst heat theorem.
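The slope argument can be restated compactly. The following LaTeX fragment is a minimal sketch of the standard thermodynamic reasoning, using only the relations already given above:

```latex
% Gibbs free energy of the transformation and its temperature slope
% at constant pressure:
\Delta G = \Delta H - T\,\Delta S,
\qquad
\left( \frac{\partial (\Delta G)}{\partial T} \right)_{p} = -\,\Delta S
% If the \Delta G(T) curve becomes horizontal as T \to 0, both limits follow:
\lim_{T \to 0} \Delta S = 0,
\qquad
\lim_{T \to 0} \Delta G = \Delta H
```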
The significance of the Nernst heat theorem is that it was later used by Max Planck to give the third law of thermodynamics, which is that the entropy of all pure, perfectly crystalline homogeneous materials in complete internal equilibrium is 0 at absolute zero.
See also
Theodore William Richards
Entropy
References and notes
Further reading
External links
Nernst heat theorem
Thermochemistry
Walther Nernst | Nernst heat theorem | [
"Chemistry"
] | 342 | [
"Thermochemistry"
] |
182,890 | https://en.wikipedia.org/wiki/Kronecker%20delta | In mathematics, the Kronecker delta (named after Leopold Kronecker) is a function of two variables, usually just non-negative integers. The function is 1 if the variables are equal, and 0 otherwise:
or with use of Iverson brackets:

$$\delta_{ij} = [i = j].$$

For example, $\delta_{12} = 0$ because $1 \neq 2$, whereas $\delta_{33} = 1$ because $3 = 3$.
The Kronecker delta appears naturally in many areas of mathematics, physics, engineering and computer science, as a means of compactly expressing its definition above.
In linear algebra, the identity matrix $I$ has entries equal to the Kronecker delta:

$$I_{ij} = \delta_{ij},$$

where $i$ and $j$ take the values $1, 2, \dots, n$, and the inner product of vectors can be written as

$$\mathbf{a} \cdot \mathbf{b} = \sum_{i,j=1}^{n} a_i \delta_{ij} b_j = \sum_{i=1}^{n} a_i b_i.$$

Here the Euclidean vectors are defined as $n$-tuples: $\mathbf{a} = (a_1, a_2, \dots, a_n)$ and $\mathbf{b} = (b_1, b_2, \dots, b_n)$, and the last step is obtained by using the values of the Kronecker delta to reduce the summation over $j$.
It is common for $i$ and $j$ to be restricted to a set of the form $\{1, 2, \dots, n\}$ or $\{0, 1, \dots, n-1\}$, but the Kronecker delta can be defined on an arbitrary set.
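As a concrete illustration, here is a minimal Python sketch (using NumPy; the array values are arbitrary) verifying that the matrix of Kronecker deltas is the identity matrix and that the double sum over $\delta_{ij}$ collapses the inner product to a single sum:

```python
import numpy as np

n = 4
delta = np.eye(n, dtype=int)   # delta[i, j] = 1 if i == j else 0

a = np.array([2.0, -1.0, 3.0, 0.5])
b = np.array([1.0,  4.0, 0.0, 2.0])

# The explicit double sum over delta[i, j] ...
double_sum = sum(a[i] * delta[i, j] * b[j] for i in range(n) for j in range(n))
# ... collapses to the ordinary single-sum dot product.
assert double_sum == np.dot(a, b)
print(double_sum)   # 2*1 + (-1)*4 + 3*0 + 0.5*2 = -1.0
```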
Properties
The following equations are satisfied:

$$\sum_{j} \delta_{ij} a_j = a_i, \qquad \sum_{i} a_i \delta_{ij} = a_j, \qquad \sum_{k} \delta_{ik} \delta_{kj} = \delta_{ij}.$$
Therefore, the matrix can be considered as an identity matrix.
Another useful representation is the following form, valid for integers $n$ and $m$ with $|n - m| < N$:

$$\delta_{nm} = \frac{1}{N} \sum_{k=1}^{N} e^{2\pi i \frac{k}{N}(n-m)}.$$

This can be derived using the formula for the geometric series.
Alternative notation
Using the Iverson bracket:

$$\delta_{ij} = [i = j].$$

Often, a single-argument notation $\delta_i$ is used, which is equivalent to setting $j = 0$:

$$\delta_i = \begin{cases} 0 & \text{if } i \neq 0, \\ 1 & \text{if } i = 0. \end{cases}$$
In linear algebra, it can be thought of as a tensor, and is written $\delta^i_j$. Sometimes the Kronecker delta is called the substitution tensor.
Digital signal processing
In the study of digital signal processing (DSP), the unit sample function represents a special case of a 2-dimensional Kronecker delta function where the Kronecker indices include the number zero, and where one of the indices is zero. In this case:

$$\delta[n] \equiv \delta_{n0} \equiv \delta_{0n}, \qquad n \in \mathbb{Z}.$$

Or more generally:

$$\delta[n-k] \equiv \delta_{nk}, \qquad n, k \in \mathbb{Z}.$$

However, this is only a special case. In tensor calculus, it is more common to number basis vectors in a particular dimension starting with index 1, rather than index 0. In this case, the relation $\delta[n] = \delta_{n0}$ does not exist, and in fact, the Kronecker delta function and the unit sample function are different functions that overlap in the specific case where the indices include the number 0, the number of indices is 2, and one of the indices has the value of zero.
While the discrete unit sample function and the Kronecker delta function use the same letter, they differ in the following ways. For the discrete unit sample function, it is more conventional to place a single integer index in square braces; in contrast the Kronecker delta can have any number of indexes. Further, the purpose of the discrete unit sample function is different from the Kronecker delta function. In DSP, the discrete unit sample function is typically used as an input function to a discrete system for discovering the system function of the system which will be produced as an output of the system. In contrast, the typical purpose of the Kronecker delta function is for filtering terms from an Einstein summation convention.
The discrete unit sample function is more simply defined as:

$$\delta[n] = \begin{cases} 1 & n = 0, \\ 0 & n \text{ is another integer.} \end{cases}$$
In addition, the Dirac delta function is often confused with both the Kronecker delta function and the unit sample function. The Dirac delta is defined as:

$$\delta(t) = \begin{cases} \infty & t = 0, \\ 0 & t \neq 0, \end{cases} \qquad \text{with} \quad \int_{-\infty}^{\infty} \delta(t)\,dt = 1.$$

Unlike the Kronecker delta function $\delta_{ij}$ and the unit sample function $\delta[n]$, the Dirac delta function $\delta(t)$ does not have an integer index; it has a single continuous non-integer argument $t$.
To confuse matters more, the unit impulse function is sometimes used to refer to either the Dirac delta function $\delta(t)$ or the unit sample function $\delta[n]$.
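To make the DSP usage above concrete, here is a minimal Python sketch in which the unit sample probes a discrete system and the output is the system's impulse response. The 3-tap moving-average system is a hypothetical example, not drawn from the text:

```python
import numpy as np

def unit_sample(n):
    """Discrete unit sample delta[n]: 1 at n == 0, else 0."""
    return np.where(n == 0, 1, 0)

def system(x):
    """A hypothetical 3-tap moving-average system, used only for illustration."""
    h = np.array([0.25, 0.5, 0.25])
    return np.convolve(x, h)[:len(x)]

n = np.arange(-2, 6)
x = unit_sample(n)      # impulse located at n == 0
print(x)                # [0 0 1 0 0 0 0 0]
print(system(x))        # the system's impulse response appears at the output
```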
Notable properties
The Kronecker delta has the so-called sifting property that for $j \in \mathbb{Z}$:

$$\sum_{i=-\infty}^{\infty} a_i \delta_{ij} = a_j,$$

and if the integers are viewed as a measure space, endowed with the counting measure, then this property coincides with the defining property of the Dirac delta function

$$\int_{-\infty}^{\infty} \delta(x - y) f(x)\,dx = f(y),$$

and in fact Dirac's delta was named after the Kronecker delta because of this analogous property. In signal processing it is usually the context (discrete or continuous time) that distinguishes the Kronecker and Dirac "functions". And by convention, $\delta(t)$ generally indicates continuous time (Dirac), whereas arguments like $i$, $j$, $k$, $l$, $m$, and $n$ are usually reserved for discrete time (Kronecker). Another common practice is to represent discrete sequences with square brackets; thus: $\delta[n]$. The Kronecker delta is not the result of directly sampling the Dirac delta function.
The Kronecker delta forms the multiplicative identity element of an incidence algebra.
Relationship to the Dirac delta function
In probability theory and statistics, the Kronecker delta and Dirac delta function can both be used to represent a discrete distribution. If the support of a distribution consists of points $x_1, \dots, x_n$, with corresponding probabilities $p_1, \dots, p_n$, then the probability mass function $p(x)$ of the distribution over those points can be written, using the Kronecker delta, as

$$p(x) = \sum_{i=1}^{n} p_i \delta_{x x_i}.$$

Equivalently, the probability density function $f(x)$ of the distribution can be written using the Dirac delta function as

$$f(x) = \sum_{i=1}^{n} p_i \delta(x - x_i).$$
Under certain conditions, the Kronecker delta can arise from sampling a Dirac delta function. For example, if a Dirac delta impulse occurs exactly at a sampling point and is ideally lowpass-filtered (with cutoff at the critical frequency) per the Nyquist–Shannon sampling theorem, the resulting discrete-time signal will be a Kronecker delta function.
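A minimal Python sketch of the probability-mass-function formula above; the support points and probabilities are an assumed toy distribution:

```python
import numpy as np

support = np.array([1, 3, 7])        # the points x_i of the discrete distribution
probs   = np.array([0.2, 0.5, 0.3])  # the corresponding probabilities p_i

def pmf(x):
    """p(x) = sum_i p_i * delta_{x, x_i}; (support == x) plays the Kronecker delta."""
    return float(np.sum(probs * (support == x)))

print(pmf(3))   # 0.5 (x is in the support)
print(pmf(4))   # 0.0 (x is not in the support)
```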
Generalizations
If it is considered as a type $(1,1)$ tensor, the Kronecker tensor can be written $\delta^i_j$ with a covariant index $j$ and contravariant index $i$:

$$\delta^i_j = \begin{cases} 0 & (i \neq j), \\ 1 & (i = j). \end{cases}$$
This tensor represents:
The identity mapping (or identity matrix), considered as a linear mapping $V \to V$ or $V^* \to V^*$
The trace or tensor contraction, considered as a mapping $V \otimes V^* \to K$
The map $K \to V \otimes V^*$, representing scalar multiplication as a sum of outer products.
The generalized or multi-index Kronecker delta of order $2p$ is a type $(p, p)$ tensor that is completely antisymmetric in its $p$ upper indices, and also in its $p$ lower indices.
Two definitions that differ by a factor of $p!$ are in use. Below, the version presented has nonzero components scaled to be $\pm 1$. The second version has nonzero components that are $\pm 1/p!$, with consequent changes in scaling factors in formulae, such as the scaling factors of $1/p!$ in the properties below disappearing.
Definitions of the generalized Kronecker delta
In terms of the indices, the generalized Kronecker delta is defined as:

$$\delta^{\mu_1 \dots \mu_p}_{\nu_1 \dots \nu_p} = \begin{cases} +1 & \text{if } \nu_1, \dots, \nu_p \text{ are distinct integers and an even permutation of } \mu_1, \dots, \mu_p, \\ -1 & \text{if } \nu_1, \dots, \nu_p \text{ are distinct integers and an odd permutation of } \mu_1, \dots, \mu_p, \\ 0 & \text{in all other cases.} \end{cases}$$

Let $S_p$ be the symmetric group of degree $p$, then:

$$\delta^{\mu_1 \dots \mu_p}_{\nu_1 \dots \nu_p} = \sum_{\sigma \in S_p} \operatorname{sgn}(\sigma)\, \delta^{\mu_1}_{\nu_{\sigma(1)}} \cdots \delta^{\mu_p}_{\nu_{\sigma(p)}}.$$

Using anti-symmetrization:

$$\delta^{\mu_1 \dots \mu_p}_{\nu_1 \dots \nu_p} = p!\, \delta^{\mu_1}_{[\nu_1} \cdots \delta^{\mu_p}_{\nu_p]} = p!\, \delta^{[\mu_1}_{\nu_1} \cdots \delta^{\mu_p]}_{\nu_p}.$$

In terms of a $p \times p$ determinant:

$$\delta^{\mu_1 \dots \mu_p}_{\nu_1 \dots \nu_p} = \begin{vmatrix} \delta^{\mu_1}_{\nu_1} & \cdots & \delta^{\mu_1}_{\nu_p} \\ \vdots & \ddots & \vdots \\ \delta^{\mu_p}_{\nu_1} & \cdots & \delta^{\mu_p}_{\nu_p} \end{vmatrix}.$$
Using the Laplace expansion (Laplace's formula) of the determinant, it may be defined recursively:

$$\delta^{\mu_1 \dots \mu_p}_{\nu_1 \dots \nu_p} = \sum_{k=1}^{p} (-1)^{p+k}\, \delta^{\mu_p}_{\nu_k}\, \delta^{\mu_1 \dots \mu_{p-1}}_{\nu_1 \dots \check{\nu}_k \dots \nu_p},$$

where the caron, $\check{\ }$, indicates an index that is omitted from the sequence.
When $p = n$ (the dimension of the vector space), in terms of the Levi-Civita symbol:

$$\delta^{\mu_1 \dots \mu_n}_{\nu_1 \dots \nu_n} = \varepsilon^{\mu_1 \dots \mu_n} \varepsilon_{\nu_1 \dots \nu_n}.$$

More generally, for $m = n - p$, using the Einstein summation convention:

$$\delta^{\mu_1 \dots \mu_p}_{\nu_1 \dots \nu_p} = \frac{1}{m!}\, \varepsilon^{\kappa_1 \dots \kappa_m \mu_1 \dots \mu_p}\, \varepsilon_{\kappa_1 \dots \kappa_m \nu_1 \dots \nu_p}.$$
Contractions of the generalized Kronecker delta
Kronecker delta contractions depend on the dimension of the space. For example,

$$\delta^{\mu_1 \mu_2}_{\nu_1 \mu_2} = (n - 1)\, \delta^{\mu_1}_{\nu_1},$$

where $n$ is the dimension of the space. From this relation the fully contracted delta is obtained as

$$\delta^{\mu_1 \mu_2}_{\mu_1 \mu_2} = n(n - 1).$$

The generalization of the preceding formulas is

$$\delta^{\mu_1 \dots \mu_s \, \mu_{s+1} \dots \mu_p}_{\nu_1 \dots \nu_s \, \mu_{s+1} \dots \mu_p} = \frac{(n - s)!}{(n - p)!}\, \delta^{\mu_1 \dots \mu_s}_{\nu_1 \dots \nu_s}.$$
Properties of the generalized Kronecker delta
The generalized Kronecker delta may be used for anti-symmetrization:

$$\frac{1}{p!}\, \delta^{\mu_1 \dots \mu_p}_{\nu_1 \dots \nu_p}\, a^{\nu_1 \dots \nu_p} = a^{[\mu_1 \dots \mu_p]}, \qquad \frac{1}{p!}\, \delta^{\mu_1 \dots \mu_p}_{\nu_1 \dots \nu_p}\, a_{\mu_1 \dots \mu_p} = a_{[\nu_1 \dots \nu_p]}.$$

From the above equations and the properties of anti-symmetric tensors, we can derive the properties of the generalized Kronecker delta:

$$\frac{1}{p!}\, \delta^{\mu_1 \dots \mu_p}_{\nu_1 \dots \nu_p}\, a^{[\nu_1 \dots \nu_p]} = a^{[\mu_1 \dots \mu_p]}, \qquad \frac{1}{p!}\, \delta^{\mu_1 \dots \mu_p}_{\kappa_1 \dots \kappa_p}\, \delta^{\kappa_1 \dots \kappa_p}_{\nu_1 \dots \nu_p} = \delta^{\mu_1 \dots \mu_p}_{\nu_1 \dots \nu_p},$$

which are the generalized versions of the formulae written above. The last formula is equivalent to the Cauchy–Binet formula.

Reducing the order via summation of the indices may be expressed by the identity

$$\delta^{\mu_1 \dots \mu_{p-1}\, \kappa}_{\nu_1 \dots \nu_{p-1}\, \kappa} = (n - p + 1)\, \delta^{\mu_1 \dots \mu_{p-1}}_{\nu_1 \dots \nu_{p-1}}.$$

Using both the summation rule for the case $p = n$ and the relation with the Levi-Civita symbol, the summation rule of the Levi-Civita symbol is derived:

$$\varepsilon^{\mu_1 \dots \mu_p\, \kappa_{p+1} \dots \kappa_n}\, \varepsilon_{\nu_1 \dots \nu_p\, \kappa_{p+1} \dots \kappa_n} = (n - p)!\, \delta^{\mu_1 \dots \mu_p}_{\nu_1 \dots \nu_p}.$$
The 4D version of the last relation appears in Penrose's spinor approach to general relativity that he later generalized, while he was developing Aitken's diagrams, to become part of the technique of Penrose graphical notation. Also, this relation is extensively used in S-duality theories, especially when written in the language of differential forms and Hodge duals.
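The determinant form of the definition translates directly into code. The following Python sketch (the function name and test indices are illustrative) computes the generalized Kronecker delta for small index lists:

```python
import numpy as np

def gen_delta(mu, nu):
    """Generalized Kronecker delta via its determinant form:
    the determinant of the p x p matrix with entries delta(mu_a, nu_b)."""
    p = len(mu)
    m = np.array([[1.0 if mu[a] == nu[b] else 0.0 for b in range(p)]
                  for a in range(p)])
    return round(np.linalg.det(m))

print(gen_delta((0, 1), (0, 1)))   #  1  (identical index lists)
print(gen_delta((0, 1), (1, 0)))   # -1  (odd permutation)
print(gen_delta((0, 1), (0, 2)))   #  0  (indices do not match up)
```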
Integral representations
For any integers $x$ and $n$, the Kronecker delta can be written as a complex contour integral using a standard residue calculation:

$$\delta_{x,n} = \frac{1}{2\pi i} \oint_{|z|=1} z^{x-n-1}\, dz.$$

The integral is taken over the unit circle in the complex plane, oriented counterclockwise. An equivalent representation of the integral arises by parameterizing the contour by an angle $\varphi$ around the origin:

$$\delta_{x,n} = \frac{1}{2\pi} \int_0^{2\pi} e^{i(x-n)\varphi}\, d\varphi.$$
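The angular form can be checked numerically. A minimal Python sketch, assuming a uniform discretization of the unit circle:

```python
import numpy as np

def delta_via_integral(x, n, samples=1000):
    """(1/2pi) * integral of exp(i (x - n) phi) over [0, 2pi), via a uniform mean."""
    phi = np.linspace(0.0, 2.0 * np.pi, samples, endpoint=False)
    return np.mean(np.exp(1j * (x - n) * phi)).real

print(delta_via_integral(5, 5))   # 1.0
print(delta_via_integral(5, 3))   # 0.0 (up to floating-point rounding)
```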
The Kronecker comb
The Kronecker comb function with period $N$ is defined (using DSP notation) as:

$$\Delta_N[n] = \sum_{k=-\infty}^{\infty} \delta[n - kN],$$

where $N$ and $n$ are integers. The Kronecker comb thus consists of an infinite series of unit impulses that are $N$ units apart, aligned so one of the impulses occurs at zero. It may be considered to be the discrete analog of the Dirac comb.
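A short Python sketch of the comb (the function name is illustrative):

```python
import numpy as np

def kronecker_comb(n, N):
    """Comb with period N: 1 wherever n is an integer multiple of N, else 0."""
    return np.where(n % N == 0, 1, 0)

n = np.arange(-6, 7)
print(kronecker_comb(n, 3))   # impulses at n = -6, -3, 0, 3, 6
```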
See also
Dirac measure
Indicator function
Heaviside step function
Levi-Civita symbol
Minkowski metric
't Hooft symbol
Unit function
XNOR gate
References
Mathematical notation
Elementary special functions | Kronecker delta | [
"Mathematics"
] | 1,762 | [
"nan"
] |
182,945 | https://en.wikipedia.org/wiki/Ephedrine | Ephedrine is a central nervous system (CNS) stimulant and sympathomimetic agent that is often used to prevent low blood pressure during anesthesia. It has also been used for asthma, narcolepsy, and obesity but is not the preferred treatment. It is of unclear benefit in nasal congestion. It can be taken by mouth or by injection into a muscle, vein, or just under the skin. Onset with intravenous use is fast, while injection into a muscle can take 20minutes, and by mouth can take an hour for effect. When given by injection, it lasts about an hour, and when taken by mouth, it can last up to four hours.
Common side effects include trouble sleeping, anxiety, headache, hallucinations, high blood pressure, fast heart rate, loss of appetite, and urinary retention. Serious side effects include stroke and heart attack. While probably safe in pregnancy, its use in this population is poorly studied. Use during breastfeeding is not recommended. Ephedrine works by inducing the release of norepinephrine and hence indirectly activating the α- and β-adrenergic receptors. Chemically, ephedrine is a substituted amphetamine and is the (1R,2S)-enantiomer of β-hydroxy-N-methylamphetamine.
Ephedrine was first isolated in 1885 and came into commercial use in 1926. It is on the World Health Organization's List of Essential Medicines. It is available as a generic medication. It can normally be found in plants of the Ephedra genus. Over-the-counter dietary supplements containing ephedrine are illegal in the United States, with the exception of those used in traditional Chinese medicine, where its presence is noted by má huáng.
Medical uses
Ephedrine is a non-catecholamine sympathomimetic with cardiovascular effects similar to those of adrenaline/epinephrine: increased blood pressure, heart rate, and contractility. Like pseudoephedrine it is a bronchodilator, with pseudoephedrine having considerably less effect.
Ephedrine may decrease motion sickness, but it has mainly been used to decrease the sedating effects of other medications used for motion sickness.
Ephedrine has also been found to produce a quick and long-lasting response in congenital myasthenic syndrome, both in early childhood and in adults with a novel COLQ mutation.
Ephedrine is administered by intravenous boluses. Redosing usually requires increased doses to offset the development of tachyphylaxis, which is attributed to the depletion of catecholamine stores.
Weight loss
Ephedrine promotes modest short-term weight loss, specifically fat loss, but its long-term effects are unknown. In mice, ephedrine is known to stimulate thermogenesis in the brown adipose tissue, but because adult humans have only small amounts of brown fat, thermogenesis is assumed to take place mostly in the skeletal muscle. Ephedrine also decreases gastric emptying. Methylxanthines such as caffeine and theophylline have a synergistic effect with ephedrine for weight loss. This led to the creation and marketing of compound products. One of them, known as the ECA stack, contains ephedrine with caffeine and aspirin. It is a popular supplement taken by bodybuilders seeking to cut body fat before a competition.
A 2021 systematic review found that ephedrine led to a weight loss greater than placebo, raised heart rate, and reduced LDL and raised HDL, with no statistically significant difference in blood pressure.
Available forms
Ephedrine is available as a prescription-only pharmaceutical drug in the form of an intravenous solution, under brand names including Akovaz, Corphedra, Emerphed, and Rezipres as well as in generic forms, in the United States. It is also available over-the-counter in the form of 12.5 and 25 mg oral tablets for use as a bronchodilator and as a 0.5% concentration nasal spray for use as a decongestant. The drug is additionally available in combination with guaifenesin in the form of oral tablets and liquids. Ephedrine is provided as the hydrochloride or sulfate salt in pharmaceutical formulations.
Contraindications
Ephedrine should not be used in conjunction with certain antidepressants, namely norepinephrine-dopamine reuptake inhibitors (NDRIs), as this increases the risk of symptoms due to excessive serum levels of norepinephrine.
Bupropion is an example of an antidepressant with an amphetamine-like structure similar to ephedrine, and it is an NDRI. Its action bears more resemblance to amphetamine than to fluoxetine in that its primary mode of therapeutic action involves norepinephrine and to a lesser degree dopamine, but it also releases some serotonin from presynaptic clefts. It should not be used with ephedrine, as it may increase the likelihood of side effects.
Ephedrine should be used with caution in patients with inadequate fluid replacement, impaired adrenal function, hypoxia, hypercapnia, acidosis, hypertension, hyperthyroidism, prostatic hypertrophy, diabetes mellitus, cardiovascular disease, during delivery if maternal blood pressure is >130/80 mmHg, and during lactation.
Contraindications for the use of ephedrine include: closed-angle glaucoma, phaeochromocytoma, asymmetric septal hypertrophy (idiopathic hypertrophic subaortic stenosis), concomitant or recent (previous 14 days) monoamine oxidase inhibitor (MAOI) therapy, general anaesthesia with halogenated hydrocarbons (particularly halothane), tachyarrhythmias or ventricular fibrillation, or hypersensitivity to ephedrine or other stimulants.
Ephedrine should not be used at any time during pregnancy unless specifically indicated by a qualified physician and only when other options are unavailable.
Side effects
Ephedrine is a potentially dangerous natural compound; the US Food and Drug Administration had received over 18,000 reports of adverse effects in people using it.
Adverse drug reactions (ADRs) are more common with systemic administration (e.g. injection or oral administration) compared to topical administration (e.g. nasal instillations). ADRs associated with ephedrine therapy include
Cardiovascular: tachycardia, cardiac arrhythmias, angina pectoris, vasoconstriction with hypertension
Dermatological: flushing, sweating, acne vulgaris
Gastrointestinal: nausea
Genitourinary: decreased urination due to vasoconstriction of renal arteries, difficulty urinating is not uncommon, as alpha-agonists such as ephedrine constrict the internal urethral sphincter, mimicking the effects of sympathetic nervous system stimulation
Nervous system: restlessness, confusion, insomnia, mild euphoria, mania/hallucinations (rare except in previously existing psychiatric conditions), delusions, formication (may be possible, but lacks documented evidence) paranoia, hostility, panic, agitation
Respiratory: dyspnea, pulmonary edema
Miscellaneous: dizziness, headache, tremor, hyperglycemic reactions, dry mouth
Overdose
Overdose of ephedrine may result in sympathomimetic symptoms like tachycardia and hypertension.
Interactions
Ephedrine with monoamine oxidase inhibitors (MAOIs) like phenelzine and tranylcypromine can result in hypertensive crisis.
Pharmacology
Pharmacodynamics
Ephedrine, a sympathomimetic amine, acts on part of the sympathetic nervous system (SNS). The principal mechanism of action relies on its indirect stimulation of the adrenergic receptor system by increasing activation of α- and β-adrenergic receptors via induction of norepinephrine release. The presence of direct interactions with α-adrenergic receptors is unlikely but still controversial. L-Ephedrine, and particularly its stereoisomer norpseudoephedrine (which is also present in Catha edulis), has indirect sympathomimetic effects and, due to its ability to cross the blood–brain barrier, is a CNS stimulant similar to amphetamines but less pronounced, as it releases norepinephrine and dopamine in the brain.
Pharmacokinetics
Absorption
The oral bioavailability of ephedrine is 88%. The onset of action of ephedrine orally is 15 to 60 minutes, via intramuscular injection is 10 to 20 minutes, and via intravenous infusion is within seconds.
Distribution
Its plasma protein binding is approximately 24 to 29%, with 5 to 10% bound to albumin.
Metabolism
Ephedrine is largely not metabolized. Norephedrine (phenylpropanolamine) is an active metabolite of ephedrine formed via N-demethylation. About 8 to 20% of an oral dose of ephedrine is demethylated into norephedrine, about 4 to 13% is oxidatively deaminated into benzoic acid, and a small fraction is converted into 1,2-dihydroxy-1-phenylpropane.
Elimination
Ephedrine is eliminated mainly in urine, with 60% (range 53–79%) excreted unchanged.
The elimination half-life of ephedrine is 6 hours. Its duration of action orally is 2 to 4 hours and via intravenous or intramuscular injection is 60 minutes.
The elimination of ephedrine is dependent on urinary pH.
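For illustration only, assuming simple one-compartment, first-order elimination (a simplification, since as noted above ephedrine's elimination depends on urinary pH), the quoted 6-hour half-life implies the following decay of the remaining drug fraction. This Python sketch is a back-of-envelope model, not dosing guidance:

```python
import math

HALF_LIFE_H = 6.0                  # elimination half-life from the text, in hours
k = math.log(2) / HALF_LIFE_H      # first-order rate constant (per hour)

def remaining_fraction(t_hours):
    """Fraction of drug remaining after t hours in a one-compartment model."""
    return math.exp(-k * t_hours)

for t in (0, 6, 12, 24):
    print(f"t = {t:2d} h: {remaining_fraction(t):.3f}")
# t =  0 h: 1.000 ... t = 24 h: 0.062 (four half-lives)
```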
Chemistry
Ephedrine, or (−)-(1R,2S)-ephedrine, also known as (1R,2S)-β-hydroxy-N-methyl-α-methyl-β-phenethylamine or as (1R,2S)-β-hydroxy-N-methylamphetamine, is a substituted phenethylamine and amphetamine derivative. It is similar in chemical structure to phenylpropanolamine, methamphetamine, and epinephrine (adrenaline). It differs from methamphetamine only by the presence of a hydroxyl group (–OH). Chemically, ephedrine is an alkaloid with a phenethylamine skeleton found in various plants in the genus Ephedra (family Ephedraceae). It is most usually marketed as the hydrochloride or sulfate salt.
It has an experimental log P of 1.13, while its predicted log P values range from 0.9 to 1.32. The lipophilicity of amphetamines is closely related to their brain permeability. For comparison to ephedrine, the experimental log P of methamphetamine is 2.1, of amphetamine is 1.8, of pseudoephedrine is 0.89, of phenylpropanolamine is 0.7, of phenylephrine is -0.3, and of norepinephrine is -1.2. Methamphetamine has high brain permeability, whereas phenylephrine and norepinephrine are peripherally selective drugs. The optimal log P for brain permeation and central activity is about 2.1 (range 1.5–2.7).
Ephedrine hydrochloride has a melting point of 187–188 °C.
The racemic form of ephedrine is racephedrine ((±)-ephedrine; dl-ephedrine; (1RS,2SR)-ephedrine). A stereoisomer of ephedrine is pseudoephedrine. Derivatives of ephedrine include methylephedrine (N-methylephedrine), etafedrine (N-ethylephedrine), cinnamedrine (N-cinnamylephedrine), and oxilofrine (4-hydroxyephedrine). Analogues of ephedrine include phenylpropanolamine (norephedrine) and metaraminol (3-hydroxynorephedrine).
The presence of an N-methyl group decreases binding affinities at α-adrenergic receptors, compared with norephedrine. Ephedrine, though, binds better than N-methylephedrine, which has an additional methyl group at the nitrogen atom. Also, the steric orientation of the hydroxyl group is important for receptor binding and functional activity.
Nomenclature
Ephedrine exhibits optical isomerism and has two chiral centres, giving rise to four stereoisomers. By convention, the pair of enantiomers with the stereochemistry (1R,2S) and (1S,2R) is designated ephedrine, while the pair of enantiomers with the stereochemistry (1R,2R) and (1S,2S) is called pseudoephedrine.
The isomer which is marketed is (−)-(1R,2S)-ephedrine.
In the outdated D/L system (+)-ephedrine is also referred to as D-ephedrine and (−)-ephedrine as L-ephedrine (in which case, in the Fisher projection, the phenyl ring is drawn at the bottom).
Often, the D/L system (with small caps) and the d/l system (with lower-case) are confused. The result is that the levorotatory l-ephedrine is wrongly named L-ephedrine and the dextrorotatory d-pseudoephedrine (the diastereomer) wrongly D-pseudoephedrine.
The IUPAC names of the two enantiomers are (1R,2S)- respectively (1S,2R)-2-methylamino-1-phenylpropan-1-ol. A synonym is erythro-ephedrine.
Detection in body fluids
Ephedrine may be quantified in blood, plasma, or urine to monitor possible abuse by athletes, confirm a diagnosis of poisoning, or assist in a medicolegal death investigation. Many commercial immunoassay screening tests directed at the amphetamines cross-react appreciably with ephedrine, but chromatographic techniques can easily distinguish ephedrine from other phenethylamine derivatives. Blood or plasma ephedrine concentrations are typically in the 20–200 μg/L range in persons taking the drug therapeutically, 300–3000 μg/L in abusers or poisoned patients, and 3–20 mg/L in cases of acute fatal overdosage. The current World Anti-Doping Agency (WADA) limit for ephedrine in an athlete's urine is 10 μg/mL.
History
Asia
Ephedrine in its natural form, known as máhuáng (麻黄) in traditional Chinese medicine, has been documented in China since the Han dynasty (206 BC – 220 AD) as an antiasthmatic and stimulant. In traditional Chinese medicine, máhuáng has been used as a treatment for asthma and bronchitis for centuries.
In 1885, the chemical synthesis of ephedrine was first accomplished by Japanese organic chemist Nagai Nagayoshi based on his research on traditional Japanese and Chinese herbal medicines.
The industrial manufacture of ephedrine in China began in the 1920s, when Merck began marketing and selling the drug as ephetonin. Ephedrine exports from China to the West grew from 4 to 216 tonnes between 1926 and 1928.
Western medicine
Ephedrine was first introduced for medical use in the United States in 1926.
It was introduced in 1948 in Vicks Vatronol nose drops (now discontinued) which contained ephedrine sulfate as the active ingredient for rapid nasal decongestion.
Society and culture
Names
Ephedrine is the generic name of the drug and its international nonproprietary name (INN). Its French nonproprietary name is ephédrine, while its Italian nonproprietary name is efedrina. In the case of the hydrochloride salt, its generic name is ephedrine hydrochloride; in the case of the sulfate salt, its generic name is ephedrine sulfate or ephedrine sulphate. A synonym of ephedrine sulfate is isofedrol. These names all refer to the (1R,2S)-enantiomer of ephedrine. The racemic form of ephedrine is known as racephedrine, while the hydrochloride salt of the racemic form is known as racephedrine hydrochloride.
Recreational use
As a phenethylamine, ephedrine has a similar chemical structure to amphetamines and is a methamphetamine analog having the methamphetamine structure with a hydroxyl group at the β position. Because of ephedrine's structural similarity to methamphetamine, it can be used to create methamphetamine using chemical reduction in which ephedrine's hydroxyl group is removed; this has made ephedrine a highly sought-after chemical precursor in the illicit manufacture of methamphetamine.
The most popular method for reducing ephedrine to methamphetamine is similar to the Birch reduction, in that it uses anhydrous ammonia and lithium metal in the reaction. The second-most popular method uses red phosphorus and iodine in the reaction with ephedrine. Moreover, ephedrine can be synthesized into methcathinone via simple oxidation. As such, ephedrine is listed as a table-I precursor under the United Nations Convention Against Illicit Traffic in Narcotic Drugs and Psychotropic Substances.
Use in exercise and sports
Ephedrine has been used as a performance-enhancing drug in exercise and sports. It can increase heart rate, blood pressure, and cardiac contractility as well as act as a psychostimulant. Ephedrine is often used in combination with caffeine for performance-enhancing purposes.
Other uses
In chemical synthesis, ephedrine is used in bulk quantities as a chiral auxiliary group.
In saquinavir synthesis, the half-acid is resolved as its salt with l-ephedrine.
Legal status
Canada
In January 2002, Health Canada issued a voluntary recall of all ephedrine products containing more than 8 mg per dose, all combinations of ephedrine with other stimulants such as caffeine, and all ephedrine products marketed for weight-loss or bodybuilding indications, citing a serious risk to health. Ephedrine is still sold as an oral nasal decongestant in 8 mg pills as a natural health product, with a limit of 0.4 g (400 mg) per package, the limit established by the Controlled Drugs and Substances Act, as it is considered a Class A precursor.
United States
In 1997, the FDA proposed a regulation on ephedra (the herb from which ephedrine is obtained), which limited an ephedra dose to 8 mg (of active ephedrine) with no more than 24 mg per day. This proposed rule was withdrawn, in part, in 2000 because of "concerns regarding the agency's basis for proposing a certain dietary ingredient level and a duration of use limit for these products." In 2004, the FDA created a ban on ephedrine alkaloids marketed for reasons other than asthma, colds, allergies, other disease, or traditional Asian use. On April 14, 2005, the U.S. District Court for the District of Utah ruled the FDA did not have proper evidence that low dosages of ephedrine alkaloids are actually unsafe, but on August 17, 2006, the U.S. Court of Appeals for the Tenth Circuit in Denver upheld the FDA's final rule declaring all dietary supplements containing ephedrine alkaloids adulterated, and therefore illegal for marketing in the United States. Furthermore, ephedrine is banned by the NCAA, MLB, NFL, and PGA. Ephedrine is, however, still legal in many applications outside of dietary supplements. Purchasing is currently limited and monitored, with specifics varying from state to state.
The House passed the Combat Methamphetamine Epidemic Act of 2005 as an amendment to the renewal of the USA PATRIOT Act. Signed into law by President George W. Bush on March 6, 2006, the act amended the US Code (21 USC 830) concerning the sale of products containing ephedrine and the closely related drug pseudoephedrine. Both substances are used as precursors in the illicit production of methamphetamine, and to discourage that use the federal statute included the following requirements for merchants who sell these products:
A retrievable record of all purchases identifying the name and address of each party to be kept for two years
Required verification of proof of identity of all purchasers
Required protection and disclosure methods in the collection of personal information
Reports to the Attorney General of any suspicious payments or disappearances of the regulated products
Non-liquid dose form of regulated product may only be sold in unit-dose blister packs
Regulated products are to be sold behind the counter or in a locked cabinet in such a way as to restrict access
Daily sales of regulated products not to exceed 3.6 g to a single purchaser, without regard to the number of transactions
Monthly sales to a single purchaser not to exceed 9 g of pseudoephedrine base in regulated products
The law gives similar regulations to mail-order purchases, except the monthly sales limit is 7.5 g.
As a pure herb or tea, má huáng, containing ephedrine, is still sold legally in the US. The law restricts/prohibits its being sold as a dietary supplement (pill) or as an ingredient/additive to other products, like diet pills.
Australia
Ephedrine and all Ephedra species that contain it are considered Schedule 4 substances under the Poisons Standard. A Schedule 4 drug is considered a Prescription Only Medicine, or Prescription Animal Remedy – Substances, the use or supply of which should be by or on the order of persons permitted by State or Territory legislation to prescribe and should be available from a pharmacist on prescription under the Poisons Standard.
South Africa
In South Africa, ephedrine was moved to schedule 6 on 27 May 2008, which makes pure ephedrine tablets prescription only. Pills containing ephedrine up to 30 mg per tablet in combination with other medications are still available OTC, schedule 1 and 2, for sinus, head colds, and influenza.
Germany
Ephedrine was freely available in pharmacies in Germany until 2001. Afterward, access was restricted since it was mostly bought for unindicated uses. Similarly, ephedra can only be bought with a prescription. Since April 2006, all products, including plant parts, that contain ephedrine are only available with a prescription.
Sources
Agricultural
Ephedrine is obtained from the plant Ephedra sinica and other members of the genus Ephedra, from which the name of the substance is derived. Raw materials for the manufacture of ephedrine and traditional Chinese medicines are produced in China on a large scale. As of 2007, companies produced for export US$13 million worth of ephedrine from 30,000 tons of ephedra annually, or about ten times the amount used in traditional Chinese medicine.
Synthetic
Most of the l-ephedrine produced today for official medical use is made synthetically as the extraction and isolation process from E. sinica is tedious and no longer cost-effective.
Biosynthetic
Ephedrine was long thought to come from modifying the amino acid L-phenylalanine. L-Phenylalanine would be decarboxylated and subsequently attacked with ω-aminoacetophenone. Methylation of this product would then produce ephedrine. This pathway has since been disproven. A new pathway proposed suggests that phenylalanine first forms cinnamoyl-CoA via the enzymes phenylalanine ammonia-lyase and acyl CoA ligase. The cinnamoyl-CoA is then reacted with a hydratase to attach the alcohol functional group. The product is then reacted with a retro-aldolase, forming benzaldehyde. Benzaldehyde reacts with pyruvic acid to attach a 2-carbon unit. This product then undergoes transamination and methylation to form ephedrine and its stereoisomer, pseudoephedrine.
References
External links
Amphetamine alkaloids
Anorectics
Anti-obesity drugs
Antihypotensive agents
Beta-Hydroxyamphetamines
Bronchodilators
Cardiac stimulants
Chinese inventions
Decongestants
Drugs acting on the cardiovascular system
Drugs acting on the nervous system
Drugs in sport
Enantiopure drugs
Ergogenic aids
Euphoriants
Han dynasty
Methamphetamine
Methamphetamines
Norepinephrine releasing agents
Ophthalmology drugs
Peripherally selective drugs
Stimulants
Sympathomimetics
Traditional Chinese medicine
Wakefulness-promoting agents
Wikipedia medicine articles ready to translate
World Anti-Doping Agency prohibited substances
World Health Organization essential medicines | Ephedrine | [
"Chemistry"
] | 5,356 | [
"Stereochemistry",
"Enantiopure drugs"
] |
183,071 | https://en.wikipedia.org/wiki/Ephedraceae | Ephedraceae is a family of gymnosperms belonging to Gnetophyta, it contains only a single extant genus, Ephedra, as well as a number of extinct genera from the Early Cretaceous.
Taxonomy
Ephedraceae is agreed to be the most basal group amongst extant gnetophytes. Members of the family typically grow as shrubs and have small, linear leaves that possess parallel veins. The fossil Ephedraceae genera show a range of morphologies transitional between the ancestral lax male and female reproductive structures and the highly compact reproductive structures typical of modern Ephedra. Modern members of Ephedra have either dry, winged, membranous bracts (modified leaves that surround the seed), which are dispersed by wind; leathery-covered seeds, which are dispersed by seed-eating rodents; or fleshy bracts, which are consumed and then dispersed by birds. Some extinct members of Ephedra from the Early Cretaceous, such as Ephedra carnosa, as well as Arlenea from the Early Cretaceous of Brazil, have fleshy bracts surrounding the seeds, suggesting that these seeds were dispersed by animals.
Genera
Ephedra L. Early Cretaceous-Recent
Arlenea Ribeiro, Yang, Saraiva, Bantim, Calixto Junior et Lima, 2023 Crato Formation, Brazil, Early Cretaceous (Aptian)
Leongathia V.A. Krassilov, D.L. Dilcher & J.G. Douglas 1998 Koonwarra fossil bed, Australia, Early Cretaceous (Aptian)
Jianchangia Yang, Wang and Ferguson, 2020 Jiufotang Formation, China, Early Cretaceous (Aptian)
Eamesia Yang, Lin and Ferguson, 2018 Yixian Formation, China, Early Cretaceous (Aptian)
Prognetella Krassilov et Bugdaeva, 1999 Yixian Formation, China, Early Cretaceous (Aptian) (initially interpreted as an angiosperm)
Chengia Yang, Lin & Wang, 2013, Yixian Formation, China, Early Cretaceous (Aptian)
Chaoyangia Duan, 1998 Yixian Formation, China, Early Cretaceous (Aptian)
Eragrosites Cao & Wu, 1998 Yixian Formation, China, Early Cretaceous (Aptian)
Gurvanella Krassilov, 1982 China, Mongolia, Early Cretaceous
Alloephedra Tao and Yang, 2003 China, Early Cretaceous (considered a synonym of Ephedra by some authors)
Amphiephedra Miki, 1964 China, Early Cretaceous
Beipiaoa Dilcher & al, 2001 China, Early Cretaceous
Ephedrispermum Rydin, K.R.Pedersen, P.R.Crane et E.M.Friis, 2006 Portugal, Early Cretaceous (Aptian-Albian)
Ephedrites Guo and Wu, 2000 China, Early Cretaceous
Erenia Krassilov, 1982 China, Mongolia, Early Cretaceous
Liaoxia Cao et al. 1998 China, Early Cretaceous
Dichoephedra Ren et al. 2020 China, Early Cretaceous
Laiyangia P.H. Jin, 2024 China, Early Cretaceous
?Pseudoephedra Liu and Wang, 2015 China, Early Cretaceous
References
Plant families
Taxa named by Barthélemy Charles Joseph Dumortier | Ephedraceae | [
"Biology"
] | 673 | [
"Plant families",
"Plants"
] |
183,083 | https://en.wikipedia.org/wiki/Galaxy%20rotation%20curve | The rotation curve of a disc galaxy (also called a velocity curve) is a plot of the orbital speeds of visible stars or gas in that galaxy versus their radial distance from that galaxy's centre. It is typically rendered graphically as a plot, and the data observed from each side of a spiral galaxy are generally asymmetric, so that data from each side are averaged to create the curve. A significant discrepancy exists between the experimental curves observed, and a curve derived by applying gravity theory to the matter observed in a galaxy. Theories involving dark matter are the main postulated solutions to account for the variance.
The rotational/orbital speeds of galaxies/stars do not follow the rules found in other orbital systems such as stars/planets and planets/moons that have most of their mass at the centre. Stars revolve around their galaxy's centre at equal or increasing speed over a large range of distances. In contrast, the orbital velocities of planets in planetary systems and moons orbiting planets decline with distance according to Kepler’s third law. This reflects the mass distributions within those systems. The mass estimations for galaxies based on the light they emit are far too low to explain the velocity observations.
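For a system whose mass M sits essentially all at the centre, the expected decline follows from equating the gravitational and centripetal forces on a circular orbit of radius r; this is a standard textbook result, reproduced here only for reference:

\frac{G M m}{r^{2}} = \frac{m v^{2}}{r}
\quad\Longrightarrow\quad
v(r) = \sqrt{\frac{G M}{r}} \propto r^{-1/2}

This r^{-1/2} falloff is exactly what observed galactic rotation curves fail to show.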
The galaxy rotation problem is the discrepancy between observed galaxy rotation curves and the theoretical prediction, assuming a centrally dominated mass associated with the observed luminous material. When mass profiles of galaxies are calculated from the distribution of stars in spirals and mass-to-light ratios in the stellar disks, they do not match with the masses derived from the observed rotation curves and the law of gravity. A solution to this conundrum is to hypothesize the existence of dark matter and to assume its distribution from the galaxy's center out to its halo. Thus the discrepancy between the two curves can be accounted for by adding a dark matter halo surrounding the galaxy.
Though dark matter is by far the most accepted explanation of the rotation problem, other proposals have been offered with varying degrees of success. Of the possible alternatives, one of the most notable is modified Newtonian dynamics (MOND), which involves modifying the laws of gravity.
History
In 1932, Jan Hendrik Oort became the first to report that measurements of the stars in the solar neighborhood indicated that they moved faster than expected when a mass distribution based upon visible matter was assumed, but these measurements were later determined to be essentially erroneous. In 1939, Horace Babcock reported in his PhD thesis measurements of the rotation curve for Andromeda which suggested that the mass-to-luminosity ratio increases radially. He attributed that to either the absorption of light within the galaxy or to modified dynamics in the outer portions of the spiral and not to any form of missing matter. Babcock's measurements turned out to disagree substantially with those found later, and the first measurement of an extended rotation curve in good agreement with modern data was published in 1957 by Henk van de Hulst and collaborators, who studied M31 with the Dwingeloo Radio Observatory's newly commissioned 25-meter radio telescope. A companion paper by Maarten Schmidt showed that this rotation curve could be fit by a flattened mass distribution more extensive than the light. In 1959, Louise Volders used the same telescope to demonstrate that the spiral galaxy M33 also does not spin as expected according to Keplerian dynamics.
Reporting on NGC 3115, Jan Oort wrote that "the distribution of mass in the system appears to bear almost no relation to that of light... one finds the ratio of mass to light in the outer parts of NGC 3115 to be about 250". On page 302–303 of his journal article, he wrote that "The strongly condensed luminous system appears imbedded in a large and more or less homogeneous mass of great density" and although he went on to speculate that this mass may be either extremely faint dwarf stars or interstellar gas and dust, he had clearly detected the dark matter halo of this galaxy.
The Carnegie telescope (Carnegie Double Astrograph) was intended to study this problem of Galactic rotation.
In the late 1960s and early 1970s, Vera Rubin, an astronomer at the Department of Terrestrial Magnetism at the Carnegie Institution of Washington, worked with a new sensitive spectrograph that could measure the velocity curve of edge-on spiral galaxies to a greater degree of accuracy than had ever before been achieved. Together with fellow staff-member Kent Ford, Rubin announced at a 1975 meeting of the American Astronomical Society the discovery that most stars in spiral galaxies orbit at roughly the same speed, and that this implied that galaxy masses grow approximately linearly with radius well beyond the location of most of the stars (the galactic bulge). Rubin presented her results in an influential paper in 1980. These results suggested either that Newtonian gravity does not apply universally or that, conservatively, upwards of 50% of the mass of galaxies was contained in the relatively dark galactic halo. Although initially met with skepticism, Rubin's results have been confirmed over the subsequent decades.
If Newtonian mechanics is assumed to be correct, it would follow that most of the mass of the galaxy had to be in the galactic bulge near the center and that the stars and gas in the disk portion should orbit the center at decreasing velocities with radial distance from the galactic center (the dashed line in Fig. 1).
Observations of the rotation curve of spirals, however, do not bear this out. Rather, the curves do not decrease in the expected inverse square root relationship but are "flat", i.e. outside of the central bulge the speed is nearly a constant (the solid line in Fig. 1). It is also observed that galaxies with a uniform distribution of luminous matter have a rotation curve that rises from the center to the edge, and most low-surface-brightness galaxies (LSB galaxies) have the same anomalous rotation curve.
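The dynamical implication of a flat curve is easy to make concrete: for a circular orbit, the mass enclosed within radius r is M(r) = v(r)^2 r / G, so constant v implies M(r) growing linearly with r. A minimal Python sketch follows; the 220 km/s value is an illustrative flat-curve speed, roughly that of the Milky Way, not a fit to any data set.

import numpy as np

G = 4.30091e-6   # gravitational constant in kpc (km/s)^2 / Msun
v_flat = 220.0   # km/s, illustrative flat rotation speed

def enclosed_mass(r_kpc, v_kms=v_flat):
    """Mass (solar masses) enclosed within r for a circular orbit at speed v."""
    return v_kms**2 * r_kpc / G

for r in (5.0, 10.0, 20.0, 40.0):
    print(f"r = {r:5.1f} kpc  ->  M(<r) ~ {enclosed_mass(r):.2e} Msun")
# The printed mass doubles each time r doubles: a flat curve implies M(r) proportional to r.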
The rotation curves might be explained by hypothesizing the existence of a substantial amount of matter permeating the galaxy outside of the central bulge that is not emitting light in the mass-to-light ratio of the central bulge. The material responsible for the extra mass was dubbed dark matter, the existence of which was first posited in the 1930s by Jan Oort in his measurements of the Oort constants and Fritz Zwicky in his studies of the masses of galaxy clusters.
Dark matter
While the observed galaxy rotation curves were one of the first indications that some mass in the universe may not be visible, many different lines of evidence now support the concept of cold dark matter as the dominant form of matter in the universe. Among the lines of evidence are mass-to-light ratios which are much too low without a dark matter component, the amount of hot gas detected in galactic clusters by x-ray astronomy, measurements of cluster mass with the Sunyaev–Zeldovich effect and with gravitational lensing. Models of the formation of galaxies are based on their dark matter halos. The existence of non-baryonic cold dark matter (CDM) is today a major feature of the Lambda-CDM model that describes the cosmology of the universe and matches high precision astrophysical observations.
Further investigations
The rotational dynamics of galaxies are well characterized by their position on the Tully–Fisher relation, which shows that for spiral galaxies the rotational velocity is uniquely related to their total luminosity. A consistent way to predict the rotational velocity of a spiral galaxy is to measure its bolometric luminosity and then read its rotation rate from its location on the Tully–Fisher diagram. Conversely, knowing the rotational velocity of a spiral galaxy gives its luminosity. Thus the magnitude of the galaxy rotation is related to the galaxy's visible mass.
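As a rough numerical illustration of reading a rotation speed off the relation, here is a Python sketch; the normalization constants L0 and v0 are assumptions chosen only to make the example concrete, since real applications calibrate them per photometric band and sample.

def tully_fisher_velocity(L_solar, L0=1.0e10, v0=200.0):
    """Illustrative Tully-Fisher estimate: v = v0 * (L / L0)**0.25.

    L0 (solar luminosities) and v0 (km/s) are assumed calibration
    constants for this sketch, not measured values.
    """
    return v0 * (L_solar / L0) ** 0.25

for L in (1e9, 1e10, 1e11):
    print(f"L = {L:.0e} Lsun -> v ~ {tully_fisher_velocity(L):.0f} km/s")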
While precise fitting of the bulge, disk, and halo density profiles is a rather complicated process, it is straightforward to model the observables of rotating galaxies through this relationship. So, while state-of-the-art cosmological and galaxy formation simulations of dark matter with normal baryonic matter included can be matched to galaxy observations, there is not yet any straightforward explanation as to why the observed scaling relationship exists. Additionally, detailed investigations of the rotation curves of low-surface-brightness galaxies (LSB galaxies) in the 1990s and of their position on the Tully–Fisher relation showed that LSB galaxies had to have dark matter haloes that are more extended and less dense than those of galaxies with high surface brightness, and thus surface brightness is related to the halo properties. Such dark-matter-dominated dwarf galaxies may hold the key to solving the dwarf galaxy problem of structure formation.
Very importantly, the analysis of the inner parts of low and high surface brightness galaxies showed that the shape of the rotation curves in the centre of dark-matter dominated systems indicates a profile different from the NFW spatial mass distribution profile. This so-called cuspy halo problem is a persistent problem for the standard cold dark matter theory. Simulations involving the feedback of stellar energy into the interstellar medium in order to alter the predicted dark matter distribution in the innermost regions of galaxies are frequently invoked in this context.
Halo density profiles
In order to accommodate a flat rotation curve, a density profile for a galaxy and its environs must be different from one that is centrally concentrated. Newton's version of Kepler's Third Law implies that the spherically symmetric, radial density profile \rho(r) is:

\rho(r) = \frac{v(r)^{2}}{4 \pi G r^{2}} \left( 1 + 2\, \frac{\mathrm{d} \ln v(r)}{\mathrm{d} \ln r} \right)

where v(r) is the radial orbital velocity profile and G is the gravitational constant. This profile closely matches the expectations of a singular isothermal sphere profile: if v(r) is approximately constant, then the density falls off as \rho \propto r^{-2} down to some inner "core radius" where the density is then assumed constant. Observations do not comport with such a simple profile, as reported by Navarro, Frenk, and White in a seminal 1996 paper.
The authors then remarked that a "gently changing logarithmic slope" for a density profile function could also accommodate approximately flat rotation curves over large scales. They found the famous Navarro–Frenk–White profile, which is consistent both with N-body simulations and observations, given by

\rho(r) = \frac{\rho_{0}}{\frac{r}{R_{s}} \left( 1 + \frac{r}{R_{s}} \right)^{2}}

where the central density, \rho_{0}, and the scale radius, R_{s}, are parameters that vary from halo to halo. Because the slope of the density profile diverges at the center, other alternative profiles have been proposed, for example the Einasto profile, which has exhibited better agreement with certain dark matter halo simulations.
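Given NFW parameters, the implied circular-velocity curve follows from the enclosed mass. A minimal Python sketch follows, with rho0 and Rs as placeholder values rather than fits to any real halo.

import numpy as np

G = 4.30091e-6   # kpc (km/s)^2 / Msun
rho0 = 1.0e7     # Msun / kpc^3, assumed central density parameter
Rs = 20.0        # kpc, assumed scale radius

def nfw_enclosed_mass(r):
    """Mass inside radius r (kpc): M(r) = 4*pi*rho0*Rs^3 * [ln(1+x) - x/(1+x)], x = r/Rs."""
    x = r / Rs
    return 4.0 * np.pi * rho0 * Rs**3 * (np.log(1.0 + x) - x / (1.0 + x))

def nfw_circular_velocity(r):
    """Circular speed (km/s) implied by the enclosed NFW mass."""
    return np.sqrt(G * nfw_enclosed_mass(r) / r)

for r in (5.0, 20.0, 80.0):
    print(f"r = {r:5.1f} kpc -> v_c ~ {nfw_circular_velocity(r):.0f} km/s")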
Observations of orbit velocities in spiral galaxies suggest a mass structure according to:

v(r) = \sqrt{ r \, \frac{\mathrm{d}\Phi}{\mathrm{d}r} }

with \Phi(r) the galaxy gravitational potential.
Since observations of galaxy rotation do not match the distribution expected from application of Kepler's laws, they do not match the distribution of luminous matter. This implies that spiral galaxies contain large amounts of dark matter or, alternatively, the existence of exotic physics in action on galactic scales. The additional invisible component becomes progressively more conspicuous in each galaxy at outer radii and among galaxies in the less luminous ones.
A popular interpretation of these observations is that about 26% of the mass of the Universe is composed of dark matter, a hypothetical type of matter which does not emit or interact with electromagnetic radiation. Dark matter is believed to dominate the gravitational potential of galaxies and clusters of galaxies. Under this theory, galaxies are baryonic condensations of stars and gas (namely hydrogen and helium) that lie at the centers of much larger haloes of dark matter, affected by a gravitational instability caused by primordial density fluctuations.
Many cosmologists strive to understand the nature and the history of these ubiquitous dark haloes by investigating the properties of the galaxies they contain (i.e. their luminosities, kinematics, sizes, and morphologies). The measurement of the kinematics (their positions, velocities and accelerations) of the observable stars and gas has become a tool to investigate the nature of dark matter, as to its content and distribution relative to that of the various baryonic components of those galaxies.
Alternatives to dark matter
There have been a number of attempts to solve the problem of galaxy rotation by modifying gravity without invoking dark matter. One of the most discussed is modified Newtonian dynamics (MOND), originally proposed by Mordehai Milgrom in 1983, which modifies the Newtonian force law at low accelerations to enhance the effective gravitational attraction. MOND has had a considerable amount of success in predicting the rotation curves of low-surface-brightness galaxies, matching the baryonic Tully–Fisher relation, and the velocity dispersions of the small satellite galaxies of the Local Group.
Using data from the Spitzer Photometry and Accurate Rotation Curves (SPARC) database, a group has found that the radial acceleration traced by rotation curves (an effect given the name "radial acceleration relation") could be predicted just from the observed baryon distribution (that is, including stars and gas but not dark matter). This so-called radial acceleration relation (RAR) might be fundamental for understanding the dynamics of galaxies. The same relation provided a good fit for 2693 samples in 153 rotating galaxies, with diverse shapes, masses, sizes, and gas fractions. Brightness in the near infrared, where the more stable light from red giants dominates, was used to estimate the density contribution due to stars more consistently. The results are consistent with MOND, and place limits on alternative explanations involving dark matter alone. However, cosmological simulations within a Lambda-CDM framework that include baryonic feedback effects reproduce the same relation, without the need to invoke new dynamics (such as MOND). Thus, a contribution due to dark matter itself can be fully predictable from that of the baryons, once the feedback effects due to the dissipative collapse of baryons are taken into account. MOND is not a relativistic theory, although relativistic theories which reduce to MOND have been proposed, such as tensor–vector–scalar gravity (TeVeS), scalar–tensor–vector gravity (STVG), and the f(R) theory of Capozziello and De Laurentis.
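The basic MOND modification can be illustrated numerically. The Python sketch below uses the "simple" interpolating function mu(x) = x/(1+x), one common choice in the literature, a point-mass baryonic source as an illustrative simplification, and the conventional acceleration scale a0 of about 1.2e-10 m/s²; none of these numbers are fits to real galaxies.

import numpy as np

G = 4.30091e-6       # kpc (km/s)^2 / Msun
KPC_KM = 3.0857e16   # km per kpc
a0 = 1.2e-13         # km/s^2 (~1.2e-10 m/s^2), MOND acceleration scale
M = 5.0e10           # Msun, illustrative baryonic (point) mass

def newtonian_g(r_kpc):
    """Newtonian acceleration (km/s^2) of a point mass M at radius r."""
    return G * M / r_kpc**2 / KPC_KM

def mond_g(gN):
    """Solve g * mu(g/a0) = gN for mu(x) = x/(1+x); quadratic closed form."""
    return 0.5 * gN * (1.0 + np.sqrt(1.0 + 4.0 * a0 / gN))

for r in (5.0, 20.0, 80.0):
    gN = newtonian_g(r)
    v_newt = np.sqrt(gN * r * KPC_KM)
    v_mond = np.sqrt(mond_g(gN) * r * KPC_KM)
    print(f"r = {r:5.1f} kpc: v_Newton ~ {v_newt:.0f} km/s, v_MOND ~ {v_mond:.0f} km/s")

At small radii the two curves agree; at large radii the Newtonian speed falls off while the MOND speed flattens, which is the behaviour the theory was built to reproduce.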
Attempts to model galaxy rotation based on a general relativity metric, showing that the rotation curves for the Milky Way, NGC 3031, NGC 3198 and NGC 7331 are consistent with the mass density distributions of the visible matter, and other similar work, have been disputed.
According to recent analysis of the data produced by the Gaia spacecraft, it would seem possible to explain at least the Milky Way's rotation curve without requiring any dark matter if instead of a Newtonian approximation the entire set of equations of general relativity is adopted.
See also
List of unsolved problems in physics
Long-slit spectroscopy
Nonsymmetric gravitational theory
Footnotes
Further reading
Primary research report discussing Oort limit, and citing original Oort 1932 study.
This 1991 data analysis concludes "that MOND is currently the best phenomenological description of the systematics of the discrepancy in galaxies."
Bibliography
Galactic Astronomy, Dmitri Mihalas and Paul McRae. W. H. Freeman 1968.
External links
The Case Against Dark Matter. About Erik Verlinde's approach to the problem. (November 2016)
Concepts in astrophysics
Rotation curve
Articles containing video clips
Physics beyond the Standard Model
Rotation | Galaxy rotation curve | [
"Physics",
"Astronomy"
] | 3,100 | [
"Physical phenomena",
"Concepts in astrophysics",
"Unsolved problems in physics",
"Classical mechanics",
"Rotation",
"Astrophysics",
"Galactic astronomy",
"Motion (physics)",
"Particle physics",
"Physics beyond the Standard Model",
"Astronomical sub-disciplines"
] |
183,089 | https://en.wikipedia.org/wiki/List%20of%20unsolved%20problems%20in%20physics | The following is a list of notable unsolved problems grouped into broad areas of physics.
Some of the major unsolved problems in physics are theoretical, meaning that existing theories seem incapable of explaining a certain observed phenomenon or experimental result. The others are experimental, meaning that there is a difficulty in creating an experiment to test a proposed theory or investigate a phenomenon in greater detail.
There are still some questions beyond the Standard Model of physics, such as the strong CP problem, neutrino mass, matter–antimatter asymmetry, and the nature of dark matter and dark energy. Another problem lies within the mathematical framework of the Standard Model itself—the Standard Model is inconsistent with that of general relativity, to the point that one or both theories break down under certain conditions (for example, within known spacetime singularities like the Big Bang and the centres of black holes beyond the event horizon).
General physics
Theory of everything: Is there a singular, all-encompassing, coherent theoretical framework of physics that fully explains and links together all physical aspects of the universe?
Dimensionless physical constants: At the present time, the values of various dimensionless physical constants cannot be calculated; they can be determined only by physical measurement. What is the minimum number of dimensionless physical constants from which all other dimensionless physical constants can be derived? Are dimensional physical constants necessary at all?
Quantum gravity
Quantum gravity: Can quantum mechanics and general relativity be realized as a fully consistent theory (perhaps as a quantum field theory)? Is spacetime fundamentally continuous or discrete? Would a consistent theory involve a force mediated by a hypothetical graviton, or be a product of a discrete structure of spacetime itself (as in loop quantum gravity)? Are there deviations from the predictions of general relativity at very small or very large scales or in other extreme circumstances that flow from a quantum gravity mechanism?
Black holes, black hole information paradox, and black hole radiation: Do black holes produce thermal radiation, as expected on theoretical grounds? Does this radiation contain information about their inner structure, as suggested by gauge–gravity duality, or not, as implied by Hawking's original calculation? If not, and black holes can evaporate away, what happens to the information stored in them (since quantum mechanics does not provide for the destruction of information)? Or does the radiation stop at some point, leaving a black hole remnant? Is there another way to probe their internal structure somehow, if such a structure even exists? (The standard expression for the radiation temperature is reproduced after this list.)
The cosmic censorship hypothesis and the chronology protection conjecture: Can singularities not hidden behind an event horizon, known as "naked singularities", arise from realistic initial conditions, or is it possible to prove some version of the "cosmic censorship hypothesis" of Roger Penrose, which proposes that this is impossible? Similarly, will the closed timelike curves that arise in some solutions to the equations of general relativity (and that imply the possibility of backwards time travel) be ruled out by a theory of quantum gravity that unites general relativity with quantum mechanics, as suggested by the "chronology protection conjecture" of Stephen Hawking?
Holographic principle: Is it true that quantum gravity admits a lower-dimensional description that does not contain gravity? A well-understood example of holography is the AdS/CFT correspondence in string theory. Similarly, can quantum gravity in a de Sitter space be understood using dS/CFT correspondence? Can the AdS/CFT correspondence be vastly generalized to the gauge–gravity duality for arbitrary asymptotic spacetime backgrounds? Are there other theories of quantum gravity other than string theory that admit a holographic description?
Quantum spacetime or the emergence of spacetime: Is the nature of spacetime at the Planck scale very different from the continuous classical dynamical spacetime that exists in general relativity? In loop quantum gravity, the spacetime is postulated to be discrete from the beginning. In string theory, although originally spacetime was considered just like in general relativity (with the only difference being supersymmetry), recent research building upon the Ryu–Takayanagi conjecture has shown that spacetime in string theory emerges from quantum information theoretic concepts such as entanglement entropy in the AdS/CFT correspondence. However, how exactly the familiar classical spacetime emerges within string theory or the AdS/CFT correspondence is still not well understood.
Problem of time: In quantum mechanics, time is a classical background parameter, and the flow of time is universal and absolute. In general relativity, time is one component of four-dimensional spacetime, and the flow of time changes depending on the curvature of spacetime and the spacetime trajectory of the observer. How can these two concepts of time be reconciled?
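For reference in the black-hole item above, the temperature that Hawking's original calculation assigns to an uncharged, non-rotating black hole of mass M is the standard result

T_{\mathrm{H}} = \frac{\hbar c^{3}}{8 \pi G M k_{\mathrm{B}}}

so more massive black holes are colder, and the temperature grows without bound as the hole evaporates.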
Quantum physics
Yang–Mills theory: Given an arbitrary compact gauge group, does a non-trivial quantum Yang–Mills theory with a finite mass gap exist? (This problem is also listed as one of the Millennium Prize Problems in mathematics.)
Quantum field theory (this is a generalization of the previous problem): Is it possible to construct, in a mathematically rigorous way, a quantum field theory in 4-dimensional spacetime that includes interactions and does not resort to perturbative methods?
Cosmology and general relativity
Axis of evil: Some large features of the microwave sky at distances of over 13 billion light years appear to be aligned with both the motion and orientation of the solar system. Is this due to systematic errors in processing, contamination of results by local effects, an unexplained violation of the Copernican principle and thus the concordance model, or are these features simply statistically insignificant?
Fine-tuned universe: The values of the fundamental physical constants are in a narrow range that is necessary to support carbon-based life. Is this because there are an infinite number of other universes with different constants, or are our universe's constants the result of chance, intelligent design (by a personal being such as the theist's "God"), or some other factor or process? (See also Anthropic principle.)
Cosmic inflation: Is the theory of cosmic inflation in the very early universe correct, and, if so, what are the details of this epoch? What is the hypothetical scalar field that gave rise to this cosmic inflation? If inflation happened at one point, is it self-sustaining through inflation of quantum-mechanical fluctuations, and thus ongoing in some extremely distant place?
Horizon problem: Why is the distant universe so homogeneous when the Big Bang theory seems to predict larger measurable anisotropies of the night sky than those observed? Cosmological inflation is generally accepted as the solution, but are other possible explanations such as a variable speed of light more appropriate?
Origin and future of the universe: How did the conditions for anything to exist arise? Is the universe heading towards a Big Freeze, a Big Rip, a Big Crunch, or a Big Bounce?
Size of universe: The diameter of the observable universe is about 93 billion light-years, but what is the size of the whole universe? Is the universe infinite?
Baryon asymmetry: Why is there far more matter than antimatter in the observable universe? (The apparent asymmetry in neutrino–antineutrino oscillations may suggest a solution.)
Cosmological principle: Is the universe homogeneous and isotropic at large enough scales, as claimed by the cosmological principle and assumed by all models that use the Friedmann–Lemaître–Robertson–Walker metric, including the current version of the ΛCDM model, or is the universe inhomogeneous or anisotropic? Is the CMB dipole purely kinematic, or does it signal anisotropy of the universe, resulting in the breakdown of the FLRW metric and the cosmological principle? Is the Hubble tension evidence that the cosmological principle is false? Even if the cosmological principle is correct, is the Friedmann–Lemaître–Robertson–Walker metric the right metric to use for our universe? Are the observations usually interpreted as the accelerating expansion of the universe rightly interpreted, or are they instead evidence that the cosmological principle is false?
Copernican principle: Are cosmological observations made from Earth representative of observations from the average position in the universe?
Cosmological constant problem: Why does the zero-point energy of the vacuum not cause a large cosmological constant? What leads to its cancellation?
Dark matter: What is the identity of dark matter? Is it a particle? If so, is it a WIMP, axion, the lightest superpartner (LSP), or some other particle? Or, do the phenomena attributed to dark matter point not to some form of matter but actually to an extension of gravity?
Dark energy: What is the cause of the observed accelerating expansion of the universe (the de Sitter phase)? Are the observations rightly interpreted as the accelerating expansion of the universe, or are they evidence that the cosmological principle is false? Why is the energy density of the dark energy component of the same magnitude as the density of matter at present when the two evolve quite differently over time? Is this a cosmic coincidence? Is dark energy a pure cosmological constant or are models of quintessence such as phantom energy applicable?
Dark flow: Is a non-spherically symmetric gravitational pull from outside the observable universe responsible for some of the observed motion of large objects such as galactic clusters in the universe?
Shape of the universe: What is the 3-manifold of comoving space, i.e., of a comoving spatial section of the universe, informally called the "shape" of the universe? Neither the curvature nor the topology is presently known, though the curvature is known to be "close" to zero on observable scales. The cosmic inflation hypothesis suggests that the shape of the universe may be unmeasurable, but, since 2003, Jean-Pierre Luminet, et al., and other groups have suggested that the shape of the universe may be the Poincaré dodecahedral space. Is the shape unmeasurable; the Poincaré space; or another 3-manifold?
Extra dimensions: Does nature have more than four spacetime dimensions? If so, what is their size? Are dimensions a fundamental property of the universe or an emergent result of other physical laws? Can we experimentally observe evidence of higher spatial dimensions?
High-energy/particle physics
Hierarchy problem: Why is gravity such a weak force? It becomes strong for particles only at the Planck scale, around 10^19 GeV, much above the electroweak scale (100 GeV, the energy scale dominating physics at low energies); why are these scales so different from each other? What prevents quantities at the electroweak scale, such as the Higgs boson mass, from getting quantum corrections on the order of the Planck scale? Is the solution supersymmetry, extra dimensions, or just anthropic fine-tuning?
Magnetic monopoles: Did particles that carry "magnetic charge" exist in some past, higher-energy epoch? If so, do any remain today? (Paul Dirac showed that the existence of some types of magnetic monopoles would explain charge quantization.)
Neutron lifetime puzzle: While the neutron lifetime has been studied for decades, there is currently a lack of consilience on its exact value, due to different results from two experimental methods ("bottle" versus "beam").
Proton decay and spin crisis: Is the proton fundamentally stable? Or does it decay with a finite lifetime as predicted by some extensions to the Standard Model? How do the quarks and gluons carry the spin of protons?
Grand Unification: Are the electromagnetic and nuclear forces different aspects of a Grand Unified Theory? If so, what symmetry governs this force and its behaviours?
Supersymmetry: Is spacetime supersymmetry realized at TeV scale? If so, what is the mechanism of supersymmetry breaking? Does supersymmetry stabilize the electroweak scale, preventing high quantum corrections? Does the lightest supersymmetric particle (LSP) comprise dark matter?
Color confinement: The quantum chromodynamics (QCD) color confinement conjecture is that color-charged particles (such as quarks and gluons) cannot be separated from their parent hadron without producing new hadrons. Is it possible to provide an analytic proof of color confinement in any non-abelian gauge theory?
The QCD vacuum: Many of the equations of non-perturbative QCD remain unsolved at the energy scales relevant to the formation and description of atomic nuclei. How does low-energy, non-perturbative QCD give rise to the formation of complex nuclei and nuclear constituents?
Generations of matter: Why are there three generations of quarks and leptons? Is there a theory that can explain the masses of particular quarks and leptons in particular generations from first principles (a theory of Yukawa couplings)?
Neutrino mass: What is the mass of neutrinos, and do they follow Dirac or Majorana statistics? Is the mass hierarchy normal or inverted? Is the CP-violating phase equal to 0?
Reactor antineutrino anomaly: There is an anomaly in the existing body of data regarding the antineutrino flux from nuclear reactors around the world. Measured values of this flux appear to be only 94% of the value expected from theory. It is unknown whether this is due to unknown physics (such as sterile neutrinos), experimental error in the measurements, or errors in the theoretical flux calculations.
Strong CP problem and axions: Why is the strong nuclear interaction invariant to parity and charge conjugation? Is Peccei–Quinn theory the solution to this problem? Could axions be the main component of dark matter?
Anomalous magnetic dipole moment: Why is the experimentally measured value of the muon's anomalous magnetic dipole moment ("muon ") significantly different from the theoretically predicted value of that physical constant?
Proton radius puzzle: What is the electric charge radius of the proton? How does it differ from a gluonic charge?
Pentaquarks and other exotic hadrons: What combinations of quarks are possible? Why were pentaquarks so difficult to discover? Are they a tightly bound system of five elementary particles, or a more weakly bound pairing of a baryon and a meson?
Mu problem: A problem in supersymmetric theories, concerned with understanding the reasons for parameter values of the theory.
Koide formula: An aspect of the problem of particle generations. The sum of the masses of the three charged leptons, divided by the square of the sum of the square roots of these masses, is, to within one standard deviation of observations, Q ≈ 2/3. It is unknown how such a simple value comes about, and why it is the exact arithmetic average of the possible extreme values of 1/3 (equal masses) and 1 (one mass dominates). (A numerical check appears after this list.)
Strange matter: Does strange matter exist? Is it stable? Can it form strange stars? Is strange matter stable at zero pressure (i.e. in vacuum)?
Glueballs: Do they exist in nature?
The gallium anomaly: The measurements of the charged-current capture rate of neutrinos on Ga from strong radioactive sources have yielded results below those expected, based on the known strength of the principal transition supplemented by theory.
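Numerical check of the Koide ratio mentioned above: a minimal Python sketch using PDG central values for the charged-lepton masses (in MeV); the script is just arithmetic, not part of the original list.

from math import sqrt

# Charged-lepton masses in MeV (PDG central values).
m_e, m_mu, m_tau = 0.51099895, 105.6583755, 1776.86

Q = (m_e + m_mu + m_tau) / (sqrt(m_e) + sqrt(m_mu) + sqrt(m_tau)) ** 2
print(f"Q = {Q:.6f}  (2/3 = {2/3:.6f})")  # Q ~ 0.66666, strikingly close to 2/3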
Astronomy and astrophysics
Solar cycle: How does the Sun generate its periodically reversing large-scale magnetic field? How do other solar-like stars generate their magnetic fields, and what are the similarities and differences between stellar activity cycles and that of the Sun? What caused the Maunder Minimum and other grand minima, and how does the solar cycle recover from such a minimum state?
Coronal heating problem: Why is the Sun's corona (atmosphere layer) so much hotter than the Sun's surface? Why is the magnetic reconnection effect many orders of magnitude faster than predicted by standard models?
Astrophysical jet: Why do only certain accretion discs surrounding certain astronomical objects emit relativistic jets along their polar axes? Why are there quasi-periodic oscillations in many accretion discs? Why does the period of these oscillations scale as the inverse of the mass of the central object? Why are there sometimes overtones, and why do these appear at different frequency ratios in different objects?
Diffuse interstellar bands: What is responsible for the numerous interstellar absorption lines detected in astronomical spectra? Are they molecular in origin, and if so which molecules are responsible for them? How do they form?
Supermassive black holes: What is the origin of the M–σ relation between supermassive black hole mass and galaxy velocity dispersion? How did the most distant quasars grow their supermassive black holes up to 10^9 solar masses so early in the history of the universe?
Kuiper cliff: Why does the number of objects in the Solar System's Kuiper belt fall off rapidly and unexpectedly beyond a radius of 50 astronomical units?
Flyby anomaly: Why is the observed energy of satellites flying by planetary bodies sometimes different by a minute amount from the value predicted by theory?
Galaxy rotation problem: Is dark matter responsible for differences in observed and theoretical speed of stars revolving around the centre of galaxies, or is it something else?
Supernovae: What is the exact mechanism by which an implosion of a dying star becomes an explosion?
p-nuclei: What astrophysical process is responsible for the nucleosynthesis of these rare isotopes?
Ultra-high-energy cosmic ray: Why is it that some cosmic rays appear to possess energies that are impossibly high, given that there are no sufficiently energetic cosmic ray sources near the Earth? Why is it that (apparently) some cosmic rays emitted by distant sources have energies above the Greisen–Zatsepin–Kuzmin limit?
Rotation rate of Saturn: Why does the magnetosphere of Saturn exhibit a (slowly changing) periodicity close to that at which the planet's clouds rotate? What is the true rotation rate of Saturn's deep interior?
Origin of magnetar magnetic fields: What is the origin of the extremely strong magnetic fields of magnetars?
Large-scale anisotropy: Is the universe at very large scales anisotropic, making the cosmological principle an invalid assumption? The number count and intensity dipole anisotropy in radio, NRAO VLA Sky Survey (NVSS) catalogue is inconsistent with the local motion as derived from cosmic microwave background and indicates an intrinsic dipole anisotropy. The same NVSS radio data also show an intrinsic dipole in polarization density and degree of polarization in the same direction as in number count and intensity. There are several other observations revealing large-scale anisotropy. The optical polarization from quasars shows polarization alignment over very large scales, of order a gigaparsec. The cosmic-microwave-background data show several features of anisotropy which are not consistent with the Big Bang model.
Age–metallicity relation in the Galactic disk: Is there a universal age–metallicity relation (AMR) in the Galactic disk (both "thin" and "thick" parts of the disk)? Although in the local (primarily thin) disk of the Milky Way there is no evidence of a strong AMR, a sample of 229 nearby "thick" disk stars has been used to investigate the existence of an age–metallicity relation in the Galactic thick disk, and indicates that one is present there. Stellar ages from asteroseismology confirm the lack of any strong age–metallicity relation in the Galactic disk.
The lithium problem: Why is there a discrepancy between the amount of lithium-7 predicted to be produced in Big Bang nucleosynthesis and the amount observed in very old stars?
Ultraluminous X-ray sources (ULXs): What powers X-ray sources that are not associated with active galactic nuclei but exceed the Eddington limit of a neutron star or stellar black hole? Are they due to intermediate-mass black holes? Some ULXs are periodic, suggesting non-isotropic emission from a neutron star. Does this apply to all ULXs? How could such a system form and remain stable?
Fast radio bursts (FRBs): What causes these transient radio pulses from distant galaxies, lasting only a few milliseconds each? Why do some FRBs repeat at unpredictable intervals, but most do not? Dozens of models have been proposed, but none have been widely accepted.
Nuclear physics
Quantum chromodynamics: What are the phases of strongly interacting matter, and what roles do they play in the evolution of the cosmos? What is the detailed partonic structure of the nucleons? What does QCD predict for the properties of strongly interacting matter? What determines the key features of QCD, and what is their relation to the nature of gravity and spacetime? Does QCD truly lack CP violations?
Quark–gluon plasma: Where is the onset of deconfinement: (1) as a function of temperature and chemical potentials? (2) as a function of relativistic heavy-ion collision energy and system size? What is the mechanism of energy and baryon-number stopping leading to creation of quark-gluon plasma in relativistic heavy-ion collisions? Why is sudden hadronization and the statistical-hadronization model a near-to-perfect description of hadron production from quark–gluon plasma? Is quark flavor conserved in quark–gluon plasma? Are strangeness and charm in chemical equilibrium in quark–gluon plasma? Does strangeness in quark–gluon plasma flow at the same speed as up and down quark flavours? Why does deconfined matter show ideal flow?
Specific models of quark–gluon plasma formation: Do gluons saturate when their occupation number is large? Do gluons form a dense system called colour glass condensate? What are the signatures and evidence for the Balitsky–Fadin–Kuraev–Lipatov, Balitsky–Kovchegov, and Catani–Ciafaloni–Fiorani–Marchesini evolution equations?
Nuclei and nuclear astrophysics: Why is there a lack of convergence in estimates of the mean lifetime of a free neutron based on two separate—and increasingly precise—experimental methods? What is the nature of the nuclear force that binds protons and neutrons into stable nuclei and rare isotopes? What is the explanation for the EMC effect? What is the nature of exotic excitations in nuclei at the frontiers of stability and their role in stellar processes? What is the nature of neutron stars and dense nuclear matter? What is the origin of the elements in the cosmos? What are the nuclear reactions that drive stars and stellar explosions? What is the heaviest possible chemical element?
Fluid dynamics
Under what conditions do smooth solutions exist for the Navier–Stokes equations, which are the equations that describe the flow of a viscous fluid? This problem, for an incompressible fluid in three dimensions, is also one of the Millennium Prize Problems in mathematics. (The equations in question are stated after this list.)
Turbulent flow: Is it possible to make a theoretical model to describe the statistics of a turbulent flow (in particular, its internal structures)?
Granular convection: Why does a granular material subjected to shaking or vibration exhibit circulation patterns similar to types of fluid convection? Why do the largest particles end up on the surface of a granular material containing a mixture of variously sized objects when it is subjected to vibration or shaking?
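For reference in the first item above, the incompressible Navier–Stokes equations read, in standard notation (velocity field \mathbf{u}, pressure p, constant density \rho, kinematic viscosity \nu, body force \mathbf{f}):

\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\mathbf{u} = -\frac{1}{\rho} \nabla p + \nu \nabla^{2} \mathbf{u} + \mathbf{f},
\qquad
\nabla \cdot \mathbf{u} = 0

The Millennium Prize question asks whether smooth solutions always exist for smooth initial data in three dimensions.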
Condensed matter physics
Bose–Einstein condensation: How do we rigorously prove the existence of Bose–Einstein condensates for general interacting systems?
High-temperature superconductors: What is the mechanism that causes certain materials to exhibit superconductivity at temperatures much higher than around 25 K? Is it possible to make a material that is a superconductor at room temperature and atmospheric pressure?
Amorphous solids: What is the nature of the glass transition between a fluid or regular solid and a glassy phase? What are the physical processes giving rise to the general properties of glasses and the glass transition?
Universality of low-temperature amorphous solids: Why is the small dimensionless ratio of the phonon wavelength to its mean free path nearly the same for a very large family of disordered solids? This small ratio is observed over a very large range of phonon frequencies.
Cryogenic electron emission: Why does the electron emission in the absence of light increase as the temperature of a photomultiplier is decreased?
Sonoluminescence: What causes the emission of short bursts of light from imploding bubbles in a liquid when excited by sound?
Topological order: Is topological order stable at non-zero temperature? Equivalently, is it possible to have three-dimensional self-correcting quantum memory?
Gauge block wringing: What mechanism allows gauge blocks to be wrung together?
Fractional Hall effect: What mechanism explains the existence of the ν = 5/2 state in the fractional quantum Hall effect? Does it describe quasiparticles with non-Abelian fractional statistics?
Liquid crystals: Can the nematic to smectic (A) phase transition in liquid crystal states be characterized as a universal phase transition?
Semiconductor nanocrystals: What is the cause of the nonparabolicity of the energy-size dependence for the lowest optical absorption transition of quantum dots?
Metal whiskering: In electrical devices, some metallic surfaces may spontaneously grow fine metallic whiskers, which can lead to equipment failures. While compressive mechanical stress is known to encourage whisker formation, the growth mechanism has yet to be determined.
Superfluid transition in helium-4: Explain the discrepancy between the experimental and theoretical determinations of the heat capacity critical exponent α.
Scharnhorst effect: Can light signals travel slightly faster than c between two closely spaced conducting plates, exploiting the Casimir effect?
Quantum computing and quantum information
Threshold problem: Can we go beyond the noisy intermediate-scale quantum era? Can quantum computers reach fault tolerance? Is it possible to have enough qubit scalability to implement quantum error correction? What are the most promising candidate platforms for physically implementing qubits?
Topological qubits: Topological quantum computers are promising but can they be built? Can we demonstrate Majorana zero modes conclusively?
Temperature: Can quantum computing be performed at non-cryogenic temperatures? Can we build room temperature quantum computers?
Complexity classes problems: What is the relation of BQP and BPP? What is the relation between BQP and NP? Can computation in plausible physical theories (quantum algorithms) go beyond BQP?
Post-quantum cryptography: Can we prove that some cryptographic protocols are safe against quantum computers?
Quantum capacity: The capacity of a quantum channel is in general not known.
Plasma physics
Plasma physics and fusion power: Fusion energy may potentially provide power from an abundant resource (e.g. hydrogen) without the type of radioactive waste that fission energy currently produces. However, can ionized gases (plasma) be confined long enough and at a high enough temperature to create fusion power? What is the physical origin of H-mode?
The injection problem: Fermi acceleration is thought to be the primary mechanism that accelerates astrophysical particles to high energy. However, it is unclear what mechanism causes those particles to initially have energies high enough for Fermi acceleration to work on them.
Alfvénic turbulence: Alfvénic turbulence in the solar wind, and the turbulence in solar flares, coronal mass ejections, and magnetospheric substorms, are major unsolved problems in space plasma physics.
Biophysics
Stochasticity and robustness to noise in gene expression: How do genes govern our body, withstanding different external pressures and internal stochasticity? Certain models exist for genetic processes, but we are far from understanding the whole picture, in particular in development where gene expression must be tightly regulated.
Quantitative study of the immune system: What are the quantitative properties of immune responses? What are the basic building blocks of immune system networks?
Homochirality: What is the origin of the preponderance of specific enantiomers in biochemical systems?
Magnetoreception: How do animals (e.g. migratory birds) sense the Earth's magnetic field?
Protein structure prediction: How is the three-dimensional structure of proteins determined by the one-dimensional amino acid sequence? How can proteins fold on microsecond to second timescales when the number of possible conformations is astronomical and conformational transitions occur on the picosecond to microsecond timescale? Can algorithms be written to predict a protein's three-dimensional structure from its sequence? Do the native structures of most naturally occurring proteins coincide with the global minimum of the free energy in conformational space? Or are most native conformations thermodynamically unstable, but kinetically trapped in metastable states? What keeps the high density of proteins present inside cells from precipitating?
Quantum biology: Can coherence be maintained in biological systems at timeframes long enough to be functionally important? Are there non-trivial aspects of biology or biochemistry that can only be explained by the persistence of coherence as a mechanism?
Foundations of physics
Interpretation of quantum mechanics: How does the quantum description of reality, which includes elements such as the superposition of states and wavefunction collapse or quantum decoherence, give rise to the reality we perceive? Another way of stating this question regards the measurement problem: What constitutes a "measurement" which apparently causes the wave function to collapse into a definite state? Unlike classical physical processes, some quantum mechanical processes (such as quantum teleportation arising from quantum entanglement) cannot be simultaneously "local", "causal", and "real", but it is not obvious which of these properties must be sacrificed, or if an attempt to describe quantum mechanical processes in these senses is a category error such that a proper understanding of quantum mechanics would render the question meaningless. Can the many worlds interpretation resolve it?
Arrow of time (e.g. entropy's arrow of time): Why does time have a direction? Why did the universe have such low entropy in the past, so that time correlates with the universal (but not local) increase in entropy from the past to the future, in accordance with the second law of thermodynamics? Why are CP violations observed in certain weak force decays, but not elsewhere? Are CP violations somehow a product of the second law of thermodynamics, or are they a separate arrow of time? Are there exceptions to the principle of causality? Is there a single possible past? Is the present moment physically distinct from the past and future, or is it merely an emergent property of consciousness? What links the quantum arrow of time to the thermodynamic arrow?
Locality: Are there non-local phenomena in quantum physics? If they exist, are non-local phenomena limited to the entanglement revealed in the violations of the Bell inequalities, or can information and conserved quantities also move in a non-local way? Under what circumstances are non-local phenomena observed? What does the existence or absence of non-local phenomena imply about the fundamental structure of spacetime? How does this elucidate the proper interpretation of the fundamental nature of quantum physics?
Problems solved since the 1990s
General physics/quantum physics
Perform a loophole-free Bell test experiment (1970–2015): In October 2015, scientists from the Kavli Institute of Nanoscience reported that the failure of the local hidden-variable hypothesis is supported at the 96% confidence level based on a "loophole-free Bell test" study. These results were confirmed by two studies with statistical significance over 5 standard deviations which were published in December 2015.
Create Bose–Einstein condensate (1924–1995): Composite bosons in the form of dilute atomic vapours were cooled to quantum degeneracy using the techniques of laser cooling and evaporative cooling.
Cosmology and general relativity
Existence of gravitational waves (1916–2016): On 11 February 2016, the Advanced LIGO team announced that they had directly detected gravitational waves from a pair of black holes merging, which was also the first detection of a stellar binary black hole.
Numerical solution for binary black hole (1960s–2005): The numerical solution of the two body problem in general relativity was achieved after four decades of research. Three groups devised the breakthrough techniques in 2005 (annus mirabilis of numerical relativity).
Cosmic age problem (1920s–1990s): The estimated age of the universe was around 3 to 8 billion years younger than estimates of the ages of the oldest stars in the Milky Way. Better estimates for the distances to the stars, and the recognition of the accelerating expansion of the universe, reconciled the age estimates.
High-energy physics/particle physics
Existence of pentaquarks (1964–2015): In July 2015, the LHCb collaboration at CERN identified pentaquarks in the Λb0 → J/ψK−p channel, which represents the decay of the bottom lambda baryon (Λb0) into a J/ψ meson, a kaon (K−) and a proton (p). The results showed that sometimes, instead of decaying directly into mesons and baryons, the Λb0 decayed via intermediate pentaquark states. The two states, named Pc(4380)+ and Pc(4450)+, had individual statistical significances of 9 σ and 12 σ, respectively, and a combined significance of 15 σ, enough to claim a formal discovery. The two pentaquark states were both observed decaying strongly to J/ψp, hence must have a valence quark content of two up quarks, a down quark, a charm quark, and an anti-charm quark, making them charmonium-pentaquarks.
Existence of quark–gluon plasma: a new phase of matter, discovered and confirmed in experiments at CERN-SPS (2000), BNL-RHIC (2005) and CERN-LHC (2010).
Higgs boson and electroweak symmetry breaking (1963–2012): The mechanism responsible for breaking the electroweak gauge symmetry, giving mass to the W and Z bosons, was solved with the discovery of the Higgs boson of the Standard Model, with the expected couplings to the weak bosons. No evidence of a strong dynamics solution, as proposed by technicolor, has been observed.
Origin of mass of most elementary particles: Solved with the discovery of the Higgs boson, which implies the existence of the Higgs field giving mass to these particles.
Astronomy and astrophysics
Origin of short gamma-ray bursts (1993–2017): Short gamma-ray bursts arise from binary neutron star mergers, which also produce a kilonova explosion; the short gamma-ray burst GRB 170817A was detected both in electromagnetic waves and, as GW170817, in gravitational waves.
Missing baryon problem (1998–2017): Proclaimed solved in October 2017, with the missing baryons located in hot intergalactic gas.
Long-duration gamma-ray bursts (1993–2003): Long-duration bursts are associated with the deaths of massive stars in a specific kind of supernova-like event commonly referred to as a collapsar. However, there are also long-duration GRBs that show evidence against an associated supernova, such as the Swift event GRB 060614.
Solar neutrino problem (1968–2001): Solved by a new understanding of neutrino physics, requiring a modification of the Standard Model of particle physics—specifically, neutrino oscillation.
Saturn's core spin was determined from its gravitational field.
Nuclear physics
The Hagedorn temperature was recognized as the phase-transformation temperature between the confined hadronic phase and the deconfined phase of matter.
Rapidly solved problems
Existence of time crystals (2012–2016): The idea of a quantized time crystal was first theorized in 2012 by Frank Wilczek. In 2016, Khemani et al. and Else et al. independently of each other suggested that periodically driven quantum spin systems could show similar behaviour. Also in 2016, Norman Yao at UC Berkeley and colleagues proposed a different way to create discrete time crystals in spin systems. This was then used by two teams, a group led by Christopher Monroe at the University of Maryland and a group led by Mikhail Lukin at Harvard University, who were both able to show evidence for time crystals in the laboratory setting, showing that for short times the systems exhibited dynamics similar to those predicted.
Photon underproduction crisis (2014–2015): This problem was resolved by Khaire and Srianand, who showed that a metagalactic photoionization rate 2 to 5 times larger than earlier estimates can easily be obtained using updated quasar and galaxy observations. Recent observations indicate that the quasar contribution to ultraviolet photons is a factor of 2 larger than previous estimates, and the revised galaxy contribution is a factor of 3 larger; together these resolve the crisis.
Hipparcos anomaly (1997–2012): The High Precision Parallax Collecting Satellite (Hipparcos) measured the parallax of the Pleiades and determined a distance of 385 light years, significantly different from measurements made by other means, such as comparing apparent brightness with absolute magnitude. The anomaly was due to the use of a weighted mean when there is a correlation between distances and distance errors for stars in clusters; it is resolved by using an unweighted mean. There is no systematic bias in the Hipparcos data when it comes to star clusters. (A toy simulation of the weighting bias appears after this list.)
Faster-than-light neutrino anomaly (2011–2012): In 2011, the OPERA experiment mistakenly observed neutrinos appearing to travel faster than light. On 12 July 2012 OPERA updated their paper after discovering an error in their previous flight time measurement. They found agreement of neutrino speed with the speed of light.
Pioneer anomaly (1980–2012): There was a deviation in the predicted accelerations of the Pioneer 10 and 11 spacecraft as they left the Solar System. It is believed that this is a result of previously unaccounted-for thermal recoil force.
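The statistical effect behind the Hipparcos anomaly can be reproduced in a toy Monte Carlo: when each star's distance error is estimated from its own measured parallax, the error estimates correlate with the distance estimates and the inverse-variance weighted mean is biased low. All numbers in the Python sketch below are illustrative, not the actual Hipparcos values.

import numpy as np

rng = np.random.default_rng(42)
p_true = 7.4e-3     # arcsec; an assumed common cluster parallax (~135 pc)
sigma_p = 1.0e-3    # arcsec; assumed per-star parallax error
p_obs = p_true + sigma_p * rng.normal(size=200_000)

d = 1.0 / p_obs               # per-star distance estimates (pc)
sigma_d = sigma_p / p_obs**2  # error propagated from each *measured* parallax
w = 1.0 / sigma_d**2          # inverse-variance weights

print(f"true distance  : {1.0 / p_true:7.1f} pc")
print(f"unweighted mean: {d.mean():7.1f} pc")
print(f"weighted mean  : {(w * d).sum() / w.sum():7.1f} pc")
# Stars scattered to high parallax get small distance estimates AND small
# error estimates, so the weighted mean is pulled below the true distance.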
See also
Hilbert's sixth problem
Lists of unsolved problems
Physical paradox
List of unsolved problems in mathematics
List of unsolved problems in neuroscience
Footnotes
References
External links
What problems of physics and astrophysics seem now to be especially important and interesting (thirty years later, already on the verge of XXI century)? V. L. Ginzburg, Physics-Uspekhi 42 (4) 353–373, 1999
List of links to unsolved problems in physics, prizes and research.
A list of open problems in quantum information theory maintained by the Institute for Quantum Optics and Quantum Information (IQOQI) in Vienna.
Ideas Based On What We'd Like to Achieve
2004 SLAC Summer Institute: Nature's Greatest Puzzles
Dual Personality of Glass Explained at Last
What we do and don't know Review on current state of physics by Steven Weinberg, November 2013
The crisis of big science Steven Weinberg, May 2012
Physics
Physics-related lists | List of unsolved problems in physics | [
"Physics"
] | 8,109 | [
"Unsolved problems in physics"
] |
183,091 | https://en.wikipedia.org/wiki/List%20of%20unsolved%20problems%20in%20mathematics | Many mathematical problems have been stated but not yet solved. These problems come from many areas of mathematics, such as theoretical physics, computer science, algebra, analysis, combinatorics, algebraic, differential, discrete and Euclidean geometries, graph theory, group theory, model theory, number theory, set theory, Ramsey theory, dynamical systems, and partial differential equations. Some problems belong to more than one discipline and are studied using techniques from different areas. Prizes are often awarded for the solution to a long-standing problem, and some lists of unsolved problems, such as the Millennium Prize Problems, receive considerable attention.
This list is a composite of notable unsolved problems mentioned in previously published lists, including but not limited to lists considered authoritative, and the problems listed here vary widely in both difficulty and importance.
Lists of unsolved problems in mathematics
Various mathematicians and organizations have published and promoted lists of unsolved mathematical problems. In some cases, the lists have been associated with prizes for the discoverers of solutions.
Millennium Prize Problems
Of the original seven Millennium Prize Problems listed by the Clay Mathematics Institute in 2000, six remain unsolved to date:
Birch and Swinnerton-Dyer conjecture
Hodge conjecture
Navier–Stokes existence and smoothness
P versus NP
Riemann hypothesis
Yang–Mills existence and mass gap
The seventh problem, the Poincaré conjecture, was solved by Grigori Perelman in 2003. However, a generalization called the smooth four-dimensional Poincaré conjecture—that is, whether a four-dimensional topological sphere can have two or more inequivalent smooth structures—is unsolved.
Notebooks
The Kourovka Notebook is a collection of unsolved problems in group theory, first published in 1965 and updated many times since.
The Sverdlovsk Notebook is a collection of unsolved problems in semigroup theory, first published in 1965 and updated every 2 to 4 years since.
The Dniester Notebook lists several hundred unsolved problems in algebra, particularly ring theory and module theory.
The Erlagol Notebook lists unsolved problems in algebra and model theory.
Unsolved problems
Algebra
Birch–Tate conjecture on the relation between the order of the center of the Steinberg group of the ring of integers of a number field to the field's Dedekind zeta function.
Bombieri–Lang conjectures on densities of rational points of algebraic surfaces and algebraic varieties defined on number fields and their field extensions.
Connes embedding problem in Von Neumann algebra theory
Crouzeix's conjecture: the matrix norm of a complex function applied to a complex matrix is at most twice the supremum of |f(z)| for z ranging over the field of values of the matrix.
Determinantal conjecture on the determinant of the sum of two normal matrices.
Eilenberg–Ganea conjecture: a group with cohomological dimension 2 also has a 2-dimensional Eilenberg–MacLane space K(G, 1).
Farrell–Jones conjecture on whether certain assembly maps are isomorphisms.
Bost conjecture: a specific case of the Farrell–Jones conjecture
Finite lattice representation problem: is every finite lattice isomorphic to the congruence lattice of some finite algebra?
Goncharov conjecture on the cohomology of certain motivic complexes.
Green's conjecture: the Clifford index of a non-hyperelliptic curve is determined by the extent to which it, as a canonical curve, has linear syzygies.
Grothendieck–Katz p-curvature conjecture: a conjectured local–global principle for linear ordinary differential equations.
Hadamard conjecture: for every positive integer k, a Hadamard matrix of order 4k exists.
Williamson conjecture: the problem of finding Williamson matrices, which can be used to construct Hadamard matrices.
Hadamard's maximal determinant problem: what is the largest determinant of a matrix with entries all equal to 1 or –1? (An exhaustive search for small orders is sketched after this list.)
Hilbert's fifteenth problem: put Schubert calculus on a rigorous foundation.
Hilbert's sixteenth problem: what are the possible configurations of the connected components of M-curves?
Homological conjectures in commutative algebra
Jacobson's conjecture: the intersection of all powers of the Jacobson radical of a left-and-right Noetherian ring is precisely 0.
Kaplansky's conjectures
Köthe conjecture: if a ring has no nil ideal other than {0}, then it has no nil one-sided ideal other than {0}.
Monomial conjecture on Noetherian local rings
Existence of perfect cuboids and associated cuboid conjectures
Pierce–Birkhoff conjecture: every piecewise-polynomial is the maximum of a finite set of minimums of finite collections of polynomials.
Rota's basis conjecture: for matroids of rank with disjoint bases , it is possible to create an matrix whose rows are and whose columns are also bases.
Serre's conjecture II: if G is a simply connected semisimple algebraic group over a perfect field of cohomological dimension at most 2, then the Galois cohomology set H¹(F, G) is zero.
Serre's positivity conjecture that if is a commutative regular local ring, and are prime ideals of , then implies .
Uniform boundedness conjecture for rational points: do algebraic curves of genus over number fields have at most some bounded number of -rational points?
Wild problems: problems involving classification of pairs of matrices under simultaneous conjugation.
Zariski–Lipman conjecture: for a complex algebraic variety with coordinate ring , if the derivations of are a free module over , then is smooth.
Zauner's conjecture: do SIC-POVMs exist in all dimensions?
Zilber–Pink conjecture that if is a mixed Shimura variety or semiabelian variety defined over , and is a subvariety, then contains only finitely many atypical subvarieties.
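Several of the matrix questions above, including Hadamard's maximal determinant problem, can be explored by exhaustive enumeration in very small dimensions. The following Python sketch is illustrative only (the function names are ours, and brute force is hopeless beyond tiny orders):

```python
from itertools import product

def det(m):
    """Determinant by Laplace expansion along the first row (small matrices only)."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

def max_det(n: int) -> int:
    """Largest determinant over all n-by-n matrices with entries +1 or -1."""
    best = 0
    for entries in product((-1, 1), repeat=n * n):
        m = [list(entries[i * n:(i + 1) * n]) for i in range(n)]
        best = max(best, det(m))
    return best

print([max_det(n) for n in range(1, 5)])  # [1, 2, 4, 16]
```

The values 1, 2, 4, 16 agree with the known beginning of the maximal-determinant sequence; the open problems concern general orders, where enumeration is infeasible.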
Group theory
Andrews–Curtis conjecture: every balanced presentation of the trivial group can be transformed into a trivial presentation by a sequence of Nielsen transformations on relators and conjugations of relators
Burnside problem: for which positive integers m, n is the free Burnside group B(m, n) finite? In particular, is B(2, 5) finite?
Guralnick–Thompson conjecture on the composition factors of groups in genus-0 systems
Herzog–Schönheim conjecture: if a finite system of left cosets of subgroups of a group form a partition of , then the finite indices of said subgroups cannot be distinct.
The inverse Galois problem: is every finite group the Galois group of a Galois extension of the rationals?
Are there an infinite number of Leinster groups?
Does generalized moonshine exist?
Is every finitely presented periodic group finite?
Is every group surjunctive?
Is every discrete, countable group sofic?
Problems in loop theory and quasigroup theory consider generalizations of groups
Representation theory
Arthur's conjectures
Dade's conjecture relating the numbers of characters of blocks of a finite group to the numbers of characters of blocks of local subgroups.
Demazure conjecture on representations of algebraic groups over the integers.
Kazhdan–Lusztig conjectures relating the values of the Kazhdan–Lusztig polynomials at 1 with representations of complex semisimple Lie groups and Lie algebras.
McKay conjecture: in a group , the number of irreducible complex characters of degree not divisible by a prime number is equal to the number of irreducible complex characters of the normalizer of any Sylow -subgroup within .
Analysis
The Brennan conjecture: estimating the integral of powers of the moduli of the derivative of conformal maps into the open unit disk, on certain subsets of
Fuglede's conjecture on whether nonconvex sets in and are spectral if and only if they tile by translation.
Goodman's conjecture on the coefficients of multivalent functions
Invariant subspace problem – does every bounded operator on a complex Banach space send some non-trivial closed subspace to itself?
Kung–Traub conjecture on the optimal order of a multipoint iteration without memory
Lehmer's conjecture on the Mahler measure of non-cyclotomic polynomials
The mean value problem: given a complex polynomial of degree and a complex number , is there a critical point of such that ?
The Pompeiu problem on the topology of domains for which some nonzero function has integrals that vanish over every congruent copy
Sendov's conjecture: if a complex polynomial with degree at least 2 has all roots in the closed unit disk, then each root is within distance 1 from some critical point.
Vitushkin's conjecture on compact subsets of with analytic capacity
What is the exact value of Landau's constants, including Bloch's constant?
Regularity of solutions of Euler equations
Convergence of Flint Hills series
Regularity of solutions of Vlasov–Maxwell equations
Combinatorics
The 1/3–2/3 conjecture – does every finite partially ordered set that is not totally ordered contain two elements x and y such that the probability that x appears before y in a random linear extension is between 1/3 and 2/3?
The Dittert conjecture concerning the maximum achieved by a particular function of matrices with real, nonnegative entries satisfying a summation condition
Problems in Latin squares – open questions concerning Latin squares
The lonely runner conjecture – if k runners with pairwise distinct speeds run round a track of unit length, will every runner be "lonely" (that is, at distance at least 1/k from each other runner) at some time?
Map folding – various problems in map folding and stamp folding.
No-three-in-line problem – how many points can be placed in the n × n grid so that no three of them lie on a line? (A brute-force collinearity check is sketched after this list.)
Rudin's conjecture on the number of squares in finite arithmetic progressions
The sunflower conjecture – can the number of size sets required for the existence of a sunflower of sets be bounded by an exponential function in for every fixed ?
Frankl's union-closed sets conjecture – for any family of sets closed under sums there exists an element (of the underlying space) belonging to half or more of the sets
Give a combinatorial interpretation of the Kronecker coefficients
The values of the Dedekind numbers for
The values of the Ramsey numbers, particularly R(5, 5)
The values of the Van der Waerden numbers
Finding a function to model n-step self-avoiding walks
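As noted at the no-three-in-line item above, candidate configurations are easy to verify: three grid points are collinear exactly when the cross product of two difference vectors vanishes. A minimal Python sketch (the function name and the sample 4 × 4 configuration are ours):

```python
from itertools import combinations

def no_three_collinear(points) -> bool:
    """Return True if no three of the given (x, y) grid points lie on a line."""
    for (x1, y1), (x2, y2), (x3, y3) in combinations(points, 3):
        # Cross product of (p2 - p1) and (p3 - p1); zero means collinear.
        if (x2 - x1) * (y3 - y1) == (y2 - y1) * (x3 - x1):
            return False
    return True

# A 2n-point configuration for the 4 x 4 grid (n = 4), the maximum possible.
example = [(0, 1), (0, 2), (1, 0), (1, 3), (2, 0), (2, 3), (3, 1), (3, 2)]
print(no_three_collinear(example))  # True
```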
Dynamical systems
Arnold–Givental conjecture and Arnold conjecture – relating symplectic geometry to Morse theory.
Berry–Tabor conjecture in quantum chaos
Banach's problem – is there an ergodic system with simple Lebesgue spectrum?
Birkhoff conjecture – if a billiard table is strictly convex and integrable, is its boundary necessarily an ellipse?
Collatz conjecture (also known as the 3n + 1 conjecture; a minimal iteration sketch appears after this list)
Eden's conjecture that the supremum of the local Lyapunov dimensions on the global attractor is achieved on a stationary point or an unstable periodic orbit embedded into the attractor.
Eremenko's conjecture: every component of the escaping set of an entire transcendental function is unbounded.
Fatou conjecture that a quadratic family of maps from the complex plane to itself is hyperbolic for an open dense set of parameters.
Furstenberg conjecture – is every measure on the circle that is invariant and ergodic for both the ×2 and ×3 actions either Lebesgue or atomic?
Kaplan–Yorke conjecture on the dimension of an attractor in terms of its Lyapunov exponents
Margulis conjecture – measure classification for diagonalizable actions in higher-rank groups.
Hilbert–Arnold problem – is there a uniform bound on limit cycles in generic finite-parameter families of vector fields on a sphere?
MLC conjecture – is the Mandelbrot set locally connected?
Many problems concerning an outer billiard, for example showing that outer billiards relative to almost every convex polygon have unbounded orbits.
Quantum unique ergodicity conjecture on the distribution of large-frequency eigenfunctions of the Laplacian on a negatively-curved manifold
Rokhlin's multiple mixing problem – are all strongly mixing systems also strongly 3-mixing?
Weinstein conjecture – does a regular compact contact type level set of a Hamiltonian on a symplectic manifold carry at least one periodic orbit of the Hamiltonian flow?
Does every positive integer generate a juggler sequence terminating at 1?
Lyapunov function: Lyapunov's second method for stability – For what classes of ODEs, describing dynamical systems, does Lyapunov's second method, formulated in the classical and canonically generalized forms, define the necessary and sufficient conditions for the (asymptotical) stability of motion?
Is every reversible cellular automaton in three or more dimensions locally reversible?
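The Collatz conjecture above is among the most computationally approachable items in this list. The hedged Python sketch below (function name is ours) iterates the map, n → n/2 for even n and n → 3n + 1 for odd n, and counts the steps needed to reach 1; the conjecture is precisely the claim that this loop terminates for every positive starting value:

```python
def collatz_steps(n: int) -> int:
    """Number of iterations of the Collatz map needed to reach 1 from n."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

# The conjecture has been verified numerically far beyond this small range.
champion = max(range(1, 10_000), key=collatz_steps)
print(champion, collatz_steps(champion))  # 6171 takes 261 steps
```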
Games and puzzles
Combinatorial games
Sudoku:
How many puzzles have exactly one solution?
How many puzzles with exactly one solution are minimal?
What is the maximum number of givens for a minimal puzzle?
Tic-tac-toe variants:
Given the width of a tic-tac-toe board, what is the smallest dimension such that X is guaranteed to have a winning strategy? (See also Hales–Jewett theorem and nd game)
Chess:
What is the outcome of a perfectly played game of chess? (See also first-move advantage in chess)
Go:
What is the perfect value of Komi?
Are the nim-sequences of all finite octal games eventually periodic?
Is the nim-sequence of Grundy's game eventually periodic?
Games with imperfect information
Rendezvous problem
Geometry
Algebraic geometry
Abundance conjecture: if the canonical bundle of a projective variety with Kawamata log terminal singularities is nef, then it is semiample.
Bass conjecture on the finite generation of certain algebraic K-groups.
Bass–Quillen conjecture relating vector bundles over a regular Noetherian ring and over the polynomial ring .
Deligne conjecture: any one of numerous named for Pierre Deligne.
Deligne's conjecture on Hochschild cohomology about the operadic structure on Hochschild cochain complex.
Dixmier conjecture: any endomorphism of a Weyl algebra is an automorphism.
Fröberg conjecture on the Hilbert functions of a set of forms.
Fujita conjecture regarding the line bundle constructed from a positive holomorphic line bundle on a compact complex manifold and the canonical line bundle of
General elephant problem: do general elephants have at most Du Val singularities?
Hartshorne's conjectures
Jacobian conjecture: if a polynomial mapping over a characteristic-0 field has a constant nonzero Jacobian determinant, then it has a regular (i.e. with polynomial components) inverse function.
Manin conjecture on the distribution of rational points of bounded height in certain subsets of Fano varieties
Maulik–Nekrasov–Okounkov–Pandharipande conjecture on an equivalence between Gromov–Witten theory and Donaldson–Thomas theory
Nagata's conjecture on curves, specifically the minimal degree required for a plane algebraic curve to pass through a collection of very general points with prescribed multiplicities.
Nagata–Biran conjecture that if is a smooth algebraic surface and is an ample line bundle on of degree , then for sufficiently large , the Seshadri constant satisfies .
Nakai conjecture: if a complex algebraic variety has a ring of differential operators generated by its contained derivations, then it must be smooth.
Parshin's conjecture: the higher algebraic K-groups of any smooth projective variety defined over a finite field must vanish up to torsion.
Section conjecture on splittings of group homomorphisms from fundamental groups of complete smooth curves over finitely-generated fields to the Galois group of .
Standard conjectures on algebraic cycles
Tate conjecture on the connection between algebraic cycles on algebraic varieties and Galois representations on étale cohomology groups.
Virasoro conjecture: a certain generating function encoding the Gromov–Witten invariants of a smooth projective variety is fixed by an action of half of the Virasoro algebra.
Zariski multiplicity conjecture on the topological equisingularity and equimultiplicity of varieties at singular points
Are infinite sequences of flips possible in dimensions greater than 3?
Resolution of singularities in characteristic p
Covering and packing
Borsuk's problem on upper and lower bounds for the number of smaller-diameter subsets needed to cover a bounded n-dimensional set.
The covering problem of Rado: if the union of finitely many axis-parallel squares has unit area, how small can the largest area covered by a disjoint subset of squares be?
The Erdős–Oler conjecture: when n is a triangular number, packing n − 1 circles in an equilateral triangle requires a triangle of the same size as packing n circles.
The disk covering problem about finding the smallest real number r(n) such that n disks of radius r(n) can be arranged in such a way as to cover the unit disk.
The kissing number problem for dimensions other than 1, 2, 3, 4, 8 and 24
Reinhardt's conjecture: the smoothed octagon has the lowest maximum packing density of all centrally-symmetric convex plane sets
Sphere packing problems, including the density of the densest packing in dimensions other than 1, 2, 3, 8 and 24, and its asymptotic behavior for high dimensions.
Square packing in a square: what is the asymptotic growth rate of wasted space?
Ulam's packing conjecture about the identity of the worst-packing convex solid
The Tammes problem for numbers of nodes greater than 14 (except 24).
Differential geometry
The spherical Bernstein's problem, a generalization of Bernstein's problem
Carathéodory conjecture: any convex, closed, and twice-differentiable surface in three-dimensional Euclidean space admits at least two umbilical points.
Cartan–Hadamard conjecture: can the classical isoperimetric inequality for subsets of Euclidean space be extended to spaces of nonpositive curvature, known as Cartan–Hadamard manifolds?
Chern's conjecture (affine geometry) that the Euler characteristic of a compact affine manifold vanishes.
Chern's conjecture for hypersurfaces in spheres, a number of closely related conjectures.
Closed curve problem: find (explicit) necessary and sufficient conditions that determine when, given two periodic functions with the same period, the integral curve is closed.
The filling area conjecture, that a hemisphere has the minimum area among shortcut-free surfaces in Euclidean space whose boundary forms a closed curve of given length
The Hopf conjectures relating the curvature and Euler characteristic of higher-dimensional Riemannian manifolds
Osserman conjecture: that every Osserman manifold is either flat or locally isometric to a rank-one symmetric space
Yau's conjecture on the first eigenvalue: the first eigenvalue of the Laplace–Beltrami operator on an embedded minimal hypersurface of the (n + 1)-dimensional unit sphere is n.
Discrete geometry
The big-line-big-clique conjecture on the existence of either many collinear points or many mutually visible points in large planar point sets
The Hadwiger conjecture on covering n-dimensional convex bodies with at most 2^n smaller copies
Solving the happy ending problem for arbitrary n
Improving lower and upper bounds for the Heilbronn triangle problem.
Kalai's 3^d conjecture on the least possible number of faces of centrally symmetric polytopes.
The Kobon triangle problem on triangles in line arrangements
The Kusner conjecture: at most 2d points can be equidistant in d-dimensional ℓ1 spaces
The McMullen problem on projectively transforming sets of points into convex position
Opaque forest problem on finding opaque sets for various planar shapes
How many unit distances can be determined by a set of n points in the Euclidean plane?
Finding matching upper and lower bounds for k-sets and halving lines
Tripod packing: how many tripods can have their apexes packed into a given cube?
Euclidean geometry
The Atiyah conjecture on configurations, on the invertibility of a certain n-by-n matrix depending on n points in three-dimensional space
Bellman's lost-in-a-forest problem – find the shortest route that is guaranteed to reach the boundary of a given shape, starting at an unknown point of the shape with unknown orientation
Borromean rings — are there three unknotted space curves, not all three circles, which cannot be arranged to form this link?
Danzer's problem and Conway's dead fly problem – do Danzer sets of bounded density or bounded separation exist?
Dissection into orthoschemes – is it possible for simplices of every dimension?
Ehrhart's volume conjecture: a convex body in n dimensions containing a single lattice point in its interior as its center of mass cannot have volume greater than (n + 1)^n / n!
Falconer's conjecture: sets of Hausdorff dimension greater than d/2 in d-dimensional Euclidean space must have a distance set of nonzero Lebesgue measure
The values of the Hermite constants for dimensions other than 1–8 and 24
Inscribed square problem, also known as Toeplitz' conjecture and the square peg problem – does every Jordan curve have an inscribed square?
The Kakeya conjecture – do n-dimensional sets that contain a unit line segment in every direction necessarily have Hausdorff dimension and Minkowski dimension equal to n?
The Kelvin problem on minimum-surface-area partitions of space into equal-volume cells, and the optimality of the Weaire–Phelan structure as a solution to the Kelvin problem
Lebesgue's universal covering problem on the minimum-area convex shape in the plane that can cover any shape of diameter one
Mahler's conjecture on the product of the volumes of a centrally symmetric convex body and its polar.
Moser's worm problem – what is the smallest area of a shape that can cover every unit-length curve in the plane?
The moving sofa problem – what is the largest area of a shape that can be maneuvered through a unit-width L-shaped corridor?
Does every convex polyhedron have Rupert's property?
Shephard's problem (a.k.a. Dürer's conjecture) – does every convex polyhedron have a net, or simple edge-unfolding?
Is there a non-convex polyhedron without self-intersections with more than seven faces, all of which share an edge with each other?
The Thomson problem – what is the minimum-energy configuration of n mutually-repelling particles on a unit sphere?
Convex uniform 5-polytopes – find and classify the complete set of these shapes
Graph theory
Algebraic graph theory
Babai's problem: which groups are Babai invariant groups?
Brouwer's conjecture on upper bounds for sums of eigenvalues of Laplacians of graphs in terms of their number of edges
Games on graphs
Graham's pebbling conjecture on the pebbling number of Cartesian products of graphs
Meyniel's conjecture that the cop number of a connected n-vertex graph is O(√n)
Graph coloring and labeling
The 1-factorization conjecture that if is odd or even and respectively, then a -regular graph with vertices is 1-factorable.
The perfect 1-factorization conjecture that every complete graph on an even number of vertices admits a perfect 1-factorization.
Cereceda's conjecture on the diameter of the space of colorings of degenerate graphs
The Earth–Moon problem: what is the maximum chromatic number of biplanar graphs?
The Erdős–Faber–Lovász conjecture on coloring unions of cliques
The graceful tree conjecture that every tree admits a graceful labeling
Rosa's conjecture that all triangular cacti are graceful or nearly-graceful
The Gyárfás–Sumner conjecture on χ-boundedness of graphs with a forbidden induced tree
The Hadwiger conjecture relating coloring to clique minors
The Hadwiger–Nelson problem on the chromatic number of unit distance graphs
Jaeger's Petersen-coloring conjecture: every bridgeless cubic graph has a cycle-continuous mapping to the Petersen graph
The list coloring conjecture: for every graph, the list chromatic index equals the chromatic index
The overfull conjecture that a graph with maximum degree is class 2 if and only if it has an overfull subgraph satisfying .
The total coloring conjecture of Behzad and Vizing that the total chromatic number is at most two plus the maximum degree
Graph drawing and embedding
The Albertson conjecture: the crossing number can be lower-bounded by the crossing number of a complete graph with the same chromatic number
Conway's thrackle conjecture that thrackles cannot have more edges than vertices
The GNRS conjecture on whether minor-closed graph families have embeddings with bounded distortion
Harborth's conjecture: every planar graph can be drawn with integer edge lengths
Negami's conjecture on projective-plane embeddings of graphs with planar covers
The strong Papadimitriou–Ratajczak conjecture: every polyhedral graph has a convex greedy embedding
Turán's brick factory problem – Is there a drawing of any complete bipartite graph with fewer crossings than the number given by Zarankiewicz?
Universal point sets of subquadratic size for planar graphs
Restriction of graph parameters
Conway's 99-graph problem: does there exist a strongly regular graph with parameters (99,14,1,2)?
Degree diameter problem: given two positive integers d and k, what is the largest graph of diameter k such that all vertices have degree at most d?
Jørgensen's conjecture that every 6-vertex-connected K6-minor-free graph is an apex graph
Does a Moore graph with girth 5 and degree 57 exist?
Do there exist infinitely many strongly regular geodetic graphs, or any strongly regular geodetic graphs that are not Moore graphs?
Subgraphs
Barnette's conjecture: every cubic bipartite three-connected planar graph has a Hamiltonian cycle
Gilbert–Pollack conjecture on the Steiner ratio of the Euclidean plane, that the Steiner ratio is √3/2
Chvátal's toughness conjecture, that there is a number t such that every t-tough graph is Hamiltonian
The cycle double cover conjecture: every bridgeless graph has a family of cycles that includes each edge twice
The Erdős–Gyárfás conjecture on cycles with power-of-two lengths in cubic graphs
The Erdős–Hajnal conjecture on large cliques or independent sets in graphs with a forbidden induced subgraph
The linear arboricity conjecture on decomposing graphs into disjoint unions of paths according to their maximum degree
The Lovász conjecture on Hamiltonian paths in symmetric graphs
The Oberwolfach problem on which 2-regular graphs have the property that a complete graph on the same number of vertices can be decomposed into edge-disjoint copies of the given graph.
What is the largest possible pathwidth of an n-vertex cubic graph?
The reconstruction conjecture and new digraph reconstruction conjecture on whether a graph is uniquely determined by its vertex-deleted subgraphs.
The snake-in-the-box problem: what is the longest possible induced path in an n-dimensional hypercube graph? (A small-dimension search is sketched after this list.)
Sumner's conjecture: does every (2n − 2)-vertex tournament contain as a subgraph every n-vertex oriented tree?
Szymanski's conjecture: every permutation on the n-dimensional doubly-directed hypercube graph can be routed with edge-disjoint paths.
Tuza's conjecture: if the maximum number of disjoint triangles is ν, can all triangles be hit by a set of at most 2ν edges?
Vizing's conjecture on the domination number of cartesian products of graphs
Zarankiewicz problem: how many edges can there be in a bipartite graph on a given number of vertices with no complete bipartite subgraphs of a given size?
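As mentioned at the snake-in-the-box item above, exact values are known only in low dimensions, where exhaustive search is feasible. A minimal depth-first search in Python (identifiers are ours; vertices of the n-cube are encoded as integers whose bits are coordinates, and by vertex-transitivity the search may start at vertex 0):

```python
def neighbors(v: int, n: int):
    """The n vertices of the n-cube that differ from v in exactly one bit."""
    return [v ^ (1 << i) for i in range(n)]

def longest_snake(n: int) -> int:
    """Length in edges of a longest induced path in the n-cube (brute force)."""
    best = 0

    def extend(path, path_set):
        nonlocal best
        best = max(best, len(path) - 1)
        head = path[-1]
        for w in neighbors(head, n):
            if w in path_set:
                continue
            # Induced condition: w may be adjacent only to the current head.
            if all(u == head or u not in path_set for u in neighbors(w, n)):
                path.append(w)
                path_set.add(w)
                extend(path, path_set)
                path.pop()
                path_set.remove(w)

    extend([0], {0})
    return best

print([longest_snake(n) for n in range(1, 5)])  # [1, 2, 4, 7]
```

The search space grows far too quickly for this approach beyond small n, which is why even moderate dimensions remain open.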
Word-representation of graphs
Are there any graphs on n vertices whose representation requires more than floor(n/2) copies of each letter?
Characterise (non-)word-representable planar graphs
Characterise word-representable graphs in terms of (induced) forbidden subgraphs.
Characterise word-representable near-triangulations containing the complete graph K4 (such a characterisation is known for K4-free planar graphs)
Classify graphs with representation number 3, that is, graphs that can be represented using 3 copies of each letter, but cannot be represented using 2 copies of each letter
Is it true that out of all bipartite graphs, crown graphs require longest word-representants?
Is the line graph of a non-word-representable graph always non-word-representable?
Which (hard) problems on graphs can be translated to words representing them and solved on words (efficiently)?
Miscellaneous graph theory
The implicit graph conjecture on the existence of implicit representations for slowly-growing hereditary families of graphs
Ryser's conjecture relating the maximum matching size and minimum transversal size in hypergraphs
The second neighborhood problem: does every oriented graph contain a vertex for which there are at least as many other vertices at distance two as at distance one?
Sidorenko's conjecture on homomorphism densities of graphs in graphons
Tutte's conjectures:
every bridgeless graph has a nowhere-zero 5-flow
every Petersen-minor-free bridgeless graph has a nowhere-zero 4-flow
Woodall's conjecture that the minimum number of edges in a dicut of a directed graph is equal to the maximum number of disjoint dijoins
Model theory and formal languages
The Cherlin–Zilber conjecture: A simple group whose first-order theory is stable in is a simple algebraic group over an algebraically closed field.
Generalized star height problem: can all regular languages be expressed using generalized regular expressions with limited nesting depths of Kleene stars?
For which number fields does Hilbert's tenth problem hold?
Kueker's conjecture
The main gap conjecture, e.g. for uncountable first order theories, for AECs, and for -saturated models of a countable theory.
Shelah's categoricity conjecture for : If a sentence is categorical above the Hanf number then it is categorical in all cardinals above the Hanf number.
Shelah's eventual categoricity conjecture: For every cardinal there exists a cardinal such that if an AEC K with LS(K)<= is categorical in a cardinal above then it is categorical in all cardinals above .
The stable field conjecture: every infinite field with a stable first-order theory is separably closed.
The stable forking conjecture for simple theories
Tarski's exponential function problem: is the theory of the real numbers with the exponential function decidable?
The universality problem for C-free graphs: For which finite sets C of graphs does the class of C-free countable graphs have a universal member under strong embeddings?
The universality spectrum problem: Is there a first-order theory whose universality spectrum is minimum?
Vaught conjecture: the number of countable models of a first-order complete theory in a countable language is either finite, ℵ0, or 2^ℵ0.
Assume K is the class of models of a countable first order theory omitting countably many types. If K has a model of cardinality does it have a model of cardinality continuum?
Do the Henson graphs have the finite model property?
Does a finitely presented homogeneous structure for a finite relational language have finitely many reducts?
Does there exist an o-minimal first order theory with a trans-exponential (rapid growth) function?
If the class of atomic models of a complete first order theory is categorical in the , is it categorical in every cardinal?
Is every infinite, minimal field of characteristic zero algebraically closed? (Here, "minimal" means that every definable subset of the structure is finite or co-finite.)
Is the Borel monadic theory of the real order (BMTO) decidable? Is the monadic theory of well-ordering (MTWO) consistently decidable?
Is the theory of the field of Laurent series over decidable? of the field of polynomials over ?
Is there a logic L which satisfies both the Beth property and Δ-interpolation, is compact but does not satisfy the interpolation property?
Determine the structure of Keisler's order.
Probability theory
Ibragimov–Iosifescu conjecture for φ-mixing sequences
Number theory
General
Beilinson's conjectures
Brocard's problem: are there any integer solutions to n! + 1 = m² other than n = 4, 5, 7?
Büchi's problem on sufficiently large sequences of square numbers with constant second difference.
Carmichael's totient function conjecture: do all values of Euler's totient function have multiplicity greater than 1?
Casas-Alvero conjecture: if a polynomial of degree n defined over a field of characteristic 0 has a factor in common with each of its first through (n − 1)-th derivatives, then it must be the n-th power of a linear polynomial.
Catalan–Dickson conjecture on aliquot sequences: no aliquot sequences are infinite but non-repeating.
Erdős–Ulam problem: is there a dense set of points in the plane all at rational distances from one-another?
Exponent pair conjecture: for all ε > 0, is the pair (ε, 1/2 + ε) an exponent pair?
The Gauss circle problem: how far can the number of integer points in a circle centered at the origin be from the area of the circle?
Grand Riemann hypothesis: do the nontrivial zeros of all automorphic L-functions lie on the critical line with real part 1/2?
Generalized Riemann hypothesis: do the nontrivial zeros of all Dirichlet L-functions lie on the critical line with real part 1/2?
Riemann hypothesis: do the nontrivial zeros of the Riemann zeta function lie on the critical line with real part 1/2?
Grimm's conjecture: each element of a set of consecutive composite numbers can be assigned a distinct prime number that divides it.
Hall's conjecture: for any ε > 0, there is some constant c(ε) such that either y² = x³ or |y² − x³| > c(ε)·x^(1/2 − ε).
Hardy–Littlewood zeta function conjectures
Hilbert–Pólya conjecture: the nontrivial zeros of the Riemann zeta function correspond to eigenvalues of a self-adjoint operator.
Hilbert's eleventh problem: classify quadratic forms over algebraic number fields.
Hilbert's ninth problem: find the most general reciprocity law for the norm residues of -th order in a general algebraic number field, where is a power of a prime.
Hilbert's twelfth problem: extend the Kronecker–Weber theorem on Abelian extensions of to any base number field.
Keating–Snaith conjecture concerning the asymptotics of an integral involving the Riemann zeta function
Lehmer's totient problem: if φ(n) divides n − 1, must n be prime?
Leopoldt's conjecture: a p-adic analogue of the regulator of an algebraic number field does not vanish.
Lindelöf hypothesis that for all ε > 0, ζ(1/2 + it) = O(t^ε)
The density hypothesis for zeroes of the Riemann zeta function
Littlewood conjecture: for any two real numbers α and β, the quantity n·‖nα‖·‖nβ‖ has infimum limit 0 as n grows, where ‖x‖ is the distance from x to the nearest integer.
Mahler's 3/2 problem that no real number x has the property that the fractional parts of x(3/2)^n are less than 1/2 for all positive integers n.
Montgomery's pair correlation conjecture: the normalized pair correlation function between pairs of zeros of the Riemann zeta function is the same as the pair correlation function of random Hermitian matrices.
n conjecture: a generalization of the abc conjecture to more than three integers.
abc conjecture: for any ε > 0, c > rad(abc)^(1+ε) is true for only finitely many triples of coprime positive integers a, b, c such that a + b = c.
Szpiro's conjecture: for any , there is some constant such that, for any elliptic curve defined over with minimal discriminant and conductor , we have .
Newman's conjecture: the partition function satisfies any arbitrary congruence infinitely often.
Piltz divisor problem on bounding
Dirichlet's divisor problem: the specific case of the Piltz divisor problem for k = 2
Ramanujan–Petersson conjecture: a number of related conjectures that are generalizations of the original conjecture.
Sato–Tate conjecture: also a number of related conjectures that are generalizations of the original conjecture.
Scholz conjecture: the length of the shortest addition chain producing 2^n − 1 is at most n − 1 plus the length of the shortest addition chain producing n.
Do Siegel zeros exist?
Singmaster's conjecture: is there a finite upper bound on the multiplicities of the entries greater than 1 in Pascal's triangle?
Vojta's conjecture on heights of points on algebraic varieties over algebraic number fields.
Are there infinitely many perfect numbers?
Do any odd perfect numbers exist? (A brute-force perfection test is sketched after this list.)
Do quasiperfect numbers exist?
Do any non-power of 2 almost perfect numbers exist?
Are there 65, 66, or 67 idoneal numbers?
Are there any pairs of amicable numbers which have opposite parity?
Are there any pairs of betrothed numbers which have same parity?
Are there any pairs of relatively prime amicable numbers?
Are there infinitely many amicable numbers?
Are there infinitely many betrothed numbers?
Are there infinitely many Giuga numbers?
Does every rational number with an odd denominator have an odd greedy expansion?
Do any Lychrel numbers exist?
Do any odd noncototients exist?
Do any odd weird numbers exist?
Do any (2, 5)-perfect numbers exist?
Do any Taxicab(5, 2, n) exist for n > 1?
Is there a covering system with odd distinct moduli?
Is π a normal number (i.e., is each digit 0–9 equally frequent in its decimal expansion)?
Are all irrational algebraic numbers normal?
Is 10 a solitary number?
Can a 3×3 magic square be constructed from 9 distinct perfect square numbers?
Find the value of the De Bruijn–Newman constant.
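As referenced at the odd-perfect-number item above, perfection is cheap to test: n is perfect when the sum of its proper divisors equals n. A hedged Python sketch (helper name and search bound are ours; serious searches for odd perfect numbers exploit multiplicative structure rather than brute force):

```python
def sigma_proper(n: int) -> int:
    """Sum of the proper divisors of n, found in divisor pairs up to sqrt(n)."""
    if n == 1:
        return 0
    total, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            total += d
            if d != n // d:
                total += n // d
        d += 1
    return total

print([n for n in range(2, 10_000) if sigma_proper(n) == n])  # [6, 28, 496, 8128]
print(any(sigma_proper(n) == n for n in range(3, 10_000, 2)))  # False: none odd here
```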
Additive number theory
Erdős conjecture on arithmetic progressions that if the sum of the reciprocals of the members of a set of positive integers diverges, then the set contains arbitrarily long arithmetic progressions.
Erdős–Turán conjecture on additive bases: if is an additive basis of order , then the number of ways that positive integers can be expressed as the sum of two numbers in must tend to infinity as tends to infinity.
Gilbreath's conjecture on consecutive applications of the unsigned forward difference operator to the sequence of prime numbers.
Goldbach's conjecture: every even natural number greater than 2 is the sum of two prime numbers. (An empirical check is sketched after this list.)
Lander, Parkin, and Selfridge conjecture: if the sum of m k-th powers of positive integers is equal to a different sum of n k-th powers of positive integers, then m + n ≥ k.
Lemoine's conjecture: all odd integers greater than 5 can be represented as the sum of an odd prime number and an even semiprime.
Minimum overlap problem of estimating the minimum possible maximum number of times a number appears in the termwise difference of two equally large sets partitioning the set {1, 2, ..., 2n}
Pollock's conjectures
Does every nonnegative integer appear in Recamán's sequence?
Skolem problem: can an algorithm determine if a constant-recursive sequence contains a zero?
The values of g(k) and G(k) in Waring's problem
Do the Ulam numbers have a positive density?
Determine growth rate of rk(N) (see Szemerédi's theorem)
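The empirical check promised at the Goldbach item above is straightforward with a prime sieve; far more efficient versions of the same idea have verified the conjecture beyond 4 × 10^18. A minimal Python sketch with an arbitrary small bound (names are ours):

```python
def primes_up_to(limit: int) -> list:
    """Primes up to limit via the sieve of Eratosthenes."""
    is_prime = bytearray([1]) * (limit + 1)
    is_prime[0:2] = b"\x00\x00"
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            is_prime[p * p :: p] = bytearray(len(range(p * p, limit + 1, p)))
    return [n for n in range(limit + 1) if is_prime[n]]

LIMIT = 10_000
primes = primes_up_to(LIMIT)
prime_set = set(primes)

def goldbach_pair(n: int):
    """Return the prime pair (p, q) with p + q = n and p minimal, or None."""
    for p in primes:
        if n - p in prime_set:
            return (p, n - p)
    return None

# The conjecture asserts this never fails; here we merely confirm it below LIMIT.
assert all(goldbach_pair(n) is not None for n in range(4, LIMIT + 1, 2))
print(goldbach_pair(100))  # (3, 97)
```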
Algebraic number theory
Class number problem: are there infinitely many real quadratic number fields with unique factorization?
Fontaine–Mazur conjecture: actually numerous conjectures, all proposed by Jean-Marc Fontaine and Barry Mazur.
Gan–Gross–Prasad conjecture: a restriction problem in representation theory of real or p-adic Lie groups.
Greenberg's conjectures
Hermite's problem: is it possible, for any natural number n, to assign a sequence of natural numbers to each real number x such that the sequence for x is eventually periodic if and only if x is algebraic of degree n?
Kummer–Vandiver conjecture: primes do not divide the class number of the maximal real subfield of the -th cyclotomic field.
Lang and Trotter's conjecture on supersingular primes that the number of supersingular primes less than a constant X is within a constant multiple of √X / ln X
Selberg's 1/4 conjecture: the eigenvalues of the Laplace operator on Maass wave forms of congruence subgroups are at least 1/4.
Stark conjectures (including Brumer–Stark conjecture)
Characterize all algebraic number fields that have some power basis.
Computational number theory
Can integer factorization be done in polynomial time?
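For context on the factorization question above: the naive baseline, trial division, takes time exponential in the bit length of the input, since a b-bit number may require about 2^(b/2) candidate divisors, and the open problem asks whether any classical algorithm can be polynomially fast. A minimal sketch:

```python
def trial_division(n: int) -> list:
    """Prime factorization of n by trial division; exponential in the bit length."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(trial_division(2 ** 32 + 1))  # [641, 6700417], Euler's factorization of F5
```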
Diophantine approximation and transcendental number theory
Schanuel's conjecture on the transcendence degree of certain field extensions of the rational numbers. In particular: Are e and π algebraically independent? Which nontrivial combinations of transcendental numbers (such as e + π) are themselves transcendental?
The four exponentials conjecture: the transcendence of at least one of four exponentials of combinations of irrationals
Are Euler's constant and Catalan's constant irrational? Are they transcendental? Is Apéry's constant transcendental?
Which transcendental numbers are (exponential) periods?
How well can non-quadratic irrational numbers be approximated? What is the irrationality measure of specific (suspected) transcendental numbers such as and ?
Which irrational numbers have simple continued fraction terms whose geometric mean converges to Khinchin's constant?
Diophantine equations
Beal's conjecture: for all integral solutions to A^x + B^y = C^z where x, y, z > 2, all three numbers A, B, C must share some prime factor.
Congruent number problem (a corollary to Birch and Swinnerton-Dyer conjecture, per Tunnell's theorem): determine precisely what rational numbers are congruent numbers.
Erdős–Moser problem: is 1¹ + 2¹ = 3¹ the only solution to the Erdős–Moser equation?
Erdős–Straus conjecture: for every n ≥ 2, there are positive integers x, y, z such that 4/n = 1/x + 1/y + 1/z. (A bounded search is sketched after this list.)
Fermat–Catalan conjecture: there are finitely many distinct solutions to the equation a^m + b^n = c^k with a, b, c being positive coprime integers and m, n, k being positive integers satisfying 1/m + 1/n + 1/k < 1.
Goormaghtigh conjecture on solutions to (x^m − 1)/(x − 1) = (y^n − 1)/(y − 1) where x > y > 1 and m, n > 2.
The uniqueness conjecture for Markov numbers that every Markov number is the largest number in exactly one normalized solution to the Markov Diophantine equation.
Pillai's conjecture: for any A, B, C, the equation Ax^m − By^n = C has finitely many solutions (x, y, m, n) when m and n are not both 2.
Which integers can be written as the sum of three perfect cubes?
Can every integer be written as a sum of four perfect cubes?
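The bounded search promised at the Erdős–Straus item above works because the ordering x ≤ y ≤ z forces n/4 ≤ x ≤ 3n/4, and then 1/r < y ≤ 2/r once the remainder r = 4/n − 1/x is fixed. A Python sketch using exact rational arithmetic (names and test bounds are ours):

```python
from fractions import Fraction
from math import ceil

def erdos_straus(n: int):
    """Return (x, y, z) with 1/x + 1/y + 1/z = 4/n and x <= y <= z, or None."""
    t = Fraction(4, n)
    for x in range(ceil(n / 4), 3 * n // 4 + 1):
        r = t - Fraction(1, x)
        if r <= 0:
            continue
        for y in range(max(x, ceil(1 / r)), int(2 / r) + 1):
            s = r - Fraction(1, y)
            if s > 0 and s.numerator == 1:
                return (x, y, s.denominator)
    return None

# The conjecture asserts a decomposition exists for every n >= 2.
assert all(erdos_straus(n) is not None for n in range(2, 200))
print(erdos_straus(197))  # one decomposition for the prime 197
```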
Prime numbers
Agoh–Giuga conjecture on the Bernoulli numbers that p is prime if and only if pB_(p−1) ≡ −1 (mod p)
Agrawal's conjecture that given coprime positive integers n and r, if (X − 1)^n ≡ X^n − 1 (mod n, X^r − 1), then either n is prime or n² ≡ 1 (mod r)
Artin's conjecture on primitive roots that if an integer is neither a perfect square nor −1, then it is a primitive root modulo infinitely many prime numbers
Brocard's conjecture: there are always at least 4 prime numbers between the squares of consecutive primes, aside from 2² and 3².
Bunyakovsky conjecture: if an integer-coefficient polynomial f has a positive leading coefficient, is irreducible over the integers, and its values f(n) over the positive integers n share no common factor, then f(n) is prime infinitely often.
Catalan's Mersenne conjecture: some Catalan–Mersenne number is composite and thus all Catalan–Mersenne numbers are composite after some point.
Dickson's conjecture: for a finite set of linear forms with each , there are infinitely many for which all forms are prime, unless there is some congruence condition preventing it.
Dubner's conjecture: every even number greater than 4208 is the sum of two primes which both have a twin.
Elliott–Halberstam conjecture on the distribution of prime numbers in arithmetic progressions.
Erdős–Mollin–Walsh conjecture: no three consecutive numbers are all powerful.
Feit–Thompson conjecture: for all distinct prime numbers p and q, (p^q − 1)/(p − 1) does not divide (q^p − 1)/(q − 1)
Fortune's conjecture that no Fortunate number is composite.
The Gaussian moat problem: is it possible to find an infinite sequence of distinct Gaussian prime numbers such that the difference between consecutive numbers in the sequence is bounded?
Gillies' conjecture on the distribution of prime divisors of Mersenne numbers.
Landau's problems
Goldbach conjecture: all even natural numbers greater than 2 are the sum of two prime numbers.
Legendre's conjecture: for every positive integer n, there is a prime between n² and (n + 1)².
Twin prime conjecture: there are infinitely many twin primes. (A constellation-counting sketch appears after this list.)
Are there infinitely many primes of the form n² + 1?
Problems associated to Linnik's theorem
New Mersenne conjecture: for any odd natural number p, if any two of the three conditions p = 2^k ± 1 or p = 4^k ± 3, 2^p − 1 is prime, and (2^p + 1)/3 is prime are true, then the third condition is also true.
Polignac's conjecture: for all positive even numbers n, there are infinitely many prime gaps of size n.
Schinzel's hypothesis H that for every finite collection of nonconstant irreducible polynomials over the integers with positive leading coefficients, either there are infinitely many positive integers for which are all primes, or there is some fixed divisor which, for all , divides some .
Selfridge's conjecture: is 78,557 the lowest Sierpiński number?
Does the converse of Wolstenholme's theorem hold for all natural numbers?
Are all Euclid numbers square-free?
Are all Fermat numbers square-free?
Are all Mersenne numbers of prime index square-free?
Are there any composite c satisfying 2^(c−1) ≡ 1 (mod c²)?
Are there any Wall–Sun–Sun primes?
Are there any Wieferich primes in base 47?
Are there infinitely many balanced primes?
Are there infinitely many Carol primes?
Are there infinitely many cluster primes?
Are there infinitely many cousin primes?
Are there infinitely many Cullen primes?
Are there infinitely many Euclid primes?
Are there infinitely many Fibonacci primes?
Are there infinitely many Kummer primes?
Are there infinitely many Kynea primes?
Are there infinitely many Lucas primes?
Are there infinitely many Mersenne primes (Lenstra–Pomerance–Wagstaff conjecture); equivalently, infinitely many even perfect numbers?
Are there infinitely many Newman–Shanks–Williams primes?
Are there infinitely many palindromic primes to every base?
Are there infinitely many Pell primes?
Are there infinitely many Pierpont primes?
Are there infinitely many prime quadruplets?
Are there infinitely many prime triplets?
Are there infinitely many regular primes, and if so is their relative density e^(−1/2)?
Are there infinitely many sexy primes?
Are there infinitely many safe and Sophie Germain primes?
Are there infinitely many Wagstaff primes?
Are there infinitely many Wieferich primes?
Are there infinitely many Wilson primes?
Are there infinitely many Wolstenholme primes?
Are there infinitely many Woodall primes?
Can a prime p satisfy 2^(p−1) ≡ 1 (mod p²) and 3^(p−1) ≡ 1 (mod p²) simultaneously?
Does every prime number appear in the Euclid–Mullin sequence?
What is the smallest Skewes's number?
For any given integer a > 0, are there infinitely many Lucas–Wieferich primes associated with the pair (a, −1)? (In particular, when a = 1 these are the Fibonacci–Wieferich primes, and when a = 2 these are the Pell–Wieferich primes.)
For any given integer a > 0, are there infinitely many primes p such that a^(p−1) ≡ 1 (mod p²)?
For any given integer a which is not a square and does not equal −1, are there infinitely many primes with a as a primitive root?
For any given integer b which is not a perfect power and not of the form −4k⁴ for integer k, are there infinitely many repunit primes to base b?
For any given integers , with and are there infinitely many primes of the form with integer n ≥ 1?
Is every Fermat number composite for n > 4?
Is 509,203 the lowest Riesel number?
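Several of the infinitude questions above (twin, cousin, and Sophie Germain primes among them) reduce, empirically, to counting prime constellations below a bound. A hedged Python sketch with an arbitrary bound (names are ours); each count is conjectured, but not proven, to grow without limit:

```python
def sieve(limit: int) -> list:
    """Primes up to limit via the sieve of Eratosthenes."""
    flags = bytearray([1]) * (limit + 1)
    flags[0:2] = b"\x00\x00"
    for p in range(2, int(limit ** 0.5) + 1):
        if flags[p]:
            flags[p * p :: p] = bytearray(len(range(p * p, limit + 1, p)))
    return [n for n in range(limit + 1) if flags[n]]

LIMIT = 100_000
primes = sieve(LIMIT)
prime_set = set(primes)

# Constellations counted with both members below LIMIT.
twins = sum(1 for p in primes if p + 2 in prime_set)
cousins = sum(1 for p in primes if p + 4 in prime_set)
sophie_germain = sum(1 for p in primes if 2 * p + 1 in prime_set)
print(twins, cousins, sophie_germain)
```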
Set theory
Note: These conjectures are about models of Zermelo-Frankel set theory with choice, and may not be able to be expressed in models of other set theories such as the various constructive set theories or non-wellfounded set theory.
(Woodin) Does the generalized continuum hypothesis below a strongly compact cardinal imply the generalized continuum hypothesis everywhere?
Does the generalized continuum hypothesis entail for every singular cardinal ?
Does the generalized continuum hypothesis imply the existence of an ℵ2-Suslin tree?
If ℵω is a strong limit cardinal, is 2^ℵω < ℵω1 (see Singular cardinals hypothesis)? The best bound, 2^ℵω < ℵω4, was obtained by Shelah using his PCF theory.
The problem of finding the ultimate core model, one that contains all large cardinals.
Woodin's Ω-conjecture: if there is a proper class of Woodin cardinals, then Ω-logic satisfies an analogue of Gödel's completeness theorem.
Does the consistency of the existence of a strongly compact cardinal imply the consistent existence of a supercompact cardinal?
Does there exist a Jónsson algebra on ℵω?
Is OCA (the open coloring axiom) consistent with 2^ℵ0 > ℵ2?
Reinhardt cardinals: Without assuming the axiom of choice, can a nontrivial elementary embedding V→V exist?
Topology
Baum–Connes conjecture: the assembly map is an isomorphism.
Berge conjecture that the only knots in the 3-sphere which admit lens space surgeries are Berge knots.
Bing–Borsuk conjecture: every -dimensional homogeneous absolute neighborhood retract is a topological manifold.
Borel conjecture: aspherical closed manifolds are determined up to homeomorphism by their fundamental groups.
Halperin conjecture on rational Serre spectral sequences of certain fibrations.
Hilbert–Smith conjecture: if a locally compact topological group has a continuous, faithful group action on a topological manifold, then the group must be a Lie group.
Mazur's conjectures
Novikov conjecture on the homotopy invariance of certain polynomials in the Pontryagin classes of a manifold, arising from the fundamental group.
Quadrisecants of wild knots: it has been conjectured that wild knots always have infinitely many quadrisecants.
Telescope conjecture: the last of Ravenel's conjectures in stable homotopy theory to be resolved.
Unknotting problem: can unknots be recognized in polynomial time?
Volume conjecture relating quantum invariants of knots to the hyperbolic geometry of their knot complements.
Whitehead conjecture: every connected subcomplex of a two-dimensional aspherical CW complex is aspherical.
Zeeman conjecture: given a finite contractible two-dimensional CW complex K, is the space K × [0, 1] collapsible?
Problems solved since 1995
Algebra
Mazur's conjecture B (Vessilin Dimitrov, Ziyang Gao, and Philipp Habegger, 2020)
Suita conjecture (Qi'an Guan and Xiangyu Zhou, 2015)
Torsion conjecture (Loïc Merel, 1996)
Carlitz–Wan conjecture (Hendrik Lenstra, 1995)
Serre's nonnegativity conjecture (Ofer Gabber, 1995)
Analysis
Kadison–Singer problem (Adam Marcus, Daniel Spielman and Nikhil Srivastava, 2013) (and with it the Feichtinger conjecture, Anderson's paving conjectures, Weaver's discrepancy-theoretic conjectures, and the Bourgain–Tzafriri conjecture)
Ahlfors measure conjecture (Ian Agol, 2004)
Gradient conjecture (Krzysztof Kurdyka, Tadeusz Mostowski, Adam Parusinski, 1999)
Combinatorics
Erdős sumset conjecture (Joel Moreira, Florian Richter, Donald Robertson, 2018)
McMullen's g-conjecture on the possible numbers of faces of different dimensions in a simplicial sphere (also Grünbaum conjecture, several conjectures of Kühnel) (Karim Adiprasito, 2018)
Hirsch conjecture (Francisco Santos Leal, 2010)
Gessel's lattice path conjecture (Manuel Kauers, Christoph Koutschan, and Doron Zeilberger, 2009)
Stanley–Wilf conjecture (Gábor Tardos and Adam Marcus, 2004) (and also the Alon–Friedgut conjecture)
Kemnitz's conjecture (Christian Reiher, 2003, Carlos di Fiore, 2003)
Cameron–Erdős conjecture (Ben J. Green, 2003, Alexander Sapozhenko, 2003)
Dynamical systems
Zimmer's conjecture (Aaron Brown, David Fisher, and Sebastián Hurtado-Salazar, 2017)
Painlevé conjecture (Jinxin Xue, 2014)
Game theory
Existence of a non-terminating game of beggar-my-neighbour (Brayden Casella, 2024)
The angel problem (Various independent proofs, 2006)
Geometry
21st century
Einstein problem (David Smith, Joseph Samuel Myers, Craig S. Kaplan, Chaim Goodman-Strauss, 2024)
Maximal rank conjecture (Eric Larson, 2018)
Weibel's conjecture (Moritz Kerz, Florian Strunk, and Georg Tamme, 2018)
Yau's conjecture (Antoine Song, 2018)
Pentagonal tiling (Michaël Rao, 2017)
Willmore conjecture (Fernando Codá Marques and André Neves, 2012)
Erdős distinct distances problem (Larry Guth, Nets Hawk Katz, 2011)
Heterogeneous tiling conjecture (squaring the plane) (Frederick V. Henle and James M. Henle, 2008)
Tameness conjecture (Ian Agol, 2004)
Ending lamination theorem (Jeffrey F. Brock, Richard D. Canary, Yair N. Minsky, 2004)
Carpenter's rule problem (Robert Connelly, Erik Demaine, Günter Rote, 2003)
Lambda g conjecture (Carel Faber and Rahul Pandharipande, 2003)
Nagata's conjecture (Ivan Shestakov, Ualbai Umirbaev, 2003)
Double bubble conjecture (Michael Hutchings, Frank Morgan, Manuel Ritoré, Antonio Ros, 2002)
20th century
Honeycomb conjecture (Thomas Callister Hales, 1999)
Lange's conjecture (Montserrat Teixidor i Bigas and Barbara Russo, 1999)
Bogomolov conjecture (Emmanuel Ullmo, 1998, Shou-Wu Zhang, 1998)
Kepler conjecture (Samuel Ferguson, Thomas Callister Hales, 1998)
Dodecahedral conjecture (Thomas Callister Hales, Sean McLaughlin, 1998)
Graph theory
Kahn–Kalai conjecture (Jinyoung Park and Huy Tuan Pham, 2022)
Blankenship–Oporowski conjecture on the book thickness of subdivisions (Vida Dujmović, David Eppstein, Robert Hickingbotham, Pat Morin, and David Wood, 2021)
Ringel's conjecture that the complete graph on 2n + 1 vertices can be decomposed into 2n + 1 copies of any tree with n edges (Richard Montgomery, Benny Sudakov, Alexey Pokrovskiy, 2020)
Disproof of Hedetniemi's conjecture on the chromatic number of tensor products of graphs (Yaroslav Shitov, 2019)
Kelmans–Seymour conjecture (Dawei He, Yan Wang, and Xingxing Yu, 2020)
Goldberg–Seymour conjecture (Guantao Chen, Guangming Jing, and Wenan Zang, 2019)
Babai's problem (Alireza Abdollahi, Maysam Zallaghi, 2015)
Alspach's conjecture (Darryn Bryant, Daniel Horsley, William Pettersson, 2014)
Alon–Saks–Seymour conjecture (Hao Huang, Benny Sudakov, 2012)
Read–Hoggar conjecture (June Huh, 2009)
Scheinerman's conjecture (Jeremie Chalopin and Daniel Gonçalves, 2009)
Erdős–Menger conjecture (Ron Aharoni, Eli Berger 2007)
Road coloring conjecture (Avraham Trahtman, 2007)
Robertson–Seymour theorem (Neil Robertson, Paul Seymour, 2004)
Strong perfect graph conjecture (Maria Chudnovsky, Neil Robertson, Paul Seymour and Robin Thomas, 2002)
Toida's conjecture (Mikhail Muzychuk, Mikhail Klin, and Reinhard Pöschel, 2001)
Harary's conjecture on the integral sum number of complete graphs (Zhibo Chen, 1996)
Group theory
Hanna Neumann conjecture (Joel Friedman, 2011, Igor Mineyev, 2011)
Density theorem (Hossein Namazi, Juan Souto, 2010)
Full classification of finite simple groups (Koichiro Harada, Ronald Solomon, 2008)
Number theory
21st century
André–Oort conjecture (Jonathan Pila, Ananth Shankar, Jacob Tsimerman, 2021)
Duffin-Schaeffer conjecture (Dimitris Koukoulopoulos, James Maynard, 2019)
Main conjecture in Vinogradov's mean-value theorem (Jean Bourgain, Ciprian Demeter, Larry Guth, 2015)
Goldbach's weak conjecture (Harald Helfgott, 2013)
Existence of bounded gaps between primes (Yitang Zhang, Polymath8, James Maynard, 2013)
Sidon set problem (Javier Cilleruelo, Imre Z. Ruzsa, and Carlos Vinuesa, 2010)
Serre's modularity conjecture (Chandrashekhar Khare and Jean-Pierre Wintenberger, 2008)
Green–Tao theorem (Ben J. Green and Terence Tao, 2004)
Catalan's conjecture (Preda Mihăilescu, 2002)
Erdős–Graham problem (Ernest S. Croot III, 2000)
20th century
Lafforgue's theorem (Laurent Lafforgue, 1998)
Fermat's Last Theorem (Andrew Wiles and Richard Taylor, 1995)
Ramsey theory
Burr–Erdős conjecture (Choongbum Lee, 2017)
Boolean Pythagorean triples problem (Marijn Heule, Oliver Kullmann, Victor W. Marek, 2016)
Theoretical computer science
Sensitivity conjecture for Boolean functions (Hao Huang, 2019)
Topology
Deciding whether the Conway knot is a slice knot (Lisa Piccirillo, 2020)
Virtual Haken conjecture (Ian Agol, Daniel Groves, Jason Manning, 2012) (and by work of Daniel Wise also virtually fibered conjecture)
Hsiang–Lawson's conjecture (Simon Brendle, 2012)
Ehrenpreis conjecture (Jeremy Kahn, Vladimir Markovic, 2011)
Atiyah conjecture for groups with finite subgroups of unbounded order (Austin, 2009)
Cobordism hypothesis (Jacob Lurie, 2008)
Spherical space form conjecture (Grigori Perelman, 2006)
Poincaré conjecture (Grigori Perelman, 2002)
Geometrization conjecture, (Grigori Perelman, series of preprints in 2002–2003)
Nikiel's conjecture (Mary Ellen Rudin, 1999)
Disproof of the Ganea conjecture (Iwase, 1997)
Uncategorised
2010s
Erdős discrepancy problem (Terence Tao, 2015)
Umbral moonshine conjecture (John F. R. Duncan, Michael J. Griffin, Ken Ono, 2015)
Anderson conjecture on the finite number of diffeomorphism classes of the collection of 4-manifolds satisfying certain properties (Jeff Cheeger, Aaron Naber, 2014)
Gaussian correlation inequality (Thomas Royen, 2014)
Beck's conjecture on discrepancies of set systems constructed from three permutations (Alantha Newman, Aleksandar Nikolov, 2011)
Bloch–Kato conjecture (Vladimir Voevodsky, 2011) (and Quillen–Lichtenbaum conjecture and by work of Thomas Geisser and Marc Levine (2001) also Beilinson–Lichtenbaum conjecture)
2000s
Kauffman–Harary conjecture (Thomas Mattman, Pablo Solis, 2009)
Surface subgroup conjecture (Jeremy Kahn, Vladimir Markovic, 2009)
Normal scalar curvature conjecture and the Böttcher–Wenzel conjecture (Zhiqin Lu, 2007)
Nirenberg–Treves conjecture (Nils Dencker, 2005)
Lax conjecture (Adrian Lewis, Pablo Parrilo, Motakuri Ramana, 2005)
The Langlands–Shelstad fundamental lemma (Ngô Bảo Châu and Gérard Laumon, 2004)
Milnor conjecture (Vladimir Voevodsky, 2003)
Kirillov's conjecture (Ehud Baruch, 2003)
Kouchnirenko's conjecture (Bertrand Haas, 2002)
n! conjecture (Mark Haiman, 2001) (and also Macdonald positivity conjecture)
Kato's conjecture (Pascal Auscher, Steve Hofmann, Michael Lacey, Alan McIntosh, and Philipp Tchamitchian, 2001)
Deligne's conjecture on 1-motives (Luca Barbieri-Viale, Andreas Rosenschon, Morihiko Saito, 2001)
Modularity theorem (Christophe Breuil, Brian Conrad, Fred Diamond, and Richard Taylor, 2001)
Erdős–Stewart conjecture (Florian Luca, 2001)
Berry–Robbins problem (Michael Atiyah, 2000)
See also
List of conjectures
List of unsolved problems in statistics
List of unsolved problems in computer science
List of unsolved problems in physics
Lists of unsolved problems
Open Problems in Mathematics
The Great Mathematical Problems
Scottish Book
Notes
References
Further reading
Books discussing problems solved since 1995
Books discussing unsolved problems
External links
24 Unsolved Problems and Rewards for them
List of links to unsolved problems in mathematics, prizes and research
Open Problem Garden
AIM Problem Lists
Unsolved Problem of the Week Archive. MathPro Press.
Unsolved Problems in Number Theory, Logic and Cryptography
200 open problems in graph theory
The Open Problems Project (TOPP), discrete and computational geometry problems
Kirby's list of unsolved problems in low-dimensional topology
Erdös' Problems on Graphs
Unsolved Problems in Virtual Knot Theory and Combinatorial Knot Theory
Open problems from the 12th International Conference on Fuzzy Set Theory and Its Applications
List of open problems in inner model theory
Barry Simon's 15 Problems in Mathematical Physics
Alexandre Eremenko. Unsolved problems in Function Theory
Mathematics
Lists of problems | List of unsolved problems in mathematics | [
"Mathematics"
] | 12,832 | [
"Unsolved problems in mathematics",
"Mathematical problems",
"Conjectures"
] |
183,120 | https://en.wikipedia.org/wiki/Overpressure | Overpressure (or blast overpressure) is the pressure caused by a shock wave over and above normal atmospheric pressure. The shock wave may be caused by sonic boom or by explosion, and the resulting overpressure receives particular attention when measuring the effects of nuclear weapons or thermobaric bombs.
Effects
According to an article in the journal Toxicological Sciences,
Blast overpressure (BOP), also known as high energy impulse noise, is a damaging outcome of explosive detonations and firing of weapons. Exposure to BOP shock waves alone results in injury predominantly to the hollow organ systems such as auditory, respiratory, and gastrointestinal systems.
An EOD suit worn by bomb disposal experts can protect against the effects of BOP.
The journal also tabulates the effects of overpressure on the human body inside a building struck by blast overpressure waves.
Further details on blast overpressure effects appear in documents released by the United States military's Defense Technical Information Center (DTIC).
Calculation for an enclosed space
Overpressure in an enclosed space is determined using "Weibull's formula":

P = 22.5 × (NEM / V)^0.72

where:
P = overpressure (bar)
22.5 is a constant based on experimentation
NEM = net explosive mass (kilograms), calculated using all explosive materials and their relative effectiveness
V = volume of the given area (cubic meters), primarily used to determine the volume within an enclosed space
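As a quick illustration, the calculation can be sketched in a few lines of Python. This is a minimal sketch assuming the commonly published form of the formula, with the result in bars and NEM expressed as TNT-equivalent mass; the exponent 0.72 and the units are taken from published formulations rather than stated in this article.

```python
def weibull_overpressure(nem_kg: float, volume_m3: float) -> float:
    """Quasi-static overpressure (bar) in an enclosed space.

    Assumes the commonly published Weibull form:
        P = 22.5 * (NEM / V) ** 0.72
    with NEM the TNT-equivalent net explosive mass in kilograms
    and V the enclosure volume in cubic metres.
    """
    if volume_m3 <= 0:
        raise ValueError("volume must be positive")
    return 22.5 * (nem_kg / volume_m3) ** 0.72

# Hypothetical example: 1 kg TNT-equivalent in a 50 m^3 room
print(f"{weibull_overpressure(1.0, 50.0):.2f} bar")  # ~1.35 bar
```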
See also
Bomb disposal
References
Pressure
Explosives
Shock waves | Overpressure | [
"Physics",
"Chemistry"
] | 283 | [
"Scalar physical quantities",
"Physical phenomena",
"Mechanical quantities",
"Shock waves",
"Physical quantities",
"Pressure",
"Waves",
"Explosives",
"Explosions",
"Wikipedia categories named after physical quantities"
] |
183,132 | https://en.wikipedia.org/wiki/Connate%20fluids | In geology and sedimentology, connate fluids are liquids that were trapped in the pores of sedimentary rocks as they were deposited. These liquids are largely composed of water, but also contain many mineral components as ions in solution.
As rocks are buried, they undergo lithification and the connate fluids are usually expelled. If the escape route for these fluids is blocked, the pore fluid pressure can build up, leading to overpressure.
Significance
An understanding of the geochemistry of connate fluids is important if the diagenesis of the rock is to be quantified. The solutes in the connate fluids often precipitate and reduce the porosity and permeability of the host rock, which can have important implications for its hydrocarbon prospectivity. The chemical components of the connate fluid can also yield information on the provenance of aquifers and of the thermal history of the host rock. Minute bubbles of fluid are often trapped within the crystals of the cementing material. These fluid inclusions provide direct information about the composition of the fluid and the pressure-temperature conditions that existed during diagenesis of the sediments.
Analyses of connate water samples from Louisiana (USA) have been compared with those of seawater.
Similar, but different in origin, is the concept of fossil water, which is used to describe very old groundwater found in deep aquifers or bedrock. Typically it was recharged during a different climatic period (e.g., the last ice age) so is also very old, but possibly not of the same genesis as the rock.
See also
Petroleum geology
References
Petroleum
Sedimentology
Soil mechanics | Connate fluids | [
"Physics",
"Chemistry"
] | 329 | [
"Soil mechanics",
"Petroleum",
"Chemical mixtures",
"Applied and interdisciplinary physics"
] |
183,194 | https://en.wikipedia.org/wiki/Thermal%20history%20modelling | Thermal history modelling is an exercise undertaken during basin modelling to evaluate the temperature history of stratigraphic layers in a sedimentary basin.
The thermal history of a basin is usually calibrated using thermal indicator data, including vitrinite reflectance and fission tracks in the minerals apatite and zircon.
The temperatures undergone by rocks in a sedimentary basin are crucial when attempting to evaluate the quantity, nature and volume of hydrocarbons (fossil fuels) produced by diagenesis of kerogens (a group of chemicals formed from the decay of organic matter).
Fourier's law provides a simplified one-dimensional description of the variation in heat flow Q as a function of thermal conductivity k and thermal gradient dT/dz:

Q = −k (dT/dz)

(The minus sign indicates that heat flows in the opposite direction to increasing depth, that is, towards the Earth's surface.)

If the assumptions used to justify this simplified approximation (i.e. steady-state heat conduction, no convection or advection) are accepted, the temperature T at a depth z and time t is given by:

T(z,t) = T(0,t) + Q(t) ∫0^z dz′ / k(z′)

where T(0,t) is the surface temperature history, Q(t) is the heat flow history and k is thermal conductivity. The integral thus represents the integrated thermal resistance (the depth-summed inverse of thermal conductivity) of a 1-dimensional column of rock.
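For a column made of discrete layers, the integral can be accumulated layer by layer. The following Python sketch evaluates the steady-state profile under the assumptions above; the layer thicknesses, conductivities, surface temperature and heat flow are hypothetical example values, not data from this article.

```python
def temperature_profile(surface_temp_c, heat_flow_w_m2, layers):
    """Temperature at the base of each layer of a 1-D rock column.

    layers: list of (thickness_m, conductivity_W_per_mK), top first.
    Implements T(z) = T(0) + Q * sum(dz / k) over the layers above z,
    i.e. the steady-state relation given in the text.
    """
    temps, t = [], surface_temp_c
    for thickness, k in layers:
        t += heat_flow_w_m2 * thickness / k  # Q * dz / k for this layer
        temps.append(t)
    return temps

# Hypothetical column: 2 km of shale over 1 km of sandstone, Q = 60 mW/m^2
column = [(2000.0, 1.5), (1000.0, 2.5)]
print(temperature_profile(20.0, 0.06, column))  # [100.0, 124.0] degrees C
```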
Thermal history modelling attempts to describe the temperature history T(z,t) and therefore requires knowledge of the burial history of the stratigraphic layers, which is obtained through the process of back-stripping.
References
See also
Petroleum geology
Petroleum geology
Sedimentology | Thermal history modelling | [
"Chemistry"
] | 325 | [
"Petroleum",
"Petroleum geology"
] |
183,222 | https://en.wikipedia.org/wiki/Submitochondrial%20particle | A submitochondrial particle (SMP) is an artificial vesicle made from the inner mitochondrial membrane. They can be formed by subjecting isolated mitochondria to sonication, freezing and thawing, high pressure, or osmotic shock. SMPs can be used to study the electron transport chain in a cell-free context.
The process of SMP formation forces the inner mitochondrial membrane inside out, meaning that the matrix-facing leaflet becomes the outer surface of the SMP, and the intermembrane space-facing leaflet faces the lumen of the SMP. As a consequence, the F1 particles which normally face the matrix are exposed. Chaotropic agents can destabilize F1 particles and cause them to dissociate from the membrane, thereby uncoupling the final step of oxidative phosphorylation from the rest of the electron transport chain.
References
Cellular respiration
Membrane biology
Mitochondria | Submitochondrial particle | [
"Chemistry",
"Biology"
] | 198 | [
"Cellular respiration",
"Mitochondria",
"Membrane biology",
"Molecular and cellular biology stubs",
"Biochemistry stubs",
"Molecular biology",
"Biochemistry",
"Metabolism"
] |
183,241 | https://en.wikipedia.org/wiki/Satellite%20modem | A satellite modem or satmodem is a modem used to establish data transfers using a communications satellite as a relay. A satellite modem's main function is to transform an input bitstream to a radio signal and vice versa.
There are some devices that include only a demodulator (and no modulator, thus only allowing data to be downloaded by satellite) that are also referred to as "satellite modems." These devices are used in satellite Internet access (in this case uploaded data is transferred through a conventional PSTN modem or an ADSL modem).
Satellite link
A satellite modem is not the only device needed to establish a communication channel. Other equipment that is essential for creating a satellite link include satellite antennas and frequency converters.
Data to be transmitted are transferred to a modem from data terminal equipment (e.g. a computer). The modem usually has an intermediate frequency (IF) output (that is, 50–200 MHz); however, the signal is sometimes modulated directly to L band. In most cases, the frequency has to be converted using an upconverter before amplification and transmission.
A modulated signal is a sequence of symbols, pieces of data represented by a corresponding signal state, e.g. a bit or a few bits, depending upon the modulation scheme being used. Recovering a symbol clock (making a local symbol clock generator synchronous with the remote one) is one of the most important tasks of a demodulator.
Similarly, a signal received from a satellite is first downconverted (this is done by a low-noise block converter, or LNB), then demodulated by a modem, and finally handled by data terminal equipment. The LNB is usually powered by the modem through the signal cable with 13 or 18 V DC.
Features
The main functions of a satellite modem are modulation and demodulation. Satellite communication standards also define error correction codes and framing formats.
Popular modulation types being used for satellite communications:
Binary phase-shift keying (BPSK);
Quadrature phase-shift keying (QPSK);
Offset quadrature phase-shift keying (OQPSK);
8PSK;
Quadrature amplitude modulation (QAM), especially 16QAM.
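To make symbol mapping concrete, the sketch below implements a Gray-coded QPSK mapper in Python. The particular bit-to-quadrant convention and the unit-energy scaling are assumptions chosen for illustration; real standards differ in bit ordering and pulse shaping.

```python
import numpy as np

def qpsk_modulate(bits):
    """Map a bit stream to Gray-coded QPSK symbols of unit energy.

    Assumed convention: bit pairs (b0, b1) -> I = 1 - 2*b0, Q = 1 - 2*b1,
    scaled by 1/sqrt(2). Adjacent constellation points differ in one bit.
    """
    pairs = np.asarray(bits).reshape(-1, 2)
    i = 1 - 2 * pairs[:, 0]
    q = 1 - 2 * pairs[:, 1]
    return (i + 1j * q) / np.sqrt(2)

print(qpsk_modulate([0, 0, 0, 1, 1, 0, 1, 1]))  # four constellation points
```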
The popular satellite error correction codes include:
Convolutional codes:
with constraint length less than 10, usually decoded using a Viterbi algorithm (see Viterbi decoder);
with constraint length more than 10, usually decoded using a Fano algorithm (see Sequential decoder);
Reed–Solomon codes, usually concatenated with convolutional codes via interleaving;
New modems support superior error correction codes (turbo codes and LDPC codes).
Frame formats that are supported by various satellite modems include:
Intelsat business service (IBS) framing
Intermediate data rate (IDR) framing
MPEG-2 transport framing (used in DVB)
E1 and T1 framing
High-end modems also incorporate some additional features:
Multiple data interfaces (like RS-232, RS-422, V.35, G.703, LVDS, Ethernet);
Embedded Distant-end Monitor and Control (EDMAC), allowing the distant-end modem to be controlled;
Automatic Uplink Power Control (AUPC), that is, adjusting the output power to maintain a constant signal-to-noise ratio at the remote end;
Drop and insert feature for a multiplexed stream, allowing some channels in the stream to be replaced.
Internal structure
Probably the best way of understanding how a modem works is to look at its internal structure. The main functional blocks of a generic satellite modem are described below.
Analog tract
After a digital-to-analog conversion in the transmitter, the signal passes through a reconstruction filter. Then, if needed, frequency conversion is performed.
The purpose of the analog tract in the receiver is to convert signal's frequency, to adjust its power via an automatic gain control circuit and to get its complex envelope components.
The input signal for the analog tract is at the intermediate frequency or, sometimes, in the L band, in which case it must first be converted to an IF. Then the signal is either sampled or processed by the four-quadrant multiplier, which produces the complex envelope components (I, Q) through multiplying it by the heterodyne frequency (see superheterodyne receiver).
Finally, the signal passes through an anti-aliasing filter and is sampled (digitized).
Modulator and demodulator
A digital modulator transforms a digital stream into a radio signal at the intermediate frequency (IF). A modulator is generally simpler than a demodulator because it doesn't have to recover symbol and carrier frequencies.
A demodulator is one of the most important parts of the receiver. The exact structure of the demodulator is defined by the modulation type. However, the fundamental concepts are similar; moreover, it is possible to develop a demodulator that can process signals with different modulation types.
Digital demodulation implies that a symbol clock (and, in most cases, an intermediate frequency generator) at the receiving side has to be synchronous with those at the transmitting side. This is achieved by the following two circuits:
timing recovery circuit, determining the borders of symbols;
carrier recovery circuit, which determines the actual meaning of each symbol. There are modulation types (like frequency-shift keying) that can be demodulated without carrier recovery; however, this method, known as noncoherent demodulation, generally performs worse.
There are also additional components in the demodulator such as the intersymbol interference equalizer.
If the analog signal was digitized without a four-quadrant multiplier, the complex envelope has to be calculated by a digital complex mixer.
Sometimes a digital automatic gain control circuit is implemented in the demodulator.
FEC coding
Error correction techniques are essential for satellite communications because, due to a satellite's limited power, the signal-to-noise ratio at the receiver is usually rather poor. Error correction works by adding artificial redundancy to a data stream at the transmitting side and using this redundancy to correct errors caused by noise and interference. This is performed by an FEC encoder. The encoder applies an error correction code to the digital stream, thereby adding redundancy.
An FEC decoder decodes the forward error correction code used within the signal. For example, the Digital Video Broadcasting standard defines a concatenated code consisting of an inner convolutional code (the standard NASA code, punctured, with rates 1/2, 2/3, 3/4, 5/6 and 7/8), interleaving, and an outer Reed–Solomon code (block length: 204 bytes, information block: 188 bytes, able to correct up to 8 bytes in the block).
Differential coding
There are several modulation types (such as PSK and QAM) that have a phase ambiguity, that is, a carrier can be restored in different ways. Differential coding is used to resolve this ambiguity.
When differential coding is used, the data are deliberately made to depend not only on the current symbol, but also on the previous one.
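A minimal Python sketch of the idea, using symbol indices modulo 4 as in QPSK. The running-sum encoding shown is one common convention, assumed here for illustration; it demonstrates how a constant rotation of the whole constellation cancels out at the receiver.

```python
def diff_encode(symbols, m=4):
    """Differential encoding: transmit the running modular sum of the
    input indices, so the data live in the *differences* between symbols."""
    out, state = [], 0
    for s in symbols:
        state = (state + s) % m
        out.append(state)
    return out

def diff_decode(symbols, m=4):
    out, prev = [], 0
    for s in symbols:
        out.append((s - prev) % m)
        prev = s
    return out

data = [1, 3, 0, 2, 1]
tx = diff_encode(data)
rx = [(s + 1) % 4 for s in tx]   # simulate a 90-degree phase ambiguity
print(diff_decode(rx)[1:])       # [3, 0, 2, 1]: data recovered after symbol 1
```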
Scrambling
Scrambling is a technique used to randomize a data stream to eliminate long '0'-only and '1'-only sequences and to assure energy dispersal. Long '0'-only and '1'-only sequences create difficulties for the timing recovery circuit. Scramblers and descramblers are usually based on linear-feedback shift registers.
A scrambler randomizes the transmitted data stream. A descrambler restores the original stream from the scrambled one.
Scrambling shouldn't be confused with encryption, since it doesn't protect information from intruders.
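The following Python sketch shows an additive (synchronous) scrambler built on a linear-feedback shift register. The polynomial (1 + x^14 + x^15) and seed echo those commonly quoted for the DVB energy-dispersal sequence, but register-orientation conventions vary between descriptions, so treat the exact bit ordering as an assumption; any primitive polynomial illustrates the principle, and descrambling is the identical operation.

```python
def lfsr_scramble(bits, seed=0b100101010000000, nbits=15):
    """XOR a bit stream with the PRBS of a 15-bit LFSR (taps at stages
    14 and 15, i.e. polynomial 1 + x^14 + x^15). Applying the same
    function twice with the same seed restores the original stream."""
    state, out = seed, []
    for b in bits:
        fb = ((state >> 1) ^ state) & 1               # XOR of the two tap bits
        out.append(b ^ fb)                            # additive scrambling
        state = ((state << 1) | fb) & ((1 << nbits) - 1)
    return out

data = [1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1]   # long runs, bad for timing recovery
scrambled = lfsr_scramble(data)
print(lfsr_scramble(scrambled) == data)        # True: descrambling is the same op
```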
Multiplexing
A multiplexer transforms several digital streams into one stream. This is often referred to as 'muxing.'
Generally, a demultiplexer is a device that transforms one multiplexed data stream into several. Satellite modems do not have many outputs, so a demultiplexer here performs a drop operation, allowing the modem to choose which channels will be transferred to the output.
A demultiplexer achieves this goal by maintaining frame synchronization.
Applications
Satellite modems are often used for home internet access.
There are two different types, both employing the Digital Video Broadcasting (DVB) standard as their basis:
One-way satmodems (DVB-IP modems) use a return channel not based on communication with the satellite, such as telephone or cable.
Two-way satmodems (DVB-RCS modems, also called astromodems) employ a satellite-based return channel as well; they do not need another connection. DVB-RCS is ETSI standard EN 301 790.
There are also industrial satellite modems intended to provide a permanent link. They are used, for example, in the Steel shankar network.
See also
Communications satellite
Data collection satellite
Yahsat
Intelsat
Satellite Internet access
VSAT
External links
ITU Radio Regulations, Section IV. Radio Stations and Systems – Article 1.113, definition: satellite link International Telecommunication Union (ITU)
Satellite broadcasting
Modems
Telecommunications equipment
Telecommunications infrastructure | Satellite modem | [
"Engineering"
] | 1,914 | [
"Telecommunications engineering",
"Satellite broadcasting"
] |
183,256 | https://en.wikipedia.org/wiki/Nuclear%20isomer | A nuclear isomer is a metastable state of an atomic nucleus, in which one or more nucleons (protons or neutrons) occupy excited state levels (higher energy levels). "Metastable" describes nuclei whose excited states have half-lives 100 to 1000 times longer than the half-lives of the excited nuclear states that decay with a "prompt" half life (ordinarily on the order of 10−12 seconds). The term "metastable" is usually restricted to isomers with half-lives of 10−9 seconds or longer. Some references recommend 5 × 10−9 seconds to distinguish the metastable half life from the normal "prompt" gamma-emission half-life. Occasionally the half-lives are far longer than this and can last minutes, hours, or years. For example, the nuclear isomer survives so long (at least 1015 years) that it has never been observed to decay spontaneously. The half-life of a nuclear isomer can even exceed that of the ground state of the same nuclide, as shown by as well as , , , , and multiple holmium isomers.
Sometimes, the gamma decay from a metastable state is referred to as isomeric transition, but this process typically resembles shorter-lived gamma decays in all external aspects with the exception of the long-lived nature of the meta-stable parent nuclear isomer. The longer lives of nuclear isomers' metastable states are often due to the larger degree of nuclear spin change which must be involved in their gamma emission to reach the ground state. This high spin change causes these decays to be forbidden transitions and delayed. Delays in emission are caused by low or high available decay energy.
The first nuclear isomer and decay-daughter system (uranium X2/uranium Z, now known as /) was discovered by Otto Hahn in 1921.
Nuclei of nuclear isomers
The nucleus of a nuclear isomer occupies a higher energy state than the non-excited nucleus existing in the ground state. In an excited state, one or more of the protons or neutrons in a nucleus occupy a nuclear orbital of higher energy than an available nuclear orbital. These states are analogous to excited states of electrons in atoms.
When excited atomic states decay, energy is released by fluorescence. In electronic transitions, this process usually involves emission of light near the visible range. The amount of energy released is related to bond-dissociation energy or ionization energy and is usually in the range of a few to a few tens of eV per bond. However, a much stronger type of binding energy, the nuclear binding energy, is involved in nuclear processes. Due to this, most nuclear excited states decay by gamma ray emission. For example, a well-known nuclear isomer used in various medical procedures is technetium-99m (99mTc), which decays with a half-life of about 6 hours by emitting a gamma ray of 140 keV of energy; this is close to the energy of medical diagnostic X-rays.
Nuclear isomers have long half-lives because their gamma decay is "forbidden" from the large change in nuclear spin needed to emit a gamma ray. For example, 180mTa has a spin of 9 and must gamma-decay to 180Ta with a spin of 1. Similarly, 115mIn has a spin of 1/2 and must gamma-decay to 115In with a spin of 9/2.
While most metastable isomers decay through gamma-ray emission, they can also decay through internal conversion. During internal conversion, energy of nuclear de-excitation is not emitted as a gamma ray, but is instead used to accelerate one of the inner electrons of the atom. These excited electrons then leave at a high speed. This occurs because inner atomic electrons penetrate the nucleus where they are subject to the intense electric fields created when the protons of the nucleus re-arrange in a different way.
In nuclei that are far from stability in energy, even more decay modes are known.
After fission, several of the fission fragments that may be produced have a metastable isomeric state. These fragments are usually produced in a highly excited state, in terms of energy and angular momentum, and go through a prompt de-excitation. At the end of this process, the nuclei can populate both the ground and the isomeric states. If the half-life of the isomers is long enough, it is possible to measure their production rate and compare it to that of the ground state, calculating the so-called isomeric yield ratio.
Metastable isomers
Metastable isomers can be produced through nuclear fusion or other nuclear reactions. A nucleus produced this way generally starts its existence in an excited state that relaxes through the emission of one or more gamma rays or conversion electrons. Sometimes the de-excitation does not completely proceed rapidly to the nuclear ground state. This usually occurs as a spin isomer when the formation of an intermediate excited state has a spin far different from that of the ground state. Gamma-ray emission is hindered if the spin of the post-emission state differs greatly from that of the emitting state, especially if the excitation energy is low. The excited state in this situation is a good candidate to be metastable if there are no other states of intermediate spin with excitation energies less than that of the metastable state.
Metastable isomers of a particular isotope are usually designated with an "m". This designation is placed after the mass number of the atom; for example, cobalt-58m1 is abbreviated 58m1Co, where 27, the atomic number of cobalt, may be written as a preceding subscript. For isotopes with more than one metastable isomer, "indices" are placed after the designation, and the labeling becomes m1, m2, m3, and so on. Increasing indices, m1, m2, etc., correlate with increasing levels of excitation energy stored in each of the isomeric states (e.g., hafnium-178m2, or 178m2Hf).
A different kind of metastable nuclear state (isomer) is the fission isomer or shape isomer. Most actinide nuclei in their ground states are not spherical, but rather prolate spheroidal, with an axis of symmetry longer than the other axes, similar to an American football or rugby ball. This geometry can result in quantum-mechanical states where the distribution of protons and neutrons is so much further from spherical geometry that de-excitation to the nuclear ground state is strongly hindered. In general, these states either de-excite to the ground state far more slowly than a "usual" excited state, or they undergo spontaneous fission with half-lives of the order of nanoseconds or microseconds – a very short time, but many orders of magnitude longer than the half-life of a more usual nuclear excited state. Fission isomers may be denoted with a postscript or superscript "f" rather than "m", so that a fission isomer, e.g. of plutonium-240, can be denoted as plutonium-240f or 240fPu.
Nearly stable isomers
Most nuclear excited states are very unstable and "immediately" radiate away the extra energy after existing on the order of 10^−12 seconds. As a result, the characterization "nuclear isomer" is usually applied only to configurations with half-lives of 10^−9 seconds or longer. Quantum mechanics predicts that certain atomic species should possess isomers with unusually long lifetimes even by this stricter standard and have interesting properties. Some nuclear isomers are so long-lived that they are relatively stable and can be produced and observed in quantity.
The most stable nuclear isomer occurring in nature is 180mTa, which is present in all tantalum samples at about 1 part in 8,300. Its half-life is at least 10^15 years, markedly longer than the age of the universe. The low excitation energy of the isomeric state causes both gamma de-excitation to the 180Ta ground state (which itself is radioactive by beta decay, with a half-life of only 8 hours) and direct electron capture to hafnium or beta decay to tungsten to be suppressed due to spin mismatches. The origin of this isomer is mysterious, though it is believed to have been formed in supernovae (as are most other heavy elements). Were it to relax to its ground state, it would release a photon with a photon energy of 75 keV.
It was first reported in 1988 by C. B. Collins that 180mTa can theoretically be forced to release its energy by weaker X-rays, although at that time this de-excitation mechanism had never been observed. However, the de-excitation of 180mTa by resonant photo-excitation of intermediate high levels of this nucleus (E ≈ 1 MeV) was observed in 1999 by Belic and co-workers in the Stuttgart nuclear physics group.
178m2Hf is another reasonably stable nuclear isomer. It possesses a half-life of 31 years and the highest excitation energy of any comparably long-lived isomer. One gram of pure 178m2Hf contains approximately 1.33 gigajoules of energy, the equivalent of exploding about 315 kilograms of TNT. In the natural decay of 178m2Hf, the energy is released as gamma rays with a total energy of 2.45 MeV. As with 180mTa, there are disputed reports that 178m2Hf can be stimulated into releasing its energy. Due to this, the substance is being studied as a possible source for gamma-ray lasers. These reports indicate that the energy is released very quickly, so that 178m2Hf can produce extremely high powers (on the order of exawatts). Other isomers have also been investigated as possible media for gamma-ray stimulated emission.
Holmium's nuclear isomer 166m1Ho has a half-life of 1,200 years, which is nearly the longest half-life of any holmium radionuclide. Only 163Ho, with a half-life of 4,570 years, is more stable.
229Th has a remarkably low-lying metastable isomer, 229mTh, only about 8.4 eV above the ground state. This low energy produces "gamma rays" at a wavelength of about 148 nm, in the far ultraviolet, which allows for direct nuclear laser spectroscopy. Such ultra-precise spectroscopy, however, could not begin without a sufficiently precise initial estimate of the wavelength, something that was only achieved in 2024 after two decades of effort. The energy is so low that the ionization state of the atom affects the isomer's half-life. In neutral thorium, 229mTh decays by internal conversion with a half-life of a few microseconds, but because the isomeric energy is less than thorium's second ionization energy, this channel is forbidden in thorium cations, which instead decay by gamma emission with a half-life on the order of 10^3 seconds. This conveniently moderate lifetime allows the development of a nuclear clock of unprecedented accuracy.
High-spin suppression of decay
The most common mechanism for suppression of gamma decay of excited nuclei, and thus the existence of a metastable isomer, is lack of a decay route for the excited state that will change nuclear angular momentum along any given direction by the most common amount of 1 quantum unit ħ in the spin angular momentum. This change is necessary to emit a gamma photon, which has a spin of 1 unit in this system. Integral changes of 2 and more units in angular momentum are possible, but the emitted photons carry off the additional angular momentum. Changes of more than 1 unit are known as forbidden transitions. Each additional unit of spin change larger than 1 that the emitted gamma ray must carry inhibits decay rate by about 5 orders of magnitude. The highest known spin change of 8 units occurs in the decay of 180mTa, which suppresses its decay by a factor of 10^35 from that associated with 1 unit. Instead of a natural gamma-decay half-life of 10^−12 seconds, it has a half-life of more than 10^23 seconds, or at least 3 × 10^15 years, and thus has yet to be observed to decay.
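This rule of thumb lends itself to a back-of-the-envelope estimate, sketched in Python below. It simply applies the factors stated above (a prompt half-life of 10^−12 s and roughly five orders of magnitude of hindrance per extra unit of spin change); it illustrates the scaling only and is not a substitute for real transition-rate calculations, which depend on transition energy and multipole character.

```python
def isomer_half_life_estimate(spin_change,
                              prompt_half_life_s=1e-12,
                              hindrance_per_unit=1e5):
    """Order-of-magnitude half-life from the rule of thumb in the text:
    each unit of spin change beyond the first slows gamma decay by
    roughly five orders of magnitude. Illustrative scaling only."""
    extra_units = max(spin_change - 1, 0)
    return prompt_half_life_s * hindrance_per_unit ** extra_units

# Spin change of 8 units, as in 180mTa: ~1e23 s, i.e. more than 1e15 years
print(f"{isomer_half_life_estimate(8):.0e} s")
```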
Gamma emission is impossible when the nucleus begins in a zero-spin state, as such an emission would not conserve angular momentum.
Applications
Hafnium isomers (mainly 178m2Hf) have been considered as weapons that could be used to circumvent the Nuclear Non-Proliferation Treaty, since it is claimed that they can be induced to emit very strong gamma radiation. This claim is generally discounted. DARPA had a program to investigate this use of such nuclear isomers. The potential to trigger an abrupt release of energy from nuclear isotopes, a prerequisite to their use in such weapons, is disputed. Nonetheless a 12-member Hafnium Isomer Production Panel (HIPP) was created in 2003 to assess means of mass-producing the isotope.
Technetium isomers 99mTc (with a half-life of 6.01 hours) and 95mTc (with a half-life of 61 days) are used in medical and industrial applications.
Nuclear batteries
Nuclear batteries use small amounts (milligrams and microcuries) of radioisotopes with high energy densities. In one betavoltaic device design, radioactive material sits atop a device with adjacent layers of P-type and N-type silicon. Ionizing radiation directly penetrates the junction and creates electron–hole pairs. Nuclear isomers could replace other isotopes, and with further development, it may be possible to turn them on and off by triggering decay as needed. Current candidates for such use include 108mAg, 166mHo, 177mLu, and 242mAm. As of 2004, the only successfully triggered isomer was 180mTa, which required more photon energy to trigger than was released.
An isotope such as 177Lu releases gamma rays by decay through a series of internal energy levels within the nucleus, and it is thought that by learning the triggering cross sections with sufficient accuracy, it may be possible to create energy stores that are 10^6 times more concentrated than high explosive or other traditional chemical energy storage.
Decay processes
An isomeric transition or internal transition (IT) is the decay of a nuclear isomer to a lower-energy nuclear state. The actual process has two types (modes):
γ (gamma ray) emission (emission of a high-energy photon),
internal conversion (the energy is used to eject one of the atom's electrons).
Isomers may decay into other elements, though the rate of decay may differ between isomers. For example, 177mLu can beta-decay to 177Hf with a half-life of 160.4 d, or it can undergo isomeric transition to 177Lu with a half-life of 160.4 d, which then beta-decays to 177Hf with a half-life of 6.68 d.
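Such a branching decay is easy to sketch numerically. In the Python fragment below, the 160.4-day half-life comes from the text, while the branching fractions (roughly 78% beta decay versus 22% isomeric transition, figures reported in the nuclear data literature) are assumptions used only to split the decays between the two channels.

```python
def fraction_remaining(t_days, half_life_days=160.4):
    """Fraction of the original 177mLu still present after t days.
    Both branches drain the same state, so one half-life governs."""
    return 0.5 ** (t_days / half_life_days)

t = 160.4                                   # one half-life
decayed = 1.0 - fraction_remaining(t)       # half of the nuclei have decayed
print(f"to 177Hf via beta decay: {decayed * 0.78:.2f}")           # assumed ~78% branch
print(f"to 177Lu via isomeric transition: {decayed * 0.22:.2f}")  # assumed ~22% branch
```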
The emission of a gamma ray from an excited nuclear state allows the nucleus to lose energy and reach a lower-energy state, sometimes its ground state. In certain cases, the excited nuclear state following a nuclear reaction or other type of radioactive decay can become a metastable nuclear excited state. Some nuclei are able to stay in this metastable excited state for minutes, hours, days, or occasionally far longer.
The process of isomeric transition is similar to gamma emission from any excited nuclear state, but differs by involving excited metastable states of nuclei with longer half-lives. As with other excited states, the nucleus can be left in an isomeric state following the emission of an alpha particle, beta particle, or some other type of particle.
The gamma ray may transfer its energy directly to one of the most tightly bound electrons, causing that electron to be ejected from the atom, a process termed the photoelectric effect. This should not be confused with the internal conversion process, in which no gamma-ray photon is produced as an intermediate particle.
See also
Induced gamma emission
Isomeric shift
Mössbauer effect
References
External links
Research group which presented initial claims of hafnium nuclear isomer de-excitation control. – The Center for Quantum Electronics, The University of Texas at Dallas.
JASON Defense Advisory Group report on high energy nuclear materials mentioned in the Washington Post story above
Confidence for Hafnium Isomer Triggering in 2006. – The Center for Quantum Electronics, The University of Texas at Dallas.
Reprints of articles about nuclear isomers in peer reviewed journals. – The Center for Quantum Electronics, The University of Texas at Dallas.
Nuclear physics | Nuclear isomer | [
"Physics"
] | 3,338 | [
"Nuclear physics"
] |
183,265 | https://en.wikipedia.org/wiki/Dating%20creation | Dating creation is the attempt to provide an estimate of the age of Earth or the age of the universe as understood through the creation myths of various religious traditions. Various traditional beliefs hold that the Earth, or the entire universe, was brought into being in a grand creation event by one or more deities. After these cultures develop calendars, a question arises: Precisely how long ago did this creation event happen?
Sumerian and Babylonian
One of the Old Babylonian versions of the ancient Sumerian King List (WB 444) lists various mythical antediluvian kings and gives them reigns of several tens of thousands of years. The first Sumerian king Alulim, at Eridu, is described as reigning for 28,800 years, followed by several later kings of similar periods. In total these antediluvian kings ruled for 241,200 years from the time when "the kingship was lowered from heaven" to the time when "the flood" swept over the land. However, most modern scholars doubt that the ancient Sumerians or Babylonians themselves believed in so old a chronology. Instead they believe that these figures were either fabrications, or were based not on literal solar years (365.2425 days) but on lunar months (29.53059 days).
Cicero, reacting to the chronologies of such authors as Berossos (who composed a Greek-language history of Babylonia, known as the Babyloniaca, during the 3rd century BC), strongly criticised the claim that the Babylonians had kings going back hundreds of thousands of years.
Diodorus Siculus similarly wrote that he believed the Babylonians had fabricated their chronology.
Despite these criticisms, some ancient Greeks, including most notably Alexander Polyhistor and Proclus, believed the Babylonian kings were hundreds of thousands of years old, and that the Babylonians dated their creation 400,000–200,000 years before their own time.
Egyptian
The ancient Turin King List lists a mythical predynastic "reign of the gods" which first occurred 36,620 years before Menes (3050 BC), therefore dating the creation to around 39,670 BC.
Fragments from Manetho (preserved by Eusebius and George Syncellus, and collected in Felix Jacoby's FGrH), however, list different dates. In his Chronicle, Eusebius recorded the reign lengths that the Aegyptiaca assigned to Egypt's predynastic rulers.
Using these times, 13,900 + 1,255 + 1,817 + 1,790 + 350 + 5,813 = 24,925 years, which counting back from Menes (3050 BC) fixes the creation at about 28,000 BC. George Syncellus preserved yet another set of figures for the predynastic "reign of the gods", 11,984 years for gods and 2,646 for demigods, producing 14,630 years, thus dating the creation to 17,680 BC.
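The arithmetic behind such reconstructions is simple enough to check directly; the Python fragment below uses the reign lengths quoted from Eusebius above (BC dates are treated as a simple year count, ignoring the absence of a year zero):

```python
# Reign lengths (years) from Eusebius' excerpt of Manetho, as quoted above
reigns = [13_900, 1_255, 1_817, 1_790, 350, 5_813]
total = sum(reigns)          # 24,925 years before Menes
menes_bc = 3_050
print(total, "years before Menes ->", total + menes_bc, "BC")  # 27,975 BC, ~28,000 BC
```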
The Book of Sothis, considered as Pseudo-Manetho by many scholars, provides different figures. One fragment from Pseudo-Manetho dates the reign of the first Egyptian God (Ptah) 36,525 years before Menes (FGrH, #610 F2) and so dates the creation to about 39,575 BC.
The ancient Greeks reported similar figures on ancient Egyptian chronology. Diogenes Laërtius recorded that the ancient Egyptians dated their creation to their first god Hephaestus, who by interpretatio graeca was Ptah. According to Laertius, Hephaestus (Ptah) lived 48,863 years before Alexander the Great (b. 356 BC), dating the creation to 49,219 BC. Herodotus wrote that the ancient Egyptians had gods who ruled over them before the first dynasty of Egypt, but did not attempt to precisely date their creation by using their chronology.
According to Herodotus the ancient Egyptian demigods began 11,340 years before the reign of Seti I (1290 BC), so 11,340 + 1290 = 12,630 BC, while he listed earlier figures, 15,000 and 17,000, for the reign of the gods.
The ancient Greek writer Diodorus Siculus wrote that the ancient Egyptians dated their creation (or start of their reign of Gods) "a little less than eighteen thousand years" from Ptolemy XII Auletes (117–51 BC).
Apollonius, an Egyptian pagan priest in the 2nd century AD, calculated the cosmos to be 153,075 years old as reported by Theophilus of Antioch.
Martianus Capella, a pagan writer, wrote in his De nuptiis in the 5th century AD that the ancient Egyptians had archives of astronomy which started 40,000 years before his own era.
Herodotus' figures were discussed by Isaac Newton in his The Chronology of Ancient Kingdoms Amended (1728) but were dismissed by Newton because they did not fit Christian cosmology.
The mathematician and esotericist R. A. Schwaller de Lubicz, in his work Sacred Science, reconstructed Herodotus' dates to conclude that the ancient Egyptians dated their creation to an astronomical (stellar) event some 30,000 years before Herodotus' own time.
Hinduism
The Rig Veda questions the origin of the cosmos in the Nasadiya Sukta (the 129th hymn of the 10th mandala of the Rigveda).
In his book Lost Discoveries: The Ancient Roots of Modern Science, Dick Teresi reviews the cosmological ideas of the Vedas.
Carl Sagan and Fritjof Capra have pointed out similarities between the latest scientific understanding of the age of the universe and the Hindu concept of a "day and night of Brahma", which is much closer to the current known age of the universe than other creation views. The days and nights of Brahma posit a view of the universe that is divinely created, and is not strictly evolutionary, but an ongoing cycle of birth, death, and rebirth of the universe; Sagan remarked on this correspondence of timescales.
Also, according to Hinduism, the Kali Yuga, the last part of the current yuga cycle of time, traditionally began in 3102 BC.
Greek and Roman
Most ancient Greek and Roman chroniclers, poets, grammarians, and scholars (Eratosthenes, Varro, Apollodorus of Athens, Ovid, Censorinus, Catullus, and Castor of Rhodes) believed in a threefold division of history: ádelon (obscure), mythikón (mythical) and historikón (historical) periods. According to the Roman grammarian Censorinus, the first period, the ádelon (obscure), was reckoned by Varro as running from the origin of mankind to the first cataclysm.
The primordial ádelon (obscure) period ended with the flood of Ogyges, and what followed was the beginning of the mythikón (mythical) period. Varro dated this flood to 2137 BC, but Censorinus wrote in his De Die Natali (ch. xxi) that the diluvium of Ogyges occurred 1,600 years before the first Olympiad (776 BC), meaning 2376 BC. Castor of Rhodes provided another date for the start of the mythikón (mythical) period, 2123 BC. Censorinus recorded that the second period, the mythikón, stretched from the flood of Ogyges to the first Olympiad.
Censorinus quoted Varro in saying the second period (mythikón) lasted from 2137 to 776 BC or, if Censorinus' own dates are used, from 2376 BC to 776 BC or, if Castor's, from 2123 BC to 776 BC. Ovid, however, dated the start of the mythikón period to the reign of Inachus, whom he dated 400 or so years after the flood of Ogyges, meaning around 1900–1700 BC, but agreed with Varro that the mythikón ended during the first Olympiad (776 BC). See Ages of Man for more details about Ovid's chronology. Another ancient date for the start of the mythikón (mythical) period is preserved in Augustine's City of God (xviii.3), which dates it to 2050 BC. The final period according to Censorinus and Varro, the historikón (historical) era, ran from 776 BC (the first Olympiad) to their own time.
Eratosthenes and Apollodorus of Athens, however, pushed back the start of the historical period to the Trojan War, which they fixed at 1184 BC.
Very few ancient Greeks or Romans attempted to date the creation, or beginning of the ádelon (obscure) period. While all ancient sources (excluding Ovid) dated the end of this period and start of the mythical (mythikón) period to 2376–2050 BC, most did not claim to know exactly when the creation (ádelon period) began, a point Censorinus himself conceded.
Varro and Castor of Rhodes also wrote something very similar; however, some ancient Greeks and Romans attempted to calculate the date for the creation by using ancient sources or records of mythological figures. Since Inachus was dated 400 years after the flood of Ogyges, and Ogyges himself was considered a Titan or a primordial Autochthon "from earliest ages", some ancient Greeks or Romans dated the creation (beginning with Chaos or Gaia) only a few hundred years before Ogyges (2376–2050 BC). Most ancient Greeks, however, did not subscribe to such a literalist view of using mythology to attempt to date the creation; Hecataeus of Miletus was an early ancient Greek logographer who strongly criticised this method, while Ptolemy wrote of such an "immense period" of time before the historical period (776 BC), and thus believed in a much greater age for the creation.
Among the ancient Greek and Roman philosophers there were different opinions and traditions pertaining to the date of the creation. Some philosophers believed the Universe was eternal, and actually had no date of creation.
Zoroastrianism
Zoroastrianism involves a 12,000-year cosmogony and chronology, often divided into four ages as outlined in the Bundahishn. The first age lasted for 3,000 years and included the spiritual creation by Ahura Mazda, followed by the physical creation of 3,000 years when evil entered the world (see Angra Mainyu). During the 6,000th year, Zoroaster's Fravashi was created, followed by the prophet Zoroaster himself at the end of the 9th millennium. The 9,000th year marked the start of the fourth and last age. Modern Zoroastrians believe they are living currently in the final age. Since evil first entered the physical creation after the spiritual creation was complete, Zoroastrians maintain that for 9,000 years the world continues to be a battlefield between Ahura Mazda and Angra Mainyu, which will end during the 12,000th year, when the Saoshyants brings about the final renovation of the world to defeat evil.
Precisely dating the start of the 12,000-year cosmogony rests solely on the date Zoroaster is estimated to have been born. Since Zoroaster was born at the end of the 9th millennium (just before the 9,000th year), the date of creation can be calculated by counting back 8,900–9,000 years. The Persian Zoroastrian tradition places Zoroaster around the 7th or 6th century BC, since the Bundahishn (34.1–9) and the Book of Arda Viraf date Zoroaster 258 years before the era of Alexander the Great (356–323 BC), which dates Zoroaster to 614–581 BC. The 11th-century Persian Muslim scholar Abū Rayḥān al-Bīrūnī also dated Zoroaster 258 years before the era of Alexander (The Remaining Signs of Past Centuries, p. 17, l. 10, transl. Sachau). This date is also found in the historical account The Meadows of Gold (iv.107) written by the 9th-century Arab historian Al-Masudi. Other Arabic, Persian and Muslim sources place Zoroaster around the same date (600 BC). Therefore, if 8,900–9,000 years are added to about 600 BC, the date of creation comes to 9600–9500 BC. A 12,000-year chronology places the end date at around 2400–2500 AD, which is why modern Zoroastrians believe they are living in the final few hundred years of the last era. Other dates for Zoroaster, however, differ, and dates proposed for Zoroaster's birth range from 1750 to 500 BC.
Chinese
The ancient Chinese historian Xu Zheng, in his Three Five Historic Records, dated the creation of the world by Pangu 36,000 years (2 × 18,000) before the reign of the legendary Three Sovereigns and Five Emperors. The date of the Three Sovereigns is fixed at 3000–2700 BC, which therefore dates the creation to about 39,000 BC.
Maya
The Mesoamerican Long Count calendar dates the creation of the world of human beings to 11 August 3114 BC (in the most commonly accepted correlation) according to the proleptic Gregorian calendar, or Monday, 6 September 3114 BC according to the proleptic Julian calendar. There was also a previous creation that did not have a beginning date, but a date on Stela F from Quiriguá possibly refers to a time 24 trillion years in the past.
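Converting a Long Count date to an absolute day number is simple place-value arithmetic over mixed radices. The Python sketch below assumes the standard Goodman–Martinez–Thompson (GMT) correlation constant of 584,283, under which the era base (written 13.0.0.0.0) falls on Julian Day Number 584,283, i.e. 11 August 3114 BC in the proleptic Gregorian calendar; other correlation constants have been proposed.

```python
# Day values of the Long Count places: baktun, katun, tun, uinal, kin
UNITS = (144_000, 7_200, 360, 20, 1)
GMT_CORRELATION = 584_283  # JDN of the era base under the GMT correlation

def long_count_to_jdn(baktun, katun, tun, uinal, kin):
    days = sum(v * u for v, u in zip((baktun, katun, tun, uinal, kin), UNITS))
    return GMT_CORRELATION + days

print(long_count_to_jdn(0, 0, 0, 0, 0))   # 584283: the creation base date
print(long_count_to_jdn(9, 17, 0, 0, 0))  # a Classic-period example date
```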
Abrahamic religions
Genesis creation narrative
Within the biblical framework and chronology, various dates have been proposed for the date of creation since ancient times, to more recent periods. The Bible begins with the Book of Genesis, in which God creates the Earth, the rest of the Universe, and the Earth's plants and animals, including the first humans, in six days. A second narrative begins with the first human pair, Adam and Eve, and goes on to list many of their descendants, in many cases giving the ages at which they had children and died. If these events and ages are interpreted literally throughout and the genealogies are considered closed, it is possible to build up a chronology in which many of the events of the Old Testament are dated to an estimated number of years after creation. Some biblical scholars have gone further, attempting to harmonise this biblical chronology with that of recorded history, thus establishing a date for creation in a modern calendar. Since the biblical story lacks chronology for some periods, the duration of events has been subject to interpretation in many different ways, resulting in a variety of estimates of the date of creation.
Numerous efforts have been made to determine the biblical date of creation, yielding varying results. Besides differences in interpretation, the use of different versions of the Bible can also affect the result. Two dominant dates for creation using such models exist, about 5500 BC and about 4000 BC. These were calculated from the genealogies in two versions of the Bible, with most of the difference arising from two versions of Genesis. The older dates stem from the Greek Septuagint. The later dates are based on the Hebrew Masoretic Text. The patriarchs from Adam to Terah, the father of Abraham, were often 100 years older when they begat their named son in the Septuagint than they were in the Hebrew or the Vulgate (Genesis 5, 11). The net difference between the two genealogies of Genesis amounts to 1466 years (ignoring the "second year after the flood" ambiguity), which accounts for virtually all of the 1500-year difference between 5500 BC and 4000 BC. For example, the period from the creation to the Flood derives from the genealogical table of the ten patriarchs listed in Genesis 5 and Genesis 7:6, called the generations of Adam. According to the Masoretic Text, this period consists of 1,656 years, and Western Christian Bibles deriving from the Latin Vulgate also follow this dating. However, the Samaritan texts give an equivalent period of 1,307 years, and according to the Septuagint (Codex Alexandrinus, Elizabeth Bible) it is 2,262 years. James Ussher agrees with the dating until the birth of Abraham, which he argues took place when Terah was 130, and not 70 as is the direct reading of Genesis 11:26, thus adding 60 years to his chronology for events postdating Abraham.
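The Genesis 5 portion of such a computation can be reproduced directly. The Python fragment below sums the Masoretic ages at which each patriarch fathered the next, adding the 100 years between Noah fathering Shem (at age 500) and the Flood in Noah's 600th year; the Septuagint's larger ages yield the 2,262-year figure by the same method.

```python
# Masoretic age of each patriarch at the birth of the next (Genesis 5)
begetting_ages = {
    "Adam": 130, "Seth": 105, "Enosh": 90, "Kenan": 70,
    "Mahalalel": 65, "Jared": 162, "Enoch": 65,
    "Methuselah": 187, "Lamech": 182, "Noah": 500,
}
years_to_flood = sum(begetting_ages.values()) + 100  # Flood in Noah's 600th year
print(years_to_flood)  # 1656, the Masoretic creation-to-Flood span cited above
```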
Early Jewish estimations
The earliest post-exilic Jewish chronicle preserved in the Hebrew language, the Seder Olam Rabbah, compiled by Jose ben Halafta in 160 AD, dates the creation of the world to 3761 BC, while the later Seder Olam Zutta dates it to 4339 BC. The Hebrew calendar has traditionally dated the creation to 3761 BC, following the calculation fixed by Hillel II in the 4th century AD.
Septuagint
Many of the earliest Christians who used the Septuagint version of the Bible calculated creation as having occurred about 5500 BC, and Christians up to the Middle Ages continued to use this rough estimate: Clement of Alexandria (5592 BC), Theophilus of Antioch (5529 BC), Sextus Julius Africanus (5501 BC), Hippolytus of Rome (5500 BC), Panodorus of Alexandria (5493 BC), Maximus the Confessor (5493 BC), George Syncellus (5492 BC), Sulpicius Severus (5469 BC), Isidore of Seville (5336 BC) and Gregory of Tours (5200 BC). The Byzantine calendar has traditionally dated the creation of the world to September 1, 5509 BC.
The Chronicon of Eusebius (early 4th century) dated creation to 5228 BC while Jerome (c. 380, Constantinople) dated creation to 5199 BC. In the Roman Martyrology, the Proclamation of the Birth of Christ formerly used this date, as did the Irish Annals of the Four Masters.
Bede was one of the first to break away from the standard Septuagint date for the creation and in his work De Temporibus ("On Time") (completed in 703 AD) dated the creation to 18 March 3952 BC but was accused of heresy at the table of Bishop Wilfrid, because his chronology was contrary to accepted calculations of around 5500 BC.
Masoretic
After the Masoretic Text was published, however, dating creation around 4000 BC became common, and was received with wide support. Proposed calculations of the date of creation using the Masoretic Text from the 10th century to the 18th century include: Marianus Scotus (4192 BC), Henry Fynes Clinton (4138 BC), Henri Spondanus (4051 BC), Benedict Pereira (4021 BC), Louis Cappel (4005 BC), James Ussher (4004 BC), Augustin Calmet (4002 BC), Isaac Newton (3998 BC), Petavius (3984 BC), Theodore Bibliander (3980 BC), Johannes Kepler (April 27, 3977 BC, based on his book Mysterium Cosmographicum), Heinrich Bünting (3967 BC), Christen Sørensen Longomontanus (3966 BC), Melanchthon (3964 BC), Martin Luther (3961 BC), Cornelius Cornelii a Lapide (3961 BC), John Lightfoot (3960 BC), Joseph Justus Scaliger (3949 BC), Christoph Helvig (3947 BC), Gerardus Mercator (3928 BC), Matthieu Brouard (3927 BC), Benito Arias Montano (3849 BC), Andreas Helwig (3836 BC).
Among the Masoretic estimates and calculations for the date of creation, Archbishop Ussher's chronology, which dates the creation to 4004 BC, became the most accepted and popular, mainly because this specific date was attached to the King James Bible.
The Polish Christmas carol Wśród nocnej ciszy ("In the Silence of the Night") contains the line Cztery tysiące lat wyglądany ("Looked out for four thousand years"), referring to Jesus, which also corresponds to the date of creation based on the Masoretic Text.
Alfonsine tables
Alfonso X of Castile commissioned the Alfonsine tables, composed of astronomical data based on observation, from which the date of the creation has been calculated to be either 6984 BC or 6484 BC.
Other biblical estimations
In 1738, Alphonse des Vignoles said he had collected over 200 different estimates, ranging from 3483 BC to 6984 BC. John Clark Ridpath attributes these extreme values respectively to Yom-Tov Lipmann-Muhlhausen and Regiomontanus.
Christian Charles Josias Bunsen in the 19th century dated the creation to 20,000 BC.
Tarleton Perry Crawford dated the creation to 12,500 BC.
Harold Camping dated the creation to 11,013 BC.
See also
Chronology of the Bible
Omphalos hypothesis
Old Earth creationism
Relationship between religion and science
Sefer HaTemunah
Young Earth creationism
References
Citations
Sources
Further reading
External links
Estimates of the age of the Earth -- when it was created by God or coalesced out of stellar matter, B. A. Robinson
Bishop Ussher Dates the World: 4004 BC
Creationism
Religious cosmologies | Dating creation | [
"Biology"
] | 4,426 | [
"Creationism",
"Biology theories",
"Obsolete biology theories"
] |
183,275 | https://en.wikipedia.org/wiki/Exploratory%20engineering | Exploratory engineering is a term coined by K. Eric Drexler to describe the process of designing and analyzing detailed hypothetical models of systems that are not feasible with current technologies or methods, but do seem to be clearly within the bounds of what science considers to be possible within the narrowly defined scope of operation of the hypothetical system model. It usually results in paper or video prototypes, or (more likely nowadays) computer simulations that are as convincing as possible to those that know the relevant science, given the lack of experimental confirmation. By analogy with protoscience, it might be considered a form of protoengineering.
Usage
Due to the difficulty and necessity of anticipating results in such areas as genetic modification, climate change, molecular engineering, and megascale engineering, parallel fields such as bioethics, climate engineering and hypothetical molecular nanotechnology sometimes emerge to develop and examine hypotheses, define limits, and express potential solutions to the anticipated technological problems. Proponents of exploratory engineering contend that it is an appropriate initial approach to such problems.
Engineering is concerned with the design of a solution to a practical problem. A scientist may ask "why?" and proceed to research the answer to the question. By contrast, engineers want to know how to solve a problem, and how to implement that solution. Exploratory engineering often posits that a highly detailed solution exists, and explores the putative characteristics of such a solution, while holding in abeyance the question of how to implement that solution. If a point can be reached where the attempted implementation of the solution is addressed using the principles of engineering physics, the activity transitions from protoengineering to actual engineering, and results in success or failure to implement the design.
Requirements
Unlike the scientific method which relies on peer reviewed experiments which attempt to prove or disprove a falsifiable hypothesis, exploratory engineering relies on peer review, simulation and other methods employed by scientists, but applies them to some hypothetical artifact, a specific and detailed hypothesized design or process, rather than to an abstract model or theory. Because of the inherent lack of experimental falsifiability in exploratory engineering, its practitioners must take particular care to avoid falling into practices analogous to cargo cult science, pseudoscience, and pathological science.
Criticism
Exploratory engineering has its critics, who dismiss the activity as mere armchair speculation, albeit with computer assist. A boundary which would take exploratory engineering out of the realm of mere speculation and define it as a realistic design activity is often indiscernible to such critics, and at the same time is often inexpressible by the proponents of exploratory engineering. While both critics and proponents often agree that much of the highly detailed simulation effort in the field may never result in a physical device, the dichotomy between the two groups is exemplified by the situation in which proponents of molecular nanotechnology contend that many complicated molecular machinery designs will be realizable after an unspecified "assembler breakthrough" envisioned by K. Eric Drexler, while critics contend that this attitude embodies wishful thinking equivalent to that in the famous Sidney Harris cartoon "And then a miracle occurs", published in American Scientist magazine. In summary the critics contend that a hypothetical model which is both self-consistent and consistent with the laws of science concerning its operation, in the absence of a path to build the device modeled, provides no evidence that the desired device can be built. Proponents contend that there are so many potential ways to build the desired device that surely at least one of those ways will not display a critical flaw preventing the device from being built.
Science fiction
Both proponents and critics often point to science fiction stories as the origin of exploratory engineering. On the positive side of the science fiction ledger, the ocean-going submarine, the telecommunications satellite, and other inventions were anticipated in such stories before they could be built. On the negative side of the same ledger, other science fiction devices such as the space elevator may be forever impossible because of basic strength of materials issues or due to other difficulties, either anticipated or unanticipated.
See also
Climate engineering
Macro-engineering
Megascale engineering
Planetary engineering
References
Eric Drexler, "Physical Laws and the Future of Nanotechnology", Inaugural Lecture of the Oxford Martin Program, February 2012. https://www.youtube.com/watch?v=zQHA-UaUAe0
Engineering disciplines
Nanotechnology | Exploratory engineering | [
"Materials_science",
"Technology",
"Engineering"
] | 924 | [
"Nanotechnology",
"Materials science",
"nan",
"Exploratory engineering"
] |
183,290 | https://en.wikipedia.org/wiki/Life%20extension | Life extension is the concept of extending the human lifespan, either modestly through improvements in medicine or dramatically by increasing the maximum lifespan beyond its generally settled biological limit of around 125 years. Several researchers in the area, along with "life extensionists", "immortalists", or "longevists" (those who wish to achieve longer lives themselves), postulate that future breakthroughs in tissue rejuvenation, stem cells, regenerative medicine, molecular repair, gene therapy, pharmaceuticals, and organ replacement (such as with artificial organs or xenotransplantations) will eventually enable humans to have indefinite lifespans through complete rejuvenation to a healthy youthful condition (agerasia). The ethical ramifications, if life extension becomes a possibility, are debated by bioethicists.
The sale of purported anti-aging products such as supplements and hormone replacement is a lucrative global industry. For example, the industry that promotes the use of hormones as a treatment to slow or reverse the aging process generated about $50 billion of revenue a year in the US market in 2009. The use of such hormone products has not been proven to be effective or safe.
Average life expectancy and lifespan
During the process of aging, an organism accumulates damage to its macromolecules, cells, tissues, and organs. Specifically, aging is characterized as and thought to be caused by "genomic instability, telomere attrition, epigenetic alterations, loss of proteostasis, deregulated nutrient sensing, mitochondrial dysfunction, cellular senescence, stem cell exhaustion, and altered intercellular communication." Oxidation damage to cellular contents caused by free radicals is believed to contribute to aging as well.
The longest documented human lifespan is 122 years 164 days, the case of Jeanne Calment, who according to records was born in 1875 and died in 1997, whereas the maximum lifespan of a wildtype mouse, commonly used as a model in research on aging, is about three years. Genetic differences between humans and mice that may account for these different aging rates include differences in efficiency of DNA repair, antioxidant defenses, energy metabolism, proteostasis maintenance, and recycling mechanisms such as autophagy.
The average life expectancy in a population is lowered by infant and child mortality, which are frequently linked to infectious diseases or nutrition problems. Later in life, vulnerability to accidents and age-related chronic disease such as cancer or cardiovascular disease play an increasing role in mortality. Extension of life expectancy and lifespan can often be achieved by access to improved medical care, vaccinations, good diet, exercise, and avoidance of hazards such as smoking.
Maximum lifespan is determined by the rate of aging for a species inherent in its genes and by environmental factors. Widely recognized methods of extending maximum lifespan in model organisms such as nematodes, fruit flies, and mice include caloric restriction, gene manipulation, and administration of pharmaceuticals. Another technique uses evolutionary pressures such as breeding from only older members or altering levels of extrinsic mortality.
Some animals such as hydra, planarian flatworms, and certain sponges, corals, and jellyfish do not die of old age and exhibit potential immortality.
History
The extension of life has been a desire of humanity and a mainstay motif of scientific pursuits and ideas throughout history, from the Sumerian Epic of Gilgamesh and the Egyptian Smith medical papyrus, all the way through the Taoists, Ayurveda practitioners, alchemists, hygienists such as Luigi Cornaro, Johann Cohausen and Christoph Wilhelm Hufeland, and philosophers such as Francis Bacon, René Descartes, Benjamin Franklin and Nicolas Condorcet. However, the beginning of the modern period in this endeavor can be traced to the late 19th and early 20th centuries, the so-called "fin-de-siècle" (end of the century) period, denoted as an "end of an epoch" and characterized by the rise of scientific optimism and therapeutic activism, entailing the pursuit of life extension (or life-extensionism). Among the foremost researchers of life extension in this period were the Nobel Prize-winning biologist Elie Metchnikoff (1845-1916), the author of the cell theory of immunity and vice director of the Institut Pasteur in Paris, and Charles-Édouard Brown-Séquard (1817-1894), the president of the French Biological Society and one of the founders of modern endocrinology.
Sociologist James Hughes claims that science has been tied to a cultural narrative of conquering death since the Age of Enlightenment. He cites Francis Bacon (1561–1626) as an advocate of using science and reason to extend human life, noting Bacon's novel New Atlantis, wherein scientists worked toward delaying aging and prolonging life. Robert Boyle (1627–1691), founding member of the Royal Society, also hoped that science would make substantial progress with life extension, according to Hughes, and proposed such experiments as "to replace the blood of the old with the blood of the young". Biologist Alexis Carrel (1873–1944) was inspired by a belief in indefinite human lifespan that he developed after experimenting with cells, says Hughes.
Contemporary
Regulatory and legal struggles between the Food and Drug Administration (FDA) and the Life Extension organization included seizure of merchandise and court action. In 1991, Saul Kent and Bill Faloon, the principals of the organization, were jailed for four hours and were released on $850,000 bond each. After 11 years of legal battles, Kent and Faloon convinced the US Attorney's Office to dismiss all criminal indictments brought against them by the FDA.
In 2003, Doubleday published "The Immortal Cell: One Scientist's Quest to Solve the Mystery of Human Aging," by Michael D. West. West emphasised the potential role of embryonic stem cells in life extension.
Other modern life extensionists include writer Gennady Stolyarov, who insists that death is "the enemy of us all, to be fought with medicine, science, and technology"; transhumanist philosopher Zoltan Istvan, who proposes that the "transhumanist must safeguard one's own existence above all else"; futurist George Dvorsky, who considers aging to be a problem that desperately needs to be solved; and recording artist Steve Aoki, who has been called "one of the most prolific campaigners for life extension".
Scientific research
In 1991, the American Academy of Anti-Aging Medicine (A4M) was formed. The American Board of Medical Specialties recognizes neither anti-aging medicine nor the A4M's professional standing.
In 2003, Aubrey de Grey and David Gobel formed the Methuselah Foundation, which gives financial grants to anti-aging research projects. In 2009, de Grey and several others founded the SENS Research Foundation, a California-based scientific research organization which conducts research into aging and funds other anti-aging research projects at various universities. In 2013, Google announced Calico, a new company based in San Francisco that will harness new technologies to increase scientific understanding of the biology of aging. It is led by Arthur D. Levinson, and its research team includes scientists such as Hal V. Barron, David Botstein, and Cynthia Kenyon. In 2014, biologist Craig Venter founded Human Longevity Inc., a company dedicated to scientific research to end aging through genomics and cell therapy. They received funding with the goal of compiling a comprehensive human genotype, microbiome, and phenotype database.
Aside from private initiatives, aging research is being conducted in university laboratories, and includes universities such as Harvard and UCLA. University researchers have made a number of breakthroughs in extending the lives of mice and insects by reversing certain aspects of aging.
Research
Theoretically, extension of maximum lifespan in humans could be achieved by reducing the rate of aging damage by periodic replacement of damaged tissues, molecular repair or rejuvenation of deteriorated cells and tissues, reversal of harmful epigenetic changes, or the enhancement of enzyme telomerase activity.
Research geared towards life extension strategies in various organisms is currently under way at a number of academic and private institutions. Since 2009, investigators have found ways to increase the lifespan of nematode worms and yeast by 10-fold; the record in nematodes was achieved through genetic engineering and the extension in yeast by a combination of genetic engineering and caloric restriction. A 2009 review of longevity research noted: "Extrapolation from worms to mammals is risky at best, and it cannot be assumed that interventions will result in comparable life extension factors. Longevity gains from dietary restriction, or from mutations studied previously, yield smaller benefits to Drosophila than to nematodes, and smaller still to mammals. This is not unexpected, since mammals have evolved to live many times the worm's lifespan, and humans live nearly twice as long as the next longest-lived primate. From an evolutionary perspective, mammals and their ancestors have already undergone several hundred million years of natural selection favoring traits that could directly or indirectly favor increased longevity, and may thus have already settled on gene sequences that promote lifespan. Moreover, the very notion of a "life-extension factor" that could apply across taxa presumes a linear response rarely seen in biology."
Anti-aging drugs
There are numerous chemicals intended to slow the aging process under study in animal models. One type of research is related to the observed effects of a calorie restriction (CR) diet, which has been shown to extend lifespan in some animals. Based on that research, there have been attempts to develop drugs that will have the same effect on the aging process as a CR diet, which are known as caloric restriction mimetic drugs, such as rapamycin and metformin.
Sirtuin activating polyphenols, such as resveratrol and pterostilbene, and flavonoids, such as quercetin and fisetin, as well as oleic acid are dietary supplements that have also been studied in this context. Other common supplements with less clear biological pathways to target aging include lipoic acid, senolytics, and coenzyme Q10.
While agents such as these have some limited laboratory evidence of efficacy in animals, there are no studies to date in humans for drugs that may promote life extension, mainly because research investment remains at a low level, and regulatory standards are high. Aging is not recognized as a preventable condition by governments, indicating there is no clear pathway to approval of anti-aging medications. Further, anti-aging drug candidates are under constant review by regulatory authorities like the US Food and Drug Administration, which stated in 2023 that "no medication has been proven to slow or reverse the aging process."
Nanotechnology
Future advances in nanomedicine could give rise to life extension through the repair of many processes thought to be responsible for aging. K. Eric Drexler, one of the founders of nanotechnology, postulated cell repair machines, including ones operating within cells and utilizing as yet hypothetical molecular computers, in his 1986 book Engines of Creation. Raymond Kurzweil, a futurist and transhumanist, stated in his book The Singularity Is Near that he believes that advanced medical nanorobotics could completely remedy the effects of aging by 2030. According to Richard Feynman, it was his former graduate student and collaborator Albert Hibbs who originally suggested to him (circa 1959) the idea of a medical use for Feynman's theoretical nanomachines (see biological machine). Hibbs suggested that certain repair machines might one day be reduced in size to the point that it would, in theory, be possible to (as Feynman put it) "swallow the doctor". The idea was incorporated into Feynman's 1959 essay There's Plenty of Room at the Bottom.
Cyborgs
Replacement of biological organs (susceptible to diseases) with mechanical ones could extend life. This is the goal of the 2045 Initiative.
Cryonics
Cryonics is the low-temperature freezing (usually at −196 °C) of a human corpse, with the hope that resuscitation may be possible in the future. It is regarded with skepticism within the mainstream scientific community and has been characterized as quackery.
Strategies for engineered negligible senescence
Another proposed life extension technology aims to combine existing and predicted future biochemical and genetic techniques. SENS proposes that rejuvenation may be obtained by removing aging damage via the use of stem cells and tissue engineering, telomere-lengthening machinery, allotopic expression of mitochondrial proteins, targeted ablation of cells, immunotherapeutic clearance, and novel lysosomal hydrolases.
While some biogerontologists find these ideas "worthy of discussion", others contend that the alleged benefits are too speculative given the current state of technology, referring to it as "fantasy rather than science".
Genetic editing
Genome editing, in which nucleic acid polymers are delivered as a drug and are either expressed as proteins, interfere with the expression of proteins, or correct genetic mutations, has been proposed as a future strategy to prevent aging.
CRISPR/Cas9
CRISPR/Cas9 edits genes by precisely cutting DNA and then harnessing natural DNA repair processes to modify the gene in the desired manner. The system has two components: the Cas9 enzyme and a guide RNA. A large array of genetic modifications have been found to increase lifespan in model organisms such as yeast, nematode worms, fruit flies, and mice. As of 2013, the longest extension of life caused by a single gene manipulation was roughly 50% in mice and 10-fold in nematode worms.
In July 2020, scientists, using public biological data on 1.75 million people with known lifespans, identified 10 genomic loci which appear to intrinsically influence healthspan, lifespan, and longevity (half of which had not previously been reported at genome-wide significance, and most of which are associated with cardiovascular disease), and identified haem metabolism as a promising candidate for further research within the field. Their study suggests that high levels of iron in the blood likely reduce, and genes involved in metabolising iron likely increase, healthy years of life in humans. The same month, other scientists reported that yeast cells of the same genetic material and within the same environment age in two distinct ways, described a biomolecular mechanism that can determine which process dominates during aging, and genetically engineered a novel aging route with substantially extended lifespan.
Fooling genes
In The Selfish Gene, Richard Dawkins describes an approach to life-extension that involves "fooling genes" into thinking the body is young. Dawkins attributes inspiration for this idea to Peter Medawar. The basic idea is that our bodies are composed of genes that activate throughout our lifetimes, some when we are young and others when we are older. Presumably, these genes are activated by environmental factors, and the changes caused by these genes activating can be lethal. It is a statistical certainty that we possess more lethal genes that activate in later life than in early life. Therefore, to extend life, we should be able to prevent these genes from switching on, and we should be able to do so by "identifying changes in the internal chemical environment of a body that take place during aging... and by simulating the superficial chemical properties of a young body".
Cloning and body part replacement
Some life extensionists suggest that therapeutic cloning and stem cell research could one day provide a way to generate cells, body parts, or even entire bodies (generally referred to as reproductive cloning) that would be genetically identical to a prospective patient. In 2008, the US Department of Defense announced a program to research the possibility of growing human body parts on mice. Complex biological structures, such as mammalian joints and limbs, have not yet been replicated. Dog and primate brain transplantation experiments were conducted in the mid-20th century but failed due to rejection and the inability to restore nerve connections. As of 2006, the implantation of bio-engineered bladders grown from patients' own cells has proven to be a viable treatment for bladder disease. Proponents of body part replacement and cloning contend that the required biotechnologies are likely to appear earlier than other life-extension technologies.
The use of human stem cells, particularly embryonic stem cells, is controversial. Opponents' objections generally are based on interpretations of religious teachings or ethical considerations. Proponents of stem cell research point out that cells are routinely formed and destroyed in a variety of contexts. Use of stem cells taken from the umbilical cord or parts of the adult body may not provoke controversy.
The controversies over cloning are similar, except general public opinion in most countries stands in opposition to reproductive cloning. Some proponents of therapeutic cloning predict the production of whole bodies, lacking consciousness, for eventual brain transplantation.
Ethics and politics
Scientific controversy
Some critics dispute the portrayal of aging as a disease. For example, Leonard Hayflick, who determined that fibroblasts are limited to around 50 cell divisions, reasons that aging is an unavoidable consequence of entropy. Hayflick and fellow biogerontologists Jay Olshansky and Bruce Carnes have strongly criticized the anti-aging industry in response to what they see as unscrupulous profiteering from the sale of unproven anti-aging supplements.
Consumer motivations
Research by Sobh and Martin (2011) suggests that people buy anti-aging products to obtain a hoped-for self (e.g., keeping youthful skin) or to avoid a feared self (e.g., looking old). The research shows that when consumers pursue a hoped-for self, it is expectations of success that most strongly drive their motivation to use the product. By contrast, when consumers seek to avoid a feared self, perceived failure of the product is more motivating than perceived success.
Political parties
Though many scientists state that life extension and radical life extension are possible, there are still no international or national programs focused on radical life extension. There are political forces working both for and against life extension. By 2012, Longevity political parties had been started in Russia, the United States, Israel, and the Netherlands. They aimed to provide political support to radical life extension research and technologies, to ensure the fastest possible and at the same time smooth transition of society to the next step (life without aging and with radical life extension), and to provide access to such technologies to most currently living people.
Silicon Valley
Some tech innovators and Silicon Valley entrepreneurs have invested heavily into anti-aging research. These include Jeff Bezos (founder of Amazon), Larry Ellison (founder of Oracle), Peter Thiel (former PayPal CEO), Larry Page (co-founder of Google), Peter Diamandis, Sam Altman (CEO of OpenAI, an investor in Retro Biosciences), Brian Armstrong (founder of Coinbase and NewLimit), and Bryan Johnson (founder of Kernel).
Commentators
Leon Kass (chairman of the US President's Council on Bioethics from 2001 to 2005) has questioned whether potential exacerbation of overpopulation problems would make life extension unethical, and has stated his opposition to life extension.
John Harris, former editor-in-chief of the Journal of Medical Ethics, argues that as long as life is worth living, according to the person himself, we have a powerful moral imperative to save the life and thus to develop and offer life extension therapies to those who want them.
Transhumanist philosopher Nick Bostrom has argued that any technological advances in life extension must be equitably distributed and not restricted to a privileged few. In an extended metaphor entitled "The Fable of the Dragon-Tyrant", Bostrom envisions death as a monstrous dragon who demands human sacrifices. In the fable, after a lengthy debate between those who believe the dragon is a fact of life and those who believe the dragon can and should be destroyed, the dragon is finally killed. Bostrom argues that political inaction allowed many preventable human deaths to occur.
Overpopulation concerns
Controversy about life extension is due to fear of overpopulation and possible effects on society. Biogerontologist Aubrey de Grey counters the overpopulation critique by pointing out that the therapy could postpone or eliminate menopause, allowing women to space out their pregnancies over more years and thus decreasing the yearly population growth rate. Moreover, the philosopher and futurist Max More argues that, given that the worldwide population growth rate is slowing down and is projected to eventually stabilize and begin falling, superlongevity would be unlikely to contribute to overpopulation.
Opinion polls
A Spring 2013 Pew Research poll in the United States found that 38% of Americans would want life extension treatments, and 56% would reject them. However, it also found that 68% believed most people would want such treatments and that only 4% consider an "ideal lifespan" to be more than 120 years. The median "ideal lifespan" was 91 years of age, and the majority of the public (63%) viewed medical advances aimed at prolonging life as generally good. 41% of Americans believed that radical life extension (RLE) would be good for society, while 51% said they believed it would be bad for society. One possible reason why 56% of Americans claim they would reject life extension treatments is the cultural perception that living longer would result in a longer period of decrepitude, and that the elderly in current society are unhealthy.
Religious people are no more likely to oppose life extension than the unaffiliated, though some variation exists between religious denominations.
Aging as a disease
Most mainstream medical organizations and practitioners do not consider aging to be a disease. Biologist David Sinclair says: "I don't see aging as a disease, but as a collection of quite predictable diseases caused by the deterioration of the body." The two main arguments used are that aging is both inevitable and universal while diseases are not. However, not everyone agrees. Harry R. Moody, director of academic affairs for AARP, notes that what is normal and what is disease strongly depend on a historical context. David Gems, assistant director of the Institute of Healthy Ageing, argues that aging should be viewed as a disease. In response to the argument from the universality of aging, Gems notes that it is as misleading as arguing that Basenji are not dogs because they do not bark. Because of the universality of aging he calls it a "special sort of disease". Robert M. Perlman coined the terms "aging syndrome" and "disease complex" in 1954 to describe aging.
The discussion of whether aging should be viewed as a disease or not has important implications. One view is that this would stimulate pharmaceutical companies to develop life extension therapies, and that in the United States of America it would also increase the regulation of the anti-aging market by the Food and Drug Administration (FDA). Anti-aging currently falls under the regulations for cosmetic medicine, which are less strict than those for drugs.
Beliefs and methods
Senolytics and prolongevity drugs
Senolytics eliminate senescent cells, whereas senomorphics (with candidates such as apigenin, everolimus and rapamycin) modulate properties of senescent cells without eliminating them, suppressing phenotypes of senescence, including the senescence-associated secretory phenotype (SASP). Senomorphic effects may be one major effect mechanism of a range of prolongevity drug candidates. Such candidates are, however, typically studied not for just one mechanism but for multiple. There are biological databases of prolongevity drug candidates under research, as well as of potential gene/protein targets. These are enhanced by longitudinal cohort studies, electronic health records, computational (drug) screening methods, computational biomarker-discovery methods and computational biodata-interpretation/personalized medicine methods.
Besides rapamycin and senolytics, the drug-repurposing candidates studied most extensively include metformin, acarbose, spermidine and NAD+ enhancers.
Many prolongevity drugs are synthetic alternatives or potential complements to existing nutraceuticals, such as the various sirtuin-activating compounds under investigation like SRT2104. In some cases pharmaceutical administration is combined with that of nutraceuticals, as in the case of glycine combined with NAC. Studies are often structured around, or thematize, specific prolongevity targets, listing both nutraceuticals and pharmaceuticals (together or separately), such as FOXO3 activators.
Researchers are also exploring ways to mitigate side-effects from such substances (most notably, perhaps, rapamycin and its derivatives), for example via protocols of intermittent administration, and have called for research that helps determine optimal treatment schedules (including timing) in general.
Diets and supplements
Vitamins and antioxidants
The free-radical theory of aging suggests that antioxidant supplements might extend human life. Reviews, however, have found that use of vitamin A (as β-carotene) and vitamin E supplements can possibly increase mortality. Other reviews have found no relationship between vitamin E and other vitamins and mortality. Vitamin D supplementation at various dosages is being investigated in trials, and there is also research into GlyNAC.
Complications
Complications of antioxidant supplementation (especially continuous high dosages far above the RDA) include the fact that reactive oxygen species (ROS), which are mitigated by antioxidants, "have been found to be physiologically vital for signal transduction, gene regulation, and redox regulation, among others, implying that their complete elimination would be harmful". In particular, one of multiple ways they can be detrimental is by inhibiting adaptation to exercise such as muscle hypertrophy (e.g. during dedicated periods of caloric surplus). There is also research into stimulating, activating or fueling endogenous antioxidant generation, in particular e.g. by the nutraceutical glycine and the pharmaceutical NAC. Antioxidants can change the oxidation status of different tissues, targets or sites, each with potentially different implications, especially at different concentrations. A review suggests mitochondria have a hormetic response to ROS, whereby low oxidative damage can be beneficial.
Dietary restriction
As of 2021, there is no clinical evidence that any dietary restriction practice contributes to human longevity.
Healthy diet
Research suggests that increasing adherence to Mediterranean diet patterns is associated with a reduction in total and cause-specific mortality, extending health- and lifespan. Research is identifying the key beneficial components of the Mediterranean diet. Studies suggest dietary changes are a factor in national rises in lifespan.
Optimal diet
Approaches to develop optimal diets for health- and lifespan (or "longevity diets") include:
modifying the Mediterranean diet as the baseline via nutrition science. For instance, via:
an (additional) increase in plant-based foods alongside further restriction of meat intake, since meat reduction is typically (or can be) healthy,
keeping alcohol consumption of any type at a minimum; conventional Mediterranean diets include alcohol consumption (i.e. of wine), which is under research due to data suggesting negative long-term brain impacts even at low or moderate consumption levels.
fully replacing refined grains; some guidelines of Mediterranean diets do not clarify or include the principle of whole-grain consumption instead of refined grains, although whole grains are included in Mediterranean diets.
Other approaches
Further advanced biosciences-based approaches include:
Genetic and epigenetic alterations: Human genetic enhancement for pro-longevity and protective genes – see genetics of aging
Cellular reprogramming: in vivo reprogramming to complement or augment human regenerative capacity and rejuvenate or replace cells
Epigenetic reprogramming: early-stage research about rejuvenating/repairing epigenetic machinery
Stem-cell interventions: "Increasing the number and quality of stem cells and activate regenerative signals"
Nanomedicine: early-stage research of in vivo pro-longevity nanotechnology
Tissue engineering: of tissues and organs (see also: xenotransplantation and artificial organ)
Endogenous circulating biomolecules: Proteins in blood from young animals have shown some pro-longevity potential in animal studies (e.g. via transfer of blood or plasma, and of plasma proteins). Moreover, exerkines (signalling biomolecules released during or after exercise) have also shown promising results. Exerkines include myokines. Extracellular vesicles were shown to be secreted concomitantly with exerkines and are also investigated. (See also: body fluid and cerebrospinal fluid)
Personalized interventions: future studies may tailor and investigate personalized medicine-type interventions. For instance, the effects of interventions, or their dosages, may vary with age and/or genome. A review suggests that the fields of precision medicine and geroscience will have to interact closely (see also: combination therapy)
Peptides: such as MOTS-c released by mitochondria
Mitochondria modulation: early-stage research indicates mitochondrial interventions such as mitochondrial transplantation may have potential to be efficacious (See also: mitochondrial theory of ageing)
Within the field
There is a need and research into the development of aging biomarkers such as the epigenetic clock "to assess the ageing process and the efficacy of interventions to bypass the need for large-scale longitudinal studies". Such biomarkers may also include in vivo brain imaging.
Reviews sometimes include structured tables that provide systematic overviews of intervention/drug candidates with a review calling for integrating "current knowledge with multi-omics, health records, and drug safety data to predict drugs that can improve health in late life" and listing major outstanding questions. Biological databases of prolongevity drug candidates under research as well as of potential gene/protein targets include GenAge, DrugAge and Geroprotectors.
A review has pointed out that the approach of "'epidemiological' comparison of how a low versus a high consumption of an isolated macronutrient and its association with health and mortality may not only fail to identify protective or detrimental nutrition patterns but may lead to misleading interpretations". It proposes a multi-pillar approach and summarizes findings towards constructing refined longevity diets that consider multiple bodily systems and are dynamic and personalized at least by age. Epidemiological-type observational studies included in meta-analyses should, according to the study, at least be complemented by "(1) basic research focused on lifespan and healthspan, (2) carefully controlled clinical trials, and (3) studies of individuals and populations with record longevity".
Hormone treatment
The anti-aging industry offers several hormone therapies. Some of these have been criticized for possible dangers and a lack of proven effect. For example, the American Medical Association has been critical of some anti-aging hormone therapies.
While growth hormone (GH) decreases with age, the evidence for use of growth hormone as an anti-aging therapy is mixed and based mostly on animal studies. There are mixed reports on whether GH or IGF-1 modulates the aging process in humans, and on whether the direction of its effect is positive or negative.
Klotho and exerkines like irisin are being investigated for potential pro-longevity therapies.
Lifestyle factors
Loneliness/isolation, social life and support, exercise/physical activity (partly via neurobiological effects and increased NAD+ levels), psychological characteristics/personality (possibly highly indirectly), sleep duration, circadian rhythms (patterns of sleep, drug-administration and feeding), type of leisure activities, not smoking, altruistic emotions and behaviors, subjective well-being, mood and stress (including via heat shock protein) are investigated as potential (modulatable) factors of life extension.
Healthy lifestyle practices and healthy diet have been suggested as "first-line function-preserving strategies, with pharmacological agents, including existing and new pharmaceuticals and novel 'nutraceutical' compounds, serving as potential complementary approaches".
Societal strategies
Collectively, addressing common causes of death could extend lifespans of populations and humanity overall. For instance, a 2020 study indicates that the global mean loss of life expectancy (LLE) from air pollution in 2015 was 2.9 years, substantially more than, for example, 0.3 years from all forms of direct violence, albeit a significant fraction of the LLE (a measure similar to years of potential life lost) is considered to be unavoidable.
Regular screening and doctor visits have been suggested as a lifestyle-societal intervention. (See also: medical test and biomarker)
Health policy and changes to standard healthcare could support the adoption of the field's conclusions – a review suggests that the longevity diet would be a "valuable complement to standard healthcare and that, taken as a preventative measure, it could aid in avoiding morbidity, sustaining health into advanced age" as a form of preventive healthcare.
It has been suggested that in terms of healthy diets, Mediterranean-style diets could be promoted by countries for ensuring healthy-by-default choices ("to ensure the healthiest choice is the easiest choice") and with highly effective measures including dietary education, food checklists and recipes that are "simple, palatable, and affordable".
A review suggests that "targeting the aging process per se may be a far more effective approach to prevent or delay aging-associated pathologies than treatments specifically targeted to particular clinical conditions".
Low ambient temperature
Low ambient temperature, a physical factor affecting free radical levels, has been identified as a treatment producing exceptional lifespan increases in Drosophila melanogaster and other organisms.
Young blood conspiracy theory
Conspiracy theorists claim that some clinics currently offer injection of blood products from young donors. The alleged benefits of the treatment, none of which have been demonstrated in a proper study, include a longer life, darker hair, better memory, better sleep, curing heart diseases, diabetes and Alzheimer's disease. The approach is based on parabiosis studies such as those Irina Conboy has done on mice, but Conboy says young blood does not reverse aging (even in mice) and that those who offer those treatments have misunderstood her research. Neuroscientist Tony Wyss-Coray, who also studied blood exchanges on mice as recently as 2014, said people offering those treatments are "basically abusing people's trust" and that young blood treatments are "the scientific equivalent of fake news". The treatment appeared in HBO's Silicon Valley fiction series.
Two clinics in California, run by Jesse Karmazin and David C. Wright, offer $8,000 injections of plasma extracted from the blood of young people. Karmazin has not published in any peer-reviewed journal and his current study does not use a control group.
Microbiome alterations
Fecal microbiota transplantation and probiotics are being investigated as means for life and healthspan extension.
Mind uploading
One hypothetical future strategy that, as some suggest, "eliminates" the complications related to a physical body involves the copying or transferring (e.g. by progressively replacing neurons with transistors) of a conscious mind from a biological brain to a non-biological computer system or computational device. The basic idea is to scan the structure of a particular brain in detail, and then construct a software model of it that is so faithful to the original that, when run on appropriate hardware, it will behave in essentially the same way as the original brain. Whether or not an exact copy of one's mind constitutes actual life extension is a matter of debate.
However, critics argue that the uploaded mind would simply be a clone and not a true continuation of a person's consciousness.
Some scientists believe that the dead may one day be "resurrected" through simulation technology.
See also
Advanced glycation end product
Aging brain
Biological immortality
Centenarian
DNA damage theory of aging
Human enhancement
Immortality
Maximum lifespan
Senescence
Slow aging
Supercentenarian
References
Further reading
External links
Life extension on Wikiversity
Ageing
Population
Transhumanism | Life extension | [
"Chemistry",
"Technology",
"Engineering",
"Biology"
] | 7,455 | [
"Anti-aging substances",
"Genetic engineering",
"Transhumanism",
"Senescence",
"Ethics of science and technology"
] |
183,324 | https://en.wikipedia.org/wiki/Thermodynamic%20activity | In thermodynamics, activity (symbol a) is a measure of the "effective concentration" of a species in a mixture, in the sense that the species' chemical potential depends on the activity of a real solution in the same way that it would depend on concentration for an ideal solution. The term "activity" in this sense was coined by the American chemist Gilbert N. Lewis in 1907.
By convention, activity is treated as a dimensionless quantity, although its value depends on customary choices of standard state for the species. The activity of pure substances in condensed phases (solids and liquids) is taken as a = 1. Activity depends on temperature, pressure and composition of the mixture, among other things. For gases, the activity is the effective partial pressure, and is usually referred to as fugacity.
The difference between activity and other measures of concentration arises because the interactions between different types of molecules in non-ideal gases or solutions are different from interactions between the same types of molecules. The activity of an ion is particularly influenced by its surroundings.
Equilibrium constants should be defined by activities but, in practice, are often defined by concentrations instead. The same is often true of equations for reaction rates. However, there are circumstances where the activity and the concentration are significantly different and, as such, it is not valid to approximate with concentrations where activities are required. Two examples serve to illustrate this point:
In a solution of potassium hydrogen iodate KH(IO3)2 at 0.02 M the activity is 40% lower than the calculated hydrogen ion concentration, resulting in a much higher pH than expected.
When a 0.1 M hydrochloric acid solution containing methyl green indicator is added to a 5 M solution of magnesium chloride, the color of the indicator changes from green to yellow—indicating increasing acidity—when in fact the acid has been diluted. Although at low ionic strength (< 0.1 M) the activity coefficient approaches unity, this coefficient can actually increase with ionic strength in a high ionic strength regime. For hydrochloric acid solutions, the minimum is around 0.4 M.
Definition
The relative activity of a species i, denoted a_i, is defined as:

a_i = e^((μ_i − μ°_i) / (RT))

where μ_i is the (molar) chemical potential of the species under the conditions of interest, μ°_i is the (molar) chemical potential of that species under some defined set of standard conditions, R is the gas constant, T is the thermodynamic temperature and e is the exponential constant.

Alternatively, this equation can be written as:

μ_i = μ°_i + RT ln(a_i)
In general, the activity depends on any factor that alters the chemical potential. Such factors may include: concentration, temperature, pressure, interactions between chemical species, electric fields, etc. Depending on the circumstances, some of these factors, in particular concentration and interactions, may be more important than others.
The activity depends on the choice of standard state such that changing the standard state will also change the activity. This means that activity is a relative term that describes how "active" a compound is compared to when it is under the standard state conditions. In principle, the choice of standard state is arbitrary; however, it is often chosen out of mathematical or experimental convenience. Alternatively, it is also possible to define an "absolute activity" (i.e., the fugacity in statistical mechanics), λ_i, which is written as:

λ_i = e^(μ_i / (RT))

Note that this definition corresponds to setting as standard state the solution of μ°_i = 0, if the latter exists.
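As a numerical illustration of the defining relation, the following minimal Python sketch (the function names and example values are illustrative assumptions, not part of the article) converts between a chemical-potential difference and an activity:

import math

R = 8.314462618  # gas constant, J/(mol K)

def activity(mu, mu_standard, T):
    # Relative activity: a = exp((mu - mu_standard) / (R T))
    return math.exp((mu - mu_standard) / (R * T))

def chemical_potential(a, mu_standard, T):
    # Inverse relation: mu = mu_standard + R T ln(a)
    return mu_standard + R * T * math.log(a)

T = 298.15  # K
# A species 5 kJ/mol below its standard-state chemical potential:
a = activity(mu=-5000.0, mu_standard=0.0, T=T)
print(round(a, 3))                               # ~0.133, i.e. a < 1
print(round(chemical_potential(a, 0.0, T), 1))   # recovers -5000.0

The round trip simply restates that, once a standard state is fixed, activity and chemical potential carry the same information.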
Activity coefficient
The activity coefficient γ, which is also a dimensionless quantity, relates the activity to a measured mole fraction x_i (or y_i in the gas phase), molality b_i, mass fraction w_i, molar concentration (molarity) c_i or mass concentration ρ_i:

a_i = γ_x,i x_i    (mole fraction basis)
a_i = γ_b,i b_i / b°    (molality basis)
a_i = γ_c,i c_i / c°    (molar concentration basis)

with analogous relations for the mass fraction and mass concentration bases. The division by the standard molality b° (usually 1 mol/kg) or the standard molar concentration c° (usually 1 mol/L) is necessary to ensure that both the activity and the activity coefficient are dimensionless, as is conventional.
The activity depends on the chosen standard state and composition scale; for instance, in the dilute limit it approaches the mole fraction, mass fraction, or numerical value of molarity, all of which are different. However, the activity coefficients are similar.
When the activity coefficient is close to 1, the substance shows almost ideal behaviour according to Henry's law (but not necessarily in the sense of an ideal solution). In these cases, the activity can be substituted with the appropriate dimensionless measure of composition x_i, b_i/b° or c_i/c°. It is also possible to define an activity coefficient in terms of Raoult's law: the International Union of Pure and Applied Chemistry (IUPAC) recommends the symbol f for this activity coefficient, although this should not be confused with fugacity.
Standard states
Gases
In most laboratory situations, the difference in behaviour between a real gas and an ideal gas is dependent only on the pressure and the temperature, not on the presence of any other gases. At a given temperature, the "effective" pressure of a gas i is given by its fugacity f_i: this may be higher or lower than its mechanical pressure. By historical convention, fugacities have the dimension of pressure, so the dimensionless activity is given by:

a_i = f_i / p° = φ_i y_i p / p°

where φ_i is the dimensionless fugacity coefficient of the species, y_i is its mole fraction in the gaseous mixture (y = 1 for a pure gas) and p is the total pressure. The value p° is the standard pressure: it may be equal to 1 atm (101.325 kPa) or 1 bar (100 kPa) depending on the source of data, and should always be quoted.
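As a worked example of the gas-phase convention, the sketch below (hypothetical input values, assuming a standard pressure of 1 bar) evaluates a = φ y p / p°:

def gas_activity(phi, y, p, p_standard=1.0e5):
    # Activity of a gas-mixture component: a = phi * y * p / p_standard
    # phi: fugacity coefficient (dimensionless), y: mole fraction,
    # p and p_standard in the same units (here pascals, p_standard = 1 bar)
    return phi * y * p / p_standard

# A slightly non-ideal gas (phi = 0.95) at 20 mol% in a 2 bar mixture:
print(gas_activity(phi=0.95, y=0.20, p=2.0e5))  # 0.38

For an ideal gas (φ = 1) this reduces to the partial pressure divided by the standard pressure, as noted in the "Use" section below.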
Mixtures in general
The most convenient way of expressing the composition of a generic mixture is by using the mole fractions x_i (written y_i in the gas phase) of the different components (or chemical species: atoms or molecules) present in the system, where

x_i = n_i / n,    with n = Σ_i n_i

with n_i, the number of moles of the component i, and n, the total number of moles of all the different components present in the mixture.
The standard state of each component in the mixture is taken to be the pure substance, i.e. the pure substance has an activity of one. When activity coefficients are used, they are usually defined in terms of Raoult's law,

a_i = γ_i x_i

where γ_i is the Raoult's law activity coefficient: an activity coefficient of one indicates ideal behaviour according to Raoult's law.
Dilute solutions (non-ionic)
A solute in dilute solution usually follows Henry's law rather than Raoult's law, and it is more usual to express the composition of the solution in terms of the molar concentration c (in mol/L) or the molality b (in mol/kg) of the solute rather than in mole fractions. The standard state of a dilute solution is a hypothetical solution of concentration c° = 1 mol/L (or molality b° = 1 mol/kg) which shows ideal behaviour (also referred to as "infinite-dilution" behaviour). The standard state, and hence the activity, depends on which measure of composition is used. Molalities are often preferred as the volumes of non-ideal mixtures are not strictly additive and are also temperature-dependent: molalities do not depend on volume, whereas molar concentrations do.

The activity of the solute is given by:

a_b = γ_b b / b°    (molality basis)
a_c = γ_c c / c°    (molar concentration basis)
Ionic solutions
When the solute undergoes ionic dissociation in solution (for example a salt), the system becomes decidedly non-ideal and we need to take the dissociation process into consideration. One can define activities for the cations and anions separately (a_+ and a_−).
In a liquid solution the activity coefficient of a given ion (e.g. Ca2+) isn't measurable because it is experimentally impossible to independently measure the electrochemical potential of an ion in solution. (One cannot add cations without putting in anions at the same time). Therefore, one introduces the notions of
mean ionic activity: a_±^ν = a_+^(ν_+) a_−^(ν_−)
mean ionic molality: b_±^ν = b_+^(ν_+) b_−^(ν_−)
mean ionic activity coefficient: γ_±^ν = γ_+^(ν_+) γ_−^(ν_−)
where ν = ν_+ + ν_−, with ν_+ and ν_− the stoichiometric coefficients involved in the ionic dissociation process
Even though γ_+ and γ_− cannot be determined separately, γ_± is a measurable quantity that can also be predicted for sufficiently dilute systems using Debye–Hückel theory. For electrolyte solutions at higher concentrations, Debye–Hückel theory needs to be extended and replaced, e.g., by a Pitzer electrolyte solution model (see external links below for examples). For the activity of a strong ionic solute (complete dissociation) we can write:

a_2 = a_±^ν = (γ_± b_± / b°)^ν
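To make the mean-ionic bookkeeping concrete, the sketch below (illustrative code; the CaCl2 molality and the assumed γ± are made-up inputs, not measured data) computes b± and the activity of a fully dissociated salt:

def mean_ionic_molality(b, nu_plus, nu_minus):
    # b_+/- from the nominal electrolyte molality b (mol/kg):
    # b_+/-^nu = (nu_+ b)^nu_+ * (nu_- b)^nu_-, with nu = nu_+ + nu_-
    nu = nu_plus + nu_minus
    return ((nu_plus * b) ** nu_plus * (nu_minus * b) ** nu_minus) ** (1.0 / nu)

def electrolyte_activity(b, nu_plus, nu_minus, gamma_pm, b_standard=1.0):
    # a = (gamma_+/- * b_+/- / b_standard)^nu for complete dissociation
    nu = nu_plus + nu_minus
    b_pm = mean_ionic_molality(b, nu_plus, nu_minus)
    return (gamma_pm * b_pm / b_standard) ** nu

# CaCl2 -> Ca2+ + 2 Cl-: nu_+ = 1, nu_- = 2
print(round(mean_ionic_molality(0.1, 1, 2), 4))        # ~0.1587 mol/kg
print(round(electrolyte_activity(0.1, 1, 2, 0.5), 6))  # ~0.0005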
Measurement
The most direct way of measuring the activity of a volatile species is to measure its equilibrium partial vapor pressure. For water as solvent, the water activity aw is the equilibrated relative humidity. For non-volatile components, such as sucrose or sodium chloride, this approach will not work since they do not have measurable vapor pressures at most temperatures. However, in such cases it is possible to measure the vapor pressure of the solvent instead. Using the Gibbs–Duhem relation it is possible to translate the change in solvent vapor pressures with concentration into activities for the solute.
The simplest way of determining how the activity of a component depends on pressure is by measurement of densities of solution, knowing that real solutions have deviations from the additivity of (molar) volumes of pure components compared to the (molar) volume of the solution. This involves the use of partial molar volumes, which measure the change in chemical potential with respect to pressure.
Another way to determine the activity of a species is through the manipulation of colligative properties, specifically freezing point depression. Using freezing point depression techniques, it is possible to calculate the activity of a weak acid from the relation

b′ = b(1 + a)

where b′ is the total equilibrium molality of solute determined by any colligative property measurement (in this case ΔT_fus), b is the nominal molality obtained from titration and a is the activity of the species.
There are also electrochemical methods that allow the determination of activity and its coefficient.
The value of the mean ionic activity coefficient of ions in solution can also be estimated with the Debye–Hückel equation, the Davies equation or the Pitzer equations.
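For very dilute solutions, the Debye–Hückel limiting law mentioned above can be evaluated directly. Here is a minimal sketch (assuming water at 25 °C, for which the constant A ≈ 0.509 (kg/mol)^0.5; the NaCl case is an illustrative input):

import math

A_DH = 0.509  # Debye–Hückel constant for water at 25 °C, (kg/mol)^0.5

def ionic_strength(ions):
    # I = 0.5 * sum(b_i * z_i^2), with molalities b_i in mol/kg
    return 0.5 * sum(b * z * z for b, z in ions)

def gamma_pm_limiting(z_plus, z_minus, I):
    # Limiting law: log10(gamma_+/-) = -A |z_+ z_-| sqrt(I)
    # Reasonable only for very dilute solutions (roughly I < 0.01 mol/kg)
    return 10.0 ** (-A_DH * abs(z_plus * z_minus) * math.sqrt(I))

# 0.005 mol/kg NaCl: Na+ (z = +1) and Cl- (z = -1), each at 0.005 mol/kg
I = ionic_strength([(0.005, +1), (0.005, -1)])
print(round(gamma_pm_limiting(+1, -1, I), 3))  # ~0.92

The Davies and Pitzer equations mentioned above extend this kind of estimate to higher ionic strengths.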
Single ion activity measurability revisited
The prevailing view that single ion activities are unmeasurable, or perhaps even physically meaningless, has its roots in the work of Edward A. Guggenheim in the late 1920s. However, chemists have not given up the idea of single ion activities. For example, pH is defined as the negative logarithm of the hydrogen ion activity. By implication, if the prevailing view on the physical meaning and measurability of single ion activities is correct it relegates pH to the category of thermodynamically unmeasurable quantities. For this reason the International Union of Pure and Applied Chemistry (IUPAC) states that the activity-based definition of pH is a notional definition only and further states that the establishment of primary pH standards requires the application of the concept of 'primary method of measurement' tied to the Harned cell. Nevertheless, the concept of single ion activities continues to be discussed in the literature, and at least one author purports to define single ion activities in terms of purely thermodynamic quantities. The same author also proposes a method of measuring single ion activity coefficients based on purely thermodynamic processes. A different approach has a similar objective.
Use
Chemical activities should be used to define chemical potentials, where the chemical potential depends on the temperature T, pressure p and the activity a_i according to the formula:

μ_i = μ°_i + RT ln(a_i)

where R is the gas constant and μ°_i is the value of μ_i under standard conditions. Note that the choice of concentration scale affects both the activity and the standard state chemical potential, which is especially important when the reference state is the infinite dilution of a solute in a solvent. Chemical potential has units of joules per mole (J/mol), or energy per amount of matter. Chemical potential can be used to characterize the specific Gibbs free energy changes occurring in chemical reactions or other transformations.
Formulae involving activities can be simplified by considering that:
For a chemical solution:
the solvent has an activity of unity (only a valid approximation for rather dilute solutions)
At a low concentration, the activity of a solute can be approximated to the ratio of its concentration over the standard concentration:

a_i ≈ c_i / c°

Therefore, it is approximately equal to the numerical value of its concentration.
For a mix of gases at low pressure, the activity is equal to the ratio of the partial pressure of the gas over the standard pressure:

a_i = p_i / p°

Therefore, it is equal to the partial pressure in atmospheres (or bars), compared to a standard pressure of 1 atmosphere (or 1 bar).
For a solid body, a uniform, single species solid has an activity of unity at standard conditions. The same thing holds for a pure liquid.
The latter follows from any definition based on Raoult's law, because if we let the solute concentration go to zero, the vapor pressure of the solvent p will go to that of the pure solvent, p*. Thus its activity a = p/p* will go to unity. This means that if during a reaction in dilute solution more solvent is generated (the reaction produces water, for example) we can typically set its activity to unity.
Solid and liquid activities do not depend very strongly on pressure because their molar volumes are typically small. Graphite at 100 bars has an activity of only 1.01 if we choose the pure solid at a standard pressure of p° = 100 kPa as the standard state. Only at very high pressures do we need to worry about such changes. Activity expressed in terms of pressure is called fugacity.
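That weak pressure dependence can be checked numerically. For an (assumed) incompressible solid, a = exp(V_m (p − p°)/(RT)); the sketch below uses a handbook-style molar volume for graphite of about 5.3 cm³/mol and yields an activity within about two percent of unity at 100 bar, consistent with the figure quoted above:

import math

R = 8.314462618  # J/(mol K)

def solid_activity(V_m, p, p_standard=1.0e5, T=298.15):
    # Activity of an incompressible pure solid at pressure p:
    # a = exp(V_m (p - p_standard) / (R T)), V_m in m^3/mol, p in Pa
    return math.exp(V_m * (p - p_standard) / (R * T))

# Graphite: V_m ~ 5.3e-6 m^3/mol; p = 100 bar = 1.0e7 Pa
print(round(solid_activity(5.3e-6, 1.0e7), 3))  # ~1.021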
Example values
Measured values of activity coefficients of sodium chloride in aqueous solution illustrate the non-ideality of real solutions. In an ideal solution, these values would all be unity. The deviations tend to become larger with increasing molality and temperature, but with some exceptions.
See also
Fugacity, the equivalent of activity for partial pressure
Chemical equilibrium
Electrochemical potential
Excess chemical potential
Partial molar property
Thermodynamic equilibrium
Thermal expansion
Virial expansion
Water activity
Non-random two-liquid model (NRTL model) – phase equilibrium calculations
UNIQUAC model – phase equilibrium calculations
References
External links
Equivalences among different forms of activity coefficients and chemical potentials
Calculate activity coefficients of common inorganic electrolytes and their mixtures
AIOMFAC online-model: calculator for activity coefficients of inorganic ions, water, and organic compounds in aqueous solutions and multicomponent mixtures with organic compounds.
Dimensionless numbers of chemistry
Physical chemistry
Thermodynamic properties | Thermodynamic activity | [
"Physics",
"Chemistry",
"Mathematics"
] | 2,932 | [
"Thermodynamic properties",
"Applied and interdisciplinary physics",
"Physical quantities",
"Quantity",
"Thermodynamics",
"nan",
"Physical chemistry",
"Dimensionless numbers of chemistry"
] |
183,330 | https://en.wikipedia.org/wiki/Chassis | A chassis (plural chassis, from French châssis) is the load-bearing framework of a manufactured object, which structurally supports the object in its construction and function. An example of a chassis is a vehicle frame, the underpart of a motor vehicle, on which the body is mounted; if the running gear such as wheels and transmission, and sometimes even the driver's seat, are included, then the assembly is described as a rolling chassis.
Examples of use
Vehicles
In the case of vehicles, the term rolling chassis means the frame plus the "running gear" like engine, transmission, drive shaft, differential, and suspension. The "rolling chassis" description originated from assembly production when an integrated chassis "rolled on its own tires" just before truck bodies were bolted to the frames near the end of the line. An underbody (sometimes referred to as "coachwork"), which is usually not necessary for the integrity of the structure, is built on the chassis to complete the vehicle.
For commercial vehicles, a rolling chassis consists of an assembly of all the essential parts of a truck without the body to be ready for operation on the road. A car chassis will be different from one for commercial vehicles because of the heavier loads and constant work use. Commercial vehicle manufacturers sell "chassis only", "cowl and chassis", as well as "chassis cab" versions that can be outfitted with specialized bodies. These include motor homes, fire engines, ambulances, box trucks, etc.
In particular applications, such as school buses, a government agency like National Highway Traffic Safety Administration (NHTSA) in the U.S. defines the design standards of chassis and body conversions.
An armoured fighting vehicle's hull serves as the chassis and comprises the bottom part of the AFV that includes the tracks, engine, driver's seat, and crew compartment. This describes the lower hull, although common usage might include the upper hull to mean the AFV without the turret. The hull serves as a basis for platforms on tanks, armoured personnel carriers, combat engineering vehicles, etc.
In the intermodal trucking industry, a chassis is a type of semi-trailer onto which a cargo container can be mounted for road transport.
Electronics
In an electronic device (such as a computer), the chassis consists of a frame or other internal supporting structure on which the circuit boards and other electronics are mounted.
In some designs, such as older ENIAC sets, the chassis is mounted inside a heavy, rigid cabinet, while in other designs such as modern computer cases, lightweight covers or panels are attached to the chassis.
The combination of chassis and outer covering is sometimes called an enclosure.
Firearms
In firearms, the chassis is a bedding frame on long guns such as rifles to replace the traditionally wooden stock, for the purpose of better accurizing the gun. The chassis is usually made from hard metallic material such as aluminium alloy (and less frequently stainless steel, titanium alloy or recently magnesium alloy) due to metals having superior stiffness and compressive strength compared with wood or synthetic polymer, which are commonly used in conventional rifle stocks.
The chassis essentially functions as a more extensive pillar bedding, providing a metal-on-metal bearing surface that has reduced shifting potential under the stress of recoil. A barreled action bedded into a metal chassis would theoretically operate more consistently during repeated firing, resulting in better precision. With the increasing availability of CNC machining, chassis have become more affordable and sophisticated, and have gained increasing popularity, as these types of chassis can be expanded to accommodate customizable "furniture" (buttstock, pistol grip, etc.) and rail interface systems that provide mounting points for various accessories.
See also
Airframe
Backbone chassis
Body-on-frame
Bogie
Coachbuilder
Locomotive frame
Monocoque, construction from a structural shell instead of a structural frame
Undercarriage (disambiguation)
Underframe
References
External links
Automotive chassis types
Vehicle technology
Computer enclosure
Carriages and mountings | Chassis | [
"Engineering"
] | 804 | [
"Vehicle technology",
"Mechanical engineering by discipline"
] |
183,336 | https://en.wikipedia.org/wiki/Glyn%20Moody | Glyn Moody is a London-based technology writer. He is best known for his book Rebel Code: Linux and the Open Source Revolution (2001). It describes the evolution and significance of the free software and open source movements with interviews of hackers.
His writings have appeared in Wired, Computer Weekly, Linux Journal, and Ars Technica. In 2009, he criticised the software education policy of the government of José Luís Rodríguez Zapatero on his blog.
Selective bibliography
Walled Culture: How Big Content Uses Technology and the Law to Lock Down Culture and Keep Creators Poor (Paperback or ebook - 2022)
Digital Code of Life: How Bioinformatics is Revolutionizing Science, Medicine, and Business by Glyn Moody (Hardcover - Feb 3, 2004)
Rebel Code: Linux and the Open Source Revolution by Glyn Moody (Paperback - Jul 15, 2002)
The Internet with Windows by Glyn Moody (Paperback - Jan 15, 1996)
References
External links
open... (blog)
Open Enterprise blog
Walled Culture
British technology writers
Living people
Year of birth missing (living people) | Glyn Moody | [
"Technology"
] | 219 | [
"Computing stubs",
"Computer specialist stubs"
] |
183,350 | https://en.wikipedia.org/wiki/Ambidexterity | Ambidexterity is the ability to use both the right and left hand equally well. When referring to objects, the term indicates that the object is equally suitable for right-handed and left-handed people. When referring to humans, it indicates that a person has no marked preference for the use of the right or left hand.
Only about one percent of people are naturally ambidextrous, which equates to about 80,000,000 people in the world today. In modern times, it is common to find some people considered ambidextrous who were originally left-handed and who learned to be ambidextrous, either by choice or as a result of training in schools or in jobs where right-handedness is often emphasized or required. Since many everyday devices such as can openers and scissors are asymmetrical and designed for right-handed people, many left-handers learn to use them right-handedly due to the rarity or lack of left-handed models. Thus, left-handed people are more likely to develop motor skills in their non-dominant hand than right-handed people.
Etymology
The word "ambidextrous" is derived from the Latin roots ambi-, meaning "both", and dexter, meaning "right" or "favorable". Thus, ambidextrous is literally "both right" or "both favorable". The term ambidexter in English was originally used in a legal sense of jurors who accepted bribes from both parties for their verdict.
Writing
Some people can write with both hands. Famous examples include Albert Einstein, Benjamin Franklin, Nikola Tesla, James A. Garfield, and Leonardo da Vinci.
In India's Singrauli district there is a unique ambidextrous school named Veena Vadini School in Budhela village, where students are taught to write simultaneously with both hands.
Sports
Baseball
Ambidexterity is highly prized in the sport of baseball. "Switch hitting" is the most common phenomenon, and is highly prized because a batter usually has a higher statistical chance of successfully hitting the baseball when it is thrown by an opposite-handed pitcher. Therefore, an ambidextrous hitter can bat from whichever side is more advantageous to them in that situation. Pete Rose, the record holder for most hits in Major League Baseball, was a switch hitter.
Switch pitchers, comparatively rare in contrast to switch hitters, also exist. Tony Mullane won 284 games in the 19th century. Elton Chamberlain and Larry Corcoran were also notable ambidextrous pitchers. In the 20th century, Greg A. Harris was the only major league pitcher to pitch with both his left and his right arm. A natural right-hander, by 1986 he could throw well enough with his left hand that he felt capable of pitching with either hand in a game. Harris was not allowed to throw left-handed in a regular-season game until September 1995 in the penultimate game of his career. Against the Cincinnati Reds in the ninth inning, Harris (then a member of the Montreal Expos) retired Reggie Sanders pitching right-handed, then switched to his left hand for the next two hitters, Hal Morris and Ed Taubensee, who both batted left-handed. Harris walked Morris but got Taubensee to ground out. He then went back to his right hand to retire Bret Boone to end the inning.
In the 21st century there is only one major league pitcher, Pat Venditte of the Seattle Mariners, who regularly pitches with both arms. Venditte became the 21st century's first switch pitcher in the major leagues with his debut on June 5, 2015, against the Boston Red Sox, pitching two innings, allowing only one hit and recording five outs right-handed and one out left-handed. During his career, an eponymous "Venditte Rule" was created restricting the ability of a pitcher to change arms in the middle of an at-bat.
Billy Wagner was a natural right-handed pitcher in his youth, but after breaking his throwing arm twice, he taught himself how to use his left arm by throwing nothing but fastballs against a barn wall. He became a dominant left-handed relief pitcher, most known for his 100+ mph fastball. In his 1999 season, Wagner captured the National League Relief Man of the Year Award as a Houston Astro.
St. Louis Cardinals pitcher Brett Cecil is naturally right-handed, but starting from a very early age, threw with his left. As such, he writes and performs most tasks with the right side of his body, but throws with his left.
Basketball
In basketball a player may choose to make a pass or shot with the weaker hand. NBA stars LeBron James, Larry Bird, Kyrie Irving, Carlos Boozer, David Lee, John Wall, Derrick Rose, Chandler Parsons, Andrew Bogut, John Henson, Michael Beasley, and Jerryd Bayless are ambidextrous players, as was Kobe Bryant. Bogut and Henson are both stronger in the post with their left-handed hook shot than they are with their natural right hands. Brothers Marc and Pau Gasol can make hook shots with either hand, while the right hand is dominant for each. Bob Cousy, a Boston Celtics legend, was forced to play with his left hand in high school when he injured his right hand, thus making him effectively ambidextrous.
Mike Conley shoots left-handed, but has preferred to shoot floaters right-handed, as he does everything else right-handed off the court. Ben Simmons and Luke Kennard are also natural right-handers shooting left-handed. Tristan Thompson is a natural left-hander, and was a left-handed shooter, but has shot right-handed since the 2013–2014 season; he does still perform left-handed hook shots more often. Los Angeles Lakers center DeAndre Jordan, who is left-handed, shoots with his left hand but has been known to dunk with his right hand, spin clockwise in his 360 dunks, and shoot right-handed hook shots more accurately and from further out. Charlotte Hornets power forward Miles Bridges is a left-handed shooter; however, he dunks the ball and blocks shots more frequently with his right hand. Former Los Angeles Lakers center Roy Hibbert shoots his hook shots equally well with either hand. Former Oklahoma City Thunder left-handed point guard Derek Fisher used to dunk with his right hand in his early years. Candace Parker, forward for the Chicago Sky, also has equal dominance with either hand.
Los Angeles Lakers superstar Kobe Bryant shot with either hand; although his right hand was dominant, due to an injury to the right hand he was forced to shoot with his left. Paul George, Tracy McGrady and Vince Carter are all noted to be right-handed, but rotate clockwise for dunks; Carter is also able to spin counterclockwise, as he did during high school. McGrady also spins anti-clockwise for his baseline dunks. Larry Bird, LeBron James, Paul Millsap, Russell Westbrook, Danny Ainge and Gary Payton shoot right-handed, but do almost everything left-handed off the court; Bird once had a game in which he shot only left-handed running hook shots, cross passes and layups. Ronnie Price, however, has a tendency to dunk with his left hand, even though he is a right-handed shooter. Josh McRoberts is known to be a left-handed shooter but does everything else, such as his famous dunks, with his right hand. Ivica Zubac is a right-handed shooter, but can shoot hook shots with both hands, and is more accurate with his left-handed hooks. Greg Monroe is also a left-handed shooter but does right-handed jump hooks and everything else right-handed off the court.
Trevor Booker is left-handed when shooting a basketball but writes with his right hand. Ben Simmons shoots jumpers and free throws left-handed, but does everything else right-handed, including dunking, throwing long passes and writing. He also shoots more right-handed non-jumpers (layups, floaters and hook shots).
Board sports
In skateboarding, being able to skate successfully with not only one's dominant foot forward but also the less dominant one is called "switch skating", or "skating goofy", and is a prized ability. To illustrate the stances further: "regular" is left shoulder and foot towards the front of the board, and the opposite (right shoulder and foot towards the front) is referred to as "goofy". These terms hold true for surfing and snowboarding as well. With skateboarding, whether one pushes with the front or back foot determines whether one is considered regular v. regular-mongo or goofy v. goofy-mongo. The ability to ride both regular and goofy is considered "switch stance". Notable switch skateboarders include Rodney Mullen, Eric Koston, Guy Mariano, Paul Rodriguez Jr., Mike Mo Capaldi, and Bob Burnquist. Similarly, surfers who ride equally well in either stance are said to be surfing "switch". Snowboarding at the advanced level also requires the ability to ride equally well in either stance.
Combat sports
In combat sports fighters may choose to face their opponent with either the left shoulder forward in a right-handed stance ("orthodox") or the right shoulder forward in a left-handed stance ("southpaw"); thus a degree of cross dominance is useful. In boxing, Manny Pacquiao has a southpaw stance in the ring even though he is ambidextrous outside the ring. Also, in mixed martial arts, many naturally left-handed strikers like Lyoto Machida and Anderson Silva will switch stances in order to counter an opponent's strikes or takedown attempts and stay standing. Additionally, some fighters choose to fight in a southpaw stance despite their dominant hand being their right, one such fighter being Vasyl Lomachenko. This is done because it gives access to a strong and precise jab from the lead hand, arguably the most important strike in boxing for setting up combinations and interrupting an opponent's attacks. Bruce Lee also practiced this method of fighting with his dominant hand forward. Left-handed fighters such as Oscar De La Hoya, Miguel Cotto, Andre Ward, and Gerry Cooney fought in an orthodox stance. This made their left hooks their most powerful weapons, along with enhancing the strength of their jab.
Cricket
In cricket, it is also beneficial to be able to use both arms. Ambidextrous fielders can make one-handed catches or throws with either hand. Sachin Tendulkar uses his left hand for writing, but bats and bowls with his right hand; the same is true of Ajinkya Rahane, Kane Williamson, and Shane Watson. There are many players who are naturally right-handed but bat left, and vice versa. Sourav Ganguly and Thisara Perera use their right hands for writing and bowl with the right hand, too, but bat with their left. Due to injuries, players may also switch arms for fielding. Zaheer Khan bowls left-arm fast-medium but bats right-handed. Phillip Hughes batted, bowled, and fielded left-handed before a shoulder injury. Australian batsman George Bailey, also after sustaining an injury, taught himself to throw with his weaker left arm; he is now often seen throughout matches switching between arms as he throws the ball. See also reverse sweep and switch hitting. David Warner batted right-handed in high school, and has practiced right-handed as well, though he is normally a left-handed switch-hitter. Alastair Cook, Jimmy Anderson, Stuart Broad, Ben Stokes, Eoin Morgan, Ben Dunk, Adam Gilchrist, Matthew Hayden, Travis Head, Chris Gayle, Gautam Gambhir, Rishabh Pant, Ishan Kishan, Devdutt Padikkal, Yashasvi Jaiswal, Smriti Mandhana and Kagiso Rabada are natural right-handers, but bat left-handed.
Michael Clarke is naturally a left handed person who bowls left handed but bats right handed.
Akshay Karnewar is an ambidextrous bowler. Originally, he only bowled with his right hand, but since he does everything else with his left hand, he was taught to bowl left-handed as well; he needs to signal to the umpire when he switches hands while bowling, to allow the field to change. He is a left-handed batsman. As an off-spinner and left-arm orthodox spinner, the ball will always spin towards the batsman (OB vs. RHB; SLO vs. LHB), or away from opposite-handed batsmen, which is the predominant role of switch-handed spinners.
Sri Lankan Kusal Perera started his cricket as a right-hand batsman, until he changed to left hand to mimic his favourite cricketer Sanath Jayasuriya. Jayasuriya bats and bowls left-handed but writes with his right hand. Another Sri Lankan, Kamindu Mendis, is also a handy ambidextrous bowler: he can bowl orthodox left-arm spin as well as right-arm offspin. Yasir Jan, however, is a fast bowler both right- and left-handed, and tops 140 km/h with both arms, his right arm being faster.
Jofra Archer warms up with slow orthodox left-arm spin, and Jos Buttler practiced left-handed as a club cricketer.
Cue sports
In cue sports, players can reach farther across the table if they are able to play with either hand, since the cue must either be placed on the left or the right side of the body. English snooker player Ronnie O'Sullivan is a rarity amongst top snooker professionals, in that he is able to play to world-class standard with either hand. While he lacks power in his left arm, his ability to alternate hands allows him to take shots that would otherwise require awkward cueing or the use of a rest. When he first displayed this ability in the 1996 World Championship against the Canadian player Alain Robidoux, Robidoux accused him of disrespect. O'Sullivan responded that he played better with his left hand than Robidoux could with his right. O'Sullivan was summoned to a disciplinary hearing in response to Robidoux's formal complaint, where he had to prove that he could play to a high level with his left hand.
Figure skating
In figure skating, most skaters who are right-handed spin and jump to the left, and vice versa for left-handed individuals, though it is also partly down to habit, as with ballet dancers. Olympic Champion figure skater John Curry notably performed his jumps in one direction (anti-clockwise) while spinning predominantly in the other. Very few skaters have the ability to perform jumps and spins in both directions, and it is now considered a "difficult variation" in spins under the ISU Judging System to rotate in the non-dominant direction. Michelle Kwan used an opposite-rotating camel spin in some of her programs as a signature move. No point bonus exists for opposite-direction jumps or bi-directional combination jumps, despite their being much harder to perfect. No skater performs a jump sequence from clockwise to anti-clockwise, or vice versa (a sequence requires a change of edge, whereas a combination is maintained on the same edge).
Football codes
American football
In American football, it is especially advantageous to be able to use either arm to perform various tasks. Ambidextrous receivers can make one-handed catches with either hand; linemen can hold their shoulders square and produce an equal amount of power with both arms; and punters can handle a bad snap and roll out and punt with either leg, limiting the chance of a block. Naturally right-handed quarterbacks may have to perform left-handed passes to avoid sacks. Chris Jones is cross-dominant. Although he is a left-footed punter, he throws with his right. Chris Hanson was dual-footed, able to punt with either foot.
Golf
Some players find cross-dominance advantageous in golf, especially if a left-handed player utilizes right-handed clubs. Having more precise coordination with the left hand is believed to allow better-controlled, and stronger drives. Mac O'Grady was a touring pro who played right-handed, yet could play "scratch" (no handicap) golf left-handed. He lobbied the USGA for years to be certified as an amateur "lefty" and a pro "righty" to no avail. Although not ambidextrous, Phil Mickelson and Mike Weir are both right-handers who golf left-handed; Ben Hogan was the opposite, being a natural left-hander who played golf right-handed, as is Cristie Kerr. This is known as cross-dominance or mixed-handedness.
Hockey
Ice hockey players may shoot from the left or right side of the body. For the most part, right-handed players shoot left and, likewise, most left-handed players shoot right as the player will often wield the stick one-handed. The dominant hand is typically placed on the top of the stick to allow for better stickhandling and control of the puck. Gordie Howe was one of few players capable of doing both, although this was at a time when the blade of the stick was not curved.
Another ice hockey goaltender Bill Durnan, had the ability to catch the puck with either hand. He won the Vezina Trophy, then for the National Hockey League's goalie with the fewest goals allowed six times out of only seven seasons. He had developed this ability playing for church-league teams in Toronto and Montreal to make up for his poor lateral movement. He wore custom gloves that permitted him to hold his stick with either hand. Most goaltenders nowadays choose to catch with their non-dominant hand.
Field hockey players are forced to play right-handed, as the rules of the game denote that the ball can only be struck with the flat side of the stick. Only one player, Laeeq Ahmed of the Pakistan national hockey team, played with an unorthodox grip, left hand below and right hand at the top of the stick, with full command; he played for the national team from 1991 to 1992. Perhaps to avoid confusing referees, there are no left-handed sticks. In floorball, as in ice hockey, right-handed players shoot left and, likewise, most left-handed players shoot right, as the player will often wield the stick one-handed. Floorball goalkeepers do not use a stick, so they have two glove hands and act much like a soccer goalkeeper, but with an ice hockey helmet. When they venture out of the goal box, they act just like an outfield soccer player.
Lacrosse
In field lacrosse, which is more popular in the United States, it is extremely advantageous to be able to use both hands, as players can play on both sides of the field and are harder to defend against. Usually in field lacrosse, all players except goalies, but especially offensive players, are expected to be able to catch and throw with their weak hand. However, in box lacrosse, which is more popular in Canada, players often only use their dominant hand, like in hockey.
Martial arts
The traditional martial arts tend to feature a larger number of practitioners who have intentionally developed ambidexterity to a high degree, compared to athletes in combat sports. This is because unlike sports, which have structured rules and common player preferences, traditional martial arts are intended for situations such as self-defense, in which a wider array of physical challenges may occur.
Some arts and schools practice all or most techniques and movements with both sides, while others emphasize that some techniques should only be trained on the right or the left (though both sides tend to eventually receive nearly equal attention). This may be for a number of reasons. Some of these arts rely on the tendency of right-handed people to move differently with the left side than with the right, and attempt to take advantage of this. Similarly, certain weapons are more often carried on one side. For instance, most weapons in ancient China were wielded primarily with the right hand and on the right side; this habit has carried on to the practice of those weapons in modern times. As an example, in xingyiquan, most schools that teach spear-fighting only practice on the right side, although much of the rest of the art is ambidextrous in practice.
Professional wrestling
Shawn Michaels is ambidextrous. He typically kicks with his right leg in Sweet Chin Music, but uses either arm for his signature elbow drop, depending on the position.
Racing
In professional sports car racing, drivers who participate in various events in both the United States and Europe will sometimes encounter machines with the steering wheel mounted on different sides of the car. While steering ability is largely unaffected, the hand used for shifting changes is, due to the shift pattern relative to the driver changing, i.e., a gear change that requires moving the lever toward the driver in a left-hand-drive vehicle becomes a movement away from the driver in a right-hand-drive vehicle. A driver skilled in shifting with the opposite hand is at an advantage.
Racket sports
In tennis, a player may be able to reach balls on the backhand side more easily if they are able to use the weaker hand. An example of an ambidextrous player is Luke Jensen. Because the hand which strikes the ball can do so while the dominant eye on the same side tracks the opponent's movement decisions, crossed laterality of eyedness and handedness may be a decisive factor in outstanding performance, given the short time available to match the racket to the ball while simultaneously tracking the opponent's movement. Such is the case of Rafael Nadal, who uses his right hand for writing but plays tennis with his left. There are many players who are naturally right-handed but play left-handed, and vice versa. Evgenia Kulikovskaya was another ambidextrous player: Kulikovskaya played with two forehands and no backhand, switching her racket hand depending on where the ball was coming. Jan-Michael Gambill is the opposite case of Kulikovskaya, since he played with a two-handed forehand and backhand, although he served with his right hand. Other famous examples of a two-handed forehand are Fabrice Santoro and Monica Seles. Seles' playing style was unusual in that she hit with two hands on both sides and, at the same time, always kept her (dominant) left hand at the base of her racket. This meant that she hit her forehand cross-handed. Maria Sharapova is also known to be ambidextrous. Cheong-eui Kim is a truly ambidextrous player with no backhand, and can serve left-handed as well as right-handed.
Some table tennis players have used their ability to hit with their non-dominant hand to return balls out of reach of their dominant hand's backhand, most notably Timo Boll, a former world #1 player.
Although it is quite uncommon, in badminton, ambidextrous players are able to switch the racquet between their hands, often to get to the awkward backhand corner quickly. As badminton can be a very fast sport at professional levels of play, players might not have time to switch the racquet, as this disrupts their reaction time.
Rugby
In rugby league and rugby union, being ambidextrous is an advantage when it comes to passing the ball between teammates, and being able to use both feet is an advantage for the halves in gaining field position by kicking the ball ahead. Jonny Wilkinson is a prime example of a union player who is good at kicking with both feet: he is left-handed and normally place kicks using his left foot, but he dropped the goal that won the Rugby World Cup in 2003 with his right. Dan Carter is actually right-handed, but kicks predominantly with his left foot, sometimes with his right.
Volleyball
A volleyball player has to be somewhat ambidextrous in order to control the ball in either direction and perform basic digs. The setter, in particular, has to be proficient in performing dump sets with either hand to throw off blockers. Wing spikers who can spike with either hand can alter trajectories to throw off receivers' timing.
In art
Although most artists have a favored hand, some artists use both of their hands for arts such as drawing and sculpting. It is believed that Leonardo da Vinci utilized both of his hands after an injury to his right hand during his early childhood.
A contemporary artist, Gur Keren, can draw with both his hands and even feet. Thea Alba was a well-known German who could write with all ten fingers.
In music
In drum and bugle corps (and drum and bell corps), snare drummers, quads (tenors), and bass drummers need to be somewhat ambidextrous. Since they have to abide by what the composer/arranger has written, they have to learn to play evenly in terms of dynamics and speed with their right and left hands. Former Beatles member Paul McCartney is left-handed (guitar and bass guitar) and played left-handed when performing (as can be seen in many photos and videos throughout his musical career). The drummer of The Beatles, Ringo Starr, is left-handed as well, but he plays a right-handed drum kit. American instrumental guitarist Michael Angelo Batio is known for being able to play both right-handed and left-handed guitar proficiently.
The ambidexterity of Jimi Hendrix has been explored in psychological research, though he was known for playing a standard right-handed guitar with his left hand. The guitarist Duane Allman was the reverse of Hendrix, playing right-handed but left-handed in all other tasks. Shara Lin is naturally left-handed, but plays the violin and guitar right-handed. She can also play the piano with her left hand while playing the zither with her right. Naturally left-handed musicians also often have to play instruments built only for right-handers, such as the violin, viola and cello.
Kurt Cobain, frontman of Nirvana, was naturally ambidextrous. He grew up having a slight preference for his left hand (as can be seen in many of his childhood photographs), but as an adult he wrote right-handed. He played guitar exclusively left-handed.
Tools
With respect to tools, ambidextrous may be used to mean that the tool may be used equally well with either hand; an "ambidextrous knife" refers to the opening mechanism and locking mechanism on a folding knife. It can also mean that the tool can be interchanged between left and right in some other way, such as an "ambidextrous headset," which can be worn on either the left or right ear. Many tools and implements are made specifically for use in the right hand, and will not work properly if used in the other hand. There exist shops dedicated to selling implements and tools made specifically for left-handed use. For example, left-handed, and ambidextrous, scissors are available.
Many knives are sold sharpened asymmetrically for right-hand use, and resharpened in the same way. It is possible to buy knives sharpened for left-handed use, and to sharpen any knife in that way.
Medicine and surgery
A degree of ambidexterity is required in surgery because surgeons must be able to tie with their left and right hands in either single or double knots. This is usually due to factors like the positioning of the surgeon, whether they have an assistant and the angle required to throw and secure the knot.
Ambidexterity is also useful after surgery on a dominant hand or arm, as it allows the patient to use their non-dominant hand with equal facility as the limb which is recovering from surgery.
Ambisinistrality
A related variation to ambidexterity is a person who displays "ambisinistrality" or is "ambisinistrous". This term is a near inverse of ambidexterity, as the Latin root ambi- means "both" and the root -sinistral means "left", being derived from the word sinister. The term "ambisinistral" can be directly interpreted as "both left" or "both sinister".
The term is used in non-scientific manners to describe individuals who have two non-dominant hands, as both hands are either clumsy or insufficient in motor skill and are therefore used equally as much. In a 1992 New York Times Q&A article on ambidexterity, the term was used to describe people "...with both hands as skilled as a right-hander's left hand."
See also
Brain asymmetry
Cross-dominance
Dual brain theory
Dual wield
Handedness
Laterality
Lateralization of brain function
Note
References
Further reading
Handedness
Mental processes | Ambidexterity | [
"Physics",
"Chemistry",
"Biology"
] | 6,017 | [
"Behavior",
"Motor control",
"Chirality",
"Asymmetry",
"Handedness",
"Symmetry"
] |
183,365 | https://en.wikipedia.org/wiki/Universal%20Plug%20and%20Play | Universal Plug and Play (UPnP) is a set of networking protocols on the Internet Protocol (IP) that permits networked devices, such as personal computers, printers, Internet gateways, Wi-Fi access points and mobile devices, to seamlessly discover each other's presence on the network and establish functional network services. UPnP is intended primarily for residential networks without enterprise-class devices.
UPnP assumes the network runs IP, and then uses HTTP on top of IP to provide device/service description, actions, data transfer and event notification. Device search requests and advertisements are supported by running HTTP on top of UDP (port 1900) using multicast (known as HTTPMU). Responses to search requests are also sent over UDP, but are instead sent using unicast (known as HTTPU).
Conceptually, UPnP extends plug and play—a technology for dynamically attaching devices directly to a computer—to zero-configuration networking for residential and SOHO wireless networks. UPnP devices are plug-and-play in that, when connected to a network, they automatically establish working configurations with other devices, removing the need for users to manually configure and add devices through IP addresses.
UPnP is generally regarded as unsuitable for deployment in business settings for reasons of economy, complexity, and consistency: the multicast foundation makes it chatty, consuming too many network resources on networks with a large population of devices; the simplified access controls do not map well to complex environments; and it does not provide a uniform configuration syntax such as the CLI environments of Cisco IOS or JUNOS.
Overview
The UPnP architecture allows device-to-device networking of consumer electronics, mobile devices, personal computers, and networked home appliances. It is a distributed, open architecture protocol based on established standards such as the Internet Protocol Suite (TCP/IP), HTTP, XML, and SOAP. UPnP control points (CPs) are devices which use UPnP protocols to control UPnP controlled devices (CDs).
The UPnP architecture supports zero-configuration networking. A UPnP-compatible device from any vendor can dynamically join a network, obtain an IP address, announce its name, advertise or convey its capabilities upon request, and learn about the presence and capabilities of other devices. Dynamic Host Configuration Protocol (DHCP) and Domain Name System (DNS) servers are optional and are only used if they are available on the network. Devices can disconnect from the network automatically without leaving state information.
UPnP was published as a 73-part international standard ISO/IEC 29341 in December 2008.
Other UPnP features include:
Media and device independence: UPnP technology can run on many media that support IP, including Ethernet, FireWire, IR (IrDA), home wiring (G.hn) and RF (Bluetooth, Wi-Fi). No special device driver support is necessary; common network protocols are used instead.
User interface (UI) control: Optionally, the UPnP architecture enables devices to present a user interface through a web browser (see Presentation below).
Operating system and programming language independence: Any operating system and any programming language can be used to build UPnP products. UPnP stacks are available for most platforms and operating systems in both closed- and open-source forms.
Programmatic control: UPnP architecture also enables conventional application programmatic control.
Extensibility: Each UPnP product can have device-specific services layered on top of the basic architecture. In addition to combining services defined by UPnP Forum in various ways, vendors can define their own device and service types, and can extend standard devices and services with vendor-defined actions, state variables, data structure elements, and variable values.
Protocol
UPnP uses common Internet technologies. It assumes the network runs Internet Protocol (IP) and then uses HTTP, SOAP and XML on top of IP, in order to provide device/service description, actions, data transfer and eventing. Device search requests and advertisements are supported by running HTTP on top of UDP using multicast (known as HTTPMU). Responses to search requests are also sent over UDP, but are instead sent using unicast (known as HTTPU). UPnP uses UDP because of its lower overhead: it requires no confirmation of received data and no retransmission of corrupt packets. HTTPU and HTTPMU were initially submitted as an Internet Draft, but the draft expired in 2001; these specifications have since been integrated into the actual UPnP specifications.
UPnP uses UDP port 1900, and all used TCP ports are derived from the SSDP alive and response messages.
Addressing
The foundation for UPnP networking is IP addressing. Each device must implement a DHCP client and search for a DHCP server when the device is first connected to the network. If no DHCP server is available, the device must assign itself an address. The process by which a UPnP device assigns itself an address is known within the UPnP Device Architecture as AutoIP. In UPnP Device Architecture Version 1.0, AutoIP is defined within the specification itself; in UPnP Device Architecture Version 1.1, AutoIP references IETF RFC 3927. If, during the DHCP transaction, the device obtains a domain name, for example, through a DNS server or via DNS forwarding, the device should use that name in subsequent network operations; otherwise, the device should use its IP address.
Discovery
Once a device has established an IP address, the next step in UPnP networking is discovery. The UPnP discovery protocol is known as the Simple Service Discovery Protocol (SSDP). When a device is added to the network, SSDP allows that device to advertise its services to control points on the network. This is achieved by sending SSDP alive messages. When a control point is added to the network, SSDP allows that control point to actively search for devices of interest on the network or listen passively to the SSDP alive messages of devices. The fundamental exchange is a discovery message containing a few essential specifics about the device or one of its services, for example, its type, identifier, and a pointer (network location) to more detailed information.
Description
After a control point has discovered a device, the control point still knows very little about the device. For the control point to learn more about the device and its capabilities, or to interact with the device, the control point must retrieve the device's description from the location (URL) provided by the device in the discovery message. The UPnP Device Description is expressed in XML and includes vendor-specific manufacturer information like the model name and number, serial number, manufacturer name, (presentation) URLs to vendor-specific web sites, etc. The description also includes a list of any embedded services. For each service, the Device Description document lists the URLs for control, eventing and service description. Each service description includes a list of the commands, or actions, to which the service responds, and parameters, or arguments, for each action; the description for a service also includes a list of variables; these variables model the state of the service at run time and are described in terms of their data type, range, and event characteristics.
Control
Having retrieved a description of the device, the control point can send actions to a device's service. To do this, a control point sends a suitable control message to the control URL for the service (provided in the device description). Control messages are also expressed in XML using the Simple Object Access Protocol (SOAP). Much like function calls, the service returns any action-specific values in response to the control message. The effects of the action, if any, are modeled by changes in the variables that describe the run-time state of the service.
Event notification
Another capability of UPnP networking is event notification, or eventing. The event notification protocol defined in the UPnP Device Architecture is known as General Event Notification Architecture (GENA). A UPnP description for a service includes a list of actions the service responds to and a list of variables that model the state of the service at run time. The service publishes updates when these variables change, and a control point may subscribe to receive this information. The service publishes updates by sending event messages. Event messages contain the names of one or more state variables and the current value of those variables. These messages are also expressed in XML. A special initial event message is sent when a control point first subscribes; this event message contains the names and values for all evented variables and allows the subscriber to initialize its model of the state of the service. To support scenarios with multiple control points, eventing is designed to keep all control points equally informed about the effects of any action. Therefore, all subscribers are sent all event messages, subscribers receive event messages for all "evented" variables that have changed, and event messages are sent no matter why the state variable changed (either in response to a requested action or because the state the service is modeling changed).
Presentation
The final step in UPnP networking is presentation. If a device has a URL for presentation, then the control point can retrieve a page from this URL, load the page into a web browser, and depending on the capabilities of the page, allow a user to control the device and/or view device status. The degree to which each of these can be accomplished depends on the specific capabilities of the presentation page and device.
AV standards
UPnP AV architecture is an audio and video extension of UPnP, supporting a variety of devices such as TVs, VCRs, CD/DVD players/jukeboxes, set-top boxes, stereo systems, MP3 players, still image cameras, camcorders, electronic picture frames (EPFs), and personal computers. The UPnP AV architecture allows devices to support different types of formats for the entertainment content, including MPEG2, MPEG4, JPEG, MP3, Windows Media Audio (WMA), bitmaps (BMP), and NTSC, PAL or ATSC formats. Multiple transfer protocols are supported, including IEEE 1394, HTTP, RTP and TCP/IP.
On 12 July 2006, the UPnP Forum announced the release of version 2 of the UPnP Audio and Video specifications, with new MediaServer (MS) version 2.0 and MediaRenderer (MR) version 2.0 classes. These enhancements are created by adding capabilities to the MediaServer and MediaRenderer device classes, allowing a higher level of interoperability between products made by different manufacturers. Some of the early devices complying with these standards were marketed by Philips under the Streamium brand name.
Since 2006, versions 3 and 4 of the UPnP audio and video device control protocols have been published. In March 2013, an updated uPnP AV architecture specification was published, incorporating the updated device control protocols. UPnP Device Architecture 2.0 was released in April 2020.
The UPnP AV standards have been referenced in specifications published by other organizations including Digital Living Network Alliance Networked Device Interoperability Guidelines, International Electrotechnical Commission IEC 62481-1, and Cable Television Laboratories OpenCable Home Networking Protocol.
AV components
Generally a UPnP audio/video (AV) architecture consists of:
Control Point: a device that discovers Media Servers and Media Renderers, then connects them
Media Server: the server that stores content on the network to be accessed by Media Renderers
Media Renderer: a device that renders ('plays') content received from a Media Server.
Media server
A media server is the UPnP server ("master" device) that provides media library information and streams media data (such as audio, video, and picture files) to UPnP clients on the network. It is a computer system or a similar digital appliance that stores digital media, such as photographs, movies, or music, and shares these with other devices.
UPnP AV media servers provide a service to UPnP AV client devices, so-called control points, for browsing the media content of the server and request the media server to deliver a file to the control point for playback.
UPnP media servers are available for most operating systems and many hardware platforms. UPnP AV media servers can either be categorized as software-based or hardware-based. Software-based UPnP AV media servers can be run on a PC. Hardware-based UPnP AV media servers may run on any NAS devices or any specific hardware for delivering media, such as a DVR. As of May 2008, there were more software-based UPnP AV media servers than there were hardware-based servers.
Other components
UPnP MediaServer ControlPoint - which is the UPnP-client (a 'slave' device) that can auto-detect UPnP-servers on the network to browse and stream media/data-files from them.
UPnP MediaRenderer DCP - which is a 'slave' device that can render (play) content.
UPnP RenderingControl DCP - control MediaRenderer settings; volume, brightness, RGB, sharpness, and more.
UPnP Remote User Interface (RUI) client/server - which sends/receives control commands between the UPnP client and UPnP server over the network (like record, schedule, play, pause, stop, etc.).
Web4CE (CEA 2014) for UPnP Remote UI - CEA-2014 standard designed by Consumer Electronics Association's R7 Home Network Committee. Web-based Protocol and Framework for Remote User Interface on UPnP Networks and the Internet (Web4CE). This standard allows a UPnP-capable home network device to provide its interface (display and control options) as a web page to display on any other device connected to the home network. That means that one can control a home networking device through any web-browser-based communications method for CE devices on a UPnP home network using ethernet and a special version of HTML called CE-HTML.
QoS (quality of service) is an important (but not mandatory) service function for use with UPnP AV (audio and video). QoS refers to control mechanisms that can provide different priority to different users or data flows, or guarantee a certain level of performance to a data flow in accordance with requests from the application program. Since UPnP AV mostly delivers streaming media, often near-real-time or real-time audio/video data, it is critical that the data be delivered within a specific time, or the stream is interrupted. QoS guarantees are especially important if the network capacity is limited, for example on public networks like the Internet.
QoS for UPnP consist of Sink Device (client-side/front-end) and Source Device (server-side/back-end) service functions. With classes such as; Traffic Class that indicates the kind of traffic in the traffic stream, (for example, audio or video). Traffic Identifier (TID) which identifies data packets as belonging to a unique traffic stream. Traffic Specification (TSPEC) which contains a set of parameters that define the characteristics of the traffic stream, (for example operating requirement and scheduling). Traffic Stream (TS) which is a unidirectional flow of data that originates at a source device and terminates at one or more sink device(s).
Remote Access - defines methods for connecting UPnP device sets that are not in the same multicast domain.
NAT traversal
One solution for NAT traversal, called the Internet Gateway Device Control Protocol (UPnP IGD Protocol), is implemented via UPnP. Many routers and firewalls expose themselves as Internet Gateway Devices, allowing any local UPnP control point to perform a variety of actions, including retrieving the external IP address of the device, enumerating existing port mappings, and adding or removing port mappings. By adding a port mapping, a UPnP controller behind the IGD can enable traversal of the IGD from an external address to an internal client.
There are numerous compatibility issues due to differing interpretations of the very large, nominally backward-compatible IGDv1 and IGDv2 specifications. One of them concerns the UPnP IGD client integrated into current Microsoft Windows and Xbox systems when used with certified IGDv2 routers: the issue has existed since the introduction of the IGDv1 client in Windows XP in 2001, and with an IGDv2 router lacking a workaround, router port mapping is impossible.
If UPnP is used only to control router port mappings and pinholes, there are newer, much simpler and more lightweight alternative protocols such as PCP and NAT-PMP, both of which have been standardized as RFCs by the IETF. These alternatives are not yet known to have compatibility issues between different clients and servers, but adoption is still low. For consumer routers, only AVM and the open-source router software projects OpenWrt, OPNsense, and pfSense are currently known to support PCP as an alternative to UPnP. AVM's Fritz!Box UPnP IGDv2 and PCP implementation has been buggy since its introduction and in many cases does not work.
Problems
Authentication
The UPnP protocol, by default, does not implement any authentication, so UPnP device implementations must implement the additional Device Protection service, or implement the Device Security Service. There also exists a non-standard solution called UPnP-UP (Universal Plug and Play - User Profile) which proposes an extension to allow user authentication and authorization mechanisms for UPnP devices and applications. Many UPnP device implementations lack authentication mechanisms, and by default assume local systems and their users are completely trustworthy.
When the authentication mechanisms are not implemented, routers and firewalls running the UPnP IGD protocol are vulnerable to attack. For example, Adobe Flash programs running outside the sandbox of the browser (which requires a specific version of Adobe Flash with acknowledged security issues) are capable of generating a specific type of HTTP request which allows a router implementing the UPnP IGD protocol to be controlled by a malicious web site when someone with a UPnP-enabled router simply visits that web site. This only applies to the "firewall-hole-punching" feature of UPnP; it does not apply when the router/firewall does not support UPnP IGD or when UPnP has been disabled on the router. Also, not all routers can have such things as DNS server settings altered by UPnP, because much of the specification (including LAN Host Configuration) is optional for UPnP-enabled routers. As a result, some UPnP devices ship with UPnP turned off by default as a security measure.
Access from the Internet
In 2011, researcher Daniel Garcia developed a tool designed to exploit a flaw in some UPnP IGD device stacks that allow UPnP requests from the Internet. The tool was made public at DEFCON 19 and allows portmapping requests to external IP addresses from the device and internal IP addresses behind the NAT. The problem is widely propagated around the world, with scans showing millions of vulnerable devices at a time.
In January 2013, the security company Rapid7 in Boston reported on a six-month research programme. A team scanned for signals from UPnP-enabled devices announcing their availability for internet connection. Some 6,900 network-aware products from 1,500 companies at 81 million IP addresses responded to their requests. 80% of the devices were home routers; others included printers, webcams and surveillance cameras. Using the UPnP protocol, many of those devices can be accessed and/or manipulated.
In February 2013, the UPnP forum responded in a press release by recommending more recent versions of the used UPnP stacks, and by improving the certification program to include checks to avoid further such issues.
IGMP snooping and reliability
UPnP is often the only significant multicast application in use in digital home networks; therefore, multicast network misconfiguration or other deficiencies can appear as UPnP issues rather than underlying network issues.
If IGMP snooping is enabled on a switch, or more commonly a wireless router/switch, it will interfere with UPnP/DLNA device discovery (SSDP) if incorrectly or incompletely configured (e.g. without an active querier or IGMP proxy), making UPnP appear unreliable.
Typical scenarios observed include a server or client (e.g. a smart TV) appearing after power-on and then disappearing after a few minutes (often 30 in default configurations) due to IGMP group membership expiring.
Callback vulnerability
On 8 June 2020, yet another protocol design flaw was announced. Dubbed "CallStranger" by its discoverer, it allows an attacker to subvert the event subscription mechanism and execute a variety of attacks: amplification of requests for use in DDoS; enumeration; and data exfiltration.
OCF had published a fix to the protocol specification in April 2020, but since many devices running UPnP are not easily upgradable, CallStranger is likely to remain a threat for a long time to come. CallStranger has fueled calls for end-users to abandon UPnP because of repeated failures in security of its design and implementation.
History
The UPnP protocols were promoted by the UPnP Forum (formed in October 1999), a computer industry initiative to enable simple and robust connectivity to standalone devices and personal computers from many different vendors. The Forum consisted of more than 800 vendors involved in everything from consumer electronics to network computing. Since 2016, all UPnP efforts have been managed by the Open Connectivity Foundation (OCF).
In the fall of 2008, the UPnP Forum ratified the successor to UPnP 1.0 Device Architecture, UPnP 1.1. The Devices Profile for Web Services (DPWS) standard was a candidate successor to UPnP, but UPnP 1.1 was selected by the UPnP Forum. Version 2 of IGD is standardized.
The UPnP Internet Gateway Device (IGD) standard has a WANIPConnection service, which provides similar functionality to IETF-standard Port Control Protocol. The NAT-PMP specification contains a list of the problems with IGDP that prompted the creation of NAT-PMP and its successor PCP.
A number of further standards have been defined for the UPnP Device Architecture:
The Wi-Fi Alliance defines a set of "WFA device" services related to the wireless access point.
The WFAWLANConfig service is a required part and defines ways to query the capabilities of a wireless access point and set up wireless connections. This service is used in the AP-ER and UPnP-C types of Wi-Fi Protected Setup.
See also
Comparison of UPnP AV media servers
Devices Profile for Web Services
Digital Living Network Alliance (DLNA)
Internet Gateway Device Protocol (UPnP IGD)
List of UPnP AV media servers and clients
NAT Port Mapping Protocol (NAT-PMP)
Port (computer networking)
Port Control Protocol (PCP)
Zeroconf
References
Further reading
Golden G. Richard: Service and Device Discovery: Protocols and Programming, McGraw-Hill Professional,
Michael Jeronimo, Jack Weast: UPnP Design by Example: A Software Developer's Guide to Universal Plug and Play, Intel Press,
External links
UPnP Standards & Architecture, at Open Connectivity Foundation
ISO/IEC 29341-1:2011
Port Mapping Protocols Overview and Comparison 2024: About UPnP IGD & PCP/NAT-PMP
Digital media
Windows administration
Windows communication and services
Mobile content
Servers (computing)
Media servers
Discovery protocols | Universal Plug and Play | [
"Technology"
] | 4,872 | [
"Multimedia",
"Mobile content",
"Digital media"
] |
183,403 | https://en.wikipedia.org/wiki/Learning | Learning is the process of acquiring new understanding, knowledge, behaviors, skills, values, attitudes, and preferences. The ability to learn is possessed by humans, non-human animals, and some machines; there is also evidence for some kind of learning in certain plants. Some learning is immediate, induced by a single event (e.g. being burned by a hot stove), but much skill and knowledge accumulate from repeated experiences. The changes induced by learning often last a lifetime, and it is hard to distinguish learned material that seems to be "lost" from that which cannot be retrieved.
Human learning starts at birth (it might even start before) and continues until death as a consequence of ongoing interactions between people and their environment. The nature and processes involved in learning are studied in many established fields (including educational psychology, neuropsychology, experimental psychology, cognitive sciences, and pedagogy), as well as emerging fields of knowledge (e.g. with a shared interest in the topic of learning from safety events such as incidents/accidents, or in collaborative learning health systems). Research in such fields has led to the identification of various sorts of learning. For example, learning may occur as a result of habituation, or classical conditioning, operant conditioning or as a result of more complex activities such as play, seen only in relatively intelligent animals. Learning may occur consciously or without conscious awareness. Learning that an aversive event cannot be avoided or escaped may result in a condition called learned helplessness. There is evidence for human behavioral learning prenatally, in which habituation has been observed as early as 32 weeks into gestation, indicating that the central nervous system is sufficiently developed and primed for learning and memory to occur very early on in development.
Play has been approached by several theorists as a form of learning. Children experiment with the world, learn the rules, and learn to interact through play. Lev Vygotsky agrees that play is pivotal for children's development, since they make meaning of their environment through playing educational games. For Vygotsky, however, play is the first form of learning language and communication, and the stage where a child begins to understand rules and symbols. This has led to a view that learning in organisms is always related to semiosis, and is often associated with representational systems/activity.
Types
Various functional categorizations of memory have developed. Some memory researchers distinguish memory based on the relationship between the stimuli involved (associative vs. non-associative) or based on whether the content can be communicated through language (declarative/explicit vs. procedural/implicit). Some of these categories can, in turn, be parsed into sub-types. For instance, declarative memory comprises both episodic and semantic memory.
Non-associative learning
Non-associative learning refers to "a relatively permanent change in the strength of response to a single stimulus due to repeated exposure to that stimulus." This definition exempts the changes caused by sensory adaptation, fatigue, or injury.
Non-associative learning can be divided into habituation and sensitization.
Habituation
Habituation is an example of non-associative learning in which one or more components of an innate response (e.g., response probability, response duration) to a stimulus diminishes when the stimulus is repeated. Thus, habituation must be distinguished from extinction, which is an associative process. In operant extinction, for example, a response declines because it is no longer followed by a reward. An example of habituation can be seen in small song birds: if a stuffed owl (or similar predator) is put into the cage, the birds initially react to it as though it were a real predator. Soon the birds react less, showing habituation. If another stuffed owl is introduced (or the same one removed and re-introduced), the birds react to it again as though it were a predator, demonstrating that it is only a very specific stimulus that is habituated to (namely, one particular unmoving owl in one place). The habituation process is faster for stimuli that occur at a high rate than for stimuli that occur at a low rate, and faster for weak stimuli than for strong ones. Habituation has been shown in essentially every species of animal, as well as the sensitive plant Mimosa pudica and the large protozoan Stentor coeruleus. This concept acts in direct opposition to sensitization.
Sensitization
Sensitization is an example of non-associative learning in which the progressive amplification of a response follows repeated administrations of a stimulus. This is based on the notion that a defensive reflex to a stimulus such as withdrawal or escape becomes stronger after the exposure to a different harmful or threatening stimulus. An everyday example of this mechanism is the repeated tonic stimulation of peripheral nerves that occurs if a person rubs their arm continuously. After a while, this stimulation creates a warm sensation that can eventually turn painful. This pain results from a progressively amplified synaptic response of the peripheral nerves. This sends a warning that the stimulation is harmful. Sensitization is thought to underlie both adaptive as well as maladaptive learning processes in the organism.
Active learning
Active learning occurs when a person takes control of their learning experience. Since understanding information is the key aspect of learning, it is important for learners to recognize what they understand and what they do not. By doing so, they can monitor their own mastery of subjects. Active learning encourages learners to have an internal dialogue in which they verbalize understandings. This and other meta-cognitive strategies can be taught to a child over time. Studies of metacognition have demonstrated the value of active learning, finding that it usually produces deeper, more durable learning. In addition, learners have more incentive to learn when they have control over not only how they learn but also what they learn. Active learning is a key characteristic of student-centered learning. Conversely, passive learning and direct instruction are characteristics of teacher-centered learning (or traditional education).
Associative learning
Associative learning is the process by which a person or animal learns an association between two stimuli or events. In classical conditioning, a previously neutral stimulus is repeatedly paired with a reflex-eliciting stimulus until eventually the neutral stimulus elicits a response on its own. In operant conditioning, a behavior that is reinforced or punished in the presence of a stimulus becomes more or less likely to occur in the presence of that stimulus.
Operant conditioning
Operant conditioning is a way in which behavior can be shaped or modified according to the desires of the trainer or head individual. Operant conditioning builds on the premise that living things seek pleasure and avoid pain, and that an animal or human can learn through receiving either reward or punishment at a specific time, called trace conditioning. In this usage, trace conditioning refers to the short, ideal window of time between the subject performing the desired behavior and receiving the positive reinforcement as a result of that performance. The reward needs to be given promptly after the completion of the wanted behavior.
Operant conditioning is different from classical conditioning in that it shapes behavior not solely on bodily reflexes that occur naturally to a specific stimulus, but rather focuses on the shaping of wanted behavior that requires conscious thought, and ultimately requires learning.
Punishment and reinforcement are the two principal ways in which operant conditioning occurs. Punishment is used to reduce unwanted behavior, and ultimately (from the learner's perspective) leads to avoidance of the punishment, not necessarily avoidance of the unwanted behavior. Punishment is not an appropriate way to increase wanted behavior in animals or humans. Punishment can be divided into two subcategories: positive punishment and negative punishment. In positive punishment, an aversive stimulus is added to the subject's situation; hence "positive". For example, a parent spanking their child is a positive punishment, because a spanking is added to the child's experience. In negative punishment, something loved or desirable is removed from the subject. For example, when a parent puts his child in time out, the child loses the opportunity to be with friends or to enjoy the freedom to do as he pleases; the negative punishment here is the removal of the child's desired right to play with his friends.
Reinforcement, on the other hand, is used to increase a wanted behavior through either negative reinforcement or positive reinforcement. Negative reinforcement means removing an undesirable aspect of life or thing. For example, a dog might learn to sit as the trainer scratches its ears, thereby removing its itch (the undesirable aspect). Positive reinforcement means adding a desirable aspect of life or thing. For example, a dog might learn to sit if it receives a treat; in this example, the treat is added to the dog's life.
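The four contingencies described above form a simple 2×2 scheme: whether a stimulus is added or removed, crossed with whether the procedure aims to increase or decrease the behavior. The sketch below is a minimal illustration of that scheme; the function name and labels are our own shorthand, not standard terminology from any library.

```python
# The 2x2 scheme of operant conditioning described above:
# stimulus added vs. removed, crossed with whether the procedure
# aims to increase (reinforce) or decrease (punish) the behavior.

def classify_contingency(stimulus_added: bool, behavior_should_increase: bool) -> str:
    """Label an operant-conditioning contingency."""
    if behavior_should_increase:
        # Reinforcement increases a wanted behavior.
        return "positive reinforcement" if stimulus_added else "negative reinforcement"
    # Punishment decreases an unwanted behavior.
    return "positive punishment" if stimulus_added else "negative punishment"

# Examples drawn from the text above:
print(classify_contingency(True, True))    # treat given for sitting
print(classify_contingency(False, True))   # itch removed by ear-scratching
print(classify_contingency(True, False))   # spanking added
print(classify_contingency(False, False))  # time out removes playtime
```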
Classical conditioning
The typical paradigm for classical conditioning involves repeatedly pairing an unconditioned stimulus (which unfailingly evokes a reflexive response) with another previously neutral stimulus (which does not normally evoke the response). Following conditioning, the response occurs both to the unconditioned stimulus and to the other, unrelated stimulus (now referred to as the "conditioned stimulus"). The response to the conditioned stimulus is termed a conditioned response. The classic example is Ivan Pavlov and his dogs. Pavlov fed his dogs meat powder, which naturally made the dogs salivate—salivating is a reflexive response to the meat powder. Meat powder is the unconditioned stimulus (US) and the salivation is the unconditioned response (UR). Pavlov rang a bell before presenting the meat powder. The first time Pavlov rang the bell, the neutral stimulus, the dogs did not salivate, but once he put the meat powder in their mouths they began to salivate. After numerous pairings of bell and food, the dogs learned that the bell signaled that food was about to come, and began to salivate when they heard the bell. Once this occurred, the bell became the conditioned stimulus (CS) and the salivation to the bell became the conditioned response (CR). Classical conditioning has been demonstrated in many species. For example, it is seen in honeybees, in the proboscis extension reflex paradigm. It was recently also demonstrated in garden pea plants.
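One standard formalization of classical conditioning (not discussed in the article itself) is the Rescorla–Wagner model, in which the associative strength V of a conditioned stimulus is updated on each pairing by the prediction error between the outcome and what was expected. The sketch below simulates Pavlov-style acquisition under that rule; the parameter values are arbitrary choices for the example.

```python
# Rescorla-Wagner update for a single conditioned stimulus (CS):
#   V <- V + alpha * beta * (lam - V)
# where V is associative strength, alpha and beta are salience /
# learning-rate parameters, and lam is the maximum strength the
# unconditioned stimulus (US) can support. Values are illustrative.

def rescorla_wagner(trials: int, alpha: float = 0.3, beta: float = 1.0,
                    lam: float = 1.0) -> list:
    strengths = []
    v = 0.0
    for _ in range(trials):
        v += alpha * beta * (lam - v)  # prediction-error driven learning
        strengths.append(v)
    return strengths

# Acquisition: bell repeatedly paired with food; V approaches lam.
print([f"{v:.2f}" for v in rescorla_wagner(trials=8)])
```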
Another influential figure in the world of classical conditioning is John B. Watson. Watson's work was very influential and paved the way for B. F. Skinner's radical behaviorism. Watson's behaviorism (and philosophy of science) stood in direct contrast to Freud and other accounts based largely on introspection. Watson's view was that the introspective method was too subjective and that the study of human development should be limited to directly observable behaviors. In 1913, Watson published the article "Psychology as the Behaviorist Views It", in which he argued that laboratory studies would serve psychology best as a science. Watson's most famous, and controversial, experiment was "Little Albert", where he demonstrated how psychologists can account for the learning of emotion through classical conditioning principles.
Observational learning
Observational learning is learning that occurs through observing the behavior of others. It is a form of social learning which takes various forms, based on various processes. In humans, this form of learning seems not to need reinforcement to occur; instead, it requires a social model such as a parent, sibling, friend, or teacher, together with the surrounding environment.
Imprinting
Imprinting is a kind of learning occurring at a particular life stage that is rapid and apparently independent of the consequences of behavior. In filial imprinting, young animals, particularly birds, form an association with another individual or, in some cases, an object, that they respond to as they would to a parent. In 1935, the Austrian zoologist Konrad Lorenz discovered that certain birds follow and form a bond with the first moving object they encounter, particularly if it makes sounds.
Play
Play generally describes behavior with no particular end in itself, but that improves performance in similar future situations. This is seen in a wide variety of vertebrates besides humans, but is mostly limited to mammals and birds. Cats are known to play with a ball of string when young, which gives them experience with catching prey. Besides inanimate objects, animals may play with other members of their own species or other animals, such as orcas playing with seals they have caught. Play involves a significant cost to animals, such as increased vulnerability to predators and the risk of injury and possibly infection. It also consumes energy, so there must be significant benefits associated with play for it to have evolved. Play is generally seen in younger animals, suggesting a link with learning. However, it may also have other benefits not associated directly with learning, for example improving physical fitness.
Play, as it pertains to humans as a form of learning is central to a child's learning and development. Through play, children learn social skills such as sharing and collaboration. Children develop emotional skills such as learning to deal with the emotion of anger, through play activities. As a form of learning, play also facilitates the development of thinking and language skills in children.
There are five types of play:
Sensorimotor play, also known as functional play, characterized by the repetition of an activity
Roleplay occurs starting at the age of three
Rule-based play where authoritative prescribed codes of conduct are primary
Construction play involves experimentation and building
Movement play, also known as physical play
These five types of play are often intersecting. All types of play generate thinking and problem-solving skills in children. Children learn to think creatively when they learn through play. Specific activities involved in each type of play change over time as humans progress through the lifespan. Play as a form of learning, can occur solitarily, or involve interacting with others.
Enculturation
Enculturation is the process by which people learn values and behaviors that are appropriate or necessary in their surrounding culture. Parents, other adults, and peers shape the individual's understanding of these values. If successful, enculturation results in competence in the language, values, and rituals of the culture. This is different from acculturation, where a person adopts the values and societal rules of a culture different from their native one.
Multiple examples of enculturation can be found cross-culturally. Collaborative practices among the Mazahua people have shown that participation in everyday interaction and later learning activities contributed to enculturation rooted in nonverbal social experience. As the children participated in everyday activities, they learned the cultural significance of these interactions. The collaborative and helpful behaviors exhibited by Mexican and Mexican-heritage children are a cultural practice known as being "acomedido". Chillihuani girls in Peru described themselves as weaving constantly, following behavior shown by other adults.
Episodic learning
Episodic learning is a change in behavior that occurs as a result of an event. For example, a fear of dogs that follows being bitten by a dog is episodic learning. Episodic learning is so named because events are recorded into episodic memory, which is one of the three forms of explicit learning and retrieval, along with perceptual memory and semantic memory. Episodic memory remembers events and history that are embedded in experience, and this is distinguished from semantic memory, which attempts to extract facts out of their experiential context or – as some describe – a timeless organization of knowledge. For instance, if a person remembers the Grand Canyon from a recent visit, that is an episodic memory. The person would use semantic memory to answer someone asking where the Grand Canyon is. A study revealed that humans are very accurate in the recognition of episodic memory even without deliberate intention to memorize it. This is said to indicate a very large storage capacity of the brain for things that people pay attention to.
Multimedia learning
Multimedia learning is where a person uses both auditory and visual stimuli to learn information. This type of learning relies on dual-coding theory.
E-learning and augmented learning
Electronic learning or e-learning is computer-enhanced learning. A specific and increasingly widespread form of e-learning is mobile learning (m-learning), which uses different mobile telecommunication devices, such as cellular phones.
When a learner interacts with the e-learning environment, it is called augmented learning. By adapting to the needs of individuals, context-driven instruction can be dynamically tailored to the learner's natural environment. Augmented digital content may include text, images, video, and audio (music and voice). By personalizing instruction, augmented learning has been shown to improve learning performance over a lifetime. See also minimally invasive education.
Moore (1989) posited that three core types of interaction are necessary for quality, effective online learning:
Learner–learner (i.e. communication between and among peers with or without the teacher present),
Learner–instructor (i.e. student-teacher communication), and
Learner–content (i.e. intellectually interacting with content that results in changes in learners' understanding, perceptions, and cognitive structures).
In his theory of transactional distance, Moore (1993) contended that structure and interaction or dialogue bridge the gap in understanding and communication that is created by geographical distances (known as transactional distance).
Rote learning
Rote learning is memorizing information so that it can be recalled by the learner exactly the way it was read or heard. The major technique used for rote learning is learning by repetition, based on the idea that a learner can recall the material exactly (but not its meaning) if the information is repeatedly processed. Rote learning is used in diverse areas, from mathematics to music to religion.
Meaningful learning
Meaningful learning is the concept that learned knowledge (e.g., a fact) is fully understood to the extent that it relates to other knowledge. To this end, meaningful learning contrasts with rote learning in which information is acquired without regard to understanding. Meaningful learning, on the other hand, implies there is a comprehensive knowledge of the context of the facts learned.
Evidence-based learning
Evidence-based learning is the use of evidence from well designed scientific studies to accelerate learning. Evidence-based learning methods such as spaced repetition can increase the rate at which a student learns.
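As an illustration of spaced repetition in practice, the sketch below implements a simplified Leitner-style scheduler: review intervals grow after each successful recall and reset after a failure. The interval-doubling rule and starting interval are arbitrary choices for the example, not a prescription from the learning literature.

```python
# Simplified Leitner-style spaced-repetition scheduler: each card's
# review interval doubles on a successful recall and resets to one
# day on a failure. Interval choices are illustrative only.

from dataclasses import dataclass

@dataclass
class Card:
    prompt: str
    interval_days: int = 1  # days until the next review

def review(card: Card, recalled: bool) -> Card:
    """Update a card's schedule after one review."""
    if recalled:
        card.interval_days *= 2   # space successful items further apart
    else:
        card.interval_days = 1    # relearn failed items soon
    return card

card = Card("capital of France?")
for outcome in (True, True, False, True):
    card = review(card, outcome)
    print(card.prompt, "-> next review in", card.interval_days, "day(s)")
```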
Formal learning
Formal learning is a deliberate way of attaining knowledge which takes place within a teacher-student environment, such as a school system or work environment. The term formal learning has nothing to do with the formality of the learning, but rather with the way it is directed and organized. In formal learning, the learning or training departments set out the goals and objectives of the learning, and learners are often awarded a diploma or some form of formal recognition.
Non-formal learning
Non-formal learning is organized learning outside the formal learning system. Examples include learning by coming together with people with similar interests and exchanging viewpoints in clubs, (international) youth organizations, or workshops. From the organizer's point of view, non-formal learning does not always need a main objective or learning outcome. From the learner's point of view, non-formal learning, although not focused on outcomes, often results in an intentional learning opportunity.
Informal learning
Informal learning is less structured than "non-formal learning". It may occur through the experience of day-to-day situations (for example, one would learn to look ahead while walking because of the possible dangers inherent in not paying attention to where one is going). It is learning from life: during a meal at the table with parents, during play, while exploring, and so on. For the learner, informal learning is most often an experience of happenstance, not a deliberately planned experience, and it does not require enrollment in any class. Unlike formal learning, informal learning typically does not lead to accreditation. Informal learning begins to unfold as the learner ponders his or her situation. This type of learning does not require an instructor of any kind, and learning outcomes are unforeseen following the learning experience.
Informal learning is self-directed, and because it focuses on day-to-day situations, the value of informal learning can be considered high. As a result, information retrieved from informal learning experiences will likely be applicable to daily life. Children who learn informally can at times outperform formally taught peers, for example in mathematics. Daily life experiences take place in the workforce, family life, and any other situation that may arise during one's lifetime. Informal learning is voluntary from the learner's viewpoint, and may require making mistakes and learning from them. Informal learning allows the individual to discover coping strategies for difficult emotions that may arise while learning. From the learner's perspective, informal learning can become purposeful, because the learner chooses the pace at which to learn and because this type of learning tends to take place within smaller groups or by oneself.
Nonformal learning and combined approaches
The educational system may use a combination of formal, informal, and non-formal learning methods. The UN and EU recognize these different forms of learning (cf. links below). In some schools, students can earn points that count in the formal-learning system if work is done in informal-learning circuits. They may be given time to assist at international youth workshops and training courses, on the condition that they prepare, contribute, share, and can prove this offered valuable new insight, helped them acquire new skills, or gave them a place to gain experience in organizing, teaching, etc.
To learn a skill, such as solving a Rubik's Cube quickly, several factors come into play at once:
Reading directions helps a player learn the patterns that solve the Rubik's Cube.
Practicing the moves repeatedly helps build "muscle memory" and speed.
Thinking critically about moves helps find shortcuts, which speeds future attempts.
Observing the Rubik's Cube's six colors helps anchor solutions in the mind.
Revisiting the cube occasionally helps retain the skill.
Tangential learning
Tangential learning is the process by which people self-educate when a topic is exposed to them in a context that they already enjoy. For example, after playing a music-based video game, some people may be motivated to learn how to play a real instrument, or after watching a TV show that references Faust and Lovecraft, some people may be inspired to read the original work. Self-education can be improved with systematization. According to experts in natural learning, self-oriented learning training has proven an effective tool for assisting independent learners with the natural phases of learning.
Extra Credits writer and game designer James Portnow was the first to suggest games as a potential venue for "tangential learning". Mozelius et al. point out that intrinsic integration of learning content seems to be a crucial design factor, and that games that include modules for further self-studies tend to present good results. The built-in encyclopedias in the Civilization games are presented as an example: by using these modules gamers can dig deeper for knowledge about historical events in the gameplay. The importance of rules that regulate learning modules and game experience is discussed by Moreno, C., in a case study about the mobile game Kiwaka. In this game, developed by Landka in collaboration with ESA and ESO, progress is rewarded with educational content, as opposed to traditional educational games where learning activities are rewarded with gameplay.
Dialogic learning
Dialogic learning is a type of learning based on dialogue.
Incidental learning
In incidental teaching, learning is not planned by the instructor or the student; it occurs as a byproduct of another activity — an experience, observation, self-reflection, interaction, unique event (e.g. in response to incidents/accidents), or common routine task. This learning happens in addition to or apart from the instructor's plans and the student's expectations. An example of incidental teaching is when the instructor places a train set on top of a cabinet. If the child points or walks towards the cabinet, the instructor prompts the student to say "train". Once the student says "train", he gets access to the train set.
Here are some steps most commonly used in incidental teaching:
An instructor will arrange the learning environment so that necessary materials are within the student's sight, but not within his reach, thus impacting his motivation to seek out those materials.
An instructor waits for the student to initiate engagement.
An instructor prompts the student to respond if needed.
An instructor allows access to an item/activity contingent on a correct response from the student.
The instructor fades out the prompting process over a period of time and subsequent trials.
Incidental learning is an occurrence that is not generally accounted for using the traditional methods of instructional objectives and outcomes assessment. This type of learning occurs in part as a product of social interaction and active involvement in both online and onsite courses. Research implies that some un-assessed aspects of onsite and online learning challenge the equivalency of education between the two modalities. Both onsite and online learning have distinct advantages with traditional on-campus students experiencing higher degrees of incidental learning in three times as many areas as online students. Additional research is called for to investigate the implications of these findings both conceptually and pedagogically.
Domains
Benjamin Bloom has suggested three domains of learning in his taxonomy which are:
Cognitive: To recall, calculate, discuss, analyze, problem solve, etc.
Psychomotor: To dance, swim, ski, dive, drive a car, ride a bike, etc.
Affective: To like something or someone, love, appreciate, fear, hate, worship, etc.
These domains are not mutually exclusive. For example, in learning to play chess, the person must learn the rules (cognitive domain)—but must also learn how to set up the chess pieces and how to properly hold and move a chess piece (psychomotor). Furthermore, later in the game the person may even learn to love the game itself, value its applications in life, and appreciate its history (affective domain).
Transfer
Transfer of learning is the application of skill, knowledge, or understanding to resolve a novel problem or situation; it happens when certain conditions are fulfilled. Research indicates that transfer of learning is infrequent and most common when "... cued, primed, and guided...", and research has also sought to clarify what transfer is and how it might be promoted through instruction.
Over the history of its discourse, various hypotheses and definitions have been advanced. First, it is speculated that different types of transfer exist, including: near transfer, the application of skill to solve a novel problem in a similar context; and far transfer, the application of skill to solve a novel problem presented in a different context. Furthermore, Perkins and Salomon (1992) suggest that positive transfer occurs in cases when learning supports novel problem solving, and negative transfer occurs when prior learning inhibits performance on highly correlated tasks, such as second- or third-language learning. Concepts of positive and negative transfer have a long history; researchers in the early 20th century described the possibility that "...habits or mental acts developed by a particular kind of training may inhibit rather than facilitate other mental activities". Finally, Schwarz, Bransford and Sears (2005) have proposed that transferring knowledge into a situation may differ from transferring knowledge out to a situation, as a means to reconcile findings that transfer may be both frequent and challenging to promote.
A significant and long research history has also attempted to explicate the conditions under which transfer of learning might occur. Early research by Ruger, for example, found that the "level of attention", "attitudes", "method of attack" (or method for tackling a problem), a "search for new points of view", a "careful testing of hypothesis" and "generalization" were all valuable approaches for promoting transfer. To encourage transfer through teaching, Perkins and Salomon recommend aligning ("hugging") instruction with practice and assessment, and "bridging", or encouraging learners to reflect on past experiences or make connections between prior knowledge and current content.
Factors affecting learning
Genetics
Some aspects of intelligence are inherited genetically, so different learners to some degree have different abilities with regard to learning and speed of learning.
Socioeconomic and physical conditions
Problems like malnutrition, fatigue, and poor physical health can slow learning, as can bad ventilation or poor lighting at home, and unhygienic living conditions.
The design, quality, and setting of a learning space, such as a school or classroom, can each be critical to the success of a learning environment. Size, configuration, comfort—fresh air, temperature, light, acoustics, furniture—can all affect a student's learning. The tools used by both instructors and students directly affect how information is conveyed, from the display and writing surfaces (blackboards, markerboards, tack surfaces) to digital technologies. For example, if a room is too crowded, stress levels rise, student attention is reduced, and furniture arrangement is restricted. If furniture is incorrectly arranged, sightlines to the instructor or instructional material are limited and the ability to suit the learning or lesson style is restricted. Aesthetics can also play a role, for if student morale suffers, so does motivation to attend school.
Psychological factors and teaching style
Intrinsic motivation, such as a student's own intellectual curiosity or desire to experiment or explore, has been found to sustain learning more effectively than extrinsic motivations such as grades or parental requirements. Rote learning involves repetition in order to reinforce facts in memory, but has been criticized as ineffective and "drill and kill" since it kills intrinsic motivation. Alternatives to rote learning include active learning and meaningful learning.
The speed, accuracy, and retention of learning depend upon the aptitude, attitude, interest, attention, energy level, and motivation of the students. Students who answer a question properly or give good results should be praised. This encouragement increases their ability and helps them produce better results. Certain attitudes, such as always finding fault in a student's answer or provoking or embarrassing the student in front of a class, are counterproductive.
Certain techniques can increase long-term retention:
The spacing effect means that lessons or studying spaced out over time (spaced repetition) are better than cramming
Teaching material to other people
"Self-explaining" (paraphrasing material to oneself) rather than passive reading
Low-stakes quizzing
Epigenetic factors
The underlying molecular basis of learning appears to be dynamic changes in gene expression occurring in brain neurons that are introduced by epigenetic mechanisms. Epigenetic regulation of gene expression involves, most notably, chemical modification of DNA or DNA-associated histone proteins. These chemical modifications can cause long-lasting changes in gene expression. Epigenetic mechanisms involved in learning include the methylation and demethylation of neuronal DNA as well as methylation, acetylation and deacetylation of neuronal histone proteins.
During learning, information processing in the brain involves induction of oxidative modification in neuronal DNA followed by the employment of DNA repair processes that introduce epigenetic alterations. In particular, the DNA repair processes of non-homologous end joining and base excision repair are employed in learning and memory formation.
General cognition-related factors
Adult learning vs children's learning
Learning is often more efficient in children and takes longer or is more difficult with age. A study using neuroimaging identified rapid boosting of the neurotransmitter GABA as a major potential component of the explanation for why that is.
Children's brains contain more "silent synapses" that are inactive until recruited as part of neuroplasticity and flexible learning or memories. Neuroplasticity is heightened during critical or sensitive periods of brain development, mainly referring to brain development during child development.
However, researchers who enrolled late middle-aged participants in university courses suggest that perceived age differences in learning may be a result of differences in time, support, environment, and attitudes, rather than inherent ability.
What humans learn at the early stages, and what they learn to apply, sets them on a course for life or has a disproportionate impact. Adults usually have a higher capacity to select what they learn, to what extent, and how. For example, children may learn the given subjects and topics of school curricula via classroom blackboard-transcription handwriting, instead of being able to choose specific topics/skills or jobs to learn and the styles of learning. Children may not yet have developed consolidated interests, ethics, an interest in purpose and meaningful activities, knowledge about real-world requirements and demands, or priorities.
In animal evolution
Animals gain knowledge in two ways. First is learning—in which an animal gathers information about its environment and uses this information. For example, if an animal eats something that hurts its stomach, it learns not to eat that again. The second is innate knowledge that is genetically inherited. An example of this is when a horse is born and can immediately walk. The horse has not learned this behavior; it simply knows how to do it. In some scenarios, innate knowledge is more beneficial than learned knowledge. However, in other scenarios the opposite is true—animals must learn certain behaviors when it is disadvantageous to have a specific innate behavior. In these situations, learning evolves in the species.
Costs and benefits of learned and innate knowledge
In a changing environment, an animal must constantly gain new information to survive. However, in a stable environment, this same individual needs to gather the information it needs once, and then rely on it for the rest of its life. Therefore, different scenarios better suit either learning or innate knowledge.
Essentially, the cost of obtaining certain knowledge versus the benefit of already having it determines whether an animal evolved to learn in a given situation, or whether it innately knew the information. If the cost of gaining the knowledge outweighs the benefit of having it, then the animal does not evolve to learn in this scenario—but instead, non-learning evolves. However, if the benefit of having certain information outweighs the cost of obtaining it, then the animal is far more likely to evolve to learn this information.
Non-learning is more likely to evolve in two scenarios. If an environment is static and change does not occur, or occurs only rarely, then learning is simply unnecessary. Because there is no need for learning in this scenario—and because learning could prove disadvantageous due to the time it took to learn the information—non-learning evolves. Similarly, if an environment is in a constant state of change, learning is also disadvantageous, as anything learned is immediately irrelevant because of the changing environment. The learned information no longer applies. Essentially, the animal would be just as successful if it took a guess as if it learned. In this situation, non-learning evolves. In fact, a study of Drosophila melanogaster showed that learning can actually lead to a decrease in productivity, possibly because egg-laying behaviors and decisions were impaired by interference from the memories gained from the newly learned materials or because of the cost of energy in learning.
However, in environments where change occurs within an animal's lifetime but is not constant, learning is more likely to evolve. Learning is beneficial in these scenarios because an animal can adapt to the new situation, but can still apply the knowledge that it learns for a somewhat extended period of time. Therefore, learning increases the chances of success as opposed to guessing. An example of this is seen in aquatic environments with landscapes subject to change. In these environments, learning is favored because the fish are predisposed to learn the specific spatial cues where they live.
In plants
In recent years, plant physiologists have examined the physiology of plant behavior and cognition. The concepts of learning and memory are relevant in identifying how plants respond to external cues, a behavior necessary for survival. Monica Gagliano, an Australian professor of evolutionary ecology, makes an argument for associative learning in the garden pea, Pisum sativum. The garden pea is not specific to a region, but rather grows in cooler, higher altitude climates. Gagliano and colleagues' 2016 paper aims to differentiate between innate phototropism behavior and learned behaviors. Plants use light cues in various ways, such as to sustain their metabolic needs and to maintain their internal circadian rhythms. Circadian rhythms in plants are modulated by endogenous bioactive substances that encourage leaf-opening and leaf-closing and are the basis of nyctinastic behaviors.
Gagliano and colleagues constructed a classical conditioning test in which pea seedlings were divided into two experimental categories and placed in Y-shaped tubes. In a series of training sessions, the plants were exposed to light coming down different arms of the tube. In each case, there was a fan blowing lightly down the tube in either the same or opposite arm as the light. The unconditioned stimulus (US) was the predicted occurrence of light and the conditioned stimulus (CS) was the airflow produced by the fan. Previous experimentation shows that plants respond to light by bending and growing towards it through differential cell growth and division on one side of the plant stem mediated by auxin signaling pathways.
During the testing phase of Gagliano's experiment, the pea seedlings were placed in different Y-pipes and exposed to the fan alone. Their direction of growth was subsequently recorded. The 'correct' response by the seedlings was deemed to be growing into the arm where the light was "predicted" from the previous day. The majority of plants in both experimental conditions grew in a direction consistent with the predicted location of light based on the position of the fan the previous day. For example, if the seedling was trained with the fan and light coming down the same arm of the Y-pipe, the following day the seedling grew towards the fan in the absence of light cues, despite the fan being placed in the opposite arm of the Y-pipe. Plants in the control group showed no preference for a particular arm of the Y-pipe. The percentage difference in population behavior observed between the control and experimental groups is meant to distinguish innate phototropism behavior from active associative learning.
While the physiological mechanism of associative learning in plants is not known, Telewski et al. describe a hypothesis that frames photoreception as the basis of mechano-perception in plants. One mechanism for mechano-perception in plants relies on MS ion channels and calcium channels. Mechanosensory proteins in cell lipid bilayers, known as MS ion channels, are activated once they are physically deformed in response to pressure or tension. Ca2+-permeable ion channels are "stretch-gated" and allow for the influx of osmolytes and calcium, a well-known second messenger, into the cell. This ion influx triggers a passive flow of water into the cell down its osmotic gradient, effectively increasing turgor pressure and causing the cell to depolarize. Gagliano hypothesizes that the basis of associative learning in Pisum sativum is the coupling of mechanosensory and photosensory pathways, mediated by auxin signaling pathways. The result is directional growth to maximize a plant's capture of sunlight.
Gagliano et al. published another paper on habituation behaviors in Mimosa pudica, whereby the innate behavior of the plant was diminished by repeated exposure to a stimulus. There has been controversy around this paper and, more generally, around the topic of plant cognition. Charles Abramson, a psychologist and behavioral biologist, says that part of the issue of why scientists disagree about whether plants have the ability to learn is that researchers do not use a consistent definition of "learning" and "cognition". Similarly, Michael Pollan, an author and journalist, says in his piece The Intelligent Plant that researchers do not doubt Gagliano's data but rather her language, specifically her use of the terms "learning" and "cognition" with respect to plants. A direction for future research is testing whether circadian rhythms in plants modulate learning and behavior and surveying researchers' definitions of "cognition" and "learning".
Machine learning
Machine learning, a branch of artificial intelligence, concerns the construction and study of systems that can learn from data. For example, a machine learning system could be trained on email messages to learn to distinguish between spam and non-spam messages. Most machine learning models are based on probabilistic theories in which each input (e.g. an image) is associated with a probability of producing the desired output.
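As a concrete (and deliberately tiny) illustration of the spam example above, the sketch below trains a naive Bayes classifier from word counts; the training messages, priors, and smoothing constant are invented for the example and are not from any particular library or dataset.

```python
# Minimal naive Bayes spam filter: estimate P(word|class) from word
# counts with add-one smoothing, then score a new message by summed
# log-probabilities. Training data is invented for illustration.

import math
from collections import Counter

spam = ["win money now", "free money offer"]
ham = ["meeting notes attached", "lunch tomorrow?"]

def counts(messages):
    c = Counter()
    for m in messages:
        c.update(m.lower().split())
    return c

spam_counts, ham_counts = counts(spam), counts(ham)
vocab = set(spam_counts) | set(ham_counts)

def log_score(message, class_counts, prior):
    total = sum(class_counts.values())
    score = math.log(prior)
    for word in message.lower().split():
        # add-one smoothing avoids log(0) for unseen words
        p = (class_counts[word] + 1) / (total + len(vocab))
        score += math.log(p)
    return score

msg = "free money"
is_spam = log_score(msg, spam_counts, 0.5) > log_score(msg, ham_counts, 0.5)
print(msg, "->", "spam" if is_spam else "ham")
```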
Types
Phases
See also
Information theory
Types of education
References
Notes
Further reading
External links
How People Learn: Brain, Mind, Experience, and School (expanded edition) published by the National Academies Press
Applying Science of Learning in Education: Infusing Psychological Science into the Curriculum published by the American Psychological Association
Memorization
Cognitive science
Developmental psychology
Intelligence
Neuropsychological assessment
Systems science
Articles containing video clips | Learning | [
"Biology"
] | 8,425 | [
"Behavioural sciences",
"Behavior",
"Developmental psychology"
] |
183,435 | https://en.wikipedia.org/wiki/Epistemic%20community | An epistemic community is a network of professionals with recognized knowledge and skill in a particular issue-area. Members share a set of normative beliefs, which provide a value-based foundation for their actions. They also share causal beliefs, which result from their analysis of practices that contribute to a set of problems in their issue-area and which allow them to see the multiple links between policy and outcomes. In addition, they share notions of validity, that is, internally defined criteria for validating knowledge in their area of know-how. The members, however, may come from many different professions. Epistemic communities also have a common set of practices associated with a set of problems towards which their professional knowledge is directed, in the belief that human welfare will benefit as a result. Communities evolve independently and without the influence of authority or government. They do not have to be large; some are made up of only a few members. Even non-members can have an influence on epistemic communities. However, if a community loses consensus, its authority decreases.
Definition
The definitive conceptual framework of an epistemic community is widely accepted as that of Peter M. Haas. He describes them as "...a network of professionals with recognised expertise and competence in a particular domain and an authoritative claim to policy relevant knowledge within that domain or issue-area."
As discussed in Haas's definitive text, an epistemic community is made up of a diverse range of academic and professional experts, who are allied on the basis of four unifying characteristics:
a shared set of normative and principled beliefs which provide a value-based rationale for the social action of community members;
shared causal beliefs which are derived from their analysis of practices leading or contributing to a central set of problems in their domain and which then serve as the basis for elucidating the multiple linkages between possible policy actions and desired outcomes;
shared notions of validity, i.e. intersubjective, internally defined criteria for weighing and validating knowledge in the domain of their expertise; and
a common policy enterprise, or a set of common practices associated with a set of problems to which their professional competence is directed, presumably out of the conviction that human welfare will be enhanced as a consequence.
Thus, when viewed as an epistemic community, the overall enterprise of the expert members emerges as the product of a combination of shared beliefs and more subtle conformity pressures, rather than a direct drive for concurrence (Michael J. Mazarr). Epistemic communities also have a "normative component", meaning the end goal is always the betterment of society rather than the self-gain of the community itself (Peter M. Haas).
Most researchers carefully distinguish between epistemic forms of community and "real" or "bodily" community, which consists of people sharing risk, especially bodily risk. It is also problematic to draw the line between modern ideas and more ancient ones, for example, Joseph Campbell's concept of myth from cultural anthropology and Carl Jung's concept of archetype in psychology. Some consider forming an epistemic community a deep human need, and ultimately a mythical or even religious obligation. Among these, notably, are E. O. Wilson and Ellen Dissanayake, an American historian of aesthetics who famously argued that almost all of our broadly shared conceptual metaphors centre on one basic idea of safety: that of "home".
From this view, an epistemic community may be seen as a group of people who do not have any specific history together, but search for a common idea of home as if forming an intentional community. For example, an epistemic community can be found in a network of professionals from a wide variety of disciplines and backgrounds.
Although the members of an epistemic community may originate from a variety of academic or professional backgrounds, they are linked by a set of unifying characteristics for the promotion of collective amelioration and not collective gain. This is termed their "normative component". In the big picture, epistemic communities are socio-psychological entities that create and justify knowledge. Such communities can consist of only two persons and yet gain an important role in building knowledge on any specific subject. Miika Vähämaa has recently suggested that epistemic communities consist of persons who are able to understand, discuss and gain self-esteem concerning the matters being discussed.
Some theorists argue that an epistemic community may consist of those who accept one version of a story, or one version of validating a story. Michel Foucault referred more elaborately to mathesis as a rigorous episteme suitable for enabling cohesion of a discourse and thus uniting a community of its followers. In philosophy of science and systems science the process of forming a self-maintaining epistemic community is sometimes called a mindset. In politics, a tendency or faction is usually described in very similar terms.
Emergence
Epistemic communities came to be because of the rapid professionalization of government agencies. The Columbia Basin Inter-Agency Committee was created by U.S. President Franklin D. Roosevelt to coordinate the planning process. However, it did not actually participate in the planning process, but rather was the venue that the Army Corps of Engineers and the Bureau of Reclamation used to divide construction projects. The failure of the Columbia Basin Inter-Agency Committee to be part of the planning process shows that “committees imposed from the top may be less likely to promote coordination than to provide agency officials with a means to enhance their autonomy” (Thomas 1997, 225). Another reason why epistemic communities came to be is that decision makers began turning to experts to help them understand issues, because there were more issues and all were more complicated. This caused greater interest in planning and future-oriented research, which led to the establishment of environmental and natural resource agencies in 118 countries from 1972 to 1982. The growing professionalization of bureaucracies brought greater respect for experts, especially scientists. The first achievement by epistemic communities was the Anti-Ballistic Missile Treaty between the United States and the Soviet Union.
Role in international relations
Epistemic communities influence policy by providing knowledge to policy makers. Uncertainty plays a large role in an epistemic community's influence, because its members hold the knowledge that policy makers need to create the desired policy outcomes. According to Robert Keohane, they fill the absence of “a research program [that shows] in particular studies that it can illuminate important issues in world politics” (Adler/Haas 1992, 367). They can influence the setting of standards and the development of regulations, as well as help coordinate the structure of international relations. The communities exert influence through communicative action, diffusing ideas nationally, transnationally, and internationally. An epistemic community's scope of cooperation is directly linked to the comprehensiveness of its beliefs. The strength of cooperative agreements depends on the power that the epistemic community has gathered within agencies and governments. The duration of cooperation is determined by the epistemic community's continued power. The most important contributions of epistemic communities are that they direct attention towards the conditions which will likely cause a coalition to form and the possibilities of expansion, they insist on the importance of awareness and knowledge in negotiation, and they deepen the knowledge of how various actors define their interests.
Role in international policy coordination
Epistemic communities usually aid in issues concerning a technical nature. Normally, they guide decision makers towards the appropriate norms and institutions by framing and institutionalizing the issue-area.
Epistemic communities are also a source of policy innovation. Communities have indirect and direct roles in policy coordination by diffusing ideas and influencing the positions adopted. Policy evolution occurs in four steps: policy innovation, diffusion, selection, and persistence. By framing the range of political controversy surrounding an issue, defining state interests, and setting standards, epistemic communities can define the best solution to a problem. The definition of interests is especially important because there are many different views of what is a priority for a government. Intellectual innovations (produced by epistemic communities) are carried by domestic or international organizations (of which epistemic communities are a part) and are then selected through the political process. Peter M. Haas argued “that epistemic communities help to explain the emergence and character of cooperation at the international level” (Thomas 1997, 223). The shared interests they represent outlast disagreements about specific issues. Epistemic communities create a reality that is nevertheless constrained by political factors and related considerations. If an epistemic community only acquires power in one country or international body, then its power is a direct effect of that country or body's power.
Impact
Epistemic communities became institutionalized in the short term through changes in the policy-making process and by persuading others that their approach is the right one. Long-term effects occur through socialization. There are a myriad of examples of the impact that epistemic communities have had on public policy. Arms control ideas are reflected in the ABM Treaty and the agreements that followed it during the Cold War. Epistemic communities brought attention to chlorofluorocarbons and their polluting consequences. This realization led to the creation of international environmental agencies in a majority of the world's governments. It also caused environmental decisions to go through the United Nations Environment Programme rather than through the General Agreement on Tariffs and Trade (GATT), which would normally handle such disputes. Such was the case with the 1989 Basel Convention on the Control of Transboundary Movements of Hazardous Wastes and Their Disposal.
An epistemic community helped identify issues and direct the parameters that provided an outline for GATT and some free trade agreements. Epistemic communities have also helped in telecommunications agreements and economic issues around the world. In telecommunications, “without the influence of an epistemic community of engineers concerned about design and international coordination of telecommunications equipment and standards, the regime would not have moved in the direction of multilateral agreements” (Adler/Haas 1992, 377).
Epistemic communities were directly involved in the creation of the Board of Plant Genetic Resources, as well as in the creation of food aid and the way that food aid functions. Epistemic communities have also brought attention to habitat fragmentation and the decline of biodiversity on the planet. This has led to reform throughout the world, creating conservation agencies and policies. In California, an ecological epistemic community succeeded in creating the Memorandum of Understanding on Biological Diversity (MOU on Biodiversity), an agreement covering habitats and species across California for protection; at the federal level, the Endangered Species Act applies throughout the United States. Epistemic communities have a direct effect on agenda setting in intergovernmental organizations and an indirect effect on the behavior of small countries. The ideas and policies of an epistemic community can become orthodoxy through the work of that community and through socialization. However, their effect is limited because a shock is often needed to cause policy makers to seek out an epistemic community.
Role in environmental governance
The global environmental agenda is increasing in complexity and interconnectedness. Often environmental policymakers do not understand the technical aspects of the issues they are regulating. This affects their ability to define state interests and develop suitable solutions within cross-boundary environmental regulation.
As a result, conditions of uncertainty are produced which stimulate a demand for new information. Environmental crises play a significant role in exacerbating conditions of uncertainty for decision-makers. Political elites seek expert knowledge and advice to reduce this technical uncertainty, on issues including:
the scale of environmental problems,
cause-and-effect interrelations of ecological processes, and
how (science-based) policy options will play out.
Therefore, epistemic communities can frame environmental problems as they see fit, and environmental decision-makers begin to make policy-shaping decisions based on these specific depictions.
The initial identification and bounding of environmental issues by epistemic community members is very influential. They can limit what would be preferable in terms of national interests, frame what issues are available for collective debate, and delimit the policy alternatives deemed possible. The political effects are not easily reversible. The epistemic community vision is institutionalised as a collective set of understandings reflected in any subsequent policy choices.
This is a key point of power. Policy actors are persuaded to conform to the community’s consensual, knowledge-driven ideas without the epistemic community requiring a more material form of power. Members of successful communities can become strong actors at the national and international level as decision-makers attach responsibility to their advice.
As a result, epistemic communities have a direct input on how international cooperation may develop in the long term. Transboundary environmental problems require a unified response rather than patchwork policy efforts, but this is problematic due to enduring differences of state interest and concerns over reciprocity. The transnational nature of epistemic communities means numerous states may absorb new patterns of logic and behaviour, leading to the adoption of concordant state policies. Therefore, the likelihood of convergent state behaviour and associated international coordination is increased.
International cooperation is further facilitated if powerful states are involved, as a quasi-structure is created containing the reasons, expectations and arguments for coordination. Also, if epistemic community members have developed authoritative bureaucratic reputations in various countries, they are likely to participate in the creation and running of national and international institutions that directly pursue international policy coordination, for example, a regulatory agency, think tank or governmental research body.
As a result, epistemic community members in a number of different countries can become connected through intergovernmental channels, as well as existing community channels, producing a transnational governance network, and facilitating the promotion of international policy coordination. An example of a scientific epistemic community in action is the 1975 collectively negotiated Mediterranean Action Plan (MAP), a marine pollution control regime for the Mediterranean Sea developed by the United Nations Environment Programme.
Limitations
Several objections have been formulated to epistemic communities in the study of the circulation of international expertise. Firstly, one should be cautious about the risk of retrospective thinking when conceptualizing epistemic communities. Indeed, the solutions proposed by expert groups which are eventually adopted by policy makers are only one among many that have been formulated by the scientific community. The epistemic community proposition thus risks being confirmed only in hindsight, since the solutions adopted are necessarily those "tolerable" to policy makers, who choose among all those proposed by the scientific community. Secondly, it is difficult to assess the limits of the term "experts". For instance, the G7 "experts" would in fact be civil servants from the member states of the organization, who therefore cannot claim the scientific legitimacy of researchers. Finally, this hypothesis does not take into consideration the influence of national contexts on the agenda-setting of epistemic communities. The experts are restricted to the limits of the tolerable in their own national context, which is also crucial in the adoption of the solutions they propose at the local level.
See also
Epistemic community (international relations)
Global governance
Knowledge falsification
Network governance
Social network
Internationalism
Transnationalism
International community
Statism
Governmentality
References
Adler, Emanuel. “The Emergence of Cooperation: National Epistemic Communities and the International Evolution of the Idea of Nuclear Arms Control.” International Organization. Vol. 46, No. 1. The MIT Press Winter, 1992. pp. 101–145.
Adler, Emanuel and Peter M. Haas. “Conclusion: Epistemic Communities, World Order, and the Creation of a Reflective Research Program.” International Organization. Vol. 46. No. 1. Winter. MIT Press, 1992. P. 367–390.
Haas, Peter M. “Epistemic Communities and International Policy Coordination.” International Organization. Vol. 46. No. 1. Winter. MIT Press, 1992. p. 1-35.
Haas, Peter M. “Do Regimes Matter? Epistemic Communities and Mediterranean Pollution.” International Organization. Vol. 43. No. 3. The MIT Press Summer, 1989. pp. 377–403.
Kolodziej, Edward A. “Epistemic Communities Searching for Regional Cooperation.” Mershon International Studies Review. Vol. 41. No. 1 Blackwell Publishing May, 1997. pp. 93–98.
Sebenius, James K. “Challenging Conventional Explanations of International Cooperation: Negotiation Analysis and the Case of Epistemic Communities.” International Organization. Vol. 46, No. 1. The MIT Press Winter, 1992. pp. 323–365.
Thomas, Craig W. “Public Management as Interagency Cooperation: Testing Epistemic Community Theory at the Domestic Level.” Journal of Public Administration Research and Theory. J-PART. Vol. 7. No. 2. Oxford University Press, Apr. 1997. pp. 221–246.
Cross, Mai'a K. Davis. "Rethinking Epistemic Communities Twenty Years Later." Review of International Studies, Vol. 39. No. 1, Jan 2013, pp. 137–160.
Cross, Mai'a K. Davis. "Security Integration in Europe: How Knowledge-based Networks are Transforming the European Union." University of Michigan Press, 2011.
External links
UNEP Mediterranean Action Plan website
Types of communities
Social epistemology
International relations
Concepts in epistemology | Epistemic community | [
"Technology"
] | 3,543 | [
"Social epistemology",
"Science and technology studies"
] |
183,478 | https://en.wikipedia.org/wiki/Proof%20theory | Proof theory is a major branch of mathematical logic and theoretical computer science within which proofs are treated as formal mathematical objects, facilitating their analysis by mathematical techniques. Proofs are typically presented as inductively-defined data structures such as lists, boxed lists, or trees, which are constructed according to the axioms and rules of inference of a given logical system. Consequently, proof theory is syntactic in nature, in contrast to model theory, which is semantic in nature.
Some of the major areas of proof theory include structural proof theory, ordinal analysis, provability logic, reverse mathematics, proof mining, automated theorem proving, and proof complexity. Much research also focuses on applications in computer science, linguistics, and philosophy.
History
Although the formalisation of logic was much advanced by the work of such figures as Gottlob Frege, Giuseppe Peano, Bertrand Russell, and Richard Dedekind, the story of modern proof theory is often seen as being established by David Hilbert, who initiated what is called Hilbert's program in the Foundations of Mathematics. The central idea of this program was that if we could give finitary proofs of consistency for all the sophisticated formal theories needed by mathematicians, then we could ground these theories by means of a metamathematical argument, which shows that all of their purely universal assertions (more technically their provable sentences) are finitarily true; once so grounded we do not care about the non-finitary meaning of their existential theorems, regarding these as pseudo-meaningful stipulations of the existence of ideal entities.
The failure of the program was induced by Kurt Gödel's incompleteness theorems, which showed that any ω-consistent theory that is sufficiently strong to express certain simple arithmetic truths, cannot prove its own consistency, which on Gödel's formulation is a Π^0_1 sentence. However, modified versions of Hilbert's program emerged and research has been carried out on related topics. This has led, in particular, to:
Refinement of Gödel's result, particularly J. Barkley Rosser's refinement, weakening the above requirement of ω-consistency to simple consistency;
Axiomatisation of the core of Gödel's result in terms of a modal language, provability logic;
Transfinite iteration of theories, due to Alan Turing and Solomon Feferman;
The discovery of self-verifying theories, systems strong enough to talk about themselves, but too weak to carry out the diagonal argument that is the key to Gödel's unprovability argument.
In parallel to the rise and fall of Hilbert's program, the foundations of structural proof theory were being laid. Jan Łukasiewicz suggested in 1926 that one could improve on Hilbert systems as a basis for the axiomatic presentation of logic if one allowed the drawing of conclusions from assumptions in the inference rules of the logic. In response to this, Stanisław Jaśkowski (1929) and Gerhard Gentzen (1934) independently provided such systems, called calculi of natural deduction, with Gentzen's approach introducing the idea of symmetry between the grounds for asserting propositions, expressed in introduction rules, and the consequences of accepting propositions in the elimination rules, an idea that has proved very important in proof theory. Gentzen (1934) further introduced the idea of the sequent calculus, a calculus advanced in a similar spirit that better expressed the duality of the logical connectives, and went on to make fundamental advances in the formalisation of intuitionistic logic, and provide the first combinatorial proof of the consistency of Peano arithmetic. Together, the presentation of natural deduction and the sequent calculus introduced the fundamental idea of analytic proof to proof theory.
Structural proof theory
Structural proof theory is the subdiscipline of proof theory that studies the specifics of proof calculi. The three most well-known styles of proof calculi are:
The Hilbert calculi
The natural deduction calculi
The sequent calculi
Each of these can give a complete and axiomatic formalization of propositional or predicate logic of either the classical or intuitionistic flavour, almost any modal logic, and many substructural logics, such as relevance logic or linear logic. Indeed, it is unusual to find a logic that resists being represented in one of these calculi.
Proof theorists are typically interested in proof calculi that support a notion of analytic proof. The notion of analytic proof was introduced by Gentzen for the sequent calculus; there the analytic proofs are those that are cut-free. Much of the interest in cut-free proofs comes from the subformula property: every formula in the end sequent of a cut-free proof is a subformula of one of the premises. This allows one to show consistency of the sequent calculus easily; if the empty sequent were derivable it would have to be a subformula of some premise, which it is not. Gentzen's midsequent theorem, the Craig interpolation theorem, and Herbrand's theorem also follow as corollaries of the cut-elimination theorem.
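For concreteness, the cut rule at stake can be displayed as follows; this is standard sequent-calculus notation, not part of the original article.

```latex
% The cut rule of Gentzen's sequent calculus. The cut formula A appears in
% the premises but not in the conclusion, so uses of (cut) can introduce
% formulas that are not subformulas of the end sequent.
\[
\frac{\Gamma \vdash \Delta, A \qquad A, \Sigma \vdash \Pi}
     {\Gamma, \Sigma \vdash \Delta, \Pi}\;(\text{cut})
\]
% Gentzen's Hauptsatz removes all uses of (cut); in the resulting cut-free
% derivation every occurring formula is a subformula of the end sequent.
```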
Gentzen's natural deduction calculus also supports a notion of analytic proof, as shown by Dag Prawitz. The definition is slightly more complex: we say the analytic proofs are the normal forms, which are related to the notion of normal form in term rewriting. More exotic proof calculi such as Jean-Yves Girard's proof nets also support a notion of analytic proof.
A particular family of analytic proofs arising in reductive logic is that of focused proofs, which characterise a large family of goal-directed proof-search procedures. The ability to transform a proof system into a focused form is a good indication of its syntactic quality, in a manner similar to how admissibility of cut shows that a proof system is syntactically consistent.
Structural proof theory is connected to type theory by means of the Curry–Howard correspondence, which observes a structural analogy between the process of normalisation in the natural deduction calculus and beta reduction in the typed lambda calculus. This provides the foundation for the intuitionistic type theory developed by Per Martin-Löf, and is often extended to a three way correspondence, the third leg of which are the cartesian closed categories.
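A minimal illustration of the correspondence, in standard typed lambda calculus notation (my example, not drawn from the article):

```latex
% Implication introduction corresponds to lambda abstraction, so the
% two-step natural-deduction proof of A -> (B -> A) is the K combinator:
\[
\lambda x^{A}.\; \lambda y^{B}.\; x \;:\; A \to (B \to A)
\]
% Normalising a proof (removing an introduction immediately followed by
% the matching elimination) corresponds to beta reduction of the term:
\[
(\lambda x^{A}.\, x)\, t \;\longrightarrow_{\beta}\; t
\]
```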
Other research topics in structural theory include analytic tableau, which apply the central idea of analytic proof from structural proof theory to provide decision procedures and semi-decision procedures for a wide range of logics, and the proof theory of substructural logics.
Ordinal analysis
Ordinal analysis is a powerful technique for providing combinatorial consistency proofs for subsystems of arithmetic, analysis, and set theory. Gödel's second incompleteness theorem is often interpreted as demonstrating that finitistic consistency proofs are impossible for theories of sufficient strength. Ordinal analysis allows one to measure precisely the infinitary content of the consistency of theories. For a consistent recursively axiomatized theory T, one can prove in finitistic arithmetic that the well-foundedness of a certain transfinite ordinal implies the consistency of T. Gödel's second incompleteness theorem implies that the well-foundedness of such an ordinal cannot be proved in the theory T.
Consequences of ordinal analysis include (1) consistency of subsystems of classical second order arithmetic and set theory relative to constructive theories, (2) combinatorial independence results, and (3) classifications of provably total recursive functions and provably well-founded ordinals.
Ordinal analysis was originated by Gentzen, who proved the consistency of Peano Arithmetic using transfinite induction up to ordinal ε0. Ordinal analysis has been extended to many fragments of first and second order arithmetic and set theory. One major challenge has been the ordinal analysis of impredicative theories. The first breakthrough in this direction was Takeuti's proof of the consistency of Π^1_1-CA0 using the method of ordinal diagrams.
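For concreteness, the ordinal in Gentzen's result can be written out; the definitions below are standard and assumed rather than quoted from the article.

```latex
% epsilon_0 is the least ordinal closed under omega-exponentiation:
\[
\varepsilon_0 \;=\; \sup \{\, \omega,\ \omega^{\omega},\ \omega^{\omega^{\omega}},\ \dots \,\},
\qquad \omega^{\varepsilon_0} = \varepsilon_0 .
\]
% Gentzen's theorem: primitive recursive arithmetic plus quantifier-free
% transfinite induction up to epsilon_0 proves Con(PA), while PA itself
% proves induction only up to each ordinal strictly below epsilon_0.
```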
Provability logic
Provability logic is a modal logic, in which the box operator is interpreted as 'it is provable that'. The point is to capture the notion of a proof predicate of a reasonably rich formal theory. As basic axioms of the provability logic GL (Gödel-Löb), which captures provable in Peano Arithmetic, one takes modal analogues of the Hilbert-Bernays derivability conditions and Löb's theorem (if it is provable that the provability of A implies A, then A is provable).
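A compact axiomatization of GL in standard modal notation (the article states it only in words; this rendering is assumed):

```latex
% GL over classical propositional logic, with modus ponens:
\[
\begin{aligned}
&\text{(K)} && \Box(A \to B) \to (\Box A \to \Box B)\\
&\text{(L\"ob)} && \Box(\Box A \to A) \to \Box A\\
&\text{(Nec)} && \text{from } \vdash A \text{ infer } \vdash \Box A
\end{aligned}
\]
% Reading the box as "provable in Peano Arithmetic", the formalized second
% incompleteness theorem is a GL theorem:
\[
\neg \Box \bot \;\to\; \neg \Box \neg \Box \bot .
\]
```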
Some of the basic results concerning the incompleteness of Peano Arithmetic and related theories have analogues in provability logic. For example, it is a theorem in GL that if a contradiction is not provable then it is not provable that a contradiction is not provable (Gödel's second incompleteness theorem). There are also modal analogues of the fixed-point theorem. Robert Solovay proved that the modal logic GL is complete with respect to Peano Arithmetic. That is, the propositional theory of provability in Peano Arithmetic is completely represented by the modal logic GL. This straightforwardly implies that propositional reasoning about provability in Peano Arithmetic is complete and decidable.
Other research in provability logic has focused on first-order provability logic, polymodal provability logic (with one modality representing provability in the object theory and another representing provability in the meta-theory), and interpretability logics intended to capture the interaction between provability and interpretability. Some very recent research has involved applications of graded provability algebras to the ordinal analysis of arithmetical theories.
Reverse mathematics
Reverse mathematics is a program in mathematical logic that seeks to determine which axioms are required to prove theorems of mathematics. The field was founded by Harvey Friedman. Its defining method can be described as "going backwards from the theorems to the axioms", in contrast to the ordinary mathematical practice of deriving theorems from axioms. The reverse mathematics program was foreshadowed by results in set theory such as the classical theorem that the axiom of choice and Zorn's lemma are equivalent over ZF set theory. The goal of reverse mathematics, however, is to study possible axioms of ordinary theorems of mathematics rather than possible axioms for set theory.
In reverse mathematics, one starts with a framework language and a base theory—a core axiom system—that is too weak to prove most of the theorems one might be interested in, but still powerful enough to develop the definitions necessary to state these theorems. For example, to study the theorem "Every bounded sequence of real numbers has a supremum" it is necessary to use a base system that can speak of real numbers and sequences of real numbers.
For each theorem that can be stated in the base system but is not provable in the base system, the goal is to determine the particular axiom system (stronger than the base system) that is necessary to prove that theorem. To show that a system S is required to prove a theorem T, two proofs are required. The first proof shows T is provable from S; this is an ordinary mathematical proof along with a justification that it can be carried out in the system S. The second proof, known as a reversal, shows that T itself implies S; this proof is carried out in the base system. The reversal establishes that no axiom system S′ that extends the base system can be weaker than S while still proving T.
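Schematically, writing the system S as the base theory B plus an extra axiom σ (a sketch under standard conventions, not quoted from the article):

```latex
% The ordinary proof and the reversal together give equivalence over B:
\[
B + \sigma \vdash T
\qquad\text{and}\qquad
B + T \vdash \sigma
\qquad\Longrightarrow\qquad
B \vdash \sigma \leftrightarrow T .
\]
% B |- sigma -> T is the ordinary mathematical proof, checked to go through
% in S; B |- T -> sigma is the reversal, carried out inside the weak base
% theory B.
```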
One striking phenomenon in reverse mathematics is the robustness of the Big Five axiom systems. In order of increasing strength, these systems are named by the initialisms RCA0, WKL0, ACA0, ATR0, and Π^1_1-CA0. Nearly every theorem of ordinary mathematics that has been reverse mathematically analyzed has been proven equivalent to one of these five systems. Much recent research has focused on combinatorial principles that do not fit neatly into this framework, like RT^2_2 (Ramsey's theorem for pairs).
Research in reverse mathematics often incorporates methods and techniques from recursion theory as well as proof theory.
Functional interpretations
Functional interpretations are interpretations of non-constructive theories in functional ones. Functional interpretations usually proceed in two stages. First, one "reduces" a classical theory C to an intuitionistic one I. That is, one provides a constructive mapping that translates the theorems of C to the theorems of I. Second, one reduces the intuitionistic theory I to a quantifier free theory of functionals F. These interpretations contribute to a form of Hilbert's program, since they prove the consistency of classical theories relative to constructive ones. Successful functional interpretations have yielded reductions of infinitary theories to finitary theories and impredicative theories to predicative ones.
Functional interpretations also provide a way to extract constructive information from proofs in the reduced theory. As a direct consequence of the interpretation one usually obtains the result that any recursive function whose totality can be proven either in I or in C is represented by a term of F. If one can provide an additional interpretation of F in I, which is sometimes possible, this characterization is in fact usually shown to be exact. It often turns out that the terms of F coincide with a natural class of functions, such as the primitive recursive or polynomial-time computable functions. Functional interpretations have also been used to provide ordinal analyses of theories and classify their provably recursive functions.
The study of functional interpretations began with Kurt Gödel's interpretation of intuitionistic arithmetic in a quantifier-free theory of functionals of finite type. This interpretation is commonly known as the Dialectica interpretation. Together with the double-negation interpretation of classical logic in intuitionistic logic, it provides a reduction of classical arithmetic to intuitionistic arithmetic.
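A sketch of the double-negation (Gödel–Gentzen) translation N mentioned here, in standard notation (assumed, not quoted):

```latex
% Clauses of the negative translation, for atomic P:
\[
\begin{aligned}
P^{N} &= \neg\neg P, &\qquad (A \wedge B)^{N} &= A^{N} \wedge B^{N},\\
(A \to B)^{N} &= A^{N} \to B^{N}, &\qquad (A \vee B)^{N} &= \neg(\neg A^{N} \wedge \neg B^{N}),\\
(\forall x\, A)^{N} &= \forall x\, A^{N}, &\qquad (\exists x\, A)^{N} &= \neg \forall x\, \neg A^{N}.
\end{aligned}
\]
% If classical (Peano) arithmetic proves A, then intuitionistic (Heyting)
% arithmetic proves A^N; composing this with the Dialectica interpretation
% reduces classical arithmetic to a quantifier-free theory of functionals.
```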
Formal and informal proof
The informal proofs of everyday mathematical practice are unlike the formal proofs of proof theory. They are rather like high-level sketches that would allow an expert to reconstruct a formal proof at least in principle, given enough time and patience. For most mathematicians, writing a fully formal proof is too pedantic and long-winded to be practical, so formal proofs are not in common use.
Formal proofs are constructed with the help of computers in interactive theorem proving.
Significantly, these proofs can be checked automatically, also by computer. Checking formal proofs is usually simple, whereas finding proofs (automated theorem proving) is generally hard. An informal proof in the mathematics literature, by contrast, requires weeks of peer review to be checked, and may still contain errors.
Proof-theoretic semantics
In linguistics, type-logical grammar, categorial grammar and Montague grammar apply formalisms based on structural proof theory to give a formal natural language semantics.
See also
Intermediate logic
Model theory
Proof (truth)
Proof techniques
Sequent calculus
References
J. Avigad and E.H. Reck (2001). "'Clarifying the nature of the infinite': the development of metamathematics and proof theory". Carnegie-Mellon Technical Report CMU-PHIL-120.
S. Buss, ed. (1998) Handbook of Proof Theory. Elsevier.
G. Gentzen (1935/1969). "Investigations into logical deduction". In M. E. Szabo, ed. Collected Papers of Gerhard Gentzen. North-Holland. Translated by Szabo from "Untersuchungen über das logische Schliessen", Mathematische Zeitschrift v. 39, pp. 176–210, 405–431.
A. S. Troelstra and H. Schwichtenberg (1996). Basic Proof Theory, Cambridge Tracts in Theoretical Computer Science, Cambridge University Press.
External links
J. von Plato (2008). The Development of Proof Theory. Stanford Encyclopedia of Philosophy.
Metalogic | Proof theory | [
"Mathematics"
] | 3,319 | [
"Mathematical logic",
"Proof theory"
] |
183,501 | https://en.wikipedia.org/wiki/Ontology%20Inference%20Layer | OIL (Ontology Inference Layer or Ontology Interchange Language) can be regarded as an ontology infrastructure for the Semantic Web. OIL is based on concepts developed in Description Logic (DL) and frame-based systems and is compatible with RDFS.
OIL was developed by Dieter Fensel, Frank van Harmelen (Vrije Universiteit, Amsterdam) and Ian Horrocks (University of Manchester) as part of the IST OntoKnowledge project.
Much of the work in OIL was subsequently incorporated into DAML+OIL and the Web Ontology Language (OWL).
See also
DARPA Agent Markup Language (DAML)
DAML+OIL
Ontology
References
Knowledge representation languages
Ontology (information science)
"Technology"
] | 153 | [
"Computing stubs",
"Computer science",
"Computer science stubs"
] |
183,508 | https://en.wikipedia.org/wiki/Metamathematics | Metamathematics is the study of mathematics itself using mathematical methods. This study produces metatheories, which are mathematical theories about other mathematical theories. Emphasis on metamathematics (and perhaps the creation of the term itself) owes itself to David Hilbert's attempt to secure the foundations of mathematics in the early part of the 20th century. Metamathematics provides "a rigorous mathematical technique for investigating a great variety of foundation problems for mathematics and logic" (Kleene 1952, p. 59). An important feature of metamathematics is its emphasis on differentiating between reasoning from inside a system and from outside a system. An informal illustration of this is categorizing the proposition "2+2=4" as belonging to mathematics while categorizing the proposition "'2+2=4' is valid" as belonging to metamathematics.
History
Metamathematical metatheorems about mathematics itself were originally differentiated from ordinary mathematical theorems in the 19th century to focus on what was then called the foundational crisis of mathematics. Richard's paradox (Richard 1905) concerning certain 'definitions' of real numbers in the English language is an example of the sort of contradictions that can easily occur if one fails to distinguish between mathematics and metamathematics. Something similar can be said around the well-known Russell's paradox (Does the set of all those sets that do not contain themselves contain itself?).
Metamathematics was intimately connected to mathematical logic, so that the early histories of the two fields, during the late 19th and early 20th centuries, largely overlap. More recently, mathematical logic has often included the study of new pure mathematics, such as set theory, category theory, recursion theory and pure model theory.
Serious metamathematical reflection began with the work of Gottlob Frege, especially his Begriffsschrift, published in 1879.
David Hilbert was the first to invoke the term "metamathematics" with regularity (see Hilbert's program), in the early 20th century. In his hands, it meant something akin to contemporary proof theory, in which finitary methods are used to study various axiomatized mathematical theories (Kleene 1952, p. 55).
Other prominent figures in the field include Bertrand Russell, Thoralf Skolem, Emil Post, Alonzo Church, Alan Turing, Stephen Kleene, Willard Quine, Paul Benacerraf, Hilary Putnam, Gregory Chaitin, Alfred Tarski, Paul Cohen and Kurt Gödel.
Today, metalogic and metamathematics broadly overlap, and both have been substantially subsumed by mathematical logic in academia.
Milestones
The discovery of hyperbolic geometry
The discovery of hyperbolic geometry had important philosophical consequences for metamathematics. Before its discovery there was just one geometry and one mathematics; the idea that another geometry could exist was considered improbable.
When Gauss discovered hyperbolic geometry, it is said that he did not publish anything about it out of fear of the "uproar of the Boeotians", which would ruin his status as princeps mathematicorum (Latin, "the Prince of Mathematicians").
The "uproar of the Boeotians" came and went, and gave an impetus to metamathematics and great improvements in mathematical rigour, analytical philosophy and logic.
Begriffsschrift
Begriffsschrift (German for, roughly, "concept-script") is a book on logic by Gottlob Frege, published in 1879, and the formal system set out in that book.
Begriffsschrift is usually translated as concept writing or concept notation; the full title of the book identifies it as "a formula language, modeled on that of arithmetic, of pure thought." Frege's motivation for developing his formal approach to logic resembled Leibniz's motivation for his calculus ratiocinator (although in his foreword Frege denies having achieved this aim, and denies that constructing an ideal language like Leibniz's was his main aim, declaring such a task hard and idealistic, though not impossible). Frege went on to employ his logical calculus in his research on the foundations of mathematics, carried out over the next quarter century.
Principia Mathematica
Principia Mathematica, or "PM" as it is often abbreviated, was an attempt to describe a set of axioms and inference rules in symbolic logic from which all mathematical truths could in principle be proven. As such, this ambitious project is of great importance in the history of mathematics and philosophy, being one of the foremost products of the belief that such an undertaking may be achievable. However, in 1931, Gödel's incompleteness theorem proved definitively that PM, and in fact any other attempt, could never achieve this goal; that is, for any set of axioms and inference rules proposed to encapsulate mathematics, there would in fact be some truths of mathematics which could not be deduced from them.
One of the main inspirations and motivations for PM was the earlier work of Gottlob Frege on logic, which Russell discovered allowed for the construction of paradoxical sets. PM sought to avoid this problem by ruling out the unrestricted creation of arbitrary sets. This was achieved by replacing the notion of a general set with the notion of a hierarchy of sets of different 'types', a set of a certain type being allowed to contain only sets of strictly lower types. Contemporary mathematics, however, avoids paradoxes such as Russell's in less unwieldy ways, such as the system of Zermelo–Fraenkel set theory.
Gödel's incompleteness theorem
Gödel's incompleteness theorems are two theorems of mathematical logic that establish inherent limitations of all but the most trivial axiomatic systems capable of doing arithmetic. The theorems, proven by Kurt Gödel in 1931, are important both in mathematical logic and in the philosophy of mathematics. The two results are widely, but not universally, interpreted as showing that Hilbert's program to find a complete and consistent set of axioms for all mathematics is impossible, giving a negative answer to Hilbert's second problem.
The first incompleteness theorem states that no consistent system of axioms whose theorems can be listed by an "effective procedure" (e.g., a computer program, but it could be any sort of algorithm) is capable of proving all truths about the relations of the natural numbers (arithmetic). For any such system, there will always be statements about the natural numbers that are true, but that are unprovable within the system. The second incompleteness theorem, an extension of the first, shows that such a system cannot demonstrate its own consistency.
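The diagonal construction behind the first theorem can be stated compactly in standard notation (my rendering, assumed):

```latex
% For a suitable provability predicate Prov_T, the diagonal lemma yields a
% sentence G that asserts its own unprovability:
\[
T \vdash G \;\leftrightarrow\; \neg\,\mathrm{Prov}_{T}(\ulcorner G \urcorner).
\]
% If T is consistent, G is unprovable in T. The second theorem formalizes
% this argument inside T itself:
\[
T \nvdash \mathrm{Con}(T), \qquad \mathrm{Con}(T) \;:=\; \neg\,\mathrm{Prov}_{T}(\ulcorner 0 = 1 \urcorner).
\]
```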
Tarski's definition of model-theoretic satisfaction
The T-schema or truth schema (not to be confused with 'Convention T') is used to give an inductive definition of truth which lies at the heart of any realisation of Alfred Tarski's semantic theory of truth. Some authors refer to it as the "Equivalence Schema", a synonym introduced by Michael Dummett.
The T-schema is often expressed in natural language, but it can be formalized in many-sorted predicate logic or modal logic; such a formalisation is called a T-theory. T-theories form the basis of much fundamental work in philosophical logic, where they are applied in several important controversies in analytic philosophy.
As expressed in semi-natural language (where 'S' is the name of the sentence abbreviated by S):
'S' is true if and only if S
Example: 'snow is white' is true if and only if snow is white.
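Formalized, with corner quotes forming the name of a sentence, the schema reads as follows (a standard rendering, not the article's own):

```latex
% One instance of the T-schema for each object-language sentence S:
\[
\mathrm{True}(\ulcorner S \urcorner) \;\leftrightarrow\; S .
\]
```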
The undecidability of the Entscheidungsproblem
The Entscheidungsproblem (German for 'decision problem') is a challenge posed by David Hilbert in 1928. The Entscheidungsproblem asks for an algorithm that takes as input a statement of a first-order logic (possibly with a finite number of axioms beyond the usual axioms of first-order logic) and answers "Yes" or "No" according to whether the statement is universally valid, i.e., valid in every structure satisfying the axioms. By the completeness theorem of first-order logic, a statement is universally valid if and only if it can be deduced from the axioms, so the Entscheidungsproblem can also be viewed as asking for an algorithm to decide whether a given statement is provable from the axioms using the rules of logic.
In 1936, Alonzo Church and Alan Turing published independent papers showing that a general solution to the Entscheidungsproblem is impossible, assuming that the intuitive notion of "effectively calculable" is captured by the functions computable by a Turing machine (or equivalently, by those expressible in the lambda calculus). This assumption is now known as the Church–Turing thesis.
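In modern terms, the negative solution says there is no decision procedure for first-order validity (a standard formulation, assumed here):

```latex
% No total computable D decides validity of first-order sentences:
\[
\text{there is no computable } D \text{ with }\;
D(\ulcorner \varphi \urcorner) =
\begin{cases}
1 & \text{if } \varphi \text{ is universally valid},\\
0 & \text{otherwise}.
\end{cases}
\]
% Turing's proof reduces the halting problem to validity; Church's argument
% uses the undecidability of convertibility in the lambda calculus.
```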
See also
Meta
Metalogic
Model theory
Philosophy of mathematics
Proof theory
References
Further reading
W. J. Blok and Don Pigozzi, "Alfred Tarski's Work on General Metamathematics", The Journal of Symbolic Logic, v. 53, No. 1 (Mar., 1988), pp. 36–50.
I. J. Good. "A Note on Richard's Paradox". Mind, New Series, Vol. 75, No. 299 (Jul., 1966), p. 431.
Douglas Hofstadter, 1980. Gödel, Escher, Bach. Vintage Books. Aimed at laypeople.
Stephen Cole Kleene, 1952. Introduction to Metamathematics. North Holland. Aimed at mathematicians.
Jules Richard, Les Principes des Mathématiques et le Problème des Ensembles, Revue Générale des Sciences Pures et Appliquées (1905); translated in Heijenoort J. van (ed.), Source Book in Mathematical Logic 1879-1931 (Cambridge, Massachusetts, 1964).
Alfred North Whitehead, and Bertrand Russell. Principia Mathematica, 3 vols, Cambridge University Press, 1910, 1912, and 1913. Second edition, 1925 (Vol. 1), 1927 (Vols 2, 3). Abridged as Principia Mathematica to *56, Cambridge University Press, 1962.
Mathematical logic
Logic
Metatheory | Metamathematics | [
"Mathematics"
] | 2,150 | [
"Mathematical logic"
] |
183,593 | https://en.wikipedia.org/wiki/Timeline%20of%20the%20early%20universe | The timeline of the universe begins with the Big Bang, 13.799 ± 0.021 billion years ago, and follows the formation and subsequent evolution of the Universe up to the present day.
Each era or age of the universe begins with an "epoch", a time of significant change. Times on this list are relative to the moment of the Big Bang.
First 20 minutes
Planck epoch
c. 0 seconds (13.799 ± 0.021 Gya): Planck epoch begins: earliest meaningful time. Conjecture dominates discussion about the earliest moments of the universe's history. The Big Bang occurs in which ordinary space and time develop out of a primeval state (possibly a virtual particle or false vacuum) described by a quantum theory of gravity or "Theory of everything". All matter and energy of the entire visible universe is contained in a hot, dense point (gravitational singularity), a billionth the size of a nuclear particle. This state has been described as a particle desert. Weakly interacting massive particles (WIMPs) or dark matter and dark energy may have appeared and been the catalyst for the expansion of the singularity. The infant universe cools as it begins expanding. It is almost completely smooth, with quantum variations beginning to cause slight variations in density.
Grand unification epoch
c. 10^−43 seconds: Grand unification epoch begins: While still at an infinitesimal size, the universe cools down to 10^32 kelvin. Gravity separates and begins operating on the universe—the remaining fundamental forces stabilize into the electronuclear force, also known as the Grand Unified Force or Grand Unified Theory (GUT), mediated by (the hypothetical) X and Y bosons which allow early matter at this stage to fluctuate between baryon and lepton states.
Electroweak epoch
c. 10^−36 seconds: Electroweak epoch begins: The Universe cools down to 10^28 kelvin. As a result, the strong nuclear force becomes distinct from the electroweak force. A wide array of exotic elementary particles result from the decay of X and Y bosons, which include W and Z bosons and Higgs bosons.
c. 10^−33 seconds: Space is subjected to inflation, expanding by a factor of the order of 10^26 over a time of the order of 10^−33 to 10^−32 seconds. The universe is supercooled from about 10^27 down to 10^22 kelvin.
c. 10^−32 seconds: Cosmic inflation ends. The familiar elementary particles now form as a soup of hot ionized gas called quark–gluon plasma; hypothetical components of cold dark matter (such as axions) would also have formed at this time.
Quark epoch
c. 10^−12 seconds: Electroweak phase transition: the four fundamental interactions familiar from the modern universe now operate as distinct forces. The weak nuclear force is now a short-range force as it separates from electromagnetic force, so matter particles can acquire mass and interact with the Higgs Field. The temperature is still too high for quarks to coalesce into hadrons, and the quark–gluon plasma persists (Quark epoch). The universe cools to 10^15 kelvin.
c. 10^−11 seconds: Baryogenesis may have taken place with matter gaining the upper hand over anti-matter as baryon to antibaryon constituencies are established.
Hadron epoch
c. 10^−6 seconds: Hadron epoch begins: As the universe cools to about 10^10 kelvin, a quark-hadron transition takes place in which quarks bind to form more complex particles—hadrons. This quark confinement includes the formation of protons and neutrons (nucleons), the building blocks of atomic nuclei.
Lepton epoch
c. 1 second: Lepton epoch begins: The universe cools to 10^9 kelvin. At this temperature, the hadrons and antihadrons annihilate each other, leaving behind leptons and antileptons – possible disappearance of antiquarks. Gravity governs the expansion of the universe: neutrinos decouple from matter creating a cosmic neutrino background.
Photon epoch
c. 10 seconds: Photon epoch begins: Most leptons and antileptons annihilate each other. As electrons and positrons annihilate, a small number of unmatched electrons are left over – disappearance of the positrons.
c. 10 seconds: Universe dominated by photons of radiation – ordinary matter particles are coupled to light and radiation. In contrast, dark matter particles build non-linear structures as dark matter halos. The universe becomes a super-hot glowing fog because charged electrons and protons hinder light emission.
c. 3 minutes: Primordial nucleosynthesis: nuclear fusion begins as lithium and heavy hydrogen (deuterium) and helium nuclei form from protons and neutrons.
c. 20 minutes: Primordial nucleosynthesis ceases: normal matter consists of a mass of 75% hydrogen nuclei and 25% helium nuclei, or one helium nucleus per twelve hydrogen nuclei (see the check below) – free electrons begin scattering light.
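A quick mass-fraction check of the "one helium nucleus per twelve hydrogen nuclei" figure (elementary arithmetic, not from the source):

```latex
% One He-4 nucleus (mass ~ 4 u) accompanies twelve H-1 nuclei (~ 12 u):
\[
Y_{\mathrm{He}} = \frac{4}{4 + 12} = 0.25,
\qquad
Y_{\mathrm{H}} = \frac{12}{16} = 0.75,
\]
% reproducing the quoted 25% helium / 75% hydrogen by mass.
```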
Matter era
Matter and radiation equivalence
c. 47,000 years (z = 3600): Matter and radiation equivalence: the energy densities of matter and radiation become equal; before this point, the expansion of the universe was decelerating at the faster, radiation-dominated rate.
c. 70,000 years: As the temperature falls, gravity overcomes pressure allowing the first aggregates of matter to form.
Cosmic Dark Age
c. 370,000 years (z = 1,100): The "Dark Ages" is the period between decoupling, when the universe first becomes transparent, and the formation of the first stars. Recombination: electrons combine with nuclei to form atoms, mostly hydrogen and helium. At this time, hydrogen and helium transport remains constant as the electron-baryon plasma thins. The temperature falls to about 3,000 K. Ordinary matter particles decouple from radiation. The photons present during decoupling are the same photons seen in the cosmic microwave background (CMB) radiation.
c. 400,000 years: Density waves begin imprinting characteristic polarization signals.
c. 10–17 million years: The "Dark Ages" span a period during which the temperature of cosmic microwave background radiation cooled from some 4,000 K down to about 60 K. The background temperature was between 373 K and 273 K (100 °C and 0 °C), allowing the possibility of liquid water, during a period of about 7 million years, from about 10 to 17 million years after the Big Bang (redshift 137–100). Avi Loeb (2014) speculated that primitive life might in principle have appeared during this window, which he called "the Habitable Epoch of the Early Universe".
Reionization
c. 100 million years: Gravitational collapse: ordinary matter particles fall into the structures created by dark matter. Reionization begins: smaller (stars) and larger non-linear structures (quasars) begin to take shape – their ultraviolet light ionizes remaining neutral gas.
200–300 million years: First stars begin to shine: Because many are Population III stars (some Population II stars are accounted for at this time) they are much bigger and hotter and their life cycle is fairly short. Unlike later generations of stars, these stars are metal free. Reionization begins, with the absorption of certain wavelengths of light by neutral hydrogen creating Gunn–Peterson troughs. The resulting ionized gas (especially free electrons) in the intergalactic medium causes some scattering of light, but with much lower opacity than before recombination due to the expansion of the universe and clumping of gas into galaxies.
200 million years: The oldest-known star (confirmed) – SMSS J031300.36−670839.3, forms.
300 million years: First large-scale astronomical objects, protogalaxies and quasars may have begun forming. As Population III stars continue to burn, stellar nucleosynthesis operates – stars burn mainly by fusing hydrogen to produce more helium in what is referred to as the main sequence. Over time these stars are forced to fuse helium to produce carbon, oxygen, silicon and other heavy elements up to iron on the periodic table. These elements, when seeded into neighbouring gas clouds by supernova, will lead to the formation of more Population II stars (metal poor) and gas giants.
320 million years (z = 13.3): HD1, the oldest-known spectroscopically-confirmed galaxy, forms.
380 million years: UDFj-39546284, the current record holder for the unconfirmed oldest-known galaxy, forms.
420 million years: MACS0647-JD, one of the furthest known galaxies, forms.
600 million years: HE 1523-0901, the oldest star found producing neutron capture elements forms, marking a new point in ability to detect stars with a telescope.
630 million years (z = 8.2): GRB 090423, the oldest gamma-ray burst recorded, suggests that supernovas may have happened very early on in the evolution of the Universe.
Galaxy epoch
< 1 billion years (13 Gya): first stars in the central bar portion of the Milky Way are born.
2.6 billion years (11 Gya): first stars in the thick disk region of the Milky Way are formed.
4 billion years (10 Gya): Gaia Enceladus merges into Milky Way.
5 or 6 billion years, (8 or 9 Gya): first stars in the thin disk region of the Milky Way are formed.
Acceleration
8.8 billion years (5 Gya, z = 0.5): Acceleration: dark-energy dominated era begins, following the matter-dominated era during which cosmic expansion was slowing down.
Epochs of the formation of the Solar System
9.2 billion years (4.6–4.57 Gya): Primal supernova possibly triggers the formation of the Solar System.
9.2318 billion years (4.5682 Gya): Sun forms – Planetary nebula begins accretion of planets.
9.23283 billion years (4.56717–4.55717 Gya): Four Jovian planets (Jupiter, Saturn, Uranus, Neptune) evolve around the Sun.
9.257 billion years (4.543–4.5 Gya): Solar System of Eight planets, four terrestrial (Mercury, Venus, Earth, Mars) evolve around the Sun. Because of accretion many smaller planets form orbits around the proto-Sun some with conflicting orbits – early heavy bombardment begins. Precambrian Supereon and Hadean eon begin on Earth. Pre-Noachian Era begins on Mars. Pre-Tolstojan Period begins on Mercury – a large planetoid strikes Mercury, stripping it of outer envelope of original crust and mantle, leaving the planet's core exposed – Mercury's iron content is notably high. Many of the Galilean moons may have formed at this time including Europa and Titan which may presently be hospitable to some form of living organism.
9.266 billion years (4.533 Gya): Formation of Earth-Moon system following giant impact by hypothetical planetoid Theia (planet). Moon's gravitational pull helps stabilize Earth's fluctuating axis of rotation. Pre-Nectarian Period begins on Moon
9.271 billion years (4.529 Gya): Major collision with a pluto-sized planetoid establishes the Martian dichotomy on Mars – formation of North Polar Basin of Mars
9.3 billion years (4.5 Gya): Sun becomes a main sequence yellow star: formation of the Oort cloud and Kuiper belt from which a stream of comets like Halley's Comet and Hale-Bopp begins passing through the Solar System, sometimes colliding with planets and the Sun
9.396 billion years (4.404 Gya): Liquid water may have existed on the surface of the Earth, probably due to the greenhouse warming of high levels of methane and carbon dioxide present in the atmosphere.
9.7 billion years (4.1 Gya): Resonance in Jupiter and Saturn's orbits moves Neptune out into the Kuiper belt causing a disruption among asteroids and comets there. As a result, Late Heavy Bombardment batters the inner Solar System. Herschel Crater formed on Mimas, a moon of Saturn. Meteorite impact creates the Hellas Planitia on Mars, the largest unambiguous structure on the planet. Anseris Mons an isolated massif (mountain) in the southern highlands of Mars, located at the northeastern edge of Hellas Planitia is uplifted in the wake of the meteorite impact
10.4 billion years (3.5 Gya): Earliest fossil traces of life on Earth (stromatolites)
10.6 billion years (3.2 Gya): Amazonian Period begins on Mars: Martian climate thins to its present density: groundwater stored in upper crust (megaregolith) begins to freeze, forming thick cryosphere overlying deeper zone of liquid water – dry ices composed of frozen carbon dioxide form Eratosthenian period begins on the Moon: main geologic force on the Moon becomes impact cratering
10.8 billion years (3 Gya): Beethoven Basin forms on Mercury – unlike many basins of similar size on the Moon, Beethoven is not multi ringed and ejecta buries crater rim and is barely visible
11.2 billion years (2.5 Gya): Proterozoic begins
11.6 billion years (2.2 Gya): Last great tectonic period in Martian geologic history: Valles Marineris, largest canyon complex in the Solar System, forms – although some suggestions of thermokarst activity or even water erosion, it is suggested Valles Marineris is rift fault
Recent history
11.8 billion years (2 Gya): Star formation in Andromeda Galaxy slows. Formation of Hoag's Object from a galaxy collision. Olympus Mons, the largest volcano in the Solar System, is formed
12.1 billion years (1.7 Gya): Sagittarius Dwarf Spheroidal Galaxy captured into an orbit around Milky Way Galaxy
12.7 billion years (1.1 Gya): Copernican Period begins on Moon: defined by impact craters that possess bright optically immature ray systems
12.8 billion years (1 Gya): The Kuiperian Era (1 Gyr – present) begins on Mercury: modern Mercury, a desolate cold planet that is influenced by space erosion and solar wind extremes. Interactions between Andromeda and its companion galaxies Messier 32 and Messier 110. Galaxy collision with Messier 82 forms its patterned spiral disc: galaxy interactions between NGC 3077 and Messier 81; Saturn's moon Titan begins evolving the recognisable surface features that include rivers, lakes, and deltas
13 billion years (800 Mya): Copernicus (lunar crater) forms from the impact on the Lunar surface in the area of Oceanus Procellarum – has terrace inner wall and 30 km wide, sloping rampart that descends nearly a kilometre to the surrounding mare
13.175 billion years (625 Mya): formation of Hyades star cluster: consists of a roughly spherical group of hundreds of stars sharing the same age, place of origin, chemical content and motion through space
13.15–13.21 billion years (590–650 Mya): Capella star system forms
13.2 billion years (600 Mya): Collision of spiral galaxies leads to the creation of Antennae Galaxies. Whirlpool Galaxy collides with NGC 5195 forming a present connected galaxy system. HD 189733 b forms around parent star HD 189733: the first planet to reveal the climate, organic constituencies, even colour (blue) of its atmosphere
13.345 billion years (455 Mya): Vega, the fifth-brightest star in Earth's galactic neighbourhood, forms.
13.5–13.6 billion years (300–200 Mya): Sirius, the brightest star in the Earth's sky, forms.
13.7 billion years (100 Mya): Formation of Pleiades Star Cluster
13.73 billion years (70 Mya): North Star, Polaris, one of the significant navigable stars, forms
13.780 billion years (20 Mya): Possible formation of Orion Nebula
13.788 billion years (12 Mya): Antares forms.
13.792 billion years (7.6 Mya): Betelgeuse forms.
13.8 billion years (uncertainties aside): Present day.
See also
Chronology of the universe
Formation and evolution of the Solar System
Timeline of natural history (formation of the Earth to evolution of modern humans)
Detailed logarithmic timeline
Timeline of the far future
Timelines of world history
References
Formation of the Universe
cosmology epochs | Timeline of the early universe | [
"Astronomy"
] | 3,505 | [
"Astronomy timelines",
"History of astronomy"
] |
183,601 | https://en.wikipedia.org/wiki/Sidereus%20Nuncius | Sidereus Nuncius (usually Sidereal Messenger, also Starry Messenger or Sidereal Message) is a short astronomical treatise (or pamphlet) published in Neo-Latin by Galileo Galilei on March 13, 1610. It was the first published scientific work based on observations made through a telescope, and it contains the results of Galileo's early observations of the imperfect and mountainous Moon, of hundreds of stars not visible to the naked eye in the Milky Way and in certain constellations, and of the Medicean Stars (later Galilean moons) that appeared to be circling Jupiter.
The Latin word nuncius was typically used during this time period to denote messenger; however, it was also (though less frequently) rendered as message. Though the title Sidereus Nuncius is usually translated into English as Sidereal Messenger, many of Galileo's early drafts of the book and later related writings indicate that the intended purpose of the book was "simply to report the news about recent developments in astronomy, not to pass himself off solemnly as an ambassador from heaven."
Telescope
The first telescopes appeared in the Netherlands in 1608 when Middelburg spectacle-maker Hans Lippershey tried to obtain a patent on one. By 1609 Galileo had heard about it and built his own improved version. He probably was not the first person to aim the new invention at the night sky but his was the first systematic (and published) study of celestial bodies using one. One of Galileo's first telescopes had 8x to 10x linear magnification and was made out of lenses that he had ground himself. This was increased to 20x linear magnification in the improved telescope he used to make the observations in Sidereus Nuncius.
Content
Sidereus Nuncius contains more than seventy drawings and diagrams of the Moon, certain constellations such as Orion, the Pleiades, and Taurus, and the Medicean Stars of Jupiter. Galileo's text also includes descriptions, explanations, and theories of his observations.
Moon
In observing the Moon, Galileo saw that the line separating lunar day from night (the terminator) was smooth where it crossed the darker regions of the Moon but quite irregular where it crossed the brighter areas. From this he deduced that the darker regions are flat, low-lying areas, and the brighter regions rough and mountainous. Basing his estimate on the distance of sunlit mountaintops from the terminator, he judged, quite accurately, that the lunar mountains were at least four miles high. Galileo's engravings of the lunar surface provided a new form of visual representation, besides shaping the field of selenography, the study of physical features on the Moon.
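A plausible reconstruction of the estimate from the geometry of the terminator; the numerical values below are illustrative assumptions, not figures quoted in the treatise.

```latex
% A summit of height h just catching sunlight at distance d beyond the
% terminator of a moon of radius R satisfies (R + h)^2 = R^2 + d^2, so
\[
h \;\approx\; \frac{d^{2}}{2R} \qquad (h \ll R).
\]
% With R ~ 1,000 miles and sunlit peaks seen d ~ 100 miles past the
% terminator, h ~ 100^2 / 2,000 = 5 miles, the same order as Galileo's
% "at least four miles".
```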
Stars
Galileo reported that he saw at least ten times more stars through the telescope than are visible to the naked eye, and he published star charts of the belt of Orion and the star cluster Pleiades showing some of the newly observed stars. With the naked eye observers could see only six stars in the Taurus cluster; through his telescope, however, Galileo was capable of seeing thirty-five – almost six times as many. When he turned his telescope on Orion, he was capable of seeing eighty stars, rather than the previously observed nine – almost nine times more. In Sidereus Nuncius, Galileo revised and reproduced these two star groups by distinguishing between the stars seen without the telescope and those seen with it. Also, when he observed some of the "nebulous" stars in the Ptolemaic star catalogue, he saw that rather than being cloudy, they were made of many small stars. From this he deduced that the nebulae and the Milky Way were "congeries of innumerable stars grouped together in clusters" too small and distant to be resolved into individual stars by the naked eye.
Medicean Stars (Moons of Jupiter)
In the last part of Sidereus Nuncius, Galileo reported his discovery of four objects that appeared to form a straight line of stars near Jupiter. On the first night he detected a line of three little stars close to Jupiter parallel to the ecliptic; the following nights brought different arrangements and another star into his view, totalling four stars around Jupiter. Throughout the text, Galileo gave illustrations of the relative positions of Jupiter and its apparent companion stars as they appeared nightly from late January through early March 1610. That they changed their positions relative to Jupiter from night to night and yet always appeared in the same straight line near it, persuaded Galileo that they were orbiting Jupiter. On January 11 after four nights of observation he wrote:
I therefore concluded and decided unhesitatingly, that there are three stars in the heavens moving about Jupiter, as Venus and Mercury round the Sun; which at length was established as clear as daylight by numerous subsequent observations. These observations also established that there are not only three, but four, erratic sidereal bodies performing their revolutions round Jupiter...the revolutions are so swift that an observer may generally get differences of position every hour.
In his drawings, Galileo used an open circle to represent Jupiter and asterisks to represent the four stars. He made this distinction to show that there was in fact a difference between these two types of celestial bodies. It is important to note that Galileo used the terms planet and star interchangeably, and "both words were correct usage within the prevailing Aristotelian terminology."
At the time of Sidereus Nuncius publication, Galileo was a mathematician at the University of Padua and had recently received a lifetime contract for his work in building more powerful telescopes. He desired to return to Florence, and in hopes of gaining patronage there, he dedicated Sidereus Nuncius to his former pupil, now the Grand Duke of Tuscany, Cosimo II de' Medici. In addition, he named his discovered four moons of Jupiter the "Medicean Stars," in honor of the four royal Medici brothers. This helped him receive the position of Chief Mathematician and Philosopher to the Medici at the University of Pisa. Ultimately, his effort at naming the moons failed, for they are now referred to as the "Galilean moons".
Reception
The reactions to Sidereus Nuncius, ranging from appraisal and hostility to disbelief, soon spread throughout Italy and England. Many poems and texts were published expressing love for the new form of astronomical science. Three works of art were even created in response to Galileo's book: Adam Elsheimer's The Flight into Egypt (1610; contested by Keith Andrews), Lodovico Cigoli's Assumption of the Virgin (1612), and Andrea Sacchi's Divine Wisdom (1631). In addition, the discovery of the Medicean Stars fascinated other astronomers, and they wanted to view the moons for themselves. Their efforts "set the stage for the modern scientific requirement of experimental reproducibility by independent researchers. Verification versus falsifiability…saw their origins in the announcement of Sidereus Nuncius."
But many individuals and communities were sceptical. A common response to the Medicean Stars was simply to say that the telescope had a lens defect and was producing illusory points of light and images; those saying this completely denied the existence of the moons. That only a few could initially see and verify what Galileo had observed supported the supposition that the optical theory during this period "could not clearly demonstrate that the instrument was not deceiving the senses." By naming the four moons after the Medici brothers and convincing the Grand Duke Cosimo II of his discoveries, the defence of Galileo's reports became a matter of State. Moran notes, “the court itself became actively involved in pursuing the confirmation of Galileo’s observations by paying Galileo out of its treasury to manufacture spyglasses that could be sent through ambassadorial channels to the major courts of Europe." The secretary to Giovanni Antonio Magini, a Bohemian astronomer named Martin Horký, published an incendiary pamphlet criticizing the Sidereus Nuncius, alleging in it that Galileo's observations were the result of poor lenses and influenced by personal ambitions. After gaining some traction in Italy, however, Horky's work was ultimately strongly rebutted.
The first astronomer to publicly support Galileo's findings was Johannes Kepler, who published an open letter in April 1610, enthusiastically endorsing Galileo's credibility. It was not until August 1610 that Kepler was able to publish his independent confirmation of Galileo's findings, due to the scarcity of sufficiently powerful telescopes.
Several astronomers, such as Thomas Harriot, Joseph Gaultier de la Vatelle, Nicolas-Claude Fabri de Peiresc, and Simon Marius, published their confirmation of the Medicean Stars after Jupiter became visible again in the autumn of 1610. Marius, a German astronomer who had studied with Tycho Brahe, was the first to publish a book of his observations. Marius attacked Galileo in Mundus Jovialis (published in 1614) by insisting that he had found Jupiter's four moons before Galileo and had been observing them since 1609. Marius believed that he therefore had the right to name them, which he did: he named them after Jupiter's love conquests: Io, Europa, Ganymede, and Callisto. But Galileo was not confounded; he pointed out that being outside the Church, Marius had not yet accepted the Gregorian calendar and was still using the Julian calendar. Therefore, the night Galileo first observed Jupiter's moons was January 7, 1610 on the Gregorian calendar—December 28, 1609 on the Julian calendar (Marius claimed to have first observed Jupiter's moons on December 29, 1609). Although Galileo did indeed discover Jupiter's four moons before Marius, Io, Europa, Ganymede, and Callisto are now the names of Galileo's four moons.
By 1626 knowledge of the telescope had spread to China when German Jesuit and astronomer Johann Adam Schall von Bell published Yuan jing shuo (Explanation of the Telescope) in Chinese and Latin.
Controversy with the Catholic Church
Galileo's drawings of an imperfect Moon directly contradicted Ptolemy's and Aristotle's cosmological descriptions of perfect and unchanging heavenly bodies made of quintessence (the fifth element in ancient and medieval philosophy of which the celestial bodies are composed).
Before the publication of Sidereus Nuncius, the Catholic Church accepted the Copernican heliocentric system as strictly mathematical and hypothetical. However, once Galileo began to speak of the Copernican system as fact rather than theory, it introduced "a more chaotic system, a less-than-godly lack of organization." In fact, the Copernican system that Galileo believed to be real challenged the Scripture, "which referred to the sun 'rising' and the earth as 'unmoving'."
The conflict ended in 1633 with Galileo being sentenced to a form of house arrest by the Catholic Church. However, by 1633, Galileo had published other works in support of the Copernican view, and these were largely what caused his sentencing.
Translations
English
Edward Stafford Carlos; translations with introduction and notes. The Sidereal messenger of Galileo Galilei, and a part of the preface to Kepler's Dioptrics. Waterloo Place, London: Oxford and Cambridge, January 1880. 148 pp.
Stillman Drake. Discoveries and Opinions of Galileo, includes translation of Galileo's Sidereus Nuncius. Doubleday: Anchor, 1957. 320 pp.
Stillman Drake. Telescopes, Tides, and Tactics: A Galilean Dialogue about The Starry Messenger and Systems of the World, including translation of Galileo’s Sidereus Nuncius. London: University Of Chicago Press, 1983. 256 pp.
Albert Van Helden (Professor Emeritus of History at Rice University); translation with introduction, conclusion and notes. Galileo Galilei, Sidereus Nuncius, or The Sidereal Messenger. Chicago and London: The University of Chicago Press, 1989. xiii + 127 pp.
William R. Shea and Tiziana Bascelli; translated from the Latin by William R. Shea, introduction and notes by William R. Shea and Tiziana Bascelli. Galileo’s Sidereus Nuncius or Sidereal Message. Sagamore Beach, MA: Science History Publications/USA, 2009. viii + 115 pp.
French
Isabelle Pantin. Sidereus Nuncius: Le Messager Céleste. Paris: Belles Lettres, 1992. ASIN B0028S7JLK.
Fernand Hallyn. Le messager des étoiles. France: Points, 1992.
Italian
Maria Timpanaro Cardini. Sidereus nuncius. Firenze: Sansoni, 1948.
See also
Discourse on Comets
Letters on Sunspots
Nuncius (journal)
Selenographia, sive Lunae descriptio
References
External links
Sidereus Nuncius 1610. From Rare Book Room. Photographed first edition.
Sidereus Nuncius, in Latin in HTML format, or in Italian in pdf format or odt format. From LiberLiber.
Linda Hall Library has a scanned first edition, as well as a scanned pirated edition from Frankfurt, also from 1610.
Sidereus nuncius (Adams.5.61.1) Full digital edition in Cambridge Digital Library.
The Sidereal Messenger of Galileo Galilei in English at Project Gutenberg.
Sidereus nuncius Full digital edition in the Stanford Libraries.
1610 books
Books by Galileo Galilei
Astronomy books
1610 in science
Copernican Revolution
17th-century books in Latin | Sidereus Nuncius | [
"Astronomy"
] | 2,742 | [
"Astronomy books",
"Copernican Revolution",
"Works about astronomy",
"History of astronomy"
] |
183,603 | https://en.wikipedia.org/wiki/Francis%20Younghusband | Lieutenant Colonel Sir Francis Edward Younghusband, (31 May 1863 – 31 July 1942) was a British Army officer, explorer and spiritual writer. He is remembered for his travels in the Far East and Central Asia; especially the 1904 British expedition to Tibet, led by himself, and for his writings on Asia and foreign policy. Younghusband held positions including British commissioner to Tibet and president of the Royal Geographical Society.
Early life
Francis Younghusband was born in 1863 at Murree, British India (now Pakistan), to a British military family, being the brother of Major-General George Younghusband and the second son of Major-General John W. Younghusband and his wife Clara Jane Shaw. Clara's brother, Robert Shaw, was a noted explorer of Central Asia. His uncle, Lieutenant-General Charles Younghusband CB FRS, was a British Army officer and meteorologist.
As an infant, Francis was taken to live in England by his mother. When Clara returned to India in 1867 she left her son in the care of two austere and strictly religious aunts. In 1870 his mother and father returned to England and reunited the family. In 1876 at age thirteen, Francis entered Clifton College, Bristol. In 1881 he entered the Royal Military College, Sandhurst, and was commissioned as a subaltern in the 1st King's Dragoon Guards in 1882.
Military career
Having read General MacGregor's book Defence of India, he could justifiably have called himself an expert on the "Great Game" of espionage unfolding on the steppes of Asia. In 1886–1887, on leave from his regiment, Younghusband, though still a young officer, made an expedition across Asia. After sailing to China his party set out, with Colonel Mark Bell's permission, to cross 1,200 miles of desert, ostensibly to survey the geography but in reality to ascertain the strength of the Russian threat to the Raj. Departing Peking with a senior colleague, Henry E. M. James (on leave from his Indian Civil Service position), and a young British consular officer from Newchwang, Harry English Fulford, on 4 April 1887, Lieutenant Younghusband explored Manchuria, visiting the frontier areas of Chinese settlement in the region of the Changbai Mountains.
On arrival in India, he was granted three months' leave by the Commander-in-Chief, Field Marshal Lord Roberts; the scientific results of this travel would prove vital to the Royal Geographical Society. Younghusband had already carried out numerous scientific observations, in particular showing that the Changbai Mountains' highest peak, Baekdu Mountain, is only around 8,000 feet tall, even though the travellers' British maps showed nonexistent snow-capped peaks of 10,000–12,000 ft in the area. Fulford provided the travellers with language and cultural expertise. Younghusband crossed some of the most inhospitable terrain in the world to the Himalayas before being ordered to make his way home. Parting with his British companions, he crossed the Taklamakan Desert to Chinese Turkestan and pioneered a route from Kashgar to India through the uncharted Mustagh Pass. He reported to the Viceroy, Lord Dufferin, on his crossing of the Karakoram Range, the Hindu Kush, and the Pamirs, where the ranges converged with the Himalayas at the nexus of three great empires. In the 1880s the region of the Upper Oxus was still largely unmapped. For this achievement, still only 24, he was elected the youngest member of the Royal Geographical Society and received the society's 1890 Patron's Medal.
In 1889, he was made captain and was dispatched with a small escort of Gurkha soldiers to investigate an uncharted region north of Ladakh, where raiders from Hunza had disrupted trade between Yarkand and India the previous year. Whilst encamped in the valley of the Yarkand River, Younghusband received a messenger at his camp, inviting him to dinner with Captain Bronislav Grombchevsky, his Russian counterpart in "The Great Game". Younghusband accepted the invitation to Grombchevsky's camp, and after dinner the two rivals talked into the night, sharing brandy and vodka, and discussing the possibility of a Russian invasion of British India. Grombchevsky impressed Younghusband with the horsemanship skills of his Cossack escort, and Younghusband impressed Grombchevsky with the rifle drill of his Gurkhas. After their meeting in this remote frontier region, Grombchevsky resumed his expedition in the direction of Tibet and Younghusband continued his exploration of the Karakoram.
Indian Political Service career
Younghusband received a telegram from Simla summoning him to the Intelligence Department (ID) for an interview with Foreign Secretary Sir Mortimer Durand, after which he transferred to the Indian Political Service, serving as a political officer on secondment from the British Army. He refused a request to visit Lhasa as an interpreter disguised as a Yarkandi trader, a cover not guaranteed to fool the Russians, after Andrew Dalgleish, a Scots merchant, had been hacked to death. Younghusband was instead accompanied by a Gurkha escort, celebrated for their ferocity in combat. The Forward Policy was circumscribed by a legal offer of peaceable security to all travellers crossing the borders. Departing Leh on 8 August 1889, the caravan route took his party up the Shimshal Pass towards Hunza, his aim being to restore the tea trade to Xinjiang and prevent further raids into Kashmir. Colonel Durand joined him from Gilgit. Younghusband probed the villages to gauge his reception and, calculating the region to be a den of thieves, ascended the steep ravine. The route into Hunza was barred to them and a trap was feared, but the terms of a parley took him inside to negotiate. The nervous reception over, all were relieved to find themselves safe; Younghusband wanted to know who was waylaying innocent civilian traders, and why. The ruler, Safdar Ali, extended a letter of welcome to his Kashmiri kingdom, while the British investigated where the Russian infiltrators under agent Grombchevsky had come from. Further south at Ladakh, Younghusband kept a close watch on their movements. Reluctantly, he dined with the Cossack leaders, who divulged the secrets of their common rivalry. Grombchevsky explained that the Raj had invited enmity by meddling in the Black Sea ports. The Russian displayed little grasp of strategy but raw courage, and he betrayed the confidence of Abdur Rahman as no friend to the British. Younghusband tentatively concluded that the Russian possessions at Bokhara and Samarkand were vulnerable. Having drunk large quantities of vodka and brandy, the Cossacks presented arms in cordial salute and the parties separated in peace. Woefully unprepared for winter, the party was refused entry by the British garrison at Ladakh.
Younghusband finally arrived at Gulmit to a 13-gun salute. In khaki, the envoy greeted Safdar Ali at the marquee on the Karakoram Highway, the men of Hunza kneeling at their ruler's feet. This was colonial diplomacy, based on protocol and etiquette, but Younghusband had not come for merely trivial discussions. Reinforced by Durand's troops, Younghusband argued for an end to criminal looting, murder, and highway robbery. Impervious to reason though Safdar Ali was, Younghusband was not prepared to allow him to laugh at the Raj. A demonstration of firepower "caused quite a sensation", he wrote in his diaries. The British officer was disdainful, but content when he left on 23 November to return to India, which he reached by Christmas.
In 1890, Younghusband was sent on a mission to Chinese Turkestan. He sought to investigate the Pamir Gap, a possible Russian entry route to India, but first needed to address issues with the Chinese authorities in Kashgar; for this reason he recruited a Mandarin-speaking interpreter, the junior officer George Macartney, to accompany his missions into the frozen mountains. They wintered in Kashgar, which served as a listening post, meeting in conference with the Russian Nikolai Petrovsky, who had always resisted trade with Xinjiang (Sinkiang). The Russian agent was well-informed about British India but proved unscrupulous: believing he had succeeded, Younghusband did not reckon on Petrovsky's deal with the Taotai of Xinjiang. On leaving, Younghusband installed Macartney in Kashgar as British consul.
In July 1891, they were still in the Pamirs when news reached them that the Russians intended to send troops "to note and report with the Chinese and Afghans". At Bozai Gumbaz in the Little Pamir on 12 August he encountered Cossack soldiers, who forced him to leave the area. This was one of the incidents which provoked the Hunza-Nagar Campaign. The troop of 20 or so soldiers planted a flag on what they anticipated was unclaimed territory, 150 miles south of the Russian border; the British, however, considered the area to be Afghan territory. Colonel Yonov, decorated with the Order of St George, approached Younghusband's camp to announce that the area now belonged to the Tsar. Younghusband learnt that the Russians had raided the Chitral territory and, furthermore, had penetrated the Darkot Pass into the Yasin Valley. The British party was joined by the eager intelligence officer Lieutenant Davison, but Yonov disabused them of any notion of British sovereignty over the area; Younghusband remained polite and hospitable, maintaining protocol despite the Russian bear hug.
During his service in Kashmir, he wrote a book called Kashmir at the request of Edward M. J. Molyneux. Younghusband's descriptions went hand in glove with Molyneux's paintings of the valley. In the book, Younghusband declared his immense admiration of the natural beauty of Kashmir and its history. Younghusband participated in the geopolitical rivalry between Britain and Russia, known as 'The Great Game,' which persisted into the 20th century before being formally concluded by the 1907 Anglo-Russian Treaty. Younghusband, among other explorers such as Sven Hedin, Nikolay Przhevalsky, Shoqan Walikhanov and Sir Marc Aurel Stein, had participated in earnest. Rumours of Russian expansion into the Hindu Kush with a Russian presence in Tibet prompted the new Viceroy of India Lord Curzon to appoint Younghusband, by then a major, British commissioner to Tibet from 1902 to 1904.
Expedition to Tibet
In 1903, Curzon appointed Younghusband as the head of the Tibet Frontier Commission; John Claude White, the political officer of Sikkim, and E. C. Wilton served as his deputy commissioners. Younghusband subsequently led the British expedition to Tibet, which had the putative aim of settling disputes over the Sikkim–Tibet border but eventually exceeded its instructions from the government of the United Kingdom and became a de facto invasion of Tibet. Inside Tibet, on the way to Gyantse and thence to the capital, Lhasa, a confrontation outside the hamlet of Guru ended in a victory of the expedition's troops over 600–700 Tibetan soldiers. The expedition's troops, equipped with rifles and machine guns, overpowered the poorly equipped Tibetan forces, who were armed with hoes, swords, and flintlocks.
Ultimately, 202 men of Younghusband's expedition were killed in action while 411 died of non-combat causes. The expedition was supported by Ugyen Wangchuck of the Kingdom of Bhutan (who was to become the King of Bhutan in 1907), who was knighted in return for his services. However, the invasion of Tibet embarrassed the British government, which desired good relations with the Qing dynasty for the sake of Britain's trade with Chinese coastal settlements. Accordingly, the British government repudiated the Treaty of Lhasa, signed by Younghusband and Tibetan leaders, due to concerns over its impact on relations with the Qing dynasty and trade with Chinese coastal regions.
In 1891, Younghusband was made a Companion of the Order of the Indian Empire, and in December 1904 he was appointed Knight Commander of the Order of the Star of India. He was also awarded the Kaisar-I-Hind Medal (gold) in 1901, and the Gold Medal of the Royal Scottish Geographical Society in 1905. In 1906, Younghusband settled in Kashmir as the British Resident representative before returning to Britain in 1909, where he was an active member of many clubs and societies. In 1908, he was promoted to lieutenant colonel. During the First World War, his patriotic Fight for Right campaign commissioned the song "Jerusalem".
Himalaya and mountaineering
In 1889, Younghusband reached the base of the Turkestan La (North) from the north, and noted that this was a long glacier and a major Central Asian dividing range.
In 1919, Younghusband was elected President of the Royal Geographical Society, and two years later became Chairman of the Mount Everest Committee, which was set up to coordinate the initial 1921 British Reconnaissance Expedition to Mount Everest. Younghusband supported efforts to summit Mount Everest and endorsed George Mallory's participation in the early expeditions, which followed the same initial route as his earlier Tibet mission. Younghusband remained Chairman through the subsequent 1922 and 1924 British expeditions.
In 1938, Younghusband encouraged Ernst Schäfer, who was about to lead a German expedition, to "sneak over the border" when faced with British intransigence towards Schäfer's efforts to reach Tibet.
Personal life
In 1897 Younghusband married Helen Augusta Magniac, the daughter of Charles Magniac, MP. Augusta's brother, Vernon, served as Younghusband's private secretary during the expedition to Tibet. The Younghusbands had a son who died in infancy, and a daughter, Eileen Younghusband (1902–1981), who became a prominent social worker.
From 1921 to 1937 the couple lived at Westerham, Kent, but Helen did not accompany her husband on his travels. In July 1942 Younghusband suffered a stroke after addressing a meeting of the World Congress of Faiths in Birmingham. He died of cardiac failure on 31 July 1942 at Madeline Lees' home Post Green House, at Lytchett Minster, Dorset. He was buried in the village churchyard.
Spiritual life
Biographer Patrick French described Younghusband's religious trajectory as that of a man who was
brought up an Evangelical Christian, read his way into Tolstoyan simplicity, experienced a revelatory vision in the mountains of Tibet, toyed with telepathy in Kashmir, proposed a new faith based on virile racial theory, then transformed it into what Bertrand Russell called 'a religion of atheism.' Ultimately he became a spiritualist and "premature hippie" who "had great faith in the power of cosmic rays, and claimed that there are extraterrestrials with translucent flesh on the planet Altair."
Younghusband described having a mystical experience during his retreat from Tibet, which he said instilled him with a profound sense of "love for the whole world" and convinced him that "men at heart are divine". This conviction was tinged with regret for the invasion of Tibet, and eventually, in 1936, his religious convictions led him to deliver the founder's address to the World Congress of Faiths (founded in imitation of the World Parliament of Religions). Younghusband published a number of books with titles including The Gleam: Being an account of the life of Nija Svabhava, pseud. (1923); Mother World (in Travail for the Christ that is to be) (1924); and Life in the Stars: An Exposition of the View that on some Planets of some Stars exist Beings higher than Ourselves, and on one a World-Leader, the Supreme Embodiment of the Eternal Spirit which animates the Whole (1927). The last drew the admiration of Lord Baden-Powell, the Boy Scouts founder. Younghusband explored speculative concepts such as pantheism, a Christlike "world leader" residing on the planet "Altair," and ideas reminiscent of the Gaia hypothesis, exploring the theology of spiritualism and guidance by means of telepathy.
In his book Within: Thoughts During Convalescence (1912), Younghusband stated:
We are giving up the idea that the Kingdom of God is in Heaven, and we are finding that the Kingdom of God is within us. We are relinquishing the old idea of an external God, above, apart, and separate from ourselves; and we are taking on the new idea of an internal spirit working within us – a constraining, immanent influence, a vital, propelling impulse vibrating through us all, expressing itself and fulfilling its purpose through us, and uniting us together in one vast spiritual unity.
Younghusband explored Eastern philosophy and Theosophy, advocating a non-anthropomorphic understanding of divinity. Influenced by Henri Bergson's Creative Evolution, he proposed that a creative life force gives the cosmos purpose. Younghusband's philosophy of cosmic spiritual evolution was outlined in his books Life in the Stars (1927) and The Living Universe (1933); in the latter he proposed the idea that the universe is a living organism. Younghusband held the view that spiritual forces in the universe direct evolution and produce life and intelligence on many different planets. His philosophical ideas, such as cosmic spiritual evolution, received little acceptance within the scientific community. He founded the World Congress of Faiths to promote dialogue between different religions.
Younghusband allegedly believed in free love ("freedom to unite when and how a man and a woman please"), regarding marriage laws as a matter of "outdated custom".
Association with Paramahansa Yogananda
In July 1935, Sir Francis Younghusband introduced the Indian yogi and spiritual teacher Paramahansa Yogananda during a lecture at Caxton Hall in London. This event is detailed in Yogananda's "Autobiography of a Yogi", where he describes his interactions with Younghusband.
Fictional portrayal
One of Younghusband's domestic servants, Gladys Aylward, became a Christian missionary in China. The Ingrid Bergman film The Inn of the Sixth Happiness (1958) is based on Gladys Aylward's life, with Ronald Squire portraying Younghusband.
Works
Younghusband wrote prolifically between 1885 and 1942. Subjects ranged from Asian events, exploration, mountaineering, philosophy, spirituality, politics and more.
Confidential Report of a Mission to the Northern Frontier of Kashmir in 1889 (Calcutta, 1890).
The Relief of Chitral (1895) (co-authored with his brother George John Younghusband)
South Africa of Today (1896)
The Heart of a Continent (1896) The heart of a continent: vol.1
Kashmir (1909) (with illustrations by Major Edward M. J. Molyneux)
Within: Thoughts During Convalescence (1912)
Mutual Influence: A Re-View of Religion (1915)
The Sense of Community (1916)
The Heart of Nature; or, The quest for natural beauty (1921)
The Gleam (1923)
Modern Mystics (1923) (reprint 2004)
Mother World in Travail for the Christ that is to be (1924)
Wonders of the Himalayas (1924)
The Epic of Mount Everest (1926) (reprint 2001).
Life in the Stars (1927)
The Light of Experience (1927)
The Coming Country: A Pre-Vision (1928)
Dawn in India (1930)
The Living Universe (1933)
The Mystery of Nature in Frances Mason. The Great Design: Order and Progress in Nature (1934)
A Venture of Faith: Being a Description of the World Congress of Faiths held in London 1936 (1937)
The World's Need of Religion: Being the Proceedings of the World Congress of Faiths, Oxford, July 23rd-27th, 1937 (1937)
The Renascence of Religion (1938)
The Sum of Things (1939)
Vital Religion: A Brotherhood of Faith (1940)
Taxon named in his honor
Schizopygopsis younghusbandi Regan, 1905 is a species of ray-finned fish endemic to Tibet. It occurs in the Yarlung Tsangpo River (=upper Brahmaputra) drainage and in endorheic lakes in its vicinity.
References
Citations
Sources
Secondary sources
External links
Halkias, Giorgos, The 1904 Younghusband's Expedition to Tibet, ELINEPA, 2004
Description of rare Younghusband photograph collection held by the Royal Geographical Society of South Australia
World Congress of Faiths' History
Royal Geographic Society photograph of Younghusband's Mission to Tibet
1st King's Dragoon Guards (regiments.org)
The heart of nature (1921)
India and Tibet (1910)
1863 births
1942 deaths
19th-century British Army personnel
1st King's Dragoon Guards officers
20th-century British non-fiction writers
Indian Political Service officers
British expatriates in China
British explorers
British military personnel of the British expedition to Tibet
British occult writers
British spiritual writers
British travel writers
Explorers of Central Asia
Explorers of the Himalayas
Graduates of the Royal Military College, Sandhurst
Knights Commander of the Order of the Indian Empire
Knights Commander of the Order of the Star of India
Pantheists
People educated at Clifton College
Presidents of the Royal Geographical Society
Recipients of the MacGregor Medal
Theistic evolutionists
Tibetologists
Vitalists
Francis
British people of the Great Game | Francis Younghusband | [
"Biology"
] | 4,471 | [
"Non-Darwinian evolution",
"Theistic evolutionists",
"Biology theories"
] |
183,691 | https://en.wikipedia.org/wiki/Allan%20Hills%2084001 | Allan Hills 84001 (ALH84001) is a fragment of a Martian meteorite that was found in the Allan Hills in Antarctica on December 27, 1984, by a team of American meteorite hunters from the ANSMET project. Like other members of the shergottite–nakhlite–chassignite (SNC) group of meteorites, ALH84001 is thought to have originated on Mars. However, it does not fit into any of the previously discovered SNC groups. Its mass upon discovery was .
In 1996, a group of scientists found features in the likeness of microscopic fossils of bacteria in the meteorite, suggesting that these organisms also originated on Mars. The claims immediately made headlines worldwide, culminating in U.S. president Bill Clinton giving a speech about the potential discovery. These claims were controversial from the beginning, and the wider scientific community ultimately rejected the hypothesis once all the unusual features in the meteorite had been explained without requiring life to be present. Despite there being no convincing evidence of Martian life, the initial paper and the enormous scientific and public attention caused by it are considered turning points in the history of the developing science of astrobiology.
History and description
ALH 84001 was found on the Allan Hills Far Western Icefield during the 1984–85 season, by Roberta Score, Lab Manager of the Antarctic Meteorite Laboratory at the Johnson Space Center.
ALH84001 is thought to be one of the oldest Martian meteorites, proposed to have crystallized from molten rock 4.091 billion years ago. Chemical analysis suggests that it originated on Mars when there was liquid water on the planet's surface.
In September 2005, Vicky Hamilton, of the University of Hawaii at Manoa, presented an analysis of the origin of ALH84001 using data from the Mars Global Surveyor and 2001 Mars Odyssey spacecraft orbiting Mars. According to the analysis, Eos Chasma in the Valles Marineris canyon appears to be the source of the meteorite. The analysis was not conclusive, partly because it was limited to areas of Mars not obscured by dust.
The theory holds that ALH84001 was blasted away from the surface of Mars by the impact of a meteor about 17 million years ago, and fell on Earth about 13,000 years ago. These dates were established by a variety of radiometric dating techniques, including samarium–neodymium (Sm–Nd), rubidium–strontium (Rb–Sr), potassium–argon (K–Ar), and carbon-14 dating. Other meteorites that have potential biological markings have generated less interest because they do not contain rock from a "wet" Mars; ALH84001 is the only meteorite originating when Mars may have had liquid surface water.
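The carbon-14 figure follows from the standard decay relation t = (t_half / ln 2) · ln(N0 / N). A small Python sketch of the arithmetic; the surviving fraction used below is a hypothetical value chosen only so the output lands near the quoted ~13,000 years, not a measurement from the ALH84001 studies:

```python
import math

C14_HALF_LIFE_YEARS = 5730.0  # half-life of carbon-14

def radiocarbon_age(remaining_fraction: float) -> float:
    """Age in years from the surviving fraction N/N0 of carbon-14."""
    return (C14_HALF_LIFE_YEARS / math.log(2)) * math.log(1.0 / remaining_fraction)

# Hypothetical fraction chosen so the result lands near the quoted ~13,000 years.
print(round(radiocarbon_age(0.208)))  # ~12,980 years
```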
In October 2011, it was reported that isotopic analysis indicated that the carbonates in ALH84001 were precipitated at a temperature of about 18 °C (64 °F) with water and carbon dioxide from the Martian atmosphere. The carbonate carbon and oxygen isotope ratios imply deposition of the carbonates from a gradually evaporating subsurface water body, probably a shallow aquifer meters or tens of meters below the surface.
In April 2020, researchers reported discovering nitrogen-bearing organics in Allan Hills 84001.
A later study in January 2022 concluded that ALH84001 did not contain Martian life; the discovered organic molecules were found to be associated with abiotic processes (i.e., "serpentinization and carbonation reactions that occurred during the aqueous alteration of basalt rock by hydrothermal fluids") produced on the very early Mars 4 billion years ago instead.
Hypothetical biogenic features
On August 6, 1996, a team of researchers led by NASA scientists, including lead author David S. McKay, announced that the meteorite may contain trace evidence of life from Mars. This was published as an article in Science a few days later. Under a scanning electron microscope, structures were visible that some scientists interpreted as fossils of bacteria-like lifeforms. The structures found on ALH84001 are 20–100 nanometres in diameter, similar in size to theoretical nanobacteria, but smaller than any cellular life known at the time of their discovery. If the structures had been fossilized lifeforms, as was proposed by the so-called biogenic hypothesis of their formation, they would have been the first solid evidence of the existence of extraterrestrial life, aside from the chance of their origin being terrestrial contamination.
The announcement of possible extraterrestrial life caused considerable controversy. When the discovery was announced, many immediately conjectured that the fossils were the first true evidence of extraterrestrial life—making headlines around the world, and even prompting President of the United States Bill Clinton to make a formal televised announcement to mark the event.
McKay argued that likely microbial terrestrial contamination found in other Martian meteorites does not resemble the microscopic shapes in ALH84001. In particular, the shapes within ALH84001 look intergrown or embedded in the indigenous material, while likely contamination does not. While it has not yet conclusively been shown how the features in the meteorite were formed, similar features have been recreated in the lab without biological inputs by a team led by D.C. Golden. McKay says these results were obtained using unrealistically pure raw materials as a starting point, and "will not explain many of the features described by us in ALH84001." According to McKay, a plausible inorganic model "must explain simultaneously all of the properties that we and others have suggested as possible biogenic properties of this meteorite." The rest of the scientific community disagreed with McKay.
In January 2010, a team of scientists at Johnson Space Center, including McKay, argued that since their original paper was published in November 2009, the biogenic hypothesis has been further supported by the discovery of three times the original amount of fossil-like data, including more "biomorphs" (suspected Martian fossils), inside two additional Martian meteorites, as well as more evidence in other parts of the Allan Hills meteorite itself.
However, the scientific consensus is that "morphology alone cannot be used unambiguously as a tool for primitive life detection." Interpretation of morphology is notoriously subjective, and its use alone has led to numerous errors of interpretation.
Features of ALH84001 that have been interpreted as suggesting the presence of microfossils include:
The structures resemble some modern terrestrial bacteria and their appendages. Though some are much smaller than any known extant Earth microbes, others are of the order of 100–200 nm in size, within the size limits of Pelagibacter ubique, the most common bacterium on Earth, which ranges from 120 to 200 nm, as well as hypothetical nanobacteria. RNA organisms, which are expected to have lived on Earth during the time period when ALH84001 was ejected from Mars, may also have been as small or smaller than these structures, as modern RNA viruses and viroids are often as little as a few dozen nanometers. Some of the structures are even larger, 1–2 microns in diameter. The smallest structures are too small to contain all the systems required by modern life.
Some of the structures resemble colonies and biofilms. However, there are many instances of morphologies that suggested life and were later shown to be due to inorganic processes.
The meteorite contains magnetite crystals of the unusual rectangular prism type, and organized into domains all about the same size, indistinguishable from magnetite produced biologically on Earth and not matching any known non-biological magnetite that forms naturally on Earth. The magnetite is embedded in the carbonate. If found on Earth it would be a very strong biosignature. However, in 2001, scientists were able to explain and produce carbonate globules containing similar magnetite grains through an inorganic process simulating conditions ALH84001 likely experienced on Mars.
It contains polycyclic aromatic hydrocarbons (PAHs) concentrated in the regions containing the carbonate globules, and these have been shown to be indigenous. Other organics such as amino acids do not follow this pattern and are probably due to Antarctic contamination. However, PAHs are also regularly found in asteroids, comets and meteorites, and in deep space, all in the absence of life.
In popular culture
The 2001 mystery-thriller novel Deception Point by Dan Brown, about a discovered meteorite that seems to prove the existence of extraterrestrial life, was inspired by ALH84001.
See also
Glossary of meteoritics
History of Mars observation
List of rocks on Mars
Mars sample-return mission
Nakhla meteorite
Northwest Africa 7034 meteorite
Panspermia
Shergotty meteorite
Tissint meteorite
Yamato 000593 meteorite
Notes
References
Further reading
External links
The ALH84001 Meteorite at JPL NASA website
Lunar and Planetary Institute's Allan Treiman's dissection of ALH84001 literature for the non-specialist
1984 in Antarctica
1984 in Australia
1984 in science
1996 in science
1996 in the United States
Astrobiology
Australian Antarctic Territory
Martian meteorites
Meteorites found in Antarctica
Pseudofossils | Allan Hills 84001 | [
"Astronomy",
"Biology"
] | 1,884 | [
"Origin of life",
"Speculative evolution",
"Astrobiology",
"Biological hypotheses",
"Astronomical sub-disciplines"
] |
183,699 | https://en.wikipedia.org/wiki/Bulbourethral%20gland | The bulbourethral glands or Cowper's glands (named for English anatomist William Cowper) are two small exocrine and accessory glands in the reproductive system of many male mammals. They are homologous to Bartholin's glands in females. The bulbourethral glands are responsible for producing a pre-ejaculate fluid called Cowper's fluid (known colloquially as pre-cum), which is secreted during sexual arousal, neutralizing the acidity of the urethra in preparation for the passage of sperm cells. The paired glands are found adjacent to the urethra just below the prostate, seen best by screening MRI as a tool in preventative healthcare in males. Screening MRI may be performed when there is a positive prostate-specific antigen on basic laboratory tests. Prostate cancer is the second-most common cause of cancer-related mortality in males in the USA.
Most species of placental mammals have bulbourethral glands, but they are absent in Caniformia and Cetacea. They are the only accessory reproductive glands in male monotremes. Placental mammals usually have one pair of bulbourethral glands, while male marsupials have 1–3 pairs. Of all domesticated animals, they are absent only in dogs.
Location
Bulbourethral glands are located posterior and lateral to the membranous portion of the urethra at the base of the penis, between the two layers of the fascia of the urogenital diaphragm, in the deep perineal pouch. They are enclosed by transverse fibers of the sphincter urethrae membranaceae muscle.
Structure
The bulbourethral glands are compound tubulo-alveolar glands, each approximately the size of a pea in humans. In chimpanzees, they are not visible during dissection, but can be found on microscopic examination. In boars, they are up to 18 cm long and 5 cm in diameter. They are composed of several lobules held together by a fibrous covering. Each lobule consists of a number of acini, lined by columnar epithelial cells, opening into a duct that joins with the ducts of other lobules to form a single excretory duct. This duct is approximately 2.5 cm long and opens into the bulbar urethra at the base of the penis. The glands gradually diminish in size with advancing age.
Function
The bulbourethral gland contributes up to 4 ml of fluid during sexual arousal. The secretion is a clear fluid rich in mucoproteins that help to lubricate the distal urethra and neutralize any acidic urine residue that remains in the urethra.
According to one preliminary study, the bulbourethral gland fluid might not contain any sperm, whereas another study showed some men did leak sperm in potentially significant quantities (in a range from low counts up to 50 million sperm per ml) into the pre-ejaculatory fluid, potentially leading to conception from the introduction of pre-ejaculate. However, the sperm source is a residual or pre-ejaculatory leak from the testicles into the vasa deferentia, rather than from the bulbourethral gland itself.
Gallery
See also
List of homologues of the human reproductive system
Urethral gland
List of distinct cell types in the adult human body
References
Exocrine system
Glands
Human male reproductive system
Mammal male reproductive system
Sex organs | Bulbourethral gland | [
"Biology"
] | 724 | [
"Exocrine system",
"Organ systems"
] |
183,824 | https://en.wikipedia.org/wiki/Sonic%20boom | A sonic boom is a sound associated with shock waves created when an object travels through the air faster than the speed of sound. Sonic booms generate enormous amounts of sound energy, sounding similar to an explosion or a thunderclap to the human ear.
The crack of a supersonic bullet passing overhead or the crack of a bullwhip are examples of a sonic boom in miniature.
Sonic booms due to large supersonic aircraft can be particularly loud and startling, tend to awaken people, and may cause minor damage to some structures. This led to the prohibition of routine supersonic flight overland. Although sonic booms cannot be completely prevented, research suggests that with careful shaping of the vehicle, the nuisance due to sonic booms may be reduced to the point that overland supersonic flight may become a feasible option.
A sonic boom does not occur only at the moment an object crosses the sound barrier and neither is it heard in all directions emanating from the supersonic object. Rather, the boom is a continuous effect that occurs while the object is traveling at supersonic speeds and affects only observers that are positioned at a point that intersects a region in the shape of a geometrical cone behind the object. As the object moves, this conical region also moves behind it and when the cone passes over observers, they will briefly experience the "boom".
Causes
When an aircraft passes through the air, it creates a series of pressure waves in front of the aircraft and behind it, similar to the bow and stern waves created by a boat. These waves travel at the speed of sound and, as the speed of the object increases, the waves are forced together, or compressed, because they cannot get out of each other's way quickly enough. Eventually, they merge into a single shock wave, which travels at the speed of sound, a critical speed known as Mach 1, approximately 1,235 km/h (767 mph) at sea level in air at 20 °C (68 °F).
In smooth flight, the shock wave starts at the nose of the aircraft and ends at the tail. Because the different radial directions around the aircraft's direction of travel are equivalent (given the "smooth flight" condition), the shock wave forms a Mach cone, similar to a vapour cone, with the aircraft at its tip. The half-angle between the direction of flight and the shock wave is given by:
sin α = 1/M,
where M is the plane's Mach number (the ratio of its speed to the local speed of sound), so sin α is the inverse of the Mach number. Thus the faster the plane travels, the finer and more pointed the cone is.
There is a rise in pressure at the nose, decreasing steadily to a negative pressure at the tail, followed by a sudden return to normal pressure after the object passes. This "overpressure profile" is known as an N-wave because of its shape. The "boom" is experienced when there is a sudden change in pressure; therefore, an N-wave causes two booms – one when the initial pressure rise reaches an observer, and another when the pressure returns to normal. This leads to a distinctive "double boom" from a supersonic aircraft. When the aircraft is maneuvering, the pressure distribution changes into different forms, with a characteristic U-wave shape.
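The N-wave just described is easy to synthesize numerically: overpressure jumps up at the bow shock, falls linearly through ambient to an equal underpressure, then snaps back at the tail shock, giving the two abrupt changes heard as the double boom. A minimal sketch, with the peak overpressure and duration as purely illustrative values:

```python
import numpy as np

def n_wave(t: np.ndarray, peak_pa: float = 100.0, duration_s: float = 0.3) -> np.ndarray:
    """Idealized N-wave overpressure: +peak at t=0, falling linearly to -peak at t=duration."""
    inside = (t >= 0.0) & (t <= duration_s)
    wave = np.zeros_like(t)
    wave[inside] = peak_pa * (1.0 - 2.0 * t[inside] / duration_s)
    return wave

t = np.linspace(-0.1, 0.4, 11)
print(n_wave(t))  # two abrupt pressure jumps -> the characteristic "double boom"
```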
Since the boom is being generated continually as long as the aircraft is supersonic, it fills out a narrow path on the ground following the aircraft's flight path, a bit like an unrolling red carpet, and hence known as the boom carpet. Its width depends on the altitude of the aircraft. The distance from the point on the ground where the boom is heard to the aircraft depends on its altitude and the angle .
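The cone geometry translates directly into code: sin α = 1/M gives the half-angle, and for an observer directly under the flight path the slant distance from the aircraft to the ground intercept is altitude/sin α = altitude × M. A short sketch (altitudes and Mach numbers are illustrative):

```python
import math

def mach_angle_deg(mach: float) -> float:
    """Half-angle of the Mach cone: sin(alpha) = 1/M, valid for M > 1."""
    return math.degrees(math.asin(1.0 / mach))

def slant_distance_m(altitude_m: float, mach: float) -> float:
    """Distance from the aircraft to the cone's ground intercept below the flight path."""
    return altitude_m * mach  # altitude / sin(alpha) simplifies to altitude * M

for m in (1.2, 2.0, 3.0):
    print(f"M={m}: alpha={mach_angle_deg(m):.1f} deg, "
          f"slant at 12 km altitude = {slant_distance_m(12_000, m)/1000:.1f} km")
```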
For today's supersonic aircraft in normal operating conditions, the peak overpressure varies from less than 50 to 500 Pa (1 to 10 psf (pound per square foot)) for an N-wave boom. Peak overpressures for U-waves are amplified two to five times the N-wave, but this amplified overpressure impacts only a very small area when compared to the area exposed to the rest of the sonic boom. The strongest sonic boom ever recorded was 7,000 Pa (144 psf) and it did not cause injury to the researchers who were exposed to it. The boom was produced by an F-4 flying just above the speed of sound at an altitude of 100 feet (30 m). In recent tests, the maximum boom measured during more realistic flight conditions was 1,010 Pa (21 psf). There is a probability that some damage—shattered glass, for example—will result from a sonic boom. Buildings in good condition should suffer no damage by pressures of 530 Pa (11 psf) or less. And, typically, community exposure to sonic boom is below 100 Pa (2 psf). Ground motion resulting from the sonic boom is rare and is well below structural damage thresholds accepted by the U.S. Bureau of Mines and other agencies.
The power, or volume, of the shock wave, depends on the quantity of air that is being accelerated, and thus the size and shape of the aircraft. As the aircraft increases speed the shock cone gets tighter around the craft and becomes weaker to the point that at very high speeds and altitudes, no boom is heard. The "length" of the boom from front to back depends on the length of the aircraft to a power of 3/2. Longer aircraft therefore "spread out" their booms more than smaller ones, which leads to a less powerful boom.
Several smaller shock waves can and usually do form at other points on the aircraft, primarily at any convex points, or curves, the leading wing edge, and especially the inlet to engines. These secondary shockwaves are caused by the air being forced to turn around these convex points, which generates a shock wave in supersonic flow.
The later shock waves travel somewhat faster than the first, catch up with it, and merge with the main shockwave at some distance from the aircraft to create a much more defined N-wave shape. This maximizes both the magnitude and the "rise time" of the shock, which makes the boom seem louder. On most aircraft designs the characteristic distance is about 40,000 feet (12,000 m), meaning that below this altitude the sonic boom will be "softer". However, the drag at this altitude or below makes supersonic travel particularly inefficient, which poses a serious problem.
Supersonic aircraft
Supersonic aircraft are any aircraft that can achieve flight faster than Mach 1, the speed of sound: "Supersonic includes speeds up to five times faster than the speed of sound, or Mach 5." (Dunbar, 2015) Top speeds vary widely by design, and most supersonic aircraft operate well below the Mach 5 hypersonic threshold. There are many variations of supersonic aircraft. Some designs trade engine power for highly refined aerodynamics, while others rely on sheer thruster power to push a less aerodynamic airframe to high speed. A typical model in United States military use costs between roughly $13 million and $35 million U.S. dollars.
Measurement and examples
The pressure from sonic booms caused by aircraft is often a few pounds per square foot. A vehicle flying at greater altitude will generate lower pressures on the ground because the shock wave reduces in intensity as it spreads out away from the vehicle, but the sonic booms are less affected by vehicle speed.
Abatement
In the late 1950s when supersonic transport (SST) designs were being actively pursued, it was thought that although the boom would be very large, the problems could be avoided by flying higher. This assumption was proven false when the North American XB-70 Valkyrie first flew, and it was found that the boom was a problem even at 70,000 feet (21,000 m). It was during these tests that the N-wave was first characterized.
Richard Seebass and his colleague Albert George at Cornell University studied the problem extensively and eventually defined a "figure of merit" (FM) to characterize the sonic boom levels of different aircraft. FM is a function of the aircraft's weight and the aircraft length. The lower this value, the less boom the aircraft generates, with figures of about 1 or lower being considered acceptable. Using this calculation, they found FMs of about 1.4 for Concorde and 1.9 for the Boeing 2707. This eventually doomed most SST projects as public resentment, mixed with politics, eventually resulted in laws that made any such aircraft less useful (flying supersonically only over water for instance). Small airplane designs like business jets are favored and tend to produce minimal to no audible booms.
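The text quotes FM values for two designs but not the underlying formula, so only the figures themselves and the acceptability threshold of about 1 can be encoded; a trivial sketch:

```python
# Figures of merit quoted in the text (Seebass & George); the formula itself is not given here.
FIGURE_OF_MERIT = {"Concorde": 1.4, "Boeing 2707": 1.9}
ACCEPTABLE_FM = 1.0

for aircraft, fm in FIGURE_OF_MERIT.items():
    verdict = "acceptable" if fm <= ACCEPTABLE_FM else "too loud"
    print(f"{aircraft}: FM={fm} -> {verdict}")
```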
Building on the earlier research of L. B. Jones, Seebass and George identified conditions in which sonic boom shockwaves could be eliminated. This work was extended by Christine M. Darden and described as the Jones-Seebass-George-Darden theory of sonic boom minimization. This theory approached the problem from a different angle, trying to spread out the N-wave laterally and temporally (longitudinally) by producing a strong and downwards-focused (SR-71 Blackbird, Boeing X-43) shock at a sharp but wide-angled nose cone, which will travel at slightly supersonic speed (bow shock), and using a swept-back flying wing or an oblique flying wing to smooth out this shock along the direction of flight (the tail of the shock travels at sonic speed). To adapt this principle to existing planes, which generate a shock at their nose cone and an even stronger one at their wing leading edge, the fuselage below the wing is shaped according to the area rule. Ideally, this would raise the characteristic altitude from 40,000 to 60,000 feet (from 12,000 m to 18,000 m), which is where most SST aircraft were expected to fly.
This remained untested for decades, until DARPA started the Quiet Supersonic Platform project and funded the Shaped Sonic Boom Demonstration (SSBD) aircraft to test it. SSBD used an F-5 Freedom Fighter: the F-5E was modified with a highly refined shape which lengthened the nose to that of the F-5F model, with a fairing extending from the nose back to the inlets on the underside of the aircraft. The SSBD was tested over two years, culminating in 21 flights, and constituted an extensive study of sonic boom characteristics. Analysis of the 1,300 recordings, some taken inside the shock wave by a chase plane, showed that the SSBD reduced the boom by about one-third. Although one-third is not a huge reduction, it could have reduced Concorde's boom to an acceptable level below FM = 1.
As a follow-on to SSBD, in 2006 a NASA-Gulfstream Aerospace team tested the Quiet Spike on NASA Dryden's F-15B aircraft 836. The Quiet Spike is a telescoping boom fitted to the nose of an aircraft specifically designed to weaken the strength of the shock waves forming on the nose of the aircraft at supersonic speeds. Over 50 test flights were performed. Several flights included probing of the shockwaves by a second F-15B, NASA's Intelligent Flight Control System testbed, aircraft 837.
Some theoretical designs do not appear to create sonic booms at all, such as the Busemann biplane. However, creating a shockwave is inescapable whenever an aircraft generates aerodynamic lift.
In 2018, NASA awarded Lockheed Martin a $247.5 million contract to construct a design known as the Low Boom Flight Demonstrator, which aims to reduce the boom to the sound of a car door closing. As of October 2023, the first flight was expected in 2024.
Perception, noise, and other concerns
The sound of a sonic boom depends largely on the distance between the observer and the aircraft shape producing the sonic boom. A sonic boom is usually heard as a deep double "boom" as the aircraft is usually some distance away. The sound is much like that of mortar bombs, commonly used in firework displays. It is a common misconception that only one boom is generated during the subsonic to supersonic transition; rather, the boom is continuous along the boom carpet for the entire supersonic flight. As a former Concorde pilot puts it, "You don't actually hear anything on board. All we see is the pressure wave moving down the airplane – it indicates the instruments. And that's what we see around Mach 1. But we don't hear the sonic boom or anything like that. That's rather like the wake of a ship – it's behind us."
In 1964, NASA and the Federal Aviation Administration began the Oklahoma City sonic boom tests, which caused eight sonic booms per day over six months. Valuable data was gathered from the experiment, but 15,000 complaints were generated and ultimately entangled the government in a class-action lawsuit, which it lost on appeal in 1969.
Sonic booms were also a nuisance in North Cornwall and North Devon in the UK as these areas were underneath the flight path of Concorde. Windows would rattle and in some cases, the "torching" (masonry mortar underneath roof slates) would be dislodged with the vibration.
There has been recent work in this area, notably under DARPA's Quiet Supersonic Platform studies. Research by acoustics experts under this program began looking more closely at the composition of sonic booms, including the frequency content. Several characteristics of the traditional sonic boom "N" wave can influence how loud and irritating it can be perceived by listeners on the ground. Even strong N-waves such as those generated by Concorde or military aircraft can be far less objectionable if the rise time of the over-pressure is sufficiently long. A new metric has emerged, known as perceived loudness, measured in PLdB. This takes into account the frequency content, rise time, etc. A well-known example is the snapping of one's fingers in which the "perceived" sound is nothing more than an annoyance.
The energy range of sonic boom is concentrated in the 0.1–100 hertz frequency range that is considerably below that of subsonic aircraft, gunfire and most industrial noise. Duration of sonic boom is brief; less than a second, 100 milliseconds (0.1 second) for most fighter-sized aircraft and 500 milliseconds for the space shuttle or Concorde jetliner. The intensity and width of a sonic boom path depend on the physical characteristics of the aircraft and how it is operated. In general, the greater an aircraft's altitude, the lower the over-pressure on the ground. Greater altitude also increases the boom's lateral spread, exposing a wider area to the boom. Over-pressures in the sonic boom impact area, however, will not be uniform. Boom intensity is greatest directly under the flight path, progressively weakening with greater horizontal distance away from the aircraft flight track. Ground width of the boom exposure area is approximately one statute mile for each 1,000 feet (305 m) of altitude (the width is about five times the altitude); that is, an aircraft flying supersonic at 30,000 feet (9,100 m) will create a lateral boom spread of about 30 miles (48 km). For steady supersonic flight, the boom is described as a carpet boom since it moves with the aircraft as it maintains supersonic speed and altitude. Some maneuvers, diving, acceleration, or turning, can cause focusing of the boom. Other maneuvers, such as deceleration and climbing, can reduce the strength of the shock. In some instances, weather conditions can distort sonic booms.
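The rule of thumb above, roughly one statute mile of carpet width per 1,000 feet of altitude, is a one-line calculation; a sketch under exactly that stated approximation:

```python
def boom_carpet_width_miles(altitude_ft: float) -> float:
    """Rule of thumb from the text: ~1 statute mile of lateral spread per 1,000 ft of altitude."""
    return altitude_ft / 1000.0

print(boom_carpet_width_miles(30_000))  # ~30 miles of lateral boom spread
```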
Depending on the aircraft's altitude, sonic booms reach the ground 2 to 60 seconds after flyover. However, not all booms are heard at ground level. The speed of sound at any altitude is a function of air temperature. A decrease or increase in temperature results in a corresponding decrease or increase in sound speed. Under standard atmospheric conditions, air temperature decreases with increased altitude. For example, when the sea-level temperature is 59 degrees Fahrenheit (15 °C), the temperature at 30,000 feet (9,100 m) drops to minus 49 degrees Fahrenheit (−45 °C). This temperature gradient helps bend the sound waves upward. Therefore, for a boom to reach the ground, the aircraft's speed relative to the ground must be greater than the speed of sound at the ground. For example, the speed of sound at 30,000 feet (9,100 m) is about 670 miles per hour (1,080 km/h), but an aircraft must travel at least 750 miles per hour (1,210 km/h; Mach 1.12) for a boom to be heard on the ground.
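The cutoff condition can be checked numerically. In an ideal gas the speed of sound is a = sqrt(γRT), so it falls with temperature, and a boom reaches the ground only if the aircraft outruns sound at ground level. A sketch reproducing the Mach 1.12 threshold; the altitude temperature is the standard-atmosphere approximation used in the text:

```python
import math

GAMMA = 1.4      # ratio of specific heats for air
R_AIR = 287.05   # specific gas constant of dry air, J/(kg*K)

def speed_of_sound_ms(temp_c: float) -> float:
    """Ideal-gas speed of sound: a = sqrt(gamma * R * T)."""
    return math.sqrt(GAMMA * R_AIR * (temp_c + 273.15))

a_ground = speed_of_sound_ms(15.0)    # sea level, 59 F
a_cruise = speed_of_sound_ms(-45.0)   # roughly 30,000 ft in the standard atmosphere

# A boom reaches the ground only if the aircraft outruns sound AT GROUND level,
# so the minimum Mach number at altitude is a_ground / a_cruise.
print(f"a(ground) = {a_ground:.0f} m/s, a(altitude) = {a_cruise:.0f} m/s")
print(f"minimum Mach at altitude for an audible ground boom: {a_ground / a_cruise:.2f}")
```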
The composition of the atmosphere is also a factor. Temperature variations, humidity, atmospheric pollution, and winds can all affect how a sonic boom is perceived on the ground. Even the ground itself can influence the sound of a sonic boom. Hard surfaces such as concrete, pavement, and large buildings can cause reflections that may amplify the sound of a sonic boom. Similarly, grassy fields and profuse foliage can help attenuate the strength of the overpressure of a sonic boom.
Currently, there are no industry-accepted standards for the acceptability of a sonic boom. However, work is underway to create metrics that will help in understanding how humans respond to the noise generated by sonic booms. Until such metrics can be established, either through further study or supersonic overflight testing, it is doubtful that legislation will be enacted to remove the current prohibition on supersonic overflight in place in several countries, including the United States.
Bullwhip
The cracking sound a bullwhip makes when properly wielded is, in fact, a small sonic boom. The end of the whip, known as the "cracker", moves faster than the speed of sound, thus creating a sonic boom.
A bullwhip tapers down from the handle section to the cracker. The cracker has much less mass than the handle section. When the whip is sharply swung, the momentum is transferred down the length of the tapering whip, the declining mass being made up for with increasing speed. Goriely and McMillen showed that the physical explanation is complex, involving the way that a loop travels down a tapered filament under tension.
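The simplified picture in the paragraph above, momentum carried down the taper into ever less mass, can be sketched numerically. Goriely and McMillen show the real dynamics are subtler, so this is only the naive model, and the masses below are invented for illustration:

```python
SPEED_OF_SOUND_MS = 343.0  # in air at about 20 C

def naive_tip_speed(handle_speed: float, handle_mass: float, cracker_mass: float) -> float:
    """Naive momentum-conservation model: m1*v1 = m2*v2 along the tapering whip."""
    return handle_speed * (handle_mass / cracker_mass)

tip = naive_tip_speed(handle_speed=10.0, handle_mass=0.5, cracker_mass=0.01)
print(f"tip speed ~{tip:.0f} m/s, supersonic: {tip > SPEED_OF_SOUND_MS}")
```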
See also
Cherenkov radiation
Hypersonic
Supershear earthquake
Ground vibration boom
Christine Darden
References
External links
Archived at Ghostarchive and the Wayback Machine:
Boston Globe profile of Spike Aerospace planned S-521 supersonic jet
Aircraft aerodynamics
Aircraft noise
Shock waves
Sound
Acoustics | Sonic boom | [
"Physics"
] | 3,749 | [
"Physical phenomena",
"Shock waves",
"Classical mechanics",
"Acoustics",
"Waves"
] |
183,919 | https://en.wikipedia.org/wiki/Coccidioidomycosis | Coccidioidomycosis is a mammalian fungal disease caused by Coccidioides immitis or Coccidioides posadasii. It is commonly known as cocci or Valley fever, and also as California fever, desert rheumatism, or San Joaquin Valley fever. Coccidioidomycosis is endemic in certain parts of the United States in Arizona, California, Nevada, New Mexico, Texas, Utah, and northern Mexico.
Description
C. immitis is a dimorphic saprophytic fungus that grows as a mycelium in the soil and produces a spherule form in the host organism. It resides in the soil in certain parts of the southwestern United States, most notably in California and Arizona. It is also commonly found in northern Mexico, and parts of Central and South America. C. immitis is dormant during long dry spells, then develops as a mold with long filaments that break off into airborne spores when it rains. The spores, known as arthroconidia, are swept into the air by disruption of the soil, such as during construction, farming, low-wind or singular dust events, or an earthquake. Windstorms may also cause epidemics far from endemic areas. In December 1977, a windstorm in an endemic area around Arvin, California led to several hundred cases, including deaths, in non-endemic areas hundreds of miles away.
Coccidioidomycosis is a common cause of community-acquired pneumonia in the endemic areas of the United States. Infections usually occur due to inhalation of the arthroconidial spores after soil disruption. The disease is not contagious. In some cases the infection may recur or become chronic.
It was reported in 2022 that valley fever had been increasing in California's Central Valley for years (1,000 cases in Kern County in 2014, 3,000 in 2021); experts said that cases could rise across the American West as the climate makes the landscape drier and hotter.
Classification
After Coccidioides infection, coccidioidomycosis begins with Valley fever, which is its initial acute form. Valley fever may progress to the chronic form and then to disseminated coccidioidomycosis. Therefore, coccidioidomycosis may be divided into the following types:
Acute coccidioidomycosis, sometimes described in literature as primary pulmonary coccidioidomycosis
Chronic coccidioidomycosis
Disseminated coccidioidomycosis, which includes primary cutaneous coccidioidomycosis
Signs and symptoms
An estimated 60% of people infected with the fungi responsible for coccidioidomycosis have minimal to no symptoms, while 40% will have a range of possible clinical symptoms. Of those who do develop symptoms, the primary infection is most often respiratory, with symptoms resembling bronchitis or pneumonia that resolve over a matter of a few weeks. In endemic regions, coccidioidomycosis is responsible for 20% of cases of community-acquired pneumonia. Notable coccidioidomycosis signs and symptoms include a profound feeling of tiredness, loss of smell and taste, fever, cough, headaches, rash, muscle pain, and joint pain. Fatigue can persist for many months after initial infection. The classic triad of coccidioidomycosis known as "desert rheumatism" includes the combination of fever, joint pains, and erythema nodosum.
A minority (3–5%) of infected individuals do not recover from the initial acute infection and develop a chronic infection. This can take the form of chronic lung infection or widespread disseminated infection (affecting the tissues lining the brain, soft tissues, joints, and bone). Chronic infection is responsible for most of the morbidity and mortality. Chronic fibrocavitary disease is manifested by cough (sometimes productive of mucus), fevers, night sweats and weight loss. Osteomyelitis, including involvement of the spine, and meningitis may occur months to years after initial infection. Severe lung disease may develop in HIV-infected persons.
Complications
Serious complications may occur in patients who have weakened immune systems, including severe pneumonia with respiratory failure and bronchopleural fistulas requiring resection, lung nodules, and possible disseminated form, where the infection spreads throughout the body. The disseminated form of coccidioidomycosis can devastate the body, causing skin ulcers, abscesses, bone lesions, swollen joints with severe pain, heart inflammation, urinary tract problems, and inflammation of the brain's lining, which can lead to death.
A particularly severe case of meningitis caused by valley fever in 2012 initially received several incorrect diagnoses such as sinus infections and cluster headaches. The patient became unable to work during diagnosis and original search for treatments. Eventually the right treatment was found—albeit with severe side effects—requiring four pills a day and medication administered directly into the brain every 16 weeks.
Cause
C. immitis is a dimorphic saprophytic fungus that grows as a mycelium in the soil and produces a spherule form in the host organism. It resides in the soil in certain parts of the southwestern United States, most notably in California and Arizona. It is also commonly found in northern Mexico, and parts of Central and South America. C. immitis is dormant during long dry spells, then develops as a mold with long filaments that break off into airborne spores when it rains. The spores, known as arthroconidia, are swept into the air by disruption of the soil, such as during construction, farming, low-wind or singular dust events, or an earthquake. Windstorms may also cause epidemics far from endemic areas. In December 1977, a windstorm in an endemic area around Arvin, California, led to several hundred cases, including deaths, in non-endemic areas hundreds of miles away.
Rain starts the cycle of initial growth of the fungus in the soil. In soil (and in agar media), Coccidioides exist in filament form. It forms hyphae in both horizontal and vertical directions. Over a prolonged dry period, cells within hyphae degenerate to form alternating barrel-shaped cells (arthroconidia) which are light in weight and carried by air currents. This happens when the soil is disturbed, often by clearing trees, construction or farming. As the population grows, so do all these activities, causing a potential cascade effect. The more land that is cleared and the more arid the soil, the riper the environment for Coccidioides. These spores can be easily inhaled unknowingly. On reaching alveoli they enlarge in size to become spherules, and internal septations develop. This division of cells is made possible by the optimal temperature inside the body. Septations develop and form endospores within the spherule. Rupture of spherules release these endospores, which in turn repeat the cycle and spread the infection to adjacent tissues within the body of the infected individual. Nodules can form in lungs surrounding these spherules. When they rupture, they release their contents into bronchi, forming thin-walled cavities. These cavities can cause symptoms including characteristic chest pain, coughing up blood, and persistent cough. In individuals with a weakened immune system, the infection can spread through the blood. The fungus can also, rarely, enter the body through a break in the skin and cause infection.
Diagnosis
Coccidioidomycosis diagnosis relies on a combination of an infected person's signs and symptoms, findings on radiographic imaging, and laboratory results.
The disease is commonly misdiagnosed as bacterial community-acquired pneumonia. The fungal infection can be demonstrated by microscopic detection of diagnostic cells in body fluids, exudates, sputum and biopsy tissue by methods of Papanicolaou or Grocott's methenamine silver staining. These stains can demonstrate spherules and surrounding inflammation.
With specific nucleotide primers, C. immitis DNA can be amplified by polymerase chain reaction (PCR). It can also be detected in culture by morphological identification or by using molecular probes that hybridize with C. immitis RNA. C. immitis and C. posadasii cannot be distinguished on cytology or by symptoms, but only by DNA PCR.
An indirect demonstration of fungal infection can be achieved also by serologic analysis detecting fungal antigen or host IgM or IgG antibody produced against the fungus. The available tests include the tube-precipitin (TP) assays, complement fixation assays, and enzyme immunoassays. TP antibody is not found in cerebrospinal fluid (CSF). TP antibody is specific and is used as a confirmatory test, whereas ELISA is sensitive and thus used for initial testing.
If the meninges are affected, CSF will show abnormally low glucose levels, an increased level of protein, and lymphocytic pleocytosis. Rarely, CSF eosinophilia is present.
Imaging
Chest X-rays rarely demonstrate nodules or cavities in the lungs, but these images commonly demonstrate lung opacification, pleural effusions, or enlargement of lymph nodes associated with the lungs. Computed tomography scans of the chest are more sensitive than chest X-rays to detect these changes.
Prevention
Preventing coccidioidomycosis is challenging because it is difficult to avoid breathing in the fungus if it is present; however, understanding the public health effects of the disease is essential in areas where the fungus is endemic. Enhancing surveillance of coccidioidomycosis is key to preparedness in the medical field, in addition to improving diagnostics for early infections. There are no completely effective preventive measures available for people who live or travel through Valley fever-endemic areas. Recommended preventive measures include avoiding airborne dust or dirt, but this does not guarantee protection against infection. People in certain occupations may be advised to wear face masks. The use of air filtration indoors is also helpful, in addition to keeping skin injuries clean and covered to avoid skin infection.
From 1998 to 2011, there were 111,117 U.S. cases of coccidioidomycosis logged in the National Notifiable Diseases Surveillance System (NNDSS). Since many U.S. states do not require reporting of coccidioidomycosis, the actual numbers may be higher. The United States' Centers for Disease Control and Prevention (CDC) called the disease a "silent epidemic" and acknowledged that there is no proven anticoccidioidal vaccine available. A 2001 cost-effectiveness analysis indicated that a potential vaccine could improve health as well as reduce total health care expenditures among infants, teens, and immigrant adults, and more modestly improve health but increase total health care expenditures in older age groups.
Raising both surveillance and awareness of the disease while medical researchers develop a human vaccine can positively contribute to prevention efforts. Research demonstrates that patients from endemic areas who are aware of the disease are most likely to request diagnostic testing for coccidioidomycosis. Presently, Meridian Bioscience manufactures an enzyme immunoassay (EIA) test to diagnose Valley fever, which, however, is known to produce a fair number of false positives. Recommended prevention measures can include type-of-exposure-based respirator protection for persons engaged in agriculture, construction and others working outdoors in endemic areas. Dust control measures such as planting grass and wetting the soil, as well as limiting exposure to dust storms, are advisable for residential areas in endemic regions.
Treatment
Significant disease develops in fewer than 5% of those infected and typically occurs in those with a weakened immune system. Mild asymptomatic cases often do not require any treatment. Those with severe symptoms may benefit from antifungal therapy, which requires 3–6 months or more of treatment depending on the response to the treatment. There is a lack of prospective studies that examine optimal antifungal therapy for coccidioidomycosis.
On the whole, oral fluconazole and intravenous amphotericin B are used in progressive or disseminated disease, or in immunocompromised individuals. Amphotericin B was originally the only available treatment, but alternatives, including itraconazole and ketoconazole, became available for milder disease. Fluconazole is the preferred medication for coccidioidal meningitis, due to its penetration into CSF. Intrathecal or intraventricular amphotericin B therapy is used if infection persists after fluconazole treatment. Itraconazole is used for cases that involve the infected person's bones and joints. The antifungal medications posaconazole and voriconazole have also been used to treat coccidioidomycosis. Because the symptoms of coccidioidomycosis are similar to those of the common flu, pneumonia, and other respiratory diseases, it is important for public health professionals to be aware of the rise of coccidioidomycosis and the specifics of diagnosis. Greyhound dogs often get coccidioidomycosis; their treatment regimen involves 6–12 months of ketoconazole taken with food.
Toxicity
Conventional amphotericin B desoxycholate (AmB; used since the 1950s as a primary agent) is known to be associated with drug-induced nephrotoxicity impairing kidney function. Lipid-based formulations have been developed to mitigate side effects such as direct proximal and distal tubular cytotoxicity. These include liposomal amphotericin B (AmBisome), amphotericin B lipid complex (Abelcet), and amphotericin B colloidal dispersion (Amphotec; amphotericin B cholesteryl sulfate), all shown to exhibit a decrease in nephrotoxicity. In one murine (rat and mouse) study, the colloidal dispersion was not as effective as amphotericin B desoxycholate, which, however, had a 50% morbidity rate versus zero for the colloidal dispersion.
The cost in 2015 of the nephrotoxic AmB desoxycholate, for a patient at a dosage of 1 mg/kg/day, was approximately US$63.80, compared to US$1,318.80 for 5 mg/kg/day of the less toxic liposomal AmB.
Epidemiology
Coccidioidomycosis is endemic to the western hemisphere between 40°N and 40°S, including certain parts of the United States in Arizona, California, Nevada, New Mexico, Texas, Utah, and northern Mexico. The ecological niches are characterized by hot summers and mild winters with an annual rainfall of 10–50 cm.
The species are found in alkaline sandy soil, typically 10–30 cm below the surface. In harmony with the mycelium life cycle, incidence increases with periods of dryness after a rainy season; this phenomenon, termed "grow and blow", refers to growth of the fungus in wet weather, producing spores which are spread by the wind during succeeding dry weather. While the majority of cases are observed in the endemic region, cases reported outside the area generally involve visitors who contracted the infection and returned to their native areas before becoming symptomatic.
North America
In the United States, C. immitis is endemic to southern and central California, with the highest presence in the San Joaquin Valley. C. posadasii is most prevalent in Arizona, although it can be found in a wider region spanning Utah, New Mexico, Texas, and Nevada. Approximately 25,000 cases are reported every year, although the total number of infections is estimated to be around 150,000 per year; the disease is underreported because many cases are asymptomatic, and those who do have symptoms are often difficult to distinguish from other causes of pneumonia if they are not specifically tested for valley fever. The incidence of coccidioidomycosis in the United States in 2011 (42.6 per 100,000) was almost ten times higher than the incidence reported in 1998 (5.3 per 100,000). In areas where it is most prevalent, the infection rate is 2–4%.
Incidence varies widely across the west and southwest. In Arizona, for instance, in 2007, there were 3,450 cases in Maricopa County, which in 2007 had an estimated population of 3,880,181 for an incidence of approximately 1 in 1,125. In contrast, though southern New Mexico is considered an endemic region, there were 35 cases in the entire state in 2008 and 23 in 2007, in a region that had an estimated 2008 population of 1,984,356, for an incidence of approximately 1 in 56,695.
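These "1 in N" figures are simply population divided by case count. As a minimal illustration (code written for this purpose, not from the source; the numbers are the ones quoted above), the same figures can be recomputed and expressed as rates per 100,000:

#include <cstdio>

// Recompute the incidence figures quoted above: "1 in N" is population
// divided by cases; the rate per 100,000 is cases * 100,000 / population.
int main() {
    const double maricopa_cases = 3450, maricopa_pop = 3880181;
    const double nm_cases = 35, nm_pop = 1984356;
    std::printf("Maricopa County, 2007: 1 in %.0f (%.1f per 100,000)\n",
                maricopa_pop / maricopa_cases, maricopa_cases * 1e5 / maricopa_pop);
    std::printf("New Mexico, 2008: 1 in %.0f (%.1f per 100,000)\n",
                nm_pop / nm_cases, nm_cases * 1e5 / nm_pop);
}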
Infection rates vary greatly by county, and although population density is important, so are other factors that have not yet been confirmed. Greater construction activity may disturb spores in the soil. In addition, the effect of altitude on fungal growth and morphology has not been studied, and altitude can range from sea level to 10,000 feet or higher across California, Arizona, Utah and New Mexico.
In California from 2000 to 2007, there were 16,970 reported cases (5.9 per 100,000 people) and 752 deaths of the 8,657 people hospitalized. The highest incidence was in the San Joaquin Valley with 76% of the 16,970 cases (12,855) occurring in the area. Following the 1994 Northridge earthquake, there was a sudden increase of cases in the areas affected by the quake, at a pace of over 10 times baseline.
There was an outbreak in the summer of 2001 in Colorado, away from where the disease was considered endemic. A group of archeologists visited Dinosaur National Monument, and eight members of the crew, along with two National Park Service workers were diagnosed with Valley fever.
California state prisons, beginning in 1919, have been particularly affected by coccidioidomycosis. In 2005 and 2006, the Pleasant Valley State Prison near Coalinga and Avenal State Prison near Avenal, on the western side of the San Joaquin Valley, had the highest incidence, at least 3,000 per 100,000. The receiver appointed in Plata v. Schwarzenegger issued an order in May 2013 requiring relocation of vulnerable populations in those prisons.
The incidence rate has been increasing, with rates as high as 7% during 2006–2010. The cost of care and treatment is $23 million in California prisons. A lawsuit was filed against the state in 2014 on behalf of 58 inmates stating that the Avenal and Pleasant valley state prisons did not take necessary steps to prevent infections.
Population risk factors
There are several populations that have a higher risk for contracting coccidioidomycosis and developing the advanced disseminated version of the disease. Populations with exposure to the airborne arthroconidia, such as workers in agriculture and construction, have a higher risk. Outbreaks have also been linked to earthquakes, windstorms and military training exercises where the ground is disturbed. Historically, an infection is more likely to occur in males than females, although this could be attributed to occupation rather than being sex-specific. Women who are pregnant or immediately postpartum are at a high risk of infection and dissemination. There is also an association between stage of pregnancy and severity of the disease, with third-trimester women being more likely to develop dissemination. Presumably this is related to highly elevated hormonal levels, which stimulate growth and maturation of spherules and subsequent release of endospores. Certain ethnic populations are more susceptible to disseminated coccidioidomycosis. The risk of dissemination is 175 times greater in Filipinos and 10 times greater in African Americans than in non-Hispanic whites. Individuals with a weakened immune system are also more susceptible to the disease, in particular individuals with HIV and diseases that impair T-cell function. Individuals with pre-existing conditions such as diabetes are also at a higher risk. Age also affects the severity of the disease, with more than one-third of deaths occurring in the 65–84 age group.
History
The first case of what was later named coccidioidomycosis was described in 1892 in Buenos Aires by Alejandro Posadas, a medical intern at the Hospital de Clínicas "José de San Martín". Posadas established an infectious character of the disease after being able to transfer it in laboratory conditions to lab animals. In the U.S., Dr. E. Rixford, a physician from a San Francisco hospital, and T. C. Gilchrist, a pathologist at Johns Hopkins Medical School, became early pioneers of clinical studies of the infection. They decided that the causative organism was a Coccidia-type protozoan and named it Coccidioides immitis (resembling Coccidia, not mild).
Dr. William Ophüls, a professor at Stanford University Hospital (San Francisco), discovered that the causative agent of the disease that was at first called Coccidioides infection and later coccidioidomycosis was a fungal pathogen, and coccidioidomycosis was also distinguished from Histoplasmosis and Blastomycosis. Further, Coccidioides immitis was identified as the culprit of respiratory disorders previously called San Joaquin Valley fever, desert fever, and Valley fever, and a serum precipitin test was developed by Charles E. Smith that was able to detect an acute form of the infection. In retrospect, Smith played a major role in both medical research and raising awareness about coccidioidomycosis, especially when he became dean of the School of Public Health at the University of California at Berkeley in 1951.
Coccidioides immitis was considered by the United States during the 1950s and 1960s as a potential biological weapon. The strain selected for investigation was designated with the military symbol OC, and initial expectations were for its deployment as a human incapacitant. Medical research suggested that OC might have had some lethal effects on the populace, and Coccidioides immitis started to be classified by the authorities as a threat to public health. Coccidioides immitis was never weaponized to the public's knowledge, and most of the military research in the mid-1960s was concentrated on developing a human vaccine. Coccidioides immitis is not on the U.S. Department of Health and Human Services' or Centers for Disease Control and Prevention's list of select agents and toxins.
In 2002, Coccidioides posadasii was identified as genetically distinct from Coccidioides immitis despite their morphologic similarities and can also cause coccidioidomycosis.
It was reported in 2022 that valley fever had been increasing in the Central Valley of California for years (1,000 cases in Kern County in 2014, 3,000 in 2021); experts said that cases could rise across the American west as the climate makes the landscape drier and hotter. Coccidioides flourishes due to the oscillation between extreme dryness and extreme wetness. The California Department of Public Health said the 9,280 new cases of Valley fever with onset dates in 2023 were the highest number the department has ever documented.
Research
As of 2023, there is no vaccine available to prevent infection with Coccidioides immitis or Coccidioides posadasii, but efforts to develop such a vaccine are underway. Anivive Lifesciences and a team at the University of Arizona Medical School were developing a vaccine for use in dogs, which could eventually lead to a vaccine for humans.
Other animals
In dogs, the most common symptom of coccidioidomycosis is a chronic cough, which can be dry or moist. Other symptoms include fever (in approximately 50% of cases), weight loss, anorexia, lethargy, and depression. The disease can disseminate throughout the dog's body, most commonly causing osteomyelitis (infection of the bone), which leads to lameness. Dissemination can cause other symptoms, depending on which organs are infected. If the fungus infects the heart or pericardium, it can cause heart failure and death.
In cats, symptoms may include skin lesions, fever, and loss of appetite, with skin lesions being the most common.
Other species in which Valley fever has been found include livestock such as cattle and horses; llamas; marine mammals, including sea otters; zoo animals such as monkeys and apes, kangaroos, tigers, etc.; and wildlife native to the geographic area where the fungus is found, such as cougars, skunks, and javelinas.
Additional images
In popular culture
In the Season 1 episode of Bones called "The Man in the Fallout Shelter", the entire lab is exposed to coccidioidomycosis through inhalation of bone dust. Erroneously, the team is forced to quarantine in the lab on Christmas Eve to prevent the disease from spreading to the public (in real life, the disease is not contagious). The lab is later exposed to it again in the Season 2 episode "The Priest in the Churchyard" from contaminated graveyard soil, but the team only receives a series of injections rather than being forced to quarantine.
Everything in Between, a 2022 Australian feature film, contains references to coccidioidomycosis.
In House Season 3 Episode 4, "Lines in the Sand", a 17-year-old patient who has been exposed to Coccidioides immitis exhibits symptoms of coccidioidomycosis.
Thunderhead, a 1999 novel by Douglas Preston and Lincoln Child, uses the fungus and illness as a central plot point.
See also
Coccidioides
Coccidioides immitis
Coccidioides posadasii
Zygomycosis
Medical geology
List of cutaneous conditions
References
Further reading
(Review).
(Review).
External links
U.S. Centers for Disease Control and Prevention page on coccidioidomycosis
Medline Plus Entry for coccidioidomycosis
Biological agents
Animal fungal diseases
Neglected American diseases
Fungal diseases | Coccidioidomycosis | [
"Biology",
"Environmental_science"
] | 5,610 | [
"Fungi",
"Toxicology",
"Fungal diseases",
"Biological warfare",
"Biological agents"
] |
183,928 | https://en.wikipedia.org/wiki/UTF-32 | UTF-32 (32-bit Unicode Transformation Format), sometimes called UCS-4, is a fixed-length encoding used to encode Unicode code points that uses exactly 32 bits (four bytes) per code point (but a number of leading bits must be zero, as there are far fewer than 2³² Unicode code points; in fact only 21 bits are needed). In contrast, all other Unicode transformation formats are variable-length encodings. Each 32-bit value in UTF-32 represents one Unicode code point and is exactly equal to that code point's numerical value.
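Because each 32-bit value is numerically equal to the code point it encodes, conversion in either direction is trivial; for example, the code point U+1F51F is stored as the 32-bit value 0x0001F51F.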
The main advantage of UTF-32 is that the Unicode code points are directly indexed. Finding the Nth code point in a sequence of code points is a constant-time operation. In contrast, a variable-length code requires linear time to count N code points from the start of the string. This makes UTF-32 a simple replacement in code that uses integers incremented by one to examine each location in a string, as was commonly done for ASCII. However, Unicode code points are rarely processed in complete isolation; for example, combining character sequences and many emoji consist of several code points.
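A minimal sketch of this difference (illustrative code, not part of the original article):

#include <cstddef>
#include <string>

// O(1): each element of a std::u32string is one code point.
char32_t nth_code_point_utf32(const std::u32string& s, std::size_t n) {
    return s[n];
}

// O(n): in UTF-8 a code point starts at every byte that is not a
// continuation byte (0b10xxxxxx), so the Nth code point is found by scanning.
std::size_t offset_of_nth_code_point_utf8(const std::string& s, std::size_t n) {
    std::size_t count = 0;
    for (std::size_t i = 0; i < s.size(); ++i) {
        if ((static_cast<unsigned char>(s[i]) & 0xC0) != 0x80) {
            if (count == n) return i;  // found the start of the Nth code point
            ++count;
        }
    }
    return s.size();  // n is past the end of the string
}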
The main disadvantage of UTF-32 is that it is space-inefficient, using four bytes per code point, including 11 bits that are always zero. Characters beyond the BMP are relatively rare in most texts (except, for example, in the case of texts with some popular emojis), and can typically be ignored for sizing estimates. This makes UTF-32 close to twice the size of UTF-16. It can be up to four times the size of UTF-8 depending on how many of the characters are in the ASCII subset.
History
The original ISO/IEC 10646 standard defines a 32-bit encoding form called UCS-4, in which each code point in the Universal Character Set (UCS) is represented by a 31-bit value from 0 to 0x7FFFFFFF (the sign bit was unused and zero). In November 2003, Unicode was restricted by RFC 3629 to match the constraints of the UTF-16 encoding: explicitly prohibiting code points greater than U+10FFFF (and also the high and low surrogates U+D800 through U+DFFF). This limited subset defines UTF-32. Although the ISO standard had (as of 1998 in Unicode 2.1) "reserved for private use" 0xE00000 to 0xFFFFFF and 0x60000000 to 0x7FFFFFFF, these areas were removed in later versions. Because the Principles and Procedures document of ISO/IEC JTC 1/SC 2 Working Group 2 states that all future assignments of code points will be constrained to the Unicode range, UTF-32 will be able to represent all UCS code points, and UTF-32 and UCS-4 are identical.
Utility of fixed width
A fixed number of bytes per code point has theoretical advantages, but each of these has problems in reality:
Truncation becomes easier, but not significantly so compared to UTF-8 and UTF-16 (both of which can search backwards for the point to truncate by looking at 2–4 code units at most).
Finding the Nth character in a string. For fixed width, this is simply an O(1) problem, while it is an O(n) problem in a variable-width encoding. Novice programmers often vastly overestimate how useful this is. Also, what a user might call a "character" is still variable-width: for instance, a combining character sequence may be two code points, an emoji may be several, and a ligature may be one.
Quickly knowing the "width" of a string. However, even "fixed-width" fonts have varying widths (CJK ideographs are often twice as wide), in addition to the already-mentioned problem that the number of code points is not equal to the number of characters.
Use
The main use of UTF-32 is in internal APIs where the data is single code points or glyphs, rather than strings of characters. For instance, in modern text rendering, it is common that the last step is to build a list of structures, each containing coordinates (x, y), attributes, and a single UTF-32 code point identifying the glyph to draw. Often non-Unicode information is stored in the "unused" 11 bits of each word.
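A sketch of such a structure (field names illustrative, not taken from any particular rendering library):

#include <cstdint>
#include <vector>

// One positioned glyph in a text-rendering pipeline. Unicode needs only
// 21 bits, so, as noted above, some renderers pack private flags into the
// otherwise-unused upper bits of the 32-bit word.
struct PositionedGlyph {
    float x, y;          // drawing coordinates
    std::uint16_t attrs; // style attributes (illustrative)
    char32_t code_point; // UTF-32 code point identifying the glyph
};

using GlyphRun = std::vector<PositionedGlyph>;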
Use of UTF-32 strings on Windows (where wchar_t is 16 bits) is almost non-existent. On Unix systems, UTF-32 strings are sometimes, but rarely, used internally by applications, due to the wchar_t type being defined as 32-bit.
UTF-32 is also forbidden as an HTML character encoding.
Programming languages
Python versions up to 3.2 can be compiled to use UTF-32 strings ("wide" builds) instead of UTF-16; from version 3.3 onward, Unicode strings are stored in UTF-32 if there is at least one non-BMP character in the string, but with leading zero bytes optimized away "depending on the [code point] with the largest Unicode ordinal (1, 2, or 4 bytes)" to make all code points that size.
The Seed7 and Lasso programming languages encode all strings with UTF-32, in the belief that direct indexing is important, whereas the Julia programming language moved away from built-in UTF-32 support with its 1.0 release, simplifying the language to having only UTF-8 strings (with all the other encodings considered legacy and moved out of the standard library into a package), following the "UTF-8 Everywhere Manifesto".
C++11 has two built-in data types that use UTF-32. The char32_t data type stores one character in UTF-32. The u32string data type stores a string of UTF-32-encoded characters. A UTF-32-encoded character or string literal is marked with U before the character or string literal.
#include <string>
char32_t UTF32_character = U'🔟'; // also written as U'\U0001F51F'
std::u32string UTF32_string = U"UTF-32-encoded string"; // initialized from a const char32_t string literal

C# has a UTF32Encoding class which represents Unicode characters as bytes, rather than as a string.
Variants
Though technically invalid, the surrogate halves are often encoded and allowed. This allows invalid UTF-16 (such as Windows filenames) to be translated to UTF-32, similar to how the WTF-8 variant of UTF-8 works. Sometimes paired surrogates are encoded instead of non-BMP characters, similar to CESU-8. Due to the large number of unused 32-bit values, it is also possible to preserve invalid UTF-8 by using non-Unicode values to encode UTF-8 errors, though there is no standard for this.
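A sketch of the corresponding strict check (illustrative code; lenient decoders, as described above, may deliberately accept surrogate halves):

// A value is valid strict UTF-32 when it is at most U+10FFFF and is not a
// surrogate half (U+D800 through U+DFFF).
bool is_valid_utf32(char32_t c) {
    return c <= 0x10FFFF && (c < 0xD800 || c > 0xDFFF);
}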
UTF-32 comes in two endianness variants: UTF-32-BE (big-endian) and UTF-32-LE (little-endian).
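For example, the code point U+1F51F is serialized as the byte sequence 00 01 F5 1F in UTF-32-BE and as 1F F5 01 00 in UTF-32-LE.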
See also
Comparison of Unicode encodings
UTF-16
Notes
References
External links
The Unicode Standard 5.0.0, chapter 3 formally defines UTF-32 in § 3.9, D90 (PDF page 40) and § 3.10, D99-D101 (PDF page 45)
Unicode Standard Annex #19 formally defined UTF-32 for Unicode 3.x (March 2001; last updated March 2002)
Registration of new charsets: UTF-32, UTF-32BE, UTF-32LE announcement of UTF-32 being added to the IANA charset registry (April 2002)
Character encoding
Unicode Transformation Formats | UTF-32 | [
"Technology"
] | 1,643 | [
"Natural language and computing",
"Character encoding"
] |
183,932 | https://en.wikipedia.org/wiki/T%20Tauri%20star | T Tauri stars (TTS) are a class of variable stars that are less than about ten million years old. This class is named after the prototype, T Tauri, a young star in the Taurus star-forming region. They are found near molecular clouds and identified by their optical variability and strong chromospheric lines. T Tauri stars are pre-main-sequence stars in the process of contracting to the main sequence along the Hayashi track, a luminosity–temperature relationship obeyed by infant stars of less than 3 solar masses in the pre-main-sequence phase of stellar evolution. The track ends when a sufficiently massive star develops a radiative zone, or when a smaller star commences nuclear fusion on the main sequence.
History
While T Tauri itself was discovered in 1852, the T Tauri class of stars were initially defined by Alfred Harrison Joy in 1945.
Characteristics
T Tauri stars comprise the youngest visible F, G, K and M spectral type stars. Their surface temperatures are similar to those of main-sequence stars of the same mass, but they are significantly more luminous because their radii are larger. Their central temperatures are too low for hydrogen fusion. Instead, they are powered by gravitational energy released as the stars contract, while moving towards the main sequence, which they reach after about 100 million years. They typically rotate with a period between one and twelve days, compared to a month for the Sun, and are very active and variable.
There is evidence of large areas of starspot coverage, and they have intense and variable X-ray and radio emissions (approximately 1000 times that of the Sun). Many have extremely powerful stellar winds; some eject gas in high-velocity bipolar jets. Another source of brightness variability are clumps (protoplanets and planetesimals) in the disk surrounding T Tauri stars.
Their spectra show a higher lithium abundance than the Sun and other main-sequence stars because lithium is destroyed at temperatures above 2,500,000 K. From a study of lithium abundances in 53 T Tauri stars, it has been found that lithium depletion varies strongly with size, suggesting that "lithium burning" by the p-p chain during the last highly convective and unstable stages during the later pre–main sequence phase of the Hayashi contraction may be one of the main sources of energy for T Tauri stars. Rapid rotation tends to improve mixing and increase the transport of lithium into deeper layers where it is destroyed. T Tauri stars generally increase their rotation rates as they age, through contraction and spin-up, as they conserve angular momentum. This causes an increased rate of lithium loss with age. Lithium burning will also increase with higher temperatures and mass, and will last for at most a little over 100 million years.
The p-p chain for lithium burning is as follows:

p + 6Li → 7Be
7Be + e− → 7Li + ν
p + 7Li → 8Be (unstable)
8Be → 2 4He + energy
It will not occur in stars with less than sixty times the mass of Jupiter. The rate of lithium depletion can be used to calculate the age of the star.
Types
Several types of TTSs exist:
Classical T Tauri star (CTTS)
Weak-line T Tauri star (WTTS)
Naked T Tauri star (NTTS), which is a subset of WTTS.
Roughly half of T Tauri stars have circumstellar disks, which in this case are called protoplanetary discs because they are probably the progenitors of planetary systems like the Solar System. Circumstellar discs are estimated to dissipate on timescales of up to 10 million years. Most T Tauri stars are in binary star systems. In various stages of their lives, they are called young stellar objects (YSOs). It is thought that the active magnetic fields and strong stellar wind of Alfvén waves of T Tauri stars are one means by which angular momentum is transferred from the star to the protoplanetary disc. A T Tauri stage for the Solar System would be one means by which the angular momentum of the contracting Sun was transferred to the protoplanetary disc and hence, eventually, to the planets.
Analogs of T Tauri stars in the higher mass range (2–8 solar masses), the A and B spectral type pre-main-sequence stars, are called Herbig Ae/Be-type stars. More massive (>8 solar masses) stars in the pre-main-sequence stage are not observed, because they evolve very quickly: by the time they become visible (i.e., by the time the surrounding circumstellar gas and dust cloud disperses), the hydrogen in their centers is already burning and they are main-sequence objects.
Planets
Planets around T Tauri stars include:
HD 106906 b around an F-type star
1RXS J160929.1−210524b around a K-type star
Gliese 674 b around an M-type star
V830 Tau b around an M-type star
PDS 70b around a K-type star
See also
Orion variable
EX Lupi
P Cygni profile
References
Discussion of V471 Tauri observations and general T-Tauri properties, Frederick M. Walter, Stony Brook University, April 2004
An empirical criterion to classify T Tauri stars and substellar analogs using low-resolution optical spectroscopy, David Barrado y Navascues, 2003
Star types
Star formation | T Tauri star | [
"Astronomy"
] | 1,210 | [
"Star types",
"Astronomical classification systems"
] |
183,933 | https://en.wikipedia.org/wiki/Starspot | Starspots are stellar phenomena, so-named by analogy with sunspots.
Spots as small as sunspots have not been detected on other stars, as they would cause undetectably small fluctuations in brightness. The commonly observed starspots are in general much larger than those on the Sun: up to about 30% of the stellar surface may be covered, corresponding to starspots 100 times larger than those on the Sun.
Detection and measurements
To detect and measure the extent of starspots one uses several types of methods.
For rapidly rotating stars – Doppler imaging and Zeeman-Doppler imaging. With the Zeeman-Doppler imaging technique the direction of the magnetic field on stars can be determined since spectral lines are split according to the Zeeman effect, revealing the direction and magnitude of the field.
For slowly rotating stars – Line Depth Ratio (LDR). Here one measures two different spectral lines, one sensitive to temperature and one which is not. Since starspots have a lower temperature than their surroundings, the temperature-sensitive line changes its depth. From the difference between these two lines the temperature and size of the spot can be calculated, with a temperature accuracy of 10 K.
For eclipsing binary stars – Eclipse mapping produces images and maps of spots on both stars.
For giant binary stars - Very-long-baseline interferometry
For stars with transiting extrasolar planets – Light curve variations.
Temperature
Observed starspots have a temperature which is in general 500–2000 kelvins cooler than the stellar photosphere. This temperature difference could give rise to a brightness variation of up to 0.6 magnitudes between the spot and the surrounding surface. There also seems to be a relation between the spot temperature and the temperature of the stellar photosphere, indicating that starspots behave similarly for different types of stars (observed in G–K dwarfs).
Lifetimes
The lifetime for a starspot depends on its size.
For small spots the lifetime is proportional to their size, similar to spots on the Sun.
For large spots the sizes depend on the differential rotation of the star, but there are some indications that large spots which give rise to light variations can survive for many years even in stars with differential rotation.
Activity cycles
The distribution of starspots across the stellar surface varies analogously to the solar case, but differs for different types of stars, e.g., depending on whether the star is a binary or not. The same types of activity cycles that are found for the Sun can be seen for other stars, corresponding to the solar 11-year cycle (22 years for the full magnetic cycle).
Maunder minimum
Some stars may have longer cycles, possibly analogous to the Sun's Maunder minimum, which lasted 70 years; for example, Maunder-minimum candidates include 51 Pegasi, HD 4915 and HD 166620.
Flip-flop cycles
Another activity cycle is the so-called flip-flop cycle, which implies that the activity on either hemisphere shifts from one side to the other. The same phenomenon can be seen on the Sun, with periods of 3.8 and 3.65 years for the northern and southern hemispheres.
Flip-flop phenomena are observed for both binary RS CVn stars and single stars, although the extent of the cycles differs between binary and single stars.
Notes
References
(explains how Doppler imaging works)
K. G. Strassmeier (1997), Aktive Sterne. Laboratorien der solaren Astrophysik, Springer,
Further reading
Concepts in stellar astronomy
Stellar phenomena | Starspot | [
"Physics"
] | 719 | [
"Concepts in stellar astronomy",
"Physical phenomena",
"Stellar phenomena",
"Concepts in astrophysics"
] |
183,934 | https://en.wikipedia.org/wiki/Stellar%20wind | A stellar wind is a flow of gas ejected from the upper atmosphere of a star. It is distinguished from the bipolar outflows characteristic of young stars by being less collimated, although stellar winds are not generally spherically symmetric.
Different types of stars have different types of stellar winds.
Post-main-sequence stars nearing the ends of their lives often eject large quantities of mass in massive, slow (v = 10 km/s) winds. These include red giants and supergiants, and asymptotic giant branch stars. These winds are understood to be driven by radiation pressure on dust condensing in the upper atmosphere of the stars.
Young T Tauri stars often have very powerful stellar winds.
Massive stars of types O and B have stellar winds with lower mass-loss rates but very high velocities (of order 1,000–2,000 km/s). Such winds are driven by radiation pressure on the resonance absorption lines of heavy elements such as carbon and nitrogen. These high-energy stellar winds blow stellar wind bubbles.
G-type stars like the Sun have a wind driven by their hot, magnetized corona. The Sun's wind is called the solar wind. These winds consist mostly of high-energy electrons and protons (about 1 keV) that are able to escape the star's gravity because of the high temperature of the corona.
Stellar winds from main-sequence stars do not strongly influence the evolution of lower-mass stars such as the Sun. However, for more massive stars such as O stars, the mass loss can result in a star shedding as much as 50% of its mass whilst on the main sequence: this clearly has a significant impact on the later stages of evolution. The influence can even be seen for intermediate mass stars, which will become white dwarfs at the ends of their lives rather than exploding as supernovae only because they lost enough mass in their winds.
See also
Cosmic ray
Cosmic wind
Planetary wind
Colliding-wind binary
Pulsar wind nebula
Galactic superwind
Superwind
References
External links
Wind | Stellar wind | [
"Astronomy"
] | 428 | [
"Astronomical sub-disciplines",
"Stellar astronomy"
] |
183,950 | https://en.wikipedia.org/wiki/Kerogen | Kerogen is solid, insoluble organic matter in sedimentary rocks. It consists of a variety of organic materials, including dead plants, algae, and other microorganisms, that have been compressed and heated by geological processes. All the kerogen on earth is estimated to contain 10¹⁶ tons of carbon. This makes it the most abundant source of organic compounds on earth, exceeding the total organic content of living matter 10,000-fold.
The type of kerogen present in a particular rock formation depends on the type of organic material that was originally present. Kerogen can be classified by these origins: lacustrine (e.g., algal), marine (e.g., planktonic), and terrestrial (e.g., pollen and spores). The type of kerogen depends also on the degree of heat and pressure it has been subjected to, and the length of time the geological processes ran. The result is that a complex mixture of organic compounds reside in sedimentary rocks, serving as the precursor for the formation of hydrocarbons such as oil and gas. In short, kerogen amounts to fossilized organic matter that has been buried and subjected to high temperatures and pressures over millions of years, resulting in various chemical reactions and transformations.
Kerogen is insoluble in normal organic solvents and it does not have a specific chemical formula. Upon heating, kerogen converts in part to liquid and gaseous hydrocarbons. Petroleum and natural gas form from kerogen. The name "kerogen" was introduced by the Scottish organic chemist Alexander Crum Brown in 1906, derived from the Greek for "wax birth" (Greek: κηρός "wax" and -gen, γένεση "birth").
The increased production of hydrocarbons from shale has motivated a revival of research into the composition, structure, and properties of kerogen. Many studies have documented dramatic and systematic changes in kerogen composition across the range of thermal maturity relevant to the oil and gas industry. Analyses of kerogen are generally performed on samples prepared by acid demineralization with critical point drying, which isolates kerogen from the rock matrix without altering its chemical composition or microstructure.
Formation
Kerogen is formed during sedimentary diagenesis from the degradation of living matter. The original organic matter can comprise lacustrine and marine algae and plankton and terrestrial higher-order plants. During diagenesis, large biopolymers from, e.g., proteins, lipids, and carbohydrates in the original organic matter, decompose partially or completely. This breakdown process can be viewed as the reverse of photosynthesis. These resulting units can then polycondense to form geopolymers. The formation of geopolymers in this way accounts for the large molecular weights and diverse chemical compositions associated with kerogen. The smallest units are the fulvic acids, the medium units are the humic acids, and the largest units are the humins. This polymerization usually happens alongside the formation and/or sedimentation of one or more mineral components resulting in a sedimentary rock like oil shale.
When kerogen is contemporaneously deposited with geologic material, subsequent sedimentation and progressive burial or overburden provide elevated pressure and temperature owing to lithostatic and geothermal gradients in Earth's crust. Resulting changes in the burial temperatures and pressures lead to further changes in kerogen composition, including loss of hydrogen, oxygen, nitrogen, sulfur, and their associated functional groups, and subsequent isomerization and aromatization. Such changes are indicative of the thermal maturity state of kerogen. Aromatization allows for molecular stacking in sheets, which in turn drives changes in physical characteristics of kerogen, such as increasing molecular density, vitrinite reflectance, and spore coloration (yellow to orange to brown to black with increasing depth/thermal maturity).
During the process of thermal maturation, kerogen breaks down in high-temperature pyrolysis reactions to form lower-molecular-weight products including bitumen, oil, and gas. The extent of thermal maturation controls the nature of the product, with lower thermal maturities yielding mainly bitumen/oil and higher thermal maturities yielding gas. These generated species are partially expelled from the kerogen-rich source rock and in some cases can charge into a reservoir rock. Kerogen takes on additional importance in unconventional resources, particularly shale. In these formations, oil and gas are produced directly from the kerogen-rich source rock (i.e. the source rock is also the reservoir rock). Much of the porosity in these shales is found to be hosted within the kerogen, rather than between mineral grains as occurs in conventional reservoir rocks. Thus, kerogen controls much of the storage and transport of oil and gas in shale.
Another possible method of formation is that vanabin-containing organisms cleave the metal core out of chlorin-based compounds, such as the magnesium in chlorophyll, and replace it with their vanadium center in order to attach and harvest energy via light-harvesting complexes. It is theorized that Rhodopseudomonas palustris, a bacterium contained in worm castings, does this during its photoautotrophic mode of metabolism. Over time, colonies of light-harvesting bacteria solidify, forming kerogen.
Composition
Kerogen is a complex mixture of organic chemical compounds that make up the most abundant fraction of organic matter in sedimentary rocks. As kerogen is a mixture of organic materials, it is not defined by a single chemical formula. Its chemical composition varies substantially between and even within sedimentary formations. For example, kerogen from the Green River Formation oil shale deposit of western North America contains elements in the proportions carbon 215 : hydrogen 330 : oxygen 12 : nitrogen 5 : sulfur 1.
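From these proportions, the atomic ratios used to classify kerogen (see the Van Krevelen diagram below) follow directly: H/C = 330/215 ≈ 1.53 and O/C = 12/215 ≈ 0.056, values characteristic of a type I kerogen.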
Kerogen is insoluble in normal organic solvents in part because of the high molecular weight of its component compounds. The soluble portion is known as bitumen. When heated to the right temperatures in the earth's crust (oil window c. 50–150 °C, gas window c. 150–200 °C, both depending on how quickly the source rock is heated), some types of kerogen release crude oil or natural gas, collectively known as hydrocarbons (fossil fuels). When such kerogens are present in high concentration in rocks such as organic-rich mudrocks and shales, they form possible source rocks. Shales that are rich in kerogen but have not been heated to the required temperature to generate hydrocarbons instead may form oil shale deposits.
The chemical composition of kerogen has been analyzed by several forms of solid state spectroscopy. These experiments typically measure the speciations (bonding environments) of different types of atoms in kerogen. One technique is ¹³C NMR spectroscopy, which measures carbon speciation. NMR experiments have found that carbon in kerogen can range from almost entirely aliphatic (sp³ hybridized) to almost entirely aromatic (sp² hybridized), with kerogens of higher thermal maturity typically having higher abundance of aromatic carbon. Another technique is Raman spectroscopy. Raman scattering is characteristic of, and can be used to identify, specific vibrational modes and symmetries of molecular bonds. The first-order Raman spectrum of kerogen comprises two principal peaks: a so-called G band ("graphitic") attributed to in-plane vibrational modes of well-ordered sp² carbon and a so-called D band ("disordered") from symmetric vibrational modes of sp² carbon associated with lattice defects and discontinuities. The relative spectral position (Raman shift) and intensity of these carbon species is shown to correlate to thermal maturity, with kerogens of higher thermal maturity having higher abundance of graphitic/ordered aromatic carbons. Complementary and consistent results have been obtained with infrared (IR) spectroscopy, which show that kerogen has a higher fraction of aromatic carbon and shorter lengths of aliphatic chains at higher thermal maturities. These results can be explained by the preferential removal of aliphatic carbons by cracking reactions during pyrolysis, where the cracking typically occurs at weak C–C bonds beta to aromatic rings and results in the replacement of a long aliphatic chain with a methyl group. At higher maturities, when all labile aliphatic carbons have already been removed (in other words, when the kerogen has no remaining oil-generation potential), further increase in aromaticity can occur from the conversion of aliphatic bonds (such as alicyclic rings) to aromatic bonds.
IR spectroscopy is sensitive to carbon-oxygen bonds such as quinones, ketones, and esters, so the technique can also be used to investigate oxygen speciation. It is found that the oxygen content of kerogen decreases during thermal maturation (as has also been observed by elemental analysis), with relatively little observable change in oxygen speciation. Similarly, sulfur speciation can be investigated with X-ray absorption near edge structure (XANES) spectroscopy, which is sensitive to sulfur-containing functional groups such as sulfides, thiophenes, and sulfoxides. Sulfur content in kerogen generally decreases with thermal maturity, and sulfur speciation includes a mix of sulfides and thiophenes at low thermal maturities and is further enriched in thiophenes at high maturities.
Overall, changes in kerogen composition with respect to heteroatom chemistry occur predominantly at low thermal maturities (bitumen and oil windows), while changes with respect to carbon chemistry occur predominantly at high thermal maturities (oil and gas windows).
Microstructure
The microstructure of kerogen also evolves during thermal maturation, as has been inferred by scanning electron microscopy (SEM) imaging showing the presence of abundant internal pore networks within the lattice of thermally mature kerogen. Analysis by gas sorption demonstrated that the internal specific surface area of kerogen increases by an order of magnitude (~40 to 400 m²/g) during thermal maturation. X-ray and neutron diffraction studies have examined the spacing between carbon atoms in kerogen, revealing during thermal maturation a shortening of carbon-carbon distances in covalently bonded carbons (related to the transition from primarily aliphatic to primarily aromatic bonding) but a lengthening of carbon-carbon distances in carbons at greater bond separations (related to the formation of kerogen-hosted porosity). This evolution is attributed to the formation of kerogen-hosted pores left behind as segments of the kerogen molecule are cracked off during thermal maturation.
Physical properties
These changes in composition and microstructure result in changes in the properties of kerogen. For example, the skeletal density of kerogen increases from approximately 1.1 g/ml at low thermal maturity to 1.7 g/ml at high thermal maturity. This evolution is consistent with the change in carbon speciation from predominantly aliphatic (similar to wax, density < 1 g/ml) to predominantly aromatic (similar to graphite, density > 2 g/ml) with increasing thermal maturity.
Spatial heterogeneity
Additional studies have explored the spatial heterogeneity of kerogen at small length scales. Individual particles of kerogen arising from different inputs are identified and assigned as different macerals. This variation in starting material may lead to variations in composition between different kerogen particles, leading to spatial heterogeneity in kerogen composition at the micron length scale. Heterogeneity between kerogen particles may also arise from local variations in catalysis of pyrolysis reactions due to the nature of the minerals surrounding different particles. Measurements performed with atomic force microscopy coupled to infrared spectroscopy (AFM-IR) and correlated with organic petrography have analyzed the evolution of the chemical composition and mechanical properties of individual macerals of kerogen with thermal maturation at the nanoscale. These results indicate that all macerals decrease in oxygen content and increase in aromaticity (decrease in aliphaticity) during thermal maturation, but some macerals undergo large changes while other macerals undergo relatively small changes. In addition, macerals that are richer in aromatic carbon are mechanically stiffer than macerals that are richer in aliphatic carbon, as expected because highly aromatic forms of carbon (such as graphite) are stiffer than highly aliphatic forms of carbon (such as wax).
Types
Labile kerogen breaks down to generate principally liquid hydrocarbons (i.e., oil), refractory kerogen breaks down to generate principally gaseous hydrocarbons, and inert kerogen generates no hydrocarbons but forms graphite.
In organic petrography, the different components of kerogen can be identified by microscopic inspection and are classified as macerals. This classification was developed originally for coal (a sedimentary rock that is rich in organic matter of terrestrial origin) but is now applied to the study of other kerogen-rich sedimentary deposits.
The Van Krevelen diagram is one method of classifying kerogen by "types", where kerogens form distinct groups when the ratios of hydrogen to carbon and oxygen to carbon are compared; the nominal ratio ranges for each type are listed below and gathered into a short sketch after the type descriptions.
Type I: algal/sapropelic
Type I kerogens are characterized by high initial hydrogen-to-carbon (H/C) ratios and low initial oxygen-to-carbon (O/C) ratios. This kerogen is rich in lipid-derived material and is commonly, but not always, from algal organic matter in lacustrine (freshwater) environments. On a mass basis, rocks containing type I kerogen yield the largest quantity of hydrocarbons upon pyrolysis. Hence, from the theoretical view, shales containing type I kerogen are the most promising deposits in terms of conventional oil retorting.
Hydrogen:carbon atomic ratio > 1.25
Oxygen:carbon atomic ratio < 0.15
Derived principally from lacustrine algae, deposited in anoxic lake sediments and rarely in marine environments
Composed of alginite, amorphous organic matter, cyanobacteria, freshwater algae, and lesser amounts of land-plant resins
Formed mainly from protein and lipid precursors
Has few cyclic or aromatic structures
Shows great tendency to readily produce liquid hydrocarbons (oil) under heating
Type II: planktonic
Type II kerogens are characterized by intermediate initial H/C ratios and intermediate initial O/C ratios. Type II kerogen is principally derived from marine organic materials, which are deposited in reducing sedimentary environments. The sulfur content of type II kerogen is generally higher than in other kerogen types, and sulfur is found in substantial amounts in the associated bitumen. Although pyrolysis of type II kerogen yields less oil than type I, the amount yielded is still sufficient for type II-bearing sedimentary deposits to be petroleum source rocks.
Hydrogen:carbon atomic ratio < 1.25
Oxygen:carbon atomic ratio 0.03–0.18
Derived principally from marine plankton and algae
Produces a mixture of oil and gas under heating
Type II-S: sulfurous
Similar to type II but with high sulfur content.
Type III: humic
Type III kerogens are characterized by low initial H/C ratios and high initial O/C ratios. Type III kerogens are derived from terrestrial plant matter, specifically from precursor compounds including cellulose; lignin (a non-carbohydrate polymer formed from phenyl-propane units that binds the strands of cellulose together); terpenes; and phenols. Coal is an organic-rich sedimentary rock that is composed predominantly of this kerogen type. On a mass basis, type III kerogens generate the lowest oil yield of the principal kerogen types.
Hydrogen:carbon atomic ratio < 1
Oxygen:carbon atomic ratio 0.03–0.3
Has low hydrogen content because of abundant aromatic carbon structures
Derived from terrestrial (land) plants
Tends to produce gas under heating (recent research has shown that type III kerogens can actually produce oil under extreme conditions)
Type IV: inert/residual
Type IV kerogen comprises mostly inert organic matter in the form of polycyclic aromatic hydrocarbons. They have no potential to produce hydrocarbons.
Hydrogen:carbon atomic ratio < 0.5
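Arranged as a decision rule, the nominal ranges above yield the following simplified classifier (an illustrative sketch, not a published standard; real assignments also consider thermal maturity, since H/C and O/C evolve along maturation pathways on the Van Krevelen diagram):

#include <string>

// Classify kerogen from initial atomic ratios using the nominal ranges
// listed above. The published ranges overlap at their boundaries, so the
// rules are applied in order from most to least hydrogen-poor.
std::string classify_kerogen(double h_c, double o_c) {
    if (h_c < 0.5) return "Type IV (inert/residual)";
    if (h_c < 1.0 && o_c >= 0.03 && o_c <= 0.3) return "Type III (humic)";
    if (h_c < 1.25 && o_c >= 0.03 && o_c <= 0.18) return "Type II (planktonic)";
    if (h_c > 1.25 && o_c < 0.15) return "Type I (algal/sapropelic)";
    return "unclassified";
}

For the Green River composition discussed under Composition, classify_kerogen(1.53, 0.056) returns "Type I (algal/sapropelic)".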
Kerogen cycle
The diagram on the right shows the organic carbon cycle with the flow of kerogen (black solid lines) and the flow of biospheric carbon (green solid lines), showing both the fixation of atmospheric CO2 by terrestrial and marine primary productivity. The combined flux of reworked kerogen and biospheric carbon into ocean sediments constitutes total organic carbon burial entering the endogenous kerogen pool.
Extra-terrestrial
Carbonaceous chondrite meteorites contain kerogen-like components. Such material is thought to have formed the terrestrial planets. Kerogenous materials have been detected also in interstellar clouds and dust around stars.
The Curiosity rover has detected organic deposits similar to kerogen in mudstone samples in Gale Crater on Mars using a revised drilling technique. The presence of benzene and propane also indicates the possible presence of kerogen-like materials, from which hydrocarbons are derived.
See also
References
Helgeson, H.C. et al. (2009). "A chemical and thermodynamic model of oil generation in hydrocarbon source rocks". Geochim. Cosmochim. Acta. 73, 594–695.
Marakushev, S.A.; Belonogova, O.V. (2021). "An inorganic origin of the "oil-source" rocks carbon substance". Georesursy = Georesources. 23, 164–176.
External links
European Association of Organic Geochemists
Organic Geochemistry (journal)
Animation illustrating kerogene formation (approx t=50s) "Oil and Gas Formation" YouTube clip by EarthScience WesternAustralia
Petroleum
Oil shale geology
Petrology | Kerogen | [
"Chemistry"
] | 3,711 | [
"Petroleum",
"Chemical mixtures"
] |
183,953 | https://en.wikipedia.org/wiki/Catagenesis%20%28geology%29 | Catagenesis is a term used in petroleum geology to describe the cracking process which results in the conversion of organic kerogens into hydrocarbons.
Theoretical reaction
Catagenesis is the second stage of maturation of organic carbon on the path to becoming graphitic. This geologic process accounts for very significant changes in the biogenic materials that make up the carbonaceous sediment. During catagenesis, the temperature increases, the pressure increases, and both organic and inorganic constituents "adjust" their phase or form to compensate. The process of "lithification" begins during this stage. Generally speaking, a rise in temperature results in the volatilization of unstable species or elements that are weakly attached to carbon atoms. Increased temperature and pressure also result in the cessation of biogenic processes. One way to express these changes is to look at the ratio of oxygen to carbon, or hydrogen to carbon, as the sediment matures. In almost all cases, as biogenic material matures in a geologic environment, the volatile elements such as oxygen and hydrogen are significantly reduced, resulting in a reduction in the O/C and H/C ratios. A typical O/C ratio value for a fully matured, catagenesis-stage carbon might be less than 0.1. This means that for every 100 carbon atoms there are fewer than 10 oxygen atoms. Similar reductions in the level of hydrogen are also apparent.
This chemical reaction is believed to be a time, temperature and pressure dependent process which creates liquid and/or gaseous hydrocarbons Hc from primary kerogen X and can be summarised using the formula:
Hc(t) = X0 − X(t)
where X0 is the initial kerogen concentration and X(t) is the kerogen concentration at time t.
It is generally held that the dependence on pressure is negligible, such that the process of catagenesis can be given as a first-order differential equation:
dX/dt = −κX
where X is the reactant (kerogen) and κ is the reaction rate constant which introduces the temperature-dependence via the Arrhenius equation.
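As a rough numerical illustration of this first-order model, a minimal Python sketch, assuming purely illustrative values for the Arrhenius pre-exponential factor A and activation energy Ea (real source-rock kinetics are calibrated experimentally):

```python
import math

R = 8.314          # universal gas constant, J/(mol*K)
A = 1.0e14         # assumed pre-exponential factor, 1/s (illustrative only)
Ea = 220_000.0     # assumed activation energy, J/mol (illustrative only)

def rate_constant(T_kelvin):
    """Arrhenius rate constant: kappa = A * exp(-Ea / (R*T))."""
    return A * math.exp(-Ea / (R * T_kelvin))

def kerogen_remaining(X0, T_kelvin, t_seconds):
    """Solve dX/dt = -kappa*X at constant temperature: X(t) = X0 * exp(-kappa*t)."""
    return X0 * math.exp(-rate_constant(T_kelvin) * t_seconds)

X0 = 1.0                          # normalized initial kerogen concentration
t = 10e6 * 365.25 * 24 * 3600    # 10 million years in seconds
for T in (400.0, 425.0, 450.0):  # burial temperatures in kelvin
    X = kerogen_remaining(X0, T, t)
    print(f"T = {T:.0f} K: X(t)/X0 = {X:.3f}, hydrocarbons generated Hc = {X0 - X:.3f}")
```

Even over the same burial time, the exponential temperature dependence of κ makes conversion change from partial to essentially complete within a few tens of kelvins, which is the behaviour the first-order picture is meant to capture.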
Important parameters
Several generally unrecognized but important controlling parameters of metamorphism have been suggested.
The absence or presence of water in the system, because hydrocarbon-thermal destruction is significantly suppressed in the presence of water.
Increasing fluid pressure strongly suppresses all organic-matter metamorphism.
Product escape from reaction sites, as lack of product escape retards metamorphism.
Increasing temperature as the principal driver of reactions.
Future work
A great deal of future research is required to isolate the parameters which are most significant for inducing the catagenetic process. Future work in the field will involve the following:
Establishing the precise relationship between burial time and hydrocarbon cracking.
Determining how hydrogen from water is ultimately incorporated in kerogen.
Establishing the effect of regional shearing.
Determining how static fluid pressure affects hydrocarbon generation. Some experiments have demonstrated that static fluid pressure may explain the presence of hydrocarbon concentrations at depths where their composition would not otherwise be expected.
Many measurements of hydrocarbon content in sample rocks have been done at atmospheric pressure. This ignores the loss of large amounts of hydrocarbons during depressurization: rock samples measured at atmospheric pressure have been found to retain only 0.11–2.13 percent of the hydrocarbon content of samples at formation pressure. Observations at well sites include fizzing of rock chips and oil films covering drilling mud pits.
Types of organic matter can not be ignored. Different types of organic matter have different chemical bonds, bond strength patterns, and thus different activation energies.
C15+ hydrocarbons are stable at much higher temperatures than predicted by first-order reaction kinetics.
For example, while it was once assumed that catagenetic processes were first-order reactions, some research has shown that this may not be the case.
See also
Diagenesis
Fossil fuels
Metamorphic reaction
Footnotes
References
Petroleum geology
Fossil fuels | Catagenesis (geology) | [
"Chemistry"
] | 780 | [
"Petroleum",
"Petroleum geology"
] |
183,957 | https://en.wikipedia.org/wiki/Subsidence | Subsidence is a general term for downward vertical movement of the Earth's surface, which can be caused by both natural processes and human activities. Subsidence involves little or no horizontal movement, which distinguishes it from slope movement.
Processes that lead to subsidence include dissolution of underlying carbonate rock by groundwater; gradual compaction of sediments; withdrawal of fluid lava from beneath a solidified crust of rock; mining; pumping of subsurface fluids, such as groundwater or petroleum; or warping of the Earth's crust by tectonic forces. Subsidence resulting from tectonic deformation of the crust is known as tectonic subsidence and can create accommodation for sediments to accumulate and eventually lithify into sedimentary rock.
Ground subsidence is of global concern to geologists, geotechnical engineers, surveyors, engineers, urban planners, landowners, and the public in general. Pumping of groundwater or petroleum has led to severe subsidence in many locations around the world, incurring costs measured in hundreds of millions of US dollars. Land subsidence caused by groundwater withdrawal will likely increase in occurrence and related damages, primarily due to global population and economic growth, which will continue to drive higher groundwater demand.
Causes
Dissolution of limestone
Subsidence frequently causes major problems in karst terrains, where dissolution of limestone by fluid flow in the subsurface creates voids (i.e., caves). If the roof of a void becomes too weak, it can collapse and the overlying rock and earth will fall into the space, causing subsidence at the surface. This type of subsidence can cause sinkholes which can be many hundreds of meters deep.
Mining
Several types of sub-surface mining, and specifically methods which intentionally cause the extracted void to collapse (such as pillar extraction, longwall mining and any metalliferous mining method which uses "caving" such as "block caving" or "sub-level caving") will result in surface subsidence. Mining-induced subsidence is relatively predictable in its magnitude, manifestation and extent, except where a sudden pillar or near-surface tunnel collapse occurs (usually very old workings). Mining-induced subsidence is nearly always very localized to the surface above the mined area, plus a margin around the outside. The vertical magnitude of the subsidence itself typically does not cause problems, except in the case of drainage (including natural drainage)–rather, it is the associated surface compressive and tensile strains, curvature, tilts and horizontal displacement that are the cause of the worst damage to the natural environment, buildings and infrastructure.
Where mining activity is planned, mining-induced subsidence can be successfully managed if there is co-operation from all of the stakeholders. This is accomplished through a combination of careful mine planning, the taking of preventive measures, and the carrying out of repairs post-mining.
Extraction of petroleum and natural gas
If natural gas is extracted from a natural gas field, the initial pressure (up to 60 MPa (600 bar)) in the field will drop over the years. The pressure helps support the soil layers above the field. As the gas is extracted, the overburden pressure compacts the sediment, which may lead to earthquakes and subsidence at ground level.
Since exploitation of the Slochteren (Netherlands) gas field started in the late 1960s the ground level over a 250 km2 area has dropped by a current maximum of 30 cm.
Extraction of petroleum likewise can cause significant subsidence. The city of Long Beach, California, experienced severe subsidence over the course of 34 years of petroleum extraction, resulting in damage of over $100 million to infrastructure in the area. The subsidence was brought to a halt when secondary recovery wells pumped enough water into the oil reservoir to stabilize it.
Earthquake
Land subsidence can occur in various ways during an earthquake. Large areas of land can subside drastically during an earthquake because of offset along fault lines. Land subsidence can also occur as a result of settling and compacting of unconsolidated sediment from the shaking of an earthquake.
The Geospatial Information Authority of Japan reported immediate subsidence caused by the 2011 Tōhoku earthquake. In Northern Japan, subsidence of 0.50 m (1.64 ft) was observed on the coast of the Pacific Ocean in Miyako, Tōhoku, while Rikuzentakata, Iwate measured 0.84 m (2.75 ft). In the south at Sōma, Fukushima, 0.29 m (0.95 ft) was observed. The maximum amount of subsidence was 1.2 m (3.93 ft), coupled with horizontal diastrophism of up to 5.3 m (17.3 ft) on the Oshika Peninsula in Miyagi Prefecture.
Groundwater-related subsidence
Groundwater-related subsidence is the subsidence (or the sinking) of land resulting from groundwater extraction. It is a growing problem in the developing world as cities increase in population and water use, without adequate pumping regulation and enforcement. One estimate has 80% of serious land subsidence problems associated with the excessive extraction of groundwater, making it a growing problem throughout the world.
Groundwater fluctuations can also indirectly affect the decay of organic material. The habitation of lowlands, such as coastal or delta plains, requires drainage. The resulting aeration of the soil leads to the oxidation of its organic components, such as peat, and this decomposition process may cause significant land subsidence. This applies especially when groundwater levels are periodically adapted to subsidence, in order to maintain desired unsaturated zone depths, exposing more and more peat to oxygen. In addition to this, drained soils consolidate as a result of increased effective stress. In this way, land subsidence has the potential of becoming self-perpetuating, having rates up to 5 cm/yr. Water management used to be tuned primarily to factors such as crop optimization but, to varying extents, avoiding subsidence has come to be taken into account as well.
Faulting induced
When differential stresses exist in the Earth, these can be accommodated either by geological faulting in the brittle crust, or by ductile flow in the hotter and more fluid mantle. Where faults occur, absolute subsidence may occur in the hanging wall of normal faults. In reverse, or thrust, faults, relative subsidence may be measured in the footwall.
Isostatic subsidence
The crust floats buoyantly in the asthenosphere, with a ratio of mass below the "surface" in proportion to its own density and the density of the asthenosphere. If mass is added to a local area of the crust (e.g., through deposition), the crust subsides to compensate and maintain isostatic balance.
The opposite of isostatic subsidence is known as isostatic rebound: the action of the crust returning (sometimes over periods of thousands of years) to a state of isostasy, such as after the melting of large ice sheets or the drying-up of large lakes after the last ice age. Lake Bonneville is a famous example of isostatic rebound. Due to the weight of the water once held in the lake, the earth's crust subsided to maintain equilibrium. When the lake dried up, the crust rebounded. Today at Lake Bonneville, the center of the former lake stands higher than the former lake edges.
Seasonal effects
Many soils contain significant proportions of clay. Because of the very small particle size, they are affected by changes in soil moisture content. Seasonal drying of the soil results in a lowering of both the volume and the surface of the soil. If building foundations are above the level reached by seasonal drying, they move, possibly resulting in damage to the building in the form of tapering cracks.
Trees and other vegetation can have a significant local effect on seasonal drying of soils. Over a number of years, a cumulative drying occurs as the tree grows. That can lead to the opposite of subsidence, known as heave or swelling of the soil, when the tree declines or is felled. As the cumulative moisture deficit is reversed, which can last up to 25 years, the surface level around the tree will rise and expand laterally. That often damages buildings unless the foundations have been strengthened or designed to cope with the effect.
Weight of buildings
High buildings can cause land subsidence by pressing on the soil beneath them with their weight. The problem is already felt in New York City, the San Francisco Bay Area, and Lagos.
Impacts
Increase of flooding potential
Land subsidence leads to the lowering of the ground surface, altering the topography. This elevation reduction increases the risk of flooding, particularly in river flood plains and delta areas.
Sinking cities
Earth fissures
Earth fissures are linear fractures that appear on the land surface, characterized by openings or offsets. These fissures can be several meters deep, several meters wide, and extend for several kilometers. They form when the deformation of an aquifer, caused by pumping, concentrates stress in the sediment. This inhomogeneous deformation results in the differential compaction of the sediments. Ground fissures develop when this tensile stress exceeds the tensile strength of the sediment.
Infrastructure damage
Land subsidence can lead to differential settlements in buildings and other infrastructures, causing angular distortions. When these angular distortions exceed certain values, the structures can become damaged, resulting in issues such as tilting or cracking.
Field measurement of subsidence
Land subsidence causes vertical displacements (subsidence or uplift). Although horizontal displacements also occur, they are generally less significant. The following are field methods used to measure vertical and horizontal displacements in subsiding areas:
Surveying
Borehole extensometers
Global Navigation Satellite System (GNSS)
Interferometric Synthetic Aperture Radar (InSAR)
LiDAR
Tiltmeters
Tomás et al. conducted a comparative analysis of various land subsidence monitoring techniques. The results indicated that InSAR offered the highest coverage, lowest annual cost per point of information and the highest point density. Additionally, they found that, aside from continuous acquisition systems typically installed in areas with rapid subsidence, InSAR had the highest measurement frequencies. In contrast, leveling, non-permanent GNSS, and non-permanent extensometers generally provided only one or two measurements per year.
Land subsidence prediction
Empirical methods
These methods project future land subsidence trends by extrapolating from existing data, treating subsidence as a function solely of time. The extrapolation can be performed either visually or by fitting appropriate curves. Common functions used for fitting include linear, bilinear, quadratic, and/or exponential models. For example, this method has been successfully applied for predicting mining-induced subsidence.
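A minimal sketch of such curve-fitting extrapolation with NumPy; the leveling record below is invented for illustration only:

```python
import numpy as np

# Hypothetical leveling record: years since monitoring began vs cumulative subsidence (m)
t_obs = np.array([0, 2, 4, 6, 8, 10], dtype=float)
s_obs = np.array([0.00, 0.05, 0.12, 0.21, 0.31, 0.44])

# Fit candidate empirical models (subsidence treated as a function of time only)
lin = np.polyfit(t_obs, s_obs, 1)    # linear model
quad = np.polyfit(t_obs, s_obs, 2)   # quadratic model

t_future = 15.0
print("linear prediction at t = 15 yr:    %.2f m" % np.polyval(lin, t_future))
print("quadratic prediction at t = 15 yr: %.2f m" % np.polyval(quad, t_future))
```

The spread between the fitted models at the extrapolated time is itself a useful caution: purely empirical extrapolation carries no physics, so the choice of fitting function dominates the forecast.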
Semi-empirical or statistical methods
These approaches evaluate land subsidence based on its relationship with one or more influencing factors, such as changes in groundwater levels, the volume of groundwater extraction, and clay content.
Theoretical methods
1D model
This model assumes that changes in piezometric levels affecting aquifers and aquitards occur only in the vertical direction. It allows for subsidence calculations at a specific point using only vertical soil parameters.
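A minimal sketch of a point calculation in this spirit, summing the compaction of individual layers for an assumed head decline; the layer thicknesses and vertical compressibilities (m_v) are hypothetical, and a real 1D model would also treat the time dependence of consolidation:

```python
# Minimal 1D compaction sketch: each compressible layer shortens by
#   delta_b = m_v * delta_sigma * b
# where m_v is the layer's vertical compressibility (1/Pa), delta_sigma the
# increase in vertical effective stress caused by the head decline, and b
# the layer thickness. All parameter values below are hypothetical.
GAMMA_W = 9810.0  # unit weight of water, N/m^3

layers = [
    # (thickness b in m, compressibility m_v in 1/Pa)
    (10.0, 5.0e-7),   # soft clay aquitard
    (25.0, 2.0e-8),   # silty sand
    (15.0, 3.0e-7),   # clay aquitard
]

head_decline = 20.0                  # m of groundwater head loss
d_sigma = GAMMA_W * head_decline     # effective-stress increase, Pa (~0.2 MPa)

subsidence = sum(m_v * d_sigma * b for b, m_v in layers)
print(f"predicted ultimate land subsidence: {subsidence:.2f} m")
```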
Quasi-3D model
Quasi-three-dimensional seepage models apply Terzaghi's one-dimensional consolidation equation to estimate subsidence, integrating some aspects of three-dimensional effects.
3D model
The fully coupled three-dimensional model simulates water flow in three dimensions and calculates subsidence using Biot's three-dimensional consolidation theory.
Machine learning
Machine learning has become a new approach for tackling nonlinear problems. It has emerged as a promising method for simulating and predicting land subsidence.
Examples
See also
Cave-in
Lateral and subjacent support, a related concept in property law
Mass wasting
Settlement (structural)
Sinkhole
Soil liquefaction
UNESCO Working Group on Land Subsidence
Sea level rise
References
Depressions (geology)
Soil mechanics
Building defects
Geomorphology
Vertical position | Subsidence | [
"Physics",
"Materials_science"
] | 2,486 | [
"Vertical position",
"Applied and interdisciplinary physics",
"Physical quantities",
"Distance",
"Soil mechanics",
"Building defects",
"Mechanical failure"
] |
183,965 | https://en.wikipedia.org/wiki/Lysocline | The lysocline is the depth in the ocean dependent upon the carbonate compensation depth (CCD), usually around 5 km, below which the rate of dissolution of calcite increases dramatically because of a pressure effect. While the lysocline is the upper bound of this transition zone of calcite saturation, the CCD is the lower bound of this zone.
CaCO3 content in sediment varies with depth in the ocean, spanned by levels of separation known as the transition zone. At mid-depths in the ocean, sediments are rich in CaCO3, with content values reaching 85–95%. The transition zone then spans several hundred meters, ending in the abyssal depths at 0% concentration. The lysocline is the upper bound of the transition zone, where the CaCO3 content begins to drop noticeably from the mid-depth 85–95% values. The CaCO3 content drops to 0% concentration at the lower bound, known as the calcite compensation depth.
Shallow marine waters are generally supersaturated in calcite, CaCO3, because as marine organisms (which often have shells made of calcite or its polymorph, aragonite) die, they tend to fall downwards without dissolving. As depth and pressure increase within the water column, calcite solubility increases, yet the water above the saturation depth remains supersaturated, allowing for preservation and burial of CaCO3 on the seafloor. Below the saturation depth, however, the seawater is undersaturated, preventing CaCO3 burial on the sea floor as the shells start to dissolve.
The equation Ω = [Ca2+][CO32−]/K′sp expresses the CaCO3 saturation state of seawater. The calcite saturation horizon is where Ω = 1; dissolution proceeds slowly below this depth. The lysocline is the depth at which this dissolution becomes notable, also known as the inflection point in sedimentary CaCO3 content versus water depth.
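A minimal sketch evaluating this saturation state for assumed ion concentrations and apparent solubility products (K′sp grows with pressure, which is what drives Ω below 1 at depth); all numerical values are illustrative only:

```python
def saturation_state(ca, co3, ksp_apparent):
    """Omega = [Ca2+][CO3 2-] / K'sp; Omega > 1 favors preservation, < 1 dissolution."""
    return (ca * co3) / ksp_apparent

# Illustrative values (mol/kg); calcium is roughly conservative in seawater,
# while carbonate ion concentration and K'sp both vary with depth.
ca = 0.0103
for depth_m, co3, ksp in [(500, 2.0e-4, 4.3e-7),
                          (3000, 1.0e-4, 6.5e-7),
                          (5000, 8.0e-5, 1.0e-6)]:
    omega = saturation_state(ca, co3, ksp)
    state = "supersaturated" if omega > 1 else "undersaturated"
    print(f"{depth_m:5d} m: Omega = {omega:.2f} ({state})")
```

With these assumed inputs, Ω drops from well above 1 at mid-depth to below 1 near 5 km, mirroring the transition the lysocline marks.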
Calcite compensation depth
The calcite compensation depth (CCD) occurs at the depth at which the supply rate of calcite to the sediments is balanced by the dissolution flux, that is, the depth at which the CaCO3 content falls to values of 2–10%. Hence, the lysocline and CCD are not equivalent. The lysocline and compensation depth occur at greater depths in the Atlantic (5000–6000 m) than in the Pacific (4000–5000 m), and at greater depths in equatorial regions than in polar regions.
The depth of the CCD varies as a function of the chemical composition of the seawater and its temperature. Specifically, deep waters are undersaturated with calcium carbonate primarily because its solubility increases strongly with increasing pressure and salinity and with decreasing temperature. As the atmospheric concentration of carbon dioxide continues to increase, the CCD can be expected to become shallower as the ocean's acidity rises.
See also
Biological pump
Carbonate compensation depth
Ocean acidification
References
Geochemistry
Oceanography | Lysocline | [
"Physics",
"Chemistry",
"Environmental_science"
] | 634 | [
"Oceanography",
"Hydrology",
"Applied and interdisciplinary physics",
"nan"
] |
183,968 | https://en.wikipedia.org/wiki/Ibis | The ibis () (collective plural ibises; classical plurals ibides and ibes) are a group of long-legged wading birds in the family Threskiornithidae that inhabit wetlands, forests and plains. "Ibis" derives from the Latin and Ancient Greek word for this group of birds. It also occurs in the scientific name of the western cattle egret (Ardea ibis) mistakenly identified in 1757 as being the sacred ibis.
Description
Ibises all have long, downcurved bills, and usually feed as a group, probing mud for food items, typically crustaceans. They are monogamous and highly territorial while nesting and feeding. Most nest in trees, often with spoonbills or herons. All extant species are capable of flight, but two extinct genera were flightless, namely the kiwi-like Apteribis in the Hawaiian Islands, and the peculiar Xenicibis in Jamaica. The word ibis comes from Latin ibis from Greek ἶβις ibis from Egyptian hb, hīb.
Species in taxonomic order
There are 29 extant species and 4 extinct species of ibis.
An extinct species, the Jamaican ibis or clubbed-wing ibis (Xenicibis xympithecus), was uniquely characterized by its club-like wings. Extinct ibis species include the following:
Geronticus perplexus. Discovered in France. It is known only from a piece of distal right humerus, found at Sansan, France, in Middle Miocene rocks. It appears to represent an ancient member of the Geronticus lineage, in line with the theory that most living ibis genera seem to have evolved before 15 million years ago (mya).
Geronticus apelex. Discovered in South Africa.
Geronticus balcanicus. Discovered in Bulgaria.
Theristicus wetmorei. Discovered in Peru.
Eudocimus peruvianus. Discovered in Peru.
Gerandibis pagana. Discovered in France. It is the sole species known for this genus.
Apteribis glenos. Discovered in Hawaii.
Xenicibis xympithecus. Discovered in Jamaica.
Ecology
Habitat
Most ibises are freshwater wetland birds using natural marshes, ponds, lakes, and riversides for foraging. Some ibis species, such as the white-faced ibis and black-headed ibis, benefit from flooded and irrigated agriculture. The Andean ibis is unusual in being found in high-altitude grasslands of South America. The foraging and nesting behaviour and fluctuating numbers of the white ibis match closely with water levels in the Everglades ecosystem, leading to its selection as a potential indicator species for the system. A few ibis species, such as the olive ibis and green ibis, are also found in dense forests. The Llanos grasslands of Venezuela have the highest global ibis diversity, with seven species sharing the marshes and grasslands. Multiple ibis species manage to use the same area by exhibiting differences in the habitats used and the prey eaten. In Indian agricultural landscapes, three ibis species manage to live together by altering the habitats they use seasonally, with the black-headed and glossy ibises preferring shallow wetlands throughout the year, while the endemic red-naped ibis prefers upland areas, thereby entirely avoiding potential competitive interactions.
Breeding
Ibises' breeding habits are very diverse. Many ibises, such as the black-headed ibis, scarlet ibis, glossy ibis, American white ibis and Australian white ibis, breed in large colonies in trees. Nest trees are located either in large wetlands or in agricultural fields, with many species, like the red-naped ibis, breeding inside cities. The Australian white ibis also breeds extensively inside cities and has greatly expanded its population. The white-faced ibis sometimes nests on dry land and on low shrubs in marshes.
In culture
The African sacred ibis was an object of religious veneration in ancient Egypt, particularly associated with the deity Djehuty or otherwise commonly referred to in Greek as Thoth. He is responsible for writing, mathematics, measurement, and time as well as the moon and magic. In artworks of the Late Period of Ancient Egypt, Thoth is popularly depicted as an ibis-headed man in the act of writing. However, mitogenomic diversity in sacred ibis mummies indicates that ancient Egyptians captured the birds from the wild rather than farming them.
At the town of Hermopolis, ibises were reared specifically for sacrificial purposes, and in the Ibis Galleries at Saqqara, archaeologists found the mummies of one and a half million ibises.
According to local legend in the Birecik area, the northern bald ibis was one of the first birds that Noah released from the Ark as a symbol of fertility, and a lingering religious sentiment in Turkey helped the colonies there to survive long after the demise of the species in Europe.
The mascot of the University of Miami is an American white ibis named Sebastian. The ibis was selected as the school mascot because of its legendary bravery during hurricanes. According to legend, the ibis is the last of wildlife to take shelter before a hurricane hits and the first to reappear once the storm has passed.
Harvard University's humor magazine, Harvard Lampoon, uses the ibis as its symbol. A copper statue of an ibis is prominently displayed on the roof of the Harvard Lampoon Building at 44 Bow Street.
The short story "The Scarlet Ibis" by James Hurst uses the red bird as foreshadowing for a character's death and as the primary symbol.
The African sacred ibis is the unit symbol of the Israeli Special Forces unit known as Unit 212 or Maglan (Hebrew מגלן).
According to Josephus, Moses used the ibis to help him defeat the Ethiopians.
The Australian white ibis has become a focus of art, pop culture, and memes since rapidly adapting to city life in recent decades, and has earned the popular nicknames "bin chicken" and "tip turkey". In December 2017, the ibis placed second in Guardian Australia's inaugural Bird of the Year poll, after leading for much of the voting period.
In April 2022, Queensland sports minister Stirling Hinchliffe suggested the ibis as a potential mascot for the 2032 Olympic Games, which are scheduled to be held in Brisbane. Hinchcliffe's suggestion prompted much discussion in the media.
Gallery
Notes
References
External links
Ibis videos – at Internet Bird Collection
Threskiornithidae
Taxa named by Franz Poche
Paraphyletic groups
Pelecaniformes
Thoth | Ibis | [
"Biology"
] | 1,351 | [
"Phylogenetics",
"Paraphyletic groups"
] |
183,970 | https://en.wikipedia.org/wiki/Aragonite | Aragonite is a carbonate mineral and one of the three most common naturally occurring crystal forms of calcium carbonate (), the others being calcite and vaterite. It is formed by biological and physical processes, including precipitation from marine and freshwater environments.
The crystal lattice of aragonite differs from that of calcite, resulting in a different crystal shape, an orthorhombic crystal system with acicular crystals. Repeated twinning results in pseudo-hexagonal forms. Aragonite may be columnar or fibrous, occasionally in branching helictitic forms called flos-ferri ("flowers of iron") from their association with the ores at the Carinthian iron mines.
Occurrence
The type location for aragonite is Molina de Aragón in the Province of Guadalajara in Castilla-La Mancha, Spain, for which it was named in 1797. Aragonite is found in this locality as cyclic twins inside gypsum and marls of the Keuper facies of the Triassic. This type of aragonite deposit is very common in Spain, and there are also some in France.
An aragonite cave, the Ochtinská Aragonite Cave, is situated in Slovakia.
In the US, aragonite in the form of stalactites and "cave flowers" (anthodite) is known from Carlsbad Caverns and other caves. For a few years in the early 1900s, aragonite was mined at Aragonite, Utah (now a ghost town).
Massive deposits of oolitic aragonite sand are found on the seabed in the Bahamas.
Aragonite is the high pressure polymorph of calcium carbonate. As such, it occurs in high pressure metamorphic rocks such as those formed at subduction zones.
Aragonite forms naturally in almost all mollusk shells, and as the calcareous endoskeleton of warm- and cold-water corals (Scleractinia). Several serpulids have aragonitic tubes. Because the mineral deposition in mollusk shells is strongly biologically controlled, some crystal forms are distinctively different from those of inorganic aragonite. In some mollusks, the entire shell is aragonite; in others, aragonite forms only discrete parts of a bimineralic shell (aragonite plus calcite). The nacreous layer of the aragonite fossil shells of some extinct ammonites forms an iridescent material called ammolite.
Aragonite also forms naturally in the endocarp of Celtis occidentalis.
The skeleton of some calcareous sponges is made of aragonite.
Aragonite also forms in the ocean inorganic precipitates called marine cements (in the sediment) or as free crystals (in the water column).
Inorganic precipitation of aragonite in caves can occur in the form of speleothems. Aragonite is common in serpentinites where magnesium-rich pore solutions apparently inhibit calcite growth and promote aragonite precipitation.
Aragonite is metastable at the low pressures near the Earth's surface and is thus commonly replaced by calcite in fossils. Aragonite older than the Carboniferous is essentially unknown.
Aragonite can be synthesized by adding a calcium chloride solution to a sodium carbonate solution at elevated temperatures, or in water-ethanol mixtures at ambient temperatures.
Physical properties
Aragonite is a thermodynamically unstable phase of calcium carbonate at the low pressures typical of the Earth's surface, at any temperature. Aragonite nonetheless frequently forms in near-surface environments at ambient temperatures. The weak Van der Waals forces inside aragonite give an important contribution to both the crystallographic and elastic properties of this mineral. The difference in stability between aragonite and calcite, as measured by the Gibbs free energy of formation, is small, and effects of grain size and impurities can be important. The formation of aragonite at temperatures and pressures where calcite should be the stable polymorph may be an example of Ostwald's step rule, where a less stable phase is the first to form. The presence of magnesium ions may inhibit calcite formation in favor of aragonite. Once formed, aragonite tends to alter to calcite on timescales of 10⁷ to 10⁸ years.
The mineral vaterite, also known as μ-CaCO3, is another phase of calcium carbonate that is metastable at ambient conditions typical of Earth's surface, and decomposes even more readily than aragonite.
Uses
In aquaria, aragonite is considered essential for the replication of reef conditions. Aragonite provides the materials necessary for much sea life and also keeps the pH of the water close to its natural level, to prevent the dissolution of biogenic calcium carbonate.
Aragonite has been successfully tested for the removal of pollutants like zinc, cobalt and lead from contaminated wastewaters.
Gallery
See also
Aragonite sea
Ikaite, CaCO3·6H2O
List of minerals
Monohydrocalcite, CaCO3·H2O
Nacre, otherwise known as "Mother-of-Pearl"
References
External links
The Ochtinska Aragonite Cave in Slovakia
Kosovo Caves Aragonite Formations
Calcium minerals
Carbonate minerals
Cave minerals
Aragonite group
Orthorhombic minerals
Minerals in space group 62
Luminescent minerals
Evaporite
Minerals described in 1797 | Aragonite | [
"Chemistry"
] | 1,073 | [
"Luminescence",
"Luminescent minerals"
] |
183,999 | https://en.wikipedia.org/wiki/Cold%20dark%20matter | In cosmology and physics, cold dark matter (CDM) is a hypothetical type of dark matter. According to the current standard model of cosmology, Lambda-CDM model, approximately 27% of the universe is dark matter and 68% is dark energy, with only a small fraction being the ordinary baryonic matter that composes stars, planets, and living organisms. Cold refers to the fact that the dark matter moves slowly compared to the speed of light, giving it a vanishing equation of state. Dark indicates that it interacts very weakly with ordinary matter and electromagnetic radiation. Proposed candidates for CDM include weakly interacting massive particles, primordial black holes, and axions.
History
The theory of cold dark matter was originally published in 1982 by James Peebles; while the warm dark matter picture was proposed independently at the same time by J. Richard Bond, Alex Szalay, and Michael Turner; and George Blumenthal, H. Pagels, and Joel Primack.
A review article in 1984 by Blumenthal, Sandra Moore Faber, Primack, and Martin Rees developed the details of the theory.
Structure formation
In the cold dark matter theory, structure grows hierarchically, with small objects collapsing under their self-gravity first and merging in a continuous hierarchy to form larger and more massive objects. Predictions of the cold dark matter paradigm are in general agreement with observations of cosmological large-scale structure.
In the hot dark matter paradigm, popular in the early 1980s but less so in the 1990s, structure does not form hierarchically (bottom-up), but forms by fragmentation (top-down), with the largest superclusters forming first in flat pancake-like sheets and subsequently fragmenting into smaller pieces like our galaxy the Milky Way.
Since the late 1980s or 1990s, most cosmologists favor the cold dark matter theory (specifically the modern Lambda-CDM model) as a description of how the universe went from a smooth initial state at early times (as shown by the cosmic microwave background radiation) to the lumpy distribution of galaxies and their clusters we see today—the large-scale structure of the universe. Dwarf galaxies are crucial to this theory, having been created by small-scale density fluctuations in the early universe; they have now become natural building blocks that form larger structures.
Composition
Dark matter is detected through its gravitational interactions with ordinary matter and radiation. As such, it is very difficult to determine what the constituents of cold dark matter are. The candidates fall roughly into three categories:
Axions, very light particles with a specific type of self-interaction that makes them a suitable CDM candidate. Since the late 2010s, axions have become one of the most promising candidates for dark matter. Axions have the theoretical advantage that their existence solves the strong CP problem in quantum chromodynamics, but axion particles have only been theorized and never detected. Axions are an example of a more general category of particle called a WISP (weakly interacting "slender" or "slim" particle), which are the low-mass counterparts of WIMPs.
Massive compact halo objects (MACHOs), large, condensed objects such as black holes, neutron stars, white dwarfs, very faint stars, or non-luminous objects like planets. The search for these objects consists of using gravitational lensing to detect the effects of these objects on background galaxies. Most experts believe that the constraints from those searches rule out MACHOs as a viable dark matter candidate.
Weakly interacting massive particles (WIMPs). There is no currently known particle with the required properties, but many extensions of the standard model of particle physics predict such particles. The search for WIMPs involves attempts at direct detection by highly sensitive detectors, as well as attempts at production of WIMPs by particle accelerators. Historically, WIMPs were regarded as one of the most promising candidates for the composition of dark matter, but since the late 2010s, WIMPs have been supplanted by axions with the non-detection of WIMPs in experiments. The DAMA/NaI experiment and its successor DAMA/LIBRA have claimed to have directly detected dark matter particles passing through the Earth, but many scientists remain skeptical because no results from similar experiments seem compatible with the DAMA results.
Challenges
Several discrepancies between the predictions of cold dark matter in the ΛCDM model and observations of galaxies and their clustering have arisen. Some of these problems have proposed solutions, but it remains unclear whether they can be solved without abandoning the ΛCDM model.
Cuspy halo problem
The density distributions of dark matter halos in cold dark matter simulations (at least those that do not include the impact of baryonic feedback) are much more peaked than what is observed in galaxies by investigating their rotation curves.
Dwarf galaxy problem
Cold dark matter simulations predict large numbers of small dark matter halos, more numerous than the number of small dwarf galaxies that are observed around galaxies like the Milky Way.
Satellite disk problem
Dwarf galaxies around the Milky Way and Andromeda galaxies are observed to be orbiting in thin, planar structures whereas the simulations predict that they should be distributed randomly about their parent galaxies.
High-velocity galaxy problem
Galaxies in the NGC 3109 association are moving away too rapidly to be consistent with expectations in the ΛCDM model. In this framework, NGC 3109 is too massive and distant from the Local Group for it to have been flung out in a three-body interaction involving the Milky Way or Andromeda Galaxy.
Galaxy morphology problem
If galaxies grew hierarchically, then massive galaxies required many mergers. Major mergers inevitably create a classical bulge. On the contrary, about 80% of observed galaxies give evidence of no such bulges, and giant pure-disc galaxies are commonplace. The tension can be quantified by comparing the observed distribution of galaxy shapes today with predictions from high-resolution hydrodynamical cosmological simulations in the ΛCDM framework, revealing a highly significant problem that is unlikely to be solved by improving the resolution of the simulations. The high bulgeless fraction was nearly constant for 8 billion years.
Fast galaxy bar problem
If galaxies were embedded within massive halos of cold dark matter, then the bars that often develop in their central regions would be slowed down by dynamical friction with the halo. This is in serious tension with the fact that observed galaxy bars are typically fast.
Small-scale crisis
Comparison of the model with observations may have some problems on sub-galaxy scales, possibly predicting too many dwarf galaxies and too much dark matter in the innermost regions of galaxies. This problem is called the "small scale crisis". These small scales are harder to resolve in computer simulations, so it is not yet clear whether the problem is the simulations, non-standard properties of dark matter, or a more radical error in the model.
High redshift galaxies
Observations from the James Webb Space Telescope have resulted in various galaxies confirmed by spectroscopy at high redshift, such as JADES-GS-z13-0 at cosmological redshift of 13.2 or JADES-GS-z14-0 at cosmological redshift of 14.32. Such a high rate of large galaxy formation in the early universe appears to contradict the rates of galaxy formation allowed in the existing Lambda CDM model via dark matter halos, as even if galaxy formation were 100% efficient and all mass were allowed to turn into stars in Lambda CDM, it would not be enough to create such large galaxies. However, this depends upon assuming a stellar initial mass function. If early star formation favored massive stars, this could explain the tension.
See also
Fuzzy cold dark matter
Hot dark matter
Meta-cold dark matter
Modified Newtonian dynamics
Self-interacting dark matter
Warm dark matter
References
Further reading
Dark matter
Physical cosmological concepts | Cold dark matter | [
"Physics",
"Astronomy"
] | 1,585 | [
"Physical cosmological concepts",
"Dark matter",
"Unsolved problems in astronomy",
"Concepts in astrophysics",
"Concepts in astronomy",
"Unsolved problems in physics",
"Exotic matter",
"Physics beyond the Standard Model",
"Matter"
] |
184,011 | https://en.wikipedia.org/wiki/Infinitesimal%20strain%20theory | In continuum mechanics, the infinitesimal strain theory is a mathematical approach to the description of the deformation of a solid body in which the displacements of the material particles are assumed to be much smaller (indeed, infinitesimally smaller) than any relevant dimension of the body; so that its geometry and the constitutive properties of the material (such as density and stiffness) at each point of space can be assumed to be unchanged by the deformation.
With this assumption, the equations of continuum mechanics are considerably simplified. This approach may also be called small deformation theory, small displacement theory, or small displacement-gradient theory. It is contrasted with the finite strain theory where the opposite assumption is made.
The infinitesimal strain theory is commonly adopted in civil and mechanical engineering for the stress analysis of structures built from relatively stiff elastic materials like concrete and steel, since a common goal in the design of such structures is to minimize their deformation under typical loads. However, this approximation demands caution in the case of thin flexible bodies, such as rods, plates, and shells which are susceptible to significant rotations, thus making the results unreliable.
Infinitesimal strain tensor
For infinitesimal deformations of a continuum body, in which the displacement gradient tensor (2nd order tensor) is small compared to unity, i.e. ‖∇u‖ ≪ 1,
it is possible to perform a geometric linearization of any one of the finite strain tensors used in finite strain theory, e.g. the Lagrangian finite strain tensor E, and the Eulerian finite strain tensor e. In such a linearization, the non-linear or second-order terms of the finite strain tensor are neglected. Thus we have
E ≈ ½[∇_X u + (∇_X u)ᵀ]
or E_KL ≈ ½(∂u_K/∂X_L + ∂u_L/∂X_K)
and
e ≈ ½[∇_x u + (∇_x u)ᵀ]
or e_rs ≈ ½(∂u_r/∂x_s + ∂u_s/∂x_r)
This linearization implies that the Lagrangian description and the Eulerian description are approximately the same as there is little difference in the material and spatial coordinates of a given material point in the continuum. Therefore, the material displacement gradient tensor components and the spatial displacement gradient tensor components are approximately equal. Thus we have
E ≈ e ≈ ε = ½[∇u + (∇u)ᵀ]
or
ε_ij = ½(∂u_i/∂x_j + ∂u_j/∂x_i)
where ε_ij are the components of the infinitesimal strain tensor ε, also called Cauchy's strain tensor, linear strain tensor, or small strain tensor.
or using different notation:
ε_11 = ∂u_1/∂x_1, ε_22 = ∂u_2/∂x_2, ε_33 = ∂u_3/∂x_3,
ε_12 = ½(∂u_1/∂x_2 + ∂u_2/∂x_1), ε_23 = ½(∂u_2/∂x_3 + ∂u_3/∂x_2), ε_13 = ½(∂u_1/∂x_3 + ∂u_3/∂x_1)
Furthermore, since the deformation gradient can be expressed as F = ∇u + I, where I is the second-order identity tensor, we have
ε = ½(F + Fᵀ) − I
Also, from the general expression for the Lagrangian and Eulerian finite strain tensors we have
E = ½(FᵀF − I) ≈ ε and e = ½[I − (FFᵀ)⁻¹] ≈ ε
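A minimal NumPy sketch of this linearization, comparing the exact Lagrangian finite strain tensor with the infinitesimal strain tensor for an assumed small displacement gradient; the component values are arbitrary:

```python
import numpy as np

# Assumed small displacement gradient (components of grad u); values are arbitrary.
grad_u = np.array([[ 1.0e-3,  4.0e-4, 0.0],
                   [-2.0e-4, -5.0e-4, 0.0],
                   [ 0.0,     0.0,    2.0e-4]])

F = grad_u + np.eye(3)                  # deformation gradient F = grad u + I
E_exact = 0.5 * (F.T @ F - np.eye(3))   # Lagrangian finite strain tensor
eps = 0.5 * (grad_u + grad_u.T)         # infinitesimal (Cauchy) strain tensor

# For small gradients the neglected quadratic terms are second-order small.
print("max |E - eps| =", np.abs(E_exact - eps).max())  # ~5e-7 for these values
```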
Geometric derivation
Consider a two-dimensional deformation of an infinitesimal rectangular material element with dimensions dx by dy (Figure 1), which after deformation, takes the form of a rhombus. From the geometry of Figure 1 we have
length(ab) = √[(dx + (∂u_x/∂x)dx)² + ((∂u_y/∂x)dx)²]
For very small displacement gradients, i.e., ‖∇u‖ ≪ 1, we have
length(ab) ≈ dx + (∂u_x/∂x)dx
The normal strain in the x-direction of the rectangular element is defined by
ε_x = [length(ab) − length(AB)] / length(AB)
and knowing that length(AB) = dx, we have
ε_x = ∂u_x/∂x
Similarly, the normal strain in the y-direction and z-direction becomes
ε_y = ∂u_y/∂y, ε_z = ∂u_z/∂z
The engineering shear strain, or the change in angle between two originally orthogonal material lines, in this case line AC and line AB, is defined as
γ_xy = α + β
From the geometry of Figure 1 we have
tan α = [(∂u_y/∂x)dx] / [dx + (∂u_x/∂x)dx], tan β = [(∂u_x/∂y)dy] / [dy + (∂u_y/∂y)dy]
For small rotations, i.e., when α and β are ≪ 1, we have tan α ≈ α, tan β ≈ β
and, again, for small displacement gradients, we have
α = ∂u_y/∂x, β = ∂u_x/∂y
thus
γ_xy = α + β = ∂u_y/∂x + ∂u_x/∂y
By interchanging x and y and u_x and u_y, it can be shown that γ_xy = γ_yx.
Similarly, for the yz- and xz-planes, we have
γ_yz = ∂u_y/∂z + ∂u_z/∂y, γ_zx = ∂u_z/∂x + ∂u_x/∂z
It can be seen that the tensorial shear strain components of the infinitesimal strain tensor can then be expressed using the engineering strain definition, γ, as
ε_xy = γ_xy/2, ε_yz = γ_yz/2, ε_zx = γ_zx/2
Physical interpretation
From finite strain theory we have
dx² − dX² = dX · 2E · dX
For infinitesimal strains then we have
dx² − dX² = dX · 2ε · dX
Dividing by dX² we have
[(dx − dX)/dX] · [(dx + dX)/dX] = 2 (N · ε · N)
For small deformations we assume that dx ≈ dX, thus the second term of the left hand side becomes: (dx + dX)/dX ≈ 2.
Then we have
(dx − dX)/dX = N · ε · N = ε_N
where N = dX/|dX| is the unit vector in the direction of dX, and the left-hand-side expression is the normal strain ε_N in the direction of N. For the particular case of N in the X_1 direction, i.e., N = e_1, we have
ε_N = e_1 · ε · e_1 = ε_11
Similarly, for N = e_2 and N = e_3 we can find the normal strains ε_22 and ε_33, respectively. Therefore, the diagonal elements of the infinitesimal strain tensor are the normal strains in the coordinate directions.
Strain transformation rules
If we choose an orthonormal coordinate system (e_1, e_2, e_3) we can write the tensor in terms of components with respect to those base vectors as
ε = ε_ij e_i ⊗ e_j
In matrix form,
[ε] = [ε_11 ε_12 ε_13; ε_21 ε_22 ε_23; ε_31 ε_32 ε_33]
We can easily choose to use another orthonormal coordinate system (ê_1, ê_2, ê_3) instead. In that case the components of the tensor are different, say
ε = ε̂_ij ê_i ⊗ ê_j
The components of the strain in the two coordinate systems are related by
ε̂_ij = ℓ_ip ℓ_jq ε_pq
where the Einstein summation convention for repeated indices has been used and ℓ_ij = ê_i · e_j. In matrix form
[ε̂] = [L][ε][L]ᵀ
or
ε̂ = L ε Lᵀ, with L the matrix of direction cosines ℓ_ij
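A minimal NumPy sketch of this transformation rule for a frame rotated about the x_3 axis; the strain components and the 30° angle are arbitrary illustrative choices:

```python
import numpy as np

eps = np.array([[ 1.0e-3, 4.0e-4, 0.0],
                [ 4.0e-4,-5.0e-4, 0.0],
                [ 0.0,    0.0,    2.0e-4]])

theta = np.deg2rad(30.0)  # arbitrary rotation of the frame about the x_3 axis
L = np.array([[ np.cos(theta), np.sin(theta), 0.0],
              [-np.sin(theta), np.cos(theta), 0.0],
              [ 0.0,           0.0,           1.0]])  # direction cosines l_ij = e_i_hat . e_j

eps_hat = L @ eps @ L.T   # hat(eps)_ij = l_ip * l_jq * eps_pq
print(eps_hat)

# The invariants are unchanged by the rotation:
print(np.trace(eps), np.trace(eps_hat))            # first invariant
print(np.linalg.det(eps), np.linalg.det(eps_hat))  # third invariant
```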
Strain invariants
Certain operations on the strain tensor give the same result without regard to which orthonormal coordinate system is used to represent the components of strain. The results of these operations are called strain invariants. The most commonly used strain invariants are
I_1 = tr(ε)
I_2 = ½[(tr ε)² − tr(ε²)]
I_3 = det(ε)
In terms of components
I_1 = ε_11 + ε_22 + ε_33
I_2 = ε_11ε_22 + ε_22ε_33 + ε_33ε_11 − ε_12² − ε_23² − ε_31²
I_3 = ε_11(ε_22ε_33 − ε_23²) − ε_12(ε_12ε_33 − ε_13ε_23) + ε_13(ε_12ε_23 − ε_13ε_22)
Principal strains
It can be shown that it is possible to find a coordinate system (n_1, n_2, n_3) in which the components of the strain tensor are
[ε] = [ε_1 0 0; 0 ε_2 0; 0 0 ε_3]
The components of the strain tensor in the (n_1, n_2, n_3) coordinate system are called the principal strains and the directions n_i are called the directions of principal strain. Since there are no shear strain components in this coordinate system, the principal strains represent the maximum and minimum stretches of an elemental volume.
If we are given the components of the strain tensor in an arbitrary orthonormal coordinate system, we can find the principal strains using an eigenvalue decomposition determined by solving the system of equations
(ε − ε_i I) n_i = 0
This system of equations is equivalent to finding the vector n_i along which the strain tensor becomes a pure stretch with no shear component.
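A minimal NumPy sketch of this eigenvalue problem; numpy.linalg.eigh applies because the strain tensor is symmetric, and the input values are the same illustrative ones used above:

```python
import numpy as np

eps = np.array([[ 1.0e-3, 4.0e-4, 0.0],
                [ 4.0e-4,-5.0e-4, 0.0],
                [ 0.0,    0.0,    2.0e-4]])

# Eigenvalues are the principal strains; eigenvectors (columns) are the
# principal directions, a frame in which the shear components vanish.
principal_strains, principal_dirs = np.linalg.eigh(eps)
print("principal strains:", principal_strains)
print("principal directions (columns):")
print(principal_dirs)
```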
Volumetric strain
The volumetric strain, also called bulk strain, is the relative variation of the volume, as arising from dilation or compression; it is the first strain invariant or trace of the tensor:
δ = ΔV/V_0 = I_1 = ε_11 + ε_22 + ε_33
Actually, if we consider a cube with an edge length a, it is a quasi-cube after the deformation (the variations of the angles do not change the volume) with the dimensions a(1 + ε_11) × a(1 + ε_22) × a(1 + ε_33) and V_0 = a³, thus
ΔV/V_0 = (1 + ε_11)(1 + ε_22)(1 + ε_33) − 1
as we consider small deformations,
ε_11 + ε_22 + ε_33 ≫ ε_11ε_22 + ε_22ε_33 + ε_33ε_11 ≫ ε_11ε_22ε_33
therefore the higher-order products can be neglected and the formula follows.
In case of pure shear, we can see that there is no change of the volume.
Strain deviator tensor
The infinitesimal strain tensor , similarly to the Cauchy stress tensor, can be expressed as the sum of two other tensors:
a mean strain tensor or volumetric strain tensor or spherical strain tensor, , related to dilation or volume change; and
a deviatoric component called the strain deviator tensor, , related to distortion.
where ε_M is the mean strain given by
ε_M = ε_kk/3 = (ε_11 + ε_22 + ε_33)/3 = δ/3
The deviatoric strain tensor can be obtained by subtracting the mean strain tensor from the infinitesimal strain tensor:
ε′_ij = ε_ij − ε_M δ_ij
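A minimal NumPy sketch of this additive split into spherical and deviatoric parts, again with illustrative component values:

```python
import numpy as np

eps = np.array([[ 1.0e-3, 4.0e-4, 0.0],
                [ 4.0e-4,-5.0e-4, 0.0],
                [ 0.0,    0.0,    2.0e-4]])

eps_mean = np.trace(eps) / 3.0        # mean strain
eps_vol = eps_mean * np.eye(3)        # spherical (volumetric) strain tensor
eps_dev = eps - eps_vol               # strain deviator tensor

print("volumetric strain:", np.trace(eps))             # first invariant
print("deviator trace (should be ~0):", np.trace(eps_dev))
```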
Octahedral strains
Let (n_1, n_2, n_3) be the directions of the three principal strains. An octahedral plane is one whose normal makes equal angles with the three principal directions. The engineering shear strain on an octahedral plane is called the octahedral shear strain and is given by
γ_oct = (2/3)√[(ε_1 − ε_2)² + (ε_2 − ε_3)² + (ε_3 − ε_1)²]
where ε_1, ε_2, ε_3 are the principal strains.
The normal strain on an octahedral plane is given by
ε_oct = (ε_1 + ε_2 + ε_3)/3
Equivalent strain
A scalar quantity called the equivalent strain, or the von Mises equivalent strain, is often used to describe the state of strain in solids. Several definitions of equivalent strain can be found in the literature. A definition that is commonly used in the literature on plasticity is
ε_eq = √[(2/3) ε′ : ε′] = √[(2/3) ε′_ij ε′_ij]
where ε′ is the deviatoric part of the strain tensor. This quantity is work conjugate to the equivalent stress defined as
σ_eq = √[(3/2) σ′ : σ′]
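A minimal NumPy sketch of the plasticity definition above; the helper name equivalent_strain and the input values are illustrative:

```python
import numpy as np

def equivalent_strain(eps):
    """Von Mises equivalent strain: eps_eq = sqrt(2/3 * eps_dev : eps_dev)."""
    dev = eps - (np.trace(eps) / 3.0) * np.eye(3)
    return np.sqrt(2.0 / 3.0 * np.tensordot(dev, dev))  # dev:dev double contraction

eps = np.array([[ 1.0e-3, 4.0e-4, 0.0],
                [ 4.0e-4,-5.0e-4, 0.0],
                [ 0.0,    0.0,    2.0e-4]])
print(f"equivalent strain: {equivalent_strain(eps):.3e}")
```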
Compatibility equations
For prescribed strain components the strain tensor equation represents a system of six differential equations for the determination of three displacements components , giving an over-determined system. Thus, a solution does not generally exist for an arbitrary choice of strain components. Therefore, some restrictions, named compatibility equations, are imposed upon the strain components. With the addition of the three compatibility equations the number of independent equations are reduced to three, matching the number of unknown displacement components. These constraints on the strain tensor were discovered by Saint-Venant, and are called the "Saint Venant compatibility equations".
The compatibility functions serve to assure a single-valued continuous displacement function . If the elastic medium is visualised as a set of infinitesimal cubes in the unstrained state, after the medium is strained, an arbitrary strain tensor may not yield a situation in which the distorted cubes still fit together without overlapping.
In index notation, the compatibility equations are expressed as
ε_ij,km + ε_km,ij − ε_ik,jm − ε_jm,ik = 0
In engineering notation, representative equations are
∂²ε_xx/∂y² + ∂²ε_yy/∂x² = 2 ∂²ε_xy/∂x∂y
and
∂²ε_xx/∂y∂z = ∂/∂x (−∂ε_yz/∂x + ∂ε_zx/∂y + ∂ε_xy/∂z)
with the remaining equations obtained by cyclic permutation of x, y and z.
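A minimal SymPy sketch checking the first of these conditions in two dimensions; the strain field is invented so that the check passes:

```python
import sympy as sp

x, y = sp.symbols("x y")

# A hypothetical plane strain field: eps_xx, eps_yy, eps_xy as functions of (x, y)
exx = y**2
eyy = x**2
exy = 2 * x * y

# 2-D Saint-Venant compatibility: d2(exx)/dy2 + d2(eyy)/dx2 = 2 d2(exy)/dxdy
lhs = sp.diff(exx, y, 2) + sp.diff(eyy, x, 2)
rhs = 2 * sp.diff(exy, x, y)
print("compatible:", sp.simplify(lhs - rhs) == 0)  # True for this field
```

Replacing exy with, say, x*y makes the check fail, illustrating that an arbitrary choice of strain components generally admits no single-valued displacement field.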
Special cases
Plane strain
In real engineering components, stress (and strain) are 3-D tensors but in prismatic structures such as a long metal billet, the length of the structure is much greater than the other two dimensions. The strains associated with length, i.e., the normal strain ε_33 and the shear strains ε_13 and ε_23 (if the length is the 3-direction) are constrained by nearby material and are small compared to the cross-sectional strains. Plane strain is then an acceptable approximation. The strain tensor for plane strain is written as:
[ε] = [ε_11 ε_12 0; ε_21 ε_22 0; 0 0 0]
This strain state is called plane strain. The corresponding stress tensor is:
[σ] = [σ_11 σ_12 0; σ_21 σ_22 0; 0 0 σ_33]
in which the non-zero σ_33 is needed to maintain the constraint ε_33 = 0. This stress term can be temporarily removed from the analysis to leave only the in-plane terms, effectively reducing the 3-D problem to a much simpler 2-D problem.
Antiplane strain
Antiplane strain is another special state of strain that can occur in a body, for instance in a region close to a screw dislocation. The strain tensor for antiplane strain is given by
[ε] = [0 0 ε_13; 0 0 ε_23; ε_13 ε_23 0]
Relation to infinitesimal rotation tensor
The infinitesimal strain tensor is defined as
ε = ½[∇u + (∇u)ᵀ]
Therefore the displacement gradient can be expressed as
∇u = ε + ω
where
ω = ½[∇u − (∇u)ᵀ]
The quantity ω is the infinitesimal rotation tensor or infinitesimal angular displacement tensor (related to the infinitesimal rotation matrix). This tensor is skew symmetric. For infinitesimal deformations the scalar components of ω satisfy the condition |ω_ij| ≪ 1. Note that the displacement gradient is small only if both the strain tensor and the rotation tensor are infinitesimal.
The axial vector
A skew symmetric second-order tensor has three independent scalar components. These three components are used to define an axial vector, w, as follows
ω_ij = −e_ijk w_k and w_i = −½ e_ijk ω_jk
where e_ijk is the permutation symbol. In matrix form
[ω] = [0 −w_3 w_2; w_3 0 −w_1; −w_2 w_1 0]
The axial vector is also called the infinitesimal rotation vector. The rotation vector is related to the displacement gradient by the relation
w = ½ ∇ × u
In index notation
w_i = ½ e_ijk u_k,j
If ε = 0 and ω ≠ 0 then the material undergoes an approximate rigid body rotation of magnitude |w| around the vector w.
Relation between the strain tensor and the rotation vector
Given a continuous, single-valued displacement field u and the corresponding infinitesimal strain tensor ε, we have (see Tensor derivative (continuum mechanics))
(∇ × ε)_kl = e_ijk ε_lj,i = ½ e_ijk (u_l,ji + u_j,li)
Since a change in the order of differentiation does not change the result, u_l,ji = u_l,ij. Therefore
e_ijk u_l,ji = 0
Also
½ e_ijk u_j,li = (½ e_ijk u_j,i),l = w_k,l
Hence
∇ × ε = ∇w
Relation between rotation tensor and rotation vector
From an important identity regarding the curl of a tensor we know that for a continuous, single-valued displacement field u,
∇ × (∇u) = 0
Since ∇u = ε + ω, we have
∇ × ω = −∇ × ε = −∇w
Strain tensor in non-Cartesian coordinates
Strain tensor in cylindrical coordinates
In cylindrical polar coordinates (r, θ, z), the displacement vector can be written as
u = u_r e_r + u_θ e_θ + u_z e_z
The components of the strain tensor in a cylindrical coordinate system are given by:
ε_rr = ∂u_r/∂r
ε_θθ = (1/r)(∂u_θ/∂θ + u_r)
ε_zz = ∂u_z/∂z
ε_rθ = ½[(1/r)(∂u_r/∂θ) + ∂u_θ/∂r − u_θ/r]
ε_θz = ½[∂u_θ/∂z + (1/r)(∂u_z/∂θ)]
ε_zr = ½(∂u_z/∂r + ∂u_r/∂z)
Strain tensor in spherical coordinates
In spherical coordinates (r, θ, φ), the displacement vector can be written as
u = u_r e_r + u_θ e_θ + u_φ e_φ
The components of the strain tensor in a spherical coordinate system are given by
ε_rr = ∂u_r/∂r
ε_θθ = (1/r)(∂u_θ/∂θ + u_r)
ε_φφ = [1/(r sin θ)](∂u_φ/∂φ + u_r sin θ + u_θ cos θ)
ε_rθ = ½[(1/r)(∂u_r/∂θ) + ∂u_θ/∂r − u_θ/r]
ε_θφ = ½{[1/(r sin θ)](∂u_θ/∂φ) + (1/r)(∂u_φ/∂θ) − (u_φ cot θ)/r}
ε_φr = ½{[1/(r sin θ)](∂u_r/∂φ) + ∂u_φ/∂r − u_φ/r}
See also
Deformation (mechanics)
Compatibility (mechanics)
Stress tensor
Strain gauge
Elasticity tensor
Stress–strain curve
Hooke's law
Poisson's ratio
Finite strain theory
Strain rate
Plane stress
Digital image correlation
References
External links
Physical quantities
Elasticity (physics)
Materials science
Solid mechanics
Mechanics | Infinitesimal strain theory | [
"Physics",
"Materials_science",
"Mathematics",
"Engineering"
] | 2,302 | [
"Solid mechanics",
"Physical phenomena",
"Applied and interdisciplinary physics",
"Physical quantities",
"Elasticity (physics)",
"Deformation (mechanics)",
"Quantity",
"Materials science",
"Mechanics",
"nan",
"Mechanical engineering",
"Physical properties"
] |
184,072 | https://en.wikipedia.org/wiki/Forced%20perspective | Forced perspective is a technique that employs optical illusion to make an object appear farther away, closer, larger or smaller than it actually is. It manipulates human visual perception through the use of scaled objects and the correlation between them and the vantage point of the spectator or camera. It has uses in photography, filmmaking and architecture.
In filmmaking
Forced perspective had been a feature of German silent films, and Citizen Kane revived the practice. Movies, especially B-movies in the 1950s and 1960s, were produced on limited budgets and often featured forced perspective shots.
Forced perspective can be made more believable when environmental conditions obscure the difference in perspective. For example, the final scene of the famous movie Casablanca takes place at an airport in the middle of a storm, although the entire scene was shot in a studio. This was accomplished by using a painted backdrop of an aircraft, which was "serviced" by dwarfs standing next to the backdrop. A downpour (created in the studio) draws much of the viewer's attention away from the backdrop and extras, making the simulated perspective less noticeable.
Role of light
Early instances of forced perspective used in low-budget motion pictures showed objects that were clearly different from their surroundings, often blurred or at a different light level. The principal cause of this was geometric. Light from a point source travels in a spherical wave, decreasing in intensity (or illuminance) as the inverse square of the distance travelled. This means that a light source must be four times as bright to produce the same illuminance at an object twice as far away. Thus to create the illusion of a distant object being at the same distance as a near object and scaled accordingly, much more light is required. When shooting with forced perspective, it is important to have the aperture stopped down sufficiently to achieve proper depth of field (DOF), so that the foreground object and background are both sharp. Since miniature models would need to be subjected to far greater lighting than the main focus of the camera, the area of action, it is important to ensure that these can withstand the significant heat generated by the incandescent light sources typically used in film and TV production.
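A minimal sketch of this inverse-square bookkeeping, computing the factor by which a point source's intensity must be scaled so that a farther object receives the same illuminance; the distances are illustrative:

```python
def intensity_ratio(d_near, d_far):
    """Illuminance from a point source falls off as 1/d^2, so matching the
    illuminance at d_far requires (d_far / d_near)^2 times the intensity."""
    return (d_far / d_near) ** 2

# e.g. a forced-perspective miniature placed twice, then four times, as far
# from its lamp as the main subject is from its own:
print(intensity_ratio(1.0, 2.0))  # 4.0 -> four times the light
print(intensity_ratio(1.0, 4.0))  # 16.0
```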
In motion
Peter Jackson's film adaptations of The Lord of the Rings make extended use of forced perspective. Characters apparently standing next to each other would be displaced by several feet in depth from the camera. This, in a still shot, makes some characters (Dwarves and Hobbits) appear much smaller than others. If the camera's point of view were moved, then parallax would reveal the true relative positions of the characters in space. Even if the camera is just rotated, its point of view may move accidentally if the camera is not rotated about the correct point. This point of view is called the 'zero-parallax-point' (or front nodal point), and is approximated in practice as the centre of the entrance pupil.
An extensively used technique in The Lord of the Rings: The Fellowship of the Ring was an enhancement of this principle, which could be used in moving shots. Portions of sets were mounted on movable platforms which would move precisely according to the movement of the camera, so that the optical illusion would be preserved at all times for the duration of the shot. The same techniques were used in the Harry Potter movies to make the character Rubeus Hagrid look like a giant. Props around Harry and his friends are of normal size, while seemingly identical props placed around Hagrid are in fact smaller.
Comic effects
As with many film genres and effects, forced perspective can be used to visual-comedy effect. Typically, when an object or character is portrayed in a scene, its size is defined by its surroundings. A character then interacts with the object or character, in the process showing that the viewer has been fooled and there is forced perspective in use.
The 1930 Laurel and Hardy movie Brats used forced perspective to depict Stan and Ollie simultaneously as adults and as their own sons. An example used for comic effect can be found in the slapstick comedy Top Secret! in a scene which appears to begin as a close-up of a ringing phone with the characters in the distance. However, when the character walks up to the phone (towards the camera) and picks it up, it becomes apparent that the phone is extremely oversized instead of being close to the camera. Another scene in the same movie begins with a close-up of a wristwatch. The next cut shows that the character actually has a gargantuan wristwatch.
The same technique is also used in the Dennis Waterman sketch in the British BBC sketch show Little Britain. In the television version, larger than life props are used to make the caricatured Waterman look just three feet tall or less. In History of the World, Part I, while escaping the French peasants, Mel Brooks' character, Jacques, who is doubling for King Louis, runs down a hall of the palace, which turns into a ramp, showing the smaller forced perspective door at the end. As he backs down into the normal part of the room, he mutters, "Who designed this place?"
One of the recurring The Kids in the Hall sketches featured Mr. Tyzik, "The Headcrusher", who used forced perspective (from his own point of view) to "crush" other people's heads between his fingers. This is also done by the character Sheldon Cooper in the TV show The Big Bang Theory to his friends when they displease him. In the making of Season 5 of Red vs. Blue, the creators used forced perspective to make the character of Tucker's baby, Junior, look small. In the game, the alien character used as Junior is the same height as other characters. The short-lived 2013 Internet meme "baby mugging" used forced perspective to make babies look like they were inside items like mugs and teacups.
In architecture
In architecture, a structure can be made to seem larger, taller, farther away or otherwise by adjusting the scale of objects in relation to the spectator, increasing or decreasing perceived depth. When forced perspective is supposed to make an object appear farther away, the following method can be used: by constantly decreasing the scale of objects from expectancy and convention toward the farthest point from the spectator, an illusion is created that the scale of said objects is decreasing due to their distant location. In contrast, the opposite technique was sometimes used in classical garden designs and other follies to shorten the perceived distances of points of interest along a path.
The Statue of Liberty is built with a slight forced perspective so that it appears more correctly proportioned when viewed from its base. When the statue was designed in the late 19th century (before easy air flight), there were few other angles from which to view the statue. This caused a difficulty for special effects technicians working on the movie Ghostbusters II, who had to back off on the amount of forced perspective used when replicating the statue for the movie so that their model (which was photographed head-on) would not look top-heavy. This effect can also be seen in Michelangelo's statue of David.
Through depth perception
The technique takes advantage of the visual cues humans use to perceive depth such as angular size, aerial perspective, shading, and relative size. In film, photography and art, perceived object distance is manipulated by altering fundamental monocular cues used to discern the depth of an object in the scene such as aerial perspective, blurring, relative size and lighting. Using these monocular cues in concert with angular size, the eyes can perceive the distance of an object. Artists are able to freely move the visual plane of objects by obscuring these cues to their advantage.
Increasing an object's distance from the audience makes it appear smaller; its apparent size decreases as its distance from the audience increases. This phenomenon is the basis for the manipulation of angular and apparent size.
A person perceives the size of an object based on the size of the object's image on the retina. This depends solely on the angle created by the rays coming from the topmost and bottommost part of the object that pass through the center of the lens of the eye. The larger the angle an object subtends, the larger the apparent size of the object. The subtended angle increases as the object moves closer to the lens. Two objects with different actual size have the same apparent size when they subtend the same angle. Similarly, two objects of the same actual size can have drastically varying apparent size when they are moved to different distances from the lens.
Calculating angular size
The formula for calculating angular size is as follows:

θ = 2 arctan(h / (2D)),

in which θ is the subtended angle, h is the actual size of the object and D is the distance from the lens to the object.
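A minimal numerical sketch in Python (the function name and the sample sizes and distances are illustrative assumptions, not from the source), showing that a small nearby object can subtend the same angle, and hence the same apparent size, as a large distant one:

```python
import math

def subtended_angle(h, d):
    """Angle in degrees subtended at the lens by an object of actual
    size h at distance d, via theta = 2 * arctan(h / (2 * d))."""
    return math.degrees(2 * math.atan(h / (2 * d)))

# A 0.5 m model placed 2 m from the camera...
near = subtended_angle(0.5, 2.0)
# ...subtends the same angle as a 50 m building 200 m away.
far = subtended_angle(50.0, 200.0)
print(near, far)  # both ~14.25 degrees, so the same apparent size
```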
Techniques employed
Solely manipulating angular size by moving objects closer and farther away cannot fully trick the eye. Objects that are farther away from the eye have a lower luminance contrast due to atmospheric scattering of rays, and fewer rays of light from them reach the eye. Using the monocular cue of aerial perspective, the eye uses the relative luminance of objects in a scene to discern relative distance. Filmmakers and photographers combat this cue by manually increasing the luminance of objects farther away to equal that of objects in the desired plane. This effect is achieved by shining more light on the more distant object. Because illuminance from a point source falls off as the inverse square of distance (1/d², where d is the distance from the source), artists can calculate the amount of light needed to counter the cue of aerial perspective.
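A minimal sketch of that calculation, assuming an idealized point light source obeying the inverse-square law (the function name and the numbers are illustrative, not from the source):

```python
def relative_illuminance(power, distance):
    """Illuminance from a point source falls off as power / distance**2."""
    return power / distance ** 2

# A prop 8 m from the light needs (8/2)**2 = 16 times the lighting power
# to match the brightness of an identical prop 2 m away.
boost = (8.0 / 2.0) ** 2
assert relative_illuminance(boost, 8.0) == relative_illuminance(1.0, 2.0)
print(boost)  # 16.0
```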
Similarly, blurring can create the opposite effect by giving the impression of depth. Selectively blurring an object moves it out of its original visual plane without having to manually move the object.
A related perceptual principle that film can exploit is that of Gestalt psychology, which holds that people tend to perceive the whole of an object rather than the sum of its individual parts.
Another monocular cue of depth perception is that of lighting and shading. Shading in a scene or on an object allows the audience to locate the light source relative to the object. Making two objects at different distances have the same shading gives the impression that they are in similar positions relative to the light source; therefore, they appear closer to each other than they actually are.
Artists may also employ the simpler technique of manipulating relative size. Once the audience becomes acquainted with the size of an object in proportion to the rest of the objects in a scene, the photographer or filmmaker can replace the object with a larger or smaller replica to change another part of the scene's apparent size. This is done frequently in movies. For example, to aid in the appearance of a person as a giant next to a "regular sized" person, a filmmaker might have a shot of two identical glasses together, then follow with the person who is supposed to play the giant holding a much smaller replica of the glass and the person who is playing the regular-sized person holding a much larger replica. Because the audience sees that the glasses are the same size in the original shot, the difference in relation to the two characters allows the audience to perceive the characters as different sizes based on their relative size to the glasses they hold.
A painter can give the illusion of distance by adding blue or red tinting to the color of the object he is painting. This monocular cue takes advantage of the trend for the color of distant objects to shift towards the blue end of the spectrum, while the colors of closer objects shift toward the red end of the spectrum. The optical phenomenon is known as chromostereopsis.
Examples
In film
Forced perspective has been employed to realize characters in film. One notable example is Rubeus Hagrid, the half-giant in the Harry Potter series.
The technique is used in the Lord of the Rings series for depicting the apparent heights of the hobbit characters, such as Frodo, who are supposed to be around half the height or less of the humans and wizards, such as Gandalf. In reality, the difference in height between the respective actors playing those roles is only about five inches (13 cm): Elijah Wood, as the hobbit Frodo, is 5 ft 6 in (1.68 m) tall, and Ian McKellen, as the wizard Gandalf, is 5 ft 11 in (1.80 m). The use of camera angles and trick scenery and props creates the illusion of a much greater difference in size and height.
Numerous camera angle tricks are played in the comedy film Elf (2003) to make the elf characters in the movie appear smaller than the human characters.
In art
In his painting entitled Still life with a curtain, Paul Cézanne creates the illusion of depth by using brighter colors on objects closer to the viewer and dimmer colors and shading to distance the "light source" from objects that he wanted to appear farther away. His shading technique allows the audience to discern the distance between objects due to their relative distances from a stationary light source that illuminates the scene. Furthermore, he uses a blue tint on objects that should be farther away and redder tint to objects in the foreground.
Full size dioramas
Modern museum dioramas may be seen in most major natural history museums. Typically, these displays use a tilted plane to represent what would otherwise be a level surface, incorporate a painted background of distant objects, and often employ false perspective: the scale of objects placed on the plane is carefully modified to reinforce the illusion through depth perception, whereby objects of identical real-world size appear smaller the farther they are placed from the observer. Often the distant painted background or sky will be painted upon a continuous curved surface so that the viewer is not distracted by corners, seams, or edges. All of these techniques are means of presenting a realistic view of a large scene in a compact space. A photograph or single-eye view of such a diorama can be especially convincing since in this case there is no distraction by the binocular perception of depth.
Carl Akeley, a naturalist, sculptor, and taxidermist, is credited with creating the first ever habitat diorama in the year 1889. Akeley's diorama featured taxidermied beavers in a three-dimensional habitat with a realistic, painted background. With the support of curator Frank M. Chapman, Akeley designed the popular habitat dioramas featured at the American Museum of Natural History. Combining art with science, these exhibitions were intended to educate the public about the growing need for habitat conservation. The modern AMNH Exhibitions Lab is charged with the creation of all dioramas and otherwise immersive environments in the museum.
Theme parks
Forced perspective is extensively employed at theme parks and other such architecture as found in Disneyland and Las Vegas, often to make structures seem larger than they are in reality where physically larger structures would not be feasible or desirable, or to otherwise provide an optical illusion for entertainment value. Most notably, it is used by Walt Disney Imagineering in the Disney Theme Parks. Some notable examples of forced perspective in the parks, used to make the objects bigger, are the castles (Sleeping Beauty, Cinderella, Belle, Magical Dreams, and Enchanted Storybook). One of the most notable examples of forced perspective being used to make the object appear smaller is The American Adventure pavilion in Epcot.
See also
Ames room
Anamorphosis
Depth perception
Perspective distortion (photography)
Trompe-l'œil
Vista paradox
References
External links
Special effects
Photographic techniques
Architectural communication
Optical illusions | Forced perspective | [
"Physics",
"Engineering"
] | 3,148 | [
"Physical phenomena",
"Optical illusions",
"Architectural communication",
"Optical phenomena",
"Architecture"
] |
184,082 | https://en.wikipedia.org/wiki/K%C5%91nig%27s%20theorem%20%28set%20theory%29 | In set theory, Kőnig's theorem states that if the axiom of choice holds, I is a set, mi and ni are cardinal numbers for every i in I, and mi < ni for every i in I, then

∑i∈I mi < ∏i∈I ni.
The sum here is the cardinality of the disjoint union of the sets mi, and the product is the cardinality of the Cartesian product. However, without the use of the axiom of choice, the sum and the product cannot be defined as cardinal numbers, and the meaning of the inequality sign would need to be clarified.
Kőnig's theorem was introduced by Julius Kőnig in the slightly weaker form that the sum of a strictly increasing sequence of nonzero cardinal numbers is less than their product.
Details
The precise statement of the result: if I is a set, Ai and Bi are sets for every i in I, and Ai < Bi for every i in I, then

|⋃i∈I Ai| < |∏i∈I Bi|,

where < means strictly less than in cardinality, i.e. there is an injective function from Ai to Bi, but not one going the other way. The union involved need not be disjoint (a non-disjoint union can't be any bigger than the disjoint version, also assuming the axiom of choice). In this formulation, Kőnig's theorem is equivalent to the axiom of choice.
(Of course, Kőnig's theorem is trivial if the cardinal numbers mi and ni are finite and the index set I is finite. If I is empty, then the left sum is the empty sum and therefore 0, while the right product is the empty product and therefore 1.)
Kőnig's theorem is remarkable because of the strict inequality in the conclusion. There are many easy rules for the arithmetic of infinite sums and products of cardinals in which one can only conclude a weak inequality ≤, for example: if mi < ni for all i in I, then one can only conclude

∑i∈I mi ≤ ∑i∈I ni,

since, for example, setting mi = 1 and ni = 2, where the index set I is the natural numbers, yields the sum ℵ0 for both sides, and we have an equality.
Corollaries of Kőnig's theorem
If κ is a cardinal, then κ < 2^κ.

If we take mi = 1, and ni = 2 for each i in κ, then the left side of the above inequality is just κ, while the right side is 2^κ, the cardinality of functions from κ to {0, 1}, that is, the cardinality of the power set of κ. Thus, Kőnig's theorem gives us an alternate proof of Cantor's theorem. (Historically of course Cantor's theorem was proved much earlier.)
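In displayed notation, the derivation just described amounts to the following one-line computation (our transcription, consistent with the conventions above):

```latex
\kappa \;=\; \sum_{i \in \kappa} 1 \;<\; \prod_{i \in \kappa} 2 \;=\; 2^{\kappa}.
```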
Axiom of choice
One way of stating the axiom of choice is "an arbitrary Cartesian product of non-empty sets is non-empty". Let Bi be a non-empty set for each i in I. Let Ai = {} for each i in I. Thus by Kőnig's theorem, we have:

If {} < Bi for every i in I, then ∑i∈I |{}| < ∏i∈I |Bi|, that is, 0 < |∏i∈I Bi|.
That is, the Cartesian product of the given non-empty sets Bi has a larger cardinality than the sum of empty sets. Thus it is non-empty, which is just what the axiom of choice states. Since the axiom of choice follows from Kőnig's theorem, we will use the axiom of choice freely and implicitly when discussing consequences of the theorem.
Kőnig's theorem and cofinality
Kőnig's theorem has also important consequences for cofinality of cardinal numbers.
If κ is an infinite cardinal, then κ < κ^cf(κ).
If κ is regular, then this follows from Cantor's theorem. If κ is singular, then κ is a limit cardinal. Choose a strictly increasing cf(κ)-sequence of cardinals approaching κ. Let λ be their sum. Each summand is less than κ, so, by Kőnig's theorem, λ is less than the product of cf(κ) copies of κ. We finish the proof by showing that λ = κ. Since each summand is a lower bound for λ, λ ≥ κ. For the other inequality, λ ≤ cf(κ)·κ = κ.
According to Easton's theorem, the next consequence of Kőnig's theorem is the only nontrivial constraint on the continuum function for regular cardinals.
If κ ≥ 2 and λ is an infinite cardinal, then λ < cf(κ^λ).

Let μ = κ^λ. Suppose that, contrary to this corollary, cf(μ) ≤ λ. Then using the previous corollary, μ < μ^cf(μ) ≤ μ^λ = (κ^λ)^λ = κ^(λ·λ) = κ^λ = μ, a contradiction.
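A standard concrete instance, not spelled out in the text but an immediate application of this corollary with κ = 2 and λ = ℵ0:

```latex
\aleph_0 \;<\; \operatorname{cf}\bigl(2^{\aleph_0}\bigr),
\qquad \text{so in particular } 2^{\aleph_0} \neq \aleph_\omega,
\text{ since } \operatorname{cf}(\aleph_\omega) = \aleph_0.
```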
A proof of Kőnig's theorem
Assuming Zermelo–Fraenkel set theory, including especially the axiom of choice, we can prove the theorem. Remember that we are given Ai < Bi for every i in I, and we want to show

∑i∈I |Ai| < ∏i∈I |Bi|.
The axiom of choice implies that the condition A < B is equivalent to the condition that there is no function from A onto B and B is nonempty.
So we are given that there is no function from Ai onto Bi ≠ {}, and we have to show that any function f from the disjoint union of the As to the product of the Bs is not surjective and that the product is nonempty. That the product is nonempty follows immediately from the axiom of choice and the fact that the factors are nonempty. For each i choose a bi in Bi not in the image of Ai under the composition of f with the projection to Bi. Then the product of the elements bi is not in the image of f, so f does not map the disjoint union of the As onto the product of the Bs.
Notes
References
Articles containing proofs
Axiom of choice
Cardinal numbers
Mathematical logic
Theorems in the foundations of mathematics | Kőnig's theorem (set theory) | [
"Mathematics"
] | 1,120 | [
"Mathematical theorems",
"Cardinal numbers",
"Foundations of mathematics",
"Mathematical logic",
"Mathematical objects",
"Infinity",
"Mathematical axioms",
"Numbers",
"Axiom of choice",
"Axioms of set theory",
"Articles containing proofs",
"Mathematical problems",
"Theorems in the foundation... |
184,101 | https://en.wikipedia.org/wiki/Singing%20sand | Singing sand, also called whistling sand, barking sand, booming sand or singing dune, is sand that produces sound. The sound emission may be caused by wind passing over dunes or by walking on the sand.
Certain conditions have to come together to create singing sand:
The sand grains have to be round and between 0.1 and 0.5 mm in diameter.
The sand has to contain silica.
The sand needs to be at a certain humidity.
The most common frequency emitted seems to be close to 450 Hz.
There are various theories about the singing sand mechanism. It has been proposed that the sound frequency is controlled by the shear rate. Others have suggested that the frequency of vibration is related to the thickness of the dry surface layer of sand. The sound waves bounce back and forth between the surface of the dune and the surface of the moist layer, creating a resonance that increases the sound's volume. The noise may be generated by friction between the grains or by the compression of air between them.
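To make the resonance idea concrete, here is a back-of-the-envelope sketch in Python. The half-wave resonator model, the sound speed, and the layer depth are all our assumptions for illustration; as the text notes, the mechanism is still debated:

```python
def resonant_frequency(sound_speed, layer_depth):
    """Fundamental of a half-wave resonator formed between the dune
    surface and the moist layer at depth layer_depth: f = c / (2 * d)."""
    return sound_speed / (2 * layer_depth)

# With an assumed effective sound speed of ~230 m/s in loose dry sand and
# a dry surface layer ~0.26 m deep, the estimate lands near the commonly
# reported ~450 Hz tone.
print(round(resonant_frequency(230.0, 0.26)))  # 442 Hz
```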
Other sounds that can be emitted by sand have been described as "roaring" or "booming".
In dunes
Singing sand dunes, an example of the phenomenon of singing sand, produce a sound described as roaring, booming, squeaking, or the "Song of Dunes". This is a natural sound phenomenon of up to 105 decibels, lasting as long as several minutes, that occurs in about 35 desert locations around the world. The sound is similar to a loud low-pitch rumble. It emanates from crescent-shaped dunes, or barchans. The sound emission accompanies a slumping or avalanching movement of sand, usually triggered by wind passing over the dune or by someone walking near the crest.
Examples of singing sand dunes include California's Kelso Dunes and Eureka Dunes; AuTrain Beach in Northern Michigan; sugar sand beaches and Warren Dunes in southwestern Michigan; Sand Mountain in Nevada; the Booming Dunes in the Namib Desert, Africa; Porth Oer (also known as Whistling Sands) near Aberdaron in Wales; Indiana Dunes in Indiana; Barking Sands Beach in Hawaiʻi; Ming Sha Shan in Dunhuang, China; Kotogahama Beach in Odashi, Japan; Singing Beach in Manchester-by-the-Sea, Massachusetts; near Mesaieed in Qatar; and Gebel Naqous, near el-Tor, South Sinai, Egypt.
The song "The Singing Sands of Alamosa" on Bing Crosby's 1947 album Drifting and Dreaming was inspired by the sand dunes near Alamosa, Colorado, now Great Sand Dunes National Park.
On the beach
On some beaches around the world, dry sand makes a singing, squeaking, whistling, or screaming sound if a person scuffs or shuffles their feet with sufficient force. The phenomenon is not completely understood scientifically, but it has been found that quartz sand does this if the grains are highly spherical. It is believed by some that the sand grains must be of similar size, so the sand must be well sorted by the actions of wind and waves, and that the grains should be close to spherical and have surfaces free of dust, pollution and organic matter. The "singing" sound is then believed to be produced by shear, as each layer of sand grains slides over the layer beneath it. The similarity in size, the uniformity, and the cleanness means that grains move up and down in unison over the layer of grains below them. Even small amounts of pollution on the sand grains reduce the friction enough to silence the sand.
Others believe that the sound is produced by the friction of grain against grain that have been coated with dried salt, in a way that is analogous to the way that the rosin on the bow produces sounds from a violin string. It has also been speculated that thin layers of gas trapped and released between the grains act as "percussive cushions" capable of vibration, and so produce the tones heard.
Not all sands sing, whistle or bark alike. The sounds heard have a wide frequency range that can be different for each patch of sand. Fine sands, where individual grains are barely visible to the naked eye, produce only a poor, weak sounding bark. Medium-sized grains can emit a range of sounds, from a faint squeak or a high-pitched sound, to the best and loudest barks when scuffed enthusiastically.
Water also influences the effect. Wet sands are usually silent because the grains stick together instead of sliding past each other, but small amounts of water can actually raise the pitch of the sounds produced. The most common part of the beach on which to hear singing sand is the dry upper beach above the normal high tide line, but singing has been reported on the lower beach near the low tide line as well.
Singing sand has been reported on 33 beaches in the British Isles, including in the north of Wales and on the little island of Eigg in the Scottish Hebrides. It has also been reported at a number of beaches along North America's Atlantic coast. Singing sands can be found at Souris, on the eastern tip of Prince Edward Island, at the Singing Sands beach in Basin Head Provincial Park; on Singing Beach in Manchester-by-the-Sea, Massachusetts, as well as in the fresh waters of Lake Superior and Lake Michigan and in other places.
See also
References
Literature
External links
Symphony of the Sands - The National newspaper in Abu Dhabi
Video of Singing Sand in Liwa, United Arab Emirates
Booming sand
Enigma of the Singing Dunes article on physics.org
Location information for booming sand dunes around the World
Singing Sand Dunes in Kazakhstan, also called Singing Barkhan
Explanation, Video and Audio clips
Video clips of Singing Sand Dunes
Singing and Booming Sand Dunes of California and Nevada
Dunes
Physical phenomena
Sand
"Physics"
] | 1,176 | [
"Physical phenomena"
] |
184,120 | https://en.wikipedia.org/wiki/Time%20hierarchy%20theorem | In computational complexity theory, the time hierarchy theorems are important statements about time-bounded computation on Turing machines. Informally, these theorems say that given more time, a Turing machine can solve more problems. For example, there are problems that can be solved with n² time but not n time, where n is the input length.
The time hierarchy theorem for deterministic multi-tape Turing machines was first proven by Richard E. Stearns and Juris Hartmanis in 1965. It was improved a year later when F. C. Hennie and Richard E. Stearns improved the efficiency of the Universal Turing machine. Consequent to the theorem, for every deterministic time-bounded complexity class, there is a strictly larger time-bounded complexity class, and so the time-bounded hierarchy of complexity classes does not completely collapse. More precisely, the time hierarchy theorem for deterministic Turing machines states that for all time-constructible functions f(n),
DTIME(o(f(n))) ⊊ DTIME(f(n) log f(n)),
where DTIME(f(n)) denotes the complexity class of decision problems solvable in time O(f(n)). The left-hand class involves little o notation, referring to the set of decision problems solvable in asymptotically less than f(n) time.
In particular, this shows that DTIME(f(n)) ⊊ DTIME(g(n)) whenever f(n) log f(n) = o(g(n)), so we have an infinite time hierarchy.
The time hierarchy theorem for nondeterministic Turing machines was originally proven by Stephen Cook in 1972. It was improved to its current form via a complex proof by Joel Seiferas, Michael Fischer, and Albert Meyer in 1978. Finally in 1983, Stanislav Žák achieved the same result with the simple proof taught today. The time hierarchy theorem for nondeterministic Turing machines states that if g(n) is a time-constructible function, and f(n+1) = o(g(n)), then
NTIME(f(n)) ⊊ NTIME(g(n)).
The analogous theorems for space are the space hierarchy theorems. A similar theorem is not known for time-bounded probabilistic complexity classes, unless the class also has one bit of advice.
Background
Both theorems use the notion of a time-constructible function. A function f : ℕ → ℕ is time-constructible if there exists a deterministic Turing machine such that for every n ∈ ℕ, if the machine is started with an input of n ones, it will halt after precisely f(n) steps. All polynomials with non-negative integer coefficients are time-constructible, as are exponential functions such as 2^n.
Proof overview
We need to prove that some time class TIME(g(n)) is strictly larger than some time class TIME(f(n)). We do this by using diagonalization to construct a problem that cannot be solved in TIME(f(n)). We then show that the problem is in TIME(g(n)), using a simulator machine.
Deterministic time hierarchy theorem
Statement
Time Hierarchy Theorem. If f(n) is a time-constructible function, then there exists a decision problem which cannot be solved in worst-case deterministic time o(f(n)) but can be solved in worst-case deterministic time O(f(n) log f(n)). Thus

DTIME(o(f(n))) ⊊ DTIME(f(n) log f(n)).
Note 1. f(n) is at least n, since smaller functions are never time-constructible.
Example. There are problems solvable in time n log²n but not in time n. This follows by setting f(n) = n log n, since n is in o(n log n).
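Spelled out, our instantiation with f(n) = n log n (using log(n log n) = Θ(log n), and the fact that deterministic time classes absorb constant factors):

```latex
\mathrm{DTIME}(n) \subseteq \mathrm{DTIME}\bigl(o(n\log n)\bigr)
\subsetneq \mathrm{DTIME}\bigl(n\log n \cdot \log(n\log n)\bigr)
= \mathrm{DTIME}\bigl(n\log^{2} n\bigr).
```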
Proof
We include here a proof of a weaker result, namely that DTIME(f(n)) is a strict subset of DTIME(f(2n + 1)³), as it is simpler but illustrates the proof idea. See the bottom of this section for information on how to extend the proof to f(n) log f(n).
To prove this, we first define the language Hf of the encodings of machines and their inputs which cause them to halt within f:

Hf = { ([M], x) : M accepts x in at most f(|x|) steps }.

Notice here that this is a time-class: it is the set of pairs of machines and inputs to those machines (M, x) such that the machine M accepts within f(|x|) steps.
Here, M is a deterministic Turing machine, and x is its input (the initial contents of its tape). [M] denotes an input that encodes the Turing machine M. Let m be the size of the tuple ([M], x).
We know that we can decide membership of Hf by way of a deterministic Turing machine R, that simulates M for f(|x|) steps by first calculating f(|x|) and then writing out a row of 0s of that length, and then using this row of 0s as a "clock" or "counter" to simulate M for at most that many steps. At each step, the simulating machine needs to look through the definition of M to decide what the next action would be. It is safe to say that this takes at most f(m)³ operations (since it is known that a simulation of a machine of time complexity T(n), for T(n) ≥ n, can be achieved in time O(T(n) log T(n)) on a multitape machine, with a constant factor depending on |M|, the length of the encoding of M), so we have that:

Hf ∈ TIME(f(m)³).
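A minimal sketch of the clock idea in Python. The single-step function step_fn, standing in for "looking through the definition of M", is hypothetical; in the real construction each tick costs extra time for the table lookup, which is where the f(m)³ (or f(m) log f(m)) bound comes from:

```python
def clocked_run(step_fn, config, f, x):
    """Simulate a machine for at most f(len(x)) steps, using a row of 0s
    as the unary 'clock' described above.

    step_fn(config) -> (next_config, halted, accepted) is a hypothetical
    single-step transition function for the simulated machine M."""
    clock = [0] * f(len(x))          # the time-constructible step budget
    while clock:
        clock.pop()                  # consume one tick per simulated step
        config, halted, accepted = step_fn(config)
        if halted:
            return accepted
    return False                     # no acceptance within f(|x|) steps
```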
The rest of the proof will show that

Hf ∉ TIME(f(⌊m/2⌋)),

so that if we substitute 2n + 1 for m, we get the desired result. Let us assume that Hf is in this time complexity class, and we will reach a contradiction.
If Hf is in this time complexity class, then there exists a machine K which, given some machine description [M] and input x, decides whether the tuple ([M], x) is in Hf within f(⌊m/2⌋) steps.
We use this K to construct another machine, N, which takes a machine description [M] and runs K on the tuple ([M], [M]), i.e. M is simulated on its own code by K, and then N accepts if K rejects, and rejects if K accepts.
If n is the length of the input to N, then m (the length of the input to K) is twice n plus some delimiter symbol, so m = 2n + 1. N's running time is thus f(⌊m/2⌋) = f(⌊(2n + 1)/2⌋) = f(n).
Now if we feed [N] as input into N itself (which makes n the length of [N]) and ask the question whether N accepts its own description as input, we get:
If N accepts [N] (which we know it does in at most f(n) operations since K halts on ([N], [N]) in f(n) steps), this means that K rejects ([N], [N]), so ([N], [N]) is not in Hf, and so by the definition of Hf, this implies that N does not accept [N] in f(n) steps. Contradiction.
If N rejects [N] (which we know it does in at most f(n) operations), this means that K accepts ([N], [N]), so ([N], [N]) is in Hf, and thus N does accept [N] in f(n) steps. Contradiction.
We thus conclude that the machine K does not exist, and so

Hf ∉ TIME(f(⌊m/2⌋)).
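The diagonal step has the same shape as the classic halting-problem paradox. A schematic Python rendering, where K is a placeholder for the assumed (and, as just shown, impossible) fast decider for Hf:

```python
def K(machine_description, x):
    """Stand-in for the assumed fast decider of Hf; any concrete choice
    of behaviour leads to the contradiction sketched below."""
    return True  # arbitrary placeholder answer

def N(machine_description):
    """The diagonal machine from the proof: run K on ([M], [M]) and
    return the opposite answer."""
    return not K(machine_description, machine_description)

# Feeding N its own description is self-defeating:
#   N("[N]") is True  iff  K("[N]", "[N]") is False  iff, by K's assumed
#   correctness, N does not accept "[N]" within f(n) steps.
# Either way K must answer wrongly somewhere, so no such K exists.
print(N("[N]"))  # False with the placeholder K above
```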
Extension
The reader may have realised that the proof gives the weaker result because we have chosen a simple Turing machine simulation for which we know that

Hf ∈ TIME(f(m)³).

It is known that a more efficient simulation exists which establishes that

Hf ∈ TIME(f(m) log f(m)).
Non-deterministic time hierarchy theorem
If g(n) is a time-constructible function, and f(n+1) = o(g(n)), then there exists a decision problem which cannot be solved in non-deterministic time f(n) but can be solved in non-deterministic time g(n). In other words, the complexity class NTIME(f(n)) is a strict subset of NTIME(g(n)).
Consequences
The time hierarchy theorems guarantee that the deterministic and non-deterministic versions of the exponential hierarchy are genuine hierarchies: in other words P ⊊ EXPTIME ⊊ 2-EXP ⊊ ... and NP ⊊ NEXPTIME ⊊ 2-NEXP ⊊ ....
For example, P ⊊ EXPTIME, since P ⊆ DTIME(2^n) ⊊ DTIME(2^(2n)) ⊆ EXPTIME. Indeed, DTIME(2^n) ⊊ DTIME(2^(2n)) follows from the time hierarchy theorem.
The theorem also guarantees that there are problems in P requiring arbitrarily large exponents to solve; in other words, P does not collapse to DTIME(n^k) for any fixed k. For example, there are problems solvable in n^5000 time but not n^4999 time. This is one argument against Cobham's thesis, the convention that P is a practical class of algorithms. If such a collapse did occur, we could deduce that P ≠ PSPACE, since it is a well-known theorem that DTIME(f(n)) is strictly contained in DSPACE(f(n)).
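Written out as an instance of the theorem (our instantiation; any fixed k works the same way, since n^k · log(n^k) = k·n^k·log n = o(n^(k+1))):

```latex
\mathrm{DTIME}\bigl(n^{k}\bigr) \subsetneq \mathrm{DTIME}\bigl(n^{k+1}\bigr) \subseteq \mathrm{P}
\qquad \text{for every fixed } k \ge 1.
```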
However, the time hierarchy theorems provide no means to relate deterministic and non-deterministic complexity, or time and space complexity, so they cast no light on the great unsolved questions of computational complexity theory: whether P and NP, NP and PSPACE, PSPACE and EXPTIME, or EXPTIME and NEXPTIME are equal or not.
Sharper hierarchy theorems
The gap of approximately log f(n) between the lower and upper time bound in the hierarchy theorem can be traced to the efficiency of the device used in the proof, namely a universal program that maintains a step-count. This can be done more efficiently on certain computational models. The sharpest results, presented below, have been proved for:
The unit-cost random-access machine
A programming language model whose programs operate on a binary tree that is always accessed via its root. This model, introduced by Neil D. Jones, is stronger than a deterministic Turing machine but weaker than a random-access machine.
For these models, the theorem has the following form:
If f(n) is a time-constructible function, then there exists a decision problem which cannot be solved in worst-case deterministic time f(n) but can be solved in worst-case time af(n) for some constant a (dependent on f).
Thus, a constant-factor increase in the time bound allows for solving more problems, in contrast with the situation for Turing machines (see Linear speedup theorem). Moreover, Ben-Amram proved that, in the above models, for f of polynomial growth rate (but more than linear), it is the case that for all ε > 0, there exists a decision problem which cannot be solved in worst-case deterministic time f(n) but can be solved in worst-case time (1 + ε)f(n).
See also
Space hierarchy theorem
References
Further reading
Pages 310–313 of section 9.1: Hierarchy theorems.
Section 7.2: The Hierarchy Theorem, pp. 143–146.
Structural complexity theory
Theorems in computational complexity theory
Articles containing proofs | Time hierarchy theorem | [
"Mathematics"
] | 2,276 | [
"Theorems in computational complexity theory",
"Articles containing proofs",
"Theorems in discrete mathematics"
] |
184,179 | https://en.wikipedia.org/wiki/Institute%20for%20Advanced%20Study | The Institute for Advanced Study (IAS) is an independent center for theoretical research and intellectual inquiry located in Princeton, New Jersey. It has served as the academic home of internationally preeminent scholars, including Albert Einstein, J. Robert Oppenheimer, Hermann Weyl, John von Neumann, Michael Walzer, Clifford Geertz and Kurt Gödel, many of whom had emigrated from Europe to the United States.
It was founded in 1930 by American educator Abraham Flexner, together with philanthropists Louis Bamberger and Caroline Bamberger Fuld. Despite collaborative ties and neighboring geographic location, the institute, being independent, has "no formal links" with Princeton University. The institute does not charge tuition or fees.
Flexner's guiding principle in founding the institute was the pursuit of knowledge for its own sake. The faculty have no classes to teach. There are no degree programs or experimental facilities at the institute. Research is never contracted or directed. It is left to each individual researcher to pursue their own goals. Established during the rise of fascism in Europe, the institute played a key role in the transfer of intellectual capital from Europe to America. It quickly earned its reputation as the pinnacle of academic and scientific life—a reputation it has retained.
The institute consists of four schools: Historical Studies, Mathematics, Natural Sciences, and Social Sciences. The institute also has a program in Systems Biology.
It is supported entirely by endowments, grants, and gifts. It is one of eight American mathematics institutes funded by the National Science Foundation. It is the model for all ten members of the consortium Some Institutes for Advanced Study.
History
Founding
The institute was founded in 1930 by Abraham Flexner, together with philanthropists Louis Bamberger and Caroline Bamberger Fuld. Flexner was interested in education generally and as early as 1890 he had founded an experimental school which had no formal curriculum, exams, or grades. It was a great success at preparing students for prestigious colleges and this same philosophy would later guide him in the founding of the Institute for Advanced Study.
Flexner's study of medical schools, the 1910 Flexner Report, played a major role in the reform of medical education. Flexner had studied European institutions such as Heidelberg University, All Souls College, Oxford, and the Collège de France, and he wanted to establish a similar advanced research center in the United States.
In his autobiography, Abraham Flexner reports a phone call which he received in the fall of 1929 from representatives of the Bamberger siblings, a call that led to their partnership and the eventual founding of the IAS.
The Bamberger siblings wanted to use the proceeds from the sale of their Bamberger's department store in Newark, New Jersey, to fund a dental school as an expression of gratitude to the state of New Jersey. Flexner convinced them to put their money in the service of more abstract research. (There was a brush with near-disaster when the Bambergers pulled their money out of the market just before the Crash of 1929.) The eminent topologist Oswald Veblen at Princeton University, who had long been trying to found a high-level research institute in mathematics, urged Flexner to locate the new institute near Princeton where it would be close to an existing center of learning and a world-class library. In 1932 Veblen resigned from Princeton and became the first professor in the new Institute for Advanced Study. He selected most of the original faculty and also helped the institute acquire land in Princeton for both the original facility and future expansion.
Flexner and Veblen set out to recruit the best mathematicians and physicists they could find. The rise of fascism and the associated anti-semitism forced many prominent mathematicians to flee Europe and some, such as Einstein and Hermann Weyl (whose wife was Jewish), found a home at the new institute. Weyl as a condition of accepting insisted that the institute also appoint the thirty-year-old Hungarian polymath John von Neumann. Indeed, the IAS became the key lifeline for scholars fleeing Europe. Einstein was Flexner's first coup and shortly after that he was followed by Veblen's brilliant student James Alexander and the wunderkind of logic Kurt Gödel. Flexner was fortunate in the luminaries he directly recruited but also in the people that they brought along with them. Thus, by 1934 the fledgling institute was led by six of the most prominent mathematicians in the world. In 1935 quantum physics pioneer Wolfgang Pauli became a faculty member. With the opening of the Institute for Advanced Study, Princeton replaced Göttingen as the leading center for mathematics in the twentieth century.
Early years
For the six years from its opening in 1933, until Fuld Hall was finished and opened in 1939, the institute was housed within Princeton University—in Fine Hall, which housed Princeton's mathematics department. Princeton University's science departments are less than two miles away and informal ties and collaboration between the two institutions occurred from the beginning. This helped start an incorrect impression that it was part of the university, one that has never been completely eradicated.
On June 4, 1930, the Bambergers wrote to the institute's trustees, directing that appointments to the faculty and staff be made with no account taken, directly or indirectly, of race, religion, or sex.
Bamberger's policy did not prevent racial discrimination by Princeton. When African-American mathematician William S. Claytor applied to the IAS in 1937, Princeton University said they "would not permit any colored person to go to the Institute for Advanced Study." It was not until 1939, when the institute had moved into its own building, that Veblen was able to offer Claytor a position; but this time Claytor turned it down on principle.
Flexner had successfully assembled a faculty of unrivaled prestige in the School of Mathematics which officially opened in 1933. He sought to equal this success in the founding of schools of economics and humanities but this proved to be more difficult. The School of Humanistic Studies and the School of Economics and Politics were established in 1935. All three schools along with the office of the director moved into the newly built Fuld Hall in 1939. (Ultimately the schools of Humanistic Studies and Economics and Politics were merged into the present day School of Historical Studies established in 1949.) In the beginning, the School of Mathematics included physicists as well as mathematicians. A separate School of Natural Sciences was not established until 1966. The School of Social Science was founded in 1973.
Mission
In a 1939 essay Flexner emphasized that James Clerk Maxwell, driven only by a desire to know, did abstruse calculations in the field of magnetism and electricity, and that these investigations led in a direct line to the entire electrical development of modern times. Citing Maxwell and other theoretical scientists such as Carl Friedrich Gauss, Michael Faraday, Paul Ehrlich and Einstein, Flexner said, "Throughout the whole history of science most of the really great discoveries which have ultimately proved to be beneficial to mankind have been made by men and women who were driven not by the desire to be useful but merely the desire to satisfy their curiosity."
The IAS Bluebook restates this commitment to curiosity-driven research as the institute's guiding principle. This was the belief to which Flexner clung passionately, and which continues to inspire the institute today.
Impact
From the day it opened the IAS had a major impact on mathematics, physics, economic theory, and world affairs. In mathematics forty-two out of sixty-one Fields Medalists have been affiliated with the institute. Thirty-four Nobel Laureates have worked at the IAS. Of the sixteen Abel Prizes awarded since the establishment of that award in 2003, nine were garnered by Institute professors or visiting scholars. Of the fifty-six Cole Prizes awarded since the establishment of that award in 1928, thirty-nine have gone to scholars associated with the IAS at some point in their career. IAS people have won 20 Wolf Prizes in mathematics and physics.
Its more than 6,000 former members hold positions of intellectual and scientific leadership throughout the academic world.
Pioneering work on the theory of the stored-program computer as laid down by Alan Turing was done at the IAS by John von Neumann, and the IAS machine built in the basement of Fuld Hall from 1946 to 1951 under von Neumann's direction introduced the basic architecture of most modern digital computers. The IAS is the leading center of research in string theory and its generalization M-theory introduced by Edward Witten at the IAS in 1995. The Langlands program, a far-reaching approach which unites parts of geometry, mathematical analysis, and number theory, was introduced by Robert Langlands, the mathematician who now occupies Albert Einstein's old office at the institute. Langlands was inspired by the work of Hermann Weyl, André Weil, and Harish-Chandra, all scholars with wide-ranging ties to the institute, and the IAS maintains the key repository for the papers of Langlands and the Langlands program. The IAS is a main center of research for homotopy type theory, a modern approach to the foundations of mathematics which is not based on classical set theory. A special year organized by Institute professor Vladimir Voevodsky and others resulted in a benchmark book in the subject, which was published by the institute in 2013.
The institute is or has been the academic home of many of the best minds of their generation. Among them are James Waddell Alexander II, Michael Atiyah, Enrico Bombieri, Shiing-Shen Chern, Pierre Deligne, Freeman Dyson, Albert Einstein, Clifford Geertz, Kurt Gödel, Albert Hirschman, George F. Kennan, Tsung-Dao Lee, Avishai Margalit, J. Robert Oppenheimer, Erwin Panofsky, Atle Selberg, John von Neumann, André Weil, Hermann Weyl, Frank Wilczek, Edward Witten, Chen-Ning Yang and Shing-Tung Yau.
Special Year programs
Flexner's vision of the kind of results that can emerge in an institution devoted to the pursuit of knowledge for its own sake is illustrated by the "Special Year" programs sponsored by the IAS School of Mathematics. For example, in 2012–13 researchers at the IAS school of mathematics held A Special Year on Univalent Foundations of Mathematics. Intuitionistic type theory was created by the Swedish logician Per Martin-Löf in 1972 to serve as an alternative to set theory as a foundation for mathematics. The special year brought together researchers in topology, computer science, category theory, and mathematical logic with the goal of formalizing and extending this theory of foundations. The program was organized by Steve Awodey, Thierry Coquand and Vladimir Voevodsky, and resulted in a book on homotopy type theory being published. The authors—more than 30 researchers ultimately contributed to the project—noted the essential contribution of the IAS to the work. One of the researchers, Andrej Bauer, likewise credited the institute's role in making the collaboration possible.
The book, informally known as The HoTT book, is freely available online.
School of Social Science
Founded in 1973, the School of Social Science is devoted to critical approaches to social research, both theoretical and empirical, and featuring multidisciplinary, multi-method and international perspectives. Wendy Brown, Didier Fassin, and Alondra Nelson are professors of the School of Social Science at the Institute. Among past and emeritus faculty professors are Danielle S. Allen, Clifford Geertz, Albert O. Hirschman, Eric S. Maskin, Dani Rodrik, Joan Wallach Scott, and Michael Walzer.
Criticism
Richard Feynman argued that the IAS does not offer its scholars real activity or challenge.
Other Institutes for Advanced Study
The IAS in Princeton is widely recognized as the world's first Institute for Advanced Study. Despite later imitators of the institute's model, it took years before any similar institutions were founded. The Center for Advanced Study in the Behavioral Sciences at Stanford was the first such spinoff in 1954. This was followed by the National Humanities Center founded in North Carolina in 1978. These two institutions eventually became the core of a consortium known as Some Institutes for Advanced Study (SIAS). The SIAS consortium includes the original institute in Princeton and nine other institutes founded explicitly to emulate the model of the original IAS. These ten Institutes for Advanced Study are:
Center for Advanced Study in the Behavioral Sciences in Stanford, California
National Humanities Center in North Carolina
Radcliffe Institute for Advanced Study in Cambridge, Massachusetts
The Institute for Advanced Study in the Humanities (KWI) in Essen, Germany
Netherlands Institute for Advanced Study in Amsterdam, the Netherlands (until 2016 in Wassenaar)
Swedish Collegium for Advanced Study in Uppsala, Sweden
Berlin Institute for Advanced Study in Berlin, Germany
Israel Institute for Advanced Studies in Jerusalem
Nantes Institute for Advanced Study Foundation in Nantes, France
Stellenbosch Institute for Advanced Study in Stellenbosch, South Africa
Institute for Advanced Study in Princeton, New Jersey
In recent years there have been other institutes loosely based on the Princeton original, in some cases established with help from IAS professors. In 1997 IAS professor Chen-Ning Yang helped the Chinese set up the Institute for Advanced Study at Tsinghua University in Beijing. The Freiburg Institute for Advanced Studies in Freiburg, Germany was founded in 2007, with IAS director at the time Peter Goddard giving the inaugural address. Princeton IAS professors André Weil and Armand Borel helped to establish close contacts with the Ramanujan Institute for Advanced Study in Mathematics, founded in 1967 as part of the University of Madras in India.
The prestigious Institut des Hautes Études Scientifiques (IHÉS) founded in 1958 just south of Paris is universally acknowledged to be the French counterpart of the IAS in Princeton. Princeton Institute director Robert Oppenheimer had a close relationship with IHÉS founder Léon Motchane and played a major role in helping to get it established. The Dublin Institute for Advanced Studies, which focuses on theoretical physics, cosmic physics, and Celtic studies, was also based on the IAS, and was the second such institute when it was founded in 1940.
Neither the Princeton IAS nor SIAS is connected with, and should not be confused with, the Consortium of Institutes of Advanced Studies which comprises some twenty research institutes located throughout Great Britain and Ireland. The name Institute for Advanced Study, along with the acronym IAS, is also used by various other independent institutions throughout the world, some having little to do with the Princeton model. See Institute for Advanced Study (disambiguation) for a complete list.
Directors, faculty and members
At any given time, the IAS has a faculty consisting of twenty-eight eminent academics who are appointed for life. Although the faculty do not teach classes (because there are none), they often do give lectures at their own initiative and have the title Professor along with the prestige associated with that title. Furthermore, they direct research and serve as the nucleus of a larger and generally younger group of scholars, whom they have the power to select and invite. Each year fellowships are awarded to about 190 visiting members from over 100 universities and research institutions who come to the institute for periods from one term to a few years. Individuals must apply to become members of the institute, and each of the schools has its own application procedures and deadlines.
Campus, Lands, Olden Farm and Olden Manor
The IAS owns over 600 acres of land, most of which was acquired between 1936 and 1945. Since 1997 the institute has preserved 589 acres of woods, wetlands, and farmland. By 1936, for a total of $290,000, the founding trustees of the IAS had purchased 256 acres, including the two-hundred-acre Olden Farm with Olden Manor, which was the former home of William Olden. Olden Manor, with its extensive gardens, has been, since 1940, the residence of the institute's director.
See also
List of Nobel laureates affiliated with the Institute for Advanced Study
List of Fields medalists affiliated with the Institute for Advanced Study
List of Cole Prize winners affiliated with the Institute for Advanced Study
List of Wolf Prize winners affiliated with the Institute for Advanced Study
Some Institutes for Advanced Study
Economic and Financial Organization of the League of Nations, relocated at the IAS 1940–1946
References
Bibliography
Arntzenius, Linda G (2011). Institute for Advanced Study, pub by Arcadia, Charleston, SC.
Axtell, James (2007). The Making of Princeton University : From Woodrow Wilson to the Present, Princeton University Press.
Batterson, Steve (2006). Pursuit of Genius : Flexner, Einstein, and the Early Faculty at the Institute for Advanced Study, A. K. Peters, Ltd., Wellesley, MA.
Bonner, Thomas Neville (2002). Iconoclast: Abraham Flexner and a Life in Learning, Johns Hopkins University Press.
Dyson, George (2012). Turing's Cathedral: The Origins of the Digital Universe, Pantheon Books, New York.
Edwards, Jon R. (2012). A History of Early Computing at Princeton, Princeton Turing Centennial Celebration, Princeton University, May 10–12, 2012
Feuer, Lewis Samuel (1974). Einstein and the Generations of Science, Basic Books.
Flexner, Abraham (1910). Medical Education in the United States and Canada: A Report to the Carnegie Foundation for the Advancement of Teaching, Merrymount Press. OCLC 9795002
Flexner, Abraham (1930). Universities : American, English, German, Oxford Univ. Press, New York, OCLC 238820218
Flexner, Abraham (1939). The Usefulness of Useless Knowledge, Harpers Magazine, Issue 179, June/November 1939
Flexner, Abraham (1960). Abraham Flexner : An Autobiography, Simon and Schuster, New York. OCLC 14616573
Freiberger, Marianne (2011). Review of Pursuit of Genius: Flexner, Einstein, and the Early Faculty at the Institute for Advanced Study, The Mathematical Intelligencer
Frenkel, Edward (2015). Love and Math: The Heart of Hidden Reality, Basic Books, New York,
Grattan-Guinness, Ivor (2003). Companion Encyclopedia of the History and Philosophy of the Mathematical Sciences, volume 2, The Johns Hopkins University Press.
Gunderman, Richard B.; Gascoine, Kelly; Hafferty, Frederic W.; Kanter, Steven L. (2010). A "paradise for scholars": Flexner and the Institute for Advanced Study, Academic Medicine : Journal of the Association of American Medical Colleges, November 2010; 85(11): 1784–9
Institute for Advanced Study (1940). Bulletin No. 9: History and Organization.
Jogalekar, Ashutosh (2013). Ich probiere: Revisiting Abraham Flexner's dream of the useful pursuit of useless knowledge, Scientific American, December 12, 2013
Leitch, Alexander (1978). The Institute for Advanced Study , in A Princeton Companion, Princeton University Press
Leitch, Alexander (1995). Oswald Veblen , in A Princeton Companion, Princeton University Press
Nasar, Sylvia (1998). A beautiful mind : a biography of John Forbes Nash, Jr., Simon & Schuster, New York,
Nevins, Michael (2010). Abraham Flexner: A Flawed American Icon, iUniverse Inc.,
Britta Padberg (2020). The Global Diversity of Institutes for Advanced Study, Sociologica, vol.14, no.1 (2020)
Pais, Abraham & Crease, Robert P. (2006). J. Robert Oppenheimer: A Life, Oxford University Press, New York.
Pasachoff, Naomi (1992). Science's 'Intellectual Hotel' : The Institute for Advanced Study, Encyclopædia Britannica Yearbook of Science and the Future.
Regis, Ed (1987). Who Got Einstein's Office: Eccentricity and Genius at the Institute for Advanced Study, Addison-Wesley, Reading.
Reisz, Matthew (2008). The perfect brainstorm, Times Higher Education, March 20, 2008
Scott, Joan Wallach & Keates, Debra, eds (2001). Schools of Thought : Twenty-five Years of Interpretive Social Science, Princeton University Press. A collection of reflective pieces by former fellows at the Institute for Advanced Study School for Social Science.
Villani, Cédric (2015). Birth of a Theorem : A Mathematical Adventure, Faber and Faber.
Wittrock, Björn. A brief history of institutes for advanced study
External links
"Institute for Advanced Study", a historical overview of the Institute published on the occasion of the 75th anniversary of the founding
Schools in Princeton, New Jersey
National Science Foundation mathematical sciences institutes
1930 establishments in New Jersey
Theoretical physics institutes | Institute for Advanced Study | [
"Physics"
] | 4,191 | [
"Theoretical physics",
"Theoretical physics institutes"
] |
184,182 | https://en.wikipedia.org/wiki/Draize%20test | The Draize test is an acute toxicity test devised in 1944 by Food and Drug Administration (FDA) toxicologists John H. Draize and Jacob M. Spines. Initially used for testing cosmetics, the procedure involves applying 0.5 mL or 0.5 g of a test substance to the eye or skin of a restrained, conscious animal, and then leaving it for a set amount of time before rinsing it out and recording its effects. The animals are observed for up to 14 days for signs of erythema and edema in the skin test, and redness, swelling, discharge, ulceration, hemorrhaging, cloudiness, or blindness in the tested eye. The test subject is commonly an albino rabbit, though other species are used too, including dogs. The animals are euthanized after testing if the test renders irreversible damage to the eye or skin. Animals may be re-used for testing purposes if the product tested causes no permanent damage. Animals are typically reused after a "wash out" period during which all traces of the tested product are allowed to disperse from the test site.
The tests are controversial. They are viewed as cruel as well as unscientific by critics because of the differences between rabbit and human eyes, and the subjective nature of the visual evaluations. The FDA supports the test, stating that "to date, no single test, or battery of tests, has been accepted by the scientific community as a replacement [for] ... the Draize test". Because of its controversial nature, the use of the Draize test in the U.S. and Europe has declined in recent years and is sometimes modified so that anaesthetics are administered and lower doses of the test substances used. Chemicals already shown to have adverse effects in vitro are not currently used in a Draize test, thereby reducing the number and severity of tests that are carried out.
Background
John Henry Draize (1900–1992) obtained a BSc in chemistry then a PhD in pharmacology, studying hyperthyroidism. He then joined the University of Wyoming and investigated plants poisonous to cattle, other livestock, and people. The U.S. Army recruited Draize in 1935 to investigate the effects of mustard gas and other chemical agents.
In 1938, after a number of reports of coal tar in mascara leading to blindness, the U.S. Congress passed the Federal Food, Drug, and Cosmetic Act, placing cosmetics under regulatory control. The following year Draize joined the FDA, and was soon promoted to head of the Dermal and Ocular Toxicity Branch where he was charged with developing methods for testing the side effects of cosmetic products. This work culminated in a report by Draize, his laboratory assistant, Geoffrey Woodard, and division chief, Herbert Calvery, describing how to assess acute, intermediate, and chronic exposure to cosmetics by applying compounds to the skin, penis, and eyes of rabbits.
Following this report, the techniques were used by the FDA to evaluate the safety of substances such as insecticides and sunscreens and later adopted to screen many other compounds. By the time of Draize's retirement in 1963, and despite his never having personally attached his name to any technique, irritancy procedures were commonly known as "the Draize test". To distinguish the target organ, the tests are now often referred to as "the Draize eye test" and "the Draize skin test".
Reliability
In 1971, before the implementation in 1981 of the modern Draize protocol, toxicologists Carrol Weil and Robert Scala of Carnegie Mellon University distributed three test substances for comparative analysis to 24 different university and state laboratories. The laboratories returned significantly different evaluations, from non-irritating to severely irritating, for the same substances. A 2004 study by the U.S. Scientific Advisory Committee on Alternative Toxicological Methods analyzed the modern Draize skin test. They found that tests would:
Misidentify a serious irritant as safe: 0–0.01%
Misidentify a mild irritant as safe: 3.7–5.5%
Misidentify a serious irritant as a mild irritant: 10.3–38.7%
Descriptions of the test
Anti-testing
According to the American National Anti-Vivisection Society, solutions of products are applied directly into the animals' eyes, which can cause "intense burning, itching and pain". Clips are placed on the rabbits' eyelids to hold them open during the test period, which can last several days, during which time the rabbits are placed in restraining stocks. The chemicals often leave the eyes "ulcerated and bleeding". In the Draize test for skin irritancy, the test substances are applied to skin that is shaved and abraded (several layers of skin are removed with sticky tape), then covered with plastic sheeting.
Pro-testing
According to the British Research Defence Society, the Draize eye test is now a "very mild test", in which small amounts of substances are used and are washed out of the eye at the first sign of irritation. In a letter to Nature, written to refute an article saying that the Draize test had not changed much since the 1940s, Andrew Huxley wrote: "A substance expected from its chemical nature to be seriously painful must not be tested in this way; the test is permissible only if the substance has already been shown not to cause pain when applied to skin, and in vitro pre-screening tests are recommended, such as a test on an isolated and perfused eye. Permission to carry out the test on several animals is given only if the test has been performed on a single animal and a period of 24 hours has been allowed for injury to become evident."
Differences between the rabbit eye and the human eyes
Kirk Wilhelmus, professor in the Department of Ophthalmology at Baylor College of Medicine, conducted a comprehensive review of the Draize eye test in 2001. He also reported that differences in anatomy and biochemistry between the rabbit and human eye indicate that testing substances on rabbits might not predict the effects on humans. However, he noted "that eyes of rabbits are generally more susceptible to irritating substances than the eyes of humans" making them a conservative model of the human eye. Wilhelmus concluded "The Draize eye test ... has assuredly prevented harm" to humans, but predicts it will be "supplanted as in vitro and clinical alternatives emerge for assessing irritancy of the ocular surface".
Alternatives
Industry and regulatory bodies responsible for public health are actively assessing animal-free tests to reduce the requirement for Draize testing. Before 2009 the Organisation for Economic Co-operation and Development (OECD) had not validated any alternative methods for testing eye or skin irritation potential. Since 2000, however, the OECD has validated alternative tests for corrosivity, meaning acids, bases and other corrosive substances are no longer required to be Draize tested on animals. The alternative tests include a human skin equivalent model and the transcutaneous electrical resistance test (TER). In addition, the human corneal epithelial cell line HCE-T offers another alternative method for testing the eye-irritation potential of chemicals.
In September 2009 the OECD validated two alternatives to the Draize eye test: the bovine cornea opacity test (BCOP) and isolated chicken eye test (ICE). A 1995 study funded by the European Commission and British Home Office evaluated these among nine potential replacements, including the hens' egg chorioallantoic membrane (HET-CAM) assay and an epithelial model cultivated from human corneal cells, in comparison with Draize test data. The study found that none of the alternative tests, taken alone, proved to be a reliable replacement for the animal test.
Positive results from some of these tests have been accepted by regulatory bodies, such as the British Health and Safety Executive and US Department of Health and Human Services, without testing on live animals, but negative results (no irritation) required further in vivo testing. Regulatory bodies have therefore begun to adopt a tiered testing strategy for skin and eye irritation, using alternatives to reduce Draize testing of substances with the most severe effects.
Regulations
UK
In Britain, the Home Office publishes guidelines for eye irritancy tests, with the aim of reducing suffering to the animals. In its 2006 guidelines, it "strongly encourages" in vitro screening of all compounds before testing on animals, and mandates the use of validated alternatives when available. It requires that the test solution's "physical and chemical properties are not such that a severe adverse reaction could be predicted"; therefore "known corrosive substances or those with a high oxidation or reduction potential must not be tested."
The test design requires that the substance be tested on one rabbit initially, and the effect of the substance on the skin must be ascertained before it can be introduced into the eye. If a rabbit shows signs of "severe pain" or distress it must be immediately killed, the study terminated and the compound may not be tested on other animals. In tests where severe eye irritancy is considered likely, a washout should closely follow testing in the eye of the first rabbit. In the UK, any departure from these guidelines requires prior approval from the Secretary of State.
See also
Animal testing on rabbits
test
Notes
Further reading
Clelatt, KN (Ed): Textbook of Veterinary Ophthalmology. Lea & Febiger, Philadelphia. 1981.
Prince JH, Diesem CD, Eglitis I, Ruskell GL: Anatomy and Histology of the Eye and Orbit in Domestic Animals. Charles C. Thomas, Springfield, 1960.
Saunders LZ, Rubin LF: Ophthalmic Pathology in Animals. S. Karger, New York, 1975.
Swanston DW: Eye irritancy testing. In: Balls M, Riddell RJ, Warden AN (Eds). Animals and Alternatives in Toxicity Testing. Academic Press, New York, 1983, pp. 337–367.
Buehler EV, Newmann EA: A comparison of eye irritation in monkeys and rabbits. Toxicol Appl Pharmacol 6:701-710:1964.
Sharpe R: The Draize test-motivations for change. Fd Chem Toxicol 23:139-143:1985.
Freeberg FE, Hooker DT, Griffith JF: Correlation of animal eye test data with human experience for household products: an update. J Toxicol-Cut & Ocular Toxicol 5:115-123:1986.
Griffith JF, Freeberg FE: Empirical and experimental bases for selecting the low volume eye irritation test as the validation standard for in vitro methods. In: Goldberg AM (Ed): In Vitro Toxicology: Approaches to Validation. New York, Mary Ann Liebert, 1987, pp. 303–311.
Shopsis C, Borenfreund E, Stark DM: Validation studies on a battery of potential in vitro alternatives to the Draize test. In: Goldberg AM (Ed): In Vitro Toxicology: Approaches to Validation. New York, Mary Ann Liebert, 1987, pp. 31–44.
Maurice D: Direct toxicity to the cornea: a nonspecific process? In: Goldberg AM (Ed): In Vitro Toxicology: Approaches to Validation. New York, Mary Ann Liebert, 1987, pp. 91–93.
Leighton J, Nassauer J, Tchao R, Verdone J: Development of a procedure using the chick egg as an alternative to the Draize rabbit test. In: Goldberg AM (Ed): Product Safety Evaluation. New York, Mary Ann Liebert, 1983, pp. 165–177.
Gordon VC, Bergman HC: The EYETEX-MPA system. Presented at the Symposium, Progress in In Vitro Technology, Johns Hopkins University School of Hygiene and Public Health, Baltimore, Maryland, November 1987.
Hertzfeld HR, Myers TD: The economic viability of in vitro testing techniques. In: Goldberg AM (Ed): In Vitro Toxicology. New York. Mary Ann Liebert, 1987, pp. 189–202.
Animal testing techniques
Animal rights
Toxicology tests
American inventions
Animal testing in the United States | Draize test | [
"Chemistry",
"Environmental_science"
] | 2,538 | [
"Animal testing",
"Toxicology",
"Animal testing techniques",
"Toxicology tests"
] |
184,300 | https://en.wikipedia.org/wiki/Brookite | Brookite is the orthorhombic variant of titanium dioxide (TiO2), which occurs in four known natural polymorphic forms (minerals with the same composition but different structure). The other three of these forms are akaogiite (monoclinic), anatase (tetragonal) and rutile (tetragonal). Brookite is rare compared to anatase and rutile and, like these forms, it exhibits photocatalytic activity. Brookite also has a larger cell volume than either anatase or rutile, with 8 TiO2 groups per unit cell, compared with 4 for anatase and 2 for rutile. Iron (Fe), tantalum (Ta) and niobium (Nb) are common impurities in brookite.
Brookite was named in 1825 by French mineralogist Armand Lévy for Henry James Brooke (1771–1857), an English crystallographer, mineralogist and wool trader.
Arkansite is a variety of brookite from Magnet Cove, Arkansas, US. It is also found in the Murun Massif on the Olyokma-Chara Plateau of Eastern Siberia, Russia, part of the Aldan Shield.
At temperatures above about 750 °C, brookite will revert to the rutile structure.
Unit cell
Brookite belongs to the orthorhombic dipyramidal crystal class 2/m 2/m 2/m (also designated mmm). The space group is Pcab and the unit cell parameters are a = 5.4558 Å, b = 9.1819 Å and c = 5.1429 Å. The formula is TiO2, with 8 formula units per unit cell.
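A quick consistency check of these figures can be scripted. For an orthorhombic cell the volume is simply a·b·c, and with Z = 8 formula units the X-ray density follows; the only inputs beyond the parameters above are the molar mass of TiO2 (about 79.87 g/mol) and Avogadro's constant. A minimal Python sketch:

    # X-ray density of brookite from the unit-cell data above (orthorhombic: V = a*b*c)
    A_TO_CM = 1e-8          # 1 angstrom = 1e-8 cm
    N_A = 6.02214076e23     # Avogadro's constant, 1/mol
    M_TIO2 = 79.866         # molar mass of TiO2, g/mol
    Z = 8                   # formula units per unit cell

    a, b, c = 5.4558, 9.1819, 5.1429   # cell edges in angstroms
    volume_cm3 = (a * A_TO_CM) * (b * A_TO_CM) * (c * A_TO_CM)
    density = Z * M_TIO2 / (N_A * volume_cm3)
    print(f"cell volume: {a * b * c:.1f} A^3, X-ray density: {density:.2f} g/cm^3")
    # ~257.6 A^3 and ~4.12 g/cm^3, consistent with the specific gravity quoted below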
Structure
The brookite structure is built up of distorted octahedra with a titanium ion at the center and oxygen ions at each of the six vertices. Each octahedron shares three edges with adjoining octahedra, forming an orthorhombic structure.
Appearance
Brookite crystals are typically tabular, elongated and striated parallel to their length. They may also be pyramidal, pseudo-hexagonal or prismatic. Brookite and rutile may grow together in an epitaxial relationship.
Brookite is usually brown in color, sometimes yellowish or reddish brown, or even black. Beautiful, deep red crystals (seen above-right) similar to pyrope and almandite garnet are also known. Brookite displays a submetallic luster. It is opaque to translucent, transparent in thin fragments and yellowish brown to dark brown in transmitted light.
Optical properties
Brookite is doubly refracting, as are all orthorhombic minerals, and it is biaxial (+). Refractive indices are very high, above 2.5, which is even higher than diamond at 2.42. For comparison, ordinary window glass has a refractive index of about 1.5.
Brookite exhibits very weak pleochroism, yellowish, reddish and orange to brown. It is neither fluorescent nor radioactive.
Physical properties
Brookite is a brittle mineral, with a subconchoidal to irregular fracture and poor cleavage in one direction parallel to the c crystal axis and traces of cleavage in a direction perpendicular to both the a and the b crystal axes. Twinning is uncertain. The mineral has a Mohs hardness of 5½ to 6, between apatite and feldspar. This is the same hardness as anatase and a little less than that of rutile (6 to 6½). The specific gravity is 4.08 to 4.18, between that of anatase at 3.9 and rutile at 4.2.
Occurrence and associations
Brookite is an accessory mineral in alpine veins in gneiss and schist; it is also a common detrital mineral.
Associated minerals include its polymorphs anatase and rutile, and also titanite, orthoclase, quartz, hematite, calcite, chlorite and muscovite.
The type locality is Twll Maen Grisial, Fron Olau, Prenteg, Gwynedd, Wales. In 2004, brookite crystals were found in the Kharan region of Balochistan, Pakistan.
See also
List of minerals
List of minerals recognized by the International Mineralogical Association
List of minerals named after people
References
External links
Brookite structure
Crystal structures of rutile, anatase and brookite
JMol
Titanium minerals
Oxide minerals
Orthorhombic minerals
Minerals in space group 61
Polymorphism (materials science) | Brookite | [
"Materials_science",
"Engineering"
] | 930 | [
"Polymorphism (materials science)",
"Materials science"
] |
184,306 | https://en.wikipedia.org/wiki/Perovskite%20%28structure%29 | A perovskite is any material of formula ABX3 with a crystal structure similar to that of the mineral perovskite, which consists of calcium titanium oxide (CaTiO3). The mineral was first discovered in the Ural mountains of Russia by Gustav Rose in 1839 and named after Russian mineralogist L. A. Perovski (1792–1856). 'A' and 'B' are two positively charged ions (i.e. cations), often of very different sizes, and X is a negatively charged ion (an anion, frequently oxide) that bonds to both cations. The 'A' atoms are generally larger than the 'B' atoms. The ideal cubic structure has the B cation in 6-fold coordination, surrounded by an octahedron of anions, and the A cation in 12-fold cuboctahedral coordination. Additional perovskite forms may exist where either or both of the A and B sites have a configuration of A1(1−x)A2x and/or B1(1−y)B2y, and the X may deviate from the ideal coordination configuration as ions within the A and B sites undergo changes in their oxidation states.
As one of the most abundant structural families, perovskites are found in an enormous number of compounds which have wide-ranging properties, applications and importance. Natural compounds with this structure are perovskite, loparite, and the silicate perovskite bridgmanite. Since the 2009 discovery of perovskite solar cells, which contain methylammonium lead halide perovskites, there has been considerable research interest into perovskite materials.
Structure
Perovskite structures are adopted by many compounds that have the chemical formula ABX3. The idealized form is a cubic structure (space group Pm3m, no. 221), which is rarely encountered. The orthorhombic (e.g. space group Pnma, no. 62, or Amm2, no. 38) and tetragonal (e.g. space group I4/mcm, no. 140, or P4mm, no. 99) structures are the most common non-cubic variants. Although the perovskite structure is named after CaTiO3, this mineral has a non-cubic structure. SrTiO3 and CaRbF3 are examples of cubic perovskites. Barium titanate is an example of a perovskite which can take on the rhombohedral (space group R3m, no. 160), orthorhombic, tetragonal and cubic forms depending on temperature.
In the idealized cubic unit cell of such a compound, the type 'A' atom sits at cube corner position (0, 0, 0), the type 'B' atom sits at the body-center position (1/2, 1/2, 1/2) and X atoms (typically oxygen) sit at face centered positions (1/2, 1/2, 0), (1/2, 0, 1/2) and (0, 1/2, 1/2). The diagram to the right shows edges for an equivalent unit cell with A in the cube corner position, B at the body center, and X at face-centered positions.
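This geometry fixes the nearest-neighbour distances: with cubic lattice parameter a, each B–X bond is a/2 and each A–X bond is a/√2. The short Python sketch below lists the fractional coordinates just described and evaluates the bond lengths; the SrTiO3 lattice parameter of roughly 3.905 Å is a literature value used purely as an example:

    from math import sqrt

    # Fractional coordinates of the ideal cubic ABX3 perovskite cell described above
    sites = {
        "A": [(0.0, 0.0, 0.0)],                                      # cube corner
        "B": [(0.5, 0.5, 0.5)],                                      # body centre
        "X": [(0.5, 0.5, 0.0), (0.5, 0.0, 0.5), (0.0, 0.5, 0.5)],   # face centres
    }

    a = 3.905  # angstroms; approximate lattice parameter of cubic SrTiO3
    print(f"B-X bond (octahedron): {a / 2:.3f} A")           # a/2
    print(f"A-X bond (cuboctahedron): {a / sqrt(2):.3f} A")  # a/sqrt(2)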
Four general categories of cation-pairing are possible: A+B2+X−3, or 1:2 perovskites; A2+B4+X2−3, or 2:4 perovskites; A3+B3+X2−3, or 3:3 perovskites; and A+B5+X2−3, or 1:5 perovskites.
The relative ion size requirements for stability of the cubic structure are quite stringent, so slight buckling and distortion can produce several lower-symmetry distorted versions, in which the coordination numbers of A cations, B cations or both are reduced. Tilting of the BO6 octahedra reduces the coordination of an undersized A cation from 12 to as low as 8. Conversely, off-centering of an undersized B cation within its octahedron allows it to attain a stable bonding pattern. The resulting electric dipole is responsible for the property of ferroelectricity and shown by perovskites such as BaTiO3 that distort in this fashion.
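These size requirements are commonly quantified with the Goldschmidt tolerance factor (listed under See also), t = (rA + rX) / [√2 (rB + rX)]: values near 1 favour the cubic structure, while smaller values accompany the tilted, lower-symmetry variants. A sketch using approximate Shannon ionic radii, which are assumptions taken from standard tables rather than values given in this article:

    from math import sqrt

    def tolerance_factor(r_a, r_b, r_x):
        """Goldschmidt tolerance factor for an ABX3 perovskite (radii in angstroms)."""
        return (r_a + r_x) / (sqrt(2) * (r_b + r_x))

    # Approximate Shannon radii: Sr2+ (12-coord.) 1.44 A, Ti4+ (6-coord.) 0.605 A, O2- 1.40 A
    t = tolerance_factor(1.44, 0.605, 1.40)
    print(f"SrTiO3: t = {t:.3f}")  # ~1.00, consistent with its cubic structure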
Complex perovskite structures contain two different B-site cations. This results in the possibility of ordered and disordered variants.
Layered perovskites
Perovskites may be structured in layers, with the structure separated by thin sheets of intrusive material. Different forms of intrusions, based on the chemical makeup of the intrusion, are defined as:
Aurivillius phase: the intruding layer is composed of a [Bi2O2]2+ ion, occurring every n layers, leading to an overall chemical formula of [Bi2O2][An−1BnO3n+1]. Their oxide ion-conducting properties were first discovered in the 1970s by Takahashi et al., and they have been used for this purpose ever since.
Dion–Jacobson phase: the intruding layer is composed of an alkali metal (M) every n layers, giving the overall formula as M+[An−1BnO3n+1]−.
Ruddlesden–Popper phase: the simplest of the phases, the intruding layer occurs between every one (n = 1) or multiple (n > 1) layers of the lattice. Ruddlesden–Popper phases have a similar relationship to perovskites in terms of atomic radii of elements, with A typically being large (such as La or Sr) and the B ion much smaller, typically a transition metal (such as Mn, Co or Ni). Recently, hybrid organic-inorganic layered perovskites have been developed, where the structure is constituted of one or more layers of [MX6]4− octahedra, where M is a +2 metal (such as Pb2+ or Sn2+) and X a halide ion (such as I−, Br− or Cl−), separated by layers of organic cations (such as butylammonium or phenylethylammonium cations).
Thin films
Perovskites can be deposited as epitaxial thin films on top of other perovskites, using techniques such as pulsed laser deposition and molecular-beam epitaxy. These films can be a couple of nanometres thick or as small as a single unit cell. The well-defined and unique structures at the interfaces between the film and substrate can be used for interface engineering, where new types of properties can arise. This can happen through several mechanisms, from mismatch strain between the substrate and film, change in the oxygen octahedral rotation, compositional changes, and quantum confinement. An example of this is LaAlO3 grown on SrTiO3, where the interface can exhibit conductivity, even though both LaAlO3 and SrTiO3 are non-conductive. Another example is that SrTiO3 grown on LSAT ((LaAlO3)0.3 (Sr2AlTaO6)0.7) or DyScO3 can be turned from an incipient ferroelectric into a ferroelectric at room temperature by means of epitaxially applied biaxial strain. The lattice mismatch of GdScO3 to SrTiO3 (+1.0%) applies tensile stress resulting in a decrease of the out-of-plane lattice constant of SrTiO3, compared to LSAT (−0.9%), which epitaxially applies compressive stress leading to an extension of the out-of-plane lattice constant of SrTiO3 (and a corresponding decrease of the in-plane lattice constant).
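The strain figures quoted above follow from the epitaxial lattice mismatch, conventionally (a_substrate − a_film)/a_film. The pseudocubic lattice parameters in the sketch below are approximate literature values, used only to reproduce the quoted percentages:

    def mismatch(a_substrate, a_film):
        """Biaxial lattice mismatch of an epitaxial film, as a percentage."""
        return 100.0 * (a_substrate - a_film) / a_film

    a_srtio3 = 3.905  # angstroms, approximate pseudocubic lattice parameter
    for substrate, a_sub in [("DyScO3", 3.944), ("LSAT", 3.868)]:
        m = mismatch(a_sub, a_srtio3)
        kind = "tensile" if m > 0 else "compressive"
        print(f"SrTiO3 on {substrate}: {m:+.1f}% ({kind})")
    # ~+1.0% (tensile) and ~-0.9% (compressive), matching the figures above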
Octahedral tilting
Beyond the most common perovskite symmetries (cubic, tetragonal, orthorhombic), a more precise determination leads to a total of 23 different structure types that can be found. These 23 structures can be categorized into 4 different so-called tilt systems that are denoted by their respective Glazer notation.
The notation consists of a letter a/b/c, which describes the rotation around a Cartesian axis, and a superscript +/−/0 to denote the rotation with respect to the adjacent layer. A "+" denotes that the rotations of two adjacent layers point in the same direction, whereas a "−" denotes that adjacent layers are rotated in opposite directions. Common examples are a0a0a0, a0a0a− and a0a0a+.
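Given these rules, a Glazer symbol can be decoded mechanically. The toy parser below assumes the conventional reading in which repeated letters denote tilts of equal magnitude and the superscript 0 denotes no tilt about that axis; it is an illustration, not a crystallographic library:

    import re

    def decode_glazer(symbol):
        """Decode a Glazer tilt symbol such as 'a0a0c+' into plain language."""
        parts = re.findall(r"([a-c])([+\-0])", symbol)
        if len(parts) != 3:
            raise ValueError(f"not a Glazer symbol: {symbol}")
        phase = {"+": "in-phase tilt (adjacent layers rotate the same way)",
                 "-": "anti-phase tilt (adjacent layers rotate oppositely)",
                 "0": "no tilt"}
        return "; ".join(f"{axis}-axis: magnitude '{letter}', {phase[sign]}"
                         for axis, (letter, sign) in zip("xyz", parts))

    print(decode_glazer("a0a0c+"))  # untilted about x and y, in-phase tilt about z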
Examples
Minerals
The perovskite structure is adopted at high pressure by bridgmanite, a silicate with the chemical formula (Mg,Fe)SiO3, which is the most common mineral in the Earth's mantle. As pressure increases, the [SiO4]4− tetrahedral units in the dominant silica-bearing minerals become unstable compared with [SiO6]8− octahedral units. At the pressure and temperature conditions of the lower mantle, the second most abundant material is likely the rocksalt-structured oxide, periclase.
At the high pressure conditions of the Earth's lower mantle, the pyroxene enstatite, MgSiO3, transforms into a denser perovskite-structured polymorph; this phase may be the most common mineral in the Earth. This phase has the orthorhombically distorted perovskite structure (GdFeO3-type structure) that is stable at pressures from ~24 GPa to ~110 GPa. However, it cannot be transported from depths of several hundred km to the Earth's surface without transforming back into less dense materials. At higher pressures, MgSiO3 perovskite, commonly known as silicate perovskite, transforms to post-perovskite.
Complex perovskites
Although there is a large number of simple known ABX3 perovskites, this number can be greatly expanded if the A and B sites are increasingly doubled or made complex, as in A2BB′X6. Ordered double perovskites are usually denoted as A2BB′O6, whereas disordered ones are denoted as A(BB′)O3. In ordered perovskites, three different types of ordering are possible: rock-salt, layered, and columnar. The most common ordering is rock-salt, followed by the much more uncommon disordered and very distant columnar and layered. The formation of rock-salt superstructures is dependent on the B-site cation ordering. Octahedral tilting can occur in double perovskites; however, Jahn–Teller distortions and alternative modes alter the B–O bond length.
Others
Although the most common perovskite compounds contain oxygen, there are a few perovskite compounds that form without oxygen. Fluoride perovskites such as NaMgF3 are well known. A large family of metallic perovskite compounds can be represented by RT3M (R: rare-earth or other relatively large ion, T: transition metal ion and M: light metalloids). The metalloids occupy the octahedrally coordinated "B" sites in these compounds. RPd3B, RRh3B and CeRu3C are examples. MgCNi3 is a metallic perovskite compound and has received a lot of attention because of its superconducting properties. An even more exotic type of perovskite is represented by the mixed oxide-aurides of Cs and Rb, such as Cs3AuO, which contain large alkali cations in the traditional "anion" sites, bonded to O2− and Au− anions.
Materials properties
Perovskite materials exhibit many interesting and intriguing properties from both the theoretical and the application point of view. Colossal magnetoresistance, ferroelectricity, superconductivity, charge ordering, spin dependent transport, high thermopower and the interplay of structural, magnetic and transport properties are commonly observed features in this family. These compounds are used as sensors and catalyst electrodes in certain types of fuel cells and are candidates for memory devices and spintronics applications.
Many superconducting ceramic materials (the high temperature superconductors) have perovskite-like structures, often with 3 or more metals including copper, and some oxygen positions left vacant. One prime example is yttrium barium copper oxide which can be insulating or superconducting depending on the oxygen content.
Chemical engineers are considering a cobalt-based perovskite material as a replacement for platinum in catalytic converters for diesel vehicles.
Aspirational applications
Physical properties of interest to materials science among perovskites include superconductivity, magnetoresistance, ionic conductivity, and a multitude of dielectric properties, which are of great importance in microelectronics and telecommunications. They are also of interest for scintillators, as they have a large light yield for radiation conversion. Because of the flexibility of bond angles inherent in the perovskite structure there are many different types of distortions that can occur from the ideal structure. These include tilting of the octahedra, displacements of the cations out of the centers of their coordination polyhedra, and distortions of the octahedra driven by electronic factors (Jahn–Teller distortions). The largest commercial application of perovskites is in ceramic capacitors, in which BaTiO3 is used because of its high dielectric constant.
Photovoltaics
Synthetic perovskites are possible materials for high-efficiency photovoltaics – they showed a conversion efficiency of up to 26.3% and can be manufactured using the same thin-film manufacturing techniques as that used for thin film silicon solar cells. Methylammonium tin halides and methylammonium lead halides are of interest for use in dye-sensitized solar cells. Some perovskite PV cells reach a theoretical peak efficiency of 31%.
Among the methylammonium halides studied so far the most common is methylammonium lead triiodide (CH3NH3PbI3). It has a high charge carrier mobility and charge carrier lifetime that allow light-generated electrons and holes to move far enough to be extracted as current, instead of losing their energy as heat within the cell. Effective diffusion lengths in CH3NH3PbI3 are some 100 nm for both electrons and holes.
Methylammonium halides are deposited by low-temperature solution methods (typically spin-coating). Other low-temperature (below 100 °C) solution-processed films tend to have considerably smaller diffusion lengths. Stranks et al. described nanostructured cells using a mixed methylammonium lead halide (CH3NH3PbI3−xClx) and demonstrated one amorphous thin-film solar cell with an 11.4% conversion efficiency, and another that reached 15.4% using vacuum evaporation. The film thickness of about 500 to 600 nm implies that the electron and hole diffusion lengths were at least of this order. They measured values of the diffusion length exceeding 1 μm for the mixed perovskite, an order of magnitude greater than the 100 nm for the pure iodide. They also showed that carrier lifetimes in the mixed perovskite are longer than in the pure iodide. Liu et al. applied scanning photocurrent microscopy to show that the electron diffusion length in the mixed halide perovskite along the (110) plane is of the order of 10 μm.
For CH3NH3PbI3, the open-circuit voltage (VOC) typically approaches 1 V, while for CH3NH3PbI3−xClx with low Cl content, VOC > 1.1 V has been reported. Because the band gaps (Eg) of both are 1.55 eV, VOC-to-Eg ratios are higher than usually observed for similar third-generation cells. With wider-bandgap perovskites, VOC up to 1.3 V has been demonstrated.
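The comparison being made here is the ratio of open-circuit voltage to band gap, a rough gauge of how little photovoltage is lost to recombination; a one-liner makes the quoted figures explicit:

    # VOC-to-Eg ratios for the halide perovskites quoted above (Eg = 1.55 eV)
    e_g = 1.55  # band gap in eV; numerically the upper bound on VOC in volts
    for label, v_oc in [("CH3NH3PbI3", 1.0), ("low-Cl mixed halide", 1.1)]:
        print(f"{label}: VOC/Eg = {v_oc / e_g:.2f}")
    # ~0.65 and ~0.71, high compared with similar third-generation cells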
The technique offers the potential of low cost because of the low-temperature solution methods and the absence of rare elements. Cell durability is currently insufficient for commercial use: the solar cells are prone to degradation due to the volatility of the organic [CH3NH3]+I− salt. The all-inorganic perovskite caesium lead iodide (CsPbI3) circumvents this problem, but is itself phase-unstable; low-temperature solution methods for it have only recently been developed.
Planar heterojunction perovskite solar cells can be manufactured in simplified device architectures (without complex nanostructures) using only vapor deposition. This technique produces 15% solar-to-electrical power conversion as measured under simulated full sunlight.
Lasers
LaAlO3 doped with neodymium gave laser emission at 1080 nm. Mixed methylammonium lead halide (CH3NH3PbI3−xClx) cells fashioned into optically pumped vertical-cavity surface-emitting lasers (VCSELs) convert visible pump light to near-IR laser light with a 70% efficiency.
Light-emitting diodes
Due to their high photoluminescence quantum efficiencies, perovskites may find use in light-emitting diodes (LEDs). Although the stability of perovskite LEDs is not yet as good as III-V or organic LEDs, there is ongoing research to solve this problem, such as incorporating organic molecules or potassium dopants in perovskite LEDs. Perovskite-based printing ink can be used to produce OLED display and quantum dot display panels.
Photoelectrolysis
Perovskite photovoltaics have been used to power water electrolysis at 12.3% efficiency.
Scintillators
Cerium-doped lutetium aluminum perovskite (LuAP:Ce) single crystals were reported. The main property of these crystals is a large mass density of 8.4 g/cm3, which gives a short X- and gamma-ray absorption length. The scintillation light yield and the decay time with a 137Cs radiation source are 11,400 photons/MeV and 17 ns, respectively. Those properties made LuAP:Ce scintillators attractive commercially, and they were used quite often in high-energy physics experiments. Eleven years later, a group in Japan proposed Ruddlesden–Popper solution-based hybrid organic-inorganic perovskite crystals as low-cost scintillators, although their properties were not as impressive as those of LuAP:Ce. Nine years after that, solution-based hybrid organic-inorganic perovskite crystals became popular again through a report of their high light yields of more than 100,000 photons/MeV at cryogenic temperatures. A recent demonstration of perovskite nanocrystal scintillators for X-ray imaging screens has triggered further research on perovskite scintillators. Layered Ruddlesden–Popper perovskites have shown potential as fast novel scintillators with room-temperature light yields up to 40,000 photons/MeV, fast decay times below 5 ns and negligible afterglow. In addition, this class of materials has shown capability for wide-range particle detection, including alpha particles and thermal neutrons.
Examples of perovskites
Simple:
Strontium titanate
Calcium titanate
Lead titanate
Bismuth ferrite
Lanthanum ytterbium oxide
Silicate perovskite
Lanthanum manganite
Yttrium aluminum perovskite (YAP)
Lutetium aluminum perovskite (LuAP)
Solid solutions:
Lanthanum strontium manganite
LSAT (lanthanum aluminate – strontium aluminum tantalate)
Lead scandium tantalate
Lead zirconate titanate
Methylammonium lead halide
Methylammonium tin halide
Formamidinium tin halide
See also
Antiperovskite
Aurivillius phases
Diamond anvil
Goldschmidt tolerance factor
Ruddlesden-Popper phase
Spinel
References
Further reading
External links
(includes a Java applet with which the structure can be interactively rotated)
Perovskite in the Catalogue of Minerals (in Russian)
Mineralogy
Solar power
Crystal structure types
Crystallography
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 4,216 | [
"Applied and interdisciplinary physics",
"Crystal structure types",
"Materials science",
"Crystallography",
"Condensed matter physics",
"nan"
] |
184,308 | https://en.wikipedia.org/wiki/Titanic%20acid | Titanic acid is a general name for a family of chemical compounds of the elements titanium, hydrogen, and oxygen, with the general formula . Various simple titanic acids have been claimed, mainly in the older literature. No crystallographic and little spectroscopic support exists for these materials. Some older literature refers to the hydrated dioxide as titanic acid, and the dioxide forms an unstable hydrate when TiCl4 hydrolyzes.
Metatitanic acid (H2TiO3),
Orthotitanic acid (H4TiO4) or Ti(OH)4. It is described as a white salt-like powder under "".
Peroxotitanic acid () has also been described as resulting from the treatment of titanium dioxide in sulfuric acid with hydrogen peroxide. The resulting yellow solid decomposes with loss of O2.
Pertitanic acid ()
Pertitanic acid ()
References
Further reading
Titanium(IV) compounds
Hydroxides
Transition metal oxoacids
"Chemistry"
] | 208 | [
"Inorganic compounds",
"Bases (chemistry)",
"Hydroxides",
"Inorganic compound stubs"
] |
184,324 | https://en.wikipedia.org/wiki/Lepidolite | Lepidolite is a lilac-gray or rose-colored member of the mica group of minerals with chemical formula K(Li,Al)3(Si,Al)4O10(F,OH)2. It is the most abundant lithium-bearing mineral and is a secondary source of this metal. It is the major source of the alkali metal rubidium.
Lepidolite is found with other lithium-bearing minerals, such as spodumene, in pegmatite bodies. It has also been found in high-temperature quartz veins, greisens and granite.
Description
Lepidolite is a phyllosilicate mineral and a member of the polylithionite-trilithionite series. Lepidolite is part of a three-part series consisting of polylithionite, lepidolite, and trilithionite. All three minerals share similar properties, which arise from varying ratios of lithium and aluminium in their chemical formulas. The Li:Al ratio varies from 2:1 in polylithionite down to 1.5:1.5 in trilithionite.
Lepidolite is found naturally in a variety of colors, mainly pink, purple, and red, but also gray and, rarely, yellow and colorless. Because lepidolite is a lithium-bearing mica, it is often wrongly assumed that lithium is what causes the pink hues that are so characteristic of this mineral. Instead, it is trace amounts of manganese that cause the pink, purple, and red colors.
Structure and composition
Lepidolite belongs to the group of trioctahedral micas, with a structure resembling biotite. This structure is sometimes described as TOT-c. The crystal consists of stacked TOT layers weakly bound together by potassium ions (c). Each TOT layer consists of two outer T (tetrahedral) sheets in which silicon or aluminium ions each bind with four oxygen atoms, which in turn bind to other aluminium and silicon to form the sheet structure. The inner O (octahedral) sheet contains iron or magnesium ions each bonded to six oxygen, fluoride, or hydroxide ions. In biotite, silicon occupies three out of every four tetrahedral sites in the crystal and aluminium occupies the remaining tetrahedral sites, while magnesium or iron fill all the available octahedral sites.
Lepidolite shares this structure, but aluminium and lithium substitute for magnesium and iron in the octahedral sites. If nearly equal quantities of aluminium and lithium occupy the octahedral sites, the resulting mineral is trilithionite, K(Li1.5Al1.5)(AlSi3O10)(F,OH)2. If lithium occupies two out of three octahedral sites and aluminium the remaining octahedral site, then charge balance can be preserved only if silicon occupies all the tetrahedral sites. The result is polylithionite, KLi2Al(Si4O10)(F,OH)2. Lepidolite has a composition intermediate between these end members.
Fluoride ions can substitute for some of the hydroxide in the structure, while sodium, rubidium, or caesium may substitute in small quantities for potassium.
Occurrences
Lepidolite is associated with other lithium-bearing minerals like spodumene in pegmatite bodies. It is the major source of the alkali metal rubidium. In 1861, Robert Bunsen and Gustav Kirchhoff processed 150 kg of lepidolite to yield a few grams of rubidium salts for analysis, and thereby discovered the new element rubidium.
It occurs in granite pegmatites, in some high-temperature quartz veins, greisens and granites. Associated minerals include quartz, feldspar, spodumene, amblygonite, tourmaline, columbite, cassiterite, topaz and beryl.
Notable occurrences include Brazil; Ural Mountains, Russia; California and the Black Hills, United States; Tanco Mine, Bernic Lake, Manitoba, Canada; and Madagascar.
References
Phyllosilicates
Lithium minerals
Potassium minerals
Aluminium minerals
Monoclinic minerals
Minerals in space group 8
Minerals in space group 12
Mica group
Luminescent minerals
Gemstones
Rubidium compounds | Lepidolite | [
"Physics",
"Chemistry"
] | 835 | [
"Luminescence",
"Luminescent minerals",
"Materials",
"Gemstones",
"Matter"
] |
184,325 | https://en.wikipedia.org/wiki/Spodumene | Spodumene is a pyroxene mineral consisting of lithium aluminium inosilicate, LiAl(SiO3)2, and is a commercially important source of lithium. It occurs as colorless to yellowish, purplish, or lilac kunzite (see below), yellowish-green or emerald-green hiddenite, prismatic crystals, often of great size. Single crystals of 14.3 m (47 ft) in size are reported from the Black Hills of South Dakota, United States.
The naturally-occurring low-temperature form α-spodumene is in the monoclinic system, and the high-temperature β-spodumene crystallizes in the tetragonal system. α-spodumene converts to β-spodumene at temperatures above 900 °C. Crystals are typically heavily striated parallel to the principal axis. Crystal faces are often etched and pitted with triangular markings.
Discovery and occurrence
Spodumene was first described in 1800 for an occurrence in the type locality in Utö, Södermanland, Sweden. It was discovered by Brazilian naturalist José Bonifácio de Andrada e Silva. The name is derived from the Greek spodumenos (σποδούμενος), meaning "burnt to ashes", owing to the opaque ash-grey appearance of material refined for use in industry.
Spodumene occurs in lithium-rich granite pegmatites and aplites. Associated minerals include: quartz, albite, petalite, eucryptite, lepidolite and beryl.
Transparent material has long been used as a gemstone with varieties kunzite and hiddenite noted for their strong pleochroism. Source localities include Democratic Republic of Congo, Afghanistan, Australia, Brazil, Madagascar (see mining), Pakistan, Québec in Canada, and North Carolina and California in the U.S.
Since 2018, the Democratic Republic of Congo (DRC) has been known to have the largest lithium spodumene hard-rock deposit in the world, with mining operations occurring in the central DRC territory of Manono, Tanganyika Province. As of 2021, the Australian company AVZ Minerals is developing the Manono Lithium and Tin project, with a resource of 400 million tonnes of high-grade, low-impurity spodumene hard rock at 1.65% lithium oxide (Li2O), based on studies and drilling of Roche Dure, one of several pegmatites in the deposit.
Economic importance
Spodumene is an important source of lithium, for use in ceramics, mobile phones and batteries (including for automotive applications), medicine, Pyroceram and as a fluxing agent. As of 2019, around half of lithium is extracted from mineral ores, which mainly consist of spodumene. Lithium is recovered from spodumene by dissolution in acid, or extraction with other reagents, after roasting to convert it to the more reactive β-spodumene. The advantage of spodumene as a lithium source compared to brine sources is the higher lithium concentration, but at a higher extraction cost.
In 2016, the price was forecast to be $500–600/ton for years to come. However, price spiked above $800 in January 2018, and production increased more than consumption, reducing the price to $400 in September 2020.
World production of lithium via spodumene was around 80,000 metric tonnes per annum in 2018, primarily from the Greenbushes pegmatite of Western Australia and from some Chinese and Chilean sources. The Talison Minerals mine in Greenbushes, Western Australia (involving Tianqi Lithium, Albemarle Corporation and Global Advanced Metals), is reported to be the world's second largest and to have the highest grade of ore at 2.4% Li2O (2012 figures).
In 2020, Australia expanded spodumene mining to become the leading lithium producing country in the world.
An important economic concentrate of spodumene, known as spodumene concentrate 6 or SC6, is a high-purity lithium ore with approximately 6 percent lithium oxide (Li2O) content, produced as a raw material for the subsequent production of lithium-ion batteries for electric vehicles.
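The "6" in SC6 refers to the lithium oxide (Li2O) grade, so conversion to elemental lithium uses the mass ratio 2·M(Li)/M(Li2O) ≈ 0.46. A minimal sketch:

    # Convert a spodumene concentrate grade from % Li2O to % elemental Li
    M_LI, M_O = 6.94, 16.00                     # molar masses, g/mol
    LI2O_TO_LI = 2 * M_LI / (2 * M_LI + M_O)    # ~0.464

    grade_li2o = 6.0                            # SC6: ~6% Li2O by weight
    print(f"{grade_li2o}% Li2O = {grade_li2o * LI2O_TO_LI:.1f}% Li")
    # ~2.8% elemental lithium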
Refining
Extraction of lithium from spodumene, often spodumene concentrate 6 (SC6), is challenging due to the tight binding of lithium in the crystal structure.
Traditional lithium refining in the 2010s involves acid leaching of lithium-containing ores, precipitation of impurities, concentration of the lithium solution, and then conversion to lithium carbonate or lithium hydroxide. These refining methods result in significant quantities of caustic waste effluent and tailings, which are usually either highly acidic or alkali.
Another processing method relies on pyrometallurgical processing of SC6—roasting at high temperatures exceeding about 1,000 °C to convert the spodumene from the tightly-bound alpha structure to a more open beta structure from which the lithium is more easily extracted—then cooling and reacting with various reagents in a sequence of hydrometallurgical processing steps. Some offer the use of non-caustic reagents and result in reduced waste streams, potentially allowing the use of a closed-loop refining process.
Suitable extraction reagents include alkali metal sulfates, such as sodium sulfate; sodium carbonate; chlorine; or hydrofluoric acid. A common form of more highly refined lithium is lithium hydroxide, commonly used as an input in the battery industry to manufacture lithium-ion (Li-ion) battery cathode material.
Gemstone varieties
Hiddenite
Hiddenite is a pale, emerald-green gem variety first reported from Alexander County, North Carolina, U.S. It was named in honor of William Earl Hidden (16 February 1853 – 12 June 1918), mining engineer, mineral collector, and mineral dealer.
This emerald-green variety of spodumene is colored by chromium, just as for emeralds. Some green spodumene is colored with substances other than chromium; such stones tend to have a lighter color; they are not true hiddenite.
Kunzite
Kunzite is a purple-colored gemstone, a variety of spodumene, with the color coming from minor to trace amounts of manganese. Exposure to sunlight can fade its color.
Kunzite was discovered in 1902, and was named after George Frederick Kunz, Tiffany & Co's chief jeweler at the time, and a noted mineralogist. It has been found in Brazil, the U.S., Canada, CIS, Mexico, Sweden, Western Australia, Afghanistan and Pakistan.
Triphane
Triphane is the name used for yellowish varieties of spodumene.
See also
List of minerals
Notes
References
Kunz, George Frederick (1892). Gems and Precious Stones of North America. New York: The Scientific Publishing Company.
Palache, C., Davidson, S. C., and Goranson, E. A. (1930). "The Hiddenite deposit in Alexander County, N. Carolina". American Mineralogist Vol. 15 No. 8 p. 280
Webster, R. (2000). Gems: Their Sources, Descriptions and Identification (5th ed.), pp. 186–190. Great Britain: Butterworth-Heinemann.
The key players in Quebec lithium, "Daily News", The Northern Miner, 11 August 2010.
External links
Aluminium minerals
Gemstones
Inosilicates
Lithium minerals
Monoclinic minerals
Minerals in space group 15
Pyroxene group | Spodumene | [
"Physics"
] | 1,553 | [
"Materials",
"Gemstones",
"Matter"
] |
1,060,986 | https://en.wikipedia.org/wiki/Thomas%20J.%20Goreau | Thomas J. Goreau (Tom Goreau; born 1950 in Jamaica) is a biogeochemist and marine biologist. He is the son of two other renowned marine biologists, Thomas F. Goreau and Nora I. Goreau.
Education
After studying in Jamaican primary and secondary schools, he received an undergraduate degree in planetary physics from the Massachusetts Institute of Technology (BS, 1970). He went on to earn a Master of Science in planetary astronomy from the California Institute of Technology (1972) and a Ph.D. in biogeochemistry from Harvard University (1981).
Career
With his parents, he researched the coral reefs of Jamaica and continues to conduct research on the impacts of global climate change, pollution, and new diseases in reefs across the Caribbean, Indian Ocean, and Pacific. His current work focuses on coral reef restoration, fisheries restoration, shoreline protection, renewable energy, community-based coral reef management, mariculture, soil metabolism, soil carbon, and stabilization of global carbon dioxide. He was formerly Senior Scientific Affairs Officer at the United Nations Centre for Science and Technology for Development. He is currently President of the Global Coral Reef Alliance and Director of Remineralize The Earth.
See also
Biorock
Remineralize The Earth
References
External links
https://web.archive.org/web/20120318112219/http://oneworldgroup.org/2010/02/03/coral-reef-expert-thomas-goreau-talked-to-oneclimate-at-cop15-in-copenhagen/ Interview filmed at COP15 in Copenhagen by OneClimate
https://web.archive.org/web/20040818211124/http://www2.abc.net.au/science/coral/goreau.htm
https://web.archive.org/web/20041009192616/http://globalcoral.org/Goreau%20Bio.htm
https://www.pewtrusts.org/en/projects/marine-fellows/fellows-directory/1994/thomas-goreau
American marine biologists
California Institute of Technology alumni
Jamaican people of German descent
Harvard University alumni
Living people
Biogeochemists
Massachusetts Institute of Technology School of Science alumni
Year of birth missing (living people)
Jamaican people of Panamanian descent | Thomas J. Goreau | [
"Chemistry"
] | 486 | [
"Geochemists",
"Biogeochemistry",
"Biogeochemists"
] |
1,061,019 | https://en.wikipedia.org/wiki/Point-blank%20range | Point-blank range is any distance over which a certain firearm or gun can hit a target without the need to elevate the barrel to compensate for bullet drop, i.e. the gun can be pointed horizontally at the target. For targets beyond point-blank range, the shooter must point the barrel at a position above the target, and firearms designed for long-range firefights usually have adjustable sights to help the shooter hit targets beyond point-blank range. The maximum point-blank range of a firearm depends on a variety of factors, such as muzzle velocity and the size of the target.
In popular usage, point-blank range has come to mean extremely close range with a firearm, yet not close enough to be a contact shot.
History
The term point-blank dates to the 1570s and is probably of French origin, deriving from the French de pointe en blanc, "pointed at white". It is thought the word blanc may be used to describe a small white aiming spot formerly at the center of shooting targets. However, since none of the early sources mention a white center target, blanc may refer to empty space or zero point of elevation when testing range.
The term originated with the techniques used to aim muzzle-loading cannon. Their barrels tapered from breech to muzzle, so that when the top of the cannon was held horizontal, its bore actually sat at an elevated angle. This caused the projectile to rise above the natural line of sight shortly after leaving the muzzle, then drop below it after the apex of its slightly parabolic trajectory was reached.
By repeatedly firing a given projectile with the same charge, the point where the shot fell below the bottom of the bore could be measured. This distance was considered the point-blank range: any target within it required the gun to be depressed; any beyond it required elevation, up to the angle of greatest range at somewhat before 45 degrees.
Various cannon of the 19th century had point-blank ranges from (12 lb howitzer, powder charge) to nearly (30 lb carronade, solid shot, powder charge).
Small arms
Maximum point-blank range
Small arms are often sighted in so that their sight line and bullet path are within a certain acceptable margin out to the longest possible range, called the maximum point-blank range. Maximum point-blank range is principally a function of a cartridge's external ballistics and target size: high-velocity rounds have long point-blank ranges, while slow rounds have much shorter point-blank ranges. Target size determines how far above and below the line of sight a projectile's trajectory may deviate. Other considerations include sight height and acceptable drop before a shot is ineffective.
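The dependence on muzzle velocity and target size can be made concrete with the simplest possible model. Ignoring air resistance, the bullet's height relative to the line of sight is y(x) = x·tanθ − gx²/(2v²); choosing the launch angle so the apex just reaches the allowed deviation r above the sight line, the range at which the bullet has dropped r below it works out to (2 + √2)·v·√(r/g). The sketch below uses arbitrary illustrative numbers, and real-world drag shortens such figures considerably:

    from math import sqrt

    G = 9.81  # gravitational acceleration, m/s^2

    def max_point_blank_range(v, r):
        """Drag-free maximum point-blank range in metres.

        v -- muzzle velocity in m/s
        r -- allowed deviation above/below the line of sight in metres
        """
        return (2 + sqrt(2)) * v * sqrt(r / G)

    # Example: 850 m/s muzzle velocity and a 30 cm vital zone (r = 0.15 m)
    print(f"{max_point_blank_range(850, 0.15):.0f} m")  # ~360 m in vacuum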
Hunting
A large target, like the vitals area of a deer, allows a deviation of a few inches (as much as 10 cm) while still ensuring a quickly disabling hit. Vermin such as prairie dogs require a much smaller deviation, less than an inch (about 2 cm). The height of the sights has two effects on point blank range. If the sights are lower than the allowable deviation, then point blank range starts at the muzzle, and any difference between the sight height and the allowable deviation is lost distance that could have been in point blank range. Higher sights, up to the maximum allowable deviation, push the maximum point blank range farther from the gun. Sights that are higher than the maximum allowable deviation push the start of the point blank range farther out from the muzzle; this is common with varmint rifles, where close shots are only sometimes made, as it places the point blank range out to the expected range of the usual targets.
Military
Known also as "battle zero", maximum point-blank range is crucial in the military. Soldiers are instructed to fire at any target within this range by simply placing their weapon's sights on the center of mass of the enemy target. Any errors in range estimation are effectively irrelevant, as a well-aimed shot will hit the torso of the enemy soldier. No height correction is needed at or within the battle-zero distance, although a shot aimed this way can still result in a headshot or even a complete miss. The belt buckle is used as the battle-zero point of aim in Russian and former Soviet military doctrine.
The first mass-produced assault rifle, the World War II StG 44, and its preceding prototypes had iron sight lines elevated over the bore axis to extend point-blank range. The current trend for elevated sights and flatter shooting higher-velocity cartridges in assault rifles is in part due to a desire to further extend the maximum point-blank range, which makes the rifle easier to use. Raising the sight line over the bore axis, introduces an inherent parallax problem as the projectile path crosses the horizontal sighting plane twice. The point closest to the gun occurs while the bullet is climbing through the line of sight and is called the near zero. The second point occurs as the projectile is descending through the line of sight and is called the far zero. At closer ranges under the near zero range (typically inside ), the shooter must aim high to place shots where desired.
See also
Table of handgun and rifle cartridges
Notes
References
Nosworthy, Brent. Battle Tactics of Napoleon and His Enemies. Constable and Co. Ltd, 1995.
External links
Tables for Cannon & Artillery Projectiles used in the American Civil War (includes point blank ranges).
Ballistics
Firearm terminology | Point-blank range | [
"Physics"
] | 1,087 | [
"Applied and interdisciplinary physics",
"Ballistics"
] |
1,061,151 | https://en.wikipedia.org/wiki/Trans-Planckian%20problem | In black hole physics and inflationary cosmology, the trans-Planckian problem is the problem of the appearance of quantities beyond the Planck scale, which raise doubts on the physical validity of some results in these two areas, since one expects the physical laws to suffer radical modifications beyond the Planck scale.
In black hole physics, the original derivation of Hawking radiation involved field modes that, near the black hole horizon, have arbitrarily high frequencies—in particular, higher than the inverse Planck time, although these do not appear in the final results. A number of different alternative derivations have been proposed in order to overcome this problem.
The trans-Planckian problem can be conveniently considered in the framework of sonic black holes, condensed matter systems which can be described in a similar way as real black holes. In these systems, the analogue of the Planck scale is the interatomic scale, where the continuum description loses its validity. One can study whether in these systems the analogous process to Hawking radiation still occurs despite the short-scale cutoff represented by the interatomic distance.
The trans-Planckian problem also appears in inflationary cosmology. The cosmological scales that we nowadays observe correspond to length scales smaller than the Planck length at the onset of inflation.
Trans-Planckian problem in Hawking radiation
The trans-Planckian problem is the issue that Hawking's original calculation includes quantum particles where the wavelength becomes shorter than the Planck length near the black hole's horizon. This is due to the peculiar behavior there, where time stops as measured from far away. A particle emitted from a black hole with a finite frequency, if traced back to the horizon, must have had an infinite frequency, and therefore a trans-Planckian wavelength.
The Unruh effect and the Hawking effect both talk about field modes in the superficially stationary spacetime that change frequency relative to other coordinates which are regular across the horizon. This is necessarily so, since to stay outside a horizon requires acceleration which constantly Doppler shifts the modes.
An outgoing Hawking radiated photon, if the mode is traced back in time, has a frequency which diverges from that which it has at great distance, as it gets closer to the horizon, which requires the wavelength of the photon to "scrunch up" infinitely at the horizon of the black hole. In a maximally extended external Schwarzschild solution, that photon's frequency stays regular only if the mode is extended back into the past region where no observer can go. That region seems to be unobservable and is physically suspect, so Hawking used a black hole solution without a past region which forms at a finite time in the past. In that case, the source of all the outgoing photons can be identified: a microscopic point right at the moment that the black hole first formed.
The quantum fluctuations at that tiny point, in Hawking's original calculation, contain all the outgoing radiation. The modes that eventually contain the outgoing radiation at long times are redshifted by such a huge amount by their long sojourn next to the event horizon, that they start off as modes with a wavelength much shorter than the Planck length. Since the laws of physics at such short distances are unknown, some find Hawking's original calculation unconvincing.
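The size of the divergence can be illustrated with the static-observer blueshift factor of the Schwarzschild geometry, ν_local = ν_∞ / √(1 − r_s/r). Tracing a typical Hawking quantum (ν_∞ ≈ k_B·T_H/h) back toward the horizon and asking where its locally measured frequency would exceed the inverse Planck time gives a heuristic order-of-magnitude estimate (textbook formulas only, not Hawking's actual derivation):

    from math import pi, sqrt

    # Approximate SI constants
    G, c, hbar, k_B = 6.674e-11, 2.998e8, 1.055e-34, 1.381e-23
    h = 2 * pi * hbar

    M = 1.989e30                                 # one solar mass, kg
    r_s = 2 * G * M / c**2                       # Schwarzschild radius, ~3 km
    T_H = hbar * c**3 / (8 * pi * G * M * k_B)   # Hawking temperature, ~6e-8 K
    nu_inf = k_B * T_H / h                       # typical frequency at infinity (order of magnitude)

    nu_planck = sqrt(c**5 / (hbar * G))          # inverse Planck time, ~1.9e43 Hz
    blueshift = nu_planck / nu_inf               # required factor 1/sqrt(1 - r_s/r)
    delta_r = r_s / (blueshift**2 - 1)           # height above the horizon where it is reached
    l_planck = sqrt(hbar * G / c**3)             # Planck length, ~1.6e-35 m

    print(f"blueshift needed: {blueshift:.1e}")
    print(f"r - r_s ~ {delta_r:.1e} m vs Planck length {l_planck:.1e} m")
    # The mode must originate ~1e-77 m above the horizon, dozens of orders of
    # magnitude below the Planck length: the trans-Planckian problem.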
The trans-Planckian problem is nowadays mostly considered a mathematical artifact of horizon calculations. The same effect occurs for regular matter falling onto a white hole solution. Matter which falls on the white hole accumulates on it, but has no future region into which it can go. Tracing the future of this matter, it is compressed onto the final singular endpoint of the white hole evolution, into a trans-Planckian region. The reason for these types of divergences is that modes which end at the horizon from the point of view of outside coordinates are singular in frequency there. The only way to determine what happens classically is to extend in some other coordinates that cross the horizon.
There exist alternative physical pictures which give the Hawking radiation in which the trans-Planckian problem is addressed. The key point is that similar trans-Planckian problems occur when the modes occupied with Unruh radiation are traced back in time. In the Unruh effect, the magnitude of the temperature can be calculated from ordinary Minkowski field theory, and is not controversial.
Notes
Quantum gravity
Black holes
Inflation (cosmology) | Trans-Planckian problem | [
"Physics",
"Astronomy"
] | 898 | [
"Black holes",
"Physical phenomena",
"Physical quantities",
"Unsolved problems in physics",
"Astrophysics",
"Quantum gravity",
"Density",
"Stellar phenomena",
"Astronomical objects",
"Physics beyond the Standard Model"
] |
1,061,330 | https://en.wikipedia.org/wiki/Zytglogge | The Zytglogge (Bernese German for "time bell") is a landmark medieval tower in Bern, Switzerland. Built in the early 13th century, it has served the city as a guard tower, prison, clock tower, centre of urban life and civic memorial.
Despite the many redecorations and renovations it has undergone in its 800 years of existence, the Zytglogge is one of Bern's most recognisable symbols and the oldest monument of the city, and with its 15th-century astronomical clock, a major tourist attraction. It is a heritage site of national significance, and part of the Old City of Bern, a UNESCO World Heritage Site.
History
When it was built around 1218–1220, the Zytglogge served as the gate tower of Bern's western fortifications. These were erected after the city's first westward expansion following its de facto independence from the Empire. At that time, the Zytglogge was a squat building of only in height. When the rapid growth of the city and the further expansion of the fortifications (up to the Käfigturm) relegated the tower to second-line status at around 1270–1275, it was heightened by to overlook the surrounding houses.
Only after the city's western defences were extended again in 1344–1346 up to the now-destroyed Christoffelturm, the Zytglogge was converted to a women's prison, notably housing Pfaffendirnen – "priests' whores", women convicted of sexual relations with clerics. At this time, the Zytglogge also received its first slanted roof.
In the great fire of 1405, the tower burnt out completely. It suffered severe structural damage that was not fully repaired until the last restoration in 1983. The prison cells were abandoned, and a clock was first installed above the gate in the early 15th century, probably including a simple astronomical clock and musical mechanism. This clock, together with the great bell cast in 1405, gave the Zytglogge its name, which in Bernese German means "time bell".
In the late 15th century, the Zytglogge and the other Bernese gate towers were extended and decorated after the Burgundian Romantic fashion. The Zytglogge received a new lantern (including the metal bellman visible today), four decorative corner towerlets, heraldic decorations and probably its stair tower. The astronomical clock was extended to its current state. In 1527–30, the clockwork was completely rebuilt by Kaspar Brunner, and the gateway was overarched to provide a secure foundation for the heavy machinery.
The Zytglogge's exterior was repainted by Gotthard Ringgli and Kaspar Haldenstein in 1607–1610, who introduced the large clock faces that now dominate the east and west façades of the tower. The corner towerlets were removed again some time before 1603. In 1770–1771, the Zytglogge was renovated by Niklaus Hebler and Ludwig Emanuel Zehnder, who refurbished the structure in order to suit the tastes of the late Baroque, giving the tower its contemporary outline.
Both façades were again repainted in the Rococo style by Rudolf von Steiger in 1890. The idealising historicism of the design came to be disliked in the 20th century, and a 1929 competition produced the façade designs visible today: on the west façade, Victor Surbek's fresco "Beginning of Time" and on the east façade, a reconstruction of the 1770 design by Kurt Indermühle. In 1981–1983, the Zytglogge was thoroughly renovated again and generally restored to its 1770 appearance. In the Advent season and from Easter until the end of October, it is illuminated after dusk.
Name
The Bernese German Zytglogge translates to Zeitglocke in Standard German and to time bell in English; 'Glocke' is German for 'bell', as in the related term 'glockenspiel'. A "time bell" was one of the earliest public timekeeping devices, consisting of a clockwork connected to a hammer that rang a small bell at the full hour. Such a device was installed in the Wendelstein in Bern – the tower of the Leutkirche church which the Münster later replaced – in 1383 at the latest; it alerted the bell-ringer to ring the tower bells.
The name of Zytglogge was first recorded in 1413. Previously, the tower was referred to as the kebie ("cage", i.e., prison) and after its post-1405 reconstruction, the nüwer turm ("new tower").
Exterior
External structure
The Zytglogge has an overall height of , and a height of up to the roof-edge. Its rectangular floor plan measures . The wall thicknesses vary widely, ranging from in the west, where the tower formed part of the city walls, to in the east.
The outward appearance of the Zytglogge is determined by the 1770 renovation. Only the late Gothic cornice below the roof and the stair tower are visible artifacts of the tower's earlier history.
The main body of the tower is divided into the two-storey plinth, whose exterior is made of alpine limestone, and the three-storey tower shaft sheathed in sandstone. The shaft's seemingly massive corner blocks are decorative fixtures held in place by visible iron hooks. Below the roof, the cornice spans around the still-visible bases of the former corner towerlets. The two-story attic is covered by the sweeping, red-tiled, late Gothic spire, in which two spire lights are set to the West and East. They are crowned by ornamental urns with pinecone knobs reconstructed in 1983 from 18th-century drawings.
From atop the spire, the wooden pinnacle, copper-sheathed since 1930, rises an additional into the skies, crowned with a gilded knob and a weather vane displaying a cut-out coat of arms of Bern.
Bells and bell-striker
The tower's two namesake bronze bells hang in the cupola at its very top.
The great hour bell, cast by Johann Reber, has remained unchanged since the tower's reconstruction in 1405. It has a diameter of , a weight of and rings with a nominal tone of e′. The inscription on the bell reads, in Latin: "In the October month of the year 1405 I was cast by Master John called Reber of Aarau. I am vessel and wax, and to all I tell the hours of the day."
When the great bell rings out every full hour, struck by a large clockwork-operated hammer, passers-by see a gilded figure in full harness moving its arm to strike it. The larger-than-life figure of bearded Chronos, the Greek personification of time, is traditionally nicknamed Hans von Thann by the Bernese. The wooden bell-striker, which has been replaced several times, has been a fixture of the Zytglogge since the renewal of the astronomical clock in 1530, whose clockwork also controls the figure's motions. The original wooden Chronos might have been created by master craftsman Albrecht von Nürnberg, while the current and most recent Hans is a 1930 reconstruction of a Baroque original. The bell-striker has been gilded, just like the bells, since 1770.
Below the hour bell hangs the smaller quarter-hour bell, also rung by a clockwork hammer. It was cast in 1887 to replace the cracked 1486 original.
Clock faces and façade decorations
Both principal façades, East and West, are dominated by large clockfaces. The Zytglogge's first clockface was likely located on the plinth, but was moved up to the center of the shaft during the tower's 15th-century reconfiguration.
The eastern clock face features an outer ring of large golden Roman numerals, on which the larger hand indicates the hour, and an inner ring on which the smaller hand indicates the minutes. The golden Sun on the hour hand is pivot-mounted so that it always faces up.
The western clock face has similar hands, but is an integral part of Victor Surbek's 1929 fresco "Beginning of Time". The painting depicts Chronos swooping down with cape fluttering, and, below the clockface, Adam and Eve's eviction from Paradise by an angel.
Astronomical clock
The dial of the Zytglogge's astronomical clock is built in the form of an astrolabe. It is backed by a stereographically projected planisphere divided into three zones: the black night sky, the deep blue zone of dawn and the light blue day sky. The skies are crisscrossed with the golden lines of the horizon, dawn, the tropics and the temporal hours, which divide the time of daylight into twelve hours whose length varies with the time of year.
Around the planisphere moves the rete, a web-like metal cutout representing the zodiac, which also features a Julian calendar dial. Above the rete, a display indicates the day of the week. Because leap days are not supported by the clockwork, the calendar hand has to be reset manually each leap year on 29 February. A moon dial circles the inner ring of the zodiac, displaying the moon phase. The principal hand of the clock indicates the time of day on the outer ring of 24 golden Roman numerals, which run twice from I to XII. It features two suns, the smaller one indicating the date on the rete's calendar dial. The larger one circles the zodiac at one revolution per year and also rotates across the planisphere once per day. Its crossing of the horizon and dawn lines twice per day allows the timing of sunrise, dawn, dusk and sunset.
The painted frieze above the astronomical clock shows five deities from classical antiquity, each representing both a day of the week and a planet in their order according to Ptolemaic cosmology. From left to right, they are: Saturn with sickle and club for Saturday, Jupiter with thunderbolts for Thursday, Mars with sword and shield for Tuesday, Venus with Cupid for Friday and Mercury with staff and bag for Wednesday.
Movement
The clock dial has been dated to either the building phases of 1405 or 1467–83, or to the installation of the Brunner clockwork in 1527–30. Ueli Bellwald notes that the planisphere uses a southern projection, as was characteristic for 15th-century astronomical clocks; all later such clocks use a northern projection. This would seem to confirm the dating of the clock to the 1405 or 1467/83 renovations.
A clock is documented in this tower since 1405, when a new bell was installed.
Interior
The Zytglogge's internal layout has changed over time to reflect the tower's change of purpose from guard tower to city prison to clock tower. The thirteenth-century guard tower was not much more than a hollow shell of walls that was open towards the city in the east. Only in the fourteenth century was a layer of four storeys inserted.
The rooms above the clockwork mechanism were used by the city administration for various purposes up until the late 20th century, including as archives, storerooms, a firehose magazine and even an air raid shelter. The interior was frequently remodelled in a careless, even vandalistic fashion; for instance, all but three of the original wooden beams supporting the intermediate floors were destroyed. Since 1979, the tower's interior has been empty again and is only accessible in the course of guided tours.
References
Bibliography
External links
Zeitglockenturm Bern Official Website in English and German
Zytglogge Bern Flash 3D Panorama
Tourist information by Bern Tourism
Source texts relating to the Zytglogge on www.g26.ch
Daily audio/video timelapse of the Zytglogge
Buildings and structures completed in 1220
Old City (Bern)
Astronomical clocks in Switzerland
Horology
Monuments and memorials in Switzerland
Tourist attractions in Bern
Cultural property of national significance in the canton of Bern
Buildings and structures in Bern
Clock towers in Switzerland | Zytglogge | [
"Physics"
] | 2,535 | [
"Spacetime",
"Horology",
"Physical quantities",
"Time"
] |
1,061,356 | https://en.wikipedia.org/wiki/First%20Department | The First Department () was in charge of secrecy and political security of the workplace of every enterprise or institution of the Soviet Union that dealt with any kind of technical or scientific information (plants, R&D institutions, etc.) or had printing capabilities (e.g., publishing houses).
Every branch of the Central Statistical Administration and its successor the State Statistics Committee (Goskomstat) also had a First Department to control access, distribution, and publication of official economic, population, and social statistics. Copies of especially sensitive documents were numbered and labeled or stamped as secret or "For official use only". In some cases, the "official use" version of documents mimicked the public use versions in format but provided much more detailed information.
The First Departments were part of the KGB and were not subordinate to the management of the enterprise or institution. Among their functions was control of access to information considered a state secret, of foreign travel, and of publications. The First Department also kept account of the usage of copying devices (photocopiers, printing presses, typewriters, etc.) to prevent unsanctioned copying, including samizdat.
See also
Censorship
Spetskhran
References
Soviet internal politics
Soviet phraseology
Data security
KGB | First Department | [
"Engineering"
] | 254 | [
"Cybersecurity engineering",
"Data security"
] |
1,061,554 | https://en.wikipedia.org/wiki/Dystrophic%20calcification | Dystrophic calcification (DC) is the calcification occurring in degenerated or necrotic tissue, as in hyalinized scars, degenerated foci in leiomyomas, and caseous nodules. This occurs as a reaction to tissue damage, including as a consequence of medical device implantation. Dystrophic calcification can occur even if the amount of calcium in the blood is not elevated, in contrast to metastatic calcification, which is a consequence of a systemic mineral imbalance, including hypercalcemia and/or hyperphosphatemia, that leads to calcium deposition in healthy tissues. In dystrophic calcification, basophilic calcium salt deposits aggregate, first in the mitochondria, then progressively throughout the cell. These calcifications are an indication of previous microscopic cell injury, occurring in areas of cell necrosis when activated phosphatases bind calcium ions to phospholipids in the membrane.
Calcification in dead tissue
Caseous necrosis in tuberculosis is the most common site of dystrophic calcification.
Liquefactive necrosis in chronic abscesses may become calcified.
Fat necrosis following acute pancreatitis or traumatic fat necrosis in the breast results in deposition of calcium soaps.
Infarcts may undergo dystrophic calcification.
Thrombi, especially in veins, may produce phleboliths.
Haematomas in the vicinity of bones may undergo dystrophic calcification.
Dead parasites, such as Schistosoma eggs, may calcify.
Congenital toxoplasmosis, CMV or rubella may be seen on X-ray as calcifications in the brain.
Calcification in degenerated tissue
Dense scars may undergo hyaline degeneration and calcification.
Atheroma in aorta and coronaries frequently undergo calcification.
Cysts can show calcification.
Calcinosis cutis is a condition in which there are irregular nodular deposits of calcium salts in skin and subcutaneous tissue.
Senile degenerative changes may be accompanied by calcification.
The inherited disorder pseudoxanthoma elasticum may lead to angioid streaks with calcification of Bruch's membrane, the elastic tissue below the retinal ring.
See also
Calcinosis
Monckeberg's arteriosclerosis
References
Histopathology | Dystrophic calcification | [
"Chemistry"
] | 507 | [
"Histopathology",
"Microscopy"
] |
1,061,573 | https://en.wikipedia.org/wiki/Neurotrophin | Neurotrophins are a family of proteins that induce the survival, development, and function of neurons.
They belong to a class of growth factors, secreted proteins that can signal particular cells to survive, differentiate, or grow. Growth factors such as neurotrophins that promote the survival of neurons are known as neurotrophic factors. Neurotrophic factors are secreted by target tissue and act by preventing the associated neuron from initiating programmed cell death – allowing the neurons to survive. Neurotrophins also induce differentiation of progenitor cells, to form neurons.
Although the vast majority of neurons in the mammalian brain are formed prenatally, parts of the adult brain (for example, the hippocampus) retain the ability to grow new neurons from neural stem cells, a process known as neurogenesis. Neurotrophins are chemicals that help to stimulate and control neurogenesis.
Terminology
According to the United States National Library of Medicine's medical subject headings, the term neurotrophin may be used as a synonym for neurotrophic factor, but the term neurotrophin is more generally reserved for four structurally related factors: nerve growth factor (NGF), brain-derived neurotrophic factor (BDNF), neurotrophin-3 (NT-3), and neurotrophin-4 (NT-4). The term neurotrophic factor generally refers to these four neurotrophins, the GDNF family of ligands, and ciliary neurotrophic factor (CNTF), among other biomolecules. Neurotrophin-6 and neurotrophin-7 also exist, but are only found in zebrafish.
Function
During the development of the vertebrate nervous system, many neurons become redundant (because they have died, failed to connect to target cells, etc.) and are eliminated. At the same time, developing neurons send out axon outgrowths that contact their target cells. Such cells control their degree of innervation (the number of axon connections) by the secretion of various specific neurotrophic factors that are essential for neuron survival. One of these is nerve growth factor (NGF or beta-NGF), a vertebrate protein that stimulates division and differentiation of sympathetic and embryonic sensory neurons. NGF is mostly found outside the central nervous system (CNS), but slight traces have been detected in adult CNS tissues, although a physiological role for this is unknown. It has also been found in several snake venoms.
In peripheral and central neurons, neurotrophins are important regulators of the survival, differentiation, and maintenance of nerve cells. They are small proteins secreted into the nervous system that help keep nerve cells alive. There are two distinct classes of glycosylated receptors that can bind neurotrophins: p75 (NTR), which binds all neurotrophins, and the subtypes of Trk, which are each specific for different neurotrophins. A 2.6 Å-resolution crystal structure has been reported of neurotrophin-3 (NT-3) complexed to the ectodomain of glycosylated p75 (NTR), forming a symmetrical complex.
Receptors
There are two classes of receptors for neurotrophins: p75 and the "Trk" family of tyrosine kinase receptors.
Types
Nerve growth factor
Nerve growth factor (NGF), the prototypical growth factor, is a protein secreted by a neuron's target cell. NGF is critical for the survival and maintenance of sympathetic and sensory neurons. NGF is released from the target cells, binds to and activates its high affinity receptor TrkA on the neuron, and is internalized into the responsive neuron. The NGF/TrkA complex is subsequently trafficked back to the neuron's cell body. This movement of NGF from axon tip to soma is thought to be involved in the long-distance signaling of neurons.
Brain-derived neurotrophic factor
Brain-derived neurotrophic factor (BDNF) is a neurotrophic factor found originally in the brain, but also found in the periphery. To be specific, it is a protein that has activity on certain neurons of the central nervous system and the peripheral nervous system; it helps to support the survival of existing neurons, and encourage the growth and differentiation of new neurons and synapses through axonal and dendritic sprouting. In the brain, it is active in the hippocampus, cortex, cerebellum, and basal forebrain – areas vital to learning, memory, and higher thinking. BDNF was the second neurotrophic factor to be characterized, after NGF and before neurotrophin-3.
BDNF is one of the most active substances to stimulate neurogenesis. Mice born without the ability to make BDNF suffer developmental defects in the brain and sensory nervous system, and usually die soon after birth, suggesting that BDNF plays an important role in normal neural development.
Despite its name, BDNF is actually found in a range of tissue and cell types, not just the brain. Expression can be seen in the retina, the CNS, motor neurons, the kidneys, and the prostate. Exercise has been shown to increase the amount of BDNF and therefore serve as a vehicle for neuroplasticity.
Neurotrophin-3
Neurotrophin-3, or NT-3, is a neurotrophic factor, in the NGF-family of neurotrophins. It is a protein growth factor that has activity on certain neurons of the peripheral and central nervous system; it helps to support the survival and differentiation of existing neurons, and encourages the growth and differentiation of new neurons and synapses. NT-3 is the third neurotrophic factor to be characterized, after NGF and BDNF.
NT-3 is unique among the neurotrophins in the number of neurons it has potential to stimulate, given its ability to activate two of the receptor tyrosine kinase neurotrophin receptors (TrkC and TrkB). Mice born without the ability to make NT-3 have loss of proprioceptive and subsets of mechanoreceptive sensory neurons.
Neurotrophin-4
Neurotrophin-4 (NT-4) is a neurotrophic factor that signals predominantly through the TrkB receptor tyrosine kinase. It is also known as NT4, NT5, NTF4, and NT-4/5.
DHEA and DHEA sulfate
The endogenous steroids dehydroepiandrosterone (DHEA) and its sulfate ester, DHEA sulfate (DHEA-S), have been identified as small-molecule agonists of the TrkA and p75NTR with high affinity (around 5 nM), and hence as so-called "microneurotrophins". DHEA has also been found to bind to the TrkB and TrkC, though while it activated the TrkC, it was unable to activate the TrkB. It has been proposed that DHEA may have been the ancestral ligand of the Trk receptors early on in the evolution of the nervous system, eventually being superseded by the polypeptide neurotrophins.
Role in programmed cell death
During neuron development neurotrophins play a key role in growth, differentiation, and survival. They also play an important role in the apoptotic programmed cell death (PCD) of neurons. Neurotrophic survival signals in neurons are mediated by the high-affinity binding of neurotrophins to their respective Trk receptor. In turn, a majority of neuronal apoptotic signals are mediated by neurotrophins binding to the p75NTR. The PCD which occurs during brain development is responsible for the loss of a majority of neuroblasts and differentiating neurons. It is necessary because during development there is a massive over production of neurons which must be killed off to attain optimal function.
In the development of both the peripheral nervous system (PNS) and the central nervous system (CNS) the p75NTR-neurotrophin binding activates multiple intracellular pathways which are important in regulating apoptosis. Proneurotrophins (proNTs) are neurotrophins which are released as biologically active uncleaved pro-peptides. Unlike mature neurotrophins which bind to the p75NTR with a low affinity, proNTs preferentially bind to the p75NTR with high affinity. The p75NTR contains a death domain on its cytoplasmic tail which when cleaved activates an apoptotic pathway. The binding of a proNT (proNGF or proBDNF) to p75NTR and its sortilin co-receptor (which binds the pro-domain of proNTs) causes a p75NTR-dependent signal transduction cascade. The cleaved death domain of p75NTR activates c-Jun N-terminal kinase (JNK). The activated JNK translocates into the nucleus, where it phosphorylates and transactivates c-Jun. The transactivation of c-Jun results in the transcription of the pro-apoptotic factors TNF-α, Fas-L and Bak. The importance of sortilin in p75NTR-mediated apoptosis is exhibited by the fact that the inhibition of sortilin expression in neurons expressing p75NTR suppresses proNGF-mediated apoptosis, and the prevention of proBDNF binding to p75NTR and sortilin abolished apoptotic action. Activation of p75NTR-mediated apoptosis is much more effective in the absence of Trk receptors due to the fact that activated Trk receptors suppress the JNK cascade.
The expression of TrkA or TrkC receptors in the absence of neurotrophins can lead to apoptosis, but the mechanism is poorly understood. The addition of NGF (for TrkA) or NT-3 (for TrkC) prevents this apoptosis. For this reason TrkA and TrkC are referred to as dependence receptors, because whether they induce apoptosis or survival is dependent on the presence of neurotrophins. The expression of TrkB, which is found mainly in the CNS, does not cause apoptosis. This is thought to be because it is differentially located in the cell membrane while TrkA and TrkC are co-localized with p75NTR in lipid rafts.
In the PNS (where NGF, NT-3 and NT-4 are mainly secreted) cell fate is determined by a single growth factor (i.e. neurotrophins). However, in the CNS (where BDNF is mainly secreted in the spinal cord, substantia nigra, amygdala, hypothalamus, cerebellum, hippocampus and cortex) more factors determine cell fate, including neural activity and neurotransmitter input. Neurotrophins in the CNS have also been shown to play a more important role in neural cell differentiation and function rather than survival. For these reasons, compared to neurons in the PNS, neurons of the CNS are less sensitive to the absence of a single neurotrophin or neurotrophin receptor during development; with the exception being neurons in the thalamus and substantia nigra.
Gene knockout experiments were conducted to identify the neuronal populations in both the PNS and CNS that were affected by the loss of different neurotrophins during development and the extent to which these populations were affected. These knockout experiments resulted in the loss of several neuron populations including the retina, cholinergic brainstem and the spinal cord. It was found that NGF-knockout mice had losses of a majority of their dorsal root ganglia (DRG), trigeminal ganglia and superior cervical ganglia. The viability of these mice was poor. The BDNF-knockout mice had losses of a majority of their vestibular ganglia and moderate losses of their DRG, trigeminal ganglia, nodose petrosal ganglia and cochlear ganglia. In addition they also had minor losses of their facial motoneurons located in the CNS. The viability of these mice was moderate. The NT-4-knockout mice had moderate losses of their nodose petrosal ganglia and minor losses of their DRG, trigeminal ganglia and vestibular ganglia. The NT-4-knockout mice also had minor losses of facial motoneurons. These mice were very viable. The NT-3 knockout mice had losses of a majority of their DRG, trigeminal ganglia, cochlear ganglia and superior cervical ganglia and moderate losses of nodose petrosal ganglia and vestibular ganglia. In addition the NT-3-knockout mice had moderate losses of spinal motoneurons. These mice had very poor viability. These results show that the absence of different neurotrophins results in losses of different neuron populations (mainly in the PNS). Furthermore, the absence of the neurotrophin survival signal leads to apoptosis.
See also
Neurotrophic electrode
Neuron
Programmed cell death
References
External links
DevBio.com – 'Neurotrophin Receptors: The neurotrophin family consists of four members: nerve growth factor (NGF), brain derived neurotrophic factor (BDNF), neurotrophin 3 (NT-3), and neurotrophin 4 (NT-4)' (April 4, 2003)
Dr.Koop.com – 'New Clues to Neurological Diseases Discovered: Findings could lead to new treatments, two studies suggest', Steven Reinberg, HealthDay (July 5, 2006)
Helsinki.fi – 'Neurotrophic factors'
– Neurotrophin-3 image
Neurochemistry
Neurotrophins
Programmed cell death
Single-pass transmembrane proteins | Neurotrophin | [
"Chemistry",
"Biology"
] | 3,051 | [
"Signal transduction",
"Senescence",
"Biochemistry",
"Neurotrophic factors",
"Neurochemistry",
"Programmed cell death"
] |
1,061,639 | https://en.wikipedia.org/wiki/Salt%20Lake%20Tabernacle | The Salt Lake Tabernacle, formerly known as the Mormon Tabernacle, is located on Temple Square in Salt Lake City, in the U.S. state of Utah. The Tabernacle was built from 1863 to 1875 to house meetings for the Church of Jesus Christ of Latter-day Saints (LDS Church). It was the location of the church's semi-annual general conference until the meeting was moved to the new and larger LDS Conference Center in 2000. Now a historic building on Temple Square, the Salt Lake Tabernacle is still used for overflow crowds during general conference. It is renowned for its remarkable acoustics and iconic pipe organ. The Tabernacle Choir has performed there for over 100 years.
Background
The Salt Lake Tabernacle was inspired by an attempt to build a Canvas Tabernacle in Nauvoo, Illinois, in the 1840s. That tabernacle was to be situated just to the west of the Nauvoo Temple and was to be oval-shaped, much the same as the Salt Lake Tabernacle. However, the Nauvoo edifice (never built) was to have amphitheater-style or terraced seating, and was to have canvas roofing.
Construction
The Tabernacle was built between 1864 and 1867 on the west center-line axis of the Salt Lake Temple. In 1892 it was the largest assembly hall in the United States. The roof was constructed in the lattice-truss arch system, which was devised by Ithiel Town and is held together by dowels and wedges. The building has a sandstone foundation, and the dome is supported by forty-four sandstone piers. Prior to its refurbishing in 2007, the overall seating capacity of the building was around 7,000, which included the choir area and gallery (balcony).
Henry Grow, a civil engineer, oversaw the initial construction of the Tabernacle, where the domed roof was the most innovative portion of the building. Brigham Young, church president at the time, wanted the Tabernacle roof constructed in an elongated dome shape with no interior pillars or posts to obstruct the view for the audience (the gallery was added later). When Young asked Grow how large a roof he could construct using the style of lattice that he had used on the Remington bridge, Grow replied that it could be "100 feet wide and as long as is wanted." Eventually, Grow engineered the Tabernacle roof exterior to be 150 feet across, 250 feet long, and 80 feet high. Skeptics insisted that when the interior scaffolding was removed, the whole roof would collapse. The roof structure was nine feet thick, formed by a "Remington lattice truss" of timbers pinned together with wooden pegs. Green rawhide was wrapped around the timbers so that when the rawhide dried it tightened its grip on the pegs. When the roof's structural work was completed, sheeting was applied on the roof, which was then covered with shingles. The interior was lathed and then plastered; the hair of cattle was mixed with the plaster to give it strength.
Construction of the Tabernacle began on July 26, 1864, but construction of the roof did not begin until 1865, when all 44 supporting sandstone piers designed by William H. Folsom were in place. Grow rapidly built the roof structure from the center out, but encountered difficulty engineering the semicircular ends of the roof. This difficulty dragged structural work on the roof into the fall of 1866 even as other parts of the roof were being shingled. However, Grow finished and shingled the entire roof by the spring of 1867, before the interior of the building was finished. The Tabernacle was first used for the October 1867 conference. The roof has lasted for over a century without any structural problems, though the shingles were replaced with aluminum in 1947.
The original benches and columns supporting the balcony were made from the native "white pine" (Engelmann spruce) that the Mormon pioneers found in the area. Because they wanted to "give their best to the Lord", they hand-painted grain on the benches to look like oak and the pillars to resemble marble. During the renovations completed in 2007, the original benches were replaced with new oak pews, and legroom was increased from 9 to 14 inches, causing an overall loss of capacity of 1,000 seats.
The Salt Lake Tabernacle organ has its case positioned at the west end above the choir seats, and is the focal point of the Tabernacle's interior. The original organ was made by Joseph H. Ridges in 1867 and contained 700 pipes. The organ has been rebuilt several times with the total pipe count being 11,623, making the Tabernacle organ one of the largest pipe organs in the world. The current organ is the work of G. Donald Harrison of the Aeolian-Skinner organ company, and was completed in 1948. The organ was renovated and restored in 1989 with a few minor changes and additions. The largest 32-foot display pipes in the façade are made of wood and were constructed in the same manner as the balcony columns.
Architecture
The structure was an architectural wonder in its day, prompting a writer for Scientific American to comment on "the mechanical difficulties attending the construction of so ponderous a roof." In 1882, while on a lecture tour of America, Oscar Wilde noted that the building had the appearance of a soup kettle; he added that it was the most purely dreadful building he ever saw. Some visitors around the beginning of the 20th century criticized it as "a prodigious tortoise that has lost its way" or "the Church of the Holy Turtle," but Frank Lloyd Wright dubbed the tabernacle "one of the architectural masterpieces of the country and perhaps the world."
Art and statuary
For several years the Tabernacle had various pieces of art on its walls. There was a mural depicting Joseph Smith receiving the Gold plates from an angel. There was also a portrait of Joseph Smith that hung between the organ's largest pipes. In 1875 there was a fountain placed in the middle of the building, "it represented the 'living water' offered by Christ and His gospel." In the same year, a statue of an angel sounding a trumpet was placed between the Organ's two largest pipes. The highest pulpit was lined with Lion statues to represent Brigham Young as the "Lion of the Lord."
Decorations
After its initial construction the structure had little paint so decorations were often used. Some examples include a blue banner with a gold beehive on the rear wall (1875), a star with the word Utah, a banner that read "Under the everlasting covenant God must and shall be glorified," American flags (one of the largest measured 75 ft by 160 ft), bunting, garland, festoons, and flowers. Some people complained because the decorations were left up for extended periods of time.
Purpose
The Tabernacle was the location of the church's semi-annual general conference for 132 years. Because of the growth in the number of attendees, general conference was moved to the new and larger Conference Center in 2000. In the October 1999 General Conference, church president Gordon B. Hinckley gave a talk honoring the Tabernacle and introducing the new Conference Center. The building is still used for overflow crowds during general conference.
The Tabernacle is the home of The Tabernacle Choir at Temple Square, and was the previous home of the Utah Symphony Orchestra until the construction of Abravanel Hall. It is the historic broadcasting home for the radio and television program known as Music and the Spoken Word.
Acoustics
Built at a time before electronics and audio amplifiers, the Tabernacle was constructed with remarkable acoustic qualities so the entire congregation could hear sermons given there. The roof was constructed in a three-dimensional ellipse with the pulpit at one focus of the ellipse. The elliptical concept came from church president Brigham Young, who reportedly said that the design was inspired by "the best sounding board in the world ... the roof of my mouth." The elliptical design causes a large portion of the sound from the pulpit end of the building to be concentrated and projected to the focus at the opposite end of the building.
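As a back-of-the-envelope illustration of the focal geometry (treating the quoted roof footprint as an exact ellipse, which the actual dome only approximates, so the numbers are illustrative), the foci implied by the dimensions given above can be computed directly:

```python
import math

a = 250 / 2   # semi-major axis in feet (roof length 250 ft)
b = 150 / 2   # semi-minor axis in feet (roof width 150 ft)

c = math.sqrt(a**2 - b**2)   # distance of each focus from the center
print(c)        # 100.0 ft from the center,
print(a - c)    # i.e. 25.0 ft in from each end of the hall
```

The reflection property of the ellipse sends sound leaving one focus toward the other, which is why a speaker near the pulpit-end focus can be heard clearly at the far focus.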
Several years after the initial construction was completed, Truman O. Angell was brought in to further improve the building's acoustics, and was responsible for adding the gallery (balcony) in 1870 that resolved the outstanding acoustical issues. The building has an international reputation as one of the most acoustically perfect buildings in the world. A pin drop can be heard from one side of the building to another over 250 feet away.
Refurbishing
The Tabernacle was closed from January 2005 to March 2007 for seismic retrofitting and extensive renovations. The baptistry, which was located in the lower portion at the rear of the Tabernacle, was removed as part of the renovation. New gold leafing was applied to the visible organ pipes, the ceiling was repaired and repainted, new dressing rooms and a music library for choir members were created, three recording studios were built underneath the main floor, the rostrum was remodeled to accommodate a secondary seating arrangement or a stage for performances, and all plumbing was replaced. The building was reopened in March 2007, and rededicated for use on March 31, 2007. An opening gala concert with the Tabernacle Choir was held on April 6–7, 2007.
As part of the renovation, all 44 piers that support the Tabernacle's roof were reinforced with steel bars, which were inserted into the piers from the bottom. The foundation of each pier was also reinforced with concrete. Steel boxes were used to connect trusses, and were also attached to the piers, clinched tight with structural steel.
Notable speakers and guests
Twelve presidents of the United States have spoken from the Tabernacle pulpit: Theodore Roosevelt (1903), William Howard Taft (1909 and 1911), Woodrow Wilson (1919), Warren G. Harding (1923), Franklin D. Roosevelt (1932, then Governor of New York), Herbert Hoover (1932), Harry S. Truman (1948), Dwight D. Eisenhower (1952), John F. Kennedy (1963), Lyndon B. Johnson (1964), Richard Nixon (1970), and Jimmy Carter (1978).
Other notable people who have spoken in the Tabernacle include Susan B. Anthony (1895), Charles Lindbergh (1927), and Helen Keller (1941). Along with praising the decision to allow women equal voting rights in Utah Territory, Anthony praised the Tabernacle itself: "It is just about twenty-four years ago that I was present in this great Tabernacle on the day upon which you dedicated it to the service of the Lord, and every nook and corner, of this great building was packed on the occasion with people from every part of the Territory, many being unable to gain admittance. It was the most magnificent gathering I ever saw."
In 1980, James Stewart guest-conducted the Tabernacle Choir in the building as part of the filming of Mr. Krueger's Christmas, a television special broadcast on NBC.
Tourism
Initially tourists were discouraged from visiting the Tabernacle because it was considered as holy as the Temple or Endowment House. There were even some debates among church leadership in the late 19th century about disciplining members of the church who led tours, but tours were led through 1900.
Currently, tours are offered to anyone who would like one. It is common for LDS Church missionary tour guides to demonstrate the acoustic properties of the Tabernacle by dropping a pin on the pulpit or tearing a newspaper there, which can be heard throughout the building.
See also
List of concert halls
Salt Lake Assembly Hall
Salt Lake Temple
List of pipe organs
Notes
References
http://www.waymarking.com/waymarks/WMF1X2_Salt_Lake_Tabernacle_Salt_Lake_City_Utah
External links
Official Site of the Salt Lake Tabernacle
Official Site of The Tabernacle Choir at Temple Square
"Tabernacle on Temple Square" from Utah.com
Seismic Retrofitting of the Tabernacle
House of Saints, a documentary on the Salt Lake Tabernacle from BYU Television
Salt Lake Tabernacle page on templesquare.com
Salt Lake Tabernacle Virtual Tour
1867 establishments in Utah Territory
19th-century Latter Day Saint church buildings
Historic Civil Engineering Landmarks
Religious buildings and structures in Salt Lake City
Churches completed in 1867
Tabernacles (LDS Church) in Utah
Temple Square
Historic American Buildings Survey in Utah | Salt Lake Tabernacle | [
"Engineering"
] | 2,576 | [
"Civil engineering",
"Historic Civil Engineering Landmarks"
] |
1,061,675 | https://en.wikipedia.org/wiki/JBIG | JBIG is an early lossless image compression standard from the Joint Bi-level Image Experts Group, standardized as ISO/IEC standard 11544 and as ITU-T recommendation T.82 in March 1993. It is widely implemented in fax machines. Now that the newer bi-level image compression standard JBIG2 has been released, JBIG is also known as JBIG1. JBIG was designed for compression of binary images, particularly for faxes, but can also be used on other images. In most situations JBIG offers between a 20% and 50% increase in compression efficiency over Fax Group 4 compression, and in some situations, it offers a 30-fold improvement.
JBIG is based on a form of arithmetic coding developed by IBM (known as the Q-coder) that also uses a relatively minor refinement developed by Mitsubishi, resulting in what became known as the QM-coder. It bases the probability estimates for each encoded bit on the values of the previous bits and the values in previous lines of the picture. JBIG also supports progressive transmission, which generally incurs a small overhead in bit rate (around 5%).
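As a rough illustration of the context-modelling idea (this is not the actual JBIG template, which uses ten neighbouring pixels, nor the QM-coder itself; the template and function names here are illustrative), the following sketch estimates the ideal arithmetic-coding cost of a bi-level image from an adaptive per-context probability model:

```python
import numpy as np

def jbig_like_rate(img):
    """Estimate bits/pixel for a 0/1 image using a small causal context
    template and adaptive per-context probabilities (toy JBIG-1 analogue)."""
    H, W = img.shape
    # Causal template: neighbours already coded in raster order.
    template = [(-1, -1), (-1, 0), (-1, 1), (0, -1)]
    counts = np.ones((2 ** len(template), 2))  # Laplace-smoothed bit counts
    bits = 0.0
    for r in range(H):
        for c in range(W):
            ctx = 0
            for dr, dc in template:
                rr, cc = r + dr, c + dc
                bit = int(img[rr, cc]) if 0 <= rr < H and 0 <= cc < W else 0
                ctx = (ctx << 1) | bit
            p1 = counts[ctx, 1] / counts[ctx].sum()
            p = p1 if img[r, c] else 1.0 - p1
            bits += -np.log2(p)               # ideal arithmetic-coder cost
            counts[ctx, int(img[r, c])] += 1  # adapt the model
    return bits / img.size
```

On a mostly blank page the estimated rate falls far below 1 bit per pixel, which is the effect the context model and QM-coder exploit; the real standard adds progressive resolution layers and further shortcuts on top.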
Patents
Doubts about patent licence requirements for JBIG1 implementations by IBM, Mitsubishi and AT&T prevented the codec from being widely implemented in open-source software. For example, as of 2012, none of the commonly used web browsers supported it. Since 2012, there are now no more JBIG1 patents in force – the last ones to expire were Mitsubishi's patents in Canada and Australia (on 25 February 2011) and in the United States (on 4 April 2012).
See also
ISO/IEC JTC 1/SC 29
JBIG2
References
External links
JBIG-KIT – a free C implementation of the JBIG encoder and decoder
ISO/IEC 11544
ITU-T Recommendations T.82, T.85
– Content Feature Schema for Internet Fax (V2)
– File Format for Internet Fax
Lossless compression algorithms
Lossy compression algorithms
Graphics file formats
T.82
ISO standards
IEC standards
Image compression | JBIG | [
"Technology"
] | 443 | [
"Computer standards",
"IEC standards"
] |
1,061,822 | https://en.wikipedia.org/wiki/Humberto%20Fern%C3%A1ndez-Mor%C3%A1n | Humberto Fernández-Morán Villalobos (18 February 192417 March 1999) was a Venezuelan research scientist born in Maracaibo, Venezuela, known for inventing the diamond knife or scalpel, significantly advancing the development of electromagnetic lenses for electron microscopy based on superconducting technology, and many other scientific contributions.
Career
Fernández-Morán founded the Venezuelan Institute for Neurological and Brain Studies, the predecessor of the current Venezuelan Institute for Scientific Research (IVIC). He studied medicine at the University of Munich, where he graduated summa cum laude in 1944. He contributed to the development of the electron microscope and was the first person to use the concept of cryo-ultramicrotomy. After flying over Angel Falls in his home country of Venezuela, he was inspired by the smooth, continuously recurring flow of a waterfall to combine his diamond knife invention with an ultramicrotome, dramatically improving the ultra-thin sectioning of electron microscopy samples. The ultramicrotome advances the rotating, drum-mounted specimen past the stationary diamond knife in increments so small (exploiting the very low thermal expansion coefficient of Invar) that sectioning thicknesses of a few ångström units are possible. He also helped to advance the field of electron cryomicroscopy - the use of superconductive electromagnetic lenses cooled with liquid helium in electron microscopes to achieve the highest resolution possible - among many other research topics.
Fernández-Morán was commissioned in 1957 with the supervision of the first Venezuelan research nuclear reactor, the RV-1 nuclear reactor, one of the first in Latin America.
He was appointed Minister of Education during the last year of the regime of Marcos Pérez Jiménez and was forced to leave Venezuela when the dictatorship was overthrown in 1958. He worked with NASA for the Apollo Project and taught in many universities, such as MIT, University of Chicago and the University of Stockholm.
He donated a collection of his papers to the National Library of Medicine in 1986.
Personal life
His wife Anna was Swedish and together they had two daughters, Brigida Elena and Verónica.
The body of Humberto Fernández-Morán was cremated and his ashes rest today in Cemetery The Square Luxburg-Carolath in his hometown, Maracaibo.
Inventions
Diamond knife
Ultra microtome
Awards and honors
1967, the John Scott Award, for his invention of the diamond scalpel.
Knight of the Order of the Polar Star
Claude Bernard Medal, University of Montreal
Cambridge annual Medical Prize
See also
List of Venezuelans
References
External links
The Patent of the Diamond Scalpel - September 1955.
Research done for NASA by Fernández Morán
1924 births
1999 deaths
People from Maracaibo
Venezuelan inventors
20th-century Venezuelan scientists
Microscopists
Academic staff of the Central University of Venezuela
Knights of the Order of the Polar Star
20th-century inventors
Venezuelan emigrants to Sweden
Education ministers of Venezuela
Marcos Pérez Jiménez ministers | Humberto Fernández-Morán | [
"Chemistry"
] | 585 | [
"Microscopists",
"Microscopy"
] |
1,062,000 | https://en.wikipedia.org/wiki/Triangle%20of%20Life | The Triangle of Life is an unsubstantiated idea about how to survive a major earthquake, typically promoted via viral emails. The idea advocates methods of protection very different from the mainstream advice of "drop, cover, and hold on" method that is widely supported by reputable agencies.
In particular, the method's developer and key proponent, Doug Copp, recommends that at the onset of a major earthquake, building occupants should seek shelter near solid items that will provide a protective space, a void or space that could prevent injury or permit survival in the event of a major structural failure, a "pancake collapse", and specifically advises against sheltering under tables.
Officials of many agencies, including the American Red Cross and the United States Geological Survey, have criticized the "Triangle of Life" idea, saying that it is a "misguided idea" and inappropriate for countries with modern building construction standards where total building collapse is unlikely.
Purpose
Copp's idea is focused on situations when a building completely collapses, falling straight down, rather than the far more common situations, when side-to-side shaking causes falling objects (such as trees, chimneys, furniture, and objects on shelves) to land on top of people. According to his idea, if the building collapses, then the weight of the ceilings falling upon the objects or furniture inside tends to crush them, but the height of the object that remains acts as a kind of roof beam over the space or void next to it, which will tend to end up with a sloping roof over it. Copp terms this space for survival the "triangle of life". The larger and stronger the object, the less it will compact; the less it compacts, the larger the void next to it will be. Such triangles are the most common shape to be found in a collapsed building.
Criticisms
According to the United States Geological Survey (USGS), the Triangle of Life is a misguided idea about the best location a person should try to occupy during an earthquake. Critics have argued that it is actually very difficult to know where these triangles will be formed, as objects (including large, heavy objects) often move around during earthquakes. It is also argued that this movement means that lying beside heavy objects is very dangerous. Statistical studies of earthquake deaths show most injuries and deaths occur due to falling objects, not structures.
A person is more likely to be injured while trying to move during an earthquake than by immediately taking shelter beside furniture or near an interior wall. Doorways are not recommended shelter, as they are often not structural. Different architectural standards in different countries mean that the best strategy for earthquake survival could also be different; however, for the United States, "Drop, Cover, and Hold On" is recommended.
An Iranian peer-reviewed article analyzed and compared both methods in detail, considering their application, the extent of people who are under the coverage, simplicity in transferring concepts, and the probability of reducing casualties and damage in developing countries such as Iran. It argued that "Drop, Cover and Hold on" was useful advice for people who experience smaller earthquakes without total building collapse, which is the vast majority of earthquake survivors. It found that the "Triangle of life" theoretically could be a better strategy during larger earthquakes in buildings with a skeleton (wood or concrete) during a building pancake-type collapse, but acknowledged the possible problems of large objects shifting and crushing the person from horizontal movement, inability to predict which side of an object would create a survivable space, and that the triangle of life method is also difficult to teach and communicate. It concluded that the "Triangle of Life" could harm individuals who attempted to follow the advice in buildings that did not collapse. Neither strategy was useful for the majority of the population in rural Iran because of the mud-brick architecture which has no structure. Based on the simplicity of teaching and the fact that 12,000 times more people are affected by smaller earthquakes and injured, they concluded that "Drop, Cover and Hold On" is still regarded as a better option for people during an earthquake.
Testing
In 1996, Copp claims to have made a film to prove this methodology and to have recreated a model school and home, filling them with 20 mannequins. The buildings were collapsed by earthmoving equipment that knocked the supporting pillars out. Half the mannequins were in "Duck and Cover" positions and the others in Copp's "Triangle of Life" positions. When Copp and his crew re-entered the building after the blast, they calculated that there would have been no survivors among the mannequins in "Duck and Cover" positions, but 100% survival for those hiding in the triangles beside solid objects. Copp is categorical about the importance of this technique, saying "Everyone who simply ducks and covers when buildings collapse is crushed to death – every time without exception."
However, a critic of Copp has stated that this was a rescue exercise rather than an experiment. Additionally, the exercise did not simulate the lateral movement of earthquakes, instead causing a pancake collapse, which is more common in areas of extremely poor construction and rare in developed countries. The critic concluded that Copp's results are therefore misleading.
See also
Earthquake engineering
Earthquake preparedness
Earthquake early warning system
Seismic retrofit
References
External links
Triangle of Life item at Snopes fact-checking website
Expert recommendation against the Triangle of Life from the Statewide California Earthquake Center
Doug Copp's Wordpress blog
American Rescue Team International, organization run by Doug Copp
Survival skills
Earthquake and seismic risk mitigation
Disaster preparedness
Earthquakes | Triangle of Life | [
"Engineering"
] | 1,134 | [
"Structural engineering",
"Earthquake and seismic risk mitigation"
] |
1,062,015 | https://en.wikipedia.org/wiki/Associated%20Legendre%20polynomials | In mathematics, the associated Legendre polynomials are the canonical solutions of the general Legendre equation
or equivalently
where the indices ℓ and m (which are integers) are referred to as the degree and order of the associated Legendre polynomial respectively. This equation has nonzero solutions that are nonsingular on only if ℓ and m are integers with 0 ≤ m ≤ ℓ, or with trivially equivalent negative values. When in addition m is even, the function is a polynomial. When m is zero and ℓ integer, these functions are identical to the Legendre polynomials. In general, when ℓ and m are integers, the regular solutions are sometimes called "associated Legendre polynomials", even though they are not polynomials when m is odd. The fully general class of functions with arbitrary real or complex values of ℓ and m are Legendre functions. In that case the parameters are usually labelled with Greek letters.
The Legendre ordinary differential equation is frequently encountered in physics and other technical fields. In particular, it occurs when solving Laplace's equation (and related partial differential equations) in spherical coordinates. Associated Legendre polynomials play a vital role in the definition of spherical harmonics.
Definition for non-negative integer parameters ℓ and m

These functions are denoted $P_\ell^m(x)$, where the superscript indicates the order and not a power of P. Their most straightforward definition is in terms of derivatives of ordinary Legendre polynomials (m ≥ 0):

$P_\ell^m(x) = (-1)^m (1-x^2)^{m/2} \frac{d^m}{dx^m} P_\ell(x).$

The factor $(-1)^m$ in this formula is known as the Condon–Shortley phase. Some authors omit it. That the functions described by this equation satisfy the general Legendre differential equation with the indicated values of the parameters ℓ and m follows by differentiating m times the Legendre equation for $P_\ell$:

$(1-x^2) \frac{d^2}{dx^2} P_\ell(x) - 2x \frac{d}{dx} P_\ell(x) + \ell(\ell+1) P_\ell(x) = 0.$
Moreover, since by Rodrigues' formula,

$P_\ell(x) = \frac{1}{2^\ell \, \ell!} \frac{d^\ell}{dx^\ell} (x^2-1)^\ell,$

the $P_\ell^m$ can be expressed in the form

$P_\ell^m(x) = \frac{(-1)^m}{2^\ell \, \ell!} (1-x^2)^{m/2} \frac{d^{\ell+m}}{dx^{\ell+m}} (x^2-1)^\ell.$
This equation allows extension of the range of m to: −ℓ ≤ m ≤ ℓ. The definitions of $P_\ell^{\pm m}$, resulting from this expression by substitution of ±m, are proportional. Indeed, equate the coefficients of equal powers on the left and right hand side of

$\frac{d^{\ell-m}}{dx^{\ell-m}} (x^2-1)^\ell = c_{lm} (1-x^2)^m \frac{d^{\ell+m}}{dx^{\ell+m}} (x^2-1)^\ell,$

then it follows that the proportionality constant is

$c_{lm} = (-1)^m \frac{(\ell-m)!}{(\ell+m)!},$

so that

$P_\ell^{-m}(x) = (-1)^m \frac{(\ell-m)!}{(\ell+m)!} P_\ell^m(x).$
Alternative notations
The following alternative notation, without the Condon–Shortley phase, is also used in literature:

$P_{\ell m}(x) = (-1)^m P_\ell^m(x)$
Closed form

The associated Legendre polynomial can also be written as:

$P_\ell^m(x) = (-1)^m \cdot 2^\ell \cdot (1-x^2)^{m/2} \sum_{k=m}^{\ell} \frac{k!}{(k-m)!} \cdot x^{k-m} \cdot \binom{\ell}{k} \binom{\frac{\ell+k-1}{2}}{\ell},$

with simple monomials and the generalized form of the binomial coefficient.
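A minimal numerical spot-check of this closed form, assuming that SciPy's lpmv uses the same Condon–Shortley sign convention as above (scipy.special.binom handles the half-integer upper argument of the generalized binomial coefficient):

```python
import numpy as np
from math import comb, factorial
from scipy.special import binom, lpmv

def closed_form(l, m, x):
    """Closed-form sum for P_l^m(x), integers 0 <= m <= l, -1 < x < 1."""
    s = sum(factorial(k) // factorial(k - m) * x ** (k - m)
            * comb(l, k) * binom((l + k - 1) / 2, l)
            for k in range(m, l + 1))
    return (-1) ** m * 2 ** l * (1 - x ** 2) ** (m / 2) * s

x = 0.37
for l in range(5):
    for m in range(l + 1):
        assert np.isclose(closed_form(l, m, x), lpmv(m, l, x))
```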
Orthogonality
The associated Legendre polynomials are not mutually orthogonal in general. For example, $P_1^1$ is not orthogonal to $P_2^2$. However, some subsets are orthogonal. Assuming 0 ≤ m ≤ ℓ, they satisfy the orthogonality condition for fixed m:

$\int_{-1}^{1} P_k^m(x) \, P_\ell^m(x) \, dx = \frac{2(\ell+m)!}{(2\ell+1)(\ell-m)!} \, \delta_{k\ell},$

where $\delta_{k\ell}$ is the Kronecker delta.
Also, they satisfy the orthogonality condition for fixed ℓ:

$\int_{-1}^{1} \frac{P_\ell^m(x) \, P_\ell^n(x)}{1-x^2} \, dx = \begin{cases} 0 & \text{if } m \neq n \\ \frac{(\ell+m)!}{m(\ell-m)!} & \text{if } m = n \neq 0 \\ \infty & \text{if } m = n = 0 \end{cases}$
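A quick numerical verification of the fixed-m relation, again assuming SciPy's lpmv matches the sign convention used here:

```python
from math import factorial
from scipy.integrate import quad
from scipy.special import lpmv   # lpmv(m, l, x) evaluates P_l^m(x)

l, k, m = 5, 3, 2

cross, _ = quad(lambda x: lpmv(m, l, x) * lpmv(m, k, x), -1, 1)
diag, _ = quad(lambda x: lpmv(m, l, x) ** 2, -1, 1)
expected = 2 * factorial(l + m) / ((2 * l + 1) * factorial(l - m))

print(abs(cross) < 1e-12)           # True: k != l terms vanish
print(abs(diag - expected) < 1e-9)  # True: diagonal matches the formula
```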
Negative m and/or negative ℓ
The differential equation is clearly invariant under a change in sign of m.
The functions for negative m were shown above to be proportional to those of positive m:

$P_\ell^{-m}(x) = (-1)^m \frac{(\ell-m)!}{(\ell+m)!} P_\ell^m(x)$

(This followed from the Rodrigues' formula definition. This definition also makes the various recurrence formulas work for positive or negative m.)

The differential equation is also invariant under a change from ℓ to −ℓ − 1, and the functions for negative ℓ are defined by

$P_{-\ell}^m(x) = P_{\ell-1}^m(x), \qquad \ell = 1, 2, \ldots$
Parity
From their definition, one can verify that the associated Legendre functions are either even or odd according to

$P_\ell^m(-x) = (-1)^{\ell+m} \, P_\ell^m(x).$
The first few associated Legendre functions
The first few associated Legendre functions, including those for negative values of m, are:

$P_0^0(x) = 1$

$P_1^{-1}(x) = \tfrac{1}{2} (1-x^2)^{1/2}$

$P_1^0(x) = x$

$P_1^1(x) = -(1-x^2)^{1/2}$

$P_2^{-2}(x) = \tfrac{1}{8} (1-x^2)$

$P_2^{-1}(x) = \tfrac{1}{2} x (1-x^2)^{1/2}$

$P_2^0(x) = \tfrac{1}{2} (3x^2-1)$

$P_2^1(x) = -3x (1-x^2)^{1/2}$

$P_2^2(x) = 3 (1-x^2)$
Recurrence formula
These functions have a number of recurrence properties; among the most useful is the three-term recurrence in the degree:

$(\ell - m + 1) \, P_{\ell+1}^m(x) = (2\ell + 1) \, x \, P_\ell^m(x) - (\ell + m) \, P_{\ell-1}^m(x)$

Helpful identities (initial values for the first recursion):

$P_m^m(x) = (-1)^m \, (2m-1)!! \, (1-x^2)^{m/2}$

$P_{m+1}^m(x) = x \, (2m+1) \, P_m^m(x)$

with $(2m-1)!!$ the double factorial.
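These seed identities together with the three-term recurrence give the standard numerically stable way to evaluate $P_\ell^m$: seed with $P_m^m$, step to $P_{m+1}^m$, then recurse upward in degree. A minimal sketch (the cross-check assumes SciPy's lpmv uses the same Condon–Shortley convention as above):

```python
import numpy as np
from scipy.special import lpmv

def assoc_legendre(l, m, x):
    """P_l^m(x) for integers 0 <= m <= l, Condon-Shortley phase included."""
    x = np.asarray(x, dtype=float)
    # Seed: P_m^m(x) = (-1)^m (2m-1)!! (1 - x^2)^(m/2)
    pmm = np.ones_like(x)
    if m > 0:
        somx2 = np.sqrt((1.0 - x) * (1.0 + x))
        fact = 1.0
        for _ in range(m):
            pmm *= -fact * somx2
            fact += 2.0
    if l == m:
        return pmm
    # Next degree: P_{m+1}^m(x) = x (2m+1) P_m^m(x)
    pmmp1 = x * (2 * m + 1) * pmm
    # Upward in degree: (l-m) P_l^m = x (2l-1) P_{l-1}^m - (l+m-1) P_{l-2}^m
    for ll in range(m + 2, l + 1):
        pmm, pmmp1 = pmmp1, (x * (2 * ll - 1) * pmmp1 - (ll + m - 1) * pmm) / (ll - m)
    return pmmp1

x = np.linspace(-0.9, 0.9, 7)
assert np.allclose(assoc_legendre(4, 2, x), lpmv(2, 4, x))
```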
Gaunt's formula
The integral over the product of three associated Legendre polynomials (with orders matching as shown below) is a necessary ingredient when developing products of Legendre polynomials into a series linear in the Legendre polynomials. For instance, this turns out to be necessary when doing atomic calculations of the Hartree–Fock variety where matrix elements of the Coulomb operator are needed. For this we have Gaunt's formula
This formula is to be used under the following assumptions:
the degrees are non-negative integers
all three orders are non-negative integers
is the largest of the three orders
the orders sum up
the degrees obey
Other quantities appearing in the formula are defined as
The integral is zero unless
the sum of degrees is even so that is an integer
the triangular condition is satisfied
Dong and Lemus (2002) generalized the derivation of this formula to integrals over a product of an arbitrary number of associated Legendre polynomials.
Generalization via hypergeometric functions
These functions may actually be defined for general complex parameters and argument:

$P_\lambda^\mu(z) = \frac{1}{\Gamma(1-\mu)} \left[ \frac{1+z}{1-z} \right]^{\mu/2} \, {}_2F_1\!\left( -\lambda, \lambda+1; 1-\mu; \frac{1-z}{2} \right),$

where $\Gamma$ is the gamma function and ${}_2F_1$ is the hypergeometric function

${}_2F_1(\alpha, \beta; \gamma; z) = \sum_{n=0}^{\infty} \frac{(\alpha)_n \, (\beta)_n}{(\gamma)_n} \, \frac{z^n}{n!}.$
They are called the Legendre functions when defined in this more general way. They satisfy the same differential equation as before:
Since this is a second-order differential equation, it has a second, linearly independent solution, denoted $Q_\lambda^\mu(z)$ and known as the Legendre function of the second kind, and both obey the various recurrence formulas given previously.
Reparameterization in terms of angles
These functions are most useful when the argument is reparameterized in terms of angles, letting $x = \cos\theta$:

$P_\ell^m(\cos\theta) = (-1)^m \, (\sin\theta)^m \, \frac{d^m}{d(\cos\theta)^m} P_\ell(\cos\theta)$

Using the relation $(1 - x^2)^{1/2} = \sin\theta$, the list given above yields the first few polynomials, parameterized this way, as:

$P_0^0(\cos\theta) = 1$

$P_1^0(\cos\theta) = \cos\theta$

$P_1^1(\cos\theta) = -\sin\theta$

$P_2^0(\cos\theta) = \tfrac{1}{2} (3\cos^2\theta - 1)$

$P_2^1(\cos\theta) = -3 \cos\theta \sin\theta$

$P_2^2(\cos\theta) = 3 \sin^2\theta$
The orthogonality relations given above become in this formulation: for fixed m, $P_\ell^m(\cos\theta)$ are orthogonal, parameterized by θ over $[0, \pi]$, with weight $\sin\theta$:

$\int_0^\pi P_k^m(\cos\theta) \, P_\ell^m(\cos\theta) \, \sin\theta \, d\theta = \frac{2(\ell+m)!}{(2\ell+1)(\ell-m)!} \, \delta_{k\ell}$

Also, for fixed ℓ:

$\int_0^\pi \frac{P_\ell^m(\cos\theta) \, P_\ell^n(\cos\theta)}{\sin\theta} \, d\theta = \begin{cases} 0 & \text{if } m \neq n \\ \frac{(\ell+m)!}{m(\ell-m)!} & \text{if } m = n \neq 0 \\ \infty & \text{if } m = n = 0 \end{cases}$
In terms of θ, $P_\ell^m(\cos\theta)$ are solutions of

$\frac{d^2 y}{d\theta^2} + \cot\theta \, \frac{dy}{d\theta} + \left[ \lambda - \frac{m^2}{\sin^2\theta} \right] y = 0$

More precisely, given an integer m ≥ 0, the above equation has nonsingular solutions only when $\lambda = \ell(\ell+1)$ for ℓ an integer ≥ m, and those solutions are proportional to $P_\ell^m(\cos\theta)$.
Applications in physics: spherical harmonics
On many occasions in physics, associated Legendre polynomials in terms of angles occur where spherical symmetry is involved. The colatitude angle in spherical coordinates is the angle θ used above. The longitude angle, φ, appears in a multiplying factor. Together, they make a set of functions called spherical harmonics. These functions express the symmetry of the two-sphere under the action of the Lie group SO(3).
What makes these functions useful is that they are central to the solution of the equation

$\nabla^2 \psi + \lambda \psi = 0$

on the surface of a sphere. In spherical coordinates θ (colatitude) and φ (longitude), the Laplacian is

$\nabla^2 \psi = \frac{\partial^2 \psi}{\partial \theta^2} + \cot\theta \, \frac{\partial \psi}{\partial \theta} + \frac{1}{\sin^2\theta} \, \frac{\partial^2 \psi}{\partial \varphi^2}$

When the partial differential equation

$\frac{\partial^2 \psi}{\partial \theta^2} + \cot\theta \, \frac{\partial \psi}{\partial \theta} + \frac{1}{\sin^2\theta} \, \frac{\partial^2 \psi}{\partial \varphi^2} + \lambda \psi = 0$

is solved by the method of separation of variables, one gets a φ-dependent part $\sin m\varphi$ or $\cos m\varphi$ for integer m ≥ 0, and an equation for the θ-dependent part

$\frac{d^2 y}{d\theta^2} + \cot\theta \, \frac{dy}{d\theta} + \left[ \lambda - \frac{m^2}{\sin^2\theta} \right] y = 0,$

for which the solutions are $P_\ell^m(\cos\theta)$ with $\ell \geq m$ and $\lambda = \ell(\ell+1)$.

Therefore, the equation

$\nabla^2 \psi + \lambda \psi = 0$

has nonsingular separated solutions only when $\lambda = \ell(\ell+1)$, and those solutions are proportional to

$P_\ell^m(\cos\theta) \, \cos m\varphi, \qquad 0 \leq m \leq \ell,$

and

$P_\ell^m(\cos\theta) \, \sin m\varphi, \qquad 0 < m \leq \ell.$
For each choice of ℓ, there are 2ℓ + 1 functions for the various values of m and choices of sine and cosine. They are all orthogonal in both ℓ and m when integrated over the surface of the sphere.
The solutions are usually written in terms of complex exponentials:

$Y_{\ell m}(\theta, \varphi) = \sqrt{\frac{(2\ell+1)(\ell-m)!}{4\pi(\ell+m)!}} \, P_\ell^m(\cos\theta) \, e^{im\varphi}, \qquad -\ell \leq m \leq \ell.$

The functions $Y_{\ell m}(\theta, \varphi)$ are the spherical harmonics, and the quantity in the square root is a normalizing factor. Recalling the relation between the associated Legendre functions of positive and negative m, it is easily shown that the spherical harmonics satisfy the identity

$Y_{\ell m}^*(\theta, \varphi) = (-1)^m \, Y_{\ell (-m)}(\theta, \varphi).$
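A minimal check of this identity, assuming SciPy's sph_harm, whose two angle arguments are ordered (azimuth, colatitude), the reverse of the physics convention used here:

```python
import numpy as np
from scipy.special import sph_harm   # signature: sph_harm(m, n, azimuth, polar)

l, m = 2, 1
theta, phi = 0.7, 1.3   # colatitude theta, longitude phi (physics convention)

Y = sph_harm(m, l, phi, theta)        # note the swapped angle order
Y_neg = sph_harm(-m, l, phi, theta)

# Y_{l,-m} = (-1)^m * conj(Y_{l,m})
print(np.allclose(Y_neg, (-1) ** m * np.conj(Y)))   # True
```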
The spherical harmonic functions form a complete orthonormal set of functions in the sense of Fourier series. Workers in the fields of geodesy, geomagnetism and spectral analysis use a different phase and normalization factor than given here (see spherical harmonics).
When a 3-dimensional spherically symmetric partial differential equation is solved by the method of separation of variables in spherical coordinates, the part that remains after removal of the radial part is typically of the form

$\nabla^2 \psi(\theta, \varphi) + \lambda \psi(\theta, \varphi) = 0,$

and hence the solutions are spherical harmonics.
Generalizations
The Legendre polynomials are closely related to hypergeometric series. In the form of spherical harmonics, they express the symmetry of the two-sphere under the action of the Lie group SO(3). There are many other Lie groups besides SO(3), and analogous generalizations of the Legendre polynomials exist to express the symmetries of semi-simple Lie groups and Riemannian symmetric spaces. Crudely speaking, one may define a Laplacian on symmetric spaces; the eigenfunctions of the Laplacian can be thought of as generalizations of the spherical harmonics to other settings.
See also
Angular momentum
Gaussian quadrature
Legendre polynomials
Spherical harmonics
Whipple's transformation of Legendre functions
Laguerre polynomials
Hermite polynomials
Notes and references
; Section 12.5. (Uses a different sign convention.)
; Chapter 3.
; Chapter 2.
Schach, S. R. (1973) New Identities for Legendre Associated Functions of Integral Order and Degree , Society for Industrial and Applied Mathematics Journal on Mathematical Analysis, 1976, Vol. 7, No. 1 : pp. 59–69
External links
Associated Legendre polynomials in MathWorld
Legendre polynomials in MathWorld
Legendre and Related Functions in DLMF
Atomic physics
Orthogonal polynomials | Associated Legendre polynomials | [
"Physics",
"Chemistry"
] | 1,756 | [
"Quantum mechanics",
"Atomic physics",
" molecular",
"Atomic",
" and optical physics"
] |
1,062,137 | https://en.wikipedia.org/wiki/Top-down%20cosmology | In theoretical physics, top-down cosmology is a proposal to regard the many possible histories of a given event as having real existence. This idea of multiple histories has been applied to cosmology, in a theoretical interpretation in which the universe has multiple possible cosmologies, and in which reasoning backwards from the current state of the universe to a quantum superposition of possible cosmic histories makes sense. Stephen Hawking has argued that the principles of quantum mechanics forbid a single cosmic history, and has proposed cosmological theories in which the lack of a past boundary condition naturally leads to multiple histories, called the 'no-boundary proposal', the proposed Hartle–Hawking state.
According to Hawking and Thomas Hertog, "The top-down approach we have described leads to a profoundly different view of cosmology, and the relation between cause and effect. Top down cosmology is a framework in which one essentially traces the histories backwards, from a spacelike surface at the present time. The no-boundary histories of the universe thus depend on what is being observed, contrary to the usual idea that the universe has a unique, observer independent history."
See also
Consistent histories
Multiverse
Quantum cosmology
Hartle–Hawking state
References
Physical cosmology
Quantum measurement | Top-down cosmology | [
"Physics",
"Astronomy"
] | 260 | [
"Astronomical sub-disciplines",
"Theoretical physics",
"Quantum mechanics",
"Astrophysics",
"Quantum measurement",
"Physical cosmology"
] |
1,062,243 | https://en.wikipedia.org/wiki/Bicycle%20culture | Bicycle culture can refer to a mainstream culture that supports the use of bicycles or to a subculture. Although "bike culture" is often used to refer to various forms of associated fashion, it is erroneous to call fashion in and of itself a culture.
Cycling culture refers to cities and countries which support a large percentage of utility cycling. Examples include the Netherlands, Denmark, Germany, Belgium (Flanders in particular), Sweden, Italy, China, Bangladesh and Japan. There are also towns in some countries where bicycle culture has been an integral part of the landscape for generations, even without much official support. That is the case of Ílhavo, in Portugal. North American cities with strong bicycle cultures include Madison, Portland, San Francisco, Little Rock, Boston, Toronto, Montreal, Lincoln, Peoria, and the Twin Cities. In Latin America, Bogotá is often regarded as one of the most bike-friendly cities.
A city with a strong bicycle culture usually has a well-developed cycling infrastructure, including segregated bike lanes and extensive facilities catering to urban bicycles, such as bike racks.
Advocacy and activism subcultures
In some cities and countries, transportation infrastructure is focused on automobiles, and large portions of the population use cars as their only local mechanical transport. Bicycling advocates include those who advocate for an increase in population-wide commuting, acceptance of cycling, and legislation and infrastructure to promote and protect the safety and rights of cyclists.
Cycling advocacy often aims to improve community bike infrastructure, including aspects such as bike lanes, parking facilities, and access to public transportation.
Within the cycling community, activism may take many forms, and may include creative and practical approaches. These include bike-related music, bike-related films, international exchange of hospitality (Warm Showers), organized bike rides (often noncompetitive, e.g. Critical Mass and World Naked Bike Ride), art bike displays, printed-word materials (such as blogs, zines and magazines, stickers, and spoke cards), and the publication and distribution of books (such as Thomas Stevens's Around the World on a Bicycle, Mark Twain's essay "Taming the Bicycle" and H. G. Wells's novel The Wheels of Chance). There are hundreds of bicycle cooperatives offering spaces for cyclists to repair their own bikes and socialise.
Examples
Many cities contain subcultures of bicycle enthusiasts that include racers, bicycle messengers, bicycle transportation activists, mutant bicycle fabricators, bicycle mechanics, and bicycle commuters. Some such groups are affiliated with activism or counterculture groups. These hybrid groups often organize activities such as competitive cycling, fun rides, protests, and civil disobedience, such as Critical Mass. Some groups work to promote bicycle transportation (community bicycle program); others fix bicycles to give to children or the homeless (Bikes Not Bombs). There are also feminist groups of women of color who promote the empowerment of women through their rides such as Ovarian Psycos.
Bicycle magazines and organizations give awards to cities for being "bicycle friendly". Examples include Boulder, Minneapolis, Austin, Philadelphia, Madison, Seattle, and Portland—all cities that promote bicycle culture.
Midnight Ridazz is a group of bicycle enthusiasts who ride every second Friday of the month in Los Angeles, California to inspire more people to ride bicycles. Rides often exceed 1,000 cyclists. Similar midnight rides such as the Midnight Mystery rides of Portland and Victoria, the bi-monthly Midnight Mass of Vancouver BC, and similar rides across the US and Europe have been growing in popularity.
San Jose Bike Party is another example of a large monthly social ride that regularly exceeds a thousand riders. It occurs on the third Friday of each month after the evening commute. Typically there are two regroup points, with music and food trucks, allowing slower riders to catch up.
Mainstream bike cultures
Cycling is the norm in countries like the Netherlands and Denmark. In Denmark, 16 percent of all trips are made by bike—and as much as 50 percent of urban populations cycle to work and school. In the Netherlands, 63 percent of Amsterdam residents ride their bikes every day. Strong cycling infrastructure makes cycling the fastest, most convenient way to get from one place to another in these cities.
Mainstream bike cultures are characterized by notions of function over form. In mainstream bike cultures, there is less of a differentiation between cyclists and the rest of the population. People of all demographics cycle regularly, and most are less concerned about cycling attire and bike performance. It is not uncommon to see people cycle in business attire or on an old rusty bike.
See also
Cycling mobility
Cyclability
Bicycle Film Festival
Bicycle-friendly
Car-free movement
Critical Mass
International Cycling Film Festival
Cycling in Denmark
Cycling in the Netherlands
History of cycling
List of films about bicycles and cycling
Cycling infrastructure
Cycle touring
Utility cycling
Mamil
References
Further reading
"An American in Denmark: Close encounters with European bicycle culture," Grist, August 5, 2013
"Spin cycle: Copenhagen's rise, fall, and rise again to cycling supremacy." Grist, August 7, 2013
"Riding lessons for U.S. cities from one of Europe's bike capitals." Grist, August 9, 2013
Zack Furness, One Less Car: Bicycling and the Politics of Automobility. Temple University Press, 2010.
Culture
Transport culture | Bicycle culture | [
"Physics"
] | 1,086 | [
"Physical systems",
"Transport",
"Transport culture"
] |
1,062,339 | https://en.wikipedia.org/wiki/Polynesian%20rat | The Polynesian rat, Pacific rat or little rat (Rattus exulans), or , is the third most widespread species of rat in the world behind the brown rat and black rat. Contrary to its vernacular name, the Polynesian rat originated in Southeast Asia, and like its relatives has become widespread, migrating to most of Polynesia, including New Zealand, Easter Island, and Hawaii. It shares high adaptability with other rat species extending to many environments, from grasslands to forests. It is also closely associated with humans, who provide easy access to food. It has become a major pest in most areas of its distribution.
Description
The Polynesian rat is similar in appearance to other rats, such as the black rat and the brown rat. It has large, round ears, a pointed snout, black/brown hair with a lighter belly, and comparatively small feet. It has a thin, long body, reaching up to in length from the nose to the base of the tail, making it slightly smaller than other human-associated rats. Where it exists on smaller islands, it tends to be smaller still – . It is commonly distinguished by a dark upper edge of the hind foot near the ankle; the rest of its foot is pale.
Distribution and habitat
The Polynesian rat is widespread throughout the Pacific and Southeast Asia. Mitochondrial DNA analysis suggests that the species originated on the island of Flores. The IUCN Red List considers it native to Bangladesh, all of mainland Southeast Asia, and Indonesia, but introduced to all of its Pacific range (including the island of New Guinea), the Philippines, Brunei, and Singapore, and of uncertain origin in Taiwan. It cannot swim over long distances, so is considered to be a significant marker of the human migrations across the Pacific, as the Polynesians accidentally or deliberately introduced it to the islands they settled. The species has been implicated in many of the extinctions that occurred in the Pacific amongst the native birds and insects; these species had evolved in the absence of mammals and were unable to cope with the predation pressures posed by the rat. This rat also may have played a role in the complete deforestation of Easter Island by eating the nuts of the local palm tree Paschalococos, thus preventing regrowth of the forest.
Although remains of the Polynesian rat in New Zealand were dated to over 2,000 years old during the 1990s, which was much earlier than the accepted dates for Polynesian migrations to New Zealand, this finding has been challenged by later research showing the rat was introduced to both the country's main islands circa 1280.
Behaviour
Polynesian rats are nocturnal like most rodents, and are adept climbers, often nesting in trees. In winter, when food is scarce, they commonly strip bark for consumption and satisfy themselves with plant stems. They share common rat reproductive characteristics: they are polyestrous, gestation lasts 21–24 days, litter size (6–11 pups) is affected by food and other resources, and weaning takes roughly another month (28 days). They diverge only in that they do not breed year-round, being restricted instead to spring and summer.
Diet
R. exulans is an omnivorous species, eating seeds, fruit, leaves, bark, insects, earthworms, spiders, lizards, and avian eggs and hatchlings. Polynesian rats have been observed to often take pieces of food back to a safe place to properly shell a seed or otherwise prepare certain foods. This not only protects them from predators, but also from rain and other rats. These "husking stations" are often found among trees, near the roots, in fissures of the trunk, and even in the top branches. In New Zealand, for instance, such stations are found under rock piles and fronds shed by nīkau palms.
Rat control and bird conservation
New Zealand
In New Zealand and its offshore islands, many bird species evolved in the absence of terrestrial mammalian predators, so developed no behavioral defenses to rats. The introduction by the Māori of the Polynesian rat into New Zealand resulted in the eradication of several species of terrestrial and small seabirds.
Subsequent elimination of rats from islands has resulted in substantial increases in populations of certain seabirds and endemic terrestrial birds, as well as species of insects such as the Little Barrier Island giant wētā. As part of its program to restore these populations, such as the critically endangered kākāpō, the New Zealand Department of Conservation undertakes programs to eliminate the Polynesian rat on most offshore islands in its jurisdiction, and other conservation groups have adopted similar programs in other reserves seeking to be predator- and rat-free.
However, two islands in the Hen and Chickens group, Mauitaha and Araara, have now been set aside as sanctuaries for the Polynesian rat.
Rest of the Pacific
NZAID has funded rat eradication programs in the Phoenix Islands of Kiribati in order to protect the bird species of the Phoenix Islands Protected Area.
Between July and November 2011, a partnership of the Pitcairn Islands Government and the Royal Society for the Protection of Birds implemented a poison baiting programme on Henderson Island aimed at eradicating the Polynesian rat. Mortality was massive, but of the 50,000 to 100,000 population, 60 to 80 individuals survived and the population has now fully recovered.
References
External links
Rattus
Rodents of Oceania
Rodents of India
Mammals of Bangladesh
Rodents of New Guinea
Mammals of Southeast Asia
Mammals of New Zealand
Mammals of Indonesia
Mammals described in 1848
Stored-product pests
Taxa named by Titian Peale
Rodents of Borneo | Polynesian rat | [
"Biology"
] | 1,126 | [
"Pests (organism)",
"Stored-product pests"
] |
1,062,532 | https://en.wikipedia.org/wiki/Codress%20message | In military cryptography, a codress message is an encrypted message whose address is also encrypted. This is usually done to prevent traffic analysis.
References
Cryptography | Codress message | [
"Mathematics",
"Engineering"
] | 39 | [
"Applied mathematics",
"Cryptography",
"Cybersecurity engineering"
] |
1,062,541 | https://en.wikipedia.org/wiki/Multiple%20encryption | Multiple encryption is the process of encrypting an already encrypted message one or more times, either using the same or a different algorithm. It is also known as cascade encryption, cascade ciphering, multiple encryption, and superencipherment. Superencryption refers to the outer-level encryption of a multiple encryption.
Some cryptographers, like Matthew Green of Johns Hopkins University, say multiple encryption addresses a problem that mostly doesn't exist.
An argument for multiple encryption can nevertheless be made on the grounds of poor implementation: using two different cryptomodules and keying processes from two different vendors requires both vendors' products to be compromised for security to fail completely.
Independent keys
Picking any two ciphers, if the key used is the same for both, the second cipher could possibly undo the first cipher, partly or entirely. This is true of ciphers where the decryption process is exactly the same as the encryption process (a reciprocal cipher); the second cipher would completely undo the first. If an attacker were to recover the key through cryptanalysis of the first encryption layer, the attacker could possibly decrypt all the remaining layers, assuming the same key is used for all layers.
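As a toy illustration (pure Python; the names and values here are purely illustrative): any XOR-based stream cipher is reciprocal, so applying a "second layer" with the same key and keystream simply undoes the first.

```python
def xor_stream(data: bytes, keystream: bytes) -> bytes:
    # A toy reciprocal cipher: encryption and decryption are the same operation.
    return bytes(d ^ k for d, k in zip(data, keystream))

keystream = b"\x13\x37" * 8
msg = b"attack at dawn!!"
once = xor_stream(msg, keystream)
twice = xor_stream(once, keystream)  # "second layer" with the same key
assert twice == msg                  # the second layer undid the first
```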
To prevent that risk, one can use keys that are statistically independent for each layer (e.g. independent RNGs).
Ideally each key should have separate and different generation, sharing, and management processes.
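A minimal sketch of a two-layer cascade with independent keys, assuming Python with the third-party cryptography package (the function name, cipher choices, and layout below are illustrative assumptions, not a mandated construction):

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def cascade_encrypt(plaintext: bytes, key_inner: bytes, key_outer: bytes) -> bytes:
    """Two-layer cascade: AES-256-CTR inside, ChaCha20 outside.

    key_inner and key_outer must be 32 bytes each and should come from
    independent generation processes (e.g. separate RNG draws).
    """
    iv_inner = os.urandom(16)     # per-layer IV, openly shareable
    nonce_outer = os.urandom(16)  # per-layer nonce, independent of iv_inner

    inner = Cipher(algorithms.AES(key_inner), modes.CTR(iv_inner)).encryptor()
    layer1 = inner.update(plaintext) + inner.finalize()

    outer = Cipher(algorithms.ChaCha20(key_outer, nonce_outer), mode=None).encryptor()
    layer2 = outer.update(layer1) + outer.finalize()

    # IV and nonce travel with the ciphertext; the keys are managed separately.
    return iv_inner + nonce_outer + layer2

# Usage: the two keys come from independent generation processes.
# key_inner, key_outer = os.urandom(32), os.urandom(32)
# capsule = cascade_encrypt(b"attack at dawn", key_inner, key_outer)
```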
Independent Initialization Vectors
For encryption and decryption processes that require a shared initialization vector (IV) or nonce, the IV is typically shared openly or made known to the recipient (and everyone else). It is good security policy never to provide the same data in both plaintext and ciphertext when using the same key and IV. It is therefore recommended (although at this moment without specific evidence) to use separate IVs for each layer of encryption.
Importance of the first layer
With the exception of the one-time pad, no cipher has been theoretically proven to be unbreakable.
Furthermore, some recurring properties may be found in the ciphertexts generated by the first cipher. Since those ciphertexts are the plaintexts used by the second cipher, the second cipher may be rendered vulnerable to attacks based on known plaintext properties (see references below).
This is the case when the first layer is a program P that always adds the same string S of characters at the beginning (or end) of all ciphertexts (commonly known as a magic number). When found in a file, the string S allows an operating system to know that the program P has to be launched in order to decrypt the file. This string should be removed before adding a second layer.
To prevent this kind of attack, one can use the method provided by Bruce Schneier:
Generate a random pad R of the same size as the plaintext.
Encrypt R using the first cipher and key.
XOR the plaintext with the pad, then encrypt the result using the second cipher and a different (!) key.
Concatenate both ciphertexts in order to build the final ciphertext.
A cryptanalyst must break both ciphers to get any information. This will, however, have the drawback of making the ciphertext twice as long as the original plaintext.
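A sketch of this construction under the same assumptions as above (Python with the cryptography package; AES-256-CTR and ChaCha20 stand in for "the first cipher" and "the second cipher"):

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def schneier_encrypt(plaintext: bytes, key1: bytes, key2: bytes) -> bytes:
    # 1. Generate a random pad R of the same size as the plaintext.
    pad = os.urandom(len(plaintext))

    # 2. Encrypt R using the first cipher and key.
    iv1 = os.urandom(16)
    enc1 = Cipher(algorithms.AES(key1), modes.CTR(iv1)).encryptor()
    c1 = enc1.update(pad) + enc1.finalize()

    # 3. XOR the plaintext with the pad, then encrypt the result
    #    using the second cipher and a different key.
    masked = bytes(p ^ r for p, r in zip(plaintext, pad))
    nonce2 = os.urandom(16)
    enc2 = Cipher(algorithms.ChaCha20(key2, nonce2), mode=None).encryptor()
    c2 = enc2.update(masked) + enc2.finalize()

    # 4. Concatenate both ciphertexts; the result is twice as long
    #    as the plaintext (plus the IV/nonce overhead).
    return iv1 + c1 + nonce2 + c2
```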
Note that a weak first cipher may merely make a second cipher that is vulnerable to a chosen-plaintext attack also vulnerable to a known-plaintext attack. A block cipher, however, must not be vulnerable to a chosen-plaintext attack to be considered secure; the second cipher described above is therefore not secure under that definition either. Consequently, both ciphers still need to be broken. The attack illustrates why strong assumptions are made about secure block ciphers, and why ciphers that are even partially broken should never be used.
The Rule of Two
The Rule of Two is a data security principle from the NSA's Commercial Solutions for Classified Program (CSfC). It specifies two completely independent layers of cryptography to protect data. For example, data could be protected by both hardware encryption at its lowest level and software encryption at the application layer. It could mean using two FIPS-validated software cryptomodules from different vendors to en/decrypt data.
The importance of vendor and/or model diversity between the layers of components centers on removing the possibility that the manufacturers or models share a vulnerability. This way, if one component is compromised, there is still an entire layer of encryption protecting the information at rest or in transit. The CSfC Program offers two ways to achieve diversity: "The first is to implement each layer using components produced by different manufacturers. The second is to use components from the same manufacturer, where that manufacturer has provided NSA with sufficient evidence that the implementations of the two components are independent of one another."
The principle is practiced in the NSA's secure mobile phone called Fishbowl. The phones use two layers of encryption protocols, IPsec and Secure Real-time Transport Protocol (SRTP), to protect voice communications. The Samsung Galaxy S9 Tactical Edition is also an approved CSfC Component.
Examples
From inside to outside, the encrypted capsule is formed as follows in the context of the Echo Protocol, used by the software application GoldBug Messenger. GoldBug has implemented a hybrid system for authenticity and confidentiality.
First layer of the encryption:
The ciphertext of the original readable message is hashed, and subsequently the symmetric keys are encrypted via the asymmetric key, e.g. using the RSA algorithm.
In an intermediate step, the ciphertext and the hash digest of the ciphertext are combined into a capsule and packed together.
It follows the approach: Encrypt-then-MAC. In order for the receiver to verify that the ciphertext has not been tampered with, the digest is computed before the ciphertext is decrypted.
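A generic sketch of encrypt-then-MAC using only Python's standard library (illustrative, not GoldBug's actual code; the 32-byte tag length matches HMAC-SHA-256):

```python
import hmac
import hashlib

TAG_LEN = 32  # HMAC-SHA-256 digest size

def seal(ciphertext: bytes, mac_key: bytes) -> bytes:
    # Encrypt-then-MAC: the digest is computed over the ciphertext itself.
    tag = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()
    return ciphertext + tag

def open_capsule(capsule: bytes, mac_key: bytes) -> bytes:
    ciphertext, tag = capsule[:-TAG_LEN], capsule[-TAG_LEN:]
    expected = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()
    # Verify before decrypting, so tampered capsules are rejected early.
    if not hmac.compare_digest(tag, expected):
        raise ValueError("capsule failed authentication")
    return ciphertext  # decryption of the ciphertext happens only after this
```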
Second layer of encryption:
Optionally, the capsule of the first layer can additionally be encrypted with AES-256, using a commonly shared, 32-character-long symmetric password. Hybrid encryption is thus combined with multiple encryption.
Third layer of the encryption:
Then, this capsule is transmitted via a secure SSL/TLS connection to the communication partner.
References
Further reading
"Multiple encryption" in "Ritter's Crypto Glossary and Dictionary of Technical Cryptography"
Confidentiality through Multi-Encryption, in: Adams, David / Maier, Ann-Kathrin (2016): BIG SEVEN Study, open source crypto-messengers to be compared - or: Comprehensive Confidentiality Review & Audit of GoldBug, Encrypting E-Mail-Client & Secure Instant Messenger, Descriptions, tests and analysis reviews of 20 functions of the application GoldBug based on the essential fields and methods of evaluation of the 8 major international audit manuals for IT security investigations including 38 figures and 87 tables., URL: https://sf.net/projects/goldbug/files/bigseven-crypto-audit.pdf - English / German Language, Version 1.1, 305 pages, June 2016 (ISBN: DNB 110368003X - 2016B14779).
A "way to combine multiple block algorithms" so that "a cryptanalyst must break both algorithms" in §15.8 of Applied Cryptography, Second Edition: Protocols, Algorithms, and Source Code in C by Bruce Schneier. Wiley Computer Publishing, John Wiley & Sons, Inc.
S. Even and O. Goldreich, On the power of cascade ciphers, ACM Transactions on Computer Systems, vol. 3, pp. 108–116, 1985.
U. Maurer and J. L. Massey, Cascade ciphers: The importance of being first, Journal of Cryptology, vol. 6, no. 1, pp. 55–61, 1993.
Cryptography | Multiple encryption | [
"Mathematics",
"Engineering"
] | 1,646 | [
"Applied mathematics",
"Cryptography",
"Cybersecurity engineering"
] |