| id | url | text | source | categories | token_count |
|---|---|---|---|---|---|
59,921,578 | https://en.wikipedia.org/wiki/Amplitwist | In mathematics, the amplitwist is a concept created by Tristan Needham in the book Visual Complex Analysis (1997) to represent the derivative of a complex function visually.
Definition
The amplitwist associated with a given function f is its derivative in the complex plane. More formally, it is a complex number f′(z) such that, in an infinitesimally small neighborhood of a point z in the complex plane, f(z + ξ) ≈ f(z) + f′(z)ξ for an infinitesimally small vector ξ; the image of ξ is thus amplified by |f′(z)| and twisted (rotated) by arg f′(z). The complex number f′(z) is defined to be the derivative of f at z.
Uses
The concept of an amplitwist is used primarily in complex analysis to offer a way of visualizing the derivative of a complex-valued function as a local amplification and twist of vectors at a point in the complex plane.
Examples
Consider a function f and its derivative at a point z₀. Since the derivative of f at z₀ is f′(z₀), an infinitesimal vector ξ at z₀ is mapped to f′(z₀)ξ: its length is amplified by |f′(z₀)| and its direction is twisted by arg f′(z₀).
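The amplification and twist can be checked numerically. The following Python sketch (an illustration, not from the article; the function f(z) = z² and the sample point are assumptions chosen for the example) estimates the derivative with a finite difference and splits it into its modulus (amplification) and argument (twist).

import cmath

def amplitwist(f, z, h=1e-8):
    """Estimate the amplitwist of f at z: returns (amplification, twist in radians)."""
    d = (f(z + h) - f(z)) / h        # finite-difference approximation of f'(z)
    return abs(d), cmath.phase(d)    # |f'(z)| and arg f'(z)

# Illustrative assumption: f(z) = z**2 at the point 1 + 1j, where f'(z) = 2z = 2 + 2j
amp, twist = amplitwist(lambda z: z * z, 1 + 1j)
print(amp, twist)   # about 2.83 (= |2 + 2j|) and 0.785 rad (= pi/4)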
References
Functions and mappings
Complex analysis | Amplitwist | Mathematics | 192 |
9,742,373 | https://en.wikipedia.org/wiki/Consolidated%20rental%20car%20facility | A consolidated rental car facility (CRCF) or consolidated rental car center (CONRAC) is a complex that hosts numerous car rental agencies, typically found at airports in the United States.
The most important incentives for building consolidated facilities are greatly reduced traffic congestion in airport pick-up and drop-off areas and increased convenience for travelers. A single unified fleet of shuttle buses can serve all car rental agencies, instead of each company operating its own shuttle buses, which may run less frequently. Congestion can be further reduced by connecting the consolidated facility to the airport terminal with a people mover.
Consolidated facilities are typically built around two areas: a customer service building where each company operates retail counters to serve renters, and a "ready/return" lot or garage where cars are temporarily parked while ready and awaiting a renter, or when recently returned and in need of servicing before the next rental.
Facilities usually also feature a Quick Turn Around (QTA) area either on-site or at a nearby location, where light maintenance of vehicles can be conducted including cleaning, fueling, and inspection of engine fluids. There can be several QTA areas operated by the different companies, or the services can be shared.
The first known consolidated facility was built at Sacramento International Airport in 1994. However, as early as 1974, four companies were already sharing facilities and shuttle buses at Dallas/Fort Worth Airport, and in 1988 companies at Minneapolis–Saint Paul airport introduced common shuttle buses. These differed from modern CONRACs in that the majority of rental car companies at Dallas/Fort Worth continued to operate their own off-site facilities and shuttle buses, while at Minneapolis, only the shuttle buses and not the facilities themselves were shared (in other words, a single shuttle bus line served multiple off-site rental car companies).
Furthermore, the rental car industry has seen major mergers, creating three major holding companies that now represent ten brands commonly seen at airports: the Avis Budget Group (which operates Avis Car Rental, Budget Rent a Car, Payless Car Rental and Zipcar), Enterprise Holdings (which operates Enterprise Rent-A-Car, Alamo Rent a Car and National Car Rental) and The Hertz Corporation (which operates Hertz Rent A Car, Dollar Rent A Car and Thrifty Car Rental). Because of these mergers, even in cities without a consolidated facility, many of these companies have consolidated all their brands into one location.
Locations
Facilities under construction
The Reno–Tahoe International Airport is currently building a Rental Car and Ground Transportation Center, scheduled to open in 2028.
The Gerald R. Ford International Airport in Grand Rapids, MI is currently constructing a 4-story ConRAC, scheduled to open in 2026.
References
Airport infrastructure
Car rental | Consolidated rental car facility | Engineering | 551 |
11,042,777 | https://en.wikipedia.org/wiki/IBM%20airgap | Airgap is a technique invented by IBM for fabricating small pockets of vacuum in between copper interconnects. The technique belongs to a general class of similar techniques that replaces solid low-κ dielectrics with air-filled or vacuum pockets.
Description
By insulating copper interconnects (wires) on an integrated circuit (IC) with vacuum holes, capacitance can be minimized enabling ICs to work faster or draw less power. A vacuum is believed to be the ultimate insulator for wiring capacitance, which occurs when two adjacent wires on an IC draw electrical energy from one another, generating undesirable heat and slowing the speed at which data can move through an IC. IBM estimates that this technology alone can lead to 35% higher speeds in current flow or 15% lower power consumption.
Fabrication techniques
The technique fabricates air gaps on a large scale by exploiting the self-assembly properties of certain polymers. These polymers can be easily integrated into the process modules (a collection of related steps that fabricate a structure on an integrated circuit) used in conventional CMOS fabrication, avoiding the costs of heavily modifying the process technology (the collection of process modules that produces an integrated circuit).
The technique deposits a polymer material over the entire wafer, and removes it at a later stage. When the polymer is removed, it creates trillions of evenly spaced vacuum pockets that are 20 nanometers in diameter. IBM has demonstrated this technique in the laboratory, and has deployed it in its East Fishkill, New York fabrication plant, where prototype POWER6 processors using this technology have been fabricated. The technique was scheduled to be featured in production-ready process technology in 2009, as part of IBM's 45 nm node, after which it would also be available to IBM's clients.
History
Airgap was developed in a collaborative effort between IBM's Almaden Research Center and T.J. Watson Research Center, and the University of Albany, New York.
References
Snowflakes promise faster chips, BBC
IBM Brings Nature to Computer Chip Manufacturing, IBM
IBM's catches air, touts Top Ten list, Ars Technica
IBM computer hardware
Semiconductor device fabrication | IBM airgap | Materials_science | 444 |
34,709,138 | https://en.wikipedia.org/wiki/Denjoy%E2%80%93Luzin%E2%80%93Saks%20theorem | In mathematics, the Denjoy–Luzin–Saks theorem states that a function of generalized bounded variation in the restricted sense has a derivative almost everywhere, and gives further conditions of the set of values of the function where the derivative does not exist.
N. N. Luzin and A. Denjoy proved a weaker form of the theorem, and Saks later strengthened it.
References
Theorems in analysis | Denjoy–Luzin–Saks theorem | Mathematics | 82 |
46,298,670 | https://en.wikipedia.org/wiki/Hafnium%20tetrabromide | Hafnium tetrabromide is the inorganic compound with the formula HfBr4. It is the most common bromide of hafnium. It is a colorless, diamagnetic moisture sensitive solid that sublimes in vacuum. It adopts a structure very similar to that of zirconium tetrabromide, featuring tetrahedral Hf centers, in contrast to the polymeric nature of hafnium tetrachloride.
References
Bromides
Hafnium compounds
Metal halides | Hafnium tetrabromide | Chemistry | 107 |
9,414,169 | https://en.wikipedia.org/wiki/Geomagnetically%20induced%20current | Geomagnetically induced currents (GIC) are electrical currents induced at the Earth's surface by rapid changes in the geomagnetic field caused by space weather events. GICs can affect the normal operation of long electrical conductor systems such as electric transmission grids and buried pipelines. The geomagnetic disturbances which induce GICs include geomagnetic storms and substorms where the most severe disturbances occur at high geomagnetic latitudes.
Background
The Earth's magnetic field varies over a wide range of timescales. The longer-term variations, typically occurring over decades to millennia, are predominantly the result of dynamo action in the Earth's core. Geomagnetic variations on timescales of seconds to years also occur, due to dynamic processes in the ionosphere, magnetosphere and heliosphere. These changes are ultimately tied to variations associated with the solar activity (or sunspot) cycle and are manifestations of space weather.
The fact that the geomagnetic field does respond to solar conditions can be useful, for example, in investigating Earth structure using magnetotellurics, but it also creates a hazard. This geomagnetic hazard is primarily a risk to technology under the Earth's protective atmospheric blanket.
Risk to infrastructure
A time-varying magnetic field external to the Earth induces telluric currents—electric currents in the conducting ground. These currents create a secondary (internal) magnetic field. As a consequence of Faraday's law of induction, an electric field associated with the time variations of the magnetic field is induced at the surface of the Earth. The surface electric field causes electrical currents, known as geomagnetically induced currents (GIC), to flow in any conducting structure, for example, a power or pipeline grid grounded in the Earth. This electric field, measured in V/km, acts as a voltage source across networks.
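As a back-of-envelope illustration of this voltage-source picture (all numbers below are assumptions for illustration only, not values from the article), a uniform geoelectric field acting along a long conductor drives a quasi-DC current limited by the series resistance of the circuit:

E_field = 1.0        # geoelectric field in V/km (assumed storm-time value)
length_km = 200.0    # length of the transmission line in km (assumed)
R_series = 5.0       # total quasi-DC loop resistance in ohms: line, windings, earthings (assumed)

V_induced = E_field * length_km      # about 200 V impressed along the line
gic_amps = V_induced / R_series      # about 40 A of quasi-DC current
print(f"{V_induced:.0f} V induced, GIC of roughly {gic_amps:.0f} A")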
Examples of conducting networks are electrical power transmission grids, oil and gas pipelines, non-fiber optic undersea communication cables, non-fiber optic telephone and telegraph networks and railways. GIC are often described as being quasi direct current (DC), although the variation frequency of GIC is governed by the time variation of the electric field. For GIC to be a hazard to technology, the current has to be of a magnitude and occurrence frequency that makes the equipment susceptible to either immediate or cumulative damage. The size of the GIC in any network is governed by the electrical properties and the topology of the network. The largest magnetospheric-ionospheric current variations, resulting in the largest external magnetic field variations, occur during geomagnetic storms and it is then that the largest GIC occur. Significant variation periods are typically from seconds to about an hour, so the induction process involves the upper mantle and lithosphere. Since the largest magnetic field variations are observed at higher magnetic latitudes, GIC have been regularly measured in Canadian, Finnish and Scandinavian power grids and pipelines since the 1970s. GIC of tens to hundreds of amperes have been recorded. GIC have also been recorded at mid-latitudes during major storms. There may even be a risk to low latitude areas, especially during a storm commencing suddenly because of the high, short-period rate of change of the field that occurs on the day side of the Earth.
GIC were first observed on the emerging electric telegraph network in 1847–8 during Solar cycle 9. Technological change and the growth of conducting networks have made the significance of GIC greater in modern society. The technical considerations for undersea cables, telephone and telegraph networks and railways are similar. Fewer problems have been reported in the open literature about these systems, because efforts have been made to ensure resiliency.
In power grids
Modern electric power transmission systems consist of generating plants inter-connected by electrical circuits that operate at fixed transmission voltages controlled at substations. The grid voltages employed are largely dependent on the path length between these substations, and system voltages of 200–700 kV are common. There is a trend towards using higher voltages and lower line resistances to reduce transmission losses over longer and longer path lengths. Low line resistances produce a situation favourable to the flow of GIC. Power transformers have a magnetic circuit that is disrupted by the quasi-DC GIC: the field produced by the GIC offsets the operating point of the magnetic circuit and the transformer may go into half-cycle saturation. This produces harmonics in the AC waveform and localised heating, and leads to higher reactive power demands, inefficient power transmission and possible mis-operation of protective measures. Balancing the network in such situations requires significant additional reactive power capacity. The magnitude of GIC that will cause significant problems to transformers varies with transformer type. Modern industry practice is to specify GIC tolerance levels on new transformers.
On 13 March 1989, a severe geomagnetic storm caused the collapse of the Hydro-Québec power grid in a matter of seconds as equipment protective relays tripped in a cascading sequence of events. Six million people were left without power for nine hours, with significant economic loss. Since 1989, power companies in North America, the United Kingdom, Northern Europe, and elsewhere have invested in evaluating the GIC risk and in developing mitigation strategies.
GIC risk can, to some extent, be reduced by capacitor blocking systems, maintenance schedule changes, additional on-demand generating capacity, and ultimately, load shedding. These options are expensive and sometimes impractical. The continued growth of high voltage power networks results in higher risk. This is partly due to the increase in the interconnectedness at higher voltages, connections in terms of power transmission to grids in the auroral zone, and grids operating closer to capacity than in the past.
To understand the flow of GIC in power grids and to advise on GIC risk, analysis of the quasi-DC properties of the grid is necessary. This must be coupled with a geophysical model of the Earth that provides the driving surface electric field, determined by combining time-varying ionospheric source fields and a conductivity model of the Earth. Such analyses have been performed for North America, the UK and Northern Europe. The complexity of power grids, the source ionospheric current systems and the 3D ground conductivity make an accurate analysis difficult. Analysing major storms and their consequences builds a picture of the weak spots in a transmission system and allows hypothetical event scenarios to be run.
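One published formulation of this network analysis is the Lehtinen–Pirjola method (Lehtinen and Pirjola, 1985, listed under Further reading), in which the substation earthing currents are given by I = (1 + Yn·Ze)^(-1)·Je, where Yn is the network admittance matrix, Ze the earthing impedance matrix and Je the source currents derived from the geovoltages on the lines. The Python sketch below applies it to a hypothetical two-substation network; every numerical value is an assumption chosen only for illustration.

import numpy as np

E = 2.0                 # uniform geoelectric field along the line, V/km (assumed)
L = 150.0               # line length, km (assumed)
R_line = 3.0            # line resistance, ohms (assumed)
R_earth = [0.5, 1.0]    # substation earthing resistances, ohms (assumed)

V = E * L                                   # geovoltage impressed along the line (300 V)
Y_n = np.array([[ 1/R_line, -1/R_line],     # network admittance matrix
                [-1/R_line,  1/R_line]])
Z_e = np.diag(R_earth)                      # earthing impedance matrix
J_e = np.array([-V/R_line, V/R_line])       # "perfect earthing" source currents

I_gic = np.linalg.solve(np.eye(2) + Y_n @ Z_e, J_e)
print(I_gic)   # earthing currents in amperes; equal and opposite here (about -67 A and +67 A)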
Grid management is also aided by space weather forecasts of major geomagnetic storms. This allows for mitigation strategies to be implemented. Solar observations provide a one- to three-day warning of an Earthbound coronal mass ejection (CME), depending on CME speed. Following this, detection of the solar wind shock that precedes the CME, by spacecraft at the Lagrangian point, gives a definite 20- to 60-minute warning of a geomagnetic storm (again depending on local solar wind speed). It takes approximately two to three days after a CME launches from the Sun for the resulting geomagnetic storm to reach Earth and affect the Earth's geomagnetic field.
GIC hazard in pipelines
Major pipeline networks exist at all latitudes and many systems are on a continental scale. Pipeline networks are constructed from steel to contain high-pressure liquid or gas and have corrosion resistant coatings. Damage to the pipeline coating can result in the steel being exposed to the soil or water possibly causing localised corrosion. If the pipeline is buried, cathodic protection is used to minimise corrosion by maintaining the steel at a negative potential with respect to the ground. The operating potential is determined from the electro-chemical properties of the soil and Earth in the vicinity of the pipeline. The GIC hazard to pipelines is that GIC cause swings in the pipe-to-soil potential, increasing the rate of corrosion during major geomagnetic storms. GIC risk is not a risk of catastrophic failure, but a reduced service life of the pipeline.
Pipeline networks are modeled in a similar manner to power grids, for example through distributed source transmission line models that provide the pipe-to-soil potential at any point along the pipe (Boteler, 1997). These models need to consider complicated pipeline topologies, including bends and branches, as well as electrical insulators (or flanges) that electrically isolate different sections. From a detailed knowledge of the pipeline response to GIC, pipeline engineers can understand the behaviour of the cathodic protection system even during a geomagnetic storm, when pipeline surveying and maintenance may be suspended.
See also
List of solar storms
Solar storm of 1859
Aurora (astronomy)
Footnotes and references
Further reading
Bolduc, L., GIC observations and studies in the Hydro-Québec power system. J. Atmos. Sol. Terr. Phys., 64(16), 1793–1802, 2002.
Boteler, D. H., Distributed source transmission line theory for electromagnetic induction studies. In Supplement of the Proceedings of the 12th International Zurich Symposium and Technical Exhibition on Electromagnetic Compatibility. pp. 401–408, 1997.
Boteler, D. H., Pirjola, R. J. and Nevanlinna, H., The effects of geomagnetic disturbances on electrical systems at the Earth's surface. Adv. Space. Res., 22(1), 17-27, 1998.
Erinmez, I. A., Kappenman, J. G. and Radasky, W. A., Management of the geomagnetically induced current risks on the national grid company's electric power transmission system. J. Atmos. Sol. Terr. Phys., 64(5-6), 743-756, 2002.
Lanzerotti, L. J., Space weather effects on technologies. In Song, P., Singer, H. J., Siscoe, G. L. (eds.), Space Weather. American Geophysical Union, Geophysical Monograph, 125, pp. 11–22, 2001.
Lehtinen, M., and R. Pirjola, Currents produced in earthed conductor networks by geomagnetically-induced electric fields, Annales Geophysicae, 3, 4, 479-484, 1985.
Pirjola, R., Kauristie, K., Lappalainen, H., Viljanen, A. and Pulkkinen, A., Space weather risk. AGU Space Weather, 3, S02A02, 2005.
Thomson, A. W. P., A. J. McKay, E. Clarke, and S. J. Reay, Surface electric fields and geomagnetically induced currents in the Scottish Power grid during the 30 October 2003 geomagnetic storm, AGU Space Weather, 3, S11002, 2005.
Pulkkinen, A. Geomagnetic Induction During Highly Disturbed Space Weather Conditions: Studies of Ground Effects, PhD thesis, University of Helsinki, 2003. (available at eThesis)
External links
Solar Shield — experimental GIC forecasting system
Solar Terrestrial Dispatch — GIC warning distribution center
GICnow! Service by Finnish Meteorological Institute
Ground Effects Topical Group of ESA Space Weather Working Team
GIC measurements
Metatech Corporation's GIC site
Space Weather Canada
Power grid related links
Geomagnetic Storm Induced HVAC Transformer Failure is Avoidable
NOAA Economics -- Geomagnetic Storm datasets and Economic Research
Geomagnetic Storms Can Threaten Electric Power Grid GICs: The Bane of Technology-Dependent Societies by Delores J. Knipp (AGU)
Exploration geophysics
Geomagnetism
Space physics
Space weather | Geomagnetically induced current | Astronomy | 2,445 |
40,042,954 | https://en.wikipedia.org/wiki/Document%20type%20declaration | A document type declaration, or DOCTYPE, is an instruction that associates a particular XML or SGML document (for example, a web page) with a document type definition (DTD) (for example, the formal definition of a particular version of HTML 2.0 - 4.0). In the serialized form of the document, it manifests as a short string of markup that conforms to a particular syntax.
The HTML layout engines in modern web browsers perform DOCTYPE "sniffing" or "switching", wherein the DOCTYPE in a document served as text/html determines a layout mode, such as "quirks mode" or "standards mode". The text/html serialization of HTML5, which is not SGML-based, uses the DOCTYPE only for mode selection. Since web browsers are implemented with special-purpose HTML parsers, rather than general-purpose DTD-based parsers, they do not use DTDs and never access them even if a URL is provided. The DOCTYPE is retained in HTML5 as a "mostly useless, but required" header only to trigger "standards mode" in common browsers.
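A minimal Python sketch (using the standard library's html.parser; an illustration, not something described in the article) shows that an HTML parser handles the DOCTYPE as a plain declaration token and never fetches a DTD, consistent with browsers using it only for mode selection:

from html.parser import HTMLParser

class DoctypeSniffer(HTMLParser):
    def handle_decl(self, decl):
        # decl is the raw contents of the declaration, e.g. "DOCTYPE html"
        print("Declaration seen:", decl)

DoctypeSniffer().feed("<!DOCTYPE html><html><head><title>x</title></head><body></body></html>")
# Output: Declaration seen: DOCTYPE html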
Syntax
The general syntax for a document type declaration is:
<!DOCTYPE root-element PUBLIC "/quotedFPI/" "/quotedURI/" [
<!-- internal subset declarations -->
]>
or
<!DOCTYPE root-element SYSTEM "/quotedURI/" [
<!-- internal subset declarations -->
]>
Document type name
The opening syntax is followed by separating syntax (such as spaces, or (except in XML) comments opened and closed by a doubled ASCII hyphen), followed by a document type name (i.e. the name of the root element that the DTD applies to trees descending from). In XML, the root element that represents the document is the first element in the document. For example, in XHTML, the root element is <html>, being the first element opened (after the doctype declaration) and last closed.
Since the syntax for the external identifier and internal subset are both optional, the document type name is the only information which it is mandatory to give in a DOCTYPE declaration.
External identifier
The DOCTYPE declaration can optionally contain an external identifier, following the root element name (and separating syntax such as spaces), but before any internal subset. This begins with either the keyword PUBLIC or the keyword SYSTEM, specifying whether the DTD is specified using a public identifier identifying it as a public text, i.e. one shared between multiple computer systems (regardless of whether it is an available public text available to the general public, or an unavailable public text shared only within an organisation). If the PUBLIC keyword is used, it is followed by the public identifier enclosed in double or single ASCII quotation marks. The public identifier does not point to a storage location, but is rather a unique fixed string intended to be looked up in a table (such as an SGML catalog); however, in some (but not all) SGML profiles, the public identifier must be constructed using a particular syntax called Formal Public Identifier (FPI), which specifies the owner as well as whether it is available to the general public.
The public identifier (if present) or the SYSTEM keyword (otherwise) may (and, in XML, must) be followed by a "system identifier" that is likewise enclosed in quotation marks. Although the interpretation of system identifiers in general SGML is entirely system-dependent (and might be a filename, database key, offset, or something else), XML requires that they be URIs. For example, the FPI for XHTML 1.1 is -//W3C//DTD XHTML 1.1//EN, and there are three possible system identifiers available for XHTML 1.1 depending on the needs. One of them is the URL reference http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd. It means that the XML parser must locate the DTD in a system-specific fashion, in this case by means of a URL reference to the DTD enclosed in double quote marks.
In XHTML documents, the doctype declaration must always explicitly specify a system identifier. In SGML-based documents like HTML, on the other hand, the appropriate system identifier may automatically be inferred from the given public identifier. This association might e.g. be performed by means of a catalog file resolving the FPI to a system identifier. The SYSTEM keyword can (except in XML) also be used without a system identifier following, indicating that a DTD exists but should be inferred from the document type name.
Internal subset
The last, optional, part of a DOCTYPE declaration is surrounded by literal square brackets ([ ]), and called an internal subset. It can be used to add/edit entities or add/edit PUBLIC keyword behaviors. It is possible, but uncommon, to include the entire DTD in-line in the document, within the internal subset, rather than referencing it from an external file. Conversely, the internal subset is sometimes forbidden within simple SGML profiles, notably those for basic HTML parsers that don't implement a full SGML parser.
If both an internal DTD subset and an external identifier are included in a DOCTYPE declaration, the internal subset is processed first, and the external DTD subset is treated as if it were transcluded at the end of the internal subset. Since earlier definitions take precedence over later definitions in a DTD, this allows the internal subset to override definitions in the external subset.
Example
The first line of a World Wide Web page may read as follows:
<!DOCTYPE html PUBLIC
"-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html lang="ar" dir="ltr" xmlns="http://www.w3.org/1999/xhtml">
This document type declaration for XHTML includes by reference a DTD, whose public identifier is -//W3C//DTD XHTML 1.0 Transitional//EN and whose system identifier is http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd. An entity resolver may use either identifier for locating the referenced external entity. No internal subset has been indicated in this example or the next ones. The root element is declared to be html and, therefore, it is the first tag to be opened after the end of the doctype declaration in this example and the next ones, too. The HTML tag is not part of the doctype declaration but has been included in the examples for orientation purposes.
Common DTDs
Some common DTDs have been put into lists. W3C has produced a list of DTDs commonly used in the web, which contains the "bare" HTML5 DTD, older XHTML/HTML DTDs, DTDs of common embedded XML-based formats like MathML and SVG as well as "compound" documents that combine those formats. Both W3C HTML5 and its corresponding WHATWG version recommend browsers to only accept XHTML DTDs of certain FPIs and to prefer using internal logic over fetching external DTD files. It further specifies an "internal DTD" for XHTML which is merely a list of HTML entity names.
HTML 4.01 DTDs
Strict DTD does not allow presentational markup with the argument that Cascading Style Sheets should be used for that instead. This is how the Strict DTD looks:
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
"http://www.w3.org/TR/html4/strict.dtd">
<html>
Transitional DTD allows some older elements and attributes that have been deprecated:
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
"http://www.w3.org/TR/html4/loose.dtd">
<html>
If frames are used, the Frameset DTD must be used instead, like this:
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Frameset//EN"
"http://www.w3.org/TR/html4/frameset.dtd">
<html>
XHTML 1.0 DTDs
XHTML's DTDs are also Strict, Transitional and Frameset.
XHTML Strict DTD. No deprecated tags are supported and the code must be written correctly according to the XML specification.
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html
PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
XHTML Transitional DTD is like the XHTML Strict DTD, but deprecated tags are allowed.
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html
PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
XHTML Frameset DTD is the only XHTML DTD that supports Frameset. The DTD is below.
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html
PUBLIC "-//W3C//DTD XHTML 1.0 Frameset//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-frameset.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
XHTML 1.1 DTD
XHTML 1.1 is the most current finalized revision of XHTML, introducing support for XHTML Modularization. XHTML 1.1 has the stringency of XHTML 1.0 Strict.
<!DOCTYPE html PUBLIC
"-//W3C//DTD XHTML 1.1//EN"
"http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
XHTML Basic DTDs
XHTML Basic 1.0
<!DOCTYPE html PUBLIC
"-//W3C//DTD XHTML Basic 1.0//EN"
"http://www.w3.org/TR/xhtml-basic/xhtml-basic10.dtd">
XHTML Basic 1.1
<!DOCTYPE html PUBLIC
"-//W3C//DTD XHTML Basic 1.1//EN"
"http://www.w3.org/TR/xhtml-basic/xhtml-basic11.dtd">
HTML5 DTD-less DOCTYPE
HTML5 uses a DOCTYPE declaration which is very short, due to its lack of references to a DTD in the form of a URL or FPI. All it contains is the tag name of the root element of the document, HTML. In the words of the specification draft itself:
<!DOCTYPE html>, case-insensitively.
With the exception of the lack of a URI or the FPI string (the FPI string is treated case-sensitively by validators), this format (a case-insensitive match of the string !DOCTYPE HTML) is the same as found in the syntax of the SGML-based HTML 4.01 DOCTYPE. Both in HTML4 and in HTML5, the formal syntax is defined in upper-case letters, even though both lower case and mixes of lower and upper case are also treated as valid.
In XHTML5 the DOCTYPE must be a case-sensitive match of the string "<!DOCTYPE html>". This is because in XHTML syntax all HTML element names are required to be in lower case, including the root element referenced inside the HTML5 DOCTYPE.
The DOCTYPE is optional in XHTML5 and may simply be omitted. However, if the markup is to be processed as both XML and HTML, a DOCTYPE should be used.
See also
Document type definition contains an example
RDFa
XML schema
References
External links
HTML Doctype overview
Recommended DTDs to use in your Web document - an informative (not normative) W3C Quality Assurance publication
DOCTYPE grid - another overview table [Last modified 27 November 2006]
Quirks mode and transitional mode
Box model tweaking
XML-based standards
SGML | Document type declaration | Technology | 2,877 |
10,446,876 | https://en.wikipedia.org/wiki/Fate%20mapping | Fate mapping is a method used in developmental biology to study the embryonic origin of various adult tissues and structures. The "fate" of each cell or group of cells is mapped onto the embryo, showing which parts of the embryo will develop into which tissue. When carried out at single-cell resolution, this process is called cell lineage tracing. It is also used to trace the development of tumors. Fate mapping and cell lineage are similar methods for tracing the history of cells.
History
Fate maps were created with the intent of tracing a specified region during the early developmental transition of an embryo to a distinct body structure. The first fate maps originate in the 1880s. The early fate maps in 1905 were created by Edwin Conklin and were based on direct observation of the embryos of ascidians (sea squirts) and other marine invertebrates. Modern fate mapping began in 1929 when Walter Vogt invented a process which involved marking a specific region of a developing embryo using a dyed agar chip and tracking the cells through gastrulation. To achieve this, Vogt allowed dye and agar to dry on a microscope plate, and placed small pieces onto specific embryo locations. As the embryo developed, he repeated this process to analyze the movement of cells. This procedure enabled Vogt to create accurate fate maps, introducing an innovative approach to morphogenesis research. In 1978, horseradish peroxidase (HRP) was introduced as a more effective marker that required embryos to be fixed before viewing. Fate mapping can also be done through the use of molecular barcodes, which are introduced to the cell by retroviruses.
Genetic fate mapping is a technique developed in 1981 which uses a site-specific recombinase to track cell lineage genetically. This process does not require manipulating the embryo or the organ. The genetic basis of the labelling guarantees the inheritance of the marker by all offspring originating from the initially labelled cells, overcoming the issue of dilution associated with dye markers during cell division, thus offering high precision and resolution.
Overall, fate mapping serves as an important tool in many fields of biology research today, such as developmental biology, stem cell research, and kidney research.
How Fate Mapping Differs from Cell Lineage
In 1905, the first experiment using cell lineage was conducted, involving tracking cells of the tunicate Styela partita. Cell lineage entails tracing a particular cell's path from one of the three germ layers. Fate mapping and cell lineage are related concepts that often overlap. For example, the development of the complete cell lineage of C. elegans can be described as the fate maps of each cell division stacked hierarchically. The distinction between the topics lies within the type of information being analyzed. Fate mapping shows which tissues come from which part of the embryo at a certain stage in development, whereas cell lineage shows the relationships between cells at each division. A cell lineage can be used to generate a fate map, and in cases like C. elegans, successive fate mapping can be used to develop a cell lineage.
See also
Cell fate determination
References
External links
http://worms.zoology.wisc.edu/frogs/gast/gast_fatemap.html
Fate-Mapping Technique: Using Carbocyanine Dyes for Vital Labeling of Cells in Gastrula-Stage Mouse Embryos Cultured in Vitro
Developmental biology
Molecular biology techniques | Fate mapping | Chemistry,Biology | 688 |
73,535,353 | https://en.wikipedia.org/wiki/Lys-MDA | Lys-MDA (Lysine-MDA, N-(L-lysinamidyl)-3,4-methylenedioxyamphetamine) is a substituted amphetamine derivative with empathogenic effects, which acts as a prodrug for MDA with a slower onset of effects and longer duration of action. Lys-MDA, along with the related derivative Lys-MDMA, are in early stage human clinical trials as potential treatments for treatment-resistant depression and post-traumatic stress disorder. New MDMA prodrugs are in the development phase at MiHKAL GmbH in Switzerland. A phase 1 clinical trial comparing MDMA, MDA, Lys-MDMA, and Lys-MDA has been completed as of August 2024.
See also
Lisdexamfetamine
Serdexmethylphenidate
N-t-BOC-MDMA
References
Designer drugs
Entactogens and empathogens
Prodrugs
Serotonin-norepinephrine-dopamine releasing agents
Substituted amphetamines | Lys-MDA | Chemistry | 230 |
35,929,255 | https://en.wikipedia.org/wiki/List%20of%20sequenced%20protist%20genomes | This list of sequenced protist genomes contains all the protist species known to have publicly available complete genome sequences that have been assembled, annotated and published; draft genomes are not included, nor are organelle only sequences.
Alveolata
Alveolata are a group of protists which includes the Ciliophora, Apicomplexa and Dinoflagellata. Members of this group are of particular interest to science as the cause of serious human and livestock diseases.
Amoebozoa
Amoebozoa are a group of motile amoeboid protists; members of this group move or feed by means of temporary projections, called pseudopods. The best known member of this group is the slime mold, which has been studied for centuries; other members include the Archamoebae, Tubulinea and Flabellinia. Some Amoebozoa cause disease.
Chromista
The Chromista are a group of protists that contains the algal phyla Heterokontophyta (stramenopiles), Haptophyta and Cryptophyta. Members of this group are mostly studied for evolutionary interest.
Excavata
Excavata is a group of related free living and symbiotic protists; it includes the Metamonada, Loukozoa, Euglenozoa and Percolozoa. They are researched for their role in human disease.
Opisthokonts, basal
Opisthokonts are a group of eukaryotes that include both animals and fungi as well as basal groups that are not classified in these groups. These basal opisthokonts are reasonably categorized as protists and include choanoflagellates, which are the sister or near-sister group of animals.
See also
List of sequenced bacterial genomes
List of sequenced animal genomes
List of sequenced eukaryotic genomes
List of sequenced fungi genomes
List of sequenced plant genomes
List of sequenced algae genomes
References
Biology-related lists
Protist | List of sequenced protist genomes | Engineering,Biology | 433 |
22,265,215 | https://en.wikipedia.org/wiki/807%20%28vacuum%20tube%29 | The 807 is a beam tetrode vacuum tube, widely used in audio- and radio-frequency power amplifier applications.
Audio uses
807s were used in audio power amplifiers, both for public address and Hi-Fi applications, usually being run in push-pull pairs in class AB1 or AB2, giving up to 120 watts of usable power. The plate voltage limit is 750 volts and the screen grid is limited to 300 volts. Because of the 300 volt screen grid voltage limit, the 807 cannot be triode-connected for high power applications; failure to observe this precaution will cause screen grid failure. Less commonly, a single 807 was used in a pure class-A, single-ended audio output stage delivering about 10 watts.
RF uses
The 807 is fully rated to 60 MHz, derated to 55% at 125 MHz in Class C, plate-modulated operation, and thus it was popular with amateur radio operators (radio hams).
In this application a single 807 could be run in class-C as an oscillator or amplifier which could be keyed on and off to transmit Morse Code in CW mode. For voice transmission on AM a final amplifier with one or more 807s, up to about four, could be connected in parallel running class-C. Connecting multiple 807s in parallel produced more power to feed to the antenna. Often the modulator stage (simply a transformer-coupled audio amplifier for A.M., with the secondary of its output transformer in series with the anode supply of the final amplifier), was also constructed using 807s. Many hams found multiple paralleled 807s a cheaper alternative to a single larger valve, such as a single 813, as many military surplus 807s became available cheaply after World War II. In Australia 807s are affectionately referred to as "stubbies" because they are almost as ubiquitous as that common Australian beer container.
The class C operational values in the info box at the right are for anode modulated A.M. operation; for CW operation a maximum anode voltage of 600 is permissible, whereby the anode current increases to 100 mA and the anode/plate dissipation rises to 25 watts. The screen voltage is the same, at 300, but its dissipation rises to 3.5 watts.
37 watts of R.F. power is produced from 220 mW of drive, but only a 50% duty cycle is allowed. The maximum allowable negative control grid (g1) excursion is −200 volts, and the average control grid current is 5 mA in both A.M. and CW modes.
Later versions could be used on CW with a supply voltage up to 750 V and a current of 100 mA to produce 50-55 watts of output power.
Differences from 6L6
The electrically similar 6L6 was not favored by hams because high transient voltages on the anode when operating in class C could cause a flashover between pins 2 and 3 on the octal base, whereas the 807 had the anode connected to a top cap, physically distant from all the base pins.
Derivatives
The 1624 (VT-165) is an 807 variant with a directly heated filamentary cathode operating at 2.5 V, 2 A.
The 1625 (VT-136) is an 807 variant with a 12.6 V heater and a 7-pin base. These tubes were used as RF power amplifiers in some of the SCR-274 and AN/ARC-5 "command set" transmitters of WW2. Postwar, 1625 tubes flooded the surplus market, and were available for pennies apiece. Surplus 1625s found some commercial use, notably the use of a pair as modulator tubes in the Heathkit DX-100 amateur transmitter.
The HY-69 is an 807 variant with a 5-pin base and a directly heated filamentary cathode operating at 6.3 V, 1.6 A.
The 5933/807W is a ruggedized military version of the 807. It uses a shorter, straight-sided T12 bulb, which provides better element support, reducing microphonics and improving shock/vibration resistance.
The ATS-25 is a military version with ceramic base.
The Г-807 (G-807) is a Soviet/Russian version. The 6П7С (6P7S) is similar to Г-807, but with an 8-pin octal base.
The 807 also found some use as a horizontal output tube in early TV receivers, particularly those manufactured by DuMont. The 807 design (with some "value engineering" to reduce production cost) was the basis for the first application-specific horizontal sweep tubes such as the 6BG6G and 6CD6G. The redesign mainly involved the omission of some of the internal RF shielding, and the substitution of a bakelite octal base for the micanol or ceramic 5-pin.
In turn, these low cost sweep tube derivatives found some use as RF power amplifiers in homebrew amateur radio transmitters in the 1950s.
Slang
Ham operators in the US sometimes use the term "807" to refer to bottles of beer due to the shape of the tube.
See also
KT66
KT88
6L6
6CA7 / EL34
6V6
SY4307A
References
Vacuum tubes | 807 (vacuum tube) | Physics | 1,119 |
2,886,280 | https://en.wikipedia.org/wiki/Tetrakis%20square%20tiling | In geometry, the tetrakis square tiling is a tiling of the Euclidean plane. It is a square tiling with each square divided into four isosceles right triangles from the center point, forming an infinite arrangement of lines. It can also be formed by subdividing each square of a grid into two triangles by a diagonal, with the diagonals alternating in direction, or by overlaying two square grids, one rotated by 45 degrees from the other and scaled by a factor of √2.
Conway, Burgiel, and Goodman-Strauss call it a kisquadrille, represented by a kis operation that adds a center point and triangles to replace the faces of a square tiling (quadrille). It is also called the Union Jack lattice because of the resemblance to the UK flag of the triangles surrounding its degree-8 vertices.
It is labeled V4.8.8 because each isosceles triangle face has two types of vertices: one with 4 triangles, and two with 8 triangles.
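As a concrete illustration of the construction described above (a sketch only; the grid size and coordinates are assumptions, not taken from the article), the following Python snippet enumerates the four isosceles right triangles that the kis operation places in each unit square of a grid. The added square centres become the vertices where 4 triangles meet, and the original grid points the vertices where 8 triangles meet:

def tetrakis_triangles(n):
    """Return the triangles of the tetrakis subdivision of an n-by-n square grid."""
    triangles = []
    for i in range(n):
        for j in range(n):
            centre = (i + 0.5, j + 0.5)                       # added centre point of the square
            corners = [(i, j), (i + 1, j), (i + 1, j + 1), (i, j + 1)]
            for k in range(4):                                 # one triangle per square edge
                triangles.append((corners[k], corners[(k + 1) % 4], centre))
    return triangles

print(len(tetrakis_triangles(3)))   # 3*3 squares, 4 triangles each -> 36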
As a dual uniform tiling
It is the dual tessellation of the truncated square tiling which has one square and two octagons at each vertex.
Applications
A 5 × 9 portion of the tetrakis square tiling is used to form the board for the Malagasy board game Fanorona. In this game, pieces are placed on the vertices of the tiling, and move along the edges, capturing pieces of the other color until one side has captured all of the other side's pieces. In this game, the degree-4 and degree-8 vertices of the tiling are called respectively weak intersections and strong intersections, a distinction that plays an important role in the strategy of the game. A similar board is also used for the Brazilian game Adugo, and for the game of Hare and Hounds.
The tetrakis square tiling was used for a set of commemorative postage stamps issued by the United States Postal Service in 1997, with an alternating pattern of two different stamps. Compared to the simpler pattern for triangular stamps in which all diagonal perforations are parallel to each other, the tetrakis pattern has the advantage that, when folded along any of its perforations, the other perforations line up with each other, making repeated folding possible.
This tiling also forms the basis for a commonly used "pinwheel", "windmill", and "broken dishes" patterns in quilting.
Symmetry
The symmetry type is:
with the coloring: cmm; a primitive cell is 8 triangles, a fundamental domain 2 triangles (1/2 for each color)
with the dark triangles in black and the light ones in white: p4g; a primitive cell is 8 triangles, a fundamental domain 1 triangle (1/2 each for black and white)
with the edges in black and the interiors in white: p4m; a primitive cell is 2 triangles, a fundamental domain 1/2
The edges of the tetrakis square tiling form a simplicial arrangement of lines, a property it shares with the triangular tiling and the kisrhombille tiling.
These lines form the axes of symmetry of a reflection group (the wallpaper group [4,4], (*442) or p4m), which has the triangles of the tiling as its fundamental domains. This group is isomorphic to, but not the same as, the group of automorphisms of the tiling, which has additional axes of symmetry bisecting the triangles and which has half-triangles as its fundamental domains.
There are many small index subgroups of p4m, [4,4] symmetry (*442 orbifold notation), that can be seen in relation to the Coxeter diagram, with nodes colored to correspond to reflection lines, and gyration points labeled numerically. Rotational symmetry is shown by alternately white and blue colored areas, with a single fundamental domain for each subgroup filled in yellow. Glide reflections are given with dashed lines.
Subgroups can be expressed as Coxeter diagrams, along with fundamental domain diagrams.
See also
Tilings of regular polygons
List of uniform tilings
Percolation threshold
Notes
References
(Chapter 2.1: Regular and uniform tilings, p. 58-65)
Keith Critchlow, Order in Space: A design source book, 1970, p. 77-76, pattern 8
Euclidean tilings
Isohedral tilings | Tetrakis square tiling | Physics,Mathematics | 907 |
14,862,085 | https://en.wikipedia.org/wiki/3C%20303 | 3C 303 is a Seyfert galaxy with a quasar-like appearance located in the constellation Boötes.
3C 303 is also a radio galaxy and contains an extragalactic radio source. A clearly defined jet is seen, showing a polarization variation trend. With a diffuse patch of optical emission located at the intersection of the jet where a radio lobe is connected, the jet is confirmed to target a region of interstellar medium.
References
External links
www.jb.man.ac.uk/atlas/ (J. P. Leahy)
Radio galaxies
Seyfert galaxies
303
3C 303
Boötes | 3C 303 | Astronomy | 131 |
34,039,715 | https://en.wikipedia.org/wiki/Mitosis%20inducer%20protein%20kinase%20cdr2 | Cdr2 is a serine/threonine protein kinase mitotic regulator in the fission yeast S. pombe. It is encoded by the P87050 2247 bp ORF on the cosmid 57A10. The protein is 775 amino acids in length. Cdr2 is a member of the GIN4 family of kinases, which prevent progression of mitosis if there is a problem with septin. The N-terminus contains a sequence characteristic of serine/threonine protein kinase activity. The C-terminus, while non-catalytic, is necessary for proper localization of Cdr2 during interphase.
Cdr2 null constructs behave similarly to wild-type constructs, the only difference being a slight delay in mitotic entry; consequently, cells are slightly larger than in wild-type constructs. Cdr2 is therefore non-essential. Cdr2 regulates mitotic entry through direct inhibition of Wee1, which is then unable to inhibit Cdk1, allowing mitosis to begin.
Cell localization
During interphase (G1, S, G2), Cdr2 is localized in a wide medial band that is centered on the nucleus. The C-terminus is required for correct localization; cleavage of any number of residues close to the carboxy terminus results in abnormal distribution. Pom1 phosphorylates Cdr2 on the C-terminus, and prevents it from spreading beyond the medial band. The width of the cortical band increases proportionately with the length of the growing cell; the final limit is at approximately 30% of the total cell length before the cell enters mitosis. When the cell enters mitosis, Cdr2 is distributed diffusely through the cytoplasm; there is no detectable cortical band in metaphase or anaphase. During septation at the end of anaphase, Cdr2 localizes to the contractile ring. After cytokinesis, Cdr2 is again distributed in a broad medial band centered on the nucleus.
Regulation of Mitotic Entry
Pom1 is a serine/threonine protein kinase that localizes to the cell tips. It is a partial mechanism for the formation of the medial distribution of Cdr2 in the cell; Pom1 has been demonstrated to prevent Cdr2 from diffusing into the non-growing end of the cell in interphase. As seen in figure 1, Pom1 directly inhibits Cdr2. This is done through phosphorylation of Cdr2 on a residue between 423 and 532 on the non-catalytic C-terminus. Once phosphorylated, Cdr2 is unable to inhibit the kinase Wee1, which is then able to maintain CDK1 in a hyper-phosphorylated state incapable of progression into mitosis. Mutation and deletion of Cdr2 result in a delay into mitotic entry, leading to larger cells. However, the cells still enter mitosis, presumably because Cdr2 is the link in only one pathway that couples cell length to mitotic entry. Thus, Cdr2 is non-essential to the decision to enter mitosis.
Interaction with Pom1 Gradient
Pom1 is distributed in a gradient from the cell tips, with the maximum concentration in the cell tips and the lower concentrations in the medial region of the cell, roughly overlapping with the wide medial band occupied by Cdr2. In small cells, there is a higher level of overlap between Pom1 and Cdr2, leading to a greater degree of inhibition of Cdr2; conversely Cdr2 is unable to inhibit Wee1, which is then free to inhibit CDK1. Therefore, Cdr2 functions as part of a cell-size sensor pathway. In larger cells, there is much lower degree of overlap between Pom1 and Cdr2, allowing Cdr2 to inhibit Wee1; CDK1 is then able to promote mitotic entry.
References
Cell cycle | Mitosis inducer protein kinase cdr2 | Biology | 831 |
42,556,315 | https://en.wikipedia.org/wiki/Grabyo | Grabyo is a browser-based live video production suite integrated with other social media platforms such as Facebook, YouTube, Instagram, Snapchat, Twitter, and Periscope. Sports federations and media companies use cloud-based technology to produce professional-quality live streams and video clips for digital audiences.
Founded in 2013, the company produces and distributes live shows (such as sports or music events) and video clips (such as pre-match warm-ups, behind-the-scene activities, and instant highlights). It is used to build digital fan bases, drive TV audiences and generate revenue from third-party sponsors and pay-TV subscriptions. Its customers include major sports rights owners and media companies such as La Liga, NHL, Eurosport, Sky Sports, FIFA World Cup, FIA Formula E Championship, The Championships, Wimbledon, the Premier League and Real Madrid C.F.
Grabyo ranked 77th in the Financial Times' FT 1000 Europe's Fastest Growing Companies 2018.
Investors
The company's investors include Oliver Slipper, Nicole Junkermann, Cesc Fàbregas, Thierry Henry, Robin van Persie and Tony Parker.
References
External links
Company home page
Bloomberg TV interview of Grabyo CEO Gareth Capon
Real-time web
Social software
Mobile content
Technology companies established in 2013
British companies established in 2013
Technology companies based in London | Grabyo | Technology | 275 |
1,014,414 | https://en.wikipedia.org/wiki/Antiporter | An antiporter (also called exchanger or counter-transporter) is an integral membrane protein that uses secondary active transport to move two or more molecules in opposite directions across a phospholipid membrane. It is a type of cotransporter, which means that uses the energetically favorable movement of one molecule down its electrochemical gradient to power the energetically unfavorable movement of another molecule up its electrochemical gradient. This is in contrast to symporters, which are another type of cotransporter that moves two or more ions in the same direction, and primary active transport, which is directly powered by ATP.
Transport may involve one or more of each type of solute. For example, the Na+/Ca2+ exchanger, found in the plasma membrane of many cells, moves three sodium ions in one direction, and one calcium ion in the other. As with sodium in this example, antiporters rely on an established gradient that makes entry of one ion energetically favorable to force the unfavorable movement of a second molecule in the opposite direction. Through their diverse functions, antiporters are involved in various important physiological processes, such as regulation of the strength of cardiac muscle contraction, transport of carbon dioxide by erythrocytes, regulation of cytosolic pH, and accumulation of sucrose in plant vacuoles.
Background
Cotransporters are found in all organisms and fall under the broader category of transport proteins, a diverse group of transmembrane proteins that includes uniporters, symporters, and antiporters. Each of them is responsible for providing a means of movement for water-soluble molecules that otherwise would not be able to pass through the lipid-based plasma membrane. The simplest of these are the uniporters, which facilitate the movement of one type of molecule in the direction that follows its concentration gradient. In mammals, they are most commonly responsible for bringing glucose and amino acids into cells.
Symporters and antiporters are more complex because they move more than one ion and the movement of one of those ions is in an energetically unfavorable direction. As multiple molecules are involved, multiple binding processes must occur as the transporter undergoes a cycle of conformational changes to move them from one side of the membrane to the other. The mechanism used by these transporters limits their functioning to moving only a few molecules at a time. As a result, symporters and antiporters are characterized by a slower transport speed, moving between 10² and 10⁴ molecules per second. Compare this to ion channels, which provide a means for facilitated diffusion to occur and allow between 10⁷ and 10⁸ ions to pass through the plasma membrane per second.
Though ATP-powered pumps also move molecules in an energetically unfavorable direction and undergo conformational changes to do so, they fall under a different category of membrane proteins because they couple the energy derived from ATP hydrolysis to transport their respective ions. These ion pumps are very selective, consisting of a double gating system where at least one of the gates is always shut. The ion is allowed to enter from one side of the membrane while one of the gates is open, after which it will shut. Only then will the second gate open to allow the ion to leave on the membrane's opposite side. The time between the alternating gate opening is referred to as the occluded state, where the ions are bound and both gates are shut. These gating reactions limit the speed of these pumps, causing them to function even slower than transport proteins, moving between 10⁰ and 10³ ions per second.
Structure and function
To function in active transport, a membrane protein must meet certain requirements. The first of these is that the interior of the protein must contain a cavity that is able to contain its corresponding molecule or ion. Next, the protein must be able to assume at least two different conformations, one with its cavity open to the extracellular space and the other with its cavity open to the cytosol. This is crucial for the movement of molecules from one side of the membrane to the other. Finally, the cavity of the protein must contain binding sites for its ligands, and these binding sites must have a different affinity for the ligand in each of the protein's conformations. Without this, the ligand will not be able to bind to the transporter on one side of the plasma membrane and be released from it on the other side. As transporters, antiporters have all of these features.
Because antiporters are highly diverse, their structure can vary widely depending upon the type of molecules being transported and their location in the cell. However, there are some common features that all antiporters share. One of these is multiple transmembrane regions that span the lipid bilayer of the plasma membrane and form a channel through which hydrophilic molecules can pass. These transmembrane regions are typically structured from alpha helices and are connected by loops in both the extracellular space and cytosol. These loops are what contain the binding sites for the molecules associated with the antiporter.
These features of antiporters allow them to carry out their function in maintaining cellular homeostasis. They provide a space where a hydrophilic molecule can pass through the hydrophobic lipid bilayer, allowing them to bypass the hydrophobic interactions of the plasma membrane. This enables the efficient movement of molecules needed for the environment of the cell, such as in the acidification of organelles. The varying affinity of the antiporter for each ion or molecule on either side of the plasma membrane allows it to bind to and release its ligands on the appropriate side of the membrane according to the electrochemical gradient of the ion being harnessed for its energetically favorable concentration.
Mechanism
The mechanism of antiporter transport involves several key steps and a series of conformational changes that are dictated by the structural elements described above:
The substrate binds to its specific binding site on the extracellular side of the plasma membrane, forming a temporary substrate-bound open form of the antiporter.
This becomes an occluded, substrate-bound state that is still facing the extracellular space.
The antiporter undergoes a conformational change to become an occluded, substrate-bound protein that is now facing the cytosol. As it does so, it passes through a temporary fully-occluded intermediate stage.
The substrate is released from the antiporter as it takes on an open, inward-facing conformation.
The antiporter can now bind to its second substrate and transport it in the opposite direction by taking on its transient substrate-bound open state.
This is followed by an occluded, substrate-bound state that is still facing the cytosol, a conformational change with a temporary fully-occluded intermediate stage, and a return to the antiporter's open, outward-facing conformation.
The second substrate is released and the antiporter can return to its original conformation state, where it is ready to bind to new molecules or ions and repeat its transport process.
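The cycle above can be summarized as a simple alternating-access state machine. The following sketch is purely illustrative: the state names and the "substrate A"/"substrate B" labels are hypothetical placeholders rather than the nomenclature of any particular antiporter, and the code does nothing more than enumerate the conformations described in the steps above.

```python
# Illustrative sketch only: state names and substrate labels are hypothetical
# placeholders, not the nomenclature of any specific antiporter.
CYCLE = [
    ("outward-open",     None),           # resting state, cavity open to the extracellular space
    ("outward-open",     "substrate A"),  # first substrate binds on the extracellular side
    ("outward-occluded", "substrate A"),  # occluded, still outward-facing
    ("fully-occluded",   "substrate A"),  # transient fully-occluded intermediate...
    ("inward-occluded",  "substrate A"),  # ...then occluded but inward-facing
    ("inward-open",      "substrate A"),  # cavity opens to the cytosol
    ("inward-open",      None),           # first substrate released
    ("inward-open",      "substrate B"),  # second substrate binds from the cytosol
    ("inward-occluded",  "substrate B"),  # occluded, inward-facing
    ("fully-occluded",   "substrate B"),  # transient fully-occluded intermediate
    ("outward-occluded", "substrate B"),  # back to outward-facing
    ("outward-open",     "substrate B"),  # cavity opens outward
    ("outward-open",     None),           # second substrate released; cycle repeats
]

for conformation, cargo in CYCLE:
    print(f"{conformation:17} | carrying {cargo or 'nothing'}")
```

The point the sketch makes explicit is that the transporter is never open to both sides of the membrane at once; every switch between outward-facing and inward-facing conformations passes through an occluded state, mirroring the double-gating behavior described for ion pumps earlier in this article.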
History
Antiporters were discovered as scientists were exploring ion transport mechanisms across biological membranes. The early studies took place in the mid-20th century and were focused on the mechanisms that transported ions such as sodium, potassium, and calcium across the plasma membrane. Researchers made the observation that these ions were moved in opposite directions and hypothesized the existence of membrane proteins that could facilitate this type of transport.
In the 1960s, biochemist Efraim Racker made a breakthrough in the discovery of antiporters. Through purification from bovine heart mitochondria, Racker and his colleagues found a mitochondrial protein that could exchange inorganic phosphate for hydroxide ions. The protein is located in the inner mitochondrial membrane and transports phosphate ions for use in oxidative phosphorylation. It became known as the phosphate-hydroxide antiporter, or mitochondrial phosphate carrier protein, and was the first example of an antiporter identified in living cells.
As time went on, researchers discovered other antiporters in different membranes and in various organisms. This includes the sodium-calcium exchanger (NCX), another crucial antiporter that regulates intracellular calcium levels through the exchange of sodium ions for calcium ions across the plasma membrane. It was discovered in the 1970s and is now a well-characterized antiporter known to be found in many different types of cells.
Advances in the fields of biochemistry and molecular biology have enabled the identification and characterization of a wide range of antiporters. Understanding the transport processes of various molecules and ions has provided insight into cellular transport mechanisms, as well as the role of antiporters in various physiological functions and in the maintenance of homeostasis.
Role in homeostasis
Sodium-calcium exchanger
The sodium-calcium exchanger, also known as the Na+/Ca2+ exchanger or NCX, is an antiporter responsible for removing calcium from cells. This title encompasses a class of ion transporters that are commonly found in the heart, kidney, and brain. They use the energy stored in the electrochemical gradient of sodium to exchange the flow of three sodium ions into the cell for the export of one calcium ion. Though this exchanger is most common in the membranes of the mitochondria and the endoplasmic reticulum of excitable cells, it can be found in many different cell types in various species.
Although the sodium-calcium exchanger has a low affinity for calcium ions, it can transport a high amount of the ion in a short period of time. Because of these properties, it is useful in situations where there is an urgent need to export high amounts of calcium, such as after an action potential has occurred. Its characteristics also enable NCX to work with other proteins that have a greater affinity for calcium ions without interfering with their functions. NCX works with these proteins to carry out functions such as cardiac muscle relaxation, excitation-contraction coupling, and photoreceptor activity. They also maintain the concentration of calcium ions in the sarcoplasmic reticulum of cardiac cells, endoplasmic reticulum of excitable and nonexcitable cells, and the mitochondria.
Another key characteristic of this antiporter is its reversibility. This means that if the cell is depolarized enough, the extracellular sodium level is low enough, or the intracellular level of sodium is high enough, NCX will operate in the reverse direction and begin bringing calcium into the cell. For example, when NCX functions during excitotoxicity, this characteristic allows it to have a protective effect because the accompanying increase in intracellular calcium levels enables the exchanger to work in its normal direction regardless of the sodium concentration. Another example is the depolarization of cardiac muscle cells, which is accompanied by a large increase in the intracellular sodium concentration that causes NCX to work in reverse. Because the concentration of calcium is carefully regulated during the cardiac action potential, this is only a temporary effect as calcium is pumped out of the cell.
The sodium-calcium exchanger's role in maintaining calcium homeostasis in cardiac muscle cells allows it to help relax the heart muscle as it exports calcium during diastole. Therefore, its dysfunction can result in abnormal calcium movement and the development of various cardiac diseases. Abnormally high intracellular calcium levels can hinder diastole and cause abnormal systole and arrhythmias. Arrhythmias can occur when calcium is not properly exported by NCX, causing delayed afterdepolarizations and triggering abnormal activity that can possibly lead to atrial fibrillation and ventricular tachycardia.
If the heart experiences ischemia, the inadequate oxygen supply can disrupt ion homeostasis. When the body tries to stabilize this by returning blood to the area, ischemia-reperfusion injury, a type of oxidative stress, occurs. If NCX is dysfunctional, it can exacerbate the increase of calcium that accompanies reperfusion, causing cell death and tissue damage. Similarly, NCX dysfunction has been found to be involved in ischemic strokes. Its activity is upregulated, causing an increased cytosolic calcium level, which can lead to neuronal cell death.
The Na+/Ca2+ exchanger has also been implicated in neurological disorders such as Alzheimer's disease and Parkinson's disease. Its dysfunction can result in oxidative stress and neuronal cell death, contributing to the cognitive decline that characterizes Alzheimer's disease. The dysregulation of calcium homeostasis has been found to be a key part of neuron death and Alzheimer's pathogenesis. For example, neurons that have neurofibrillary tangles contain high levels of calcium and show hyperactivation of calcium-dependent proteins. The abnormal calcium handling of atypical NCX function can also cause the mitochondrial dysfunction, oxidative stress, and neuronal cell death that characterize Parkinson's. In this case, if dopaminergic neurons of the substantia nigra are affected, it can contribute to the onset and development of Parkinson's disease. Although the mechanism is not entirely understood, disease models have shown a link between NCX and Parkinson's and that NCX inhibitors can prevent death of dopaminergic neurons.
Sodium-hydrogen antiporter
The sodium–hydrogen antiporter, also known as the sodium-proton exchanger, Na+/H+ exchanger, or NHE, is an antiporter responsible for transporting sodium into the cell and hydrogen out of the cell. As such, it is important in the regulation of cellular pH and sodium levels. There are differences among the types of NHE antiporter families present in eukaryotes and prokaryotes. The 9 isoforms of this transporter that are found in the human genome fall under several families, including the cation-proton antiporters (CPA 1, CPA 2, and CPA 3) and sodium-transporting carboxylic acid decarboxylase (NaT-DC). Prokaryotic organisms contain the Na+/H+ antiporter families NhaA, NhaB, NhaC, NhaD, and NhaE.
Because enzymes can only function at certain pH ranges, it is critical for cells to tightly regulate cytosolic pH. When a cell's pH is outside of the optimal range, the sodium-hydrogen antiporter detects this and is activated to transport ions as a homeostatic mechanism to restore pH balance. Since ion flux can be reversed in mammalian cells, NHE can also be used to transport sodium out of the cell to prevent excess sodium from accumulating and causing toxicity.
As suggested by its functions, this antiporter is located in the kidney for sodium reabsorption regulation and in the heart for intracellular pH and contractility regulation. NHE plays an important role in the nephron of the kidney, especially in the cells of the proximal convoluted tubule and collecting duct. The sodium-hydrogen antiporter's function is upregulated by Angiotensin II in the proximal convoluted tubule when the body needs to reabsorb sodium and excrete hydrogen.
Plants are sensitive to high amounts of salt, which can halt certain necessary functions of the eukaryotic organism, including photosynthesis. For the organisms to maintain homeostasis and carry out crucial functions, Na+/H+ antiporters are used to rid the cytoplasm of excess sodium by pumping Na+ out of the cell. These antiporters can also close their channel to stop sodium from entering the cell, along with allowing excess sodium within the cell to enter into a vacuole.
Dysregulation of the sodium-hydrogen antiporter's activity has been linked to cardiovascular diseases, renal disorders, and neurological conditions, and NHE inhibitors are being developed to treat these issues. One of the isoforms of the antiporter, NHE1, is essential to the function of the mammalian myocardium. NHE is involved in hypertrophy and in damage to the heart muscle, such as during ischemia and reperfusion. Studies have shown that NHE1 is more active in animal models experiencing myocardial infarction and left ventricular hypertrophy. During these cardiac events, the function of the sodium-hydrogen antiporter causes an increase in the sodium levels of cardiac muscle cells. In turn, the work of the sodium-calcium antiporter leads to more calcium being brought into the cell, which is what results in damage to the myocardium.
Five isoforms of NHE are found in the kidney's epithelial cells. The best studied is NHE3, which is mainly located in the proximal tubules of the kidney and plays a key role in acid-base homeostasis. Issues with NHE3 disrupt the reabsorption of sodium and secretion of hydrogen. The main conditions that NHE3 dysregulation can cause are hypertension and renal tubular acidosis (RTA). Hypertension can occur when more sodium is reabsorbed in the kidneys because water will follow the sodium ions and create an elevated blood volume. This, in turn, leads to elevated blood pressure. RTA is characterized by the inability of the kidneys to acidify the urine due to underactive NHE3 and reduced secretion of hydrogen ions, resulting in metabolic acidosis. On the other hand, overactive NHE3 can lead to excess secretion of hydrogen ions and metabolic alkalosis, where the blood is too alkaline.
NHE can also be linked to neurodegeneration. The dysregulation or loss of the isoform NHE6 can lead to pathological changes in the tau proteins of human neurons, which can have significant consequences. For example, Christianson Syndrome (CS) is an X-linked disorder caused by a loss-of-function mutation in NHE6, which leads to the over-acidification of endosomes. In studies done on postmortem brains of individuals with CS, lower NHE6 function was linked to higher levels of tau deposition. The level of tau phosphorylation was also found to be elevated, which leads to the formation of insoluble tangles that can cause neuronal damage and death. Tau proteins are also implicated in other neurodegenerative diseases, such as Alzheimer's and Parkinson's diseases.
Chloride-bicarbonate antiporter
The chloride-bicarbonate antiporter is crucial to maintaining pH and fluid balance through its function of exchanging bicarbonate and chloride ions through cell membranes. This exchange occurs in many different types of body cells. In the cardiac Purkinje fibers and smooth muscle cells of the ureters, this antiporter is the main mechanism of chloride transport into the cells. Epithelial cells such as those of the kidney use chloride-bicarbonate exchange to regulate their volume, intracellular pH, and extracellular pH. Gastric parietal cells, osteoclasts, and other acid-secreting cells have chloride-bicarbonate antiporters that function in the basolateral membrane to dispose of excess bicarbonate left behind by the function of carbonic anhydrase and apical proton pumps. However, base-secreting cells exhibit apical chloride-bicarbonate exchange and basolateral proton pumps.
An example of a chloride-bicarbonate antiporter is the chloride anion exchanger, also known as down-regulated in adenoma (protein DRA). It is found in the intestinal mucosa, especially in the columnar epithelium and goblet cells of the apical surface of the membrane, where it carries out the function of chloride and bicarbonate exchange. Protein DRA's reuptake of chloride is critical to creating an osmotic gradient that allows the intestine to reabsorb water.
Another well-studied chloride-bicarbonate antiporter is anion exchanger 1 (AE1), which is also known as band 3 anion transport protein or solute carrier family 4 member 1 (SLC4A1). This exchanger is found in red blood cells, where it helps transport bicarbonate and carbon dioxide between the lungs and tissues to maintain acid-base homeostasis. AE1 is also expressed on the basolateral side of the cells of the renal tubules. It is crucial in the collecting duct of the nephron, which is where its acid-secreting α-intercalated cells are located. These cells use carbon dioxide and water to generate hydrogen and bicarbonate ions, in a reaction catalyzed by carbonic anhydrase. The hydrogen is exchanged across the membrane into the lumen of the collecting duct, and thus acid is excreted into the urine.
Because of its importance to the reabsorption of water in the intestine, mutations in protein DRA cause a condition called congenital chloride diarrhea (CCD). This disorder is caused by an autosomal recessive mutation in the DRA gene on chromosome 7. CCD symptoms in newborns are chronic diarrhea with failure to thrive, and the disorder is characterized by diarrhea that causes metabolic alkalosis.
Mutations of kidney AE1 can lead to distal renal tubular acidosis, a disorder characterized by the inability to secrete acid into the urine. This causes metabolic acidosis, where the blood is too acidic. A chronic state of metabolic acidosis can compromise the health of the bones, kidneys, muscles, and cardiovascular system. Mutations in erythrocyte AE1 cause alterations of its function, leading to changes in red blood cell morphology and function. This can have serious consequences because the shape of red blood cells is closely tied to their function of gas exchange in the lungs and tissues. One such condition is hereditary spherocytosis, a genetic disorder characterized by spherical red blood cells. Another is Southeast Asian ovalocytosis, where a deletion in the AE1 gene generates oval-shaped erythrocytes. Finally, overhydrated hereditary stomatocytosis is a rare genetic disorder where red blood cells have an abnormally high volume, leading to changes in hydration status.
The proper function of AE2, an isoform of AE1, is important in gastric secretion, osteoclast differentiation and function, and the synthesis of enamel. The hydrochloric acid secretion at the apical surface of both gastric parietal cells and osteoclasts relies on chloride-bicarbonate exchange in the basolateral surface. Studies found that mice with nonfunctional AE2 did not secrete hydrochloric acid, and it was concluded that the exchanger is necessary for hydrochloric acid loading in parietal cells. When AE2 expression was suppressed in an animal model, cell lines were unable to differentiate into osteoclasts and perform their functions. Additionally, cells that had osteoclast markers but were deficient in AE2 were abnormal compared to the wild-type cells and were unable to resorb mineralized tissue. This demonstrates the importance of AE2 in osteoclast function. Finally, as the hydroxyapatite crystals of enamel are being formed, a lot of hydrogen is produced, which must be neutralized so that mineralization can proceed. Mice with inactivated AE2 were toothless and suffered from incomplete enamel maturation.
Chloride-hydrogen antiporter
The chloride-hydrogen antiporter facilitates the exchange of chloride ions for hydrogen ions across plasma membranes, thus playing a critical role in maintaining acid-base balance and chloride homeostasis. It is found in various tissues, including the gastrointestinal tract, kidneys, and pancreas. The well-known chloride-hydrogen antiporters belong in the CLC family, which have isoforms from CLC-1 to CLC-7, each with a distinct tissue distribution. Their structure involves two CLC proteins coming together to form a homodimer or a heterodimer where both monomers contain an ion translocation pathway. CLC proteins can either be ion channels or anion-proton exchangers, so CLC-1 and CLC-2 are membrane chloride channels, while CLC-3 through CLC-7 are chloride-hydrogen exchangers.
CLC-4 is a member of the CLC family that is prominent in the brain, but is also located in the liver, kidneys, heart, skeletal muscle, and intestine. It likely resides in endosomes and participates in their acidification, but can also be expressed in the endoplasmic reticulum and plasma membrane. Its roles are not entirely clear, but CLC-4 has been found to possibly participate in endosomal acidification, transferrin trafficking, renal endocytosis, and the hepatic secretory pathway.
CLC-5 is one of the best-studied members of this protein family. It shares 80% of its amino acid sequence with CLC-3 and CLC-4, but it is mainly found in the kidney, especially in the proximal tubule, collecting duct, and ascending limb of the loop of Henle. It functions to transport substances through the endosomal membrane, so it is crucial for pinocytosis, receptor-mediated endocytosis, and endocytosis of plasma membrane proteins from the apical surface.
CLC-7 is another example of a CLC family protein. It is ubiquitously expressed as the chloride-hydrogen antiporter in lysosomes and in the ruffled border of osteoclasts. CLC-7 may be important for regulating the concentration of chloride in lysosomes. It is associated with a protein called Ostm1, forming a complex that allows CLC-7 to carry out its functions. For example, these proteins are crucial to the process of acidifying the resorption lacuna, which enables bone remodeling to occur.
CLC-4 has been connected with mental retardation involving seizure disorders, facial abnormalities, and behavior disorders. Studies found frameshift and missense mutations in patients exhibiting these symptoms. Because these symptoms were mostly exhibited in males, with less severe pathology in females, it is likely X-linked. Studies done on animal models have also shown the possibility of a connection between nonfunctional CLC-4 and impaired neural branching of hippocampus neurons.
Defects in the CLC-5 gene were shown to be the cause of 60% of cases of Dent's disease, which is characterized by tubular proteinuria, formation of kidney stones, excess calcium in the urine, nephrocalcinosis, and chronic kidney failure. This is caused by abnormalities that occur in the endocytosis process when CLC-5 is mutated. Dent's disease itself is one of the causes of Fanconi syndrome, which occurs when the proximal convoluted tubules of the kidney do not perform an adequate level of reabsorption. It causes molecules produced by metabolic pathways, such as amino acids, glucose, and uric acid to be excreted in the urine instead of being reabsorbed. The result is polyuria, dehydration, rickets in children, osteomalacia in adults, acidosis, and hypokalemia.
CLC-7's role in osteoclast function was revealed by studies on knockout mice that developed severe osteopetrosis. These mice were smaller, had shortened long bones, disorganized trabecular structure, a missing medullary cavity, and their teeth did not erupt. This was found to be caused by deletion mutations, missense mutations, and gain-of-function mutations that sped up the gating of CLC-7. CLC-7 is expressed in almost every neuronal cell type, and its loss led to widespread neurodegeneration in mice, especially in the hippocampus. In longer-lived models, the cortex and hippocampus had almost entirely disappeared after 1.5 years. Finally, because of its importance in lysosomes, altered expression of CLC-7 can lead to lysosomal storage disorders. Mice with a mutation introduced to the CLC-7 gene developed lysosomal storage disease and retinal degeneration.
Reduced folate carrier protein
The reduced folate carrier protein (RFC) is a transmembrane protein responsible for the transport of folate, or vitamin B9, into cells. It uses the large gradient of organic phosphate to move folate into the cell against its concentration gradient. The RFC protein can transport folates, reduced folates, the derivatives of reduced folate, and the drug methotrexate. The transporter is encoded by the SLC19A1 gene and is ubiquitously expressed in human cells. Its peak activity occurs at pH 7.4, with no activity occurring below pH 6.4. The RFC protein is critical because folates take the form of hydrophilic anions at physiological pH, so they do not diffuse naturally across biological membranes. Folate is essential for processes such as DNA synthesis, repair, and methylation, and without entry into cells, these could not occur.
Because folates are essential for various life-sustaining processes, a deficiency in this molecule can lead to fetal abnormalities, neurological disorders, cardiovascular disease, and cancer. Folates cannot be synthesized in the body, so they must be taken in through the diet and moved into cells. Without the RFC protein facilitating this movement, processes such as embryological development and DNA repair cannot occur.
Adequate folate levels are required for the development of the neural tube in the fetus. Folate deficiency during pregnancy increases the risk of defects such as spina bifida and anencephaly. In mouse models, inactivating both alleles of the RFC protein gene causes death of the embryo. Even when folate was supplemented during gestation, the mice died within two weeks of birth from the failure of hematopoietic tissues.
Altered function of the RFC protein can exacerbate folate deficiency, contributing to cardiovascular disease, neurodegenerative diseases, and cancer. In terms of cardiovascular issues, folate contributes to homocysteine metabolism. Low folate levels result in elevated homocysteine levels, which is a risk factor for cardiovascular diseases. In terms of cancer, folate deficiency is related to an increased risk, especially that of colorectal cancers. Mouse models with altered RFC protein expression showed increased transcripts of genes related to colon cancer and increased proliferation of colonocytes. The cancer risk is likely related to the RFC protein's role in DNA synthesis because inadequate levels of folate can lead to DNA damage and aberrant DNA methylation.
Vesicle neurotransmitter antiporters
Vesicle neurotransmitter antiporters are responsible for packaging neurotransmitters into vesicles in neurons. They utilize the electrochemical gradient of hydrogen protons across the membranes of synaptic vesicles to move neurotransmitters into them. This is essential for the process of synaptic transmission, which requires neurotransmitters to be released into the synapse to bind to receptors on the next neuron.
One of the best characterized of these antiporters is the vesicular monoamine transporter (VMAT). It is responsible for the storage, sorting, and release of neurotransmitters, as well as for protecting them from autoxidation. VMAT's transport functions are dependent on the electrochemical gradient created by a vesicular hydrogen proton-ATPase. VMAT1 and VMAT2 are two isoforms that can transport monoamines such as serotonin, norepinephrine, and dopamine in a proton-dependent fashion. VMAT1 can be found in neuroendocrine cells, while VMAT2 can be found in the neurons of the central and peripheral nervous systems, as well as in adrenal chromaffin cells.
Another important vesicle neurotransmitter antiporter is the vesicular glutamate transporter (VGLUT). This family of proteins includes three isoforms, VGLUT1, VGLUT2, and VGLUT3, that are responsible for packaging glutamate - the most abundant excitatory neurotransmitter in the brain - into synaptic vesicles. These antiporters vary by location. VGLUT1 is found in areas of the brain related to higher cognitive functions, such as the neocortex. VGLUT2 works to regulate basic physiological functions and is expressed in subcortical regions such as the brainstem and hypothalamus. Finally, VGLUT3 can be seen in neurons that also express other neurotransmitters.
VMAT2 has been found to contribute to neurological conditions such as mood disorders and Parkinson's disease. Studies done on an animal model of clinical depression showed that functional alterations of VMAT2 were associated with depression. The nucleus accumbens, pars compacta of the substantia nigra, and ventral tegmental area - all subregions of the brain involved in clinical depression - were found to have lower VMAT2 levels. The likely cause for this is VMAT's relationship with serotonin and norepinephrine, neurotransmitters that are related to depression. VMAT dysfunction may contribute to the altered levels of these neurotransmitters that occur in mood disorders.
Lower expression of VMAT2 was found to correlate with a higher susceptibility to Parkinson's disease and the antiporter's mRNA was found in all cell groups damaged by Parkinson's. This is likely because VMAT2 dysfunction can lead to a decrease in dopamine packaging into vesicles, accounting for the dopamine depletion that characterizes the disease. For this reason, the antiporter has been identified as a protective factor that could be targeted for the prevention of Parkinson's.
Because alterations in glutamate release have been linked to the generation of seizures in epilepsy, alterations in the function of VGLUT may be implicated. A study was conducted where the VGLUT1 gene was inactivated in the astrocytes and neurons of an animal model. When the gene was inactivated in astrocytes, there was an 80% loss in the antiporter protein itself and, in turn, a reduction in glutamate uptake. The mice in this condition experienced seizures, lower body mass, and higher mortality rates. The researchers concluded that VGLUT1 function in astrocytes is therefore critical to epilepsy resistance and normal weight gain.
There is considerable evidence that the glutamate system plays a role in long-term cell growth and synaptic plasticity. Disturbances of these processes have been linked to the pathology of mood disorders. The link between the function of the glutamatergic neurotransmitter system and mood disorders positions VGLUT as one of the targets for treatment.
See also
Active transport
Adenine nucleotide translocator
Cotransporter
Reduced folate carrier family
Sodium-calcium exchanger
Sodium-hydrogen antiporter
Symporter
Uniporter
Vesicular monoamine transporter
References
Further reading
External links
Integral membrane proteins
Transport phenomena | Antiporter | Physics,Chemistry,Engineering | 7,406 |
75,222,055 | https://en.wikipedia.org/wiki/NGC%202001 | NGC 2001 (also known as PGC 3518062, 056-SC137, SL 507 and part of LH 64) is an open cluster located in the Dorado constellation and is part of the Large Magellanic Cloud.
Background
It was discovered by James Dunlop on September 27, 1826. It has an apparent size of 7 by 3.5 arc minutes, and it is also known as GC 1204, h 2888, and Dunlop 178 according to both cseligman and SEDS. However, Wolfgang Steinicke lists this as Dunlop 136, not Dunlop 178.
It lies at the distance of the Large Magellanic Cloud, around 160 to 165 thousand light-years, and the loose grouping of stars is about 330 to 335 light-years across. NGC 2001 is also listed as part of Lucke-Hodge stellar association 64, along with ANONb4 and e135.
References
External links
open clusters
2001
3518062
Dorado
Large Magellanic Cloud
Astronomical objects discovered in 1826
Discoveries by James Dunlop
056-SC137 | NGC 2001 | Astronomy | 213 |
17,865,569 | https://en.wikipedia.org/wiki/3-Aminoisobutyric%20acid | 3-Aminoisobutyric acid (also known as β-aminoisobutyric acid or BAIBA) is a product formed by the catabolism of thymine.
During exercise, the increase of PGC-1α protein triggers the secretion of BAIBA from exercising muscles to blood (concentration 2 to 3 μM in human serum). When BAIBA reaches the white fat tissue, it activates the expression of thermogenic genes via PPARα receptors, resulting in a browning of white fat cells. One of the consequences of the BAIBA activity is the increase of the background metabolism of the BAIBA target cells.
It is thought to play a number of roles in cell metabolism, including how the body burns fat and regulates insulin, triglycerides, and total cholesterol.
BAIBA was identified as a normal metabolite of skeletal muscle in 2014. Its plasma concentration is increased in humans by exercise. The production is likely a result of enhanced mitochondrial activity, as the increase is also observed in the muscle of PGC-1α-overexpressing mice. BAIBA has been proposed as a protective factor against metabolic disorders, since it can induce brown fat function.
See also
β-Alanine
beta-Hydroxy beta-methylbutyric acid
GABA
MB-3 (drug)
References
Beta-Amino acids | 3-Aminoisobutyric acid | Chemistry,Biology | 269 |
2,432,530 | https://en.wikipedia.org/wiki/Octahedral%20cluster | Octahedral clusters are inorganic or organometallic cluster compounds composed of six metals in an octahedral array. Many types of compounds are known, but all are synthetic.
Octahedral chalcogenide and halide clusters
These compounds are bound together by metal-metal bonding as well as two kinds of ligands. Ligands that span the faces or edges of the M6 core are labeled Lⁱ, for inner (innen in the original German description), and those ligands attached only to one metal are labeled outer, or Lᵃ for ausser. Typically, the outer ligands can be exchanged whereas the bridging ligands are more inert toward substitution.
Face-capped halide clusters
The premier example of the class is [Mo6Cl14]2−. This dianion is available as a variety of salts by treating the polymeric molybdenum(II) chloride with sources of chloride, even hydrochloric acid. A related example is the [W6Cl14]2− anion, which is obtained by extraction of tungsten(II) chloride.
Chalcohalide clusters
A related class of octahedral clusters is of the type M6X8L6, where M is a metal usually of group 6 or group 7, X is a ligand, more specifically an inner ligand of the chalcohalide group such as chloride or sulfide, and L is an "outer ligand". The metal atoms define the vertices of an octahedron. The overall point group symmetry is Oh. Each face of the octahedron is capped with a chalcohalide, and eight such atoms sit at the corners of a cube. For this reason this geometry is called a face-capped octahedral cluster. An example of this type of cluster is the [Re6S8Cl6]4− anion.
Chevrel clusters
A well-studied class of solid-state compounds related to the chalcohalides are molybdenum clusters of the type AxMo6X8 with X sulfur or selenium and Ax an interstitial atom such as Pb. These materials, called Chevrel phases or Chevrel clusters, have been actively studied because they are type II superconductors with relatively high critical fields. Such materials are prepared by high temperature (1100 °C) reactions of the chalcogen and Mo metal. Structurally related, soluble analogues have been prepared, e.g., Mo6S8(PEt3)6.
Edge-capped halide clusters
With metals in group 4 or 5, so-called edge-capped octahedral clusters are more common. Twelve halides are located along the edges of the octahedron and six are terminal. Examples of this structure type are tungsten(III) chloride, Ta6Cl14(H2O)4, Nb6F15, and [Nb6F18]2−.
Many of the early metal clusters can only be prepared when they incorporate interstitial atoms. One example is Zr6CCl12.
Tin(II) clusters
Octahedral clusters of tin(II) have been observed in several solid state compounds. The reaction of tin(II) salts with an aqueous base leads to the formation of tin(II) oxyhydroxide (Sn6O4(OH)4), the structure of which comprises discrete Sn6O4(OH)4 clusters. In Sn6O4(OH)4 clusters, the six tin atoms form an octahedral array with alternate faces of the octahedron occupied by an oxide or hydroxide moiety, each bonded in a μ3-binding mode to three tin atoms. Crystal structures have been reported for compounds with the formula Sn6O4(OR)4, where R is an alkoxide such as a methyl or ethyl group.
Recently, it was demonstrated that anionic tin(II) clusters [Sn6O8]4− may form close-packed arrays, as in the case of α-Sn6SiO8, which adopts the zinc blende structure, comprising a face-centred-cubic array of [Sn6O8]4− clusters with Si4+ occupying half of the tetrahedral holes. A polymorph, β-Sn6SiO8, has been identified as a product of pewter corrosion in aqueous conditions, and is a structural analogue of wurtzite.
Electron counting in octahedral halide and chalcogenide clusters
The species [Mo6Cl14]2− features Mo(II) (d4) centers. Six Mo(II) centers give rise to a total of 24 valence electrons, or 2e/Mo-Mo vector. More electron-deficient derivatives such as [Ta6Cl18]4− have fewer d-electrons. For example, the naked cluster [Ta6]14+, the core of [Ta6Cl18]4−, would have 5(6) − 14 = 16 valence electrons. Fewer d-electrons result in weakened M-M bonding, and the extended Ta---Ta distances accommodate doubly bridging halides.
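The electron bookkeeping in the preceding paragraph can be written out explicitly. The short sketch below only restates the arithmetic from the text; the function and variable names are illustrative, and the only inputs assumed are the periodic-table group numbers of Mo and Ta and the cluster compositions quoted above.

```python
# Illustrative valence-electron bookkeeping for octahedral M6 cluster cores.
GROUP_ELECTRONS = {"Mo": 6, "Ta": 5}  # periodic-table group numbers

def core_valence_electrons(metal, n_metals, core_charge):
    """Electrons left on the bare metal core after subtracting its positive charge."""
    return GROUP_ELECTRONS[metal] * n_metals - core_charge

# [Mo6Cl14]2-: fourteen chlorides plus the overall 2- charge leave a Mo6(12+) core,
# i.e. six Mo(II) d4 centres.
mo_core = core_valence_electrons("Mo", 6, 12)
print(mo_core)        # 24 valence electrons
print(mo_core / 12)   # an octahedron has 12 edges -> 2.0 electrons per Mo-Mo vector

# [Ta6Cl18]4-: eighteen chlorides plus the 4- charge leave a Ta6(14+) core.
print(core_valence_electrons("Ta", 6, 14))  # 5*6 - 14 = 16 valence electrons
```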
Other classes of octahedral clusters
In the area of metal carbonyl clusters, a prototypical octahedral cluster is [Fe6C(CO)16]2−, which is obtained by heating iron pentacarbonyl with sodium. Some of the CO ligands are bridging and many are terminal. A carbide ligand resides at the center of the cluster. A variety of analogous compounds have been reported where some or all of the Fe centres are replaced by Ru, Mn and other metals.
Outside of carbonyl clusters, gold forms octahedral clusters.
References
Cluster chemistry | Octahedral cluster | Chemistry | 1,189 |
390,864 | https://en.wikipedia.org/wiki/Rhinogradentia | Rhinogradentia is a fictitious order of extinct shrew-like mammals invented by German zoologist Gerolf Steiner. Members of the order, known as rhinogrades or snouters, are characterized by a nose-like feature called a "nasorium", which evolved to fulfill a wide variety of functions in different species. Steiner also created a fictional persona, naturalist Harald Stümpke, who is credited as author of the 1961 book Bau und Leben der Rhinogradentia (translated into English in 1967 as The Snouters: Form and Life of the Rhinogrades). According to Steiner, it is the only remaining record of the animals, which were wiped out, along with all the world's Rhinogradentia researchers, when the small Pacific archipelago they inhabited sank into the ocean due to nearby atomic bomb testing.
Successfully mimicking a genuine scientific work, Rhinogradentia has appeared in several publications without any note of its fictitious nature, sometimes in connection with April Fools' Day.
Background
Rhinogradentia, their island home of Hy-yi-yi, zoologist Harald Stümpke, and a host of other people, places, and documents are fictional creations of Gerolf Steiner (1908–2009), a German zoologist. Steiner is best known for his fictional work as Stümpke, but he was an accomplished zoologist in his own right. He held a professorship at the University of Heidelberg and later the Technical University of Karlsruhe, where he occupied the department chair from 1962 to 1973.
Steiner was also interested in illustration, and in 1945 drew a picture for one of his students as thanks for some food. He took inspiration from a short nonsense poem by Christian Morgenstern, The Nasobame (Das Nasobēm) about an animal that walked using its nose. He took to the drawing, made a copy for himself, and later incorporated the creatures into his teaching. According to Bud Webster, Steiner's motivation for writing a book about them was instructional, to illustrate "how animals evolve in isolation", but Joe Cain speculates that the success of the joke may have led to a teaching and writing career based on that rather than the other way around.
Harald Stümpke's account
Steiner's fictional author, credited as "quondam curator of the Museum of the Darwin Institute of Hy-yi-yi, Mairuwili", provides a very detailed account of the order and individual species, written in a dry, scholarly tone. Michael Ohl wrote that the book is written "in truly amusing attention to detail and using what is immediately recognizable as a practiced scientific patois". The evidently expert voice of the author, his competent writing, and apparent familiarity with conventions of academic literature set the work apart as a rare example at the intersection of fiction and scholarship. Steiner credits himself by name as illustrator of the book, and explains how that role led him to possess the only remaining record of Rhinogradentia.
Discovery and study at Hy-yi-yi
According to Stümpke, Rhinogradentia were native to Hy-yi-yi, a small Pacific archipelago comprising eighteen islands: Annoorussawubbissy, Awkoavussa, Hiddudify, Koavussa, Lowlukha, Lownunnoia, Mara, Miroovilly, Mittuddinna, Naty, Nawissy, Noorubbissy, Osovitissy, Ownavussa, Owsuddowsa, Shanelukha, Towteng-Awko, and Vinsy. The islands occupied and the archipelago's highest peak, , was on its main island, Hiddudify (Hy-dud-dye-fee).
The first description of Hy-yi-yi published in Europe was that of Einar Pettersson-Skämtkvist, a Swedish explorer who arrived in Hiddudify by chance in 1941, after escaping from a Japanese prisoner-of-war camp. Each of the islands was home to distinctive fauna, dominated by Rhinogradentia, the only mammals other than humans and one species of shrew. In the time after the war, a number of scientists took interest in the rhinogrades and began formal research into their physiology, morphology, behaviors, and evolution.
In the late 1950s, nearby nuclear weapons testing by the United States military accidentally caused all of the islands of Hy-yi-yi to sink into the ocean, destroying all traces of the rhinogrades and their unique ecosystem. Also killed were all the world's Rhinogradentia researchers, who were attending a conference on Hy-yi-yi at the time. The book's epilogue, credited to Steiner in his capacity as the book's illustrator, explains that Stümpke had sent the book's materials to Steiner to serve as the basis for illustrations in preparation for publication. Following the disaster, it is the only remaining record of the subjects it describes.
Biological characteristics and behavior
Rhinogrades are mammals characterized by a nose-like feature called a "nasorium", the form and function of which vary significantly between species. According to Stümpke, the order's remarkable variety was the natural outcome of evolution acting over millions of years in the remote Hy-yi-yi islands. All the 14 families and 189 known snouter species descended from a small shrew-like animal, which gradually evolved and diversified to fill most of the ecological niches in the archipelago — from tiny worm-like beings to large herbivores and predators.
Many rhinogrades used their nose for locomotion, for example the "snout leapers" like Hopsorrhinus aureus, whose nasorium was used for jumping, or the "earwings" like Otopteryx, which flew backwards by flapping its ears and used its nose as a rudder. Some species used their nasorium for catching food, for example by using it to fish or to attract and trap insects. Other species included the fierce Tyrannonasus imperator and the shaggy Mammontops.
Pettersson-Skämtkvist's early descriptions of the animals he encountered on Hy-yi-yi led zoologists to name them after the title creature in Christian Morgenstern's The Nasobame. In the poem, which exists outside of this fictional universe and also served as an inspiration for Steiner, the Nasobame is seen "striding on its noses" (auf seinen Nasen schreitet).
Genera
Stümpke's book classifies 138 species of rhinograde into a number of fictitious genera.
The names generally refer to particular forms or functions of the nasorium of animals in that genus, typically providing vernacular names for clarity.
Publication history
Steiner's books as Stümpke have been translated into other languages, sometimes crediting other names based on the country of publication. "Harald Stümpke", "Massimo Pandolfi", "Hararuto Shutyunpuke", and "Karl D. S. Geeste" are pseudonyms. Translator names are real.
Stümpke, Harald (1957). Bau und Leben der Rhinogradentia. Stuttgart: Gustav Fischer Verlag.
Stümpke, Harald (1962). Anatomie et Biologie des Rhinogrades — Un Nouvel Ordre de Mammifères (Trans. Robert Weill). Paris: Masson.
Stümpke, Harald (1967). The Snouters: Form and Life of the Rhinogrades (Trans. Leigh Chadwick). Garden City, NY: The Natural History Press.
Pandolfi, Massimo (1992). I Rinogradi di Harald Stümpke e la zoologia fantastica (Trans. Achaz von Hardenberg). Padua: Franco Muzzio.
Shutyunpuke, Hararuto (1997). Bikōri: atarashiku-hakken-sareta-honyūrui-no-kōzō-to-seikatsu. Tokyo: Hakuhinsha.
Geeste, Karl D. S. (1988). Stümpke's Rhinogradentia: Versuch einer Analyse. Stuttgart: Gustav Fischer Verlag.
Legacy
Rhinogradentia is considered one of the best known biological hoaxes and scientific jokes and Steiner's pseudonymous works on the subject continue to be reprinted and translated. The first edition did not explicitly state that it was a hoax.
Following the publication of the French translation, George Gaylord Simpson wrote a seemingly serious review which extended the hoax in a 1963 issue of the journal Science, taking issue with the way Stümpke named the animals as "criminal violations of the International Code of Zoological Nomenclature". Simpson also noted that Stümpke neglected to include an unrelated mathematical concept, a "rotated matrix".
Since the book's original publication several scientists and publishers have written about Rhinogradentia as though Steiner's account were true, though it is unclear how many of those who continued and popularized the joke did so intentionally. Wulf Ankle wrote that the order "is not a poetic invention, but has really lived". Rolf Siewing's Zoology Primer lists them as an order of mammal, noting that their existence is doubted. Erich von Holst celebrated the discovery of "a completely new animal world". Timothy E. Lawlor's widely read textbook Handbook to the Orders and Families of Living Mammals includes an entry for Rhinogradentia that does not acknowledge its fictional nature. The East German Liberal Democratic Newspaper took note of the nuclear demise of the rhinogrades, writing that they would still be alive "had we, the peaceable powers, managed in time to implement widespread disarmament and prohibit the production and testing of nuclear weapons."
Prior to the publication of Leigh Chadwick's English translation, an abbreviated version ran in the April 1967 edition of Natural History, a magazine published by the American Museum of Natural History. It comprised material from the book's introduction, first chapter, selected descriptions of genera, and the epilogue, and was presented as the lead story, without qualification, by the normally serious publication. The following month, The New York Times ran a story about the snouters on the front page, based on the Natural History article. According to the magazine's editorial director, they had "received more than 100 letters and telegraphs about the snouters, most of them from people who forgot that the article was published on April Fool's Day." Natural History printed several letters to the editor in its June–July issue, and conveyed to the Times the content of several more, ranging from skeptical to fascinated and continuations of the joke. One reader, entomologist Alice Gray, expressed thanks for the article, which enabled her family to identify an animal-shaped metal bracelet from the South Pacific as having been modeled after a "Hoop Snouter", and included a drawing to preserve the record because, she said, it had been melted down with some toy soldiers and a spoon by a young cousin with a new casting set.
Decades later, papers are still published purporting to continue Stümpke's research or otherwise paying homage to Steiner's hoax. In a 2004 paper in the Russian Journal of Marine Biology, authors Kashkina & Bukashkina claim to have discovered two new marine genera: Dendronasus and an as yet unnamed parasitic taxon. The Max Planck Institute for Limnology announced a new species discovered in Großer Plöner See. On April Fools' Day in 2012, the National Museum of Natural History in France announced the discovery of a wood-eating termite-like genus, Nasoperferator, with a rotating nose resembling a drill.
Rhinogradentia has been included in a number of museum exhibitions and collections. The National Museum of Natural History's Nasoperferator announcement was accompanied by a two-month exhibit honoring the animals, featuring purported stuffed specimens in its gallery of extinct species. Mock taxidermies of rhinogrades have also been included in an exhibit at the Musée d'ethnographie de Neuchâtel, and in the permanent collections of the Musée zoologique de la ville de Strasbourg and the Salzburg Haus der Natur.
Three real species have been named after Steiner and Stümpke: Rhinogradentia steineri, a snout moth, Hyorhinomys stuempkei, a shrew rat also known as the Sulawesi snouter, and Tateomys rhinogradoides, the Tate's shrew rat.
See also
Caminalcules, another fictional group of animals introduced as a tool for understanding phylogenetics
Codex Seraphinianus
Eoörnis pterovelox gobiensis – an older biological hoax, a fictional bird
Fictitious entry
Lists of fictional species
Pacific Northwest tree octopus
References
External links
Les Rhinogrades
Fictional mammals
1950s hoaxes
Hoaxes in science
Speculative evolution | Rhinogradentia | Biology | 2,662 |
20,041,810 | https://en.wikipedia.org/wiki/Speed%20of%20light%20%28cellular%20automaton%29 | In Conway's Game of Life (and related cellular automata), the speed of light is a propagation rate across the grid of exactly one step (either horizontally, vertically or diagonally) per generation. In a single generation, a cell can only influence its nearest neighbours, and so the speed of light (by analogy with the speed of light in physics) is the maximum rate at which information can propagate. It is therefore an upper bound to the speed at which any pattern can move.
Notation
As in physics, the speed of light is represented by the letter c. This in turn is used as a reference for describing the average propagation speed of any given type of spaceship. For example, a glider is said to have a speed of c/4, as it takes four generations for a given state to be translated by one cell. Similarly, the "lightweight spaceship" is said to have a speed of c/2, as it takes four generations for a given state to be translated by two cells.
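The speed notation can be checked with a small simulation. The sketch below is illustrative rather than canonical: it implements the standard B3/S23 Life rule on a set of live cells, advances a glider, and reports how far the pattern has been translated after every four generations (one cell diagonally, i.e. a speed of c/4).

```python
from collections import Counter

def step(cells):
    """Advance a set of live (row, col) cells by one generation of Life (B3/S23)."""
    counts = Counter(
        (r + dr, c + dc)
        for r, c in cells
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    return {pos for pos, n in counts.items() if n == 3 or (n == 2 and pos in cells)}

# A glider; after every four generations it reappears one cell down and to the right.
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}

cells = glider
for generation in range(1, 13):
    cells = step(cells)
    if generation % 4 == 0:
        # At multiples of 4 the pattern is an exact translated copy of the original,
        # so the bounding-box offset gives the displacement: (1, 1), (2, 2), (3, 3).
        offset = (min(r for r, _ in cells), min(c for _, c in cells))
        print(f"generation {generation}: displaced by {offset}")
```

The same measurement applied to a lightweight spaceship would report a translation of two cells every four generations, i.e. a speed of c/2.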
Lightspeed propagation
While c is an absolute upper bound to propagation speed, the maximum speed of a spaceship in Conway's Game of Life is c/2. This is because it is impossible to build a spaceship that can move every generation. (This is not true, though, for cellular automata in general; for instance, many light-speed spaceships exist in Seeds.) It is, however, possible for objects to travel at the speed of light if they move through a medium other than empty space. Such media include trails of hives, and alternating stripes of live and dead cells.
Faster than light propagation
Certain patterns can appear to move at a speed greater than one cell per generation, but like faster than light phenomena in physics this is illusory.
An example is the "Star Gate", an arrangement of three converging gliders that will mutually annihilate on collision. If a lightweight spaceship (LWSS) hits the colliding gliders, it will appear to move forwards by 11 cells in only 6 generations, and thus travel faster than light. This illusion happens because the glider annihilation reaction proceeds by the creation and soon-after destruction of another LWSS. When the incoming LWSS hits the colliding gliders, it is not transported, but instead modifies the reaction so that the newly created LWSS can survive. The only signal being transmitted is determining whether the outgoing LWSS should survive or not. This does not need to reach its destination until after the LWSS has been "transported", so no information needs to travel faster than light.
References
Cellular automata | Speed of light (cellular automaton) | Mathematics | 539 |
75,313,681 | https://en.wikipedia.org/wiki/SpaceX%20Starship%20design%20history | Before settling on the 2018 Starship design, SpaceX successively presented a number of reusable super-heavy lift vehicle proposals. These preliminary spacecraft designs were known under various names (Mars Colonial Transporter, Interplanetary Transport System, BFR).
In November 2005, before SpaceX had launched its first rocket, the Falcon 1, CEO Elon Musk first mentioned a high-capacity rocket concept able to launch to low Earth orbit, dubbed the BFR. Later in 2012, Elon Musk first publicly announced plans to develop a rocket surpassing the capabilities of the existing Falcon 9. SpaceX called it the Mars Colonial Transporter, as the rocket was to transport humans to Mars and back. In 2016, the name was changed to Interplanetary Transport System, as the rocket was planned to travel beyond Mars as well. The design called for a carbon fiber structure, a mass in excess of when fully-fueled, a payload of to low Earth orbit while being fully reusable. By 2017, the concept was temporarily re-dubbed the BFR.
In December 2018, the structural material was changed from carbon composites to stainless steel, marking the transition from early design concepts of the Starship. Musk cited numerous reasons for the design change; low cost, ease of manufacture, increased strength of stainless steel at cryogenic temperatures, and ability to withstand high temperatures. In 2019, SpaceX began to refer to the entire vehicle as Starship, with the second stage being called Starship and the booster Super Heavy. They also announced that Starship would use reusable heat shield tiles similar to those of the Space Shuttle. The second-stage design had also settled on six Raptor engines by 2019; three optimized for sea-level and three optimized for vacuum. In 2019 SpaceX announced a change to the second stage's design, reducing the number of aft flaps from three to two to reduce weight. In March 2020, SpaceX released a Starship Users Guide, in which they stated the payload of Starship to low Earth orbit (LEO) would be in excess of , with a payload to geostationary transfer orbit (GTO) of .
Early heavy-lift concepts
In November 2005, before SpaceX launched the Falcon 1, its first rocket, CEO Elon Musk first referenced a long-term and high-capacity rocket concept named BFR. The BFR would be able to launch to LEO and would be equipped with Merlin 2 engines. The Merlin 2 would have been in direct lineage to the Merlin engines used on the Falcon 9, described as a scaled up regeneratively cooled engine comparable to the F-1 engines used on the Saturn V.
In July 2010, after the final launch of Falcon 1 a year prior, SpaceX presented launch vehicle and Mars space tug concepts at a conference. The launch vehicle concepts were called Falcon X (later named Falcon 9), Falcon X Heavy (later named Falcon Heavy), and Falcon XX (later named Starship); the largest of all was the Falcon XX with a capacity to low Earth orbit. To deliver such payload, the rocket would have been as tall as the Saturn V and use six powerful Merlin 2 engines.
Mars Colonial Transporter
In October 2012, the company made the first public articulation of plans to develop a fully reusable rocket system with substantially greater capabilities than SpaceX's existing Falcon 9. Later in 2012, the company first mentioned the Mars Colonial Transporter rocket concept in public. It was going to be able to carry 100 people or of cargo to Mars and would be powered by methane-fueled Raptor engines. Musk referred to this new launch vehicle under the unspecified acronym "MCT", revealed to stand for "Mars Colonial Transporter" in 2013, which would serve the company's Mars system architecture. SpaceX COO Gwynne Shotwell gave a potential payload range between 150–200 tons to low Earth orbit for the planned rocket. For Mars missions, the spacecraft would carry up to of passengers and cargo. According to SpaceX engine development head Tom Mueller, SpaceX could use nine Raptor engines on a single MCT booster or spacecraft. The preliminary design would be at least in diameter, and was expected to have up to three cores totaling at least 27 booster engines.
Interplanetary Transport System
In 2016, the name of the Mars Colonial Transporter system was changed to the Interplanetary Transport System (ITS), due to the vehicle being capable of other destinations. Additionally, Elon Musk provided more details about the space mission architecture, launch vehicle, spacecraft, and Raptor engines. The first test firing of a Raptor engine on a test stand took place in September 2016.
On September 26, 2016, a day before the 67th International Astronautical Congress, a Raptor engine fired for the first time. At the event, Musk announced SpaceX was developing a new rocket using Raptor engines called the Interplanetary Transport System. It would have two stages, a reusable booster and spacecraft. The stages' tanks were to be made from carbon composite, storing liquid methane and liquid oxygen. Despite the rocket's launch capacity to low Earth orbit, it was expected to have a low launch price. The spacecraft featured three variants: crew, cargo, and tanker; the tanker variant is used to transfer propellant to spacecraft in orbit. The concept, especially the technological feats required to make such a system possible and the funds needed, garnered substantial skepticism. Both stages would use autogenous pressurization of the propellant tanks, eliminating the Falcon 9's problematic high-pressure helium pressurization system.
In October 2016, Musk indicated that the initial tank test article, made of carbon-fiber pre-preg, and built with no sealing liner, had performed well in cryogenic fluid testing. A pressure test at about 2/3 of the design burst pressure was completed in November 2016. In July 2017, Musk indicated that the architecture design had evolved since 2016 in order to support commercial transport via Earth-orbit and cislunar launches.
The ITS booster was to be a , , reusable first stage powered by 42 engines, each producing of thrust. Total booster thrust would have been at liftoff, increasing to in a vacuum, several times the thrust of the Saturn V. It weighed when empty and when completely filled with propellant. It would have used grid fins to help guide the booster through the atmosphere for a precise landing. The engine configuration included 21 engines in an outer ring and 14 in an inner ring. The center cluster of seven engines would be able to gimbal for directional control, although some directional control would be achieved via differential thrust with the fixed engines. Each engine would be capable of throttling between 20 and 100 percent of rated thrust.
The design goal was to achieve a separation velocity of about while retaining about 7% of the initial propellant to achieve a vertical landing at the launch pad. The design called for grid fins to guide the booster during atmospheric reentry. The booster return flights were expected to encounter loads lower than the Falcon 9, principally because the ITS would have both a lower mass ratio and a lower density. The booster was to be designed for 20 g nominal loads, and possibly as high as 30–40 g.
In contrast to the landing approach used on SpaceX's Falcon 9—either a large, flat concrete pad or downrange floating landing platform, the ITS booster was to be designed to land on the launch mount itself, for immediate refueling and relaunch.
The ITS second stage was planned to be used for long-duration spaceflight, instead of solely being used for reaching orbit. The two proposed variants aimed to be reusable. Its maximum width would be , with three sea level Raptor engines, and six optimized for vacuum firing. Total engine thrust in a vacuum was to be about .
The Interplanetary Spaceship would have operated as a second-stage and interplanetary transport vehicle for cargo and passengers. It aimed to transport up to per trip to Mars following refueling in Earth orbit. Its three sea-level Raptor engines were designed to be used for maneuvering, descent, landing, and initial ascent from the Mars surface. It would have had a maximum capacity of of propellant, and a dry mass of 150 tonnes (330,000 lb).
The ITS tanker would serve as a propellant tanker, transporting up to of propellants to low Earth orbit in a single launch. After refueling operations, it would land and be prepared for another flight. It had a maximum capacity of of propellant and had a dry mass of .
Big Falcon Rocket
In September 2017, at the 68th annual meeting of the International Astronautical Congress, Musk announced a new launch vehicle calling it the BFR, again changing the name, though stating that the name was temporary. The acronym was alternatively stated as standing for Big Falcon Rocket or Big Fucking Rocket, a tongue-in-cheek reference to the BFG from the Doom video game series. Musk foresaw the first two cargo missions to Mars as early as 2022, with the goal to "confirm water resources and identify hazards" while deploying "power, mining, and life support infrastructure" for future flights. This would be followed by four ships in 2024, two crewed BFR spaceships plus two cargo-only ships carrying equipment and supplies for a propellant plant.
The design balanced objectives such as payload mass, landing capabilities, and reliability. The initial design showed the ship with six Raptor engines (two sea-level, four vacuum) down from nine in the previous ITS design.
By September 2017, Raptors had been test-fired for a combined total of 20 minutes across 42 test cycles. The longest test was 100 seconds, limited by the size of the propellant tanks. The test engine operated at . The flight engine aimed for , on the way to in later iterations. In November 2017, Shotwell indicated that about half of all development work on BFR was focused on the engine.
SpaceX looked for manufacturing sites in California, Texas, Louisiana, and Florida. By September 2017, SpaceX had started building launch vehicle components: "The tooling for the main tanks has been ordered, the facility is being built, we will start construction of the first ship [in the second quarter of 2018.]"
By early 2018, the first carbon composite prototype ship was under construction, and SpaceX had begun building a new production facility at the Port of Los Angeles, California.
In March 2018, SpaceX announced that it would manufacture its launch vehicle and spaceship at a new facility on Seaside Drive at the port. By May, about 40 SpaceX employees were working on the BFR. SpaceX planned to transport the launch vehicle by barge, through the Panama Canal, to Cape Canaveral for launch, though the company has since terminated the agreements to do so.
In August 2018, the head of the US Air Force Air Mobility Command expressed interest in the ability of the BFR to move up to of cargo anywhere in the world in under 30 minutes, for "less than the cost of a C-5".
The BFR was designed to be tall, in diameter, and made of carbon composites. The upper stage, known as Big Falcon Ship (BFS), included a small delta wing at the rear end with split flaps for pitch and roll control. The delta wing and split flaps were said to expand the flight envelope to allow the ship to land in a variety of atmospheric densities (vacuum, thin, or heavy atmosphere) with a wide range of payloads. The BFS design originally had six Raptor engines, with four vacuum and two sea-level. By late 2017, SpaceX added a third sea-level engine (totaling 7) to allow greater Earth-to-Earth payload landings and still ensure capability if one of the engines fails.
Three BFS versions were described: BFS cargo, BFS tanker, and BFS crew. The cargo version would have been used to reach Earth orbit as well as carry cargo to the Moon or Mars. After refueling in an elliptical Earth orbit, BFS was designed to eventually be able to land on the Moon and return to Earth without another refueling. The BFR also aimed to carry passengers/cargo in Earth-to-Earth transport, delivering its payload anywhere within 90 minutes.
Changes to early Starship design
In December 2018, the structural material was changed from carbon composites to stainless steel, marking the transition from early design concepts of the Starship. Musk cited numerous reasons for the design change: low cost and ease of manufacture, increased strength of stainless steel at cryogenic temperatures, as well as its ability to withstand high heat. The high temperature at which 300-series steel transitions to plastic deformation would eliminate the need for a heat shield on Starship's leeward side, while the much hotter windward side would be cooled by allowing fuel or water to bleed through micropores in a double-wall stainless steel skin, removing heat by evaporation. The liquid-cooled windward side was changed in 2019 to use reusable heat shield tiles similar to those of the Space Shuttle.
In 2019, SpaceX began to refer to the entire vehicle as Starship, with the second stage being called Starship and the booster Super Heavy. In September 2019, Musk held an event about Starship development during which he further detailed the lower-stage booster, the upper-stage's method of controlling its descent, the heat shield, orbital refueling capacity, and potential destinations besides Mars.
Over the years of design, the proportion of sea-level engines to vacuum engines on the second stage varied drastically. By 2019, the second stage design had settled on six Raptor engines—three optimized for sea-level and three optimized for vacuum. To decrease weight, aft flaps on the second stage were reduced from three to two. Later in 2019, Musk stated that Starship was expected to have a mass of and be able to initially transport a payload of , growing to over time. Musk hinted at an expendable variant that could place 250 tonnes into low orbit.
One possible future use of Starship that SpaceX has proposed is point-to-point flights (called "Earth to Earth" flights by SpaceX), traveling anywhere on Earth in under an hour. In 2017 SpaceX president and chief operating officer Gwynne Shotwell stated that point-to-point travel with passengers could become cost competitive with conventional business class flights. John Logsdon, an academic on space policy and history, said that the idea of transporting passengers in this manner was "extremely unrealistic", as the craft would switch between weightlessness to 5 g of acceleration. He also commented that “Musk calls all of this ‘aspirational,’ which is a nice code word for more than likely not achievable.”
See also
History of SpaceX
Space Shuttle design process
SpaceX ambition of colonizing Mars
Studied Space Shuttle designs
Notes
References
SpaceX Starship
Spacecraft design | SpaceX Starship design history | Engineering | 3,074 |
35,051,875 | https://en.wikipedia.org/wiki/Sodium%3Adicarboxylate%20symporter | It has been shown that integral membrane proteins that mediate the uptake of a wide variety of molecules with the concomitant uptake of sodium ions (sodium symporters) can be grouped, on the basis of sequence and functional similarities into a number of distinct families. One of these families is known as the sodium:dicarboxylate symporter family (SDF) (it is different from divalent anion–sodium symporter).
Such re-uptake of neurotransmitters from synapses is thought to be an important mechanism for terminating their action, by removing these chemicals from the synaptic cleft and transporting them into presynaptic nerve terminals and surrounding neuroglia. This removal is also believed to prevent them from accumulating to neurotoxic levels.
The structure of these transporter proteins has been variously reported to contain from 8 to 10 transmembrane (TM) regions, although 10 now seems to be the accepted value.
Members of the family include several mammalian excitatory amino acid transporters and a number of bacterial transporters. They vary in their dependence on the transport of sodium and other ions.
References
Protein families | Sodium:dicarboxylate symporter | Biology | 261 |
76,969,858 | https://en.wikipedia.org/wiki/Polevitzky%2C%20Johnson%20and%20Associates | Polevitzky, Johnson and Associates was an architectural firm with headquarters in Miami, Florida.
History
Polevitzky, Johnson and Associates, Inc. was established in 1951 in Miami, Florida. After coming back from World War II in the mid-1940s, Igor B. Polevitzky opened a new office in the Brickell neighbourhood and partnered with Verner Johnson.
Illustrator J. M. Smith, Jerome L. Schilling, Samuel S. Block, and William H. Arthur were among the firm's longtime associates. Earl Struck, Jim Forney, Rudi Rada, Ernest Graham, Samuel H. Gottscho, and Robert R. Blanch were among the photographers who frequently worked with the firm.
In 1957, Meyer Lansky commissioned the firm's senior partner Igor Polevitzky to design the Hotel Habana Riviera. Along with Verner Johnson and Associates, Polevitzky collaborated with Miguel Gastón and Manuel Carrerá, two architects from Cuba. Built in the Vedado neighborhood of Havana, Cuba, the sixteen-story skyscraper was constructed on the Malecón beachfront boulevard.
The Miami-based architectural firm was brought in to redesign the original Biltmore Yacht and Country Club after the winter of 1957, but the Cuban Revolution stopped it from ever being built.
The founders of Polevitzky, Johnson and Associates disbanded in the mid-1960s. Around 1967, Igor Polevitzky relocated permanently from his Miami home to Estes Park, Colorado. The firm took on projects until the early 1970s.
References
Architecture firms
Architecture organizations
Architecture firms of the United States
Architecture firms based in Florida | Polevitzky, Johnson and Associates | Engineering | 334 |
42,452,496 | https://en.wikipedia.org/wiki/Descent%20algebra | In algebra, Solomon's descent algebra of a Coxeter group is a subalgebra of the integral group ring of the Coxeter group, introduced by Louis Solomon in 1976.
The descent algebra of the symmetric group
In the special case of the symmetric group Sn, the descent algebra is given by the elements of the group ring such that permutations with the same descent set have the same coefficients. (The descent set of a permutation σ consists of the indices i such that σ(i) > σ(i+1); for example, the permutation 3142, written in one-line notation, has descent set {1, 3}.) The descent algebra of the symmetric group Sn has dimension 2^(n−1). It contains the peak algebra as a left ideal.
References
Reflection groups | Descent algebra | Physics | 139 |
38,803,155 | https://en.wikipedia.org/wiki/Deep%20Near%20Infrared%20Survey%20of%20the%20Southern%20Sky | The Deep Near Infrared Survey of the Southern Sky (DENIS) was a deep astronomical survey of the southern sky in the near-infrared and optical wavelengths, using an ESO 1-metre telescope at the La Silla Observatory. It operated from 1996 to 2001.
See also
DENIS-P J1058.7-1548
DENIS-P J1228.2-1547
DENIS-P J020529.0-115925
DENIS-P J082303.1-491201 b
DENIS-P J101807.5-285931
DENIS J024011.0-014628, 6dFGS gJ024011.1-014628
Edinburgh-Cape Blue Object Survey
References
External links
DENIS—Deep Near Infrared Survey of the Southern Sky
ESO 1-metre telescope
Astronomical catalogues
Astronomical surveys | Deep Near Infrared Survey of the Southern Sky | Astronomy | 180 |
1,384,760 | https://en.wikipedia.org/wiki/List%20of%20European%20medium%20wave%20transmitters | This is an incomplete list of medium wave transmitters in Europe. The emitted AM radio signal can be received on AM radios across Europe, depending on the power.
Skywave propagation at night enables some stations to be heard far beyond the target reception area, sometimes by thousands of kilometres.
Active stations
Former stations
See also
MW DX
References
European medium wave transmitters
Radio frequency propagation
Radio-related lists
459,018 | https://en.wikipedia.org/wiki/Pointer%20%28computer%20programming%29 | In computer science, a pointer is an object in many programming languages that stores a memory address. This can be that of another value located in computer memory, or in some cases, that of memory-mapped computer hardware. A pointer references a location in memory, and obtaining the value stored at that location is known as dereferencing the pointer. As an analogy, a page number in a book's index could be considered a pointer to the corresponding page; dereferencing such a pointer would be done by flipping to the page with the given page number and reading the text found on that page. The actual format and content of a pointer variable is dependent on the underlying computer architecture.
Using pointers significantly improves performance for repetitive operations, like traversing iterable data structures (e.g. strings, lookup tables, control tables, linked lists, and tree structures). In particular, it is often much cheaper in time and space to copy and dereference pointers than it is to copy and access the data to which the pointers point.
Pointers are also used to hold the addresses of entry points for called subroutines in procedural programming and for run-time linking to dynamic link libraries (DLLs). In object-oriented programming, pointers to functions are used for binding methods, often using virtual method tables.
A pointer is a simple, more concrete implementation of the more abstract reference data type. Several languages, especially low-level languages, support some type of pointer, although some have more restrictions on their use than others. While "pointer" has been used to refer to references in general, it more properly applies to data structures whose interface explicitly allows the pointer to be manipulated (arithmetically, via pointer arithmetic) as a memory address, as opposed to a magic cookie or capability which does not allow such. Because pointers allow both protected and unprotected access to memory addresses, there are risks associated with using them, particularly in the latter case. Primitive pointers are often stored in a format similar to an integer; however, attempting to dereference or "look up" such a pointer whose value is not a valid memory address could cause a program to crash (or contain invalid data). To alleviate this potential problem, as a matter of type safety, pointers are considered a separate type parameterized by the type of data they point to, even if the underlying representation is an integer. Other measures may also be taken (such as validation and bounds checking), to verify that the pointer variable contains a value that is both a valid memory address and within the numerical range that the processor is capable of addressing.
History
In 1955, Soviet Ukrainian computer scientist Kateryna Yushchenko created the Address programming language that made possible indirect addressing and addresses of the highest rank – analogous to pointers. This language was widely used on the Soviet Union computers. However, it was unknown outside the Soviet Union and usually Harold Lawson is credited with the invention, in 1964, of the pointer. In 2000, Lawson was presented the Computer Pioneer Award by the IEEE "[f]or inventing the pointer variable and introducing this concept into PL/I, thus providing for the first time, the capability to flexibly treat linked lists in a general-purpose high-level language". His seminal paper on the concepts appeared in the June 1967 issue of CACM entitled: PL/I List Processing. According to the Oxford English Dictionary, the word pointer first appeared in print as a stack pointer in a technical memorandum by the System Development Corporation.
Formal description
In computer science, a pointer is a kind of reference.
A data primitive (or just primitive) is any datum that can be read from or written to computer memory using one memory access (for instance, both a byte and a word are primitives).
A data aggregate (or just aggregate) is a group of primitives that are logically contiguous in memory and that are viewed collectively as one datum (for instance, an aggregate could be 3 logically contiguous bytes, the values of which represent the 3 coordinates of a point in space). When an aggregate is entirely composed of the same type of primitive, the aggregate may be called an array; in a sense, a multi-byte word primitive is an array of bytes, and some programs use words in this way.
A pointer is a programming concept used in computer science to reference or point to a memory location that stores a value or an object. It is essentially a variable that stores the memory address of another variable or data structure rather than storing the data itself.
Pointers are commonly used in programming languages that support direct memory manipulation, such as C and C++. They allow programmers to work with memory directly, enabling efficient memory management and more complex data structures. Using pointers, a program can access and modify data located in memory, pass data efficiently between functions, and create dynamic data structures such as linked lists, trees, and graphs.
In simpler terms, a pointer can be thought of as an arrow pointing to a specific spot in a computer's memory, through which the data stored at that location can be read or modified.
A memory pointer (or just pointer) is a primitive, the value of which is intended to be used as a memory address; it is said that a pointer points to a memory address. It is also said that a pointer points to a datum [in memory] when the pointer's value is the datum's memory address.
More generally, a pointer is a kind of reference, and it is said that a pointer references a datum stored somewhere in memory; to obtain that datum is to dereference the pointer. The feature that separates pointers from other kinds of reference is that a pointer's value is meant to be interpreted as a memory address, which is a rather low-level concept.
References serve as a level of indirection: A pointer's value determines which memory address (that is, which datum) is to be used in a calculation. Because indirection is a fundamental aspect of algorithms, pointers are often expressed as a fundamental data type in programming languages; in statically (or strongly) typed programming languages, the type of a pointer determines the type of the datum to which the pointer points.
Architectural roots
Pointers are a very thin abstraction on top of the addressing capabilities provided by most modern architectures. In the simplest scheme, an address, or a numeric index, is assigned to each unit of memory in the system, where the unit is typically either a byte or a word – depending on whether the architecture is byte-addressable or word-addressable – effectively transforming all of memory into a very large array. The system would then also provide an operation to retrieve the value stored in the memory unit at a given address (usually utilizing the machine's general-purpose registers).
In the usual case, a pointer is large enough to hold more addresses than there are units of memory in the system. This introduces the possibility that a program may attempt to access an address which corresponds to no unit of memory, either because not enough memory is installed (i.e. beyond the range of available memory) or the architecture does not support such addresses. The first case may, on certain platforms such as the Intel x86 architecture, be called a segmentation fault (segfault). The second case is possible in the current implementation of AMD64, where pointers are 64 bits long and addresses extend only to 48 bits. Pointers must conform to certain rules (canonical addresses), so if a non-canonical pointer is dereferenced, the processor raises a general protection fault.
On the other hand, some systems have more units of memory than there are addresses. In this case, a more complex scheme such as memory segmentation or paging is employed to use different parts of the memory at different times. The last incarnations of the x86 architecture support up to 36 bits of physical memory addresses, which were mapped to the 32-bit linear address space through the PAE paging mechanism. Thus, only 1/16 of the possible total memory may be accessed at a time. Another example in the same computer family was the 16-bit protected mode of the 80286 processor, which, though supporting only 16 MB of physical memory, could access up to 1 GB of virtual memory, but the combination of 16-bit address and segment registers made accessing more than 64 KB in one data structure cumbersome.
In order to provide a consistent interface, some architectures provide memory-mapped I/O, which allows some addresses to refer to units of memory while others refer to device registers of other devices in the computer. There are analogous concepts such as file offsets, array indices, and remote object references that serve some of the same purposes as addresses for other types of objects.
Uses
Pointers are directly supported without restrictions in languages such as PL/I, C, C++, Pascal, FreeBASIC, and implicitly in most assembly languages. They are used mainly to construct references, which in turn are fundamental to construct nearly all data structures, and to pass data between different parts of a program.
In functional programming languages that rely heavily on lists, data references are managed abstractly by using primitive constructs like cons and the corresponding elements car and cdr, which can be thought of as specialised pointers to the first and second components of a cons-cell. This gives rise to some of the idiomatic "flavour" of functional programming. By structuring data in such cons-lists, these languages facilitate recursive means for building and processing data—for example, by recursively accessing the head and tail elements of lists of lists; e.g. "taking the car of the cdr of the cdr". By contrast, memory management based on pointer dereferencing in some approximation of an array of memory addresses facilitates treating variables as slots into which data can be assigned imperatively.
When dealing with arrays, the critical lookup operation typically involves a stage called address calculation which involves constructing a pointer to the desired data element in the array. In other data structures, such as linked lists, pointers are used as references to explicitly tie one piece of the structure to another.
Pointers are used to pass parameters by reference. This is useful if the programmer wants a function's modifications to a parameter to be visible to the function's caller. This is also useful for returning multiple values from a function.
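As a minimal sketch of the multiple-return-value idiom mentioned above (the function and variable names are illustrative, not taken from any particular library), a function can write its results through pointer parameters supplied by the caller:
#include <stdio.h>

/* Returns both quotient and remainder through pointer out-parameters. */
void divmod(int dividend, int divisor, int *quotient, int *remainder) {
    *quotient = dividend / divisor;
    *remainder = dividend % divisor;
}

int main(void) {
    int q, r;
    divmod(17, 5, &q, &r);    /* q becomes 3, r becomes 2 */
    printf("%d %d\n", q, r);
    return 0;
}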
Pointers can also be used to allocate and deallocate dynamic variables and arrays in memory. Since a variable will often become redundant after it has served its purpose, it is a waste of memory to keep it, and therefore it is good practice to deallocate it (using the original pointer reference) when it is no longer needed. Failure to do so may result in a memory leak (where available free memory gradually, or in severe cases rapidly, diminishes because of an accumulation of numerous redundant memory blocks).
C pointers
The basic syntax to define a pointer is:
int *ptr;
This declares ptr as the identifier of an object of the following type:
pointer that points to an object of type int
This is usually stated more succinctly as "ptr is a pointer to int."
Because the C language does not specify an implicit initialization for objects of automatic storage duration, care should often be taken to ensure that the address to which ptr points is valid; this is why it is sometimes suggested that a pointer be explicitly initialized to the null pointer value, which is traditionally specified in C with the standardized macro NULL:
int *ptr = NULL;
Dereferencing a null pointer in C produces undefined behavior, which could be catastrophic. However, most implementations simply halt execution of the program in question, usually with a segmentation fault.
However, initializing pointers unnecessarily could hinder program analysis, thereby hiding bugs.
In any case, once a pointer has been declared, the next logical step is for it to point at something:
int a = 5;
int *ptr = NULL;
ptr = &a;
This assigns the value of the address of a to ptr. For example, if a is stored at memory location of 0x8130 then the value of ptr will be 0x8130 after the assignment. To dereference the pointer, an asterisk is used again:
*ptr = 8;
This means take the contents of ptr (which is 0x8130), "locate" that address in memory and set its value to 8.
If a is later accessed again, its new value will be 8.
This example may be clearer if memory is examined directly.
Assume that a is located at address 0x8130 in memory and ptr at 0x8134; also assume this is a 32-bit machine such that an int is 32 bits wide. The following shows what would be in memory after this code snippet is executed:
int a = 5;
int *ptr = NULL;
(The NULL pointer shown here is 0x00000000.)
By assigning the address of a to ptr:
ptr = &a;
yields the following memory values:
Then by dereferencing ptr by coding:
*ptr = 8;
the computer will take the contents of ptr (which is 0x8130), 'locate' that address, and assign 8 to that location yielding the following memory:
Clearly, accessing a will yield the value of 8 because the previous instruction modified the contents of a by way of the pointer ptr.
Use in data structures
When setting up data structures like lists, queues and trees, it is necessary to have pointers to help manage how the structure is implemented and controlled. Typical examples of pointers are start pointers, end pointers, and stack pointers. These pointers can either be absolute (the actual physical address or a virtual address in virtual memory) or relative (an offset from an absolute start address ("base") that typically uses fewer bits than a full address, but will usually require one additional arithmetic operation to resolve).
Relative addresses are a form of manual memory segmentation, and share many of its advantages and disadvantages. A two-byte offset, containing a 16-bit unsigned integer, can be used to provide relative addressing for up to 64 KiB (2^16 bytes) of a data structure. This can easily be extended to 128, 256 or 512 KiB if the address pointed to is forced to be aligned on a half-word, word or double-word boundary (but this requires an additional "shift left" bitwise operation, by 1, 2 or 3 bits, to scale the offset by a factor of 2, 4 or 8 before its addition to the base address). Generally, though, such schemes are a lot of trouble, and for convenience to the programmer absolute addresses (and underlying that, a flat address space) are preferred.
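A minimal sketch of the word-aligned variant described above, assuming a 16-bit offset that counts 4-byte units (the function name resolve is illustrative only):
#include <stddef.h>

/* Converts a relative, word-scaled offset into an absolute address:
   a 16-bit offset shifted left by 2 bits spans up to 256 KiB from the base. */
unsigned char *resolve(unsigned char *base, unsigned short offset) {
    return base + ((size_t)offset << 2);
}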
A one byte offset, such as the hexadecimal ASCII value of a character (e.g. X'29') can be used to point to an alternative integer value (or index) in an array (e.g., X'01'). In this way, characters can be very efficiently translated from 'raw data' to a usable sequential index and then to an absolute address without a lookup table.
C arrays
In C, array indexing is formally defined in terms of pointer arithmetic; that is, the language specification requires that array[i] be equivalent to *(array + i). Thus in C, arrays can be thought of as pointers to consecutive areas of memory (with no gaps), and the syntax for accessing arrays is identical for that which can be used to dereference pointers. For example, an array array can be declared and used in the following manner:
int array[5]; /* Declares 5 contiguous integers */
int *ptr = array; /* Arrays can be used as pointers */
ptr[0] = 1; /* Pointers can be indexed with array syntax */
*(array + 1) = 2; /* Arrays can be dereferenced with pointer syntax */
*(1 + array) = 2; /* Pointer addition is commutative */
2[array] = 4; /* Subscript operator is commutative */
This allocates a block of five integers and names the block array, which acts as a pointer to the block. Another common use of pointers is to point to dynamically allocated memory from malloc which returns a consecutive block of memory of no less than the requested size that can be used as an array.
While most operators on arrays and pointers are equivalent, the result of the sizeof operator differs. In this example, sizeof(array) will evaluate to 5*sizeof(int) (the size of the array), while sizeof(ptr) will evaluate to sizeof(int*), the size of the pointer itself.
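A short sketch illustrating this difference (the values printed depend on the platform; 20 and 8 are typical on a 64-bit system with a 4-byte int):
#include <stdio.h>

int main(void) {
    int array[5];
    int *ptr = array;
    printf("%zu\n", sizeof(array)); /* size of the whole array: 5 * sizeof(int) */
    printf("%zu\n", sizeof(ptr));   /* size of the pointer itself: sizeof(int *) */
    return 0;
}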
Default values of an array can be declared like:
int array[5] = {2, 4, 3, 1, 5};
If array is located in memory starting at address 0x1000 on a 32-bit little-endian machine then memory will contain the following (values are in hexadecimal, like the addresses):
{| class="wikitable" style="font-family:monospace;"
|-
|
! 0 || 1 || 2 || 3
|-
! 1000
| 2 || 0 || 0 || 0
|-
! 1004
| 4 || 0 || 0 || 0
|-
! 1008
| 3 || 0 || 0 || 0
|-
! 100C
| 1 || 0 || 0 || 0
|-
! 1010
| 5 || 0 || 0 || 0
|}
Represented here are five integers: 2, 4, 3, 1, and 5. These five integers occupy 32 bits (4 bytes) each with the least-significant byte stored first (this is a little-endian CPU architecture) and are stored consecutively starting at address 0x1000.
The syntax for C with pointers is:
array means 0x1000;
array + 1 means 0x1004: the "+ 1" means to add the size of 1 int, which is 4 bytes;
*array means to dereference the contents of array. Considering the contents as a memory address (0x1000), look up the value at that location (0x0002);
array[i] means element number i, 0-based, of array which is translated into *(array + i).
The last example is how to access the contents of array. Breaking it down:
array + i is the memory location of the (i)th element of array, starting at i=0;
*(array + i) takes that memory address and dereferences it to access the value.
C linked list
Below is an example definition of a linked list in C.
/* the empty linked list is represented by NULL
* or some other sentinel value */
#define EMPTY_LIST NULL
struct link {
void *data; /* data of this link */
struct link *next; /* next link; EMPTY_LIST if there is none */
};
This pointer-recursive definition is essentially the same as the reference-recursive definition from the language Haskell:
data Link a = Nil
| Cons a (Link a)
Nil is the empty list, and Cons a (Link a) is a cons cell of type a with another link also of type a.
The definition with references, however, is type-checked and does not use potentially confusing signal values. For this reason, data structures in C are usually dealt with via wrapper functions, which are carefully checked for correctness.
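A hedged sketch of one such wrapper for the structure above (the name link_prepend is illustrative): it pushes a new link onto the front of a list and reports allocation failure by returning NULL, leaving the original list untouched.
#include <stdlib.h>

/* Prepend a new link carrying "data"; returns the new head of the list,
   or NULL if memory allocation fails. */
struct link *link_prepend(struct link *head, void *data) {
    struct link *node = malloc(sizeof *node);
    if (node == NULL)
        return NULL;
    node->data = data;
    node->next = head;
    return node;
}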
Pass-by-address using pointers
Pointers can be used to pass variables by their address, allowing their value to be changed. For example, consider the following C code:
/* a copy of the int n can be changed within the function without affecting the calling code */
void passByValue(int n) {
n = 12;
}
/* a pointer m is passed instead. No copy of the value pointed to by m is created */
void passByAddress(int *m) {
*m = 14;
}
int main(void) {
int x = 3;
/* pass a copy of x's value as the argument */
passByValue(x);
// the value was changed inside the function, but x is still 3 from here on
/* pass x's address as the argument */
passByAddress(&x);
// x was actually changed by the function and is now equal to 14 here
return 0;
}
Dynamic memory allocation
In some programs, the required amount of memory depends on what the user may enter. In such cases the programmer needs to allocate memory dynamically. This is done by allocating memory at the heap rather than on the stack, where variables usually are stored (although variables can also be stored in the CPU registers). Dynamic memory allocation can only be made through pointers, and names – like with common variables – cannot be given.
Pointers are used to store and manage the addresses of dynamically allocated blocks of memory. Such blocks are used to store data objects or arrays of objects. Most structured and object-oriented languages provide an area of memory, called the heap or free store, from which objects are dynamically allocated.
The example C code below illustrates how structure objects are dynamically allocated and referenced. The standard C library provides the function malloc() for allocating memory blocks from the heap. It takes the size of an object to allocate as a parameter and returns a pointer to a newly allocated block of memory suitable for storing the object, or it returns a null pointer if the allocation failed.
/* Parts inventory item */
struct Item {
int id; /* Part number */
char * name; /* Part name */
float cost; /* Cost */
};
/* Allocate and initialize a new Item object */
struct Item * make_item(const char *name) {
struct Item * item;
/* Allocate a block of memory for a new Item object */
item = malloc(sizeof(struct Item));
if (item == NULL)
return NULL;
/* Initialize the members of the new Item */
memset(item, 0, sizeof(struct Item));
item->id = -1;
item->name = NULL;
item->cost = 0.0;
/* Save a copy of the name in the new Item */
item->name = malloc(strlen(name) + 1);
if (item->name == NULL) {
free(item);
return NULL;
}
strcpy(item->name, name);
/* Return the newly created Item object */
return item;
}
The code below illustrates how memory objects are dynamically deallocated, i.e., returned to the heap or free store. The standard C library provides the function free() for deallocating a previously allocated memory block and returning it back to the heap.
/* Deallocate an Item object */
void destroy_item(struct Item *item) {
/* Check for a null object pointer */
if (item == NULL)
return;
/* Deallocate the name string saved within the Item */
if (item->name != NULL) {
free(item->name);
item->name = NULL;
}
/* Deallocate the Item object itself */
free(item);
}
Memory-mapped hardware
On some computing architectures, pointers can be used to directly manipulate memory or memory-mapped devices.
Assigning addresses to pointers is an invaluable tool when programming microcontrollers. Below is a simple example declaring a pointer of type int and initialising it to a hexadecimal address, in this example the constant 0x7FFF:
int *hardware_address = (int *)0x7FFF;
In the mid-1980s, using the BIOS to access the video capabilities of PCs was slow. Display-intensive applications typically accessed CGA video memory directly by casting the hexadecimal constant 0xB8000 to a pointer to an array of 80 unsigned 16-bit int values. Each value consisted of an ASCII code in the low byte, and a colour in the high byte. Thus, to put the letter 'A' at row 5, column 2 in bright white on blue, one would write code like the following:
#define VID ((unsigned short (*)[80])0xB8000)
void foo(void) {
VID[4][1] = 0x1F00 | 'A';
}
Use in control tables
Control tables that are used to control program flow usually make extensive use of pointers. The pointers, usually embedded in a table entry, may, for instance, be used to hold the entry points to subroutines to be executed, based on certain conditions defined in the same table entry. The pointers can however be simply indexes to other separate, but associated, tables comprising an array of the actual addresses or the addresses themselves (depending upon the programming language constructs available). They can also be used to point to earlier table entries (as in loop processing) or forward to skip some table entries (as in a switch or "early" exit from a loop). For this latter purpose, the "pointer" may simply be the table entry number itself and can be transformed into an actual address by simple arithmetic.
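A hedged sketch of such a table in C, using an array of entries that pair a condition code with a pointer to the subroutine to run (the names and codes are purely illustrative):
#include <stdio.h>

typedef void (*handler_t)(void);

static void on_open(void)  { puts("opened"); }
static void on_close(void) { puts("closed"); }

/* The control table: each entry holds a condition code and the entry point
   of the subroutine to execute when that code is seen. */
static const struct { int code; handler_t handler; } table[] = {
    { 1, on_open  },
    { 2, on_close },
};

void dispatch(int code) {
    for (size_t i = 0; i < sizeof table / sizeof table[0]; ++i) {
        if (table[i].code == code) {
            table[i].handler();   /* indirect call through the stored pointer */
            return;
        }
    }
}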
Typed pointers and casting
In many languages, pointers have the additional restriction that the object they point to has a specific type. For example, a pointer may be declared to point to an integer; the language will then attempt to prevent the programmer from pointing it to objects which are not integers, such as floating-point numbers, eliminating some errors.
For example, in C
int *money;
char *bags;
money would be an integer pointer and bags would be a char pointer.
The following would yield a compiler warning of "assignment from incompatible pointer type" under GCC
bags = money;
because money and bags were declared with different types.
To suppress the compiler warning, it must be made explicit that the assignment is intended, by means of a typecast:
bags = (char *)money;
which says to cast the integer pointer of money to a char pointer and assign to bags.
A 2005 draft of the C standard requires that casting a pointer derived from one type to one of another type should maintain the alignment correctness for both types (6.3.2.3 Pointers, par. 7):
char *external_buffer = "abcdef";
int *internal_data;
internal_data = (int *)external_buffer; // UNDEFINED BEHAVIOUR if "the resulting pointer
// is not correctly aligned"
In languages that allow pointer arithmetic, arithmetic on pointers takes into account the size of the type. For example, adding an integer number to a pointer produces another pointer that points to an address that is higher by that number times the size of the type. This allows us to easily compute the address of elements of an array of a given type, as was shown in the C arrays example above. When a pointer of one type is cast to another type of a different size, the programmer should expect that pointer arithmetic will be calculated differently. In C, for example, if the money array starts at 0x2000 and sizeof(int) is 4 bytes whereas sizeof(char) is 1 byte, then money + 1 will point to 0x2004, but bags + 1 would point to 0x2001. Other risks of casting include loss of data when "wide" data is written to "narrow" locations (e.g. bags[0] = 65537;), unexpected results when bit-shifting values, and comparison problems, especially with signed vs unsigned values.
Although it is impossible in general to determine at compile-time which casts are safe, some languages store run-time type information which can be used to confirm that these dangerous casts are valid at runtime. Other languages merely accept a conservative approximation of safe casts, or none at all.
Value of pointers
In C and C++, even if two pointers compare as equal, that does not mean they are equivalent. In these languages and LLVM, the rule is interpreted to mean that "just because two pointers point to the same address, does not mean they are equal in the sense that they can be used interchangeably", the difference between the pointers being referred to as their provenance. Casting to an integer type such as uintptr_t is implementation-defined, and the comparison it provides does not give any more insight as to whether the two pointers are interchangeable. In addition, further conversion to bytes and arithmetic will throw off optimizers trying to keep track of the use of pointers, a problem still being elucidated in academic research.
Making pointers safer
As a pointer allows a program to attempt to access an object that may not be defined, pointers can be the origin of a variety of programming errors. However, the usefulness of pointers is so great that it can be difficult to perform programming tasks without them. Consequently, many languages have created constructs designed to provide some of the useful features of pointers without some of their pitfalls, also sometimes referred to as pointer hazards. In this context, pointers that directly address memory (as used in this article) are referred to as raw pointers, by contrast with smart pointers or other variants.
One major problem with pointers is that as long as they can be directly manipulated as a number, they can be made to point to unused addresses or to data which is being used for other purposes. Many languages, including most functional programming languages and recent imperative programming languages like Java, replace pointers with a more opaque type of reference, typically referred to as simply a reference, which can only be used to refer to objects and not manipulated as numbers, preventing this type of error. Array indexing is handled as a special case.
A pointer which does not have any address assigned to it is called a wild pointer. Any attempt to use such uninitialized pointers can cause unexpected behavior, either because the initial value is not a valid address, or because using it may damage other parts of the program. The result is often a segmentation fault, storage violation or wild branch (if used as a function pointer or branch address).
In systems with explicit memory allocation, it is possible to create a dangling pointer by deallocating the memory region it points into. This type of pointer is dangerous and subtle because a deallocated memory region may contain the same data as it did before it was deallocated but may be then reallocated and overwritten by unrelated code, unknown to the earlier code. Languages with garbage collection prevent this type of error because deallocation is performed automatically when there are no more references in scope.
Some languages, like C++, support smart pointers, which use a simple form of reference counting to help track allocation of dynamic memory in addition to acting as a reference. In the absence of reference cycles, where an object refers to itself indirectly through a sequence of smart pointers, these eliminate the possibility of dangling pointers and memory leaks. Delphi strings support reference counting natively.
The Rust programming language introduces a borrow checker, pointer lifetimes, and an optimisation based around option types for null pointers to eliminate pointer bugs, without resorting to garbage collection.
Special kinds of pointers
Kinds defined by value
Null pointer
A null pointer has a value reserved for indicating that the pointer does not refer to a valid object. Null pointers are routinely used to represent conditions such as the end of a list of unknown length or the failure to perform some action; this use of null pointers can be compared to nullable types and to the Nothing value in an option type.
Dangling pointer
A dangling pointer is a pointer that does not point to a valid object and consequently may make a program crash or behave oddly. In the Pascal or C programming languages, pointers that are not specifically initialized may point to unpredictable addresses in memory.
The following example code shows a dangling pointer:
int func(void) {
char *p1 = malloc(sizeof(char)); /* (undefined) value of some place on the heap */
char *p2; /* dangling (uninitialized) pointer */
*p1 = 'a'; /* This is OK, assuming malloc() has not returned NULL. */
*p2 = 'b'; /* This invokes undefined behavior */
}
Here, p2 may point to anywhere in memory, so performing the assignment *p2 = 'b'; can corrupt an unknown area of memory or trigger a segmentation fault.
Wild branch
Where a pointer is used as the address of the entry point to a program or start of a function which doesn't return anything and is also either uninitialized or corrupted, if a call or jump is nevertheless made to this address, a "wild branch" is said to have occurred. In other words, a wild branch is a function pointer that is wild (dangling).
The consequences are usually unpredictable and the error may present itself in several different ways depending upon whether or not the pointer is a "valid" address and whether or not there is (coincidentally) a valid instruction (opcode) at that address. The detection of a wild branch can present one of the most difficult and frustrating debugging exercises since much of the evidence may already have been destroyed beforehand or by execution of one or more inappropriate instructions at the branch location. If available, an instruction set simulator can usually not only detect a wild branch before it takes effect, but also provide a complete or partial trace of its history.
Kinds defined by structure
Autorelative pointer
An autorelative pointer is a pointer whose value is interpreted as an offset from the address of the pointer itself; thus, if a data structure has an autorelative pointer member that points to some portion of the data structure itself, then the data structure may be relocated in memory without having to update the value of the auto relative pointer.
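A hedged sketch of the idea in C (the type and function names are illustrative; a production implementation would also need to respect the language's rules on pointer arithmetic between separate objects):
#include <stdint.h>

typedef int32_t rel_ptr;   /* the stored value is an offset, not an absolute address */

/* Record where "target" lives, as an offset from the field's own address. */
void rel_set(rel_ptr *field, void *target) {
    *field = (rel_ptr)((char *)target - (char *)field);
}

/* Recover the target address by adding the stored offset to the field's address. */
void *rel_get(rel_ptr *field) {
    return (char *)field + *field;
}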
The cited patent also uses the term self-relative pointer to mean the same thing. However, the meaning of that term has been used in other ways:
to mean an offset from the address of a structure rather than from the address of the pointer itself;
to mean a pointer containing its own address, which can be useful for reconstructing in any arbitrary region of memory a collection of data structures that point to each other.
Based pointer
A based pointer is a pointer whose value is an offset from the value of another pointer. This can be used to store and load blocks of data, assigning the address of the beginning of the block to the base pointer.
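An illustrative sketch, assuming a record that stores the location of a name string as an offset from a separately held base pointer (the names are hypothetical):
#include <stddef.h>

struct record {
    ptrdiff_t name_offset;   /* offset from the block's base, not an absolute address */
};

/* The block can be saved and reloaded at a different base address;
   only the base pointer changes, while the stored offsets remain valid. */
char *record_name(char *base, const struct record *r) {
    return base + r->name_offset;
}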
Kinds defined by use or datatype
Multiple indirection
In some languages, a pointer can reference another pointer, requiring multiple dereference operations to get to the original value. While each level of indirection may add a performance cost, it is sometimes necessary in order to provide correct behavior for complex data structures. For example, in C it is typical to define a linked list in terms of an element that contains a pointer to the next element of the list:
struct element {
struct element *next;
int value;
};
struct element *head = NULL;
This implementation uses a pointer to the first element in the list as a surrogate for the entire list. If a new value is added to the beginning of the list, head has to be changed to point to the new element. Since C arguments are always passed by value, using double indirection allows the insertion to be implemented correctly, and has the desirable side-effect of eliminating special case code to deal with insertions at the front of the list:
// Given a sorted list at *head, insert the element item at the first
// location where all earlier elements have lesser or equal value.
void insert(struct element **head, struct element *item) {
struct element **p; // p points to a pointer to an element
for (p = head; *p != NULL; p = &(*p)->next) {
if (item->value <= (*p)->value)
break;
}
item->next = *p;
*p = item;
}
// Caller does this:
insert(&head, item);
In this case, if the value of item is less than that of head, the caller's head is properly updated to the address of the new item.
A basic example is in the argv argument to the main function in C (and C++), which is given in the prototype as char **argv—this is because the variable argv itself is a pointer to an array of strings (an array of arrays), so *argv is a pointer to the 0th string (by convention the name of the program), and **argv is the 0th character of the 0th string.
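A minimal sketch of walking this structure with double indirection: argv points to an array of pointers to char, terminated by a null pointer.
#include <stdio.h>

int main(int argc, char **argv) {
    (void)argc;                        /* the terminating NULL makes argc redundant here */
    for (char **p = argv; *p != NULL; ++p)
        printf("%s\n", *p);            /* *p is a pointer to one argument string */
    return 0;
}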
Function pointer
In some languages, a pointer can reference executable code, i.e., it can point to a function, method, or procedure. A function pointer will store the address of a function to be invoked. While this facility can be used to call functions dynamically, it is often a favorite technique of virus and other malicious software writers.
int sum(int n1, int n2) { // Function with two integer parameters returning an integer value
return n1 + n2;
}
int main(void) {
int a = 2, b = 3, x, y; // a and b initialized so the calls below operate on defined values
int (*fp)(int, int); // Function pointer which can point to a function like sum
fp = &sum; // fp now points to function sum
x = (*fp)(a, b); // Calls function sum with arguments a and b
y = sum(a, b); // Calls function sum with arguments a and b
}
Back pointer
In doubly linked lists or tree structures, a back pointer held on an element 'points back' to the item referring to the current element. These are useful for navigation and manipulation, at the expense of greater memory use.
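A sketch of a doubly linked list node in C, where prev is the back pointer:
struct dnode {
    struct dnode *prev;   /* back pointer to the preceding element */
    struct dnode *next;   /* forward pointer to the following element */
    int value;
};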
Simulation using an array index
It is possible to simulate pointer behavior using an index to an (normally one-dimensional) array.
Primarily for languages which do not support pointers explicitly but do support arrays, the array can be thought of and processed as if it were the entire memory range (within the scope of the particular array) and any index to it can be thought of as equivalent to a general-purpose register in assembly language (that points to the individual bytes but whose actual value is relative to the start of the array, not its absolute address in memory).
Assuming the array is, say, a contiguous 16 megabyte character data structure, individual bytes (or a string of contiguous bytes within the array) can be directly addressed and manipulated using the name of the array with a 31 bit unsigned integer as the simulated pointer (this is quite similar to the C arrays example shown above). Pointer arithmetic can be simulated by adding or subtracting from the index, with minimal additional overhead compared to genuine pointer arithmetic.
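A hedged sketch of the technique, assuming a 16 MiB byte array standing in for memory and unsigned integer indices playing the role of pointers (all names are illustrative):
#include <stdint.h>

#define MEM_SIZE (16u * 1024u * 1024u)   /* 16 MiB of simulated memory */
static unsigned char memory[MEM_SIZE];

typedef uint32_t simptr;                 /* a simulated pointer is just an index into memory[] */

static unsigned char load8(simptr p)           { return memory[p]; }
static void store8(simptr p, unsigned char v)  { memory[p] = v; }
/* "Pointer arithmetic" becomes ordinary integer arithmetic on the index, e.g. p + 1 or p - 4. */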
It is even theoretically possible, using the above technique, together with a suitable instruction set simulator to simulate any machine code or the intermediate (byte code) of any processor/language in another language that does not support pointers at all (for example Java / JavaScript). To achieve this, the binary code can initially be loaded into contiguous bytes of the array for the simulator to "read", interpret and execute entirely within the memory containing the same array.
If necessary, to completely avoid buffer overflow problems, bounds checking can usually be inserted by the compiler (or if not, hand coded in the simulator).
Support in various programming languages
Ada
Ada is a strongly typed language where all pointers are typed and only safe type conversions are permitted. All pointers are by default initialized to null, and any attempt to access data through a null pointer causes an exception to be raised. Pointers in Ada are called access types. Ada 83 did not permit arithmetic on access types (although many compiler vendors provided for it as a non-standard feature), but Ada 95 supports “safe” arithmetic on access types via the package System.Storage_Elements.
BASIC
Several old versions of BASIC for the Windows platform had support for STRPTR() to return the address of a string, and for VARPTR() to return the address of a variable. Visual Basic 5 also had support for OBJPTR() to return the address of an object interface, and for an ADDRESSOF operator to return the address of a function. The types of all of these are integers, but their values are equivalent to those held by pointer types.
Newer dialects of BASIC, such as FreeBASIC or BlitzMax, have exhaustive pointer implementations, however. In FreeBASIC, arithmetic on ANY pointers (equivalent to C's void*) is treated as though the ANY pointer were one byte wide. ANY pointers cannot be dereferenced, as in C. Also, casting between ANY and any other type's pointers will not generate any warnings.
dim as integer f = 257
dim as any ptr g = @f
dim as integer ptr i = g
assert(*i = 257)
assert( (g + 4) = (@f + 1) )
C and C++
In C and C++ pointers are variables that store addresses and can be null. Each pointer has a type it points to, but one can freely cast between pointer types (but not between a function pointer and an object pointer). A special pointer type called the “void pointer” allows pointing to any (non-function) object, but is limited by the fact that it cannot be dereferenced directly (it shall be cast). The address itself can often be directly manipulated by casting a pointer to and from an integral type of sufficient size, though the results are implementation-defined and may indeed cause undefined behavior; while earlier C standards did not have an integral type that was guaranteed to be large enough, C99 specifies the uintptr_t typedef name defined in <stdint.h>, but an implementation need not provide it.
C++ fully supports C pointers and C typecasting. It also supports a new group of typecasting operators to help catch some unintended dangerous casts at compile-time. Since C++11, the C++ standard library also provides smart pointers (unique_ptr, shared_ptr and weak_ptr) which can be used in some situations as a safer alternative to primitive C pointers. C++ also supports another form of reference, quite different from a pointer, called simply a reference or reference type.
Pointer arithmetic, that is, the ability to modify a pointer's target address with arithmetic operations (as well as magnitude comparisons), is restricted by the language standard to remain within the bounds of a single array object (or just after it), and will otherwise invoke undefined behavior. Adding or subtracting from a pointer moves it by a multiple of the size of its datatype. For example, adding 1 to a pointer to 4-byte integer values will increment the pointer's pointed-to byte-address by 4. This has the effect of incrementing the pointer to point at the next element in a contiguous array of integers—which is often the intended result. Pointer arithmetic cannot be performed on void pointers because the void type has no size, and thus the pointed address can not be added to, although gcc and other compilers will perform byte arithmetic on void* as a non-standard extension, treating it as if it were char *.
Pointer arithmetic provides the programmer with a single way of dealing with different types: adding and subtracting the number of elements required instead of the actual offset in bytes. (Pointer arithmetic with char * pointers uses byte offsets, because sizeof(char) is 1 by definition.) In particular, the C definition explicitly declares that the syntax a[n], which is the n-th element of the array a, is equivalent to *(a + n), which is the content of the element pointed by a + n. This implies that n[a] is equivalent to a[n], and one can write, e.g., a[3] or 3[a] equally well to access the fourth element of an array a.
While powerful, pointer arithmetic can be a source of computer bugs. It tends to confuse novice programmers, forcing them into different contexts: an expression can be an ordinary arithmetic one or a pointer arithmetic one, and sometimes it is easy to mistake one for the other. In response to this, many modern high-level computer languages (for example Java) do not permit direct access to memory using addresses. Also, the safe C dialect Cyclone addresses many of the issues with pointers. See C programming language for more discussion.
The void pointer, or void*, is supported in ANSI C and C++ as a generic pointer type. A pointer to void can store the address of any object (not function), and, in C, is implicitly converted to any other object pointer type on assignment, but it must be explicitly cast if dereferenced.
K&R C used char* for the “type-agnostic pointer” purpose (before ANSI C).
int x = 4;
void* p1 = &x;
int* p2 = p1; // void* implicitly converted to int*: valid C, but not C++
int a = *p2;
int b = *(int*)p1; // when dereferencing inline, there is no implicit conversion
C++ does not allow the implicit conversion of void* to other pointer types, even in assignments. This was a design decision to avoid careless and even unintended casts, though most compilers only output warnings, not errors, when encountering other casts.
int x = 4;
void* p1 = &x;
int* p2 = p1; // this fails in C++: there is no implicit conversion from void*
int* p3 = (int*)p1; // C-style cast
int* p4 = reinterpret_cast<int*>(p1); // C++ cast
In C++, there is no void& (reference to void) to complement void* (pointer to void), because references behave like aliases to the variables they point to, and there can never be a variable whose type is void.
Pointer-to-member
In C++ pointers to non-static members of a class can be defined. If a class C has a member T a then &C::a is a pointer to the member a of type T C::*. This member can be an object or a function. They can be used on the right-hand side of operators .* and ->* to access the corresponding member.
struct S {
int a;
int f() const {return a;}
};
S s1{};
S* ptrS = &s1;
int S::* ptr = &S::a; // pointer to S::a
int (S::* fp)()const = &S::f; // pointer to S::f
s1.*ptr = 1;
std::cout << (s1.*fp)() << "\n"; // prints 1
ptrS->*ptr = 2;
std::cout << (ptrS->*fp)() << "\n"; // prints 2
Pointer declaration syntax overview
These pointer declarations cover most variants of pointer declarations. Of course it is possible to have triple pointers, but the main principles behind a triple pointer already exist in a double pointer. The naming used here is what the expression typeid(type).name() equals for each of these types when using g++ or clang.
char A5_A5_c [5][5]; /* array of arrays of chars */
char *A5_Pc [5]; /* array of pointers to chars */
char **PPc; /* pointer to pointer to char ("double pointer") */
char (*PA5_c) [5]; /* pointer to array(s) of chars */
char *FPcvE(); /* function which returns a pointer to char(s) */
char (*PFcvE)(); /* pointer to a function which returns a char */
char (*FPA5_cvE())[5]; /* function which returns pointer to an array of chars */
char (*A5_PFcvE[5])(); /* an array of pointers to functions which return a char */
The following declarations involving pointers-to-member are valid only in C++:
class C;
class D;
char C::* M1Cc; /* pointer-to-member to char */
char C::*A5_M1Cc [5]; /* array of pointers-to-member to char */
char* C::* M1CPc; /* pointer-to-member to pointer to char(s) */
char C::** PM1Cc; /* pointer to pointer-to-member to char */
char (C::* M1CA5_c) [5]; /* pointer-to-member to array(s) of chars */
char C::* FM1CcvE(); /* function which returns a pointer-to-member to char */
char D::* C::* M1CM1Dc; /* pointer-to-member (of C) to a pointer-to-member (of D) to char */
char C::* C::* M1CMS_c; /* pointer-to-member to pointer-to-member to char */
char (C::* FM1CA5_cvE())[5]; /* function which returns pointer-to-member to an array of chars */
char (C::* M1CFcvE)(); /* pointer-to-member-function which returns a char */
char (C::* A5_M1CFcvE[5])(); /* an array of pointers-to-member-functions which return a char */
The () and [] operators have higher precedence than *.
C#
In the C# programming language, pointers are supported by either marking blocks of code that include pointers with the unsafe keyword, or by using the System.Runtime.CompilerServices assembly provisions for pointer access.
The syntax is essentially the same as in C++, and the address pointed can be either managed or unmanaged memory. However, pointers to managed memory (any pointer to a managed object) must be declared using the fixed keyword, which prevents the garbage collector from moving the pointed object as part of memory management while the pointer is in scope, thus keeping the pointer address valid.
An exception to this is the IntPtr structure, a managed equivalent to int* that requires neither the unsafe keyword nor the CompilerServices assembly. This type is often returned when using methods from the System.Runtime.InteropServices namespace, for example:
// Get 16 bytes of memory from the process's unmanaged memory
IntPtr pointer = System.Runtime.InteropServices.Marshal.AllocHGlobal(16);
// Do something with the allocated memory
// Free the allocated memory
System.Runtime.InteropServices.Marshal.FreeHGlobal(pointer);
The .NET framework includes many classes and methods in the System and System.Runtime.InteropServices namespaces (such as the Marshal class) which convert .NET types (for example, System.String) to and from many unmanaged types and pointers (for example, LPWSTR or void*) to allow communication with unmanaged code. Most such methods have the same security permission requirements as unmanaged code, since they can affect arbitrary places in memory.
COBOL
The COBOL programming language supports pointers to variables. Primitive or group (record) data objects declared within the LINKAGE SECTION of a program are inherently pointer-based, where the only memory allocated within the program is space for the address of the data item (typically a single memory word). In program source code, these data items are used just like any other WORKING-STORAGE variable, but their contents are implicitly accessed indirectly through their LINKAGE pointers.
Memory space for each pointed-to data object is typically allocated dynamically using external CALL statements or via embedded extended language constructs such as EXEC CICS or EXEC SQL statements.
Extended versions of COBOL also provide pointer variables declared with USAGE IS POINTER clauses. The values of such pointer variables are established and modified using SET and SET ADDRESS statements.
Some extended versions of COBOL also provide PROCEDURE-POINTER variables, which are capable of storing the addresses of executable code.
PL/I
The PL/I language provides full support for pointers to all data types (including pointers to structures), recursion, multitasking, string handling, and extensive built-in functions. PL/I was quite a leap forward compared to the programming languages of its time. PL/I pointers are untyped, and therefore no casting is required for pointer dereferencing or assignment. The declaration syntax for a pointer is DECLARE xxx POINTER;, which declares a pointer named "xxx". Pointers are used with BASED variables. A based variable can be declared with a default locator (DECLARE xxx BASED(ppp);) or without one (DECLARE xxx BASED;), where xxx is the based variable (an element variable, a structure, or an array) and ppp is the default pointer. Such a variable can be addressed without an explicit pointer reference (xxx=1;), or with an explicit reference to the default locator (ppp) or to any other pointer (qqq->xxx=1;).
Pointer arithmetic is not part of the PL/I standard, but many compilers allow expressions of the form ptr = ptr±expression. IBM PL/I also has the builtin function PTRADD to perform the arithmetic. Pointer arithmetic is always performed in bytes.
IBM Enterprise PL/I compilers have a new form of typed pointer called a HANDLE.
D
The D programming language is a derivative of C and C++ which fully supports C pointers and C typecasting.
Eiffel
The Eiffel object-oriented language employs value and reference semantics without pointer arithmetic. Nevertheless, pointer classes are provided. They offer pointer arithmetic, typecasting, explicit memory management, interfacing with non-Eiffel software, and other features.
Fortran
Fortran-90 introduced a strongly typed pointer capability. Fortran pointers contain more than just a simple memory address. They also encapsulate the lower and upper bounds of array dimensions, strides (for example, to support arbitrary array sections), and other metadata. An association operator, => is used to associate a POINTER to a variable which has a TARGET attribute. The Fortran-90 ALLOCATE statement may also be used to associate a pointer to a block of memory. For example, the following code might be used to define and create a linked list structure:
type real_list_t
    real :: sample_data(100)
    type (real_list_t), pointer :: next => null ()
end type

type (real_list_t), target :: my_real_list
type (real_list_t), pointer :: real_list_temp

real_list_temp => my_real_list
do
    read (1,iostat=ioerr) real_list_temp%sample_data
    if (ioerr /= 0) exit
    allocate (real_list_temp%next)
    real_list_temp => real_list_temp%next
end do
Fortran-2003 adds support for procedure pointers. Also, as part of the C Interoperability feature, Fortran-2003 supports intrinsic functions for converting C-style pointers into Fortran pointers and back.
Go
Go has pointers. Its declaration syntax is equivalent to that of C, but written the other way around, ending with the type. Unlike C, Go has garbage collection, and disallows pointer arithmetic. Reference types, like in C++, do not exist. Some built-in types, like maps and channels, are boxed (i.e. internally they are pointers to mutable structures), and are initialized using the make function. In an approach to unified syntax between pointers and non-pointers, the arrow (->) operator has been dropped: the dot operator on a pointer refers to the field or method of the dereferenced object. This, however, only works with 1 level of indirection.
Java
There is no explicit representation of pointers in Java. Instead, more complex data structures like objects and arrays are implemented using references. The language does not provide any explicit pointer manipulation operators. It is still possible for code to attempt to dereference a null reference (null pointer), however, which results in a run-time exception being thrown. The space occupied by unreferenced memory objects is recovered automatically by garbage collection at run-time.
Modula-2
Pointers are implemented very much as in Pascal, as are VAR parameters in procedure calls. Modula-2 is even more strongly typed than Pascal, with fewer ways to escape the type system. Some of the variants of Modula-2 (such as Modula-3) include garbage collection.
Oberon
Much as with Modula-2, pointers are available. There are still fewer ways to evade the type system and so Oberon and its variants are still safer with respect to pointers than Modula-2 or its variants. As with Modula-3, garbage collection is a part of the language specification.
Pascal
Unlike many languages that feature pointers, standard ISO Pascal only allows pointers to reference dynamically created variables that are anonymous and does not allow them to reference standard static or local variables. It does not have pointer arithmetic. Pointers also must have an associated type and a pointer to one type is not compatible with a pointer to another type (e.g. a pointer to a char is not compatible with a pointer to an integer). This helps eliminate the type security issues inherent with other pointer implementations, particularly those used for PL/I or C. It also removes some risks caused by dangling pointers, but the ability to dynamically let go of referenced space by using the dispose standard procedure (which has the same effect as the free library function found in C) means that the risk of dangling pointers has not been entirely eliminated.
However, in some commercial and open-source Pascal (or derivative) compiler implementations, such as Free Pascal, Turbo Pascal, or the Object Pascal in Embarcadero Delphi, a pointer is allowed to reference standard static or local variables and can be cast from one pointer type to another. Moreover, pointer arithmetic is unrestricted: adding or subtracting from a pointer moves it by that number of bytes in either direction, but using the Inc or Dec standard procedures with it moves the pointer by the size of the data type it is declared to point to. An untyped pointer is also provided under the name Pointer, which is compatible with other pointer types.
Perl
The Perl programming language supports pointers, although rarely used, in the form of the pack and unpack functions. These are intended only for simple interactions with compiled OS libraries. In all other cases, Perl uses references, which are typed and do not allow any form of pointer arithmetic. They are used to construct complex data structures.
See also
Address constant
Bounded pointer
Buffer overflow
Cray pointer
Fat pointer
Function pointer
Hazard pointer
Iterator
Opaque pointer
Pointee
Pointer swizzling
Reference (computer science)
Static program analysis
Storage violation
Tagged pointer
Variable (computer science)
Zero-based numbering
Notes
References
External links
PL/I List Processing Paper from the June, 1967 issue of CACM
cdecl.org A tool to convert pointer declarations to plain English
Over IQ.com A beginner-level guide describing pointers in plain English
Pointers and Memory Introduction to pointers – Stanford Computer Science Education Library
Pointers in C programming A visual model for beginner C programmers
0pointer.de A terse list of minimum length source codes that dereference a null pointer in several different programming languages
"The C book" – containing pointer examples in ANSI C
Committee draft.
Articles with example C code
Pointers (computer programming)
Primitive types
American inventions
Programming language comparisons
Articles with example Ada code
Articles with example BASIC code
Articles with example C++ code
Articles with example C Sharp code
Articles with example D code
Articles with example Eiffel code
Articles with example Fortran code
Articles with example Java code
Articles with example Pascal code
sv:Datatyp#Pekare och referenstyper | Pointer (computer programming) | Technology | 12,991 |
825,748 | https://en.wikipedia.org/wiki/Propylene | Propylene, also known as propene, is an unsaturated organic compound with the chemical formula CH3CH=CH2. It has one double bond, and is the second simplest member of the alkene class of hydrocarbons. It is a colorless gas with a faint petroleum-like odor.
Propylene is a product of combustion from forest fires, cigarette smoke, and motor vehicle and aircraft exhaust. It was discovered in 1850 by A. W. von Hofmann's student Captain (later Major General) John Williams Reynolds as the only gaseous product of thermal decomposition of amyl alcohol to react with chlorine and bromine.
Production
Steam cracking
The dominant technology for producing propylene is steam cracking, using propane as the feedstock. Cracking propane yields a mixture of ethylene, propylene, methane, hydrogen gas, and other related compounds. The yield of propylene is about 15%. The other principal feedstock is naphtha, especially in the Middle East and Asia.
Propylene can be separated by fractional distillation from the hydrocarbon mixtures obtained from cracking and other refining processes; refinery-grade propene is about 50 to 70%. In the United States, shale gas is a major source of propane.
Olefin conversion technology
In the Phillips triolefin or olefin conversion technology, propylene is interconverted with ethylene and 2-butenes. Rhenium and molybdenum catalysts are used:
CH2=CH2 + CH3CH=CHCH3 → 2 CH2=CHCH3   (Re, Mo catalyst)
The technology is founded on an olefin metathesis reaction discovered at Phillips Petroleum Company. Propylene yields of about 90 wt% are achieved.
Related is the Methanol-to-Olefins/Methanol-to-Propene process. It converts synthesis gas (syngas) to methanol, and then converts the methanol to ethylene and/or propene. The process produces water as a by-product. Synthesis gas is produced from the reformation of natural gas or by the steam-induced reformation of petroleum products such as naphtha, or by gasification of coal or natural gas.
Fluid catalytic cracking
High severity fluid catalytic cracking (FCC) uses traditional FCC technology under severe conditions (higher catalyst-to-oil ratios, higher steam injection rates, higher temperatures, etc.) in order to maximize the amount of propene and other light products. A high severity FCC unit is usually fed with gas oils (paraffins) and residues, and produces about 20–25% (by mass) of propene on feedstock together with greater volumes of motor gasoline and distillate byproducts. These high temperature processes are expensive and have a high carbon footprint. For these reasons, alternative routes to propylene continue to attract attention.
Other commercialized methods
On-purpose propylene production technologies were developed throughout the twentieth century. Of these, propane dehydrogenation technologies such as the CATOFIN and OLEFLEX processes have become common, although they still make up a minority of the market, with most of the olefin being sourced from the above mentioned cracking technologies. Platinum, chromia, and vanadium catalysts are common in propane dehydrogenation processes.
Market
Propene production has remained static at around 35 million tonnes (Europe and North America only) from 2000 to 2008, but it has been increasing in East Asia, most notably Singapore and China. Total world production of propene is currently about half that of ethylene.
Research
The use of engineered enzymes has been explored but has not been commercialized.
There is ongoing research into the use of oxygen carrier catalysts for the oxidative dehydrogenation of propane. This poses several advantages, as this reaction mechanism can occur at lower temperatures than conventional dehydrogenation, and may not be equilibrium-limited because oxygen is used to combust the hydrogen by-product.
Uses
Propylene is the second most important starting product in the petrochemical industry after ethylene. It is the raw material for a wide variety of products. Polypropylene manufacturers consume nearly two thirds of global production. Polypropylene end uses include films, fibers, containers, packaging, and caps and closures. Propene is also used for the production of chemicals such as propylene oxide, acrylonitrile, cumene, butyraldehyde, and acrylic acid. In the year 2013 about 85 million tonnes of propylene were processed worldwide.
Propylene and benzene are converted to acetone and phenol via the cumene process.
Propylene is also used to produce isopropyl alcohol (propan-2-ol), acrylonitrile, propylene oxide, and epichlorohydrin.
The industrial production of acrylic acid involves the catalytic partial oxidation of propylene. Propylene is an intermediate in the oxidation to acrylic acid.
In industry and workshops, propylene is used as an alternative fuel to acetylene in oxy-fuel welding and cutting, brazing, and heating of metal for the purpose of bending. It has become standard in BernzOmatic products and other MAPP gas substitutes, now that true MAPP gas is no longer available.
Reactions
Propylene resembles other alkenes in that it undergoes electrophilic addition reactions relatively easily at room temperature. The relative weakness of its double bond explains its tendency to react with substances that can achieve this transformation. Alkene reactions include:
Polymerization and oligomerization
Oxidation
Halogenation
Hydrohalogenation
Alkylation
Hydration
Hydroformylation
Complexes of transition metals
Foundational to hydroformylation, alkene metathesis, and polymerization are metal-propylene complexes, which are intermediates in these processes. Propylene is prochiral, meaning that binding of a reagent (such as a metal electrophile) to the C=C group yields one of two enantiomers.
Polymerization
The majority of propylene is used to form polypropylene, a very important commodity thermoplastic, through chain-growth polymerization. In the presence of a suitable catalyst (typically a Ziegler–Natta catalyst), propylene will polymerize. There are multiple ways to achieve this, such as using high pressures to suspend the catalyst in a solution of liquid propylene, or running gaseous propylene through a fluidized bed reactor.
Oligomerization
In the presence of catalysts, propylene will form various short oligomers. It can dimerize to give 2,3-dimethyl-1-butene and/or 2,3-dimethyl-2-butene, or trimerize to form tripropylene.
Environmental safety
Propene is a product of combustion from forest fires, cigarette smoke, and motor vehicle and aircraft exhaust. It is an impurity in some heating gases. Observed concentrations have been in the range of 0.1–4.8 parts per billion (ppb) in rural air, 4–10.5 ppb in urban air, and 7–260 ppb in industrial air samples.
In the United States and some European countries a threshold limit value of 500 parts per million (ppm) was established for occupational (8-hour time-weighted average) exposure. It is considered a volatile organic compound (VOC) and emissions are regulated by many governments, but it is not listed by the U.S. Environmental Protection Agency (EPA) as a hazardous air pollutant under the Clean Air Act. With a relatively short half-life, it is not expected to bioaccumulate.
Propene has low acute toxicity from inhalation and is not considered to be carcinogenic. Chronic toxicity studies in mice did not yield significant evidence suggesting adverse effects. Humans briefly exposed to 4,000 ppm did not experience any noticeable effects. Propene is dangerous from its potential to displace oxygen as an asphyxiant gas, and from its high flammability/explosion risk.
Bio-propylene is bio-based propylene.
It has been examined, motivated by diverse interests such as reducing the carbon footprint. Production from glucose has been considered. More advanced ways of addressing such issues focus on electrification alternatives to steam cracking.
Storage and handling
Propene is flammable. Propene is usually stored as liquid under pressure, although it is also possible to store it safely as gas at ambient temperature in approved containers.
Occurrence in nature
Propene has been detected in the interstellar medium through microwave spectroscopy. On September 30, 2013, NASA announced the detection of small amounts of naturally occurring propene in the atmosphere of Titan using infrared spectroscopy. The detection was made by a team led by NASA GSFC scientist Conor Nixon using data from the CIRS instrument on the Cassini orbiter spacecraft, part of the Cassini-Huygens mission. Its confirmation solved a 32-year-old mystery by filling a predicted gap in Titan's detected hydrocarbons, adding the C3H6 species (propene) to the already-detected C3H4 (propyne) and C3H8 (propane).
See also
Los Alfaques disaster
Inhalant abuse
2014 Kaohsiung gas explosions
2020 Houston explosion
Titan (moon)
References
Alkenes
Monomers
Commodity chemicals
Petrochemicals
Gases
Allyl compounds | Propylene | Physics,Chemistry,Materials_science | 1,984 |
53,863,138 | https://en.wikipedia.org/wiki/Ebba%20Lund | Ebba Lund (22 September 1923 – 21 June 1999) was a Danish Resistance fighter during World War II, a chemical engineer, and a microbiologist.
Early life
Ebba Lund was born in 1923 to parents Søren Aabye Kierkegaard (1875–1956) and Anna Petrea Lindberg (1890–1980). Her father was an engineer. The Copenhagen community in which she grew up was considered to be very conservative.
Resistance work
Lund began her resistance work in 1942, two years after the German invasion of Denmark, when she was 20 years old. Her work initially consisted of publishing illegal underground newspapers with her sister, Ulla. Lund worked for Frit Danmark (Free Denmark), a popular clandestine newspaper, which would go on to publish over six million copies by the end of World War II.
After the collapse of the Danish government, Lund went on to join Holger Danske, a sabotage-oriented Resistance group. Upon joining the Holger Danske resistance group, she became responsible for fishing boats that would secretly bring Jews to safety. Due to connections on the nearby island of Christianso, Lund was able to organize almost a dozen fishing boats for the transportation of Jews to Sweden. She also managed to convince several local landowners to provide the funding for these trips. Safe houses were set up for Jews until they could safely be taken to Sweden, and Lund's own house was used as a safe house.
During her rescue operations, she became known as the "Girl with the Red Cap" or "Red Riding Hood" because of the red hat she wore to signal the Jews to be escorted to look for her. The Resistance did not only aid Jewish individuals, however; they also assisted defecting German soldiers and other Resistance members. Thanks to connections within the Holger Danske, including bribery and partnerships with members of the German army, Lund was able to avoid multiple run-ins with German forces.
The Holger Danske group helped save 700-800 Jews in only a few weeks by offering means of escape. Lund herself had a hand in about 500 of these missions. She managed to escape arrest because, at a time when many of her fellow Resistance workers were being arrested, she was hospitalized with blood poisoning. In 1944, she became pregnant with her first child, Vita, and withdrew entirely from Resistance work.
Education and research
Before her recruitment into the Holger Danske, Lund graduated from the Ingrid Jespersens Gymnasieskole in 1942. After the war, she studied chemical engineering and immunology. She went on to attend the Technical University of Denmark, where she graduated as a chemical engineer with a specialty in microbiology. Following her graduation, Lund was first employed in 1947 at the University of Copenhagen at the Carlsberg Foundation Biological Institute.
Following a move to Gothenburg with her spouse, Lund became employed at both the Sahlgrenska University Hospital from 1954 to 1966 and the University of Gothenburg's Faculty of Medicine in 1963. She performed research on the polio virus in response to a polio epidemic occurring in Denmark at the time. Her research pertained to the investigation of cell culture methods for the research and diagnosis of polio. In 1963, Lund presented her dissertation resulting from this work entitled Oxidative Inactivation of Poliovirus for her Ph.D. at the University of Copenhagen. Along with polio, Lund researched and advocated for vaccinations for foot-and-mouth disease.
Lund became the head of the Department of Virology and Immunology at the Royal Veterinary and Agricultural University in Copenhagen in 1966. She became the first female professor of this institution in 1969 and held her position there until 1993. During her time there, she taught epidemiology as well as various classes in agriculture and veterinary sciences. She performed vast amounts of research during her time at the University of Copenhagen. Her research included work on the inactivation of viruses in wastewater and seawater as well as research on the parasite Toxoplasma. In particular, much of her research pertained to virus isolation from seawater and sewage waste. She also emphasized the importance of understanding how diseases move from animals to humans.
Lund worked with the Danish Fur Breeders to study the diseases of mink puppies. With the help of the Danish Fur Breeders, she developed the world's first economically viable antigen that could diagnose plasmacytosis, a disease that is very common in minks. This allowed breeders to know which puppies were more susceptible to the disease and helped with the question of which puppies to vaccinate.
Lund was a prolific scientist; she published 124 works in her lifetime, including 84 in English, as well as a lecture series and other material. She wrote two textbooks: Virology for Veterinary Students, 8th edition, and Immunology for Veterinary Students, 4th edition. She also wrote the books "Water Pollution" and "Gene Splicing" and co-authored the book "Water Reuse" with her spouse.
Organizations and awards
Lund also participated in and assisted several organizations during her time. In particular, she worked with the Danish Fur Breeders Association in researching puppy disease and vaccinations in 1969. Coupled with this association, she was the first in the world to produce an effective antigen in cell cultures that diagnosed the disease plasmacytosis. This antigen was sold throughout Europe.
Lund collaborated with the World Health Organization in 1968 on the effects of water pollution. At this time she also worked with the European Commission on the control of various diseases, such as swine fever and foot-and-mouth disease.
Lund was a chairman of the Danish Society of Pathology from 1970 to 1976. She was an active member and leader of the Danish Society for Nature Conservation. In 1968 she became a member and leader of the Academy of Engineering Sciences as well as a member of the Society of Sciences in 1978. From 1980 to 1990 Lund was a member of the Executive Board of the Carlsberg Foundation and chairman of the Carlsberg Laboratory. From 1986 to 1990 she was a member of both the National Council for Health Sciences Research and the Ethical Council. Lastly, Lund was chair of the Gene Technology Council from 1986 to 1991.
In 1975, Lund became a Knight of the Dannebrog and in 1984 was appointed to a knight in the first degree. In 1985, she received the Ebbe Muncks Award for her service in the Resistance. She gave an oral history interview about her war time experiences to the United States Holocaust Memorial Museum in 1994.
Personal life
Lund had three children, born in 1945, 1948 and 1951. She first married Professor Soren Lovtrup in 1944, divorcing in 1959. She then married Robert Berridge Dean, the Head of the United States Environmental Protection Agency, in 1978.
Lund died on 21 June 1999 in Copenhagen.
References
1923 births
1999 deaths
Danish microbiologists
Danish women chemists
Danish resistance members
Women in World War II
Technical University of Denmark alumni
Danish women academics
Knights of the Order of the Dannebrog
Academic staff of the University of Copenhagen
Danish chemical engineers
Women chemical engineers
Holger Danske resistance group | Ebba Lund | Chemistry | 1,451 |
44,863,957 | https://en.wikipedia.org/wiki/Algerian%20units%20of%20measurement | This is a list of Algerian units of measurement that were used before 1843 for things like length, mass and capacity. After that, Algeria adopted the French system of units (i.e., the metric system).
Length
Before the 1843 changeover, different units were used to measure length. One pic was equal to either 0.64 m or 0.623 m, while a different pic was equal to 0.48 m or 0.467 m. Some other units are given below:
1 termin = pic
1 rebia = pic
1 nus = pic.
Mass
A number of different units were used to measure mass. One (ounce) was equal to 0.03413 kg. One metical (metsquat) was equal to about 0.0047 kg. Some other units are given below:
1 = 16
1 = 18
1 = 24
1 = 100 rottolo (cantar (kebyr) = 100 = 100 , and = 100 ).
In addition to above units, one gyral was equal to 207 mg.
Capacity
Two different systems were used to measure capacity: one for dry measure, and another for liquid measure. Some units used to measure dry capacities are given below:
Dry
1 caffiso (or calisse) = 317.47 L (Note: in an old publication, one caffiso was equalised to 8 saah, even though the given values are mismatched.)
1 (or ) = 58 L
1 (or ) = caffiso.
Liquid
One (or or ) was equal to l (1 hectoliter = 6 ) or 16 L. One Metalli (oil) was equal to 17.90 L.
References
Culture of Algeria
Algeria | Algerian units of measurement | Mathematics | 363 |
15,225,315 | https://en.wikipedia.org/wiki/RFWD2 | E3 ubiquitin-protein ligase RFWD2 is an enzyme that in humans is encoded by the RFWD2 gene.
Interactions
RFWD2 has been shown to interact with C-jun.
References
Further reading | RFWD2 | Chemistry | 47 |
50,589 | https://en.wikipedia.org/wiki/Spanking | Spanking is a form of corporal punishment involving the act of striking, with either the palm of the hand or an implement, the buttocks of a person to cause physical pain. The term spanking broadly encompasses the use of either the hand or implement, though the use of certain implements can also be characterized as other, more specific types of corporal punishment such as belting, caning, paddling and slippering.
Some parents spank children in response to undesired behavior. Adults more commonly spank boys than girls both at home and in school. Some countries have outlawed the spanking of children in every setting, including homes, schools, and penal institutions, while others permit it when done by a parent or guardian.
Terminology
In American English, dictionaries define spanking as being administered with either the open hand or an implement such as a paddle. Thus, the standard form of corporal punishment in US schools (use of a paddle) is often referred to as a spanking. In North America, the word "spanking" has often been used as a synonym for an official paddling in school, and sometimes even as a euphemism for the formal corporal punishment of adults in an institution.
In British English, most dictionaries define "spanking" as being given only with the open hand. In the United Kingdom, Ireland, Australia, and New Zealand, the word "smacking" is generally used in preference to "spanking" when describing striking with an open hand, rather than with an implement. Whereas a spanking is invariably administered to the bottom, a "smacking" is less specific and may refer to slapping the child's hands, arms, or legs as well as its bottom.
In the home
Parents commonly spank their children as a form of corporal punishment in the United States; however, support for this practice appears to be declining amongst U.S. parents. Spanking is typically done with one or more slaps on the child's buttocks with a bare hand, although, not uncommonly, various objects are used to spank children, such as a hairbrush or wooden spoon. Historically, adults have spanked boys more than girls. In the United States, adults commonly spank toddlers the most. The main reasons parents give for spanking their children are to make children more compliant and to promote better behavior, especially to put a stop to their children's apparent aggressive behaviors.
However, research has shown that spanking (or any other form of corporal punishment) is associated with the opposite effect. When adults physically punish children, the children tend to obey parents less with time and develop more aggressive behaviors, including toward other children. This increase in aggressive behavior appears to reflect the child's perception that hitting is the way to deal with anger and frustration. There are also many adverse physical, mental, and emotional effects correlated with spanking and other forms of corporal punishment, including various physical injuries, increased anxiety, depression, and antisocial behavior. Adults who were spanked during their childhood are more likely to abuse their children and spouse.
The American Academy of Pediatrics (AAP), Royal College of Paediatrics and Child Health (RCPCH), and the Royal Australasian College of Physicians (RACP) all recommend that no child should be spanked and instead favor the use of effective, healthy forms of discipline. Additionally, the AAP recommends that primary care providers (e.g., pediatricians and family medicine physicians) begin to discuss parents' discipline methods no later than nine months of age and consider initiating such discussions by age 3–4 months. By eight months of age, 5% of parents report spanking and 5% report starting to spank by age three months. The AAP also recommends that pediatricians discuss effective discipline strategies and counsel parents about the ineffectiveness of spanking and the risks of harmful effects associated with the practice to minimize harm to children and guide parents.
Although parents and other advocates of spanking often claim that spanking is necessary to promote child discipline, studies have shown that parents tend to apply physical punishment inconsistently and tend to spank more often when they are angry or under stress. The use of corporal punishment by parents increases the likelihood that children will suffer physical abuse, and most documented cases of physical abuse in Canada and the United States begin as disciplinary spankings. If a child is frequently spanked, this form of corporal punishment tends to become less effective at modifying behavior over time (also known as extinction). In response to the decreased effectiveness of spanking, some parents increase the frequency or severity of spanking or use an object.
Alternatives to spanking
Parents may spank less – or not at all – if they have learned effective discipline techniques since many view spanking as a last resort to discipline their children. There are many alternatives to spanking and other forms of corporal punishment:
Time-in, increasing praise, and special time to promote desired behaviors
Time outs to take a break from escalating misbehavior
Positive reinforcement of rewarding desirable behavior with a star, sticker, or treat
Implementing non-physical punishment (psychology) in which an unpleasant consequence follows misbehavior, such as taking away a privilege
Ignoring low-level misbehaviors and prioritizing attention on more significant forms of misbehavior
Avoiding the opportunity for misbehavior and thus the need for corrective discipline.
In schools
Corporal punishment, usually delivered with an implement (such as a paddle or cane) rather than with the open hand, used to be a common form of school discipline in many countries, but it is now banned in most of the Western World.
Corporal punishment, such as caning, remains a common form of discipline in schools in several Asian and African countries, even in countries in which this practice has been deemed illegal such as India and South Africa. In these cultures it is referred to as "caning" and not "spanking."
The Supreme Court of the United States in 1977 held that the paddling of school students was not per se unlawful. However, 33 states have now banned paddling in public schools. It is still common in some schools in the South, and more than 167,000 students were paddled in the 2011–2012 school year in American public schools. Students can be physically punished from kindergarten to the end of high school, meaning that even adults who have reached the age of majority are sometimes spanked by school officials.
Several medical, pediatric, or psychological societies have issued statements opposing all forms of corporal punishment in schools, citing such outcomes as poorer academic achievements, increases in antisocial behaviors, injuries to students, and an unwelcoming learning environment. They include the American Medical Association, the American Academy of Child and Adolescent Psychiatry, the American Psychoanalytic Association, the American Academy of Pediatrics (AAP), the Society for Adolescent Medicine, the American Psychological Association, the Royal College of Paediatrics and Child Health, the Royal College of Psychiatrists, the Canadian Paediatric Society and the Australian Psychological Society, as well as the United States' National Association of School Psychologists and National Association of Secondary School Principals.
Adult spanking
Most spanking performed between adults in the 21st century within the Western world is erotic spanking.
Within the early 20th century, American men spanking their wives and girlfriends was often seen as an acceptable form of domestic discipline. It was a common trope in American films, from the earliest days up through the 1960s, and was often used to allude to romance between the man and woman.
In the early 21st century, adherents of a small subculture known as Christian domestic discipline have justified spanking, based on a literalist interpretation of the Bible, as an acceptable form of punishment of women by their husbands. Critics describe such practices as a form of domestic abuse.
A few countries have a judicial corporal punishment for adults.
Ritual spanking traditions
Asia
On the first day of the lunar Chinese New Year holidays, a week-long 'Spring Festival', the most important festival for Chinese people all over the world, thousands of Chinese visit the Taoist Dong Lung Gong temple in Tungkang to go through the century-old ritual to get rid of bad luck. Men traditionally receive spankings and women get whipped, with the number of strokes to be administered (always lightly) by the temple staff being decided in either case by the god Wang Ye and by burning incense and tossing two pieces of wood, after which all go home happily, believing their luck will improve.
Europe
On Easter Monday, there is a Slavic tradition of spanking girls and young ladies with woven willow switches (Czech: pomlázka; Slovak: korbáč) and dousing them with water.
In Slovenia, there is a jocular tradition that anyone who succeeds in climbing to the top of Mount Triglav receives a spanking or birching.
In Poland there is a tradition named Pasowanie, which is celebrated on the 18th birthday. The birthday person receives eighteen smacks with the belt from the guests at the birthday party.
North America
Birthday spanking is a tradition within some parts of the United States. In this tradition, an individual (commonly, though not exclusively, a child) receives on their birthday a number of spanks, typically corresponding to their age. Characteristically, these spankings are playful and are administered in such a fashion that the recipient experiences no or only minor discomfort.
See also
UN Convention on the Rights of the Child
Corporal punishment
Erotic spanking
Caning in Singapore
Easter whip
References
Notes
External links
American Academy of Pediatrics What's The Best Way to Discipline My Child?
The California Evidence-Based Clearinghouse for Child Welfare
Healthy Steps
Help me Grow
Triple P – Positive Parenting Program (archived 30 March 2017)
Corporal punishments
Traditions
Pain infliction methods
Youth rights
Children's rights
Parenting
Harassment and bullying | Spanking | Biology | 2,008 |
546,158 | https://en.wikipedia.org/wiki/Emetophobia | Emetophobia is a phobia that causes overwhelming, intense anxiety pertaining to vomit. This specific phobia can also include subcategories of what causes the anxiety, including a fear of vomiting or being vomited on or seeing others vomit. Emetophobes might also avoid the mentions of "barfing", vomiting, "throwing up", or "puking."
It is common for those who suffer from emetophobia to be underweight or malnourished due to strict diets and restrictions they make for themselves. The thought of someone possibly vomiting can cause the phobic person to engage in extreme behaviors to escape from their anxiety triggers, e.g. going to great lengths to avoid situations that could be perceived as "threatening".
Emetophobia is clinically considered an "elusive predicament" because limited research has been done pertaining to it. The fear of vomiting receives little attention compared to other fears.
Etymology
The root word for emetophobia is emesis, from the Greek word , which means "an act or instance of vomiting", with -phobia meaning "an exaggerated usually inexplicable fear of a particular object, class of objects, or situation."
Overview
The event of vomiting may make anyone with this particular phobia flee the scene. Some may fear other people throwing up, while others may fear themselves throwing up. Some may fear both. Some may have anxiety that makes them feel as if they will throw up when they actually might not. Other possible fears that may come with emetophobia include not being able to locate a restroom in a timely manner, not being able to stop throwing up, choking on vomit, being embarrassed due to the situation, or having to seek medical attention. People with emetophobia usually experience anxiety; they may scream, cry, or, if it is severe, pass out when someone or something has vomited.
Causes
People with emetophobia frequently report a vomit-related traumatic event, such as a long bout of stomach flu, accidentally vomiting in public or having to witness someone else vomit, as the start of the emetophobia. They may also be afraid of hearing that someone is feeling like vomiting or that someone has vomited or the mention of any word relating to the act of vomiting, usually in conjunction with the fears of seeing someone vomit or seeing vomit.
Presentation
Complications
Emetophobics may also have other complicating disorders and phobias, such as social anxiety, fear of flying and agoraphobia. These three are very common, because people who fear vomiting are often terrified of doing so or encountering it in a public place. Therefore, they may restrict their social activities so they avoid any situations with alcohol or dining out in restaurants. Emetophobics may also limit exposure to children for fear of germs. People who have a fear of vomiting may avoid travel because of the worry about motion sickness or others experiencing it around them. They may also fear roller coasters for the same reason.
Lipsitz et al.'s findings also showed that those with emetophobia often have difficulties comfortably leading a normal life. Many find that they have problems being alone with young children, and they may also avoid social gatherings where alcohol is present. Retaining an occupation becomes difficult for emetophobics. Emetophobia can also affect a person's social life. The phobia can cause people to miss out on everyday events or requirements. It is common for children to miss school, teens/adults to miss work, and for people to go great measures of not socializing with others. Professions and personal goals can be put on hold due to the high anxiety associated with the phobia, and travelling becomes almost impossible for some.
In Lipsitz et al.'s survey, women with emetophobia said that they either delayed pregnancy or avoided pregnancy altogether because of the morning sickness associated with the first trimester, and if they did become pregnant, it made pregnancy difficult.
Other inhibitions on daily life can be seen in meal preparation. Many emetophobic people also have specific "rituals" for the food they eat and how they prepare it. They frequently check the freshness of the food along with washing it several times in order to prevent any potential sicknesses that they could contract from foods not handled properly. They might overcook food products in fear of getting a foodborne illness. Eating out may also be avoided, and when asked in Lipsitz et al.'s survey, many felt they were underweight because of the strict diets that they put upon themselves. In addition, many emetophobes avoid certain foods altogether due to negative memories relating to vomiting, and often eat a limited number of foods because they feel that the vast majority of foods are not 'safe'. Those who suffer from emetophobia might avoid anything that has an unpleasant smell or aroma, in fear of vomiting. This includes eating anything that might have a bad smell. They might also avoid any sight that may induce vomiting in them or other people.
Emetophobia and anorexia
There are some cases where anorexia is the result of a fear of vomiting instead of the typical psychological problems that trigger it. In Frank M. Datillio's clinical case study, a situation where anorexia results from emetophobia is mentioned. Datillio says, "...in one particular case report, atypical anorexia in several adolescent females occurred as a result of a fear of vomiting that followed a viral illness as opposed to the specific desire to lose weight or because of an anxiety reaction.". It is not clear that this should be termed "anorexia", however. In cases such as this, many emetophobes may also have avoidant/restrictive food intake disorder (ARFID), which is characterized by a general disinterest in food, sensory issues with food (taste, texture, look, smell) or a fear of adverse consequences from eating (vomiting or choking).
Oftentimes, this phobia is comorbid with several others, making it necessary to deal with each phobia individually in order for the patient to recover fully. For example, it is common for people with emetophobia to also have a fear of food, known as cibophobia, where they worry that the food they are eating is carrying pathogens that can cause vomiting. As such, people will develop specific behaviors that will, in their minds, make the food safe to eat, such as a ritualistic type of washing or the intentional overcooking of meat to avoid the intake of harmful pathogens. In time, these fears can become so ingrained that the person who has them can begin to experience anorexia nervosa. Again, it is not clear that this should be deemed "anorexia" rather than, for instance OCD, given this different presentation.
Emetophobia and obsessive–compulsive disorder
There are many cases of emetophobes that also suffer from obsessive–compulsive disorder (OCD). Both emetophobia and OCD have similar symptoms and behaviors according to Allen H. Weg, EdD. This includes: "obsessional thinking, hyper-awareness and reactivity, avoidance, compulsive rituals, and safety behaviors". Emetophobia is often misdiagnosed as OCD.
Causes and signs
There is a strong agreement in the scientific community that there is no specific cause of emetophobia. Some emetophobics report a traumatic experience with vomiting, always in childhood. Some experts believe that emetophobia may be linked to worries about lack of control. Many people try to control themselves and their environment in every possible way, but vomiting is difficult or impossible to control which can lead to anxiety or in other cases severe anxiety.
There are many factors that can cause a legitimate case of emetophobia. Dr. Angela L. Davidson et al. conducted an experiment where it was concluded through various surveys that people with emetophobia are more likely to have an internal locus of control pertaining to their everyday life as well as health-related matters. A locus of control is an individual's perception of where control comes from. Having an internal locus of control means that an individual perceives that they have their own control over a situation, whereas an external locus of control means that an individual perceives that some things are out of their control. She explains how this phobia is created through the locus of control by stating, "Thus far, it seems reasonable to stipulate that individuals with a vomiting phobia deem events as being within their control and may therefore find it difficult to relinquish this control during the act of vomiting, thus inducing a phobia."
In an internet survey conducted by Dr. Joshua D. Lipsitz et al. given to emetophobic people, respondents gave many different reasons as to why they became emetophobic. Among some of the causes listed were several severe bouts of vomiting as children and being firsthand witnesses to many severe vomiting in others due to illness, pregnancy or alcoholism.
Some possible signs may include not consuming certain foods or alcohol, not being able to watch vomit scenes during movies or shows, avoiding people that are not feeling well, regularly washing hands, steering clear from traveling and crowds, making sure bathrooms are near, consistently checking signs of illness, avoiding certain smells, or pitching food before the expiration date.
Treatments
Assessment
There are two assessment tools used to diagnose emetophobia: the Specific Phobia of Vomiting inventory and the Emetophobia Questionnaire. They are self-report questionnaires that focus on a different range of symptoms.
There have been a limited number of studies regarding emetophobia. Victims of the phobia usually experience fear before vomiting but feel less fear afterwards. The fear comes back, however, if the victim fears they will throw up again.
Medications
Also noted in the emetophobia internet survey was information about medications. People were asked whether they would consider taking anxiety medication to potentially help their fear, and many in the study answered they wouldn't for fear that the drugs would make them nauseated. Others, however, stated that some psychotropic medications (such as benzodiazepines and antidepressants) did help with their phobia, and some said gastrointestinal medications were also beneficial.
Cognitive behavioral therapy
Cognitive behavioral therapy (CBT) is a psychological treatment that can be used to help calm anxiety. It is most commonly used to treat certain behaviors by changing people's actions and thoughts by using a variety of different techniques to figure out why the fear is occurring. Speaking to a therapist can also be beneficial and develop possible coping mechanisms.
Exposure treatments
Exposure methods, using video-taped exposure to others vomiting, hypnosis, exposure to nausea and exposure to cues of vomiting, systemic behavior therapy, psychodynamic and psychotherapy have also shown positive effects for the treatment of emetophobia. However, in some cases it may cause re-traumatization, and the phobia may become more intense as a result.
Notable people with emetophobia
Ashley Benson
Jamie Borthwick
Charlie Brooker
Denise Richards
Christina Pazsitzky
Bella Ramsey
Raina Telgemeier
Matt Watson
Tuppence Middleton
See also
Bulimia
Emetophilia
List of phobias
Mysophobia
Nosocomephobia
Nosophobia
Pharmacophobia
Tokophobia
References
Vomiting
Mental disorders
Body-related phobias | Emetophobia | Biology | 2,371 |
841,077 | https://en.wikipedia.org/wiki/Metoprolol | Metoprolol, sold under the brand name Lopressor among others, is a medication used to treat angina and a number of conditions involving an abnormally fast heart rate. It is also used to prevent further heart problems after myocardial infarction and to prevent headaches in those with migraines. It is a selective β1 receptor blocker medication. It is taken by mouth or is given intravenously.
Common side effects include trouble sleeping, feeling tired, feeling faint, and abdominal discomfort. Large doses may cause serious toxicity. Risk in pregnancy has not been ruled out. It appears to be safe in breastfeeding. The metabolism of metoprolol can vary widely among patients, often as a result of hepatic impairment or CYP2D6 polymorphism.
Metoprolol was first made in 1969, patented in 1970, and approved for medical use in 1978. It is on the World Health Organization's List of Essential Medicines. It is available as a generic medication. In 2022, it was the sixth most commonly prescribed medication in the United States, with more than 65 million prescriptions.
Medical uses
Metoprolol is used for a number of conditions, including angina, acute myocardial infarction, supraventricular tachycardia, ventricular tachycardia, congestive heart failure, and prevention of migraine headaches. It is an adjunct in the treatment of hyperthyroidism. Both oral and intravenous forms of metoprolol are available for administration. The different salt versions of metoprolol – metoprolol tartrate and metoprolol succinate – are approved for different conditions and are not interchangeable.
Off-label uses include supraventricular tachycardia and thyroid storm.
Adverse effects
Adverse effects, especially with higher doses, include dizziness, drowsiness, fatigue, diarrhea, unusual dreams, trouble sleeping, depression, and vision problems such as blurred vision or dry eyes. β-blockers, including metoprolol, reduce salivary flow via inhibition of the direct sympathetic innervation of the salivary glands. Metoprolol may also cause the hands and feet to feel cold. Due to the high penetration across the blood–brain barrier, lipophilic beta blockers such as propranolol and metoprolol are more likely than other less lipophilic beta blockers to cause sleep disturbances such as insomnia, vivid dreams and nightmares. Patients should be cautious while driving or operating machinery due to its potential to cause decreased alertness.
There may also be an impact on blood sugar levels, and it can potentially mask signs of low blood sugar.
The safety of metoprolol during pregnancy is not fully established.
Precautions
Metoprolol reduces long-term mortality and hospitalisation due to worsening heart failure. A meta-analysis further supports reduced incidence of heart failure worsening in patients treated with beta-blockers compared to placebo. However, in some circumstances, particularly when initiating metoprolol in patients with more symptomatic disease, an increased prevalence of hospitalisation and mortality has been reported within the first two months of starting. Patients should monitor for swelling of extremities, fatigue, and shortness of breath.
A Cochrane Review concluded that although metoprolol reduces the risk of atrial fibrillation recurrence, it is unclear whether the long-term benefits outweigh the risks.
This medicine may cause changes in blood sugar levels or cover up signs of low blood sugar, such as a rapid pulse rate. It also may cause some people to become less alert than they are normally, making it dangerous for them to drive or use machines.
Pregnancy and breastfeeding
Risk for the fetus has not been ruled out, per being rated pregnancy category C in Australia, meaning that it may be suspected of causing harmful effects on the human fetus (but no malformations). It appears to be safe in breastfeeding.
Overdose
Excessive doses of metoprolol can cause bradycardia, hypotension, metabolic acidosis, seizures, and cardiorespiratory arrest. Blood or plasma concentrations may be measured to confirm a diagnosis of overdose or poisoning in hospitalized patients or to assist in a medicolegal death investigation. Plasma levels are usually less than 200 μg/L during therapeutic administration, but can range from 1–20 mg/L in overdose victims.
Pharmacology
Mechanism of action
Metoprolol is a beta blocker, or an antagonist of the β-adrenergic receptors. It is specifically a selective antagonist of the β1-adrenergic receptor and has no intrinsic sympathomimetic activity.
Metoprolol exerts its effects by blocking the action of certain neurotransmitters, specifically adrenaline and noradrenaline. It does this by selectively binding to and antagonizing β-1 adrenergic receptors in the body. When adrenaline (epinephrine) or noradrenaline (norepinephrine) are released from nerve endings or secreted by the adrenal glands, they bind to β-1 adrenergic receptors found primarily in cardiac tissues such as the heart. This binding activates these receptors, leading to various physiological responses, including an increase in heart rate, force of contraction (inotropic effect), conduction speed through electrical pathways in the heart, and release of renin from the kidneys. Metoprolol competes with adrenaline and noradrenaline for binding sites on these β-1 receptors. By occupying these receptor sites without activating them, metoprolol blocks or inhibits their activation by endogenous catecholamines like adrenaline or noradrenaline.
Metoprolol blocks β1-adrenergic receptors in heart muscle cells, thereby decreasing the slope of phase 4 in the nodal action potential (reducing Na+ uptake) and prolonging repolarization of phase 3 (slowing down K+ release). It also suppresses the norepinephrine-induced increase in the sarcoplasmic reticulum (SR) Ca2+ leak and the spontaneous SR Ca2+ release, which are the major triggers for atrial fibrillation.
Through this mechanism of selective blockade at beta-(β)-1 receptors, metoprolol exerts the following effects:
Heart rate reduction, i.e., decrease of the resting heart rate (negative chronotropic effect) and reduction of excessive elevations resulting from exercise or stress.
Reduction of the force of contraction, i.e., decrease in contractility (negative inotropic effect), which lessens how hard each heartbeat contracts.
Decrease in cardiac output, i.e., a reduction in both heart rate and contractility in myocardial cells, where β-1 receptors are predominantly located; this lowers the overall blood output per minute (cardiac output), decreasing the demands placed on an impaired heart and reducing the oxygen demand-supply mismatch.
Lowering of blood pressure.
Antiarrhythmic effects, such as prevention of supraventricular tachycardia, achieved in part by slowing the propagation of electrical impulses through the heart's conduction system.
Pharmacokinetics
Metoprolol is mostly absorbed from the intestine with an absorption fraction of 0.95. The systemic bioavailability after oral administration is approximately 50%. Less than 5% of an orally administered dose of metoprolol is excreted unchanged in urine; most of it is eliminated in metabolized form through feces via bile secretion into the intestines.
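The figures above imply a substantial first-pass effect. The following sketch (not from the source) estimates the hepatic extraction ratio implied by an absorbed fraction of 0.95 and an oral bioavailability of about 50%, under the simplifying assumptions that F_oral = f_abs × (1 − E_H) and that gut-wall metabolism is negligible.

```python
# A minimal sketch (not from the source): the hepatic first-pass extraction
# implied by the figures above, assuming F_oral = f_abs * (1 - E_H) and
# negligible gut-wall metabolism (both simplifying assumptions).

f_abs = 0.95    # fraction absorbed from the intestine
F_oral = 0.50   # systemic bioavailability after oral administration

E_H = 1 - F_oral / f_abs    # implied hepatic extraction ratio
print(f"Implied hepatic extraction ratio: {E_H:.2f}")  # about 0.47
```

On these assumptions, roughly half of an absorbed dose is removed by the liver before reaching the systemic circulation, consistent with the extensive hepatic metabolism described below.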
Metoprolol undergoes extensive metabolism in the liver, mainly α-hydroxylation and O-demethylation through various cytochrome P450 enzymes such as CYP2D6 (primary), CYP3A4, CYP2B6, and CYP2C9. The primary metabolites formed are α-hydroxymetoprolol and O-demethylmetoprolol.
Metoprolol is classified as a moderately lipophilic beta blocker. More lipophilic beta blockers tend to cross the blood–brain barrier more readily, with greater potential for effects in the central nervous system as well as associated neuropsychiatric side effects. Metoprolol binds mainly to human serum albumin with an unbound fraction of 0.88. It has a large volume of distribution at steady state (3.2 L/kg), indicating extensive distribution throughout the body.
Chemistry
Metoprolol was synthesized and its activity discovered in 1969. The specific agent in on-market formulations of metoprolol is either metoprolol tartrate or metoprolol succinate, where tartrate is an immediate-release formulation and the succinate is an extended-release formulation (with 100 mg metoprolol tartrate corresponding to 95 mg metoprolol succinate).
Stereochemistry
Metoprolol contains a stereocenter and consists of two enantiomers. It is used as a racemate, i.e. a 1:1 mixture of the (R)- and (S)-forms.
Society and culture
Legal status
Metoprolol was approved for medical use in the United States in August 1978.
Economics
In the 2000s, a lawsuit was brought against the manufacturers of Toprol XL (a time-release formula version of metoprolol) and its generic equivalent (metoprolol succinate) claiming that to increase profits, lower cost generic versions of Toprol XL were intentionally kept off the market. It alleged that the pharmaceutical companies AstraZeneca AB, AstraZeneca LP, AstraZeneca Pharmaceuticals LP, and Aktiebolaget Hassle violated antitrust and consumer protection law. In 2012, without admitting the claims, the companies agreed to a settlement pay-out of US$11 million.
Sport
Because beta blockers can be used to reduce heart rate and minimize tremors, which can enhance performance in sports such as archery, metoprolol is banned by the World Anti-Doping Agency in some sports.
References
Further reading
Drugs developed by AstraZeneca
Beta blockers
Chemical substances for emergency medicine
CYP2D6 inhibitors
Isopropylamino compounds
N-isopropyl-phenoxypropanolamines
Drugs developed by Novartis
World Health Organization essential medicines
Wikipedia medicine articles ready to translate | Metoprolol | Chemistry | 2,194 |
3,050,160 | https://en.wikipedia.org/wiki/Static%20universe | In cosmology, a static universe (also referred to as stationary, infinite, static infinite or static eternal) is a cosmological model in which the universe is both spatially and temporally infinite, and space is neither expanding nor contracting. Such a universe does not have so-called spatial curvature; that is to say that it is 'flat' or Euclidean. A static infinite universe was first proposed by English astronomer Thomas Digges (1546–1595).
In contrast to this model, Albert Einstein proposed a temporally infinite but spatially finite model (a static eternal universe) as his preferred cosmology in 1917, in his paper Cosmological Considerations in the General Theory of Relativity.
After the discovery of the redshift-distance relationship (deduced from the inverse correlation of galactic brightness to redshift) by American astronomers Vesto Slipher and Edwin Hubble, the Belgian astrophysicist and priest Georges Lemaître interpreted the redshift as evidence of universal expansion and thus a Big Bang, whereas Swiss astronomer Fritz Zwicky proposed that the redshift was caused by the photons losing energy as they passed through the matter and/or forces in intergalactic space. Zwicky's proposal would come to be termed 'tired light', a term invented by the major Big Bang proponent Richard Tolman.
The Einstein universe
In 1917, Albert Einstein added a positive cosmological constant to his equations of general relativity to counteract the attractive effects of gravity on ordinary matter, which would otherwise cause a static, spatially finite universe to either collapse or expand forever.
This model of the universe became known as the Einstein World or Einstein's static universe.
This motivation ended after the astrophysicist and Roman Catholic priest Georges Lemaître proposed that the universe is not static but expanding. Edwin Hubble had analysed data from the observations made by astronomer Vesto Slipher to confirm a relationship between redshift and distance, which forms the basis for the modern expansion paradigm introduced by Lemaître. According to George Gamow, this caused Einstein to declare this cosmological model, and especially the introduction of the cosmological constant, his "biggest blunder".
Einstein's static universe is closed (i.e. has hyperspherical topology and positive spatial curvature), and contains uniform dust and a positive cosmological constant with value precisely Λ_E = 4πGρ/c⁴, where G is the Newtonian gravitational constant, ρ is the energy density of the matter in the universe and c is the speed of light. The radius of curvature of space of the Einstein universe is equal to R_E = 1/√Λ_E = c²/√(4πGρ).
The Einstein universe is one of Friedmann's solutions to Einstein's field equation for dust with density ρ, cosmological constant Λ_E, and radius of curvature R_E. It is the only non-trivial static solution to Friedmann's equations.
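As a rough numerical illustration (not from the source), the sketch below evaluates Λ_E = 4πGρ/c⁴ and R_E = 1/√Λ_E for a hypothetical matter density; the density value is an assumption chosen only for illustration, not a figure from the article.

```python
import math

G = 6.674e-11   # Newtonian gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s

# Hypothetical matter energy density, chosen only for illustration:
# a mass density of about 1e-26 kg/m^3 converted to an energy density.
rho = 1e-26 * c**2          # J/m^3

Lambda_E = 4 * math.pi * G * rho / c**4   # cosmological constant, m^-2
R_E = 1 / math.sqrt(Lambda_E)             # radius of curvature, m

print(f"Lambda_E = {Lambda_E:.2e} m^-2")
print(f"R_E = {R_E:.2e} m (about {R_E / 9.46e15:.1e} light-years)")
```

For a density of this order, the curvature radius comes out at the scale of tens of billions of light-years.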
Because the Einstein universe was soon recognized to be inherently unstable, it was abandoned as a viable model for the universe. It is unstable in the sense that any slight change in either the value of the cosmological constant, the matter density, or the spatial curvature will result in a universe that either expands and accelerates forever or re-collapses to a singularity.
After Einstein renounced his cosmological constant and embraced the Friedmann–Lemaître model of an expanding universe, most physicists of the twentieth century assumed that the cosmological constant is zero. If so (absent some other form of dark energy), the expansion of the universe would be decelerating. However, after Saul Perlmutter, Brian P. Schmidt, and Adam G. Riess presented evidence for an accelerating universe in 1998, a positive cosmological constant has been revived as a simple explanation for dark energy.
In 1976 Irving Segal revived the static universe in his chronometric cosmology. Similarly to Zwicky, he ascribed the redshift of distant galaxies to curvature in the cosmos. Though he claimed vindication in astronomical data, others found the results to be inconclusive.
Requirements of a static infinite model
In order for a static infinite universe model to be viable, it must explain three things:
First, it must explain the intergalactic redshift. Second, it must explain the cosmic microwave background radiation. Third, it must have a mechanism to re-create matter (particularly hydrogen atoms) from radiation or other sources in order to avoid a gradual 'running down' of the universe due to the conversion of matter into energy in stellar processes. In the absence of such a mechanism, the universe would consist of dead objects such as black holes and black dwarfs.
See also
Milne model
Steady State theory
Plasma cosmology
References
In George Gamow's autobiography, My World Line (1970), he says of Einstein: "Much later, when I was discussing cosmological problems with Einstein, he remarked that the introduction of the cosmological term was the biggest blunder of his life."
Physical cosmology
Exact solutions in general relativity
Universe
Obsolete theories in physics | Static universe | Physics,Astronomy,Mathematics | 1,029 |
52,179,886 | https://en.wikipedia.org/wiki/Balasubramanian%20Gopal | Balasubramanian Gopal (born 1970) is an Indian structural biologist, molecular biophysicist and a professor at the Molecular Biophysics Unit of the Indian Institute of Science. He is known for his studies on cell wall synthesis in Staphylococcus aureus and is an elected fellow of the National Academy of Sciences, India, Indian National Science Academy and the Indian Academy of Sciences. He received the National Bioscience Award for Career Development of the Department of Biotechnology in 2010. The Council of Scientific and Industrial Research, the apex agency of the Government of India for scientific research, awarded him the Shanti Swarup Bhatnagar Prize for Science and Technology, one of the highest Indian science awards, in 2015, for his contributions to biological sciences.
Biography
Balasubramanian Gopal, born on 31 August 1970, completed his master's degree at the Indian Institute of Technology, Kanpur and started his career as a biochemist by joining Torrent Pharmaceuticals at their Ahmedabad station. Later, he took a break from his job and joined the Indian Institute of Science (IISc), where he secured a PhD. Moving to the UK, he did his post-doctoral studies in crystallography at the National Institute for Medical Research. Returning to India, he joined the Molecular Biophysics Unit of IISc as a member of Lab 301, where he and his colleagues are engaged in research on the structural and mechanistic aspects of membrane-associated proteins involved in inter-cell communication, transcriptional regulation and antimicrobial resistance.
Gopal has done considerable research in molecular biophysics and has contributed to widening the understanding of cell wall synthesis in Staphylococcus aureus, a common gram-positive bacterium found in the human respiratory tract and on the skin. He has published his research findings as articles in peer-reviewed journals; Google Scholar, an online article repository, lists 77 of them. He has delivered keynote addresses at several seminars, including the Graduate Students' Meet 2007 and the 2nd International Conference on Structural and Functional Genomics organised by Sastra university, Tanjore, in August 2016.
Awards and honors
The Department of Biotechnology awarded him the National Bioscience Award for Career Development in 2010. The Indian Academy of Sciences elected him as its fellow in 2013, the same year he was elected as a fellow by the National Academy of Sciences, India. He was awarded the Shanti Swarup Bhatnagar Prize for Science and Technology, one of the highest Indian science awards, by the Council of Scientific and Industrial Research in 2015. He was elected a fellow of the Indian National Science Academy in 2016.
Selected bibliography
See also
Staphylococcus aureus
Notes
References
External links
Recipients of the Shanti Swarup Bhatnagar Award in Biological Science
1970 births
Living people
Indian molecular biologists
Structural biologists
Molecular biophysics
IIT Kanpur alumni
Indian Institute of Science alumni
Academic staff of the Indian Institute of Science
Fellows of the Indian Academy of Sciences
Fellows of the Indian National Science Academy
Scientists from Bengaluru
Fellows of the National Academy of Sciences, India
N-BIOS Prize recipients
20th-century Indian biologists | Balasubramanian Gopal | Chemistry | 634 |
27,105,107 | https://en.wikipedia.org/wiki/Ornithonyssus%20sylviarum | Ornithonyssus sylviarum (also known as the northern fowl mite) is a haematophagous ectoparasite of poultry. In both size and appearance, it resembles the red mite, Dermanyssus gallinae. It primarily infests egg-laying chickens. The mites feed on their hosts' blood, causing economic damage by lowering egg production and feed conversion efficiency. Heavy infestations can cause anemia or death in the birds. While they mainly target wild birds, they can also become permanent ectoparasites of domestic poultry. The main nesting sites are generally in close proximity to poultry coops.
This blood-feeding parasite is broadly distributed, and has been reported on 72 host species of North American birds in 26 families. The mites have been a major pest of the poultry industry since the early 1900s. They spread mainly because of their ability to hide in cracks or in wild birds' nests; in relation to humans, they can also hide in equipment and on rodents. They are widespread in places such as North America, Brazil, Australia, and China.
See also
Acariasis
Gamasoidosis
List of mites associated with cutaneous reactions
References
Mesostigmata
Animals described in 1877
Agricultural pest mites
Poultry diseases
Veterinary entomology
Parasites of birds
Parasites of humans
Ectoparasites
Parasitic acari | Ornithonyssus sylviarum | Biology | 297 |
4,012,069 | https://en.wikipedia.org/wiki/Pseudomonas%20virus%20phi6 | Φ6 (Phi 6) is the best-studied bacteriophage of the virus family Cystoviridae. It infects Pseudomonas bacteria (typically plant-pathogenic P. syringae). It has a three-part, segmented, double-stranded RNA genome, totalling ~13.5 kb in length. Φ6 and its relatives have a lipid membrane around their nucleocapsid, a rare trait among bacteriophages. It is a lytic phage, though under certain circumstances has been observed to display a delay in lysis which may be described as a "carrier state".
Proteins
The genome of Φ6 codes for 12 proteins. P1 is a major capsid protein which is responsible for forming the skeleton of the polymerase complex. In the interior of the shell formed by P1 is the P2 viral replicase and transcriptase protein. The spikes on the Φ6 virion that bind to receptors are formed by the protein P3. P4 is a nucleoside-triphosphatase which is required for genome packaging and transcription. P5 is a lytic enzyme. The spike protein P3 is anchored to the fusogenic envelope protein P6. P7 is a minor capsid protein, P8 is responsible for forming the nucleocapsid surface shell and P9 is a major envelope protein. P12 is a non-structural morphogenic protein shown to be a part of the envelope assembly. P10 and P13 are proteins associated with the viral envelope, and P14 is a non-structural protein.
Life cycle
Φ6 typically attaches to the Type IV pilus of P. syringae with its attachment protein, P3. It is thought that the cell then retracts its pilus, pulling the phage toward the bacterium. Fusion of the viral envelope with the bacterial outer membrane is facilitated by the phage protein, P6. The muralytic (peptidoglycan-digesting) enzyme, P5, then digests a portion of the cell wall, and the nucleocapsid enters the cell coated with the bacterial outer membrane.
A copy of the sense strand of the large genome segment (6374 bases) is then synthesized (transcription) on the vertices of the capsid, with the RNA-dependent RNA polymerase, P2, and released into the host cell cytosol. The four proteins translated from the large segment spontaneously assemble into procapsids, which then package a large segment sense strand, polymerizing its complement during entry through the P2 polymerase-containing vertices.
While the large segment is being translated (expressed) and synthesized (replicated), the parental phage releases copies of the sense strands of the medium segment (4061 bases) and small segment (2948 bases) into the cytosol. They are translated, and packaged into the procapsids in order: medium then small. The filled capsids are then coated with the nucleocapsid protein P8, and then outer membrane proteins somehow attract the bacterial inner membrane, which then envelops the nucleocapsid.
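As a quick consistency check (not from the source), the three segment lengths quoted above can be summed and compared with the total genome size of roughly 13.5 kb given earlier.

```python
# Segment lengths (in bases) as quoted in this article, in packaging order L, M, S.
segments = {"L": 6374, "M": 4061, "S": 2948}

total = sum(segments.values())
print(f"Total genome length: {total} bases (~{total / 1000:.1f} kb)")  # ~13.4 kb
```

The sum, about 13.4 kb, is consistent with the approximate total quoted in the lead.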
The lytic protein, P5, is contained between the P8 nucleocapsid shell and the viral envelope. The completed phage progeny remain in the cytosol until sufficient levels of the lytic protein P5 degrade the host cell wall. The cytosol then bursts forth, disrupting the outer membrane, releasing the phage. The bacterium is killed by this lysis.
RNA-dependent RNA polymerase
RNA-dependent RNA polymerases (RdRPs) are critical components in the life cycle of double-stranded RNA (dsRNA) viruses. However, it is not fully understood how these important enzymes function during viral replication. Expression and characterization of the purified recombinant RdRP of Φ6 is the first direct demonstration of RdRP activity catalyzed by a single protein from a dsRNA virus. The recombinant Φ6 RdRP is highly active in vitro, possesses RNA replication and transcription activities, and is capable of using both homologous and heterologous RNA molecules as templates. The crystal structure of the Φ6 polymerase, solved in complex with a number of ligands, provides insights towards understanding the mechanism of primer-independent initiation of RNA-dependent RNA polymerization. This RNA polymerase appears to operate without a sigma factor/subunit. The purified Φ6 RdRP displays processive elongation in vitro and self-assembles along with polymerase complex proteins into subviral particles that are fully functional.
Research
Φ6 has been studied as a model to understand how segmented RNA viruses package their genomes, its structure has been studied by scientists interested in lipid-containing bacteriophages, and it has been used as a model organism to test evolutionary theory such as Muller's ratchet. Phage Φ6 has been used extensively in additional phage experimental evolution studies.
See also
Double-stranded RNA viruses
References
External links
Detailed molecular description
Descriptions of tests of evolutionary theory by the Turner Lab
Descriptions of tests of evolutionary theory by the Burch Lab
The Universal Virus Database of the International Committee on the Taxonomy of Viruses
The origin of phospholipids of the enveloped bacteriophage phi6
Cystoviridae
Model organisms
Bacteriophages | Pseudomonas virus phi6 | Biology | 1,112 |
5,439,408 | https://en.wikipedia.org/wiki/Tungsten%28V%29%20chloride | Tungsten(V) chloride is an inorganic compound with the formula W2Cl10. This compound is analogous in many ways to the more familiar molybdenum pentachloride.
Synthesis
The material is prepared by reduction of tungsten hexachloride. One method involves the use of tetrachloroethylene as the reductant:
2 WCl6 + C2Cl4 → W2Cl10 + C2Cl6
The blue-green solid is volatile under vacuum and slightly soluble in nonpolar solvents. The compound is oxophilic and is highly reactive toward Lewis bases.
When the same reduction is conducted in the presence of tetraphenylarsonium chloride, one obtains instead the hexachlorotungstate(V) salt:
2 WCl6 + C2Cl4 + 2 [As(C6H5)4]Cl → 2 [As(C6H5)4][WCl6] + C2Cl6
Structure
The compound exists as a dimer, with a pair of octahedral tungsten(V) centres bridged by two chloride ligands. The W---W separation is 3.814 Å, which is non-bonding. The compound is isostructural with Nb2Cl10 and Mo2Cl10. The compound evaporates to give trigonal bipyramidal WCl5 monomers.
References
Chlorides
Tungsten halides
Tungsten(V) compounds | Tungsten(V) chloride | Chemistry | 267 |
59,450 | https://en.wikipedia.org/wiki/Strontianite | Strontianite (SrCO3) is an important raw material for the extraction of strontium. It is a rare carbonate mineral and one of only a few strontium minerals. It is a member of the aragonite group.
Aragonite group members: aragonite (CaCO3), witherite (BaCO3), strontianite (SrCO3), cerussite (PbCO3)
The ideal formula of strontianite is SrCO3, with molar mass 147.63 g/mol, but calcium (Ca) can substitute for up to 27% of the strontium (Sr) cations, and barium (Ba) up to 3.3%.
The mineral was named in 1791 for the locality, Strontian, Argyllshire, Scotland, where the element strontium had been discovered the previous year. Although good mineral specimens of strontianite are rare, strontium is a fairly common element, with abundance in the Earth's crust of 370 parts per million by weight, 87 parts per million by moles, much more common than copper with only 60 parts per million by weight, 19 by moles.
Strontium is never found free in nature. The principal strontium ores are celestine SrSO4 and strontianite SrCO3. The main commercial process for strontium metal production is reduction of strontium oxide with aluminium.
Unit cell
Strontianite is an orthorhombic mineral, belonging to the most symmetrical class in this system, 2/m 2/m 2/m, whose general form is a rhombic dipyramid. The space group is Pmcn. There are four formula units per unit cell (Z = 4) and the unit cell parameters are a = 5.1 Å, b = 8.4 Å, c = 6.0 Å.
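A small consistency check (not from the source): the X-ray density implied by these cell parameters and Z = 4 can be compared with the measured specific gravity of 3.74–3.78 quoted later in the article.

```python
# X-ray density of strontianite implied by the unit-cell data quoted above.
N_A = 6.022e23            # Avogadro's number, mol^-1
M = 147.63                # molar mass of SrCO3, g/mol
Z = 4                     # formula units per unit cell
a, b, c = 5.1, 8.4, 6.0   # unit-cell edges, angstroms

V_cm3 = a * b * c * 1e-24           # cell volume in cm^3
density = Z * M / (N_A * V_cm3)     # g/cm^3
print(f"Calculated density: {density:.2f} g/cm^3")  # ~3.8 g/cm^3
```

The result, about 3.8 g/cm³, agrees with the measured values.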
Structure
Strontianite is isostructural with aragonite. When the CO3 group is combined with large divalent cations with ionic radii greater than 1.0 Å, the radius ratios generally do not permit stable 6-fold coordination. For small cations the structure is rhombohedral, but for large cations it is orthorhombic. This is the aragonite structure type with space group Pmcn. In this structure the CO3 groups lie perpendicular to the c axis, in two structural planes, with the CO3 triangular groups of one plane pointing in opposite directions to those of the other. These layers are separated by layers of cations.
The CO3 group is slightly non-planar; the carbon atom lies 0.007 Å out of the plane of the oxygen atoms. The groups are tilted such that the angle between a plane drawn through the oxygen atoms and a plane parallel to the a-b unit cell plane is 2°40’.
Crystal form
Strontianite occurs in several different habits. Crystals are short prismatic parallel to the c axis and often acicular. Calcium-rich varieties often show steep pyramidal forms. Crystals may be pseudo hexagonal due to equal development of different forms. Prism faces are striated horizontally. The mineral also occurs as columnar to fibrous, granular or rounded masses.
Optical properties
Strontianite is colourless, white, gray, light yellow, green or brown, colourless in transmitted light. It may be longitudinally zoned. It is transparent to translucent, with a vitreous (glassy) lustre, resinous on broken surfaces, and a white streak.
It is a biaxial(−) mineral. The direction perpendicular to the plane containing the two optic axes is called the optical direction Y. In strontianite Y is parallel to the b crystal axis. The optical direction Z lies in the plane containing the two optic axes and bisects the acute angle between them. In strontianite Z is parallel to the a crystal axis. The third direction X, perpendicular both to Y and to Z, is parallel to the c crystal axis. The refractive indices are close to nα = 1.52, nβ = 1.66, nγ = 1.67, with different sources quoting slightly different values:
nα = 1.520, nβ = 1.667, nγ = 1.669
nα = 1.516 – 1.520, nβ = 1.664 – 1.667, nγ = 1.666 – 1.668
nα = 1.517, nβ = 1.663, nγ = 1.667 (synthetic material)
The maximum birefringence δ is 0.15 and the measured value of 2V is 7°, calculated 12° to 8°.
If the colour of the incident light is changed, then the refractive indices are modified, and the value of 2V changes. This is known as dispersion of the optic axes. For strontianite the effect is weak, with 2V larger for violet light than for red light r < v.
Luminescence
Strontianite is almost always fluorescent. It fluoresces bright yellowish white under shortwave, mediumwave and longwave ultraviolet radiation. If the luminescence persists after the ultraviolet source is switched off the sample is said to be phosphorescent. Most strontianite phosphoresces a strong, medium duration, yellowish white after exposure to all three wavelengths. It is also fluorescent and phosphorescent in X-rays and electron beams. All materials will glow red hot if they are heated to a high enough temperature (provided they do not decompose first); some materials become luminescent at much lower temperatures, and this is known as thermoluminescence. Strontianite is sometimes thermoluminescent.
Physical properties
Cleavage is nearly perfect parallel to one set of prism faces, {110}, and poor on {021}. Traces of cleavage have been observed on {010}.
Twinning is very common, with twin plane {110}. The twins are usually contact twins; in a contact twin the two individuals appear to be reflections of each other in the twin plane. Penetration twins of strontianite are rarer; penetration twins are made up of interpenetrating individuals that are related to each other by rotation about a twin axis. Repeated twins are made up of three or more individuals twinned according to the same law. If all the twin planes are parallel then the twin is polysynthetic, otherwise it is cyclic. In strontianite repeated twinning forms cyclic twins with three or four individuals, or polysynthetic twins.
The mineral is brittle, and breaks with a subconchoidal to uneven fracture. It is quite soft, with a Mohs hardness of 3½, between calcite and fluorite. The specific gravity of the pure endmember with no calcium substituting for strontium is 3.78, but most samples contain some calcium, which is lighter than strontium, giving a lower specific gravity, in the range 3.74 to 3.78. Substitutions of the heavier ions barium and/or lead increase the specific gravity, although such substitutions are never very abundant. Strontianite is soluble in dilute hydrochloric acid HCl and it is not radioactive.
Environment and associations
Strontianite is an uncommon low-temperature hydrothermal mineral formed in veins in limestone, marl, and chalk, and in geodes and concretions. It occurs rarely in hydrothermal metallic veins but is common in carbonatites. It most likely crystallises at or near 100 °C. Its occurrence in open vugs and veins suggests crystallisation at very low pressures, probably at most equal to the hydrostatic pressure of the ground water. Under appropriate conditions it alters to celestine SrSO4, and it is itself found as an alteration from celestine. These two minerals are often found in association, together with baryte, calcite, harmotome and sulfur.
Occurrences
Type locality
The type locality is Strontian, North West Highlands (Argyllshire), Scotland, UK. The type material occurred in veins in gneiss.
Other UK localities include Brownley Hill Mine (Bloomsberry Horse Level), Nenthead, Alston Moor District, North Pennines, North and Western Region (Cumberland), Cumbria, England, associated with a suite of primary minerals (bournonite, millerite and ullmannite) which are not common in other Mississippi Valley-type deposits.
Canada
The Francon quarry, Montréal, Québec.
Strontianite is very common at the Francon Quarry, in a great variety of habits. It is a late stage mineral, sometimes found as multiple generations. It is found as translucent to opaque, white to pale yellow or beige generally smooth surfaced spheroids, hemispheres and compact spherical and botryoidal aggregates to 10 cm in diameter, and as spheres consisting of numerous radiating acicular crystals, up to 1 cm across. Also as tufts, parallel bundles, and sheaf-like clusters of fibrous to acicular crystals, and as white, finely granular porcelaneous and waxy globular aggregates. Transparent, pale pink, columnar to tabular sixling twins up to 1 cm in diameter have been found, and aggregates of stacked stellate sixling twins consisting of transparent, pale yellow tabular crystals.
Another Canadian occurrence is at Nepean, Ontario, in vein deposits in limestone.
Germany
Commercially important deposits occur in marls in Westphalia, and it is also found with zeolites at Oberschaffhausen, Bötzingen, Kaiserstuhl, Baden-Württemberg.
India
In Trichy (Tiruchirappalli; Tiruchi), Tiruchirapalli District, Tamil Nadu, it occurs with celestine SrSO4, gypsum and phosphate nodules in clay.
Mexico
It occurs in the Sierra Mojada District, with celestine in a lead-silver deposit.
Russia
It occurs in the Kirovskii apatite mine, Kukisvumchorr Mt, Khibiny Massif, Kola Peninsula, Murmanskaja Oblast', Northern Region, in late hydrothermal assemblages in cavities in pegmatites, associated with kukharenkoite-(La), microcline, albite, calcite, nenadkevichite, hilairite, catapleiite, donnayite-(Y), synchysite-(Ce), pyrite and others.
It also occurs at Yukspor Mountain, Khibiny Massif, Kola Peninsula, Murmanskaja Oblast', Northern Region, in an aegerine-natrolite-microcline vein in foyaite, associated with aegirine, anatase, ancylite-(Ce), barylite, catapleiite, cerite-(Ce), cerite-(La), chabazite-(Ca), edingtonite, fluorapatite, galena, ilmenite, microcline, natrolite, sphalerite and vanadinite. At the same locality it was found in alkaline pegmatite veins associated with clinobarylite, natrolite, aegirine, microcline, catapleiite, fluorapatite, titanite, fluorite, galena, sphalerite, annite, astrophyllite, lorenzenite, labuntsovite-Mn, kuzmenkoite-Mn, cerite-(Ce), edingtonite, ilmenite and calcite.
United States
In the Gulf coast of Louisiana and Texas, strontianite occurs with celestine in calcite cap rock of salt domes.
At the Minerva Number 1 Mine (Ozark-Mahoning Number 1 Mine) Ozark-Mahoning Group, Cave-in-Rock, Illinois, in the Kentucky Fluorspar District, Hardin County, strontianite occurs as white, brown or rarely pink tufts and bowties of acicular crystals with slightly curved terminations.
In the Silurian Lockport Group, Central and Western New York strontianite is observed in cavities in eastern Lockport, where it occurs as small white radiating sprays of acicular crystals.
In Schoharie County, New York, it occurs in geodes and veins with celestine and calcite in limestone, and in Mifflin County, Pennsylvania, it occurs with aragonite, again in limestone.
See also
Strontian process
References
External links
JMol
Lochaber
Strontium minerals
Carbonate minerals
Aragonite group
Orthorhombic minerals
Minerals in space group 62
Luminescent minerals
Geology of Scotland
Minerals described in 1791 | Strontianite | Chemistry | 2,681 |
56,379,488 | https://en.wikipedia.org/wiki/Biological%20effects%20of%20radiation%20on%20the%20epigenome | Ionizing radiation can cause biological effects which are passed on to offspring through the epigenome. The effects of radiation on cells have been found to depend on the dosage of the radiation, the location of the cell within the tissue, and whether the cell is a somatic or germ line cell. Generally, ionizing radiation appears to reduce methylation of DNA in cells.
Ionizing radiation is known to cause damage to cellular components such as proteins, lipids, and nucleic acids. It is also known to cause DNA double-strand breaks. Accumulation of DNA double-strand breaks can lead to cell cycle arrest in somatic cells and cause cell death. Due to its ability to induce cell cycle arrest, ionizing radiation is used in radiation therapy on abnormal growths in the human body, such as cancer cells. Most cancer cells are successfully treated with some type of radiotherapy; however, some cells, such as cancer stem cells, show recurrence after this type of therapy.
Radiation exposure in everyday life
Non-ionising radiation, such as electromagnetic fields (EMF) in the radiofrequency (RF) or power frequency range, has become very common in everyday life. All of these exist as low frequency radiation, which can come from wireless cellular devices or from electrical appliances that induce extremely low frequency (ELF) radiation. Exposure to these frequencies has been reported to have negative effects on the fertility of men, by impacting the DNA of the sperm and deteriorating the testes, as well as an increased risk of tumor formation in salivary glands. The International Agency for Research on Cancer considers RF electromagnetic fields to be possibly carcinogenic to humans; however, the evidence is limited.
Radiation and medical imaging
Advances in medical imaging have resulted in increased exposure of humans to low doses of ionizing radiation. Radiation exposure in pediatrics has been shown to have a greater impact as children's cells are still developing. The radiation obtained from medical imaging techniques is only harmful if consistently targeted multiple times in a short space of time. Safety measures have been introduced in order to limit exposure to harmful ionizing radiation, such as the usage of protective material during the use of these imaging tools. A lower dosage is also used in order to minimize the possibility of a harmful effect from the medical imaging tools. The National Council on Radiation Protection and Measurements, along with many other scientific committees, has ruled in favor of continued use of medical imaging, as the benefit far outweighs the minimal risk obtained from these imaging techniques. If the safety protocols are not followed there is a potential increase in the risk of developing cancer. This is primarily due to the decreased methylation of cell cycle genes, such as those relating to apoptosis and DNA repair. The ionizing radiation from these techniques can cause many other detrimental effects in cells including changes in gene expression and halting the cell cycle. However, these results are extremely unlikely if the proper protocols are followed.
Target theory
Target theory concerns the models of how radiation kills biological cells and is based around two main postulates:
"Radiation is considered to be a sequence of random projectiles;
the components of the cell are considered as the targets bombarded by these projectiles"
Several models have been based around the above two points. From the various proposed models three main conclusions were found:
Physical hits obey a Poisson distribution
Failure of radioactive particles to attack sensitive areas of cells allow for survival of the cell
Cell death is an exponential function of the dose of radiation received as the number of hits received is directly proportional to the radiation dose; all hits are considered lethal
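The link between the Poisson assumption and the exponential dose dependence can be made concrete with a short simulation (not from the source); the hit rate per unit dose used below is a hypothetical value chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
k = 0.5                                  # hypothetical lethal hits per unit dose
doses = [0.0, 1.0, 2.0, 4.0, 8.0]        # arbitrary dose units

for D in doses:
    hits = rng.poisson(k * D, size=100_000)   # Poisson-distributed hits per cell
    simulated = np.mean(hits == 0)            # fraction of cells with no lethal hit
    predicted = np.exp(-k * D)                # exponential survival predicted by the model
    print(f"D={D:>4}: simulated survival {simulated:.3f}, exp(-kD) {predicted:.3f}")
```

With every hit assumed lethal, the surviving fraction is just the probability of zero hits, which for a Poisson distribution decays exponentially with dose.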
Radiation exposure through ionizing radiation (IR) affects a variety of processes inside an exposed cell. IR can cause changes in gene expression, disruption of cell cycle arrest, and apoptotic cell death. The extent to which radiation affects cells depends on the type of cell and the dosage of the radiation. Some irradiated cancer cells have been shown to exhibit DNA methylation patterns due to epigenetic mechanisms in the cell. In medicine, medical diagnostic methods such as CT scans and radiation therapy expose the individual to ionizing radiation. Irradiated cells can also induce genomic instability in neighboring unirradiated cells via the bystander effect. Radiation exposure could also occur via many other channels than just ionizing radiation.
The basic ballistic models
The single-target single-hit model
In this model a single hit on a target is sufficient to kill a cell. The equation used for this model is as follows:
S(D) = e^(−k·m·D)
where k represents a hit on the cell, m represents the mass of the cell, and D the radiation dose.
The n-target single-hit model
In this model the cell has a number of targets n. A single hit on one target is not sufficient to kill the cell but does disable the target. An accumulation of successful hits on various targets leads to cell death. The equation used for this model is as follows:
S(D) = 1 − (1 − e^(−k·m·D))^n
where n represents the number of targets in the cell.
The linear quadratic model
The equation used for this model is as follows:
S(D) = e^(−(αD + βD²))
where αD represents a hit made by a one-particle track, βD² represents a hit made by a two-particle track, and S(D) represents the probability of survival of the cell.
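The following sketch (not from the source) evaluates the surviving fraction predicted by the single-target single-hit, n-target single-hit and linear-quadratic expressions above; all parameter values are hypothetical, chosen only to illustrate the shapes of the curves, and D0 stands in for 1/(k·m) in the notation used above.

```python
import numpy as np

D = np.linspace(0, 10, 6)     # dose grid, arbitrary units
D0, n = 1.5, 3                # hypothetical hit-target parameters (D0 plays the role of 1/(k*m))
alpha, beta = 0.3, 0.03       # hypothetical linear-quadratic coefficients

S_single = np.exp(-D / D0)                     # single-target single-hit
S_ntarget = 1 - (1 - np.exp(-D / D0)) ** n     # n-target single-hit
S_lq = np.exp(-(alpha * D + beta * D ** 2))    # linear-quadratic

for d, s1, s2, s3 in zip(D, S_single, S_ntarget, S_lq):
    print(f"D={d:5.1f}  single-hit={s1:.3e}  n-target={s2:.3e}  LQ={s3:.3e}")
```

The n-target expression produces the characteristic low-dose 'shoulder' before falling exponentially, while the quadratic term makes the linear-quadratic curve bend downwards at higher doses.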
The three lambda model
This model was shown to describe survival accurately for higher or repeated doses.
The equation used for this model is as follows:
The linear-quadratic-cubic model
The equation used for this model is as follows:
S(D) = e^(−(αD + βD² + γD³))
where the additional term γD³ weights the cubic dependence on dose.
Sublesions hypothesis models
The repair-misrepair model
This model shows the mean number of lesions before any repair activations in a cell.
The equation used for this model is as follows:
where Uo represents the yield of initially induced lesions, with λ being the linear self-repair coefficient, and T equaling time
The lethal-potentially lethal model
This equation explores the hypothesis of a lesion becoming fatal within a given period of time if it is not repaired by repair enzymes.
The equation used for this model is as follows:
T is the radiation duration and tr is the available repair time.
The saturable repair model
This model illustrates the efficiency of the repair system decreasing as the dosage of radiation increases. This is due to the repair kinetics becoming increasingly saturated with the increase in radiation dosage.
The equation used for this model is as follows:
n(t) is the number of unrepaired lesions, c(t) is the number of repair molecules or enzymes, k is the proportionality coefficient, and T is the time available for repair.
Cellular environment and radiation hormesis
Radiation hormesis
Hormesis is the hypothesis that low levels of a disrupting stimulus can cause beneficial adaptations in an organism. Under this hypothesis, ionizing radiation stimulates repair proteins that are usually not active, and cells use this new stimulus to adapt to the stressors they are being exposed to.
Radiation-Induced Bystander Effect (RIBE)
In biology, the bystander effect is described as changes to nearby non-targeted cells in response to changes in an initially targeted cell by some disrupting agent. In the case of Radiation-Induced Bystander Effect, the stress on the cell is caused by ionizing radiation.
The bystander effect can be broken down into two categories: the long range bystander effect and the short range bystander effect. In the long range bystander effect, the effects of stress are seen further away from the initially targeted cell. In the short range bystander effect, the effects of stress are seen in cells adjacent to the target cell.
Both low linear energy transfer and high linear energy transfer photons have been shown to produce RIBE. Low linear energy transfer photons were reported to cause increases in mutagenesis and a reduction in the survival of cells in clonogenic assays. X-rays and gamma rays were reported to cause increases in DNA double-strand breaks, methylation, and apoptosis. Further studies are needed to reach a conclusive explanation of any epigenetic impact of the bystander effect.
Radiation and oxidative stress
Formation of ROS
Ionizing radiation produces fast moving particles which have the ability to damage DNA, and produce highly reactive free radicals known as reactive oxygen species (ROS). The production of ROS in cells irradiated by LDIR (Low-Dose Ionizing Radiation) occurs in two ways, by the radiolysis of water molecules or the promotion of nitric oxide synthase (NOS) activity. The resulting nitric oxide reacts with superoxide radicals. This generates peroxynitrite, which is toxic to biomolecules. Cellular ROS is also produced with the help of a mechanism involving nicotinamide adenine dinucleotide phosphate (NADPH) oxidase. NADPH oxidase helps with the formation of ROS by generating a superoxide anion by transferring electrons from cytosolic NADPH across the cell membrane to the extracellular molecular oxygen. This process increases the potential for leakage of electrons and free radicals from the mitochondria. Exposure to LDIR induces electron release from the mitochondria, resulting in more electrons contributing to superoxide formation in the cells.
The production of ROS in high quantity in cells results in the degradation of biomolecules such as proteins, DNA, and RNA. For instance, ROS are known to create double-stranded and single-stranded breaks in the DNA. This causes the DNA repair mechanisms to try to adapt to the increase in DNA strand breaks. Heritable changes have been seen even though the DNA nucleotide sequence appears unchanged after exposure to LDIR.
Activation of NOS
The formation of ROS is coupled with increased nitric oxide synthase (NOS) activity. NO reacts with O2−, generating peroxynitrite (ONOO−). Peroxynitrite is a strong oxidant radical and it reacts with a wide array of biomolecules such as DNA bases, proteins and lipids. Peroxynitrite affects the function and structure of biomolecules and therefore effectively destabilizes the cell.
Mechanism of oxidative stress and epigenetic gene regulation
Ionizing radiation causes the cell to generate increased ROS and the increase of this species damages biological macromolecules. In order to compensate for this increased radical species, cells adapt to IR induced oxidative effects by modifying the mechanisms of epigenetic gene regulation. There are 4 epigenetic modifications that can take place:
formation of protein adducts inhibiting epigenetic regulation
alteration of genomic DNA methylation status
modification of post translational histone interactions affecting chromatin compaction
modulation of signaling pathways that control transcription factor expression
ROS-mediated protein adduct formation
ROS generated by ionizing radiation chemically modify histones, which can cause a change in transcription. Oxidation of cellular lipid components results in the formation of electrophilic molecules. The electrophilic molecules bind to the lysine residues of histones, causing ketoamide adduct formation. The ketoamide adduct blocks the lysine residues of histones from binding to acetylation proteins, thus decreasing gene transcription.
ROS-mediated DNA methylation changes
DNA hypermethylation is seen in the genome with DNA breaks on a gene-specific basis, such as at the promoters of regulatory genes, but the global methylation of the genome shows a hypomethylation pattern during the period of reactive oxygen species stress.
DNA damage induced by reactive oxygen species results in increased gene methylation and ultimately gene silencing. Reactive oxygen species modify the mechanism of epigenetic methylation by inducing DNA breaks which are later repaired and then methylated by DNMTs. DNA damage response genes, such as GADD45A, recruit the nuclear protein Np95 to direct histone methyltransferases towards the damaged DNA site. The breaks in DNA caused by the ionizing radiation then recruit the DNMTs in order to repair and further methylate the repair site.
Genome wide hypomethylation occurs due to reactive oxygen species hydroxylating methylcytosines to 5-hydroxymethylcytosine (5hmC). The production of 5hmC serves as an epigenetic marker for DNA damage which is recognizable by DNA repair enzymes. The DNA repair enzymes attracted by the marker convert 5hmC to an unmethylated cytosine base resulting in the hypomethylation of the genome.
Another mechanism that induces hypomethylation is the depletion of S-adenosyl methionine (SAM). The prevalence of superoxide species causes the oxidation of reduced glutathione (GSH) to GSSG. Due to this, synthesis of the cosubstrate SAM is stopped. SAM is an essential cosubstrate for the normal functioning of DNMTs and histone methyltransferase proteins.
ROS-mediated post-translation modification
Double-stranded DNA breaks caused by exposure to ionizing radiation are known to alter chromatin structure. Double-stranded breaks are primarily repaired by poly(ADP-ribose) (PAR) polymerases, which accumulate at the site of the break, leading to activation of the chromatin remodeling protein ALC1. ALC1 causes the nucleosome to relax, resulting in the epigenetic up-regulation of genes. A similar mechanism involves the ataxia telangiectasia mutated (ATM) serine/threonine kinase, an enzyme involved in the repair of double-stranded breaks caused by ionizing radiation. ATM phosphorylates KAP1, which causes the heterochromatin to relax, allowing increased transcription to occur.
The DNA mismatch repair gene (MSH2) promoter has shown a hypermethylation pattern when exposed to ionizing radiation. Reactive oxygen species induce the oxidization of deoxyguanosine into 8-hydroxydeoxyguanosine (8-OHdG) causing a change in chromatin structure. Gene promoters that contain 8-OHdG deactivate the chromatin by inducing trimethyl-H3K27 in the genome. Other enzymes such as transglutaminases (TGs) control chromatin remodeling through proteins such as sirtuin1 (SIRT1). TGs cause transcriptional repression during reactive oxygen species stress by binding to the chromatin and inhibiting the sirtuin 1 histone deacetylase from performing its function.
ROS-mediated loss of epigenetic imprinting
Epigenetic imprinting is lost during reactive oxygen species stress. This type of oxidative stress causes a loss of NF-κB signaling. The enhancer-blocking element CCCTC-binding factor (CTCF) binds to the imprint control region of insulin-like growth factor 2 (IGF2), preventing the enhancers from allowing transcription of the gene. NF-κB proteins interact with IκB inhibitory proteins, but during oxidative stress IκB proteins are degraded in the cell. With no IκB proteins left to bind to, NF-κB proteins enter the nucleus and bind to specific response elements to counter the oxidative stress. The binding of NF-κB and the corepressor HDAC1 to response elements such as the CCCTC-binding factor causes a decrease in expression of the enhancer-blocking element. This decrease in expression hinders binding to the IGF2 imprint control region, therefore causing the loss of imprinting and biallelic IGF2 expression.
Mechanisms of epigenetic modifications
After the initial exposure to ionizing radiation, cellular changes are prevalent in the unexposed offspring of irradiated cells for many cell divisions. One way this non-Mendelian mode of inheritance can be explained is through epigenetic mechanisms.
Ionizing radiation and DNA methylation
Genomic instability via hypomethylation of LINE1
Ionizing radiation exposure affects patterns of DNA methylation. Breast cancer cells treated with fractionated doses of ionizing radiation showed DNA hypomethylation at the various gene loci; dose fractionation refers to breaking down one dose of radiation into separate, smaller doses. Hypomethylation of these genes correlated with decreased expression of various DNMTs and methyl CpG binding proteins. LINE1 transposable elements have been identified as targets for ionizing radiation. The hypomethylation of LINE1 elements results in activation of the elements and thus an increase in LINE1 protein levels. Increased transcription of LINE1 transposable elements results in greater mobilization of the LINE1 loci and therefore increases genomic instability.
Ionizing radiation and histone modification
Irradiated cells can be linked to a variety of histone modifications. Ionizing radiation in breast cancer cells inhibits H4 lysine tri-methylation. Mouse models exposed to high levels of X-ray irradiation exhibited a decrease in both the tri-methylation of H4-Lys20 and the compaction of the chromatin. With the loss of tri-methylation of H4-Lys20, DNA hypomethylation increased, resulting in DNA damage and increased genomic instability.
Loss of methylation via repair mechanisms
Breaks in DNA due to ionizing radiation can be repaired. New DNA synthesis by DNA polymerases is one of the ways radiation induced DNA damage can be repaired. However, DNA polymerases do not insert methylated bases which leads to a decrease in methylation of the newly synthesized strand. Reactive oxygen species also inhibit DNMT activity which would normally add the missing methyl groups. This increases the chance that the demethylated state of DNA will eventually become permanent.
Clinical consequences and applications
MGMT- and LINE1- specific DNA methylation
DNA methylation influences tissue responses to ionizing radiation. Modulation of methylation in the gene MGMT or in transposable elements such as LINE1 could be used to alter tissue responses to ionizing radiation and potentially opening new areas for cancer treatment.
MGMT serves as a prognostic marker in glioblastoma. Hypermethylation of MGMT is associated with the regression of tumors. Hypermethylation of MGMT silences its transcription, so it can no longer counteract the alkylating agents used to kill tumor cells. Studies have shown that patients who received radiotherapy, but no chemotherapy after tumor extraction, had an improved response to radiotherapy due to methylation of the MGMT promoter.
Almost all human cancers include hypomethylation of LINE1 elements. Various studies indicate that the hypomethylation of LINE1 correlates with a decrease in survival after both chemotherapy and radiotherapy.
Treatment by DNMT inhibitors
DNMT inhibitors are being explored in the treatment of malignant tumors. Recent in-vitro studies show that DNMT inhibitors can increase the effects of other anti-cancer drugs. The in-vivo effects of DNMT inhibitors are still being investigated, and the long-term effects of their use are still unknown.
References
Radiation health effects
Cancer epigenetics | Biological effects of radiation on the epigenome | Chemistry,Materials_science | 3,933 |
11,467,712 | https://en.wikipedia.org/wiki/Agroathelia%20delphinii | Agroathelia delphinii is a plant pathogen infecting many tropical and warm temperate plants including mangoes.
References
Fungal plant pathogens and diseases
Mango tree diseases
Amylocorticiales
Fungus species | Agroathelia delphinii | Biology | 45 |
34,588,743 | https://en.wikipedia.org/wiki/Cause%20%28medicine%29 | Cause, also known as etiology () and aetiology, is the reason or origination of something.
The word etiology is derived from the Greek , aitiologia, "giving a reason for" (, aitia, "cause"; and , -logia).
Description
In medicine, etiology refers to the cause or causes of diseases or pathologies. Where no etiology can be ascertained, the disorder is said to be idiopathic.
Traditional accounts of the causes of disease may point to the "evil eye".
The Ancient Roman scholar Marcus Terentius Varro put forward early ideas about microorganisms in a 1st-century BC book titled On Agriculture.
Medieval thinking on the etiology of disease showed the influence of Galen and of Hippocrates. Medieval European doctors generally held the view that disease was related to the air and adopted a miasmatic approach to disease etiology.
Etiological discovery in medicine has a history in Robert Koch's demonstrations that the pathogenic bacterium Mycobacterium tuberculosis causes the disease tuberculosis, Bacillus anthracis causes anthrax, and Vibrio cholerae causes cholera. This line of thinking and evidence is summarized in Koch's postulates. But proof of causation in infectious diseases is limited to individual cases that provide experimental evidence of etiology.
In epidemiology, several lines of evidence together are required for causal inference. Austin Bradford Hill demonstrated a causal relationship between tobacco smoking and lung cancer, and summarized the line of reasoning in the Bradford Hill criteria, a group of nine principles to establish epidemiological causation. This idea of causality was later used in a proposal for a unified concept of causation.
Disease causative agent
The infectious diseases are caused by infectious agents or pathogens. The infectious agents that cause disease fall into five groups: viruses, bacteria, fungi, protozoa, and helminths (worms).
The term can also refer to a toxin or toxic chemical that causes illness.
Chain of causation and correlation
Further thinking in epidemiology was required to distinguish causation from association or statistical correlation. Events may occur together simply due to chance, bias or confounding, instead of one event being caused by the other. It is also important to know which event is the cause. Careful sampling and measurement are more important than sophisticated statistical analysis to determine causation. Experimental evidence involving interventions (providing or removing the supposed cause) gives the most compelling evidence of etiology.
Related to this, sometimes several symptoms always appear together, or more often than what could be expected, though it is known that one cannot cause the other. These situations are called syndromes, and normally it is assumed that an underlying condition must exist that explains all the symptoms.
Other times there is not a single cause for a disease, but instead a chain of causation from an initial trigger to the development of the clinical disease. An etiological agent of disease may require an independent co-factor, and be subject to a promoter (increases expression) to cause disease. An example of all the above, which was recognized late, is that peptic ulcer disease may be induced by stress, requires the presence of acid secretion in the stomach, and has primary etiology in Helicobacter pylori infection. Many chronic diseases of unknown cause may be studied in this framework to explain multiple epidemiological associations or risk factors which may or may not be causally related, and to seek the actual etiology.
Etiological heterogeneity
Some diseases, such as diabetes or hepatitis, are syndromically defined by their signs and symptoms, but include different conditions with different etiologies. These are called heterogeneous conditions.
Conversely, a single etiology, such as Epstein–Barr virus, may in different circumstances produce different diseases such as mononucleosis, nasopharyngeal carcinoma, or Burkitt's lymphoma.
Endotype
An endotype is a subtype of a condition, which is defined by a distinct functional or pathobiological mechanism. This is distinct from a phenotype, which is any observable characteristic or trait of a disease, such as morphology, development, biochemical or physiological properties, or behavior, without any implication of a mechanism. It is envisaged that patients with a specific endotype present themselves within phenotypic clusters of diseases.
One example is asthma, which is considered to be a syndrome, consisting of a series of endotypes. This is related to the concept of disease entity.
Another example is AIDS, where an HIV infection can produce several clinical stages; AIDS is defined as clinical stage IV of HIV infection.
See also
Molecular pathological epidemiology
Molecular pathology
Pathogenesis
Disease causative agent
References
External links
Pathology
Epidemiology | Cause (medicine) | Biology,Environmental_science | 1,012 |
26,193 | https://en.wikipedia.org/wiki/Roger%20Penrose | Sir Roger Penrose (born 8 August 1931) is an English mathematician, mathematical physicist, philosopher of science and Nobel Laureate in Physics. He is Emeritus Rouse Ball Professor of Mathematics in the University of Oxford, an emeritus fellow of Wadham College, Oxford, and an honorary fellow of St John's College, Cambridge, and University College London.
Penrose has contributed to the mathematical physics of general relativity and cosmology. He has received several prizes and awards, including the 1988 Wolf Prize in Physics, which he shared with Stephen Hawking for the Penrose–Hawking singularity theorems, and the 2020 Nobel Prize in Physics "for the discovery that black hole formation is a robust prediction of the general theory of relativity".
Early life and education
Born in Colchester, Essex, Roger Penrose is a son of physician Margaret (née Leathes) and psychiatrist and geneticist Lionel Penrose. His paternal grandparents were J. Doyle Penrose, an Irish-born artist, and The Hon. Elizabeth Josephine Peckover, daughter of Alexander Peckover, 1st Baron Peckover; his maternal grandparents were physiologist John Beresford Leathes and Sonia Marie Natanson, a Russian Jew. His uncle was artist Sir Roland Penrose, whose son with American photographer Lee Miller is Antony Penrose. Penrose is the brother of physicist Oliver Penrose, of geneticist Shirley Hodgson and of chess Grandmaster Jonathan Penrose. Their stepfather was the mathematician and computer scientist Max Newman.
Penrose spent World War II as a child in Canada where his father worked in London, Ontario at the Ontario Hospital and Western University. Penrose studied at University College School. He then attended University College London, where he obtained a BSc degree with First Class Honours in mathematics in 1952.
In 1955, while a doctoral student, Penrose reintroduced the E. H. Moore generalised matrix inverse, also known as the Moore–Penrose inverse, after it had been reinvented by Arne Bjerhammar in 1951. Having started research under the professor of geometry and astronomy, Sir W. V. D. Hodge, Penrose received his PhD in algebraic geometry at St John's College, Cambridge in 1957, with his thesis titled "Tensor Methods in Algebraic Geometry" supervised by algebraist and geometer John A. Todd. He devised and popularised the Penrose triangle in the 1950s in collaboration with his father, describing it as "impossibility in its purest form", and exchanged material with the artist M. C. Escher, whose earlier depictions of impossible objects partly inspired it. Escher's Waterfall and Ascending and Descending were in turn inspired by Penrose.
As reviewer Manjit Kumar puts it:
Research and career
Penrose spent the academic year 1956–57 as an assistant lecturer at Bedford College (now Royal Holloway, University of London) and was then a research fellow at St John's College, Cambridge. During that three-year post, he married Joan Isabel Wedge, in 1959. Before the fellowship ended Penrose won a NATO Research Fellowship for 1959–61, first at Princeton and then at Syracuse University. Returning to the University of London, Penrose spent 1961–63 as a researcher at King's College, London, before returning to the United States to spend 1963–64 as a visiting associate professor at the University of Texas at Austin. He later held visiting positions at Yeshiva University, Princeton and Cornell during 1966–67 and 1969.
In 1964, while a reader at Birkbeck College, London, (and having had his attention drawn from pure mathematics to astrophysics by the cosmologist Dennis Sciama, then at Cambridge) in the words of Kip Thorne of Caltech, "Roger Penrose revolutionised the mathematical tools that we use to analyse the properties of spacetime". Until then, work on the curved geometry of general relativity had been confined to configurations with sufficiently high symmetry for Einstein's equations to be solvable explicitly, and there was doubt about whether such cases were typical. One approach to this issue was by the use of perturbation theory, as developed under the leadership of John Archibald Wheeler at Princeton. The other, and more radically innovative, approach initiated by Penrose was to overlook the detailed geometrical structure of spacetime and instead concentrate attention just on the topology of the space, or at most its conformal structure, since it is the latter – as determined by the lay of the lightcones – that determines the trajectories of lightlike geodesics, and hence their causal relationships. The importance of Penrose's epoch-making paper "Gravitational Collapse and Space-Time Singularities" (summarised roughly as that if an object such as a dying star implodes beyond a certain point, then nothing can prevent the gravitational field getting so strong as to form some kind of singularity) was not its only result. It also showed a way to obtain similarly general conclusions in other contexts, notably that of the cosmological Big Bang, which he dealt with in collaboration with Sciama's most famous student, Stephen Hawking.
It was in the local context of gravitational collapse that the contribution of Penrose was most decisive, starting with his 1969 cosmic censorship conjecture, to the effect that any ensuing singularities would be confined within a well-behaved event horizon surrounding a hidden space-time region for which Wheeler coined the term black hole, leaving a visible exterior region with strong but finite curvature, from which some of the gravitational energy may be extractable by what is known as the Penrose process, while accretion of surrounding matter may release further energy that can account for astrophysical phenomena such as quasars.
Following up his "weak cosmic censorship hypothesis", Penrose went on, in 1979, to formulate a stronger version called the "strong censorship hypothesis". Together with the Belinski–Khalatnikov–Lifshitz conjecture and issues of nonlinear stability, settling the censorship conjectures is one of the most important outstanding problems in general relativity. Penrose's influential Weyl curvature hypothesis on the initial conditions of the observable part of the universe and the origin of the second law of thermodynamics also dates from 1979. Penrose and James Terrell independently realised that objects travelling near the speed of light will appear to undergo a peculiar skewing or rotation. This effect has come to be called the Terrell rotation or Penrose–Terrell rotation.
In 1967, Penrose invented twistor theory, which maps geometric objects in Minkowski space into the 4-dimensional complex space with the metric signature (2,2).
Penrose is well known for his 1974 discovery of Penrose tilings, which are formed from two tiles that can only tile the plane nonperiodically, and are the first tilings to exhibit fivefold rotational symmetry. In 1984, such patterns were observed in the arrangement of atoms in quasicrystals. Another noteworthy contribution is his 1971 invention of spin networks, which later came to form the geometry of spacetime in loop quantum gravity. He was influential in popularizing what are commonly known as Penrose diagrams (causal diagrams).
In 1983, Penrose was invited to teach at Rice University in Houston, by the then provost Bill Gordon. He worked there from 1983 to 1987. His doctoral students have included, among others, Andrew Hodges, Lane Hughston, Richard Jozsa, Claude LeBrun, John McNamara, Tristan Needham, Tim Poston, Asghar Qadir, and Richard S. Ward.
In 2004, Penrose released The Road to Reality: A Complete Guide to the Laws of the Universe, a 1,099-page comprehensive guide to the Laws of Physics that includes an explanation of his own theory. The Penrose Interpretation predicts the relationship between quantum mechanics and general relativity, and proposes that a quantum state remains in superposition until the difference of space-time curvature attains a significant level.
Penrose is the Francis and Helen Pentz Distinguished Visiting Professor of Physics and Mathematics at Pennsylvania State University.
An earlier universe
In 2010, Penrose reported possible evidence, based on concentric circles found in Wilkinson Microwave Anisotropy Probe data of the cosmic microwave background sky, of an earlier universe existing before the Big Bang of our own present universe. He mentions this evidence in the epilogue of his 2010 book Cycles of Time, a book in which he presents his reasons, to do with Einstein's field equations, the Weyl curvature C, and the Weyl curvature hypothesis (WCH), that the transition at the Big Bang could have been smooth enough for a previous universe to survive it. He made several conjectures about C and the WCH, some of which were subsequently proved by others, and he also popularized his conformal cyclic cosmology (CCC) theory. In this theory, Penrose postulates that at the end of the universe all matter is eventually contained within black holes, which subsequently evaporate via Hawking radiation. At this point, everything contained within the universe consists of photons, which "experience" neither time nor space. There is essentially no difference between an infinitely large universe consisting only of photons and an infinitely small universe consisting only of photons. Therefore, a singularity for a Big Bang and an infinitely expanded universe are equivalent.
In simple terms, Penrose believes that the singularity in Einstein's field equation at the Big Bang is only an apparent singularity, similar to the well-known apparent singularity at the event horizon of a black hole. The latter singularity can be removed by a change of coordinate system, and Penrose proposes a different change of coordinate system that will remove the singularity at the big bang. One implication of this is that the major events at the Big Bang can be understood without unifying general relativity and quantum mechanics, and therefore we are not necessarily constrained by the Wheeler–DeWitt equation, which disrupts time. Alternatively, one can use the Einstein–Maxwell–Dirac equations.
Consciousness
Penrose has written books on the connection between fundamental physics and human (or animal) consciousness. In The Emperor's New Mind (1989), he argues that known laws of physics are inadequate to explain the phenomenon of consciousness. Penrose proposes the characteristics this new physics may have and specifies the requirements for a bridge between classical and quantum mechanics (what he calls correct quantum gravity). Penrose uses a variant of Turing's halting theorem to demonstrate that a system can be deterministic without being algorithmic. (For example, imagine a system with only two states, ON and OFF. If the system's state is ON when a given Turing machine halts and OFF when the Turing machine does not halt, then the system's state is completely determined by the machine; nevertheless, there is no algorithmic way to determine whether the Turing machine stops.)
Penrose believes that such deterministic yet non-algorithmic processes may come into play in the quantum mechanical wave function reduction, and may be harnessed by the brain. He argues that computers today are unable to have intelligence because they are algorithmically deterministic systems. He argues against the viewpoint that the rational processes of the mind are completely algorithmic and can thus be duplicated by a sufficiently complex computer. This contrasts with supporters of strong artificial intelligence, who contend that thought can be simulated algorithmically. He bases this on claims that consciousness transcends formal logic because factors such as the insolubility of the halting problem and Gödel's incompleteness theorem prevent an algorithmically based system of logic from reproducing such traits of human intelligence as mathematical insight. These claims were originally espoused by the philosopher John Lucas of Merton College, Oxford.
The Penrose–Lucas argument about the implications of Gödel's incompleteness theorem for computational theories of human intelligence has been criticised by mathematicians, computer scientists and philosophers. Many experts in these fields assert that Penrose's argument fails, though different authors may choose different aspects of the argument to attack. Marvin Minsky, a leading proponent of artificial intelligence, was particularly critical, stating that Penrose "tries to show, in chapter after chapter, that human thought cannot be based on any known scientific principle." Minsky's position is exactly the opposite – he believed that humans are, in fact, machines, whose functioning, although complex, is fully explainable by current physics. Minsky maintained that "one can carry that quest [for scientific explanation] too far by only seeking new basic principles instead of attacking the real detail. This is what I see in Penrose's quest for a new basic principle of physics that will account for consciousness."
Penrose responded to criticism of The Emperor's New Mind with his follow-up 1994 book Shadows of the Mind, and in 1997 with The Large, the Small and the Human Mind. In those works, he also combined his observations with those of anesthesiologist Stuart Hameroff.
Penrose and Hameroff have argued that consciousness is the result of quantum gravity effects in microtubules, which they dubbed Orch-OR (orchestrated objective reduction). Max Tegmark, in a paper in Physical Review E, calculated that the time scale of neuron firing and excitations in microtubules is slower than the decoherence time by a factor of at least 10,000,000,000. The reception of the paper is summed up by this statement in Tegmark's support: "Physicists outside the fray, such as IBM's John A. Smolin, say the calculations confirm what they had suspected all along. 'We're not working with a brain that's near absolute zero. It's reasonably unlikely that the brain evolved quantum behavior'". Tegmark's paper has been widely cited by critics of the Penrose–Hameroff position.
Phillip Tetlow, although himself supportive of Penrose's views, acknowledges that Penrose's ideas about the human thought process are at present a minority view in scientific circles, citing Minsky's criticisms and quoting science journalist Charles Seife's description of Penrose as "one of a handful of scientists" who believe that the nature of consciousness suggests a quantum process.
In January 2014, Hameroff and Penrose ventured that a discovery of quantum vibrations in microtubules by Anirban Bandyopadhyay of the National Institute for Materials Science in Japan supports the hypothesis of Orch-OR theory. A reviewed and updated version of the theory was published along with critical commentary and debate in the March 2014 issue of Physics of Life Reviews.
Publications
His popular publications include:
The Emperor's New Mind: Concerning Computers, Minds, and The Laws of Physics (1989)
Shadows of the Mind: A Search for the Missing Science of Consciousness (1994)
The Road to Reality: A Complete Guide to the Laws of the Universe (2004)
Cycles of Time: An Extraordinary New View of the Universe (2010)
Fashion, Faith, and Fantasy in the New Physics of the Universe (2016)
His co-authored publications include:
The Nature of Space and Time (with Stephen Hawking) (1996)
The Large, the Small and the Human Mind (with Abner Shimony, Nancy Cartwright, and Stephen Hawking) (1997)
White Mars: The Mind Set Free (with Brian Aldiss) (1999)
His academic books include:
Techniques of Differential Topology in Relativity (1972, )
Spinors and Space-Time: Volume 1, Two-Spinor Calculus and Relativistic Fields (with Wolfgang Rindler, 1987) (paperback)
Spinors and Space-Time: Volume 2, Spinor and Twistor Methods in Space-Time Geometry (with Wolfgang Rindler, 1988) (reprint), (paperback)
His forewords to other books include:
Foreword to "The Map and the Territory: Exploring the foundations of science, thought and reality" by Shyam Wuppuluri and Francisco Antonio Doria. Published by Springer in "The Frontiers Collection", 2018.
Foreword to Beating the Odds: The Life and Times of E. A. Milne, written by Meg Weston Smith. Published by World Scientific Publishing Co in June 2013.
Foreword to "A Computable Universe" by Hector Zenil. Published by World Scientific Publishing Co in December 2012.
Foreword to Quantum Aspects of Life by Derek Abbott, Paul C. W. Davies, and Arun K. Pati. Published by Imperial College Press in 2008.
Foreword to Fearful Symmetry by Anthony Zee. Published by Princeton University Press in 2007.
Awards and honours
Penrose has been awarded many prizes for his contributions to science.
In 1971, he was awarded the Dannie Heineman Prize for Astrophysics by the American Astronomical Society and American Institute of Physics. He was elected a Fellow of the Royal Society (FRS) in 1972. In 1975, Stephen Hawking and Penrose were jointly awarded the Eddington Medal of the Royal Astronomical Society. In 1985, he was awarded the Royal Society Royal Medal. Along with Stephen Hawking, he was awarded the prestigious Wolf Prize in Physics by the Wolf Foundation (Israel) in 1988.
In 1989, Penrose was awarded the Dirac Medal and Prize of the British Institute of Physics. He was also made an Honorary Fellow of the Institute of Physics (HonFInstP). In 1990, Penrose was awarded the Albert Einstein Medal for outstanding work related to the work of Albert Einstein by the Albert Einstein Society (Switzerland). In 1991, he was awarded the Naylor Prize of the London Mathematical Society. From 1992 to 1995, he served as President of the International Society on General Relativity and Gravitation.
In 1994, Penrose was knighted for services to science. In the same year, he was also awarded an honorary degree of Doctor of Science (DSc) by the University of Bath, and became a member of Polish Academy of Sciences. In 1998, he was elected Foreign Associate of the United States National Academy of Sciences. In 2000, he was appointed a Member of the Order of Merit (OM).
In 2004, he was awarded the De Morgan Medal by the London Mathematical Society for his wide and original contributions to mathematical physics. To quote the citation from the society:
In 2005, Penrose received a Doctorate Honoris Causa (Dr.h.c.) from both Warsaw University (Poland) and the Katholieke Universiteit Leuven (Belgium). In 2006, he was conferred the honorary degree of Doctor of the University (DUniv) by the University of York and also won the Dirac Medal given by the University of New South Wales (Australia). In 2008, Penrose was awarded the Copley Medal of the Royal Society. He is also a Distinguished Supporter of Humanists UK and one of the patrons of the Oxford University Scientific Society.
He was elected to the American Philosophical Society in 2011. The same year, he was also awarded the Fonseca Prize by the University of Santiago de Compostela (Spain).
In 2012, Penrose was awarded the Richard R. Ernst Medal by ETH Zürich (Switzerland) for his contributions to science and strengthening the connection between science and society. In that year, he was also awarded the honorary degree of Doctor of Science (DSc) by Trinity College Dublin (Ireland), as well as an honorary doctorate by the Igor Sikorsky Kyiv Polytechnic Institute (Ukraine).
In 2015, Penrose was awarded a Doctorate Honoris Causa (Dr.h.c.) by CINVESTAV (Mexico).
In 2017, he was awarded the Commandino Medal at the Urbino University (Italy) for his contributions to the history of science. In that year as well, he was awarded an Honorary Doctor of Science degree (DSc) by the University of Edinburgh.
In 2020, Penrose was awarded one half of the Nobel Prize in Physics by the Royal Swedish Academy of Sciences for the discovery that black hole formation is a robust prediction of the general theory of relativity, a half-share also going to Reinhard Genzel and Andrea Ghez for the discovery of a supermassive compact object at the centre of our galaxy. In the same year, he was also awarded the honorary degree of Doctor of Science (DSc) by the University of Cambridge.
Personal life
Penrose's first marriage was to American Joan Isabel Penrose (née Wedge), whom he married in 1959. They had three sons. Penrose is now married to Vanessa Thomas, director of Academic Development at Cokethorpe School and former head of mathematics at Abingdon School. They have one son.
Religious views
During an interview with BBC Radio 4 on 25 September 2010, Penrose stated, "I'm not a believer myself. I don't believe in established religions of any kind." He regards himself as an agnostic. In the 1991 film A Brief History of Time, he also said, "I think I would say that the universe has a purpose, it's not somehow just there by chance … some people, I think, take the view that the universe is just there and it runs along—it's a bit like it just sort of computes, and we happen somehow by accident to find ourselves in this thing. But I don't think that's a very fruitful or helpful way of looking at the universe, I think that there is something much deeper about it."
Penrose is a patron of Humanists UK.
See also
List of things named after Roger Penrose
Notes
References
Further reading
External links
Awake in the Universe – Penrose debates how creativity, the most elusive of faculties, has helped us unlock the country of the mind and the mysteries of the cosmos with Bonnie Greer.
– Penrose was one of the principal interviewees in a BBC documentary about the mathematics of infinity directed by David Malone
Penrose's new theory "Aeons Before the Big Bang?":
Original 2005 lecture: "Before the Big Bang? A new perspective on the Weyl curvature hypothesis" (Isaac Newton Institute for Mathematical Sciences, Cambridge, 11 November 2005).
Original publication: "Before the Big Bang: an outrageous new perspective and its implications for particle physics". Proceedings of EPAC 2006. Edinburgh. 2759–2762 (cf. also Hill, C.D. & Nurowski, P. (2007) "On Penrose's 'Before the Big Bang' ideas". Ithaca)
Revised 2009 lecture: "Aeons Before the Big Bang?" (Georgia Institute of Technology, Center for Relativistic Astrophysics)
Roger Penrose on The Forum
Hilary Putnam's review of Penrose's 'Shadows of the Mind' claiming that Penrose's use of Gödel's Incompleteness Theorem is fallacious
Penrose Tiling found in Islamic Architecture
Two theories for the formation of quasicrystals resembling Penrose tilings
"Biological feasibility of quantum states in the brain" – (a disputation of Tegmark's result by Hagan, Hameroff, and Tuszyński)
Tegmarks's rejoinder to Hagan et al.
– D. Trull about Penrose's lawsuit concerning the use of his Penrose tilings on toilet paper
Roger Penrose: A Knight on the tiles (Plus Magazine)
Penrose's Gifford Lecture biography
Quantum-Mind
Audio: Roger Penrose in conversation on the BBC World Service discussion show
Roger Penrose speaking about Hawking's new book on Premier Christian Radio
"The Cyclic Universe – A conversation with Roger Penrose", Ideas Roadshow, 2013
Forbidden crystal symmetry in mathematics and architecture, filmed event at the Royal Institution, October 2013
Oxford Mathematics Interviews: "Extra Time: Professor Sir Roger Penrose in conversation with Andrew Hodges." These two films explore the development of Sir Roger Penrose's thought over more than 60 years, ending with his most recent theories and predictions. 51 min and 42 min. (Mathematical Institute)
BBC Radio 4 – The Life Scientific – Roger Penrose on Black Holes – 22 November 2016 Sir Roger Penrose talks to Jim Al-Khalili about his trailblazing work on how black holes form, the problems with quantum physics and his portrayal in films about Stephen Hawking.
The Penrose Institute Website
A chess problem holds the key to human consciousness?, Chessbase
1931 births
Living people
People from Colchester
20th-century British mathematicians
Mathematics popularizers
20th-century British philosophers
20th-century British physicists
21st-century British mathematicians
21st-century British philosophers
21st-century British physicists
Academics of Birkbeck, University of London
Academics of King's College London
Albert Einstein Medal recipients
Alumni of St John's College, Cambridge
Alumni of the University of London
Alumni of University College London
British expatriate academics in the United States
British Nobel laureates
British consciousness researchers and theorists
British agnostics
English humanists
English expatriates in the United States
British geometers
English people of Russian-Jewish descent
English science writers
Recreational mathematicians
Fellows of the Royal Society
Foreign associates of the National Academy of Sciences
Fellows of Wadham College, Oxford
Knights Bachelor
Mathematical physicists
Members of the Order of Merit
Nobel laureates in Physics
Pennsylvania State University faculty
People educated at University College School
British philosophers of science
Academics of Gresham College
Quantum mind
British quantum physicists
Recipients of the Copley Medal
British relativity theorists
Rice University faculty
Rouse Ball Professors of Mathematics (University of Oxford)
Royal Medal winners
Wolf Prize in Physics laureates
English people of Irish descent
Recipients of the Dalton Medal
The Joe Rogan Experience guests | Roger Penrose | Physics,Mathematics | 5,202 |
67,655,566 | https://en.wikipedia.org/wiki/Elizabeth%20Niespolo | Elizabeth Niespolo is an American geologist. Her work utilizes geochemical methods to understand archaeological sites and human activity on multiple continents. Niespolo integrates laboratory and field work studying natural materials such as ostrich egg shells, corals, and minerals in rocks to quantify potential human environmental signatures preserved in these materials and their relevance in piecing together understanding of Homo sapiens through time. In the absence of archaeological sites, Niespolo uses high precision isotopic dating of minerals from volcanoes to determine their petrologic and eruptive history.
Early life and education
Niespolo was a first generation college student when she started her studies at the university level and chose to pursue a path integrating science, the Earth, and human history. She earned her undergraduate degree, a Bachelor of Arts, with a double major from the University of California, Berkeley in Berkeley, California. One major she chose in the humanities, Classics, and one major she chose in the sciences, Astrophysics.
During her undergraduate studies, she was drawn towards ancient history. Pursuing this initial inclination, she worked for archaeologists in the classics department to get a feel for what archaeology entailed. Part of this work included on-site field work in Greece, where she found herself drawn to natural materials such as soils and fossils, their stories, and how those stories were intertwined with the archaeological deposits. This interest grew into a realization that everyday items, including drywall in houses and the electrical circuits, chips, and battery components in cell phones, are made partly of earth materials that must be mined or quarried before being put to everyday use. She pursued additional field work in Italy, where she worked on pottery from Pompeii and saw Mount Vesuvius, which piqued her interest in volcanoes and their interactions with human civilizations. The interaction of human civilization with natural processes became a recurring theme of her graduate studies.
Graduate studies in geology
Niespolo began her graduate studies in geology at California State University, Long Beach in Long Beach, California, focusing her thesis work on the minerals in, and geochemical signatures of, jadeitites from Guatemala. She used these signatures to develop a new way of determining where Mesoamerican jade artifacts originated. As part of her research she worked at a Mayan archaeological site in Chiapas, Mexico. Her efforts at California State University, Long Beach, both on campus and in the field in Mexico, earned her a Master of Science (MS) degree.
After completing her MS, Niespolo continued her graduate studies in geology by pursuing a Doctor of Philosophy, PhD, at the University of California, Berkeley. Her work was split into two settings: on-campus laboratory work including activities such as taking geochemical measurements and on-location field work in various locations including the East African Rift Valley. In 2019 Niespolo finished and filed her dissertation and earned her PhD from the University of California, Berkeley. Niespolo's dissertation focused on using both stable and radiogenic isotopes to determine the geochronology and environmental context of human evolution in the past, the Quaternary specifically.
Career
2010s: The Americas, Polynesia, and Africa beginnings
Niespolo started integrating her archaeological experience with her physics background through geochemical dating methods based on fundamental nuclear physical reactions and radioactive decay chains in the chemical elements. Her physics background also came through in instrumentation she used to measure both radioactive and stable isotope abundances including mass spectrometry. Isotopic abundances became a center of her work on jadeitites from Guatemala, Central America. Her emphasis was on finding ways to fingerprint natural materials used by humans to make tools and artwork so the origin of these artifacts and the materials used in their construction can be pin-pointed. She has also emphasized the importance of jade sourcing in terms of economics in the Mayan Empire. As part of this work, Niespolo was a recipient of a research grant in 2014 from the Geological Society of America.
Niespolo furthered her experience working with natural minerals by focusing on volcanic sanidine from northern California. This work provided high precision dating to aid in understanding petrologic and eruptive processes that may have been preserved in minerals erupted in the Alder Creek rhyolite during the early Pleistocene.
Niespolo pursued radiometric dating of corals from the Cook Islands in Polynesia to help establish precise dates for human arrival and settlement of the island of Mangaia. As the islands of Polynesia were not originally inhabited by humans, precise geochemical dating can provide timelines for human arrival and colonization of the islands, as well as for the non-native plants humans brought with them. Specifically, Niespolo found thorium-based chemical evidence in coral abraders that Polynesians had arrived at Mangaia by 1011.6 ± 5.8 CE and that sweet potatoes had arrived by 1361-1466 CE.
Following her work on natural materials from Central America, North America, and Polynesia, Niespolo broadened her scope geographically by turning to the reconstruction of past physical and chemical environments in Africa. The goal of this work is to correlate geochemical findings with archaeological sites in order to understand the timing of new tool development, human evolution, and human migration in relation to the surrounding landscape. Part of why Niespolo chose to pursue geochronology was her interest in understanding past human evolution in response to changing environmental conditions, an understanding that may also help in assessing potential environmental changes driven by human activity today.
Foundational work on this topic for Niespolo included geochemical measurements of stable isotopes that constrained rainfall and vegetation variability in Eastern Africa during the Pleistocene-Holocene. This work, funded by a $199,496 grant from the National Science Foundation, provided environmental context for archaeological sites in Eastern Africa in the form of stable isotope abundances from ostrich egg shells.
2020s: Africa continues and professorship
Niespolo draws inspiration in her geology work from past geologist Charles Lyell emphasizing that to understand the present the past is particularly pertinent. Niespolo narrows this point down in an interview with Scientific American, "Geology is the direct means to understanding our resources, and we use natural resources for literally everything (your house, your drinking water, energy). If we don't know the geologic processes controlling these observable resources, we will be hard pressed to continue utilizing them safely and responsibly, and developing more sustainable resource use in the future."
In April 2021, research Niespolo led with a group of scientists investigating marine resource overuse in South Africa during the Middle Stone Age, which provided high precision dating to correlate Homo sapiens resource use with sea level change in the area, was published in the Proceedings of the National Academy of Sciences. Specifically, the study expanded her use of isotopic dating of ostrich egg shells, this time using thorium-230/uranium (230Th/U) burial dating, to remains from an archaeological site near modern-day Cape Town, and found that the dated deposit accumulated between 113,000 and 120,000 years ago. The study also found that inhabitants of the site maintained a consistent diet even as sea level dropped following a high stand of the sea, which was attributed to selective foraging by the inhabitants, in part due to an increase in aridity over the dated time period.
The same month, Princeton University announced that Niespolo was one of 10 new faculty members approved by Princeton University's Board of Trustees. Her faculty appointment began in the autumn of 2021 at the assistant professor level. In 2022, she further extended her geochronology expertise in assessing past human-climate dynamics through a collaborative study, to which she contributed uranium-series dating of natural materials, investigating tool and technology development in relation to changes in wind intensity and rainfall around 80,000 to 92,000 years ago in what is modern-day South Africa.
Personal
Niespolo enjoys the research setting, its cutting-edge nature, and being part of new discoveries. However, one of her frustrations with research in an academic setting is the amount of time and effort spent securing funding as opposed to doing the science itself; she has identified this aspect of research in higher education as her least favorite part of what she does.
Niespolo is a vocal proponent of female mentorship in science disciplines, taking the initiative herself by participating in organizations including Bay Area Scientists in Schools.
See also
2021 in mammal paleontology
Kondoa Rock-Art Sites
Polynesian navigation
References
Living people
American women geologists
Women geochemists
American women archaeologists
Princeton University faculty
California State University, Long Beach people
California State University, Long Beach alumni
Year of birth missing (living people)
21st-century American women | Elizabeth Niespolo | Chemistry | 1,820 |
25,433,315 | https://en.wikipedia.org/wiki/Walking%20audit | A walking audit is an assessment of the walkability or pedestrian access of an external environment. Walking audits are often undertaken in street environments to consider and promote the needs of pedestrians as a form of transport. They can be undertaken by a range of different stakeholders including:
Local community groups
Transport planners / engineers
Urban designers
Local police officers
Local politicians / councilors
Walking audits often collect both quantitative and qualitative data on the walking environment.
Pedestrian Environment Review System
The Pedestrian Environment Review System (PERS) is the most developed and widely used walking audit tool available.
PERS is “a systematic process to assess the pedestrian environment within a framework that promotes objectivity”. The environment is reviewed from the end user perspective of a vulnerable pedestrian. PERS consists of:
An on-street audit process
A GIS software package to consolidate, map and display results
A PERS walking audit collects both quantitative and qualitative data on six types of facility in the street environment:
Links (footways, footbridges, subways)
Crossings
Routes
Public Transport Waiting Areas (bus stops, tram stops, taxi ranks)
Public Spaces (parks and squares)
Interchange Spaces (between different modes of transport)
Each facility is rated on a seven-point scale (-3 to +3) for different parameters such as effective width, dropped kerbs, permeability, or personal security. PERS also rates disabled people's access. These PERS ratings are linked to Red/Amber/Green (RAG) colour-coding. The PERS software allows users to analyse and display walkability data using GIS maps, charts and quick-win recommendation lists.
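As a purely hypothetical illustration of how one such rating record might be modelled in software (this is not the actual PERS implementation, and the score-to-colour thresholds below are assumptions rather than published PERS rules), a reviewer's score for a single facility parameter could be represented like this:

from dataclasses import dataclass

@dataclass
class FacilityReview:
    facility_type: str  # e.g. "Link", "Crossing", "Public Transport Waiting Area"
    parameter: str      # e.g. "effective width", "dropped kerbs", "personal security"
    score: int          # seven-point scale from -3 to +3

    def rag_colour(self) -> str:
        # Assumed mapping of scores to Red/Amber/Green; real PERS thresholds may differ
        if self.score < 0:
            return "Red"
        if self.score == 0:
            return "Amber"
        return "Green"

print(FacilityReview("Crossing", "dropped kerbs", -2).rag_colour())  # -> Red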
PERS was originally developed in 2001 by TRL and the London Borough of Bromley. The software tool (PERS 1) was designed to allow transport professionals and community groups to quickly and cost-effectively assess and rate the walkability of local streets and recommend improvements for pedestrians. This version of the tool assessed Links, Crossings, and Routes. In 2005, Transport for London and TRL co-developed PERS 2, which expanded the original system to include Public Transport Waiting Areas (PTWA), Public Spaces and Interchange Spaces. In 2009, Transport for London and TRL further developed the tool into PERS 3, which included a built-in GIS mapping tool and the ability to add photographs and georeferences of quick wins (low cost, easy to implement physical improvements). PERS 3 also has the added functionality of automatically generating quick-win recommendation work lists for highway work crews.
The PERS tool has been used by organisations all over the world and has been used extensively in London to assess over 200 km of the street network.
Using walking audits to make a business case
Research undertaken by the Commission for Architecture and the Built Environment (CABE) has used the PERS walking audit method to show:
"how we can calculate the extra financial value that good street design contributes, over average or poor design".
The study found a direct link between an increase in PERS scores (and therefore an increase in the quality of a street for pedestrians) and residential house prices. The study demonstrates how PERS can be used to show how:
"clear financial benefits can be calculated from investing in better quality street design".
See also
Accessibility
Bicycle user group
Environmental planning
Pedestrian crossing
Street reclamation
Student transport
Transportation planning
Urban green space
Urban planning
Walkability
Walking bus
Walking tour
References
External links
TRL PERS Walking Audits
Film of PERS Walking Audit Tool demo
Transport for London - What is PERS?
RateMyStreet crowdsourcing web-tool for rating walkability
Urban studies and planning terminology
Sustainable transport
Pedestrian infrastructure | Walking audit | Physics | 732 |
196,662 | https://en.wikipedia.org/wiki/Lawrence%20Hargrave | Lawrence Hargrave, MRAeS, (29 January 18506 July 1915) was an Australian engineer, explorer, astronomer, inventor and aeronautical pioneer. He was perhaps best known for inventing the box kite, which was quickly adopted by other aircraft designers and subsequently formed the aerodynamic basis of early biplanes.
Biography
Lawrence Hargrave was born in Greenwich, England, the second son of John Fletcher Hargrave (later Attorney-General of NSW), and was educated at Queen Elizabeth's Grammar School, Kirkby Lonsdale, Westmorland, where there is now a DT building named in his honour. He immigrated to Australia at fifteen years of age with his family, arriving in Sydney on 5 November 1865 on the La Hogue. He accepted a place on the Ellesmere and circumnavigated Australia. Although he had shown ability in mathematics at his English school he failed the matriculation examination and in 1867 took an engineering apprenticeship with the Australasian Steam Navigation Company in Sydney. He later found the experience of great use in constructing his models and his theories.
In 1872, as an engineer, Hargrave sailed on the Maria on a voyage to New Guinea but the ship was wrecked. In 1875, he again sailed as an engineer on William John Macleay's expedition to the Gulf of Papua. From October 1875 to January 1876 he was exploring the hinterland of Port Moresby under Octavius Stone, and in April 1876 went on another expedition under Luigi D'Albertis for over 400 miles up the Fly River on the SS Ellengowan. In 1877 he was inspecting the newly developing pearling industry for Parbury Lamb and Co. He returned to Sydney, joined the Royal Society of New South Wales in 1877, and in 1878 became an assistant astronomical observer at Sydney Observatory. He held this position for about five years, retired in 1883 with a moderate competency, and gave the rest of his life to research work.
Hargrave was a Freemason.
Aeronautics
Hargrave had been interested in experiments of all kinds from an early age, particularly those with aircraft. When his father died in 1885, and Hargrave came into his inheritance, he resigned from the observatory to concentrate on full-time research and for a time gave particular attention to the flight of birds. He chose to live and experiment with his flying machines in Stanwell Park, a place which offers excellent wind and hang conditions and nowadays is the most famous hang gliding and paragliding venue in Australia.
In his career, Hargrave invented many devices, but never applied for a patent on any of them. He needed the money but he was a passionate believer in scientific communication as a key to furthering progress. As he wrote in 1893:
Workers must root out the idea [that] by keeping the results of their labours to themselves[,] a fortune will be assured to them. Patent fees are much wasted money. The flying machine of the future will not be born fully fledged and capable of a flight for 1000 miles or so. Like everything else it must be evolved gradually. The first difficulty is to get a thing that will fly at all. When this is made, a full description should be published as an aid to others. Excellence of design and workmanship will always defy competition.
Among many, three of Hargrave's inventions were particularly significant:
study of curved aerofoils, particularly designs with a thicker leading edge;
the box kite (1893), which greatly improved the lift to drag ratio of early gliders;
work on the rotary engine, which powered many early flying machines up until about 1920.
He made endless experiments and numerous models, and communicated his conclusions in a series of papers to the Royal Society of New South Wales. Two papers which will be found in the 1885 volume of its Journal and Proceedings show that he was early on the road to success. Other important papers will be found in the 1893 and 1895 volumes which reported on his experiments with flying-machine motors and cellular kites.
Of great significance to those pioneers working toward powered flight, Hargrave successfully lifted himself off the ground under a train of four of his box kites at Stanwell Park Beach on 12 November 1894. Aided by James Swain, the caretaker at his property, the kite line was moored via a spring balance to two sandbags (see image). Hargrave carried an anemometer and clinometer aloft to measure wind speed and the angle of the kite line. He rose 16 feet in a wind speed of 21 mph. This experiment was widely reported and established the box kite as a stable aerial platform. Hargrave claimed that "The particular steps gained are the demonstration that an extremely simple apparatus can be made, carried about, and flown by one man; and that a safe means of making an ascent with a flying machine, of trying the same without any risk of accident, and descending, is now at the service of any experimenter who wishes to use it." This was seen by Abbott Lawrence Rotch of the meteorological observatory at Harvard University who constructed a kite from the particulars in Engineering. A modification was adopted by the weather bureau of the United States and the use of box-kites for meteorological observations became widespread. The principle was applied to gliders, and in October 1906 Alberto Santos-Dumont used the box-kite principle in his aeroplane to make his first flight. Until 1909 the box-kite aeroplane was the usual type in Europe.
Hargrave had not confined himself to the problem of constructing a heavier-than-air machine that would fly, for he had given much time to the means of propulsion. In 1889 he invented a rotary engine which appears to have attracted so little notice that its principle had to be discovered over again by the Seguin brothers in 1908. This form of engine was much used in early aviation until it was superseded by later inventions. His development of the rotary engine was frustrated by the weight of materials and quality of machining available at the time, and he was unable to get sufficient power from his engines to build an independent flying machine.
Hargrave's work inspired Alexander Graham Bell to begin his own experiments with a series of tetrahedral kite designs. However, Hargrave's work, like that of many other pioneers, was not sufficiently appreciated during his lifetime. His models were offered to the premier of New South Wales as a gift to the state, and it is generally incorrectly stated that the offer was not accepted. It is not clear what really happened, but there appears to have been delays in accepting the models, and in the meantime about 100 of them were given to some visiting German professors who handed them to the Deutsches Museum in Munich. Hargrave also conducted experiments with a hydroplane, the application of the gyroscopic principle to a "one-wheeled car", and with 'wave propelled vessels'.
Hargrave's only son Geoffrey was killed at the Battle of Gallipoli in May 1915 during World War I. Hargrave was operated on for appendicitis but suffered peritonitis afterwards and died in July 1915. He was interred in Waverley Cemetery on the cliffs overlooking the open ocean.
Hargrave was an excellent experimenter and his models were well crafted. He had the optimism that is essential for an inventor, and the perseverance that will not allow itself to be damped by failures. Modest, unassuming and unselfish, he always refused to patent his inventions, and was only anxious that he might succeed in adding to the sum of human knowledge. Many men smiled at his efforts and few had faith that anything would come of them. An honourable exception was Professor Richard Threlfall who, in his presidential address to the Royal Society of New South Wales in May 1895, spoke of his "strong conviction of the importance of the work which Mr Hargrave has done towards solving the problem of artificial flight". Threlfall called Hargrave the "inventor of human flight", and a debt to Hargrave has since been attributed to the Wright brothers. The step he made in man's conquest of the air was an important one with far-reaching consequences, and he should be remembered as an important experimenter and inventor, who "probably did as much to bring about the accomplishment of dynamic flight as any other single individual".
Honors and memorials
An engraving of Lawrence Hargrave alongside some of his gliders appeared on the reverse of the Australian $20 banknote from 1966 to 1994.
Hargrave has been the subject of two operas. The first was Barry Conyngham's opera Fly which premiered in 1984 at the Victoria State Opera. The second was by Nigel Butterley with libretto by James McDonald, titled Lawrence Hargrave Flying Alone, which premiered at the Sydney Conservatorium of Music in 1988.
There is a memorial stone cairn with dedication plaque at Bald Hill, overlooking the successful man lift site.
Lawrence Hargrave Drive is a road which stretches from the Old Princes Highway in Helensburgh to the bottom of Bulli Pass in Thirroul.
Lawrence Hargrave Reserve in Elizabeth Bay Road, Elizabeth Bay was named to commemorate Hargrave who lived nearby at 40 Roslyn Gardens from 1885 to 1893.
A centenary celebration and re-enactment was held in November 1994 to commemorate the man lift at Stanwell Park beach.
The Lawrence Hargrave Professor of Aeronautical Engineering at Sydney University and the Hargrave-Andrew Engineering and Sciences library at Monash University are named in his honour.
Australia's largest airline Qantas named its fifth Airbus A380 aircraft (registration VH-OQE) after Lawrence Hargrave.
A new technology building at his former school in Kirkby Lonsdale, England was named in his honour in 2017.
A 1988 Lawrence Hargrave memorial sculpture, Winged Figure by Bert Flugelman, is located at the base of Mount Keira.
A memorial plaque on Lawrence Hargrave's residence in Point Piper, Sydney Google Streetview Lawrence Hargrave memorial plaque
See also
Man-lifting kite
References
Footnotes
Citations
Bibliography
Michael Adams: "Wind Beneath his Wings: Lawrence Hargrave at Stanwell Park", Cultural Exchange International Pty. Ltd (2005)
External links
Hargrave's flying machines, blog post, State Library of New South Wales.
Lawrence Hargrave, Australian aviation pioneer (1850–1915) (Monash University)
Winged Figure Lawrence Hargrave Memorial
The Hargrave Files papers and drawings by Lawrence Hargrave, and miscellaneous articles about Hargrave's kites from the archives of the Australian Kite Association
1850 births
1915 deaths
People from Greenwich
British aviation pioneers
Australian aerospace engineers
Structural engineers
British emigrants to Australia
19th-century Australian inventors
Deaths from peritonitis
Burials at Waverley Cemetery
Australian Freemasons
History of Wollongong
19th-century Australian astronomers | Lawrence Hargrave | Engineering | 2,225 |
333,306 | https://en.wikipedia.org/wiki/Regular%20polygon | In Euclidean geometry, a regular polygon is a polygon that is direct equiangular (all angles are equal in measure) and equilateral (all sides have the same length). Regular polygons may be either convex or star. In the limit, a sequence of regular polygons with an increasing number of sides approximates a circle, if the perimeter or area is fixed, or a regular apeirogon (effectively a straight line), if the edge length is fixed.
General properties
These properties apply to all regular polygons, whether convex or star:
A regular n-sided polygon has rotational symmetry of order n.
All vertices of a regular polygon lie on a common circle (the circumscribed circle); i.e., they are concyclic points. That is, a regular polygon is a cyclic polygon.
Together with the property of equal-length sides, this implies that every regular polygon also has an inscribed circle or incircle that is tangent to every side at the midpoint. Thus a regular polygon is a tangential polygon.
A regular n-sided polygon can be constructed with compass and straightedge if and only if the odd prime factors of n are distinct Fermat primes.
A regular n-sided polygon can be constructed with origami if and only if $n = 2^{a}3^{b}p_{1}p_{2}\cdots p_{r}$ for some nonnegative integers $a$, $b$, and $r$, where each distinct $p_{i}$ is a Pierpont prime.
Symmetry
The symmetry group of an n-sided regular polygon is the dihedral group Dn (of order 2n): D2, D3, D4, ... It consists of the rotations in Cn, together with reflection symmetry in n axes that pass through the center. If n is even then half of these axes pass through two opposite vertices, and the other half through the midpoint of opposite sides. If n is odd then all axes pass through a vertex and the midpoint of the opposite side.
Regular convex polygons
All regular simple polygons (a simple polygon is one that does not intersect itself anywhere) are convex. Those having the same number of sides are also similar.
An n-sided convex regular polygon is denoted by its Schläfli symbol $\{n\}$. For $n \leq 2$, we have two degenerate cases:
Monogon {1} Degenerate in ordinary space. (Most authorities do not regard the monogon as a true polygon, partly because of this, and also because the formulae below do not work, and its structure is not that of any abstract polygon.)
Digon {2}; a "double line segment" Degenerate in ordinary space. (Some authorities do not regard the digon as a true polygon because of this.)
In certain contexts all the polygons considered will be regular. In such circumstances it is customary to drop the prefix regular. For instance, all the faces of uniform polyhedra must be regular and the faces will be described simply as triangle, square, pentagon, etc.
Angles
For a regular convex n-gon, each interior angle has a measure of:
$\frac{(n-2) \times 180}{n}$ degrees;
$\frac{(n-2)\pi}{n}$ radians; or
$\frac{n-2}{2n}$ full turns,
and each exterior angle (i.e., supplementary to the interior angle) has a measure of $\frac{360}{n}$ degrees, with the sum of the exterior angles equal to 360 degrees or 2π radians or one full turn.
As n approaches infinity, the internal angle approaches 180 degrees. For a regular polygon with 10,000 sides (a myriagon) the internal angle is 179.964°. As the number of sides increases, the internal angle can come very close to 180°, and the shape of the polygon approaches that of a circle. However the polygon can never become a circle. The value of the internal angle can never become exactly equal to 180°, as the circumference would effectively become a straight line (see apeirogon). For this reason, a circle is not a polygon with an infinite number of sides.
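The angle formulas above are easy to check numerically. The following short Python sketch (an illustrative script with hypothetical helper names, using only the standard library) computes the interior and exterior angles for a few values of n and confirms that the interior angle approaches 180° while the exterior angles always sum to one full turn.

from math import isclose

def interior_angle_deg(n: int) -> float:
    # Interior angle of a regular convex n-gon, in degrees: (n - 2) * 180 / n
    return (n - 2) * 180.0 / n

def exterior_angle_deg(n: int) -> float:
    # Exterior angle, supplementary to the interior angle: 360 / n
    return 360.0 / n

for n in (3, 4, 5, 10_000):
    i, e = interior_angle_deg(n), exterior_angle_deg(n)
    assert isclose(i + e, 180.0)        # interior and exterior angles are supplementary
    assert isclose(e * n, 360.0)        # exterior angles sum to one full turn
    print(n, round(i, 3), round(e, 3))  # the 10,000-gon gives 179.964 degrees, as stated above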
Diagonals
For $n > 2$, the number of diagonals is $\frac{n(n-3)}{2}$; i.e., 0, 2, 5, 9, ..., for a triangle, square, pentagon, hexagon, ... . The diagonals divide the polygon into 1, 4, 11, 24, ... pieces.
For a regular n-gon inscribed in a circle of radius 1, the product of the distances from a given vertex to all other vertices (including adjacent vertices and vertices connected by a diagonal) equals n.
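The product-of-distances property can also be verified numerically. The sketch below (illustrative only; the sample values of n are arbitrary) places the vertices of a regular n-gon on the unit circle as complex n-th roots of unity and multiplies the distances from one vertex to all the others.

import cmath
from math import isclose, pi

def vertex_distance_product(n: int) -> float:
    # Vertices of a regular n-gon inscribed in a circle of radius 1, as complex numbers
    verts = [cmath.exp(2j * pi * k / n) for k in range(n)]
    prod = 1.0
    for v in verts[1:]:
        prod *= abs(verts[0] - v)  # distance from the first vertex to each other vertex
    return prod

for n in (3, 5, 12, 30):
    assert isclose(vertex_distance_product(n), n, rel_tol=1e-9)  # the product equals n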
Points in the plane
For a regular simple n-gon with circumradius R and distances $d_i$ from an arbitrary point in the plane to the vertices, we have
$\frac{1}{n}\sum_{i=1}^{n} d_i^{4} = \left(\frac{1}{n}\sum_{i=1}^{n} d_i^{2}\right)^{2} + 2R^{2}\left(\frac{1}{n}\sum_{i=1}^{n} d_i^{2} - R^{2}\right)$.
For higher powers of distances from an arbitrary point in the plane to the vertices of a regular n-gon, the cyclic averages
$S_n^{(2m)} = \frac{1}{n}\sum_{i=1}^{n} d_i^{2m}$,
where $m$ is a positive integer less than $n$, satisfy analogous closed-form identities.
If $L$ is the distance from an arbitrary point in the plane to the centroid of a regular n-gon with circumradius $R$, then
$S_n^{(2)} = R^{2} + L^{2}$ and $S_n^{(4)} = \left(R^{2} + L^{2}\right)^{2} + 2R^{2}L^{2}$,
and each $S_n^{(2m)}$ with $m$ = 1, 2, …, $n-1$ is likewise a polynomial in $R^{2}$ and $L^{2}$.
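These distance averages can be spot-checked with a few lines of Python; the point, circumradius, and polygon size below are arbitrary choices, and the script is an illustrative sketch rather than part of the article.

from math import cos, sin, pi, isclose

def distance_power_averages(n: int, R: float, px: float, py: float):
    # Averages of the squared and fourth-power distances from the point (px, py)
    # to the vertices of a regular n-gon centred at the origin with circumradius R.
    d2 = [(R * cos(2 * pi * k / n) - px) ** 2 + (R * sin(2 * pi * k / n) - py) ** 2
          for k in range(n)]
    return sum(d2) / n, sum(d ** 2 for d in d2) / n

n, R, px, py = 7, 2.0, 0.6, -1.1
L2 = px ** 2 + py ** 2                                    # squared distance to the centroid
s2, s4 = distance_power_averages(n, R, px, py)
assert isclose(s2, R ** 2 + L2)                           # S_n^(2) = R^2 + L^2
assert isclose(s4, (R ** 2 + L2) ** 2 + 2 * R ** 2 * L2)  # S_n^(4) = (R^2 + L^2)^2 + 2 R^2 L^2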
Interior points
For a regular n-gon, the sum of the perpendicular distances from any interior point to the n sides is n times the apothem (the apothem being the distance from the center to any side). This is a generalization of Viviani's theorem for the n = 3 case.
Circumradius
The circumradius R from the center of a regular polygon to one of the vertices is related to the side length s or to the apothem a by
$R = \frac{s}{2\sin\left(\frac{\pi}{n}\right)} = \frac{a}{\cos\left(\frac{\pi}{n}\right)}$
For constructible polygons, algebraic expressions for these relationships exist.
The sum of the perpendiculars from a regular n-gon's vertices to any line tangent to the circumcircle equals n times the circumradius.
The sum of the squared distances from the vertices of a regular n-gon to any point on its circumcircle equals 2nR2 where R is the circumradius.
The sum of the squared distances from the midpoints of the sides of a regular n-gon to any point on the circumcircle is 2nR2 − ns2, where s is the side length and R is the circumradius.
If $d_i$ are the distances from the vertices of a regular $n$-gon to any point on its circumcircle, then
$3\left(\sum_{i=1}^{n} d_i^{2}\right)^{2} = 2n \sum_{i=1}^{n} d_i^{4}$.
Dissections
Coxeter states that every zonogon (a 2m-gon whose opposite sides are parallel and of equal length) can be dissected into $\binom{m}{2} = \tfrac{1}{2}m(m-1)$ parallelograms.
These tilings are contained as subsets of vertices, edges and faces in orthogonal projections of m-cubes.
In particular, this is true for any regular polygon with an even number of sides, in which case the parallelograms are all rhombi.
The list gives the number of solutions for smaller polygons.
Area
The area A of a convex regular n-sided polygon having side s, circumradius R, apothem a, and perimeter p is given by
$A = \tfrac{1}{2}nsa = \tfrac{1}{2}pa = \tfrac{1}{4}ns^{2}\cot\left(\tfrac{\pi}{n}\right) = na^{2}\tan\left(\tfrac{\pi}{n}\right) = \tfrac{1}{2}nR^{2}\sin\left(\tfrac{2\pi}{n}\right)$
For regular polygons with side s = 1, circumradius R = 1, or apothem a = 1, this produces the following table: (Since $\cot x \to 1/x$ as $x \to 0$, the area when $s = 1$ tends to $\tfrac{n^{2}}{4\pi}$ as $n$ grows large.)
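The equivalence of these area expressions can be cross-checked numerically. The sketch below (illustrative only; the helper names are my own) evaluates the area of a regular n-gon with unit side length from two of the formulas above and, independently, from the shoelace formula applied to explicit vertex coordinates.

from math import tan, cos, sin, pi, isclose

def area_from_side(n: int, s: float = 1.0) -> float:
    return n * s * s / (4 * tan(pi / n))      # A = (1/4) n s^2 cot(pi/n)

def area_from_circumradius(n: int, R: float) -> float:
    return 0.5 * n * R * R * sin(2 * pi / n)  # A = (1/2) n R^2 sin(2 pi / n)

def area_shoelace(n: int, R: float) -> float:
    # Shoelace formula over the explicit vertex coordinates
    pts = [(R * cos(2 * pi * k / n), R * sin(2 * pi * k / n)) for k in range(n)]
    return 0.5 * abs(sum(x1 * y2 - x2 * y1
                         for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1])))

for n in (3, 5, 6, 17):
    R = 1.0 / (2 * sin(pi / n))               # circumradius of an n-gon with side length 1
    a1, a2, a3 = area_from_side(n), area_from_circumradius(n, R), area_shoelace(n, R)
    assert isclose(a1, a2) and isclose(a1, a3)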
Of all n-gons with a given perimeter, the one with the largest area is regular.
Constructible polygon
Some regular polygons are easy to construct with compass and straightedge; other regular polygons are not constructible at all.
The ancient Greek mathematicians knew how to construct a regular polygon with 3, 4, or 5 sides, and they knew how to construct a regular polygon with double the number of sides of a given regular polygon. This led to the question being posed: is it possible to construct all regular n-gons with compass and straightedge? If not, which n-gons are constructible and which are not?
Carl Friedrich Gauss proved the constructibility of the regular 17-gon in 1796. Five years later, he developed the theory of Gaussian periods in his Disquisitiones Arithmeticae. This theory allowed him to formulate a sufficient condition for the constructibility of regular polygons:
A regular n-gon can be constructed with compass and straightedge if n is the product of a power of 2 and any number of distinct Fermat primes (including none).
(A Fermat prime is a prime number of the form $2^{2^{k}} + 1$ for a nonnegative integer $k$.) Gauss stated without proof that this condition was also necessary, but never published his proof. A full proof of necessity was given by Pierre Wantzel in 1837. The result is known as the Gauss–Wantzel theorem.
Equivalently, a regular n-gon is constructible if and only if the cosine of its common angle is a constructible number—that is, can be written in terms of the four basic arithmetic operations and the extraction of square roots.
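The Gauss–Wantzel condition is straightforward to test by machine. The sketch below is an illustrative implementation (the function name is hypothetical, and it checks only against the five Fermat primes currently known, which suffices for any practically small n): it strips factors of 2 from n and then requires the remainder to be a product of distinct Fermat primes.

KNOWN_FERMAT_PRIMES = (3, 5, 17, 257, 65537)  # the only Fermat primes currently known

def is_constructible(n: int) -> bool:
    # A regular n-gon is compass-and-straightedge constructible iff n is a power of 2
    # times a product of distinct Fermat primes (Gauss-Wantzel theorem).
    if n < 3:
        return False
    while n % 2 == 0:
        n //= 2
    for p in KNOWN_FERMAT_PRIMES:
        if n % p == 0:
            n //= p
            if n % p == 0:   # a repeated Fermat prime factor is not allowed
                return False
    return n == 1

print([n for n in range(3, 31) if is_constructible(n)])
# Expected output: [3, 4, 5, 6, 8, 10, 12, 15, 16, 17, 20, 24, 30]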
Regular skew polygons
A regular skew polygon in 3-space can be seen as nonplanar paths zig-zagging between two parallel planes, defined as the side-edges of a uniform antiprism. All edges and internal angles are equal.
More generally regular skew polygons can be defined in n-space. Examples include the Petrie polygons, polygonal paths of edges that divide a regular polytope into two halves, and seen as a regular polygon in orthogonal projection.
In the infinite limit regular skew polygons become skew apeirogons.
Regular star polygons
A non-convex regular polygon is a regular star polygon. The most common example is the pentagram, which has the same vertices as a pentagon, but connects alternating vertices.
For an n-sided star polygon, the Schläfli symbol is modified to indicate the density or "starriness" m of the polygon, as {n/m}. If m is 2, for example, then every second point is joined. If m is 3, then every third point is joined. The boundary of the polygon winds around the center m times.
The (non-degenerate) regular stars of up to 12 sides are:
Pentagram – {5/2}
Heptagram – {7/2} and {7/3}
Octagram – {8/3}
Enneagram – {9/2} and {9/4}
Decagram – {10/3}
Hendecagram – {11/2}, {11/3}, {11/4} and {11/5}
Dodecagram – {12/5}
m and n must be coprime, or the figure will degenerate.
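The list above can be reproduced mechanically; the following Python sketch enumerates the Schläfli symbols {n/m} with gcd(n, m) = 1 and 1 < m < n/2 (the values m and n − m describe the same star traced in opposite directions, so only the smaller is listed):

from math import gcd

def regular_star_symbols(n: int):
    # Non-degenerate regular star polygons with n vertices.
    return [f"{{{n}/{m}}}" for m in range(2, (n + 1) // 2) if gcd(n, m) == 1]

for n in range(5, 13):
    stars = regular_star_symbols(n)
    if stars:
        print(n, " ".join(stars))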
The degenerate regular stars of up to 12 sides are:
Tetragon – {4/2}
Hexagons – {6/2}, {6/3}
Octagons – {8/2}, {8/4}
Enneagon – {9/3}
Decagons – {10/2}, {10/4}, and {10/5}
Dodecagons – {12/2}, {12/3}, {12/4}, and {12/6}
Depending on the precise derivation of the Schläfli symbol, opinions differ as to the nature of the degenerate figure. For example, {6/2} may be treated in either of two ways:
For much of the 20th century (see for example ), we have commonly taken the /2 to indicate joining each vertex of a convex {6} to its near neighbors two steps away, to obtain the regular compound of two triangles, or hexagram. Coxeter clarifies this regular compound with a notation {kp}[k{p}]{kp} for the compound {p/k}, so the hexagram is represented as {6}[2{3}]{6}. More compactly Coxeter also writes 2{n/2}, like 2{3} for a hexagram as compound as alternations of regular even-sided polygons, with italics on the leading factor to differentiate it from the coinciding interpretation.
Many modern geometers, such as Grünbaum (2003), regard this as incorrect. They take the /2 to indicate moving two places around the {6} at each step, obtaining a "double-wound" triangle that has two vertices superimposed at each corner point and two edges along each line segment. Not only does this fit in better with modern theories of abstract polytopes, but it also more closely copies the way in which Poinsot (1809) created his star polygons – by taking a single length of wire and bending it at successive points through the same angle until the figure closed.
Duality of regular polygons
All regular polygons are self-dual to congruency, and for odd n they are self-dual to identity.
In addition, the regular star figures (compounds), being composed of regular polygons, are also self-dual.
Regular polygons as faces of polyhedra
A uniform polyhedron has regular polygons as faces, such that for every two vertices there is an isometry mapping one into the other (just as there is for a regular polygon).
A quasiregular polyhedron is a uniform polyhedron which has just two kinds of face alternating around each vertex.
A regular polyhedron is a uniform polyhedron which has just one kind of face.
The remaining (non-uniform) convex polyhedra with regular faces are known as the Johnson solids.
A polyhedron having regular triangles as faces is called a deltahedron.
See also
Euclidean tilings by convex regular polygons
Platonic solid
List of regular polytopes and compounds
Equilateral polygon
Carlyle circle
Notes
References
Further reading
Lee, Hwa Young; "Origami-Constructible Numbers".
Grünbaum, B.; Are your polyhedra the same as my polyhedra?, Discrete and comput. geom: the Goodman-Pollack festschrift, Ed. Aronov et al., Springer (2003), pp. 461–488.
Poinsot, L.; Memoire sur les polygones et polyèdres. J. de l'École Polytechnique 9 (1810), pp. 16–48.
External links
Regular Polygon description With interactive animation
Incircle of a Regular Polygon With interactive animation
Area of a Regular Polygon Three different formulae, with interactive animation
Renaissance artists' constructions of regular polygons at Convergence
Types of polygons
Regular polytopes | Regular polygon | Physics | 3,060 |
10,766,404 | https://en.wikipedia.org/wiki/Generalised%20circle | In geometry, a generalized circle, sometimes called a cline or circline, is a straight line or a circle, the curves of constant curvature in the Euclidean plane.
The natural setting for generalized circles is the extended plane, a plane along with one point at infinity through which every straight line is considered to pass. Given any three distinct points in the extended plane, there exists precisely one generalized circle passing through all three.
Generalized circles sometimes appear in Euclidean geometry, which has a well-defined notion of distance between points, and where every circle has a center and radius: the point at infinity can be considered infinitely distant from any other point, and a line can be considered as a degenerate circle without a well-defined center and with infinite radius (zero curvature). A reflection across a line is a Euclidean isometry (distance-preserving transformation) which maps lines to lines and circles to circles; but an inversion in a circle is not, distorting distances and mapping any line to a circle passing through the reference circle's center, and vice versa.
However, generalized circles are fundamental to inversive geometry, in which circles and lines are considered indistinguishable, the point at infinity is not distinguished from any other point, and the notions of curvature and distance between points are ignored. In inversive geometry, reflections, inversions, and more generally their compositions, called Möbius transformations, map generalized circles to generalized circles, and preserve the inversive relationships between objects.
The extended plane can be identified with the sphere using a stereographic projection. The point at infinity then becomes an ordinary point on the sphere, and all generalized circles become circles on the sphere.
Extended complex plane
The extended Euclidean plane can be identified with the extended complex plane, so that equations of complex numbers can be used to describe lines, circles and inversions.
Bivariate linear equation
A circle is the set of points z in a plane that lie at radius r from a center point γ.
In the complex plane, γ is a complex number and the circle is a set of complex numbers. Using the property that a complex number multiplied by its conjugate is the square of its modulus (its Euclidean distance from the origin), an implicit equation for the circle is:
(z − γ)(z̄ − γ̄) = r², that is, zz̄ − γ̄z − γz̄ + (γγ̄ − r²) = 0.
This is a homogeneous bivariate linear polynomial equation in terms of the complex variable z and its conjugate z̄ of the form
Azz̄ + Bz + Cz̄ + D = 0,
where coefficients A and D are real, and B and C are complex conjugates.
By dividing by A and then reversing the steps above, the radius r and center γ can be recovered from any equation of this form. The equation represents a generalized circle in the plane when r is real, which occurs when AD < BC so that the squared radius r² = (BC − AD)/A² is positive. When A is zero, the equation defines a straight line.
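Under the coefficient form written above (Azz̄ + Bz + Cz̄ + D = 0, with A and D real and C the conjugate of B), converting between a center–radius description and the four coefficients takes only a few lines of Python; the particular circle used below is an arbitrary example:

def circle_to_coeffs(center: complex, radius: float):
    # (z - g)(conj(z) - conj(g)) = r^2  ->  A z conj(z) + B z + C conj(z) + D = 0
    A = 1.0
    B = -center.conjugate()
    C = -center
    D = abs(center) ** 2 - radius ** 2
    return A, B, C, D

def coeffs_to_circle(A, B, C, D):
    # Recover center and radius; A == 0 means the equation describes a straight line.
    if A == 0:
        return None
    center = -C / A
    r_squared = ((B * C).real - A * D) / (A * A)
    return center, r_squared ** 0.5   # real only when BC > AD

A, B, C, D = circle_to_coeffs(2 + 1j, 3.0)
print(coeffs_to_circle(A, B, C, D))   # ((2+1j), 3.0) up to rounding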
Complex reciprocal
That the reciprocal transformation z ↦ 1/z maps generalized circles to generalized circles is straightforward to verify:
Lines through the origin map to lines through the origin; lines not through the origin map to circles through the origin; circles through the origin map to lines not through the origin; and circles not through the origin map to circles not through the origin.
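The last of these cases can also be checked numerically; the following Python sketch (with an arbitrarily chosen center and radius) uses the image of |z − c| = r under z ↦ 1/z, which works out to be the circle with center c̄/(|c|² − r²) and radius r/||c|² − r²|, and verifies that sampled image points lie on it:

import cmath, math

def reciprocal_image_check(c=3 + 2j, r=1.5, samples=12):
    # Predicted image of the circle |z - c| = r under w = 1/z (valid when the
    # circle does not pass through the origin, i.e. |c| != r).
    k = abs(c) ** 2 - r ** 2
    new_center = c.conjugate() / k
    new_radius = abs(r / k)
    for t in range(samples):
        z = c + r * cmath.exp(2j * math.pi * t / samples)
        w = 1 / z
        assert abs(abs(w - new_center) - new_radius) < 1e-12
    print("all sampled images lie on the predicted circle")

reciprocal_image_check()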
Complex matrix representation
The defining equation of a generalized circle
Azz̄ + Bz + Cz̄ + D = 0
can be written as a matrix equation
(z̄  1) [A C; B D] (z  1)ᵀ = 0.
Symbolically,
v† ℭ v = 0,
with coefficients placed into an invertible hermitian matrix ℭ = [A C; B D] representing the circle, and v = (z, 1)ᵀ a vector representing an extended complex number.
Two such matrices specify the same generalized circle if and only if one is a scalar multiple of the other.
To transform the generalized circle represented by the matrix ℭ by the Möbius transformation z ↦ (az + b)/(cz + d), represented by the invertible matrix H = [a b; c d], apply the inverse of the Möbius transformation to the vector v in the implicit equation,
v† (H⁻¹)† ℭ H⁻¹ v = 0,
so the new circle can be represented by the matrix (H⁻¹)† ℭ H⁻¹.
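Assuming the packing of coefficients into the hermitian matrix used above (ℭ = [A C; B D] acting on v = (z, 1)ᵀ), the following NumPy sketch builds the matrix of an arbitrarily chosen circle, transforms it by an arbitrarily chosen Möbius transformation, and checks numerically that images of points on the original circle satisfy the transformed equation:

import numpy as np

def circle_matrix(center, radius):
    # Pack A z conj(z) + B z + C conj(z) + D = 0 as [[A, C], [B, D]],
    # so that conj(v) @ M @ v = 0 for v = (z, 1).
    A, B = 1.0, -np.conj(center)
    C, D = -center, abs(center) ** 2 - radius ** 2
    return np.array([[A, C], [B, D]], dtype=complex)

def moebius_image_matrix(M, H):
    # Image of the circle under z -> (a z + b)/(c z + d) with H = [[a, b], [c, d]].
    Hinv = np.linalg.inv(H)
    return Hinv.conj().T @ M @ Hinv

center, radius = 1 + 1j, 2.0
M = circle_matrix(center, radius)
H = np.array([[2, 1j], [1, 1]], dtype=complex)   # an arbitrary invertible matrix

Mp = moebius_image_matrix(M, H)
a, b, c, d = H.ravel()
for t in np.linspace(0, 2 * np.pi, 7, endpoint=False):
    z = center + radius * np.exp(1j * t)         # a point on the original circle
    w = (a * z + b) / (c * z + d)                # its image under the Moebius map
    v = np.array([w, 1.0])
    print(abs(np.conj(v) @ Mp @ v))              # ~0: the image lies on the new circle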
Notes
References
Hans Schwerdtfeger, Geometry of Complex Numbers, Courier Dover Publications, 1979
Michael Henle, "Modern Geometry: Non-Euclidean, Projective, and Discrete", 2nd edition, Prentice Hall, 2001
David W. Lyons (2021) Möbius Geometry from LibreTexts
Circles
Inversive geometry | Generalised circle | Mathematics | 790 |
3,197,327 | https://en.wikipedia.org/wiki/Chrysobalanus%20icaco | Chrysobalanus icaco, the cocoplum, paradise plum, abajeru or icaco, also called fat pork in Trinidad and Tobago, is a low shrub or bushy tree found near sea beaches and inland throughout tropical Africa, tropical Americas and the Caribbean, and in southern Florida and the Bahamas. An evergreen, it is also found as an exotic species on other tropical islands, where it has become a problematic invasive. Although taxonomists disagree on whether Chrysobalanus icaco has multiple subspecies or varieties, it is recognized as having two ecotypes, described as an inland, much less salt-tolerant, and more upright C. icaco var. pellocarpus and a coastal C. icaco var. icaco. Both the ripe fruit of C. icaco, and the seed inside the ridged shell it contains, are considered edible.
Description
Chrysobalanus icaco is a shrub , or bushy tree , rarely to . It has evergreen broad-oval to nearly round somewhat leathery leaves (3 to 10 cm long and 2.5 to 7 cm wide). Leaf colors range from green to light red. The bark is greyish or reddish brown, with white specks.
The clustered flowers are small, greenish-white, and appear intermittently throughout the year but more abundantly in late spring. The fruit that follows (a drupe) is variable, with that of the coastal form being round, up to 5 cm in diameter, white, pale-yellow with a rose blush or dark-purple in color, while that of the inland form is oval, up to 2.5 cm long, and dark-purple. The fruit is edible, with an almost tasteless to mildly sweet flavor, and is sometimes used for jam. It contains a five- or six-ridged brown stone with an edible white seed. The common name for this fruit in Barbados, Trinidad and Tobago and Guyana is "fat pork". The seed's kernel, ground into a powder and dried, is used as a spice (variously called gbafilo, itsekiri, umilo, emilo or omilo) as part of West African Pepper Soup Mix.
Chrysobalanus icaco is unable to survive a hard frost, but is planted as an ornamental shrub in subtropical regions due to its appearance, easily manageable size, and tolerance of shallow and variable soils (for example, as alkaline as pH 8.4) and partial shade. Several cultivars are available:
'Red Tip' is of the inland ecotype, and is the most commonly planted in Florida, often as a hedge. It is a chance occurrence that has pink new growth.
'Green Tip' is another example of the inland type that has green new growth.
'Horizontal' is of the coastal type, and tends to root wherever its creeping branches touch the ground, creating clumps over time that can help stabilize the soil. Combined with the high salt tolerance of the coastal ecotype, this characteristic means it can be planted to stabilize beach edges and prevent erosion.
Chrysobalanus icaco plays a role in traditional medicine in some parts of its native range, and has been the subject of scientific investigations that have provided evidence of hypoglycemic, antioxidant, antifungal and other pharmacological properties of the leaf extract.
Gallery
References
Bush, Charles S. and Morton, Julia F. (1969) Native Trees and Plants for Florida Landscaping (pp. 64–65). Bulletin No. 193. Department of Agriculture - State of Florida.
External links
Cocoplum at Virginia Tech Dendrology
Chrysobalanaceae
Halophytes
Tropical fruit
Trees of Africa
Flora of Central America
Flora of the Caribbean
Flora of Mexico
Flora of northern South America
Flora of Florida
Trees of Îles des Saintes
Fruits originating in Africa
Plants described in 1753
Taxa named by Carl Linnaeus
Garden plants of North America
Garden plants of Central America
Garden plants of South America
Flora without expected TNC conservation status | Chrysobalanus icaco | Chemistry | 824 |
3,146,241 | https://en.wikipedia.org/wiki/Focus-plus-context%20screen | A focus-plus-context screen is a specialized type of display device that consists of one or more high-resolution "focus" displays embedded into a larger low-resolution "context" display. Image content is displayed across all display regions, such that the scaling of the image is preserved, while its resolution varies across the display regions.
The original focus-plus-context screen prototype consisted of an 18"/45 cm LCD screen embedded in a 5'/150 cm front-projected screen. Alternative designs have been proposed that achieve the mixed-resolution effect by combining two or more projectors with different focal lengths.
While the high-resolution area of the original prototype was located at a fixed location, follow-up projects have obtained a movable focus area by using a Tablet PC.
Patrick Baudisch is the inventor of focus-plus-context screens (2000, while at Xerox PARC).
Advantages
Allows users to leverage their foveal and their peripheral vision
Cheaper to manufacture than a display that is high-resolution across the entire display surface
Displays entirety and details of large images in a single view. Unlike approaches that combine entirety and details in software (fisheye views), focus-plus-context screens do not introduce distortion.
Disadvantages
In existing implementations, the focus display is either fixed, or moving it is physically demanding
References
Notes
Yudhijit Bhattacharjee. In a Seamless Image, the Great and Small. In The New York Times, Thursday, March 14, 2002.
External links
Focus-plus-context screens homepage
User interfaces
Computer output devices
Display technology
User interface techniques | Focus-plus-context screen | Technology,Engineering | 323 |
216,928 | https://en.wikipedia.org/wiki/Wastewater | Wastewater (or waste water) is water generated after the use of freshwater, raw water, drinking water or saline water in a variety of deliberate applications or processes. Another definition of wastewater is "Used water from any combination of domestic, industrial, commercial or agricultural activities, surface runoff / storm water, and any sewer inflow or sewer infiltration". In everyday usage, wastewater is commonly a synonym for sewage (also called domestic wastewater or municipal wastewater), which is wastewater that is produced by a community of people.
As a generic term, wastewater may also describe water containing contaminants accumulated in other settings, such as:
Industrial wastewater: waterborne waste generated from a variety of industrial processes, such as manufacturing operations, mineral extraction, power generation, or water and wastewater treatment.
Cooling water, released with potential thermal pollution after use to condense steam or reduce machinery temperatures by conduction or evaporation.
Leachate: precipitation containing pollutants dissolved while percolating through ores, raw materials, products, or solid waste.
Return flow: the flow of water carrying suspended soil, pesticide residues, or dissolved minerals and nutrients from irrigated cropland.
Surface runoff: the flow of water occurring on the ground surface when excess rainwater, stormwater, meltwater, or water from other sources can no longer infiltrate the soil sufficiently rapidly.
Urban runoff, including water used for outdoor cleaning activity and landscape irrigation in densely populated areas created by urbanization.
Agricultural wastewater: animal husbandry wastewater generated from confined animal operations.
References | Wastewater | Chemistry,Engineering,Environmental_science | 311 |
69,216,480 | https://en.wikipedia.org/wiki/Catherine%20E.%20Costello | Catherine E. Costello is the William Fairfield Warren distinguished professor in the department of biochemistry, Cell Biology and Genomics, and the director of the Center for Biomedical Mass Spectrometry at the Boston University School of Medicine.
Education
Catherine E. Costello attended the Emmanuel College in Boston for her undergraduate studies in chemistry, and minors in mathematics and physics. She received a Master of Science (1967) and a PhD from Georgetown University (1971). After graduation, she did post-doctoral research with Klaus Biemann at Massachusetts Institute of Technology.
Career
Prior to founding the Center for Biomedical Mass Spectrometry at Boston University School of Medicine in 1994, Costello was a senior research scientist and the associate director of the National Institutes of Health Research Resource for Mass Spectrometry at Massachusetts Institute of Technology for 20 years. She is a William Fairfield Warren Distinguished Professor and the director of the Center for Biomedical Mass Spectrometry at the Boston University School of Medicine.
Costello served as the president of the American Society for Mass Spectrometry (2002–2004), the Human Proteome Organization (2011–2012), and the International Mass Spectrometry Foundation (2014–2018). She currently serves on the board of directors of the US Human Proteome Organization, and the editorial board of Clinical Proteomics.
Research
Her research involves structural characterization of biopolymers using mass spectrometry-based techniques, such as liquid chromatography-mass spectrometry, thin-layer chromatography-mass spectrometry, Fourier-transform ion cyclotron resonance mass spectrometry, matrix-assisted laser desorption/ionization time-of-flight mass spectrometry, microfluidic capillary electrophoresis-mass spectrometry, and ion mobility spectrometry-mass spectrometry. She was one of the first scientists to characterize glycoconjugates with tandem mass spectrometry. Her 1988 article has been cited over two thousand times. She participated in the Human Proteome Project, the SysteMHC Atlas project, and the Minimum Information Required for a Glycomics Experiment (MIRAGE) project.
Awards
2023 Analytical Scientist the Power List - Leaders and Advocates
2020 Society for Glycobiology Molecular and Cellular Proteomics (MCP) / American Society for Biochemistry and Molecular Biology (ASBMB) Lectureship Award
2019 inaugural winner of the US Human Proteome Organization Lifetime Achievement in Proteomics Award
2019 Analytical Scientist the Power List
2017 American Society for Mass Spectrometry John B. Fenn Award for a Distinguished Contribution in Mass Spectrometry
2016 American Association for the Advancement of Science Fellow
2015 Human Proteome Organization Distinguished Service Award
2015 German Mass Spectrometry Society (Deutsche Gesellschaft für Massenspektrometrie, DGMS) Wolfgang Paul Lecture
2013 Boston University The William Fairfield Warren Distinguished Professorship
2011 American Chemical Society Fellow
2010 American Chemical Society Frank H. Field and Joe L. Franklin Award for Outstanding Achievement in Mass Spectrometry
2009 International Mass Spectrometry Foundation Thomson Medal
2008 Human Proteome Organization Discovery in Proteomic Sciences Award
Awards in her honor
US Human Proteome Organization Catherine E. Costello Lifetime Achievement in Proteomics Award (from 2020)
Females in Mass Spectrometry Catherine E. Costello Award (from 2020)
References
External links
Georgetown University alumni
Emmanuel College (Massachusetts) alumni
Boston University School of Medicine faculty
Thomson Medal recipients
Living people
Mass spectrometrists
American women academics
American women biochemists
Year of birth missing (living people)
21st-century American women
Proteomics
Proteomics journals
Proteomics organizations
Biochemistry | Catherine E. Costello | Physics,Chemistry,Biology | 756 |
28,365,313 | https://en.wikipedia.org/wiki/Clitocybe%20brumalis | Clitocybe brumalis, commonly known as the winter funnel cap, brumalis signifying "wintry", is an inedible mushroom of the genus Clitocybe. It grows in deciduous and coniferous woodland, only in winter; sometimes even under snow.
Description
The cap is convex or umbilicate when young, soon funnel-shaped. Pale when moist, with a weakly translucent and striped margin, almost white when dry, it grows up to 5 cm in diameter. The gills are dirty white, crowded and a little decurrent. The spores are also white. The stem is pale brown, striped and soon hollow, with a white, felty base. The flesh is dirty brown.
Similar species
Several species growing in autumn look very similar and are difficult to distinguish without a microscope.
References
E. Garnweidner. Mushrooms and Toadstools of Britain and Europe. Collins. 1994.
External links
brumalis
Fungi described in 1872
Fungi of Europe
Taxa named by Elias Magnus Fries
Fungus species | Clitocybe brumalis | Biology | 208 |
66,631,533 | https://en.wikipedia.org/wiki/Kevin%20Kendall | Kevin Kendall FRS is a British physicist who received a London external BSc degree at Salford CAT in 1965 while working as an engineering apprentice at Joseph Lucas Gas Turbine Ltd. He became interested in surface science during his Ph.D. study in the Cavendish Laboratory and devised a novel method for measuring the true contact area between solids using an ultrasonic transmission. That led to new arguments about the adhesion of contacting solids, giving a theory of adhesion and fracture that applies to a wide range of problems of high industrial significance, especially in the chemical industry where fine particles stick together tenaciously. His book Crack Control published by Elsevier summarizes many of these applications.
Education
Kendall first went to school at St Edwards Darwen but when his mother Margaret died in 1950 the family moved to Accrington near his father Cyril's work at Joseph Lucas Gas Turbine Ltd. On passing the eleven plus exam at St Annes Accrington in 1955 he studied at St. Mary's College, Blackburn, completing his A levels in 1961. Cyril died in 1960 so Joseph Lucas offered Kevin a student apprenticeship in Physics at Salford CAT. His external degree followed in 1965, allowing him to do one year of R&D work on rocket modelling before leaving for Pembroke College Cambridge in October 1966. Three years of study at the Cavendish Laboratory in Free School Lane was successful in analyzing the transmission of ultrasonic waves through metal and other contacts. He received his Doctor of Philosophy from the University of Cambridge in 1970 under the supervision of David Tabor.
Career
In 1969, Kendall joined British Railways Research on London Road, Derby where the new Advanced Passenger Train (APT) was being developed, requiring industrial development of wheel-to-rail adhesion and corrosion problems. While studying the adhesion of nano-particles generated from corroding iron brake-block dust, he found that the standard pull-off testing methods gave large errors and published his first paper to show that crack theory must be used to analyze these adhesion measurements just as Griffith had postulated for glass-cracks in 1920. This coincided with a collaboration linking Ken Johnson and Alan Roberts in the Engineering Department at Cambridge University on the adhesion of elastic spheres. Roberts had performed experiments on the contact and surface attraction of optically smooth rubber spheres during his doctoral studies, while Johnson had solved the stress field problem twelve years earlier. But Johnson had not applied Griffith's energy-equilibrium condition. Kendall produced the mathematical answer in a couple of hours on 11 April 1970, fitting the experimental results reasonably well. The joint paper was published in 1971, one of the most highly cited papers in Royal Society Proceedings A.
This breakthrough in understanding adhesion problems allowed Kendall to take four years out of the industry, first at Monash University as QEII fellow from 1972 and then at Akron University during 1975, supervised by Alan Gent, who co-founded the Adhesion Society in the USA during 1978 because of the widening applications of adhesive and composite materials. It was during this period from 1972 to 1975 that Kendall solved several long-standing problems of composite materials:
Why are composites like Fiberglass tougher than their brittle components, e.g. glass and polymer?
How does a crack deflect along a brittle interface?
Strength of a lap joint does not exist; lap joints have been known for 5000 years but the solution to lap failure was only found in 1975.
The difficulty of industry R&D is that there is no time between inventing, patenting, and commercializing to analyze the science properly, so it was not until 1997, when Kendall took a sabbatical in Australia, that he found the opportunity to summarize these findings in his first book 'The Sticky Universe'. Unfortunately, misapprehensions, errors, and anachronisms in science last for centuries and there has been little change in engineering courses and ASTM standards in this millennium to make necessary adjustments in faulty fracture text-books, as recounted in recent conferences that demonstrated 'strength of brittle materials' always varies with the size of the samples being tested and so has little meaning, overriding Galileo's original definition from 1638.
Kendall believed that industry was the main source of technological advancement and joined the Colloid & Interface Science Group at Imperial Chemical Industries (ICI) in Runcorn to invent new processes and materials. Several patents arose from his new process for mixing cement, using about 1% of polymer additive to make a novel low porosity product with ten times the strength of standard mortar and five times the toughness. This eventually led to improved ceramic processing giving better superconductors and fuel cells among numerous other applications. He and the ICI group received the Ambrose Congreve award for this invention because the energy crisis was intense and new low energy materials and processing were needed.
Another discovery in the 1970s was the limit of grinding fine particles in ball mills, a phenomenon that has been observed for millennia. When grinding limestone in a mill, the particles are reduced in size to a few micrometers, then go no finer. This limit was explained by studying cracks in smaller samples until the crack would fail to extend because plastic flow intervened.
Kendall was awarded the Adhesion Society award for excellence in 1998.
He returned to the industry after starting the spin-out company Adelan in 1996 and is CTO since 2021. The mission is to replace combustion with hydrogen-fuel-cell power generation to avoid climate crisis.
Research in Universities
During 1989, when ICI decided to focus its business on pharmaceuticals and drop its research in carbon fibers and other advanced materials, Kendall took early retirement and joined his long-time colleague Derek Birchall at Keele University collaborating with the ceramics institution Ceram Research in 1993. The patents on ceramic processing were used to develop new products, especially Solid Oxide Fuel Cells (SOFCs) that are expected to grow in market size to $1.4 bn by 2025. Kendall's invention of fine cell tubes allowed rapid start-up and led to many academic papers and two books that were highly cited. Kendall moved to the University of Birmingham in 2000 and built a substantial group in Chemical Engineering working on hydrogen and fuel cells. He and his colleagues, Prof. Dr. Bruno Georges Pollet and Dr Waldemar Bujalski opened the first UK green-hydrogen station refueling five fuel-cell-battery-taxis in 2008 and has continued since his retiring from teaching in 2011 to encourage city/industry leadership in clean-energy transport, not achievable by academics, linking with Asia where the growing car population nearing 1 billion is a desperate problem. He was first in showing that the hydrogen fuel cell vehicle used 50% less energy than a comparable combustion car. Meanwhile, Kendall was applying his adhesion ideas to cancer cells, viruses, and nano-particles. According to Google Scholar, his works have been cited on more than 27,000 occasions, unusual for an industrial researcher.
He was elected Fellow of the Royal Society in 1993. He continues to push forward the green hydrogen revolution, running a fleet of hydrogen-fuel-cell battery vehicles in the Birmingham Clean Air Zone.
References
External links
A public lecture by Prof. Kevin Kendall from University of Birmingham, UK
Fellows of the Royal Society
British mechanical engineers
Tribologists
Alumni of the University of Cambridge
Living people
Year of birth missing (living people) | Kevin Kendall | Materials_science | 1,470 |
33,183,306 | https://en.wikipedia.org/wiki/Fei%E2%80%93Ranis%20model%20of%20economic%20growth | The Fei–Ranis model of economic growth is a dualism model in developmental economics or welfare economics that has been developed by John C. H. Fei and Gustav Ranis and can be understood as an extension of the Lewis model. It is also known as the Surplus Labor model. It recognizes the presence of a dual economy comprising both the modern and the primitive sector and takes the economic situation of unemployment and underemployment of resources into account, unlike many other growth models that consider underdeveloped countries to be homogenous in nature. According to this theory, the primitive sector consists of the existing agricultural sector in the economy, and the modern sector is the rapidly emerging but small industrial sector. Both the sectors co-exist in the economy, wherein lies the crux of the development problem. Development can be brought about only by a complete shift in the focal point of progress from the agricultural to the industrial economy, such that there is augmentation of industrial output. This is done by transfer of labor from the agricultural sector to the industrial one, showing that underdeveloped countries do not suffer from constraints of labor supply. At the same time, growth in the agricultural sector must not be negligible and its output should be sufficient to support the whole economy with food and raw materials. Like in the Harrod–Domar model, saving and investment become the driving forces when it comes to economic development of underdeveloped countries.
Basics of the model
One of the biggest drawbacks of the Lewis model was the undermining of the role of agriculture in boosting the growth of the industrial sector. In addition to that, he did not acknowledge that the increase in productivity of labor should take place prior to the labor shift between the two sectors. However, these two ideas were taken into account in the Fei–Ranis dual economy model of three growth stages. They further argue that the Lewis model lacks a proper application of concentrated analysis to the change that takes place with agricultural development.
In Phase 1 of the Fei–Ranis model, the elasticity of the agricultural labor work-force is infinite and, as a result, the sector suffers from disguised unemployment. Also, the marginal product of labor is zero. This phase is similar to the Lewis model. In Phase 2 of the model, the agricultural sector sees a rise in productivity and this leads to increased industrial growth such that a base for the next phase is prepared. In Phase 2, agricultural surplus may exist, as the increasing average product (AP) is higher than the marginal product (MP) and not equal to the subsistence level of wages.
Using the help of the figure on the left, we see that
According to Fei and Ranis, AD amount of labor (see figure) can be shifted from the agricultural sector without any fall in output. Hence, it represents surplus labor.
After AD, MP begins to rise, and industrial labor rises from zero to a value equal to AD. AP of agricultural labor is shown by BYZ and we see that this curve falls downward after AD. This fall in AP can be attributed to the fact that as agricultural laborers shift to the industrial sector, the real wage of industrial laborers decreases due to shortage of food supply, since less laborers are now working in the food sector. The decrease in the real wage level decreases the level of profits, and the size of surplus that could have been re-invested for more industrialization. However, as long as surplus exists, growth rate can still be increased without a fall in the rate of industrialization. This re-investment of surplus can be graphically visualized as the shifting of MP curve outwards. In Phase2 the level of disguised unemployment is given by AK. This allows the agricultural sector to give up a part of its labor-force until
Phase 3 begins from the point of commercialization which is at K in the Figure. This is the point where the economy becomes completely commercialized in the absence of disguised unemployment. The supply curve of labor in Phase 3 is steeper and both the sectors start bidding equally for labor.
The amount of labor that is shifted and the time that this shifting takes depends upon:
The growth of surplus generated within the agricultural sector, and the growth of industrial capital stock dependent on the growth of industrial profits;
The nature of the industry's technical progress and its associated bias;
Growth rate of population.
So, the three fundamental ideas used in this model are:
Agricultural growth and industrial growth are both equally important;
Agricultural growth and industrial growth are balanced;
Only if the rate at which labor is shifted from the agricultural to the industrial sector is greater than the rate of growth of population will the economy be able to lift itself up from the Malthusian population trap.
This shifting of labor can take place by the landlords' investment activities and by the government's fiscal measures. However, the cost of shifting labor in terms of both private and social cost may be high, for example transportation cost or the cost of carrying out construction of buildings. In addition to that, per capita agricultural consumption can increase, or there can exist a wide gap between the wages of the urban and the rural people. These three occurrences- high cost, high consumption and high gap in wages, are called as leakages, and leakages prevent the creation of agricultural surplus. In fact, surplus generation might be prevented due to a backward-sloping supply curve of labor as well, which happens when high income-levels are not consumed. This would mean that the productivity of laborers with rise in income will not rise. However, the case of backward-sloping curves is mostly unpractical.
Connectivity between sectors
Fei and Ranis emphasized strongly on the industry-agriculture interdependency and said that a robust connectivity between the two would encourage and speedup development. If agricultural laborers look for industrial employment, and industrialists employ more workers by use of larger capital good stock and labor-intensive technology, this connectivity can work between the industrial and agricultural sector. Also, if the surplus owner invests in that section of industrial sector that is close to soil and is in known surroundings, he will most probably choose that productivity out of which future savings can be channelized. They took the example of Japan's dualistic economy in the 19th century and said that connectivity between the two sectors of Japan was heightened due to the presence of a decentralized rural industry which was often linked to urban production. According to them, economic progress is achieved in dualistic economies of underdeveloped countries through the work of a small number of entrepreneurs who have access to land and decision-making powers and use industrial capital and consumer goods for agricultural practices.
Agricultural sector
In (A), land is measured on the vertical axis, and labor on the horizontal axis. Ou and Ov represent two ridge lines, and the production contour lines are depicted by M, M1 and M2. The area enclosed by the ridge lines defines the region of factor substitutability, or the region where factors can easily be substituted. Let us understand the repercussions of this. If te amount of labor is the total labor in the agricultural sector, the intersection of the ridge line Ov with the production curve M1 at point s renders M1 perfectly horizontal below Ov. The horizontal behavior of the production line implies that outside the region of factor substitutability, output stops and labor becomes redundant once land is fixed and labor is increased.
If Ot is the total land in the agricultural sector, ts amount of labor can be employed without it becoming redundant, and es represents the redundant agricultural labor force. This led Fei and Ranis to develop the concept of Labor Utilization Ratio, which they define as the units of labor that can be productively employed (without redundancy) per unit of land. In the left-side figure, the labor utilization ratio is ts/Ot, which is graphically equal to the inverted slope of the ridge line Ov.
Fei and Ranis also built the concept of endowment ratio, which is a measure of the relative availability of the two factors of production. In the figure, if Ot represents agricultural land and tE represents agricultural labor, then the endowment ratio is given by tE/Ot, which is equal to the inverted slope of OE.
The actual point of endowment is given by E.
Finally, Fei and Ranis developed the concept of non-redundancy coefficient T, which is measured by T = ts/tE.
These three concepts helped them in formulating a relationship between T, R and S. If the labor utilization ratio is written as R = ts/Ot and the endowment ratio as S = tE/Ot, then
T = ts/tE = R/S.
This mathematical relation proves that the non-redundancy coefficient is directly proportional to labor utilization ratio and is inversely proportional to the endowment ratio.
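Using purely hypothetical figures for land (Ot), total agricultural labor (tE) and productively employable labor (ts), the relationship can be illustrated in a few lines of Python:

# Hypothetical endowments: Ot units of land, tE units of total agricultural
# labor, of which only ts can be employed without redundancy.
Ot, tE, ts = 100.0, 80.0, 60.0

R = ts / Ot    # labor utilization ratio: productive labor per unit of land
S = tE / Ot    # endowment ratio: total labor per unit of land
T = ts / tE    # non-redundancy coefficient

# T rises with the utilization ratio and falls with the endowment ratio.
print(T, R / S)    # both print 0.75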
(B) displays the total physical productivity of labor (TPPL) curve. The curve increases at a decreasing rate, as more units of labor are added to a fixed amount of land. At point N, the curve shapes horizontally and this point N conforms to the point G in (C, which shows the marginal productivity of labor (MPPL) curve, and with point s on the ridge line Ov in (A).
Industrial sector
Like in the agricultural sector, Fei and Ranis assume constant returns to scale in the industrial sector. However, the main factors of production are capital and labor. In the graph (A) right hand side, the production functions have been plotted taking labor on the horizontal axis and capital on the vertical axis. The expansion path of the industrial sector is given by the line OAoA1A2. As capital increases from Ko to K1 to K2 and labor increases from Lo to L1 and L2, the industrial output represented by the production contour Ao, A1 and A3 increases accordingly.
According to this model, the prime labor supply source of the industrial sector is the agricultural sector, due to redundancy in the agricultural labor force. (B) shows the labor supply curve for the industrial sector S. PP2 represents the straight line part of the curve and is a measure of the redundant agricultural labor force on a graph with industrial labor force on the horizontal axis and output/real wage on the vertical axis. Due to the redundant agricultural labor force, the real wages remain constant but once the curve starts sloping upwards from point P2, the upward sloping indicates that additional labor would be supplied only with a corresponding rise in the real wages level.
MPPL curves corresponding to their respective capital and labor levels have been drawn as Mo, M1, M2 and M3. When capital stock rises from Ko to K1, the marginal physical productivity of labor rises from Mo to M1. When capital stock is Ko, the MPPL curve cuts the labor supply curve at equilibrium point Po. At this point, the total real wage income is Wo and is represented by the shaded area POLoPo. λ is the equilibrium profit and is represented by the shaded area qPPo. Since the laborers have extremely low income-levels, they barely save from that income and hence industrial profits (πo) become the prime source of investment funds in the industrial sector.
Here, Kt = πo + So gives the total supply of investment funds (given that rural savings are represented by So).
Total industrial activity rises due to increase in the total supply of investment funds, leading to increased industrial employment.
Agricultural surplus
Agricultural surplus in general terms can be understood as the produce from agriculture which exceeds the needs of the society for which it is being produced, and may be exported or stored for future use.
Generation of agricultural surplus
To understand the formation of agricultural surplus, we must refer to graph (B) of the agricultural sector. The figure on the left is a reproduced version of a section of the previous graph, with certain additions to better explain the concept of agricultural surplus.
We first derive the average physical productivity of the total agricultural labor force (APPL). Fei and Ranis hypothesize that it is equal to the real wage and this hypothesis is known as the constant institutional wage hypothesis. It is also equal in value to the ratio of total agricultural output to the total agricultural population. Using this relation, we can obtain APPL = MP/OP. This is graphically equal to the slope of line OM, and is represented by the line WW in (C).
Observe point Y, somewhere to the left of P on the graph. If a section of the redundant agricultural labor force (PQ) is removed from the total agricultural labor force (OP) and absorbed into the industrial sector, then the labor force remaining in the industrial sector is represented by the point Y. Now, the output produced by the remaining labor force is represented by YZ and the real income of this labor force is given by XY. The difference of the two terms yields the total agricultural surplus of the economy. This surplus is produced by the reallocation of labor such that it is absorbed by the industrial sector. This can be seen as deployment of hidden rural savings for the expansion of the industrial sector. Hence, we can understand the contribution of the agricultural sector to the expansion of industrial sector by this allocation of redundant labor force and the agricultural surplus that results from it.
Agricultural surplus as wage fund
Agricultural surplus plays a major role as a wage fund. Its importance can be better explained with the help of the graph on the right, which is an integration of the industrial sector graph with an inverted agricultural sector graph, such that the origin of the agricultural sector falls on the upper-right corner. This inversion of the origin changes the way the graph is now perceived. While the labor force values are read from the left of 0, the output values are read vertically downwards from O. The sole reason for this inversion is for the sake of convenience. The point of commercialization as explained before (See Section on Basics of the model) is observed at point R, where the tangent to the line ORX runs parallel to OX.
Before a section of the redundant labor force is absorbed into the industrial sector, the entire labor OA is present in the agricultural sector. Once AG amount of labor force (say) is absorbed, it is represented by OG' in the industrial sector, and the labor remaining in the agricultural sector is then OG. But how is the quantity of labor absorbed into the industrial sector determined? (A) shows the supply curve of labor SS' and several demand curves for labor df, d'f' and d"f". When the demand for labor is df, the intersection of the demand-supply curves gives the equilibrium employment point G'. Hence OG' represents the amount of labor absorbed into the industrial sector. In that case, the labor remaining in the agricultural sector is OG. This OG amount of labor produces an output of GF, out of which GJ amount is consumed by the agricultural sector and JF is the agricultural surplus for that level of employment. Simultaneously, the unproductive labor force from the agricultural sector turns productive once it is absorbed by the industrial sector, and produces an output of OG'Pd as shown in the graph, earning a total wage income of OG'PS.
The agricultural surplus JF created is needed for consumption by the same workers who left for the industrial sector. Hence, agriculture successfully provides not only the manpower for production activities elsewhere, but also the wage fund required for the process.
Significance of agriculture in the Fei–Ranis model
The Lewis model is criticised on the grounds that it neglects agriculture. Fei–Ranis model goes a step beyond and states that agriculture has a very major role to play in the expansion of the industrial sector. In fact, it says that the rate of growth of the industrial sector depends on the amount of total agricultural surplus and on the amount of profits that are earned in the industrial sector. So, larger the amount of surplus and the amount of surplus put into productive investment and larger the amount of industrial profits earned, the larger will be the rate of growth of the industrial economy. As the model focuses on the shifting of the focal point of progress from the agricultural to the industrial sector, Fei and Ranis believe that the ideal shifting takes place when the investment funds from surplus and industrial profits are sufficiently large so as to purchase industrial capital goods like plants and machinery. These capital goods are needed for the creation of employment opportunities. Hence, the condition put by Fei and Ranis for a successful transformation is that
Rate of increase of capital stock & rate of employment opportunities > Rate of population growth
The indispensability of labor reallocation
As an underdeveloped country goes through its development process, labor is reallocated from the agricultural to the industrial sector. More the rate of reallocation, faster is the growth of that economy. The economic rationale behind this idea of labor reallocation is that of faster economic development. The essence of labor reallocation lies in Engel's Law, which states that the proportion of income being spent on food decreases with increase in the income-level of an individual, even if there is a rise in the actual expenditure on food. For example, if 90 per cent of the entire population of the concerned economy is involved in agriculture, that leaves just 10 per cent of the population in the industrial sector. As the productivity of agriculture increases, it becomes possible for just 35 per cent of population to maintain a satisfactory food supply for the rest of the population. As a result, the industrial sector now has 65 per cent of the population under it. This is extremely desirable for the economy, as the growth of industrial goods is subject to the rate of per capita income, while the growth of agricultural goods is subject only to the rate of population growth, and so a bigger labor supply to the industrial sector would be welcome under the given conditions. In fact, this labor reallocation becomes necessary with time since consumers begin to want more of industrial goods than agricultural goods in relative terms.
However, Fei and Ranis were quick to mention that the necessity of labor reallocation must be linked more to the need to produce more capital investment goods as opposed to the thought of industrial consumer goods following the discourse of Engel's Law. This is because the assumption that the demand for industrial goods is high seems unrealistic, since the real wage in the agricultural sector is extremely low and that hinders the demand for industrial goods. In addition to that, low and mostly constant wage rates will render the wage rates in the industrial sector low and constant. This implies that demand for industrial goods will not rise at a rate as suggested by the use of Engel's Law.
Since the growth process will observe a slow-paced increase in the consumer purchasing power, the dualistic economies follow the path of natural austerity, which is characterized by more demand and hence importance of capital good industries as compared to consumer good ones. However, investment in capital goods comes with a long gestation period, which drives the private entrepreneurs away. This suggests that in order to enable growth, the government must step in and play a major role, especially in the initial few stages of growth. Additionally, the government also works on the social and economic overheads by the construction of roads, railways, bridges, educational institutions, health care facilities and so on.
Growth without development
In the Fei-Ranis model, it is possible that as technological progress takes place and there is a shift to labor-saving production techniques, growth of the economy takes place with increase in profits but no economic development takes place. This can be explained well with the help of graph in this section.
The graph displays two MPL lines plotted with real wage and MPL on the vertical axis and employment of labor on the horizontal axis. OW denotes the subsistence wage level, which is the minimum wage level at which a worker (and his family) would survive. The line WW' running parallel to the X-axis is considered to be infinitely elastics since supply of labor is assumed to be unlimited at the subsistence-wage level. The square area OWEN represents the wage bill and DWE represents the surplus or the profits collected. This surplus or profit can increase if the MPL curve changes.
If the MPL curve changes from MPL1 to MPL2 due to a change in production technique, such that it becomes labor-saving or capital-intensive, then the surplus or profit collected would increase. This increase can be seen by comparing DWE with D1WE since D1WE since is greater in area compared to DWE. However, there is no new point of equilibrium and as E continues to be the point of equilibrium, there is no increase in the level of labor employment, or in wages for that matter. Hence, labor employment continues as ON and wages as OW. The only change that accompanies the change in production technique is the one in surplus or profits.
This makes for a good example of a process of growth without development, since growth takes place with increase in profits but development is at a standstill since employment and wages of laborers remain the same.
Reactions to the model
The Fei–Ranis model of economic growth has been criticized on multiple grounds, although if the model is accepted, then it will have significant theoretical and policy implications for the underdeveloped countries' efforts towards development and on the persisting controversial statements regarding the balanced vs. unbalanced growth debate.
It has been asserted that Fei and Ranis did not have a clear understanding of the sluggish economic situation prevailing in the developing countries. If they had thoroughly scrutinized the existing nature and causes of it, they would have found that the existing agricultural backwardness was due to the institutional structure, primarily the system of feudalism that prevailed.
Fei and Ranis say, "It has been argued that money is not a simple substitute for physical capital in an aggregate production function. There are reasons to believe that the relationship between money and physical capital could be complementary to one another at some stage of economic development, to the extent that credit policies could play an important part in easing bottlenecks on the growth of agriculture and industry." This indicates that in the process of development they neglect the role of money and prices. They fail to differentiate between wage labor and household labor, which is a significant distinction for evaluating prices of dualistic development in an underdeveloped economy.
Fei and Ranis assume that MPPL is zero during the early phases of economic development, which has been criticized by Harry T. Oshima and some others on the grounds that MPPL is zero only if the agricultural population is very large, and if it is very large, some of that labor will shift to cities in search of jobs. In the short run, this section of labor that has shifted to the cities remains unemployed, but over the long run it is either absorbed by the informal sector, or it returns to the villages and attempts to bring more marginal land into cultivation. They have also neglected seasonal unemployment, which occurs due to seasonal change in labor demand and is not permanent.
To understand this better, we refer to the graph in this section, which shows Food on the vertical axis and Leisure on the horizontal axis. OS represents the subsistence level of food consumption, or the minimum level of food consumed by agricultural labor that is necessary for their survival. I0 and I1 are indifference curves between the two commodities of food and leisure (of the agriculturists). The origin falls on G, such that OG represents maximum labor and labor input would be measured from the right to the left.
The transformation curve SAG falls from A, which indicates that more leisure is being used with the same units of land. At A, the marginal rate of transformation between food and leisure is zero, so MPL = 0, and the indifference curve I0 is also tangent to the transformation curve at this point. This is the point of leisure satiation.
Consider a case where a laborer shifts from the agricultural to the industrial sector. In that case, the land left behind would be divided between the remaining laborers and as a result, the transformation curve would shift from SAG to RTG. Like at point A, MPL at point T would be 0 and APL would continue to be the same as that at A (assuming constant returns to scale). If we consider MPL = 0 as the point where agriculturalists live on the subsistence level, then the curve RTG must be flat at point T in order to maintain the same level of output. However, that would imply leisure satiation or leisure as an inferior good, which are two extreme cases. It can be surmised then that under normal cases, the output would decline with shift of labor to industrial sector, although the per capita output would remain the same. This is because, a fall in the per capita output would mean fall in consumption in a way that it would be lesser than the subsistence level, and the level of labor input per head would either rise or fall.
Berry and Soligo in their 1968 paper have criticized this model for its MPL=0 assumption, and for the assumption that the transfer of labor from the agricultural sector leaves the output in that sector unchanged in Phase 1. They show that the output changes, and may fall under various land tenure systems, unless the following situations arise:
1. Leisure falls under the inferior good category
2. Leisure satiation is present.
3. There is perfect substitutability between food and leisure, and the marginal rate of substitution is constant for all real income levels.
Now if MPL>0 then leisure satiation option becomes invalid, and if MPL=0 then the option of food and leisure as perfect substitutes becomes invalid. Therefore, the only remaining viable option is leisure as an inferior good.
While mentioning the important role of high agricultural productivity and the creation of surplus for economic development, they have failed to mention the need for capital as well. Although it is important to create surplus, it is equally important to maintain it through technical progress, which is possible through capital accumulation, but the Fei-Ranis model considers only labor and output as factors of production.
The question of whether MPL = 0 is that of an empirical one. The underdeveloped countries mostly exhibit seasonality in food production, which suggests that especially during favorable climatic conditions, say that of harvesting or sowing, MPL would definitely be greater than zero.
Fei and Ranis assume a closed model and hence there is no presence of foreign trade in the economy, which is very unrealistic as food or raw materials cannot be imported. If we take the example of Japan again, the country imported cheap farm products from other countries and this improved the country's terms of trade. Later they relaxed the assumption and said that the presence of a foreign sector was allowed as long as it was a "facilitator" and not the main driving force.
The reluctant expansionary growth in the industrial sector of underdeveloped countries can be attributed to the lagging growth in the productivity of subsistence agriculture. This suggests that increase in surplus becomes more important a determinant as compared to re-investment of surplus, an idea that was utilized by Jorgenson in his 1961 model that centered around the necessity of surplus generation and surplus persistence.
Stagnation has not been taken into consideration, and no distinction is made between labor through family and labor through wages. There is also no explanation of the process of self-sustained growth, or of the investment function. There is complete negligence of terms of trade between agriculture and industry, foreign exchange, money and price.
References
Welfare economics
Economics models
Duality theories
Economic growth
Development economics
Economic development policy
Economic systems | Fei–Ranis model of economic growth | Mathematics | 5,555 |
7,113,944 | https://en.wikipedia.org/wiki/Augmented%20cognition | Augmented cognition is an interdisciplinary area of psychology and engineering, attracting researchers from the more traditional fields of human-computer interaction, psychology, ergonomics and neuroscience. Augmented cognition research generally focuses on tasks and environments where human–computer interaction and interfaces already exist. Developers, leveraging the tools and findings of neuroscience, aim to develop applications which capture the human user's cognitive state in order to drive real-time computer systems. In doing so, these systems are able to provide operational data specifically targeted for the user in a given context. Three major areas of research in the field are: Cognitive State Assessment (CSA), Mitigation Strategies (MS), and Robust Controllers (RC). A subfield of the science, Augmented Social Cognition, endeavours to enhance the "ability of a group of people to remember, think, and reason."
History
In 1962 Douglas C. Engelbart released the report "Augmenting Human Intellect: A Conceptual Framework" which introduced, and laid the groundwork for, augmented cognition. In this paper, Engelbart defines "augmenting human intellect" as "increasing the capability of a man to approach a complex problem situation, to gain comprehension to suit his particular needs, and to derive solutions to problems."
Modern augmented cognition began to emerge in the early 2000s. Advances in cognitive, behavioral, and neurological sciences during the 1990s set the stage for the emerging field of augmented cognition – this period has been termed the "Decade of the Brain." Major advancements in functional magnetic resonance imaging (fMRI) and electroencephalography (EEG) have been pivotal in the emergence of augmented cognition technologies which seek to monitor the user's cognitive abilities. As these tools were primarily used in controlled environments, their further development was essential to pragmatic augmented cognition applications.
Research
DARPA's Augmented Cognition Program
The Defense Advanced Research Projects Agency (DARPA) has been one of the primary funding agencies for augmented cognition investigators. A major focus of DARPA's augmented cognition program (AugCog) has been developing more robust tools for monitoring cognitive state and integrating them with computer systems. The program envisions "order of magnitude increases in available, net thinking power resulting from linked human-machine dyads [that] will provide such clear informational superiority that few rational individuals or organizations would challenge under the consequences of mortality."
The program began in 2001, and has since been renamed the Improving Warfighter Information Intake Under Stress Program. By leveraging such tools, the program seeks to provide warfighters with enhanced cognitive abilities, especially under complex or stressful war conditions. As of 2002, the program vision is divided into four phases:
Phase 1: Real-time cognitive state detection
Phase 2: Real-time cognitive state manipulation
Phase 3: Autonomous cognitive state manipulation
Phase 4: Operation demonstration and transition
Proof of concept was carried out in two phases: near real time monitoring of the user's cognitive activity, and subsequent manipulation of the user's cognitive state.
Augmented Cognition International (ACI) Society
The Augmented Cognition International (ACI) Society held its first conference in July 2005. At the society's first conference, attendees from a diverse background including academia, government, and industry came together to create an agenda for future research. The agenda focused on near-, medium-, and long-term research and development goals in key augmented cognition science and technology areas. The International Conference on Human Computer Interaction, where the society first established itself, continues to host the society's activities.
Translation engines
Thad Starner, and the American Sign Language (ASL) Research Group at Georgia Tech, have been researching systems for the recognition of ASL. Telesign, a one-way translation system from ASL to English, was shown to have a 94% accuracy rate on a vocabulary with 141 signs.
Augmentation Factor
Ron Fulbright proposed the augmentation factor (A+) as a measure of the degree to which a human is cognitively enhanced by working in collaborative partnership with an artificial cognitive system (cog). If WH is the cognitive work performed by the human in a human-machine dyad, and WC is the cognitive work done by the cog, then A+ = WC/WH. In situations where a human is working alone without assistance, WC = 0, resulting in A+ = 0, meaning the human is not cognitively augmented at all. In situations where the human does more cognitive work than the cog, A+ < 1. In situations where the cog does more cognitive work than the human, A+ > 1. As cognitive systems continue to advance, A+ will increase. In situations where a cog performs all cognitive work without the assistance of a human, WH = 0 and A+ is undefined, meaning that attempting to calculate the augmentation factor is nonsensical since there is no human involved to be augmented.
Human/Cog Ensembles
Whereas DARPA's AugCog program focuses on human/machine dyads, it is possible for there to be more than one human and more than one artificial element involved. Human/Cog Ensembles involve one or more humans working with one or more cognitive systems (cogs). In a human/cog ensemble, the total amount of cognitive work performed by the ensemble, W*, is the sum of the cognitive work performed by each of the N humans in the ensemble plus the sum of the cognitive work performed by each of the M cognitive systems in the ensemble:
W* = Σ_{k=1}^{N} W_k^H + Σ_{k=1}^{M} W_k^C
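As an informal illustration of these definitions (a minimal sketch, not part of Fulbright's formulation; the function names and numbers below are hypothetical), the dyad factor A+ and the ensemble work W* can be computed as follows:

```python
from typing import Sequence

def augmentation_factor(w_human: float, w_cog: float) -> float:
    """A+ = WC / WH for a human-machine dyad; undefined when WH = 0,
    since there is then no human to be augmented."""
    if w_human == 0:
        raise ValueError("A+ is undefined when WH = 0 (no human in the loop)")
    return w_cog / w_human

def ensemble_work(human_work: Sequence[float], cog_work: Sequence[float]) -> float:
    """W*: sum of cognitive work over the N humans plus the M cogs in the ensemble."""
    return sum(human_work) + sum(cog_work)

print(augmentation_factor(10.0, 0.0))                 # 0.0 -> unaided human
print(augmentation_factor(10.0, 30.0))                # 3.0 -> cog does most of the work
print(ensemble_work([10.0, 8.0], [30.0, 25.0, 5.0]))  # 78.0 for 2 humans and 3 cogs
```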
Controversy
Privacy concerns
The increasing sophistication of brain-reading technologies has led many to investigate their potential applications for lie detection. Legally required brain scans arguably violate “the guarantee against self-incrimination” because they differ from acceptable forms of bodily evidence, such as fingerprints or blood samples, in an important way: they are not simply physical, hard evidence, but evidence that is intimately linked to the defendant's mind. Under US law, brain-scanning technologies might also raise implications for the Fourth Amendment, calling into question whether they constitute an unreasonable search and seizure.
Human augmentation
Many of the same arguments in the debate around human enhancement can be analogized to augmented cognition. Economic inequality, for instance, may serve to exacerbate societal advantages and disadvantages due to the limited availability of such technologies.
Fearing the potential applications of devices like Google Glass, certain gambling establishments (such as Caesar's Palace in Las Vegas) banned its use even before it was commercially available.
See also
Augmented reality
Intelligence amplification
Neuroergonomics
Human-computer interaction
Dylan Schmorrow
References
Further reading
Dylan Schmorrow, Ivy V. Estabrooke, Marc Grootjen: Foundations of Augmented Cognition. Neuroergonomics and Operational Neuroscience, 5th International Conference, FAC 2009 Held as Part of HCI International 2009 San Diego, CA, USA, July 19–24, 2009, Proceedings Springer 2009.
Fuchs, Sven, Hale, Kelly S., Axelsson, Par, "Augmented Cognition can increase human performance in the control room," Human Factors and Power Plants and HPRCT 13th Annual Meeting, 2007 IEEE 8th, vol., no., pp. 128–132, 26–31 Aug. 2007
Neuroscience
Ergonomics
Human–computer interaction
Cognition | Augmented cognition | Engineering,Biology | 1,498 |
33,162,584 | https://en.wikipedia.org/wiki/Weyl%20equation | In physics, particularly in quantum field theory, the Weyl equation is a relativistic wave equation for describing massless spin-1/2 particles called Weyl fermions. The equation is named after Hermann Weyl. The Weyl fermions are one of the three possible types of elementary fermions, the other two being the Dirac and the Majorana fermions.
None of the elementary particles in the Standard Model are Weyl fermions. Previous to the confirmation of the neutrino oscillations, it was considered possible that the neutrino might be a Weyl fermion (it is now expected to be either a Dirac or a Majorana fermion). In condensed matter physics, some materials can display quasiparticles that behave as Weyl fermions, leading to the notion of Weyl semimetals.
Mathematically, any Dirac fermion can be decomposed as two Weyl fermions of opposite chirality coupled by the mass term.
History
The Dirac equation was published in 1928 by Paul Dirac, and was first used to model spin-1/2 particles in the framework of relativistic quantum mechanics. Hermann Weyl published his equation in 1929 as a simplified version of the Dirac equation. In 1933, Wolfgang Pauli argued against Weyl's equation on the grounds that it violated parity. However, three years earlier, Pauli had predicted the existence of a new elementary fermion, the neutrino, to explain beta decay; the neutrino was eventually described using the Weyl equation.
In 1937, Conyers Herring proposed that Weyl fermions may exist as quasiparticles in condensed matter.
Neutrinos were experimentally observed in 1956 as particles with extremely small masses (and historically were even sometimes thought to be massless). The same year the Wu experiment showed that parity could be violated by the weak interaction, addressing Pauli's criticism. This was followed by the measurement of the neutrino's helicity in 1958. As experiments showed no signs of a neutrino mass, interest in the Weyl equation resurfaced. Thus, the Standard Model was built under the assumption that neutrinos were Weyl fermions.
While Italian physicist Bruno Pontecorvo had proposed in 1957 the possibility of neutrino masses and neutrino oscillations, it was not until 1998 that Super-Kamiokande eventually confirmed the existence of neutrino oscillations, and their non-zero mass. This discovery confirmed that Weyl's equation cannot completely describe the propagation of neutrinos, as the equations can only describe massless particles.
In 2015, the first Weyl semimetal was demonstrated experimentally in crystalline tantalum arsenide (TaAs) through the collaboration of the teams of M.Z. Hasan (Princeton University) and H. Ding (Chinese Academy of Sciences). Independently, the same year, M. Soljačić's team (Massachusetts Institute of Technology) also observed Weyl-like excitations in photonic crystals.
Equation
The Weyl equation comes in two forms. The right-handed form can be written as follows:
σ^μ ∂_μ ψ = 0
Expanding this equation, and inserting c for the speed of light, it becomes
(I_2/c) ∂ψ/∂t + σ_x ∂ψ/∂x + σ_y ∂ψ/∂y + σ_z ∂ψ/∂z = 0
where
σ^μ = (σ^0, σ^1, σ^2, σ^3) = (I_2, σ_x, σ_y, σ_z)
is a vector whose components are the 2×2 identity matrix I_2 for μ = 0 and the Pauli matrices for μ = 1, 2, 3, and ψ is the wavefunction – one of the Weyl spinors. The left-handed form of the Weyl equation is usually written as:
σ̄^μ ∂_μ ψ = 0
where
σ̄^μ = (I_2, −σ_x, −σ_y, −σ_z).
The solutions of the right- and left-handed Weyl equations are different: they have right- and left-handed helicity, and thus chirality, respectively. It is convenient to indicate this explicitly, as follows: σ^μ ∂_μ ψ_R = 0 and σ̄^μ ∂_μ ψ_L = 0.
Plane wave solutions
The plane-wave solutions to the Weyl equation are referred to as the left- and right-handed Weyl spinors, each with two components. Both have the form
ψ(r, t) = e^{i(k·r − ωt)} χ(k),
where
χ(k)
is a momentum-dependent two-component spinor which satisfies
(σ·k) χ = +|k| χ
or
(σ·k) χ = −|k| χ.
By direct manipulation, one obtains that
E = ħω = ħc|k| = c|p|,
and concludes that the equations correspond to a particle that is massless. As a result, the magnitude of momentum p relates directly to the wave-vector k by the de Broglie relations as:
p = ħk
The equation can be written in terms of left and right handed spinors as:
σ^μ ∂_μ ψ_R = 0 and σ̄^μ ∂_μ ψ_L = 0
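The masslessness and helicity statements above can be checked numerically. The following NumPy sketch (an illustration, with an arbitrarily chosen wave-vector) verifies that (σ·k)² = |k|² I, so the eigenvalues of σ·k are ±|k| as required by the two plane-wave conditions, and that the resulting dispersion E = ħc|k| satisfies E² − c²p² = 0, i.e. zero mass:

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

k = np.array([0.3, -1.2, 0.7])                     # arbitrary wave-vector (illustrative)
sigma_dot_k = k[0] * sx + k[1] * sy + k[2] * sz

# (sigma.k)^2 = |k|^2 I, hence eigenvalues +|k| and -|k| (the two helicities)
assert np.allclose(sigma_dot_k @ sigma_dot_k, np.dot(k, k) * np.eye(2))
assert np.allclose(np.sort(np.linalg.eigvalsh(sigma_dot_k)),
                   [-np.linalg.norm(k), np.linalg.norm(k)])

# Massless dispersion: E = hbar*c*|k| and p = hbar*|k| give E^2 - (c p)^2 = 0
hbar, c = 1.054571817e-34, 2.99792458e8
E = hbar * c * np.linalg.norm(k)
p = hbar * np.linalg.norm(k)
print(E**2 - (c * p)**2)                           # ~0 up to floating-point error
```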
Helicity
The left and right components correspond to the helicity λ of the particles, the projection of the angular momentum operator onto the linear momentum:
(p̂ · Ĵ) ψ = λ ψ, with λ = ±ħ/2.
Here the plus sign corresponds to the right-handed and the minus sign to the left-handed solution, and Ĵ = (ħ/2)σ is the spin operator acting on the two-component spinor.
Lorentz invariance
Both equations are Lorentz invariant under the Lorentz transformation x → x′ = Λx. More precisely, the equations transform as
where is the Hermitian transpose, provided that the right-handed field transforms as
The matrix is related to the Lorentz transform by means of the double covering of the Lorentz group by the special linear group given by
Thus, if the untransformed differential vanishes in one Lorentz frame, then it also vanishes in another. Similarly
provided that the left-handed field transforms as
Proof: Neither of these transformation properties are in any way "obvious", and so deserve a careful derivation. Begin with the form
for some unknown to be determined. The Lorentz transform, in coordinates, is
or, equivalently,
This leads to
In order to make use of the Weyl map
a few indexes must be raised and lowered. This is easier said than done, as it invokes the identity
where is the flat-space Minkowski metric. The above identity is often used to define the elements One takes the transpose:
to write
One thus regains the original form if that is, Performing the same manipulations for the left-handed equation, one concludes that
with
Relationship to Majorana
The Weyl equation is conventionally interpreted as describing a massless particle. However, with a slight alteration, one may obtain a two-component version of the Majorana equation. This arises because the special linear group is isomorphic to the symplectic group The symplectic group is defined as the set of all complex 2×2 matrices that satisfy
where
The defining relationship can be rewritten as where is the complex conjugate. The right handed field, as noted earlier, transforms as
and so the complex conjugate field transforms as
Applying the defining relationship, one concludes that
which is exactly the same Lorentz covariance property noted earlier. Thus, the linear combination, using an arbitrary complex phase factor
transforms in a covariant fashion; setting this to zero gives the complex two-component Majorana equation. The Majorana equation is conventionally written as a four-component real equation, rather than a two-component complex equation; the above can be brought into four-component form (see that article for details). Similarly, the left-chiral Majorana equation (including an arbitrary phase factor ) is
As noted earlier, the left and right chiral versions are related by a parity transformation. The skew complex conjugate can be recognized as the charge conjugate form of Thus, the Majorana equation can be read as an equation that connects a spinor to its charge-conjugate form. The two distinct phases on the mass term are related to the two distinct eigenvalues of the charge conjugation operator; see charge conjugation and Majorana equation for details.
Define a pair of operators, the Majorana operators,
where is a short-hand reminder to take the complex conjugate. Under Lorentz transformations, these transform as
whereas the Weyl spinors transform as
just as above. Thus, the matched combinations of these are Lorentz covariant, and one may take
as a pair of complex 2-spinor Majorana equations.
The products and are both Lorentz covariant. The product is explicitly
Verifying this requires keeping in mind that and that The RHS reduces to the Klein–Gordon operator provided that , that is These two Majorana operators are thus "square roots" of the Klein–Gordon operator.
Lagrangian densities
The equations are obtained from the Lagrangian densities
L = i ψ_R^† σ^μ ∂_μ ψ_R and L = i ψ_L^† σ̄^μ ∂_μ ψ_L.
By treating the spinor and its conjugate (denoted by ψ^†) as independent variables, the relevant Weyl equation is obtained.
Weyl spinors
The term Weyl spinor is also frequently used in a more general setting, as an element of a Clifford module. This is closely related to the solutions given above, and gives a natural geometric interpretation to spinors as geometric objects living on a manifold. This general setting has multiple strengths: it clarifies their interpretation as fermions in physics, and it shows precisely how to define spin in General Relativity, or, indeed, for any Riemannian manifold or pseudo-Riemannian manifold. This is informally sketched as follows.
The Weyl equation is invariant under the action of the Lorentz group. This means that, as boosts and rotations are applied, the form of the equation itself does not change. However, the form of the spinor itself does change. Ignoring spacetime entirely, the algebra of the spinors is described by a (complexified) Clifford algebra. The spinors transform under the action of the spin group. This is entirely analogous to how one might talk about a vector, and how it transforms under the rotation group, except that now, it has been adapted to the case of spinors.
Given an arbitrary pseudo-Riemannian manifold of dimension , one may consider its tangent bundle . At any given point the tangent space is a dimensional vector space. Given this vector space, one can construct the Clifford algebra on it. If are a vector space basis on , one may construct a pair of Weyl spinors as
and
When properly examined in light of the Clifford algebra, these are naturally anti-commuting, that is, one has that This can be happily interpreted as the mathematical realization of the Pauli exclusion principle, thus allowing these abstractly defined formal structures to be interpreted as fermions. For dimensional Minkowski space-time, there are only two such spinors possible, by convention labelled "left" and "right", as described above. A more formal, general presentation of Weyl spinors can be found in the article on the spin group.
The abstract, general-relativistic form of the Weyl equation can be understood as follows: given a pseudo-Riemannian manifold one constructs a fiber bundle above it, with the spin group as the fiber. The spin group is a double cover of the special orthogonal group , and so one can identify the spin group fiber-wise with the frame bundle over When this is done, the resulting structure is called a spin structure.
Selecting a single point on the fiber corresponds to selecting a local coordinate frame for spacetime; two different points on the fiber are related by a (Lorentz) boost/rotation, that is, by a local change of coordinates. The natural inhabitants of the spin structure are the Weyl spinors, in that the spin structure completely describes how the spinors behave under (Lorentz) boosts/rotations.
Given a spin manifold, the analog of the metric connection is the spin connection; this is effectively "the same thing" as the normal connection, just with spin indexes attached to it in a consistent fashion. The covariant derivative can be defined in terms of the connection in an entirely conventional way. It acts naturally on the Clifford bundle; the Clifford bundle is the space in which the spinors live. The general exploration of such structures and their relationships is termed spin geometry.
Mathematical definition
For even , the even subalgebra of the complex Clifford algebra is isomorphic to , where . A left-handed (respectively, right-handed) complex Weyl spinor in -dimensional space is an element of (respectively, ).
Special cases
There are three important special cases that can be constructed from Weyl spinors. One is the Dirac spinor, which can be taken to be a pair of Weyl spinors, one left-handed, and one right-handed. These are coupled together in such a way as to represent an electrically charged fermion field. The electric charge arises because the Dirac field transforms under the action of the complexified spin group This group has the structure
where is the circle, and can be identified with the of electromagnetism. The product is just fancy notation denoting the product with opposite points identified (a double covering).
The Majorana spinor is again a pair of Weyl spinors, but this time arranged so that the left-handed spinor is the charge conjugate of the right-handed spinor. The result is a field with two less degrees of freedom than the Dirac spinor. It is unable to interact with the electromagnetic field, since it transforms as a scalar under the action of the group. That is, it transforms as a spinor, but transversally, such that it is invariant under the action of the spin group.
The third special case is the ELKO spinor, constructed much as the Majorana spinor, except with an additional minus sign between the charge-conjugate pair. This again renders it electrically neutral, but introduces a number of other quite surprising properties.
Notes
References
Further reading
External links
http://aesop.phys.utk.edu/qft/2004-5/2-2.pdf
http://www.nbi.dk/~kleppe/random/ll/l2.html
http://www.tfkp.physik.uni-erlangen.de/download/research/DW-derivation.pdf
http://www.weylmann.com/weyldirac.pdf
Quantum mechanics | Weyl equation | Physics | 2,826 |
2,599,213 | https://en.wikipedia.org/wiki/Yellow%20rain | Yellow rain was a 1981 political incident in which the United States Secretary of State Alexander Haig accused the Soviet Union of supplying T-2 mycotoxin to the communist states in Vietnam, Laos and Cambodia for use in counterinsurgency warfare. Refugees described many different forms of "attacks", including a sticky yellow liquid falling from planes or helicopters, which was dubbed "yellow rain". The U.S. government alleged that over ten thousand people had been killed in attacks using these supposed chemical weapons. The Soviets denied these claims and an initial United Nations investigation was inconclusive.
Samples of the supposed chemical agent that were supplied to a group of independent scientists turned out to be honeybee feces, suggesting that the "yellow rain" was due to mass defecation of digested pollen grains from large swarms of bees. Although the majority of the scientific literature on this topic now regards the hypothesis that yellow rain was a Soviet chemical weapon as disproved, the U.S. government has not retracted its allegations, arguing that the issue has not been fully resolved. Many of the U.S. documents relating to this incident remain classified.
Allegations
The charges stemmed from events in Laos and North Vietnam beginning in 1975, when the two governments, which were allied with and supported by the Soviet Union, fought against Hmong tribes, peoples who had sided with the United States and South Vietnam during the Vietnam War. Refugees described events that they believed to be chemical warfare attacks by low-flying aircraft or helicopters; several of the reports were of a yellow, oily liquid that was dubbed "yellow rain". Those exposed claimed neurological and physical symptoms including seizures, blindness, and bleeding. Similar reports came from the Vietnamese invasion of Cambodia in 1978.
A 1997 textbook produced by the U.S. Army Medical Department asserted that over ten thousand people were killed in attacks using chemical weapons in Laos, Cambodia and Afghanistan. The descriptions of the attacks were diverse and included air-dropped canisters and sprays, booby traps, artillery shells, rockets and grenades that produced droplets of liquid, dust, powders, smoke or "insect-like" materials of a yellow, red, green, white or brown color.
Secretary of State Alexander Haig announced in September 1981 that:
For some time now, the international community has been alarmed by continuing reports that the Soviet Union and its allies have been using lethal chemical weapons in Laos, Kampuchea, and Afghanistan. ... We have now found physical evidence from Southeast Asia which has been analyzed and found to contain abnormally high levels of three potent mycotoxins—poisonous substances not indigenous to the region and which are highly toxic to man and animals.
The Soviet Union described these accusations as a "big lie" and said that the US government used chemical weapons during the Vietnam War and supplied them to Afghan rebels and Salvadoran troops. The American accusations prompted a United Nations investigation in Pakistan and Thailand. This involved five doctors and scientists who interviewed alleged witnesses and collected samples that were purported to come from Afghanistan and Cambodia. However, the interviews produced conflicting testimony and the analyses of the samples were inconclusive. The UN experts also examined two refugees who claimed to be suffering from the after-effects of a chemical attack, but the refugees were instead diagnosed as having fungal skin infections. The team reported that they were unable to verify that chemical weapons had been used, but noted circumstantial evidence "suggestive of the possible use of some sort of toxic chemical substance in some instances."
The US mycotoxin analyses, reported in the scientific literature in 1983 and 1984, found small amounts of mycotoxins called trichothecenes, ranging from the parts per million down to traces in the parts per billion range. The lowest possible limit of detection in these mycotoxin analyses is in the parts per billion range. However, several inconsistencies in these reports caused a "prolonged, and at times acrimonious, debate on the validity of the analyses". A 2003 medical review notes that this debate may have been exacerbated since "Although analytical methods were in their infancy during the controversy, they were still sensitive enough to pick up low levels of environmental trichothecene contamination."
Initial investigation
C. J. Mirocha at the University of Minnesota conducted a biochemical investigation, looking for the presence of trichothecene mycotoxins, including T-2 toxin, diacetoxyscirpenol (DAS), and deoxynivalenol (DON). This included chemical analyses of blood, urine, and tissue of alleged victims of chemical attacks in February 1982 in Laos and Kampuchea. "The finding of T-2, HT-2, and DAS toxins in blood, urine, and body tissues of alleged victims of chemical warfare in Southeast Asia provides compelling proof of the use of trichothecenes as nonconventional warfare agents. ... Additional significant findings lie in the trichothecenes found in the leaf samples (T-2, DON, nivalenol) and yellow powder (T-2, DAS). ... The most compelling evidence is the presence of T-2 and DAS in the yellow powder. Both toxins are infrequently found in nature and rarely occur together. In our experience, copious producers of T-2 toxin (F. tricinctum) do not produce DAS, and conversely, good producers of DAS (F. roseum 'Gibbosum') do not produce T-2."
Explanation
Honeybee hypothesis
In 1983, these charges were disputed by Harvard biologist and biological weapons opponent Matthew Meselson and his team, who traveled to Laos and conducted a separate investigation. Meselson's team noted that trichothecene mycotoxins occur naturally in the region and questioned the witness testimony. He suggested an alternate hypothesis that the yellow rain was the harmless fecal matter of honeybees. The Meselson team offered the following as evidence: separate "yellow rain drops" which occurred on the same leaf, and which were "accepted as authentic", consisted largely of pollen; each drop contained a different mix of pollen grains, as one would expect if they came from different bees, and the grains showed properties characteristic of pollen digested by bees (the protein inside the pollen grain was gone, while the outer indigestible shell remained). Further, the pollen mix came from plant species typical of the area where a drop was collected.
The US government responded to these findings by arguing that the pollen was added deliberately, in order to make a substance that could be easily inhaled and "ensure the retention of toxins in the human body". Meselson responded to this idea by stating that it was rather far-fetched to imagine that somebody would produce a chemical weapon by "gathering pollen predigested by honeybees." The fact that the pollen originated in Southeast Asia meant that the Soviet Union could not have manufactured the substance domestically, and would have had to import tons of pollen from Vietnam. Meselson's work was described in an independent medical review as providing "compelling evidence that yellow rain might have a benign natural explanation".
After the honeybee hypothesis was made public, a literature search turned up an earlier Chinese paper on the phenomenon of yellow droppings in Jiangsu Province in September 1976. Strikingly, the Chinese villagers had also used the term "yellow rain" to describe this phenomenon. Many villagers believed that the yellow droppings were portents of imminent earthquake activity. Others believed that the droppings were chemical weapons sprayed by the Soviet Union or Taiwan. However, the Chinese scientists also concluded that the droppings came from bees.
Mycotoxins
Analyses of putative "yellow rain" samples by the British, French and Swedish governments confirmed the presence of pollen and failed to find any trace of mycotoxins. Toxicology studies questioned the reliability of reports stating that mycotoxins had been detected in alleged victims up to two months after exposure, since these compounds are unstable in the body and are cleared from the blood in just a few hours. An autopsy on a Khmer Rouge fighter named Chan Mann, a victim of a putative yellow rain attack in 1982, turned up traces of mycotoxins, but also aflatoxin, Blackwater fever, and malaria.
Surveys also showed that both mycotoxin-producing fungi and mycotoxin contamination were common in Southeast Asia, casting doubt on the assertion that detecting these compounds was an unusual occurrence. For example, a Canadian military laboratory found mycotoxins in the blood of five people from the area who had never been exposed to yellow rain, out of 270 tested, but none in the blood of ten alleged victims, and a 1988 paper reported that illnesses from mycotoxin exposure may pose a serious threat to public health in Malaysia. It is now recognized that mycotoxin contamination of foods such as wheat and maize is a common problem, particularly in temperate regions of the world. As noted in a 2003 medical review, "The government research highlighted, if nothing else, that natural mycotoxicoses were an important health hazard in Southeast Asia."
Reliability of eyewitness accounts
In 1987, the New York Times reported that Freedom of Information requests showed that field investigations in 1983–85 by US government teams had produced no evidence to substantiate the initial allegations and instead cast doubt on the reliability of the initial reports, but these critical reports were not released to the public. A 1989 analysis of the initial reports gathered from Hmong refugees that was published in the Journal of the American Medical Association noted "marked inconsistencies that greatly compromised the validity of the testimony" and criticized the methods used in interviews by the US Army medical team that gathered this information. These issues included the US Army team only interviewing those people who claimed to have knowledge of attacks with chemical weapons and the investigators asking leading questions during interviews. The authors noted that individuals' stories changed over time, were inconsistent with other accounts, and that the people who claimed to have been eyewitnesses when first interviewed later stated that they had been relaying the accounts of others.
In 1982, Meselson had visited a Hmong refugee camp with samples of bee droppings that he had collected in Thailand. Most of the Hmong he interviewed claimed that these were samples of the chemical weapons that they had been attacked with. One man accurately identified them as insect droppings, but switched to the chemical weapons story after discussion with fellow Hmong.
Australian military scientist Rod Barton visited Thailand in 1984, and discovered that Thai villagers were blaming yellow rain for a variety of ailments, including scabies. An American doctor in Bangkok explained that the United States had been taking a special interest in yellow rain, and was providing medical care to alleged victims.
Possible U.S. origin
A CIA report from the 1960s reported allegations by the Cambodian government that their forces had been attacked with chemical weapons, leaving behind a yellow powder. The Cambodians blamed the United States for these alleged chemical attacks. Some of the samples of "yellow rain" collected from Cambodia in 1983 tested positive for CS, which the United States had used during the Vietnam War. CS is a form of tear gas and is not acutely toxic, but may account for some of the milder symptoms reported by the Hmong villagers.
Scientific conclusions and US claims
Most of the scientific community sees these allegations as supported by insufficient evidence, or as having been completely refuted. For instance, a 1992 review published in Politics and the Life Sciences described the idea of yellow rain as a biological agent as conclusively disproved and called for an assessment by the US government of the mistakes made in this episode, stating that "the present approach of sweeping the matter under the rug and hoping people will forget about it could be counterproductive." Similarly, a 1997 review of the history of biological warfare published in the Journal of the American Medical Association stated that the yellow rain allegations are "widely regarded as erroneous", a 2001 review in the Annual Reviews in Microbiology described them as "unsubstantiated for many reasons", and a 2003 article in Annual Review of Phytopathology described them as "largely discredited". A 2003 review of the history of biological warfare described these allegations as one of many cases where states have produced propaganda containing false or unsubstantiated accusations of the use of biological weapons by their enemies.
In contrast, as of 1997 the U.S. Army maintains that some experts believe that "trichothecenes were used as biological weapons in Southeast Asia and Afghanistan" although they write that "it has not been possible for the United States to prove unequivocally that trichothecene mycotoxins were used as biological weapons." They argued that presence of pollen in yellow rain samples is best explained by the idea that "during biological warfare attacks, dispersed trichothecenes landed in pollen-containing areas." (Essentially the same position is taken in a subsequent volume in the same series of U.S. Army textbooks published in 2007.) Similarly, the US Defense Threat Reduction Agency argues that the controversy has not been resolved and states that a CIA report indicated the Soviet Union did possess weapons based on T-2 mycotoxin, although the agency states that "no trace of a trichothecene-containing weapon was ever found in the areas affected by yellow rain" and concludes that the use of such weapons "may never be unequivocally proved." A 2007 review published in Politics and the Life Sciences concluded that the balance of evidence strongly supported the hypothesis that some type of chemical or biological weapon was used in Southeast Asia in the late 1970s and early 1980s, but noted that they found no definitive proof of this hypothesis and that the evidence could not "identify the specific agents used, the intent, or the root source or sources of the attacks." The Vietnamese and the Soviets have also reportedly used other chemical weapons in conflict, in Cambodia and Afghanistan, respectively.
Later events
India
An episode of mass pollen release from bees in 2002 in Sangrampur, India, prompted unfounded fears of a chemical weapons attack, although this was in fact due to a mass migration of giant Asian honeybees. This event revived memories of what New Scientist described as "cold war paranoia", and the article noted that the Wall Street Journal had covered these 1980s yellow rain allegations in particular detail. Indeed, the Wall Street Journal continues to assert that the Soviet Union used yellow rain as a chemical weapon in the 1980s and in 2003 accused Matthew Meselson of "excusing away evidence of Soviet violations."
Iraq
In the build-up to the 2003 invasion of Iraq the Wall Street Journal alleged that Saddam Hussein possessed a chemical weapon called "yellow rain". The Iraqis appear to have investigated trichothecene mycotoxins in 1990, but only purified a total of 20 ml of the agent from fungal cultures and did not manage to scale up the purification or produce any weapons containing these compounds. Although these toxins are not generally regarded as practical tactical weapons, the T-2 toxin might be a usable weapon since it can be absorbed through the skin, although it would be very difficult to manufacture it in any reasonable quantity.
Henry Wilde, a retired US Foreign Service Officer, has drawn parallels between the use of yellow rain allegations by the US government against the Soviet Union and the later exaggerated allegations on the topic of Iraq and weapons of mass destruction. Wilde considers it likely that states may again "use rumors and false or planted intelligence of such weapons use for propaganda purposes." and calls for the establishment of a more rigorous inspection process to deal with such claims. Similar concerns were expressed in a 2006 review published by the World Organisation for Animal Health, which compared the American yellow rain accusations to other Cold War-era accusations from the Soviet Union and Cuba, as well as to more recent mistaken intelligence on Iraqi weapons capabilities, concluding that such unjustified accusations have encouraged the development of biological weapons and increased the risk that they might be used, as they have discredited arms-control efforts.
Radiolab interview
In 2012 the science-themed show Radiolab aired an interview with Hmong refugee Eng Yang and his niece, author Kao Kalia Yang, to discuss Eng Yang's experience with yellow rain. The hosts took the position that yellow rain was unlikely to have been a chemical agent. The episode prompted a backlash among some listeners, who criticized host Robert Krulwich for insensitivity, racism, and disregard for Yang's personal and professional experience with the region in question. The negative response prompted Krulwich to issue an apology for his handling of the interview.
Bulgaria
On 23 May 2015, just before the national holiday of 24 May (the day of Bulgarian writing and culture), yellow rain fell in Sofia, Bulgaria. Suspicions were raised because the Bulgarian government was criticizing Russian actions in Ukraine at the time. The Bulgarian national academy BAN explained the event as flower pollen.
Mai Der Vang's Yellow Rain
American Hmong poet Mai Der Vang published Yellow Rain (Graywolf Press, 2021) to critical acclaim; the book was a finalist for the 2022 Pulitzer Prize in Poetry. It explores yellow rain in Southeast Asia through the use of documentary poetics.
See also
Agent Orange
Red rain in Kerala
Sverdlovsk anthrax leak
Aral smallpox incident
Allegations of biological warfare in the Korean War
References
Further reading
External links
The Yellow Rain Affair Matthew Meselson and Julian Robinson
A Note from History: Yellow Rain Defense Treaty Ready Inspection Readiness Program
Soviet Union–United States relations
Soviet Union–Vietnam relations
1981 in Laos
1981 in Vietnam
Reagan administration controversies
Propaganda in the United States
Chemical weapons attacks
Medical controversies
Soviet chemical weapons program
1981 in the Soviet Union
1981 in the United States | Yellow rain | Chemistry | 3,636 |
5,277,567 | https://en.wikipedia.org/wiki/Laccaria%20bicolor | Laccaria bicolor is a small tan-colored mushroom with lilac gills. It is edible but not choice, and grows in mixed birch and pine woods. It is found in the temperate zones of the globe, in late summer and autumn. L. bicolor is an ectomycorrhizal fungus used as a soil inoculant in agriculture and horticulture.
Taxonomy
It was initially described as a subspecies of Laccaria laccata by French mycologist René Maire in 1937, before being raised to species rank by P.D. Orton in 1960. Like others in its genus it has the common name of 'Deceiver', because of its propensity to fade and become hard to identify.
Description
The cap is across, convex to flat, and with a central navel. It is often incurved at the margin, and is various shades of ochraceous-buff, and tan, depending on moisture content. The fibrillose stipe is the same color, and with a distinct lilac down towards the base. The flesh is whitish, tinged with pink, or ochraceous, and has no apparent distinctive smell, or taste. The gills are pale lilac at first, fading paler. The spores are white. The picture on the right shows young specimens with quite vivid coloration. More often, they are found duller in appearance.
Distribution and habitat
This species is mycorrhizal with a range of trees, and is found throughout the temperate zones of the world, in summer and autumn. This includes temperate and boreal forests of North America and probably Northern Europe. It seems to prefer birch and pine woods.
Carnivory
Laccaria bicolor is one of a number of species of carnivorous fungi, but one of the few that catches and kills arthropods, specifically springtails.
Ectomycorrhizae
This species forms ectomycorrhizal associations with a wide variety of tree species, such as red pine, jack pine, and black spruce. Studies have shown that L. bicolor is more effective in early colonization of pine roots compared to other ectomycorrhiza forming fungi. In field studies, it preferentially colonizes and improves the survival of red pine. Actinobacteria isolates, e.g. from the genus Streptomyces, obtained from old growth Norway spruce field sites have been shown to stimulate the growth of Laccaria bicolor in the laboratory.
Genome
Laccaria bicolor was the first ectomycorrhizal fungus to have its genome sequenced. The genome is 65 megabases long and is estimated to contain 20,000 protein coding genes. Analysis revealed a large number of small secreted proteins of unknown function, several of which are only expressed in symbiotic tissues, where they probably play a role in initiating symbiosis. It lacks enzymes that are able to degrade plant cell walls but does possess enzymes which can degrade other polysaccharides, revealing how it is able to grow both in soil and in association with plants.
References
bicolor
Edible fungi
Fungi described in 1937
Fungi of North America
Fungi of Europe
Ammonia fungi
Carnivorous fungi
Taxa named by René Maire
Fungus species | Laccaria bicolor | Chemistry,Biology | 670 |
73,780,161 | https://en.wikipedia.org/wiki/Sonja%20Glava%C5%A1ki | Sonja Glavaški is an electrical engineer. Initially focusing on nonlinear control and robust control, her interests have since shifted to include computational challenges in the control of electrical grids and their integration with building-scale energy systems. Educated in Serbia and the US, she works at the Pacific Northwest National Laboratory as Chief Energy Digitalization Scientist and Principal Technology Strategy advisor for the Energy & Environment Directorate.
Education and career
Glavaški earned an engineering degree and master's degree in electrical engineering from the University of Belgrade. She continued her education at the California Institute of Technology, earning a second master's degree and completing her Ph.D. there. Her 1998 doctoral dissertation, Robust system analysis and nonlinear system model reduction, was supervised by John Doyle.
She worked in industry, becoming a principal scientist for Honeywell, at Honeywell Labs in Minneapolis, working for the Eaton Corporation at the Eaton Innovation Center in Wisconsin, and then for United Technologies at the United Technologies Research Center in Connecticut, where she led the Control Systems Group.
Next, she moved to ARPA-E, the US Advanced Research Projects Agency–Energy, as a program director in charge of projects including the Network Optimized Distributed Energy Systems (NODES) program, focusing on the integration of small-scale renewable energy sources into the grid and the use of building-scale energy systems for grid energy storage. She moved from there to her present position at the Pacific Northwest National Laboratory.
Recognition
Glavaški was named an IEEE Fellow, in the 2020 class of fellows, "for leadership in energy systems".
References
Year of birth missing (living people)
Living people
Electrical engineers
Women electrical engineers
Control theorists
University of Belgrade alumni
California Institute of Technology alumni
United States Department of Energy National Laboratories personnel
Fellows of the IEEE | Sonja Glavaški | Engineering | 357 |
1,795,295 | https://en.wikipedia.org/wiki/Meton%20of%20Athens | Meton of Athens (; gen.: Μέτωνος) was a Greek mathematician, astronomer, geometer, and engineer who lived in Athens in the 5th century BC. He is best known for calculations involving the eponymous 19-year Metonic cycle, which he introduced in 432 BC into the lunisolar Attic calendar. Euphronios says that Colonus was Meton's deme.
Work
The Metonic calendar incorporates knowledge that 19 solar years and 235 lunar months are very nearly of the same duration. Consequently, a given day of a lunar month will often occur on the same day of the solar year as it did 19 years previously. Meton's observations were made in collaboration with Euctemon, about whom nothing else is known. The Greek astronomer Callippus expanded on the work of Meton, proposing what is now called the Callippic cycle. A Callippic cycle runs for 76 years, or four Metonic cycles. Callippus refined the lunisolar calendar, deducting one day from the fourth Metonic cycle in each Callippic cycle (i.e., after 940 synodic lunar periods had elapsed), so as to better keep the lunisolar calendar synchronized with the seasons of the solar year.
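The near-coincidence behind these cycles is easy to verify with modern mean values for the tropical year and synodic month (a quick illustrative check, not a reconstruction of Meton's own figures):

```python
TROPICAL_YEAR = 365.2422    # mean tropical year, days (modern value)
SYNODIC_MONTH = 29.53059    # mean synodic month, days (modern value)

# Metonic cycle: 19 solar years vs 235 lunar months
print(19 * TROPICAL_YEAR)    # ~6939.60 days
print(235 * SYNODIC_MONTH)   # ~6939.69 days -> agreement to about two hours

# Callippic cycle: 76 years = 940 months, with one day dropped from four
# 6940-day Metonic calendar cycles (4 * 6940 - 1 = 27759 calendar days)
print(76 * TROPICAL_YEAR)    # ~27758.41 days
print(940 * SYNODIC_MONTH)   # ~27758.75 days
print(4 * 6940 - 1)          # 27759
```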
The world's oldest known astronomical calculator, the Antikythera Mechanism (2nd century BC), performs calculations based on both the Metonic and Callipic calendar cycles, with separate dials for each.
The foundations of Meton's observatory in Athens are still visible just behind the podium of the Pnyx, the ancient parliament. Meton found the dates of equinoxes and solstices by observing sunrise from his observatory. From that point of observation, during the summer solstice, sunrise was in line with the local hill of Mount Lycabetus, while six months later, during the winter solstice, sunrise occurs over the high brow of Mount Hymettos in the southeast. So from Meton's observatory the Sun appears to move along a 60° arc between these two points on the horizon every six months. The bisector of the observatory's solstitial arc lies in line with the Acropolis. These topological features are important because the summer solstice was the point in time from which the Athenians measured the start of their calendar years. The first month of the new year, Hekatombaion, began with the first new moon after the summer solstice.
Meton appears briefly as a character in Aristophanes' play The Birds (414 BC). He comes on stage carrying surveying instruments and is described as a geometer.
What little is known about Meton is related by ancient historians. According to Ptolemy, a stela or table erected in Athens contained a record of Meton's observations, and a description of the Metonic cycle. None of Meton's works survive.
Notes
References
Pannekoek, A. "Planetary Theories – the Planetary Theory of Kidinnu." Popular Astronomy 55, 10/1947, p 422
External links
Meton of Athens
Greek Astronomy
5th-century BC Athenians
Ancient Greek astronomers
Ancient Greek engineers
Ancient Greek mathematicians
5th-century BC mathematicians
5th-century BC astronomers
Summer solstice | Meton of Athens | Astronomy | 677 |
236,981 | https://en.wikipedia.org/wiki/Amniote | Amniotes are tetrapod vertebrate animals belonging to the clade Amniota, a large group that comprises the vast majority of living terrestrial and semiaquatic vertebrates. Amniotes evolved from amphibious stem tetrapod ancestors during the Carboniferous period. Those of Amniota are defined as the smallest crown clade containing humans, the Greek tortoise, and the Nile crocodile.
Amniotes are distinguished from the other living tetrapod clade — the non-amniote lissamphibians (frogs/toads, salamanders/newts and caecilians) — by the development of three extraembryonic membranes (amnion for embryonic protection, chorion for gas exchange, and allantois for metabolic waste disposal or storage), thicker and keratinized skin, costal respiration (breathing by expanding/constricting the rib cage), the presence of adrenocortical and chromaffin tissues as a discrete pair of glands near their kidneys, more complex kidneys, the presence of an astragalus for better extremity range of motion, the diminished role of skin breathing, and the complete loss of metamorphosis, gills, and lateral lines.
The presence of an amniotic buffer, of a water-impermeable skin, and of a robust, air-breathing, respiratory system, allow amniotes to live on land as true terrestrial animals. Amniotes have the ability to procreate without water bodies. Because the amnion and the fluid it secretes shields the embryo from environmental fluctuations, amniotes can reproduce on dry land by either laying shelled eggs (reptiles, birds and monotremes) or nurturing fertilized eggs within the mother (marsupial and placental mammals). This distinguishes amniotes from anamniotes (fish and amphibians) that have to spawn in aquatic environments. Most amniotes still require regular access to drinking water for rehydration, like the semiaquatic amphibians do.
They have better homeostasis in drier environments, and more efficient non-aquatic gas exchange to power terrestrial locomotion, which is facilitated by their astragalus.
Basal amniotes resembled small lizards and evolved from semiaquatic reptiliomorphs during the Carboniferous period. After the Carboniferous rainforest collapse, amniotes spread around Earth's land and became the dominant land vertebrates.
They almost immediately diverged into two groups, namely the sauropsids (including all reptiles and birds) and synapsids (including mammals and extinct ancestors like "pelycosaurs" and therapsids). Among the earliest known crown group amniotes, the oldest known sauropsid is Hylonomus and the oldest known synapsid is Asaphestera, both of which are from Nova Scotia and date to the Bashkirian age of the Late Carboniferous.
This basal divergence within Amniota has also been dated by molecular studies at 310–329 Ma, or 312–330 Ma, and by a fossilized birth–death process study at 322–340 Ma.
Etymology
The term amniote comes from the amnion, which derives from Greek ἀμνίον (amnion), which denoted the membrane that surrounds a fetus. The term originally described a bowl in which the blood of sacrificed animals was caught, and derived from ἀμνός (amnos), meaning "lamb".
Description
Zoologists characterize amniotes in part by embryonic development that includes the formation of several extensive membranes, the amnion, chorion, and allantois. Amniotes develop directly into a (typically) terrestrial form with limbs and a thick stratified epithelium (rather than first entering a feeding larval tadpole stage followed by metamorphosis, as amphibians do). In amniotes, the transition from a two-layered periderm to a cornified epithelium is triggered by thyroid hormone during embryonic development, rather than by metamorphosis. The unique embryonic features of amniotes may reflect specializations for eggs to survive drier environments; or the increase in size and yolk content of eggs may have permitted, and coevolved with, direct development of the embryo to a large size.
Adaptation for terrestrial living
Features of amniotes evolved for survival on land include a sturdy but porous leathery or hard eggshell and an allantois that facilitates respiration while providing a reservoir for disposal of wastes. Their kidneys (metanephros) and large intestines are also well-suited to water retention. Most mammals do not lay eggs, but corresponding structures develop inside the placenta.
The ancestors of true amniotes, such as Casineria kiddi, which lived about 340 million years ago, evolved from amphibian reptiliomorphs and resembled small lizards. At the late Devonian mass extinction (360 million years ago), all known tetrapods were essentially aquatic and fish-like. Because the reptiliomorphs were already established 20 million years later when all their fishlike relatives were extinct, it appears they separated from the other tetrapods somewhere during Romer's gap, when the adult tetrapods became fully terrestrial (some forms would later become secondarily aquatic). The modest-sized ancestors of the amniotes laid their eggs in moist places, such as depressions under fallen logs or other suitable places in the Carboniferous swamps and forests; and dry conditions probably do not account for the emergence of the soft shell. Indeed, many modern-day amniotes require moisture to keep their eggs from desiccating. Although some modern amphibians lay eggs on land, all amphibians lack advanced traits like an amnion.
The amniotic egg formed through a series of evolutionary steps. After internal fertilization and the habit of laying eggs in terrestrial environments became a reproduction strategy amongst the amniote ancestors, the next major breakthrough appears to have involved a gradual replacement of the gelatinous coating covering the amphibian egg with a fibrous shell membrane. This allowed the egg to increase both its size and in the rate of gas exchange, permitting a larger, metabolically more active embryo to reach full development before hatching. Further developments, like extraembryonic membranes (amnion, chorion, and allantois) and a calcified shell, were not essential and probably evolved later. It has been suggested that shelled terrestrial eggs without extraembryonic membranes could still not have been more than about 1 cm (0.4-inch) in diameter because of diffusion problems, like the inability to get rid of carbon dioxide if the egg was larger. The combination of small eggs and the absence of a larval stage, where posthatching growth occurs in anamniotic tetrapods before turning into juveniles, would limit the size of the adults. This is supported by the fact that extant squamate species that lay eggs less than 1 cm in diameter have adults whose snout-vent length is less than 10 cm. The only way for the eggs to increase in size would be to develop new internal structures specialized for respiration and for waste products. As this happened, it would also affect how much the juveniles could grow before they reached adulthood.
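A rough way to see why diffusion alone caps egg size (a schematic scaling sketch, not a figure from the cited research) is that gas exchange through the shell scales with surface area while the embryo's metabolic demand scales with volume, so for a roughly spherical egg of radius r:

\[
\frac{\text{metabolic demand}}{\text{diffusive supply}} \propto \frac{\tfrac{4}{3}\pi r^{3}}{4\pi r^{2}} = \frac{r}{3}
\]

The ratio grows linearly with r, so beyond some radius diffusion through the shell can no longer remove carbon dioxide fast enough, and further increases in size require specialized respiratory structures such as the allantois.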
A similar pattern can be seen in modern amphibians. Frogs that have evolved terrestrial reproduction and direct development have both smaller adults and fewer and larger eggs compared to their relatives that still reproduce in water.
The egg membranes
Fish and amphibian eggs have only one inner membrane, the embryonic membrane. Evolution of the amniote egg required increased exchange of gases and wastes between the embryo and the atmosphere. Structures to permit these traits allowed further adaption that increased the feasible size of amniote eggs and enabled breeding in progressively drier habitats. The increased size of eggs permitted increase in size of offspring and consequently of adults. Further growth for the latter, however, was limited by their position in the terrestrial food-chain, which was restricted to level three and below, with only invertebrates occupying level two. Amniotes would eventually experience adaptive radiations when some species evolved the ability to digest plants and new ecological niches opened up, permitting larger body-size for herbivores, omnivores and predators.
Amniote traits
While the early amniotes resembled their amphibian ancestors in many respects, a key difference was the lack of an otic notch at the back margin of the skull roof. In their ancestors, this notch held a spiracle, an unnecessary structure in an animal without an aquatic larval stage. There are three main lines of amniotes, which may be distinguished by the structure of the skull and in particular the number of holes behind each eye. In anapsids, the ancestral condition, there are none; in synapsids (mammals and their extinct relatives) there is one; and in diapsids (including birds, crocodilians, squamates, and tuataras), there are two. Turtles have secondarily lost their fenestrae, and were traditionally classified as anapsids because of this. Molecular testing firmly places them in the diapsid line of descent.
Post-cranial remains of amniotes can be distinguished from those of their labyrinthodont ancestors by their having at least two pairs of sacral ribs, a sternum in the pectoral girdle (some amniotes have lost it) and an astragalus bone in the ankle.
Definition and classification
Amniota was first formally described by the embryologist Ernst Haeckel in 1866 on the basis of the presence of the amnion, hence the name. A problem with this definition is that the trait (apomorphy) in question does not fossilize, and the status of fossil forms has to be inferred from other traits.
Traditional classification
Older classifications of the amniotes traditionally recognised three classes based on major traits and physiology:
Class Reptilia (reptiles)
Subclass Anapsida ("proto-reptiles", possibly including turtles)
Subclass Diapsida (majority of reptiles, progenitors of birds)
Subclass Euryapsida (plesiosaurs, placodonts, and ichthyosaurs)
Subclass Synapsida (stem or proto-mammals, progenitors of mammals)
Class Aves (birds)
Subclass Archaeornithes (reptile-like birds, progenitors of all other birds)
Subclass Enantiornithes (early birds with an alternative shoulder joint)
Subclass Hesperornithes (toothed aquatic flightless birds)
Subclass Ichthyornithes (toothed, but otherwise modern birds)
Subclass Neornithes (all living birds)
Class Mammalia (mammals)
Subclass Prototheria (Monotremata, egg-laying mammals)
Subclass Theria (metatheria (such as marsupials) and eutheria (such as placental mammals))
This rather orderly scheme is the one most commonly found in popular and basic scientific works. It has come under critique from cladistics, as the class Reptilia is paraphyletic—it has given rise to two other classes not included in Reptilia.
Most species described as microsaurs, formerly grouped in the extinct and prehistoric amphibian group lepospondyls, have been placed in the newer clade Recumbirostra, and share many anatomical features with amniotes, which indicates that they were amniotes themselves.
Classification into monophyletic taxa
A different approach is adopted by writers who reject paraphyletic groupings. One such classification, by Michael Benton, is presented in simplified form below.
Series Amniota
(Class) Clade Synapsida
A series of unassigned families, corresponding to Pelycosauria †
(Order) Clade Therapsida
Class Mammalia – mammals
(Class) Clade Sauropsida
Subclass Parareptilia †
Family Mesosauridae †
Family Millerettidae †
Family Bolosauridae †
Family Procolophonidae †
Order Pareiasauromorpha
Family Nycteroleteridae †
Family Pareiasauridae †
(Subclass) Clade Eureptilia
Family Captorhinidae †
(Infraclass) Clade Diapsida
Family Araeoscelididae †
Family Weigeltisauridae †
Order Younginiformes †
(Infraclass) Clade Neodiapsida
Order Testudinata
Suborder Testudines – turtles
Infraclass Lepidosauromorpha
Unnamed infrasubclass
Infraclass Ichthyosauria †
Order Thalattosauria †
Superorder Lepidosauriformes
Order Sphenodontida – tuatara
Order Squamata – lizards and snakes
Infrasubclass Sauropterygia †
Order Placodontia †
Order Eosauropterygia †
Suborder Pachypleurosauria †
Suborder Nothosauria †
Order Plesiosauria †
(Infraclass) Clade Archosauromorpha
Family Trilophosauridae †
Order Rhynchosauria †
Order Protorosauria †
Division Archosauriformes
Subdivision Archosauria
Infradivision Crurotarsi
Order Phytosauria †
Family Ornithosuchidae †
Family Stagonolepididae †
Family Rauisuchidae †
Superfamily Poposauroidea †
Superorder Crocodylomorpha
Order Crocodylia – crocodilians
Infradivision Avemetatarsalia
Infrasubdivision Ornithodira
Order Pterosauria †
Family Lagerpetidae †
Family Silesauridae †
(Superorder) Clade Dinosauria – dinosaurs
Order Ornithischia †
(Order) Clade Saurischia
(Suborder) Clade Theropoda – theropods
Class Aves – birds
Phylogenetic classification
With the advent of cladistics, other researchers have attempted to establish new classes, based on phylogeny, but disregarding the physiological and anatomical unity of the groups. Unlike Benton, for example, Jacques Gauthier and colleagues forwarded a definition of Amniota in 1988 as "the most recent common ancestor of extant mammals and reptiles, and all its descendants". As Gauthier makes use of a crown group definition, Amniota has a slightly different content than the biological amniotes as defined by an apomorphy. Though traditionally considered reptiliomorphs, some recent research has recovered diadectomorphs as the sister group to Synapsida within Amniota, based on inner ear anatomy.
Cladogram
The cladogram presented here illustrates the phylogeny (family tree) of amniotes, and follows a simplified version of the relationships found by Laurin & Reisz (1995), with the exception of turtles, which more recent morphological and molecular phylogenetic studies placed firmly within diapsids. The cladogram covers the group as defined under Gauthier's definition.
Following studies in 2022 and 2023, with Drepanosauromorpha placed sister to Weigeltisauridae (Coelurosauravus) in Avicephala based on Senter (2004):
References
Extant Pennsylvanian first appearances
Taxa named by Ernst Haeckel
Zoological nomenclature | Amniote | Biology | 3,211 |
72,389,171 | https://en.wikipedia.org/wiki/Eliane%20R.%20Rodrigues | Eliane Regina Rodrigues is a Brazilian applied mathematician and statistician who works in Mexico as a researcher at the Institute of Mathematics of the National Autonomous University of Mexico (UNAM). Her research involves using stochastic processes including Markov chains and Poisson point processes to model phenomena such as air pollution, noise pollution, the health effects of fat taxes, and the effectiveness of vaccination.
Education
After undergraduate study in mathematics at São Paulo State University, Rodrigues earned a master's degree in probability theory from the University of Brasília, both in Brazil. She completed a PhD in applied probability from Queen Mary and Westfield College (now Queen Mary University of London) in England.
Book
Rodrigues is the coauthor, with Brazilian mathematician Jorge Alberto Achcar, of the book Applications of Discrete-time Markov Chains and Poisson Processes to Air Pollution Modeling and Studies (Springer Briefs in Mathematics, 2013).
Recognition
Rodrigues is a member of the Mexican Academy of Sciences, and an Elected Member of the International Statistical Institute.
References
Year of birth missing (living people)
Living people
Brazilian mathematicians
Brazilian women mathematicians
Brazilian statisticians
Mexican mathematicians
Mexican women mathematicians
Mexican statisticians
Applied mathematicians
Women statisticians
São Paulo State University alumni
University of Brasília alumni
Alumni of Queen Mary University of London
Members of the Mexican Academy of Sciences
Elected Members of the International Statistical Institute | Eliane R. Rodrigues | Mathematics | 278 |
1,324,735 | https://en.wikipedia.org/wiki/Subvocalization | Subvocalization, or silent speech, is the internal speech typically made when reading; it provides the sound of the word as it is read. This is a natural process when reading, and it helps the mind to access meanings to comprehend and remember what is read, potentially reducing cognitive load.
This inner speech is characterized by minuscule movements in the larynx and other muscles involved in the articulation of speech. Most of these movements are undetectable (without the aid of machines) by the person who is reading. It is one of the components of Alan Baddeley and Graham Hitch's phonological loop proposal which accounts for the storage of these types of information into short-term memory.
History of subvocalization research
Subvocalization has been considered as far back as 1868. The first experiment to record movement of the larynx during silent reading took place only in 1899, conducted by a researcher named H.S. Curtis, who concluded that silent reading was the only mental activity that created considerable movement of the larynx.
In 1950 Edfelt reached a breakthrough when he created an electrically powered instrument that could record movement. He concluded that newer techniques were needed to record information accurately and that efforts should be made to understand this phenomenon instead of eliminating it. After failed attempts to reduce silent speech in study participants, the conclusion reached in 1952 was that silent speech is a developmental activity which reinforces learning and should not be disrupted during development. In 1960, Edfelt seconded this opinion.
Techniques for studying subvocalization
Subvocalization is commonly studied using electromyography (EMG) recordings, concurrent speaking tasks, shadowing, and other techniques.
EMG can be used to show the degree to which one is subvocalizing or to train subvocalization suppression. EMG is used to record the electrical activity produced by the articulatory muscles involved in subvocalization. Greater electrical activity suggests a stronger use of subvocalization. In the case of suppression training, the trainee is shown their own EMG recordings while attempting to decrease the movement of the articulatory muscles. The EMG recordings allow one to monitor and ideally reduce subvocalization.
In concurrent speaking tasks, participants of a study are asked to complete an activity specific to the experiment while simultaneously repeating an irrelevant word. For example, one may be asked to read a paragraph while reciting the word "cola" over and over again. Speaking the repeated irrelevant word is thought to preoccupy the articulators used in subvocalization. Subvocalization, therefore, cannot be used in the mental processing of the activity being studied. Participants who had undergone the concurrent speaking task are often compared to other participants of the study who had completed the same activity without subvocalization interference. If performance on the activity is significantly less for those in the concurrent speaking task group than for those in the non-interference group, subvocalization is believed to play a role in the mental processing of that activity. The participants in the non-interference comparison group usually also complete a different, yet equally distracting task that does not involve the articulator muscles (i.e. tapping). This ensures that the difference in performance between the two groups is in fact due to subvocalization disturbances and not due to considerations such as task difficulty or a divide in attention.
Shadowing is conceptually similar to concurrent speaking tasks. Instead of repeating an irrelevant word, shadowing requires participants to listen to a list of words and to repeat those words as fast as possible while completing a separate task being studied by experimenters.
Techniques for subvocalization interference may also include counting, chewing or locking one's jaw while placing the tongue on the roof of one's mouth.
Subvocal recognition involves monitoring actual movements of the tongue and vocal cords that can be interpreted by electromagnetic sensors. Through the use of electrodes and nanocircuitry, synthetic telepathy could be achieved allowing people to communicate silently.
Evolutionary background
The exploration into the evolutionary background of subvocalization is currently very limited. The little known is predominantly about language acquisition and memory. Evolutionary psychologists suggest that the development of subvocalization is related to modular aspects of the brain. There has been a great amount of exploration on the evolutionary basis of universal grammar. The idea is that although the specific language one initially learns is dependent on one's culture, all languages are learned through the activation of universal "language modules" that are present in each of us. This concept of a modular mind is a prevalent idea that will help explore memory and its relation to language more clearly, and possibly illuminate the evolutionary basis of subvocalization. Evidence for the mind having modules for superior function is the example that hours may be spent toiling over a car engine in an attempt to flexibly formulate a solution, but, in contrast, extremely long and complex sentences can be comprehended, understood, related and responded to in seconds. The specific inquiry into subvocalization may be minimal right now but there remains much to investigate in regard to the modular mind.
Associated brain structures and processes
The brain mechanics of subvocalization are still not well understood. It is safe to say that more than one part of the brain is used, and that no single test can reveal all the relevant processes. Studies often use event-related potentials (brief changes in an EEG, or electroencephalogram) to show brain activation, or fMRI.
Subvocalization is related to inner speech; when inner speech is used, there is bilateral activation, predominantly in the left frontal lobe. This activation could suggest that the frontal lobes may be involved in motor planning for speech output.
Subvocal rehearsal is controlled by top-down processing; conceptually driven, it relies on information already in memory. There is evidence for significant left hemisphere activation in the inferior and middle frontal gyri and inferior parietal gyrus during subvocal rehearsal. Broca's area has also been found to have activation in other studies exploring subvocal rehearsal.
Silent speech-reading and silent counting are also examined when experimenters look at subvocalization. These tasks show activation in the frontal cortices, hippocampus and the thalamus for silent counting. Silent-reading activates similar areas of the auditory cortex that are involved in listening.
Finally, the phonological loop, proposed by Baddeley and Hitch as "being responsible for temporary storage of speech-like information", is an active subvocal rehearsal mechanism, with activation originating mostly in the left-hemispheric speech areas: Broca's area, the lateral and medial premotor cortices, and the cerebellum.
Role of subvocalization in memory processes
The phonological loop and rehearsal
The ability to store verbal material in working memory, and the storage of verbal material in short-term memory relies on a phonological loop. This loop, proposed by Baddeley and Hitch, represents a system that is composed of a short-term store in which memory is represented phonologically, and a rehearsal process. This rehearsal preserves and refreshes the material by re-enacting it and re-presenting it to short-term storage, and subvocalization is a major component of this rehearsal. The phonological loop system features an interaction between subvocal rehearsal and specific storage for phonological material. The phonological loop contributes to the study of the role of subvocalization and the inner voice in auditory imagery. Subvocalization and the phonological loop interact in a non-dependent manner demonstrated by their differential requirements on different tasks. The role of subvocalization within the workings of memory processes is heavily reliant on its involvement with Baddeley's proposed phonological loop.
Working memory
There have been findings that support a role of subvocalization in the mechanisms underlying working memory and the holding of information in an accessible and malleable state. Some forms of internal speech-like processing may function as a holding mechanism in immediate memory tasks. The working memory span is a behavioural measure of "exceptional consistency" and is a positive function of the rate of subvocalization. Experimental data has shown that this span size increases as the rate of subvocalization increases, and the time needed to subvocalize the number of items comprising a span is generally constant. fMRI data suggests that a sequence of five letters approaches the individual capacity for immediate recall that relies on subvocal rehearsal alone.
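A toy numerical illustration of the relation just described may help: if the time needed to subvocalize one full span stays roughly constant while the subvocalization rate varies, the span scales linearly with that rate. The roughly two-second rehearsal window and the function name below are assumptions drawn from the wider phonological-loop literature, not figures given in this article; this is a sketch, not a model of any specific experiment.

```python
# Toy sketch of the relation described above: span ≈ subvocalization rate ×
# a roughly constant rehearsal time. The ~2-second window is an assumption
# taken from the general phonological-loop literature, not from this article.
REHEARSAL_WINDOW_S = 2.0  # assumed constant time needed to subvocalize one span

def estimated_span(items_per_second):
    """Estimated number of items held, given a subvocalization rate."""
    return items_per_second * REHEARSAL_WINDOW_S

for rate in (1.5, 2.5, 3.5):  # illustrative subvocalization rates (items/s)
    print(f"rate {rate} items/s -> span ≈ {estimated_span(rate):.1f} items")
```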
Short-term memory
The role of subvocal rehearsal is also seen in short-term memory. Research has confirmed that this form of rehearsal benefits some cognitive functioning. Subvocal movements that occur when people listen to or rehearse a series of speech sounds will help the subject to maintain the phonemic representation of these sounds in their short-term memory, and this finding is supported by the fact that interfering with the overt production of speech sound did not disrupt the encoding of the sound's features in short-term memory. This suggests a strong role played by subvocalization in the encoding of speech sounds into short-term memory. It has also been found that language differences in short-term memory performance in bilingual people is mediated, but not exclusively, by subvocal rehearsal.
The production of acoustic errors in short-term memory is also thought to be, in part, due to subvocalization. Individuals who stutter and therefore have a slower rate of subvocal articulation also demonstrate a short-term reproduction of serial material that is slower as compared to people who do not stutter.
Encoding
Subvocalization plays a large role in memory encoding. Subvocalization appears to facilitate the translating of visual linguistic information into acoustic information and vice versa. For example, subvocalization occurs when one sees a word and is asked to say it (see-say condition), or when one hears a word and is asked to write it (hear-write condition), but not when one is asked to see a word and then write it (see-write condition) or hear a word and then say it (hear-say condition). The see-say condition converts visual information into acoustic information. The hear-write condition converts acoustic information into visual information. The see-write and hear-say conditions, however, remain in the same sensory domain and do not require translation into a different type of code.
This is also supported by findings that suggest that subvocalization is not required for the encoding of speech, as words being heard are already in acoustic form and therefore enter short-term memory directly without use of subvocal articulation. Furthermore, subvocalization interference impedes reading comprehension but not listening comprehension.
Role in reading comprehension
Subvocalization's role in reading comprehension can be viewed as a function of task complexity. Subvocalization is involved minimally or not at all in immediate comprehension. For example, subvocalization is not used in the making of homophone judgements but is used more for the comprehension of sentences and even more still for the comprehension of paragraphs. Subvocalization which translates visual reading information into a more durable and flexible acoustic code is thought to allow for the integration of past concepts with those currently being processed.
Comparison to speed reading
Advocates of speed reading generally claim that subvocalization places extra burden on the cognitive resources, thus slowing the reading down. Speedreading courses often prescribe lengthy practices to eliminate subvocalizing when reading. Normal reading instructors often simply apply remedial teaching to a reader who subvocalizes to the degree that they make visible movements on the lips, jaw, or throat.
Furthermore, fMRI studies comparing fast and slow readers (during a reading task) indicate that between the two groups there are significant differences in the brain areas being activated. In particular, it was found that rapid readers show lower activation in the brain regions associated with speech, which indicates that the higher speeds were attained, in part, by the reduction in subvocalization.
At the slower rates (memorizing, learning, and reading for comprehension), subvocalizing by the reader is very detectable. At the faster rates of reading (skimming and scanning), subvocalization is less detectable. For competent readers, subvocalizing to some extent even at scanning rates is normal.
Typically, subvocalizing is an inherent part of reading and understanding a word. Micro-muscle tests suggest that full and permanent elimination of subvocalizing is impossible. This may originate in the way people learn to read by associating the sight of words with their spoken sounds. Sound associations for words are indelibly imprinted on the nervous system—even of deaf people, since they will have associated the word with the mechanism for causing the sound or a sign in a particular sign language.
At the slower reading rates (100–300 words per minute), subvocalizing may improve comprehension. Subvocalizing or actual vocalizing can indeed be of great help when one wants to learn a passage verbatim. This is because the person is repeating the information in an auditory way, as well as seeing the piece on the paper.
Auditory imagery
The definition of auditory imagery is analogous to definitions used in other modalities of imagery (such as visual and olfactory imagery) in that it is, according to Intons-Peterson (1992), "the introspective persistence of an auditory experience, including one constructed from components drawn from long-term memory, in the absence of direct sensory instigation of that experience." Auditory imagery is often but not necessarily influenced by subvocalization, and has ties to the rehearsal process of working memory. The conception of working memory relies on a relationship between the "inner ear" and the "inner voice" (subvocalization), and this memory system is posited to be at the basis of auditory imagery. Subvocalization and the phonological store work in partnership in many auditory imagery tasks.
The extent to which an auditory image can influence detection, encoding and recall of a stimulus through its relationships to perception and memory has been documented. It has been suggested that auditory imagery may slow the decay of memory for pitch, as demonstrated by T. A. Keller, Cowan, and Saults (1995) who demonstrated that the prevention of rehearsal resulted in decreased memory performance for pitch comparison tasks through the introduction of distracting and competing stimuli. It has also been reported that auditory imagery for verbal material is impaired when subvocalization is blocked. These findings suggest that subvocalization is common to both auditory imagery and rehearsal.
In objection to a subvocalization mechanism basis for auditory imagery is in the fact that a significant amount of auditory imagery does not involve speech or stimuli similar to speech, such as music and environmental sounds. However, to combat this point, it has been suggested that rehearsal of non-speech sounds can indeed be carried out by the phonological mechanisms previously mentioned, even if the creation of nonspeech sounds within this mechanism is not possible.
Role in speech
There are two general types of individuals when it comes to subvocalization. There are Low-Vocalizers and High-Vocalizers. Using electromyography to record the muscle action potential of the larynx (i.e. muscle movement of the larynx), an individual is categorized under a high or low vocalizer depending on how much muscle movement the muscles in the larynx undergo during silent reading.
Regulation of speech intensity
In both high and low vocalizers, the rate of speech is constantly regulated depending on the intensity/volume of words (said to be affected by long delays between readings); increasing the delay between speaking and hearing one's own voice produces an effect called "delayed auditory feedback". The increase in voice intensity while reading was found to be greater in low-vocalizers than in high-vocalizers. It is believed that because high-vocalizers have greater muscle movement of the larynx during silent reading, low-vocalizers read louder to compensate for their lack of such movement so they can understand the material. When individuals undergo "feedback training", where they are conscious of these muscle movements, this difference diminishes.
Role in articulation
Articulation during silent speech is important, though speech is not solely dependent on articulation alone. Impairing articulation can reduce sensory input from the muscle movements of the larynx to the brain to understand information being read and it also impairs ongoing speech production during reading to direct thinking. Words that are of high similarity minimize articulation, causing interference, and may reduce subvocal rehearsal. As articulation of similar words is affecting subvocalization, there is an increase in acoustic errors for short-term memory and recall.
Impairing or suppressing articulation causes a greater impact on performance. An example of articulation suppression is repeating the same word, such as 'the', over and over while attempting to memorise other words into short-term memory. Even though primary cues may be given for these words in an attempt to retrieve them, words will either be recalled for the incorrect cue or will not be recalled at all.
Schizophrenia and subvocalization
In people with schizophrenia who are known to experience auditory hallucinations, the hallucinations could be the result of over-activation of the muscles in the larynx. Using electromyography to record muscle movement, individuals experiencing hallucinations showed greater muscle activation before these hallucinations occurred. However, this muscle activation is not easily detected, which means the muscle movement must be measured over a wider range. Though a wider range is needed to detect the muscle movement, it is still considered subvocalization. Much more research is needed to link subvocalization with hallucination, but many people with schizophrenia report "hearing voices" (as hallucinations) coming from their throat. This small fact could be a clue to determining whether there is a true link between subvocalization and hallucinations, but it is very difficult to establish this connection because not many patients experience hallucinations.
References
External links
NASA Develops System to Computerize Silent, 'Subvocal Speech'
NASA researchers can hear what you're saying, even when you don't make a sound
An interview with NASA's Chuck Jorgensen on the Subvocal Speech – including pictures and video of the technology. Copy at Archive.org (no pictures/video)
Reading (process)
Human communication
Educational psychology
Learning to read
Memory
Vocal skills | Subvocalization | Biology | 3,823 |
24,466,340 | https://en.wikipedia.org/wiki/Gymnopilus%20mesosporus | Gymnopilus mesosporus is a species of mushroom in the family Hymenogastraceae.
See also
List of Gymnopilus species
External links
Gymnopilus mesosporus at Index Fungorum
mesosporus
Fungi of North America
Fungus species | Gymnopilus mesosporus | Biology | 59 |
9,804 | https://en.wikipedia.org/wiki/Electric%20charge | Electric charge (symbol q, sometimes Q) is a physical property of matter that causes it to experience a force when placed in an electromagnetic field. Electric charge can be positive or negative. Like charges repel each other and unlike charges attract each other. An object with no net charge is referred to as electrically neutral. Early knowledge of how charged substances interact is now called classical electrodynamics, and is still accurate for problems that do not require consideration of quantum effects.
Electric charge is a conserved property: the net charge of an isolated system, the quantity of positive charge minus the amount of negative charge, cannot change. Electric charge is carried by subatomic particles. In ordinary matter, negative charge is carried by electrons, and positive charge is carried by the protons in the nuclei of atoms. If there are more electrons than protons in a piece of matter, it will have a negative charge, if there are fewer it will have a positive charge, and if there are equal numbers it will be neutral. Charge is quantized: it comes in integer multiples of individual small units called the elementary charge, e, about 1.602×10−19 coulombs, which is the smallest charge that can exist freely. Particles called quarks have smaller charges, multiples of 1/3 e, but they are found only combined in particles that have a charge that is an integer multiple of e. In the Standard Model, charge is an absolutely conserved quantum number. The proton has a charge of +e, and the electron has a charge of −e.
Today, a negative charge is defined as the charge carried by an electron and a positive charge is that carried by a proton. Before these particles were discovered, a positive charge was defined by Benjamin Franklin as the charge acquired by a glass rod when it is rubbed with a silk cloth.
Electric charges produce electric fields. A moving charge also produces a magnetic field. The interaction of electric charges with an electromagnetic field (a combination of an electric and a magnetic field) is the source of the electromagnetic (or Lorentz) force, which is one of the four fundamental interactions in physics. The study of photon-mediated interactions among charged particles is called quantum electrodynamics.
The SI derived unit of electric charge is the coulomb (C) named after French physicist Charles-Augustin de Coulomb. In electrical engineering it is also common to use the ampere-hour (A⋅h). In physics and chemistry it is common to use the elementary charge (e) as a unit. Chemistry also uses the Faraday constant, which is the charge of one mole of elementary charges.
Overview
Charge is the fundamental property of matter that exhibits electrostatic attraction or repulsion in the presence of other matter with charge. Electric charge is a characteristic property of many subatomic particles. The charges of free-standing particles are integer multiples of the elementary charge e; we say that electric charge is quantized. Michael Faraday, in his electrolysis experiments, was the first to note the discrete nature of electric charge. Robert Millikan's oil drop experiment demonstrated this fact directly, and measured the elementary charge. It has been discovered that one type of particle, quarks, have fractional charges of either −1/3 e or +2/3 e, but it is believed that they always occur in combinations whose total charge is an integer multiple of e; free-standing quarks have never been observed.
By convention, the charge of an electron is negative, −e, while that of a proton is positive, +e. Charged particles whose charges have the same sign repel one another, and particles whose charges have different signs attract. Coulomb's law quantifies the electrostatic force between two particles by asserting that the force is proportional to the product of their charges, and inversely proportional to the square of the distance between them. The charge of an antiparticle equals that of the corresponding particle, but with opposite sign.
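The proportionality stated above is Coulomb's law. As a minimal sketch, the snippet below evaluates it for two point charges; the constants are the standard SI values, while the function name and the example separation are illustrative choices, not anything specified in this article.

```python
# Coulomb's law as stated above: force proportional to the product of the
# charges and inversely proportional to the square of their separation.
COULOMB_CONSTANT = 8.9875517923e9    # k = 1/(4*pi*eps0), in N*m^2/C^2
ELEMENTARY_CHARGE = 1.602176634e-19  # e, in coulombs

def coulomb_force(q1, q2, r):
    """Signed force between point charges q1, q2 (C) a distance r (m) apart;
    positive means repulsion, negative means attraction."""
    return COULOMB_CONSTANT * q1 * q2 / r**2

# Example: a proton and an electron separated by roughly one Bohr radius.
print(coulomb_force(+ELEMENTARY_CHARGE, -ELEMENTARY_CHARGE, 5.29e-11))
# ≈ -8.2e-8 N; the negative sign reflects attraction between unlike charges.
```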
The electric charge of a macroscopic object is the sum of the electric charges of the particles that it is made up of. This charge is often small, because matter is made of atoms, and atoms typically have equal numbers of protons and electrons, in which case their charges cancel out, yielding a net charge of zero, thus making the atom neutral.
An ion is an atom (or group of atoms) that has lost one or more electrons, giving it a net positive charge (cation), or that has gained one or more electrons, giving it a net negative charge (anion). Monatomic ions are formed from single atoms, while polyatomic ions are formed from two or more atoms that have been bonded together, in each case yielding an ion with a positive or negative net charge.
During the formation of macroscopic objects, constituent atoms and ions usually combine to form structures composed of neutral ionic compounds electrically bound to neutral atoms. Thus macroscopic objects tend toward being neutral overall, but macroscopic objects are rarely perfectly net neutral.
Sometimes macroscopic objects contain ions distributed throughout the material, rigidly bound in place, giving an overall net positive or negative charge to the object. Also, macroscopic objects made of conductive elements can more or less easily (depending on the element) take on or give off electrons, and then maintain a net negative or positive charge indefinitely. When the net electric charge of an object is non-zero and motionless, the phenomenon is known as static electricity. This can easily be produced by rubbing two dissimilar materials together, such as rubbing amber with fur or glass with silk. In this way, non-conductive materials can be charged to a significant degree, either positively or negatively. Charge taken from one material is moved to the other material, leaving an opposite charge of the same magnitude behind. The law of conservation of charge always applies, giving the object from which a negative charge is taken a positive charge of the same magnitude, and vice versa.
Even when an object's net charge is zero, the charge can be distributed non-uniformly in the object (e.g., due to an external electromagnetic field, or bound polar molecules). In such cases, the object is said to be polarized. The charge due to polarization is known as bound charge, while the charge on an object produced by electrons gained or lost from outside the object is called free charge. The motion of electrons in conductive metals in a specific direction is known as electric current.
Unit
The SI unit of quantity of electric charge is the coulomb (symbol: C). The coulomb is defined as the quantity of charge that passes through the cross section of an electrical conductor carrying one ampere for one second. This unit was proposed in 1946 and ratified in 1948. The lowercase symbol q is often used to denote a quantity of electric charge. The quantity of electric charge can be directly measured with an electrometer, or indirectly measured with a ballistic galvanometer.
The elementary charge is defined as a fundamental constant in the SI. The value for the elementary charge, when expressed in SI units, is exactly 1.602176634×10−19 C.
After discovering the quantized character of charge, in 1891, George Stoney proposed the unit 'electron' for this fundamental unit of electrical charge. J. J. Thomson subsequently discovered the particle that we now call the electron in 1897. The unit is today referred to as the elementary charge, or is simply denoted e, with the charge of an electron being −e. The charge of an isolated system should be a multiple of the elementary charge e, even if at large scales charge seems to behave as a continuous quantity. In some contexts it is meaningful to speak of fractions of an elementary charge; for example, in the fractional quantum Hall effect.
The unit faraday is sometimes used in electrochemistry. One faraday is the magnitude of the charge of one mole of elementary charges, i.e. approximately 96,485 C.
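A small sketch of how these units relate numerically follows. The constants are the exact values fixed by the 2019 SI redefinition; the variable names are mine, and the snippet only illustrates the unit relations mentioned in this section.

```python
# Unit relations discussed above, using the exact SI-defined constants.
ELEMENTARY_CHARGE = 1.602176634e-19  # e in coulombs (exact by definition)
AVOGADRO = 6.02214076e23             # mol^-1 (exact by definition)

ampere_hour = 1.0 * 3600.0                     # 1 A flowing for 3600 s = 3600 C
faraday = ELEMENTARY_CHARGE * AVOGADRO         # charge of one mole of elementary charges
electrons_per_coulomb = 1.0 / ELEMENTARY_CHARGE

print(f"1 A*h = {ampere_hour:.0f} C")
print(f"1 faraday ≈ {faraday:.3f} C")                          # ≈ 96485.332 C
print(f"elementary charges per coulomb ≈ {electrons_per_coulomb:.3e}")
```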
History
From ancient times, people were familiar with four types of phenomena that today would all be explained using the concept of electric charge: (a) lightning, (b) the torpedo fish (or electric ray), (c) St Elmo's Fire, and (d) that amber rubbed with fur would attract small, light objects. The first account of the amber effect is often attributed to the ancient Greek mathematician Thales of Miletus, who lived from c. 624 to c. 546 BC, but there are doubts about whether Thales left any writings; his account about amber is known from an account written in the early 200s AD. This account can be taken as evidence that the phenomenon was known since at least c. 600 BC, but Thales explained this phenomenon as evidence for inanimate objects having a soul. In other words, there was no indication of any conception of electric charge. More generally, the ancient Greeks did not understand the connections among these four kinds of phenomena. The Greeks observed that the charged amber buttons could attract light objects such as hair. They also found that if they rubbed the amber for long enough, they could even get an electric spark to jump, but there is also a claim that no mention of electric sparks appeared until the late 17th century. This property derives from the triboelectric effect.
In the late 1100s, the substance jet, a compacted form of coal, was noted to have an amber effect, and in the middle of the 1500s, Girolamo Fracastoro discovered that diamond also showed this effect. Some efforts were made by Fracastoro and others, especially Gerolamo Cardano, to develop explanations for this phenomenon.
In contrast to astronomy, mechanics, and optics, which had been studied quantitatively since antiquity, the start of ongoing qualitative and quantitative research into electrical phenomena can be marked with the publication of De Magnete by the English scientist William Gilbert in 1600. In this book, there was a small section where Gilbert returned to the amber effect (as he called it) in addressing many of the earlier theories, and coined the Neo-Latin word electrica (from ἤλεκτρον (ēlektron), the Greek word for amber). The Latin word was translated into English as electrics. Gilbert is also credited with the term electrical, while the term electricity came later, first attributed to Sir Thomas Browne in his Pseudodoxia Epidemica from 1646. (For more linguistic details see Etymology of electricity.) Gilbert hypothesized that this amber effect could be explained by an effluvium (a small stream of particles that flows from the electric object, without diminishing its bulk or weight) that acts on other objects. This idea of a material electrical effluvium was influential in the 17th and 18th centuries. It was a precursor to ideas developed in the 18th century about "electric fluid" (Dufay, Nollet, Franklin) and "electric charge".
Around 1663 Otto von Guericke invented what was probably the first electrostatic generator, but he did not recognize it primarily as an electrical device and only conducted minimal electrical experiments with it. Other European pioneers were Robert Boyle, who in 1675 published the first book in English that was devoted solely to electrical phenomena. His work was largely a repetition of Gilbert's studies, but he also identified several more "electrics", and noted mutual attraction between two bodies.
In 1729 Stephen Gray was experimenting with static electricity, which he generated using a glass tube. He noticed that a cork, used to protect the tube from dust and moisture, also became electrified (charged). Further experiments (e.g., extending the cork by putting thin sticks into it) showed—for the first time—that electrical effluvia (as Gray called it) could be transmitted (conducted) over a distance. Gray managed to transmit charge with twine (765 feet) and wire (865 feet). Through these experiments, Gray discovered the importance of different materials, which facilitated or hindered the conduction of electrical effluvia. John Theophilus Desaguliers, who repeated many of Gray's experiments, is credited with coining the terms conductors and insulators to refer to the effects of different materials in these experiments. Gray also discovered electrical induction (i.e., where charge could be transmitted from one object to another without any direct physical contact). For example, he showed that by bringing a charged glass tube close to, but not touching, a lump of lead that was sustained by a thread, it was possible to make the lead become electrified (e.g., to attract and repel brass filings). He attempted to explain this phenomenon with the idea of electrical effluvia.
Gray's discoveries introduced an important shift in the historical development of knowledge about electric charge. The fact that electrical effluvia could be transferred from one object to another, opened the theoretical possibility that this property was not inseparably connected to the bodies that were electrified by rubbing. In 1733 Charles François de Cisternay du Fay, inspired by Gray's work, made a series of experiments (reported in Mémoires de l'Académie Royale des Sciences), showing that more or less all substances could be 'electrified' by rubbing, except for metals and fluids and proposed that electricity comes in two varieties that cancel each other, which he expressed in terms of a two-fluid theory. When glass was rubbed with silk, du Fay said that the glass was charged with vitreous electricity, and, when amber was rubbed with fur, the amber was charged with resinous electricity. In contemporary understanding, positive charge is now defined as the charge of a glass rod after being rubbed with a silk cloth, but it is arbitrary which type of charge is called positive and which is called negative. Another important two-fluid theory from this time was proposed by Jean-Antoine Nollet (1745).
Up until about 1745, the main explanation for electrical attraction and repulsion was the idea that electrified bodies gave off an effluvium.
Benjamin Franklin started electrical experiments in late 1746, and by 1750 had developed a one-fluid theory of electricity, based on an experiment that showed that a rubbed glass received the same, but opposite, charge strength as the cloth used to rub the glass. Franklin imagined electricity as being a type of invisible fluid present in all matter and coined the term itself (as well as battery and some others); for example, he believed that it was the glass in a Leyden jar that held the accumulated charge. He posited that rubbing insulating surfaces together caused this fluid to change location, and that a flow of this fluid constitutes an electric current. He also posited that when matter contained an excess of the fluid it was positively charged and when it had a deficit it was negatively charged. He identified the term positive with vitreous electricity and negative with resinous electricity after performing an experiment with a glass tube he had received from his overseas colleague Peter Collinson. The experiment had participant A charge the glass tube and participant B receive a shock to the knuckle from the charged tube. Franklin identified participant B to be positively charged after having been shocked by the tube. There is some ambiguity about whether William Watson independently arrived at the same one-fluid explanation around the same time (1747). Watson, after seeing Franklin's letter to Collinson, claims that he had presented the same explanation as Franklin in spring 1747. Franklin had studied some of Watson's works prior to making his own experiments and analysis, which was probably significant for Franklin's own theorizing. One physicist suggests that Watson first proposed a one-fluid theory, which Franklin then elaborated further and more influentially. A historian of science argues that Watson missed a subtle difference between his ideas and Franklin's, so that Watson misinterpreted his ideas as being similar to Franklin's. In any case, there was no animosity between Watson and Franklin, and the Franklin model of electrical action, formulated in early 1747, eventually became widely accepted at that time. After Franklin's work, effluvia-based explanations were rarely put forward.
It is now known that the Franklin model was fundamentally correct. There is only one kind of electrical charge, and only one variable is required to keep track of the amount of charge.
Until 1800 it was only possible to study conduction of electric charge by using an electrostatic discharge. In 1800 Alessandro Volta was the first to show that charge could be maintained in continuous motion through a closed path.
In 1833, Michael Faraday sought to remove any doubt that electricity is identical, regardless of the source by which it is produced. He discussed a variety of known forms, which he characterized as common electricity (e.g., static electricity, piezoelectricity, magnetic induction), voltaic electricity (e.g., electric current from a voltaic pile), and animal electricity (e.g., bioelectricity).
In 1838, Faraday raised a question about whether electricity was a fluid or fluids or a property of matter, like gravity. He investigated whether matter could be charged with one kind of charge independently of the other. He came to the conclusion that electric charge was a relation between two or more bodies, because he could not charge one body without having an opposite charge in another body.
In 1838, Faraday also put forth a theoretical explanation of electric force, while expressing neutrality about whether it originates from one, two, or no fluids. He focused on the idea that the normal state of particles is to be nonpolarized, and that when polarized, they seek to return to their natural, nonpolarized state.
In developing a field theory approach to electrodynamics (starting in the mid-1850s), James Clerk Maxwell stops considering electric charge as a special substance that accumulates in objects, and starts to understand electric charge as a consequence of the transformation of energy in the field. This pre-quantum understanding considered magnitude of electric charge to be a continuous quantity, even at the microscopic level.
Role of charge in static electricity
Static electricity refers to the electric charge of an object and the related electrostatic discharge when two objects are brought together that are not at equilibrium. An electrostatic discharge creates a change in the charge of each of the two objects.
Electrification by sliding
When a piece of glass and a piece of resin—neither of which exhibit any electrical properties—are rubbed together and left with the rubbed surfaces in contact, they still exhibit no electrical properties. When separated, they attract each other.
A second piece of glass rubbed with a second piece of resin, then separated and suspended near the former pieces of glass and resin causes these phenomena:
The two pieces of glass repel each other.
Each piece of glass attracts each piece of resin.
The two pieces of resin repel each other.
This attraction and repulsion is an electrical phenomenon, and the bodies that exhibit them are said to be electrified, or electrically charged. Bodies may be electrified in many other ways, as well as by sliding. The electrical properties of the two pieces of glass are similar to each other but opposite to those of the two pieces of resin: The glass attracts what the resin repels and repels what the resin attracts.
If a body electrified in any manner whatsoever behaves as the glass does, that is, if it repels the glass and attracts the resin, the body is said to be vitreously electrified, and if it attracts the glass and repels the resin it is said to be resinously electrified. All electrified bodies are either vitreously or resinously electrified.
An established convention in the scientific community defines vitreous electrification as positive, and resinous electrification as negative. The exactly opposite properties of the two kinds of electrification justify our indicating them by opposite signs, but the application of the positive sign to one rather than to the other kind must be considered as a matter of arbitrary convention—just as it is a matter of convention in mathematical diagram to reckon positive distances towards the right hand.
Role of charge in electric current
Electric current is the flow of electric charge through an object. The most common charge carriers are the positively charged proton and the negatively charged electron. The movement of any of these charged particles constitutes an electric current. In many situations, it suffices to speak of the conventional current without regard to whether it is carried by positive charges moving in the direction of the conventional current or by negative charges moving in the opposite direction. This macroscopic viewpoint is an approximation that simplifies electromagnetic concepts and calculations.
At the opposite extreme, if one looks at the microscopic situation, one sees there are many ways of carrying an electric current, including: a flow of electrons; a flow of electron holes that act like positive particles; and both negative and positive particles (ions or other charged particles) flowing in opposite directions in an electrolytic solution or a plasma.
Beware that, in the common and important case of metallic wires, the direction of the conventional current is opposite to the drift velocity of the actual charge carriers; i.e., the electrons. This is a source of confusion for beginners.
Conservation of electric charge
The total electric charge of an isolated system remains constant regardless of changes within the system itself. This law is inherent to all processes known to physics and can be derived in a local form from gauge invariance of the wave function. The conservation of charge results in the charge-current continuity equation. More generally, the rate of change in charge density ρ within a volume of integration V is equal to the area integral over the current density J through the closed surface S = ∂V, which is in turn equal to the net current I:
$-\frac{\mathrm{d}}{\mathrm{d}t} \int_V \rho \, \mathrm{d}V = \oint_S \mathbf{J} \cdot \mathrm{d}\mathbf{S} = I$
Thus, the conservation of electric charge, as expressed by the continuity equation, gives the result:
$I = -\frac{\mathrm{d}q(t)}{\mathrm{d}t}$
The charge transferred between times $t_\mathrm{i}$ and $t_\mathrm{f}$ is obtained by integrating both sides:
$\int_{t_\mathrm{i}}^{t_\mathrm{f}} I \, \mathrm{d}t = q(t_\mathrm{i}) - q(t_\mathrm{f})$
where I is the net outward current through a closed surface and q is the electric charge contained within the volume defined by the surface.
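To make the last relation concrete, the following sketch integrates an arbitrary current waveform numerically to obtain the transferred charge; the decaying pulse is an invented example, not data from the article.

```python
# Numerical illustration of q = ∫ I dt for an invented current waveform.
import numpy as np

t = np.linspace(0.0, 2.0, 2001)   # time in seconds
current = 0.5 * np.exp(-t)        # amperes: a decaying current pulse

# Trapezoidal rule for the integral of I(t) over the whole interval.
transferred = np.sum(0.5 * (current[1:] + current[:-1]) * np.diff(t))
print(f"charge transferred ≈ {transferred:.4f} C")
# Analytic check: 0.5 * (1 - exp(-2)) ≈ 0.4323 C
```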
Relativistic invariance
Aside from the properties described in articles about electromagnetism, electric charge is a relativistic invariant. This means that any particle that has electric charge q has the same electric charge regardless of how fast it is travelling. This property has been experimentally verified by showing that the electric charge of one helium nucleus (two protons and two neutrons bound together in a nucleus and moving around at high speeds) is the same as that of two deuterium nuclei (one proton and one neutron bound together, but moving much more slowly than they would if they were in a helium nucleus).
See also
SI electromagnetism units
Color charge
Partial charge
Positron or antielectron is an antiparticle or antimatter counterpart of the electron
References
External links
How fast does a charge decay?
Chemical properties
Conservation laws
Electricity
Flavour (particle physics)
Spintronics
Electromagnetic quantities | Electric charge | Physics,Chemistry,Materials_science,Mathematics | 4,614 |
36,150,729 | https://en.wikipedia.org/wiki/C2H6ClO2PS | The molecular formula C2H6ClO2PS (molar mass: 160.56 g/mol, exact mass: 159.9515 u) may refer to:
Dimethyl chlorothiophosphate
Dimethyl phosphorochloridothioate | C2H6ClO2PS | Chemistry | 76 |
24,472,520 | https://en.wikipedia.org/wiki/Billon%20%28alloy%29 | Billon () is an alloy of a precious metal (most commonly silver, but also gold) with a majority base metal content (such as copper). It is used chiefly for making coins, medals, and token coins.
The word comes from the French , which means 'log'.
History
The use of billon coins dates from ancient Greece and continued through the Middle Ages. During the sixth and fifth centuries BC, some cities on Lesbos used coins made of 60% copper and 40% silver. In both ancient times and the Middle Ages, leaner mixtures were adopted, with less than 2% silver content.
Billon coins are perhaps best known from the Roman Empire, where progressive debasements of the Roman denarius and the Roman provincial tetradrachm in the third century AD led to declining silver and increasing bronze content in these denominations of coins. Eventually, by the third quarter of the third century AD, these coins were almost entirely bronze, with only a thin coating or even a wash of silver.
Examples of United States coins that are considered to be billon are the Jefferson nickels issued from 1942 through 1945.
In order to save nickel and copper for the war effort, the composition of the nickel was changed to an alloy of 35% silver, 56% copper, and 9% manganese. These coins are easily identifiable by their color and by the presence of a large mintmark on top of the dome of Monticello.
See also
Potin
Coinage metals
Bullion
List of alloys
Antoninianus
Shakudō
References
Precious metal alloys
Currency production
Silver
Numismatic terminology
Coinage metals and alloys | Billon (alloy) | Chemistry | 331 |
61,187,256 | https://en.wikipedia.org/wiki/Charge%20modulation%20spectroscopy | Charge modulation spectroscopy is an electro-optical spectroscopy technique tool. It is used to study the charge carrier behavior of organic field-effect transistors. It measures the charge introduced optical transmission variation by directly probing the accumulation charge at the burning interface of semiconductor and dielectric layer where the conduction channel forms.
Principles
Unlike ultraviolet–visible spectroscopy, which measures absorbance, charge modulation spectroscopy measures the charge-induced variation in optical transmission. In other words, it reveals the new features in optical transmission introduced by charges. In this setup, there are mainly four components: a lamp, a monochromator, a photodetector and a lock-in amplifier. The lamp and monochromator are used for generating and selecting the wavelength. The selected wavelength passes through the transistor, and the transmitted light is recorded by the photodiode. When the signal-to-noise ratio is very low, the signal can be modulated and recovered with a lock-in amplifier.
In the experiment, a direct current bias plus an alternating current bias are applied to the organic field-effect transistor. Charge carriers accumulate at the interface between the dielectric and the semiconductor (within a layer usually a few nanometers thick). With the appearance of the accumulated charge, the intensity of the transmitted light changes. The variation of the light intensity (ΔT) is then collected through the photodetector and lock-in amplifier. The charge modulation frequency is given to the lock-in amplifier as the reference.
Modulate charge at the organic field-effect transistor
There are four typical organic field-effect transistor architectures: top-gate, bottom-contacts; bottom-gate, top-contacts; bottom-gate, bottom-contacts; and top-gate, top-contacts.
In order to create the accumulation charge layer, a direct current voltage is applied to the gate of the organic field-effect transistor (negative for a p-type transistor, positive for an n-type transistor). In order to modulate the charge, an AC voltage is applied between the gate and source. It is important to note that only mobile charge can follow the modulation and that the modulation frequency given to the lock-in amplifier has to be synchronous with this AC bias.
Charge modulation spectra
The charge modulation spectroscopy signal can be defined as the differential transmission divided by the total transmission (ΔT/T). By modulating the mobile carriers, both increased-transmission and decreased-transmission features can be observed. The former relates to bleaching and the latter to charge absorption and electrically induced absorption (electro-absorption). The charge modulation spectroscopy spectrum is an overlap of charge-induced and electro-absorption features. In transistors, the electro-absorption is more significant where the voltage drop is high. There are several ways to identify the electro-absorption contribution, such as detecting the second harmonic (2f) of the modulation, or probing at the depletion region.
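As a rough illustration of how the ΔT/T signal and its second harmonic are recovered, the sketch below simulates a detector trace modulated at the gate frequency and demodulates it with a software lock-in. All values, names and the synthetic waveform are assumptions for illustration; real instruments use hardware lock-in amplifiers and calibrated optics.

```python
# Toy software lock-in recovering dT/T at the modulation frequency f and at 2f
# (the second harmonic used to identify electro-absorption). Synthetic data only.
import numpy as np

fs, f_mod = 100000.0, 990.0                 # sample rate and gate modulation (Hz)
t = np.arange(0, 1.0, 1.0 / fs)

T0 = 1.0                                    # total transmission (arbitrary units)
dT_over_T_true = 2e-4                       # assumed first-harmonic modulation depth
ea_2f_true = 5e-5                           # assumed second-harmonic (electro-absorption-like) term
noise = 1e-3 * np.random.randn(t.size)

detector = T0 * (1
                 + dT_over_T_true * np.sin(2 * np.pi * f_mod * t)
                 + ea_2f_true * np.sin(2 * np.pi * 2 * f_mod * t)) + noise

def lockin_amplitude(signal, freq, time):
    """Dual-phase software lock-in: amplitude of `signal` at `freq`."""
    x = 2.0 * np.mean(signal * np.sin(2 * np.pi * freq * time))
    y = 2.0 * np.mean(signal * np.cos(2 * np.pi * freq * time))
    return np.hypot(x, y)

print("dT/T at f  ≈", lockin_amplitude(detector, f_mod, t) / T0)      # ≈ 2e-4
print("dT/T at 2f ≈", lockin_amplitude(detector, 2 * f_mod, t) / T0)  # ≈ 5e-5
```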
Bleaching and charge absorption
When the accumulated charge carriers deplete the ground state of the neutral polymer, there is more transmission at the ground-state absorption energies (ΔT > 0). This is called bleaching. With the excess holes or electrons on the polymer, there will be new transitions at low energy levels, and therefore the transmission intensity at those energies is reduced (ΔT < 0); this is related to charge absorption.
Electro-absorption
The electro-absorption is a type of Stark effect in the neutral polymer; it is predominant at the electrode edges, where there is a strong voltage drop. Electro-absorption can be observed from the second-harmonic charge modulation spectroscopy spectra.
Charge modulation microscopy
Charge modulation microscopy is a newer technique which combines confocal microscopy with charge modulation spectroscopy. Unlike charge modulation spectroscopy, which probes the whole transistor, charge modulation microscopy gives local spectra and maps. Thanks to this technique, the channel spectra and electrode spectra can be obtained individually. Charge modulation spectra on a more local scale (around a submicrometer) can be observed without a significant electro-absorption feature, although this depends on the resolution of the optical microscope.
The high resolution of charge modulation microscopy allows mapping of the charge carrier distribution in the active channel of the organic field-effect transistor. In other words, a functional carrier morphology can be observed. It is well known that the local carrier density can be related to the polymer microstructure. Based on density functional theory calculations, polarized charge modulation microscopy can selectively map the charge transport associated with the relative direction of the transition dipole moment. The local direction can be correlated to the orientational order of polymer domains. More ordered domains show higher carrier mobility in the organic field-effect transistor device.
See also
Confocal microscopy
Organic field-effect transistor
Stark effect
Ultraviolet–visible spectroscopy
References
Further reading
Spectroscopy
Scientific techniques
Microscopy | Charge modulation spectroscopy | Physics,Chemistry | 955 |
74,433,644 | https://en.wikipedia.org/wiki/Rhenium%20trioxynitrate | Rhenium trioxynitrate, also known as rhenium(VII) trioxide nitrate, is a chemical compound with the formula ReO3NO3. It is a white solid that readily hydrolyzes in moist air.
Preparation and properties
Rhenium trioxynitrate is prepared by the reaction of ReO3Cl (produced by reacting rhenium trioxide and chlorine) and dinitrogen pentoxide:
ReO3Cl + N2O5 → ReO3NO3 + NO2Cl
The ReO3Cl can be replaced with rhenium heptoxide; however, this produces an impure product. This compound reacts with water to produce perrhenic acid and nitric acid.
When heated above 75 °C, it decomposes to rhenium heptoxide, nitrogen dioxide, and oxygen:
4 ReO3NO3 → 2 Re2O7 + 4 NO2 + O2
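As a quick check that the equation above is balanced (4 Re, 4 N and 24 O on each side), the atoms on both sides can be counted programmatically; the following Python sketch is purely illustrative.

```python
from collections import Counter

# Sketch: count atoms on each side of the decomposition written above
# to confirm that it is balanced (4 Re, 4 N and 24 O on both sides).
def total_atoms(species):
    """species: list of (composition dict, stoichiometric coefficient)."""
    totals = Counter()
    for composition, coeff in species:
        for element, count in composition.items():
            totals[element] += coeff * count
    return totals

ReO3NO3 = {"Re": 1, "N": 1, "O": 6}
Re2O7 = {"Re": 2, "O": 7}
NO2 = {"N": 1, "O": 2}
O2 = {"O": 2}

reactants = total_atoms([(ReO3NO3, 4)])
products = total_atoms([(Re2O7, 2), (NO2, 4), (O2, 1)])
print(reactants == products)   # True
```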
A graphite intercalation compound can be produced by reacting a mixture of rhenium trioxynitrate and dinitrogen pentoxide with graphite.
Structure
X-ray diffraction and IR spectroscopic evidence rejects the formulations NO2+ReO4– or Re2O7·N2O5, but instead suggests a polymeric structure with a monodentate nitrate ligand.
References
Rhenium compounds
Nitrates | Rhenium trioxynitrate | Chemistry | 289 |
30,747,791 | https://en.wikipedia.org/wiki/High-refractive-index%20polymer | A high-refractive-index polymer (HRIP) is a polymer that has a refractive index greater than 1.50.
Such materials are required for anti-reflective coating and photonic devices such as light emitting diodes (LEDs) and image sensors. The refractive index of a polymer is based on several factors which include polarizability, chain flexibility, molecular geometry and the polymer backbone orientation.
As of 2004, the highest refractive index for a polymer was 1.76. Substituents with high molar refractions or high-n nanoparticles in a polymer matrix have been introduced to increase the refractive index of polymers.
Properties
Refractive index
A typical polymer has a refractive index of 1.30–1.70, but a higher refractive index is often required for specific applications. The refractive index is related to the molar refractivity, structure and weight of the monomer. In general, high molar refractivity and low molar volumes increase the refractive index of the polymer.
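The Lorentz–Lorenz equation (listed under See also) is the standard way to make this relation quantitative: the molar refraction R and the molar volume V_m determine n through (n² − 1)/(n² + 2) = R/V_m. The Python sketch below solves this for n; the numeric inputs are hypothetical placeholders, not data for any real polymer.

```python
import math

# Minimal sketch of the Lorentz-Lorenz relation,
# (n^2 - 1) / (n^2 + 2) = R / V_m, with V_m = M / rho.
# The numeric inputs below are illustrative placeholders, not measured data.

def refractive_index(molar_refraction_cm3, molar_mass_g, density_g_cm3):
    v_m = molar_mass_g / density_g_cm3      # molar volume of the repeat unit
    ratio = molar_refraction_cm3 / v_m      # R / V_m
    if not 0 < ratio < 1:
        raise ValueError("R/V_m must lie between 0 and 1")
    return math.sqrt((1 + 2 * ratio) / (1 - ratio))

# Higher molar refraction and lower molar volume both raise n, as stated above.
print(refractive_index(molar_refraction_cm3=30.0,   # hypothetical repeat unit
                       molar_mass_g=100.0,
                       density_g_cm3=1.2))           # prints roughly 1.64
```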
Optical properties
Optical dispersion is an important property of an HRIP. It is characterized by the Abbe number: a high refractive index material will generally have a small Abbe number, i.e. a high optical dispersion. Many applications require a low birefringence along with a high refractive index. This can be achieved by using different functional groups in the initial monomer to make the HRIP. Aromatic monomers both increase the refractive index and decrease the optical anisotropy, and thus the birefringence.
A high clarity (optical transparency) is also desired in a high refractive index polymer. The clarity is dependent on the refractive indexes of the polymer and of the initial monomer.
Thermal stability
When looking at thermal stability, the typical variables measured include the glass transition temperature, initial decomposition temperature, degradation temperature and the melting temperature range. Thermal stability can be measured by thermogravimetric analysis and differential scanning calorimetry. Polyesters are considered thermally stable, with a degradation temperature of 410 °C. The decomposition temperature changes depending on the substituent attached to the monomer used in the polymerization of the high refractive index polymer; thus, longer alkyl substituents result in lower thermal stability.
Solubility
Most applications favor polymers which are soluble in as many solvents as possible. Highly refractive polyesters and polyimides are soluble in common organic solvents such as dichloromethane, methanol, hexanes, acetone and toluene.
Synthesis
The synthesis route depends on the HRIP type. The Michael polyaddition is used for a polyimide because it can be carried out at room temperature and can be used for step-growth polymerization. This synthesis was first achieved with polyimidothioethers, resulting in optically transparent polymers with a high refractive index. Polycondensation reactions are also commonly used to make high refractive index polymers, such as polyesters and polyphosphonates.
Types
High refractive indices have been achieved either by introducing substituents with high molar refractions (intrinsic HRIPs) or by combining high-n nanoparticles with polymer matrixes (HRIP nanocomposites).
Intrinsic HRIP
Sulfur-containing substituents, including linear thioether and sulfone, cyclic thiophene, thiadiazole and thianthrene, are the most commonly used groups for increasing the refractive index of a polymer. Polymers with sulfur-rich thianthrene and tetrathiaanthracene moieties exhibit n values above 1.72, depending on the degree of molecular packing.
Halogen elements, especially bromine and iodine, were the earliest components used for developing HRIPs. In 1992, Gaudiana et al. reported a series of polymethylacrylate compounds containing lateral brominated and iodinated carbazole rings. They had refractive indices of 1.67–1.77 depending on the components and numbers of the halogen substituents. However, recent applications of halogen elements in microelectronics have been severely limited by the WEEE directive and RoHS legislation adopted by the European Union to reduce potential pollution of the environment.
Phosphorus-containing groups, such as phosphonates and phosphazenes, often exhibit high molar refractivity and optical transmittance in the visible light region. Polyphosphonates have high refractive indices due to the phosphorus moiety even if they have chemical structures analogous to polycarbonates. Shaver et al. reported a series of polyphosphonates with varying backbones, reaching the highest refractive index reported for polyphosphonates at 1.66. In addition, polyphosphonates exhibit good thermal stability and optical transparency; they are also suitable for casting into plastic lenses.
Organometallic components result in HRIPs with good film forming ability and relatively low optical dispersion. Polyferrocenylsilanes and polyferrocenes containing phosphorus spacers and phenyl side chains show unusually high n values (n=1.74 and n=1.72). They might be good candidates for all-polymer photonic devices because of their intermediate optical dispersion between organic polymers and inorganic glasses.
HRIP nanocomposite
Hybrid techniques which combine an organic polymer matrix with highly refractive inorganic nanoparticles can result in high n values. The factors affecting the refractive index of a high-n nanocomposite include the characteristics of the polymer matrix, of the nanoparticles, and of the hybrid technology between the inorganic and organic components. The refractive index of a nanocomposite can be estimated as n_comp = φ_p·n_p + φ_org·n_org, where n_comp, n_p and n_org stand for the refractive indices of the nanocomposite, nanoparticle and organic matrix, respectively, and φ_p and φ_org represent the volume fractions of the nanoparticles and organic matrix, respectively. The nanoparticle load is also important in designing HRIP nanocomposites for optical applications, because excessive concentrations increase the optical loss and decrease the processability of the nanocomposites. The choice of nanoparticles is often influenced by their size and surface characteristics. In order to increase optical transparency and reduce Rayleigh scattering of the nanocomposite, the diameter of the nanoparticle should be below 25 nm. Direct mixing of nanoparticles with the polymer matrix often results in the undesirable aggregation of nanoparticles – this is avoided by modifying their surface. The most commonly used nanoparticles for HRIPs include TiO2 (anatase, n=2.45; rutile, n=2.70), ZrO2 (n=2.10), amorphous silicon (n=4.23), PbS (n=4.20) and ZnS (n=2.36). Polyimides have high refractive indices and thus are often used as the matrix for high-n nanoparticles. The resulting nanocomposites exhibit a tunable refractive index ranging from 1.57 to 1.99.
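The sketch below evaluates the volume-fraction mixing rule quoted above for a hypothetical TiO2/polyimide nanocomposite; the matrix index and the particle loadings are illustrative assumptions rather than reported values.

```python
# Minimal sketch of the volume-fraction mixing rule quoted above,
# n_comp = phi_p * n_p + phi_org * n_org. Particle loadings and the
# polyimide matrix index are illustrative assumptions, not measured values.

def nanocomposite_index(n_particle, n_matrix, particle_volume_fraction):
    phi_p = particle_volume_fraction
    if not 0.0 <= phi_p <= 1.0:
        raise ValueError("volume fraction must lie between 0 and 1")
    return phi_p * n_particle + (1.0 - phi_p) * n_matrix

n_tio2_rutile = 2.70   # from the list of common nanoparticles above
n_polyimide = 1.70     # assumed matrix index for illustration

for phi in (0.0, 0.1, 0.3, 0.5):
    n_comp = nanocomposite_index(n_tio2_rutile, n_polyimide, phi)
    print(f"phi = {phi:.1f}: n_comp is about {n_comp:.2f}")
# Raising the particle loading raises n_comp, but (as noted above) excessive
# loadings increase optical loss and hurt processability.
```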
Applications
Image sensors
A microlens array is a key component of optoelectronics, optical communications, CMOS image sensors and displays. Polymer-based microlenses are easier to make and are more flexible than conventional glass-based lenses. The resulting devices use less power, are smaller in size and are cheaper to produce.
Lithography
Another application of HRIPs is in immersion lithography. As of 2009, this was a new technique for circuit manufacturing that uses both photoresists and high refractive index fluids. The photoresist needs to have an n value greater than 1.90. It has been shown that non-aromatic, sulfur-containing HRIPs are the best materials for an optical photoresist system.
LEDs
Light-emitting diodes (LEDs) are a common solid-state light source. High-brightness LEDs (HBLEDs) are often limited by the relatively low light extraction efficiency due to the mismatch of the refractive indices between the LED material (GaN, n=2.5) and the organic encapsulant (epoxy or silicone, n=1.5). Higher light outputs can be achieved by using an HRIP as the encapsulant.
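To see why the encapsulant's refractive index matters, the following Python sketch applies the textbook escape-cone estimate for a GaN/encapsulant interface; this simple model, which ignores Fresnel losses and photon recycling, is an assumption of the example, not a calculation taken from the source.

```python
import math

# Minimal sketch (textbook escape-cone estimate, illustrative only):
# fraction of isotropically emitted light inside GaN (index n1) that falls
# within the critical angle for escape into an encapsulant of index n2.

def escape_cone_fraction(n1, n2):
    theta_c = math.asin(min(n2 / n1, 1.0))   # critical angle at the interface
    return (1.0 - math.cos(theta_c)) / 2.0   # solid-angle fraction of one cone

n_gan = 2.5
for n_enc in (1.0, 1.5, 1.8, 2.0):
    f = escape_cone_fraction(n_gan, n_enc)
    print(f"encapsulant n = {n_enc:.1f}: escape-cone fraction is about {f:.1%}")
# A higher-index encapsulant widens the escape cone, which is why HRIP
# encapsulants can raise the light output of high-brightness LEDs.
```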
See also
Refractive index
Refractometer
Abbe number
Optoelectronics
Polarizability
Birefringence
Lorentz-Lorenz equation
Dispersion
Optical anisotropy
Nanocomposite
Image sensor
Immersion lithography
Organic light emitting diode (OLED)
References
Further reading
Optical materials
Polymers | High-refractive-index polymer | Physics,Chemistry,Materials_science | 1,830 |
35,188,284 | https://en.wikipedia.org/wiki/Taylor%20Hobson | Taylor Hobson is an English company founded in 1886 and located in Leicester, England. Originally a manufacturer of still camera and cine lenses, the company now manufactures precision metrology instruments—in particular, profilometers for the analysis of surface textures and forms.
Taylor Hobson is now part of Ametek's Ultra Precision Technologies Group.
History
Early history of the company
1886 – Company founded by Thomas Smithies Taylor, an optician, and his brother Herbert William Taylor, an engineer, to make lenses. The company was initially based in Slate Street but subsequently moved to Stoughton Street Works in Leicester.
1887 – W.S.H Hobson joins the company as the sales face of Taylor, Taylor & Hobson ("TTH").
1893 – The company produces its first Cooke lens. The name Cooke came to TTH after an agreement with Cooke of York, who licensed some of their designs to TTH.
1902 – A third brother, J. Ronald Taylor opens a branch in New York, with the principal customer being the Eastman Kodak Company.
1914 – The company is reported as manufacturers of "Photographic Lenses and other Optical Goods, Engraving Machinery and other Fine Tools, Golf Ball Moulds", and "Timerecording Clocks".
1914–1918 – The Aviar lenses, developed for aerial photography, contributed to the Allied air force in World War I. The company designed and manufactured machines for the accurate polishing of lenses, making it possible to produce large numbers of such lenses for binoculars. William Taylor devised new methods of lens manufacture for aerial photography and produced lenses for range finders, gun sights, rifle bores, and telescopes.
1919 – William Taylor was awarded an OBE. The King visited the Stoughton Street Works on 10 June 1919.
1932 – TTH produced the first Cooke zoom lens for cinematography. William Taylor was nominated as President of the Institute of Mechanical Engineers.
1936 – William Taylor was nominated as Honorary Life Member of the Institute of Mechanical Engineers.
1937 – William Taylor died.
1938 – Thomas Smithies Taylor resigned as director.
1939 – Taylor Hobson supplied over 80% of the world's lenses for film studios.
How surface texture analysis was invented
William Taylor was convinced that to be a leader in optical lenses, he needed the best possible understanding and control of the surface quality of his lenses. As a result, he started to design instruments capable of evaluating surface texture and roundness. When manufacturers in other fields became aware of these instruments, they asked to purchase them. He refused, as the instruments were crucial to Taylor Hobson's supremacy in optical lenses. Once the company later decided to market the instruments, Taylor Hobson became a major instrument manufacturer.
1941 – Taylor Hobson creates the first true surface texture measuring instrument, the Talysurf 1 opening the way to surface finish analysis.
1946 – Taylor Hobson becomes part of the Rank Organisation.
1949 – Taylor Hobson invents the world's first roundness measuring instrument, the Talyrond 1.
Surface texture analysis becomes industrial matter
1951 – Taylor Hobson invents a micro-alignment telescope.
1965 – Taylor Hobson introduces the Surtronic range, a hand-held roughness meter that is easier to use on the shop floor thanks to a skid pick-up. The skid pick-up loses the large waves of the surface texture (waviness and form) because the skid follows the general form of the surface, but it has the major advantage of allowing easy assessment of roughness without having to spend time levelling a datum line to set the sensor in range.
1966 – Taylor Hobson introduces the TalyStep, which was a reference instrument for about two decades for the ultra-precise, low-contact-force measurement of step height, with applications in the then-rising semiconductor industry.
1969 – Taylor Hobson acquired the optical company Hilger and Watts.
1984 – Taylor Hobson introduces the Form Talysurf, that associates a range of several millimetres to a nanometric resolution, opening the way of measuring both roughness and form at the same time.
1992 – Taylor Hobson receives the Queen's award for technological achievement from the hands of Lord Lieutenant of Leicestershire during an official ceremony.
1995 – The TalyScan, shaped like a computer mouse (not to be confused with the TalyScan 150 and 250 introduced later, which have a more classical design using slides to move the component under a fixed sensor), introduced an original small 3D contact texture scanner.
1996 – The Form TalySurf PGI introduced sub-nanometre accuracy using a phase grating interferometer as the height sensing principle, as an alternative to inductive pick-ups, but still with a contact pick-up touching the surface, and hence independent of the optical properties of the measured surface (as the company's early history demonstrates, this surface can, for instance, be an optical lens).
1996 – Schroeder Ventures acquired Taylor Hobson from the Rank Organisation.
1996 – Taylor Hobson is the first metrology manufacturer to adopt the Mountains software technology from Digital Surf, which combines a desktop-publishing-style reporting tool with the metrology results produced by instruments. The collaboration allowed Taylor Hobson to introduce the TalyMap 3D surface texture analysis software as an option for the TalySurf profilers and the TalyProfile 2D surface texture analysis software as an option for the Surtronic roughness testers.
1998 – The company's historical activity, Cooke lenses, left Taylor Hobson as part of a buy-out that created the company Cooke Optics.
2003 – Taylor Hobson introduces their first optical field profiler (i.e. based on a microscope), the TalySurf CCI as a complement to their existing scanning 3D profilometers based on styli or non-contact single-point sensors.
2004 – Taylor Hobson became part of Ametek's ultra precision technologies group.
2007 – The TalyRond 395, a fully automated roundness and cylindricity instrument, introduced a new trend of automated, multiple measurements (such as surface texture and roundness) on the same instrument.
2008 – Talysurf CCI Lite, an ultra-precision benchtop 3D optical profiler, launched, along with the Talyrond 290 with motorised gauge arm.
2009 – Talymaster launched. It is a brand new inspection concept combining roughness, roundness and contour on a fully automated inspection system.
2010 – CCI MP, CCI HD launched.
2011 – Surtronic R50-R80, Surtronic R100 series launched. A range of roundness products robust enough for the shop floor but accurate enough for any inspection room.
References
External links
Official site
British brands
Engineering mechanics
Lens manufacturers
Manufacturing companies based in Leicester
Materials science organizations
Photography equipment manufacturers of the United Kingdom
Science and technology in Leicestershire | Taylor Hobson | Materials_science,Engineering | 1,394 |
13,790,456 | https://en.wikipedia.org/wiki/Laman%20graph | In graph theory, the Laman graphs are a family of sparse graphs describing the minimally rigid systems of rods and joints in the plane. Formally, a Laman graph is a graph on n vertices such that, for all k, every k-vertex subgraph has at most 2k − 3 edges, and such that the whole graph has exactly 2n − 3 edges. Laman graphs are named after Gerard Laman, of the University of Amsterdam, who in 1970 used them to characterize rigid planar structures.
However, this characterization, the Geiringer–Laman theorem, had already been discovered in 1927 by Hilda Geiringer.
Rigidity
Laman graphs arise in rigidity theory: if one places the vertices of a Laman graph in the Euclidean plane, in general position, there will in general be no simultaneous continuous motion of all the points, other than Euclidean congruences, that preserves the lengths of all the graph edges. A graph is rigid in this sense if and only if it has a Laman subgraph that spans all of its vertices. Thus, the Laman graphs are exactly the minimally rigid graphs, and they form the bases of the two-dimensional rigidity matroids.
If n points in the plane are given, then there are 2n degrees of freedom in their placement (each point has two independent coordinates), but a rigid graph has only three degrees of freedom (the position of a single one of its vertices and the rotation of the remaining graph around that vertex).
Intuitively, adding an edge of fixed length to a graph reduces its number of degrees of freedom by one, so the 2n − 3 edges in a Laman graph reduce the 2n degrees of freedom of the initial point placement to the three degrees of freedom of a rigid graph. However, not every graph with 2n − 3 edges is rigid; the condition in the definition of a Laman graph that no subgraph can have too many edges ensures that each edge contributes to reducing the overall number of degrees of freedom, and is not wasted within a subgraph that is already itself rigid due to its other edges.
Planarity
A pointed pseudotriangulation is a planar straight-line drawing of a graph, with the properties that the outer face is convex, that every bounded face is a pseudotriangle, a polygon with only three convex vertices, and that the edges incident to every vertex span an angle of less than 180 degrees. The graphs that can be drawn as pointed pseudotriangulations are exactly the planar Laman graphs. However, Laman graphs have planar embeddings that are not pseudotriangulations, and there are Laman graphs that are not planar, such as the utility graph K3,3.
Sparsity
A graph is defined as being (k, ℓ)-sparse if every nonempty subgraph with n vertices has at most kn − ℓ edges, and (k, ℓ)-tight if it is (k, ℓ)-sparse and has exactly kn − ℓ edges. Thus, in this notation, the Laman graphs are exactly the (2,3)-tight graphs, and the subgraphs of the Laman graphs are exactly the (2,3)-sparse graphs. The same notation can be used to describe other important families of sparse graphs, including trees, pseudoforests, and graphs of bounded arboricity.
Based on this characterization, it is possible to recognize n-vertex Laman graphs in time O(n²), by simulating a "pebble game" that begins with a graph with n vertices and no edges, with two pebbles placed on each vertex, and performs a sequence of the following two kinds of steps to create all of the edges of the graph:
Create a new directed edge connecting any two vertices that both have two pebbles, and remove one pebble from the start vertex of the new edge.
If an edge points from a vertex v with at most one pebble to another vertex w with at least one pebble, move a pebble from w to v and reverse the edge.
If these operations can be used to construct an orientation of the given graph, then it is necessarily (2,3)-sparse, and vice versa.
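A minimal Python sketch of this (2,3)-pebble game is given below. The function name, data structures and the depth-first pebble search are implementation choices made for illustration; the acceptance rule (gather two pebbles on each endpoint, then spend one for the new edge) follows the two operations described above.

```python
from collections import defaultdict

def is_laman(n, edges):
    """(2,3)-pebble game sketch: return True if the graph on vertices 0..n-1
    with the given edge list is (2,3)-tight, i.e. a Laman graph."""
    if len(edges) != 2 * n - 3:
        return False                    # a Laman graph has exactly 2n - 3 edges
    pebbles = [2] * n                   # two free pebbles on every vertex
    out = defaultdict(set)              # current orientation of accepted edges

    def bring_pebble(root, other):
        """DFS along directed edges from `root` looking for a free pebble
        (pebbles on `other`, the second endpoint of the new edge, are protected).
        If one is found, move it to `root`, reversing every edge on the path."""
        seen = {root}
        parent = {}
        stack = [(root, iter(out[root]))]
        while stack:
            v, it = stack[-1]
            advanced = False
            for w in it:
                if w in seen:
                    continue
                seen.add(w)
                parent[w] = v
                if w != other and pebbles[w] > 0:
                    pebbles[w] -= 1
                    pebbles[root] += 1
                    x = w               # reverse the path root -> ... -> w
                    while x != root:
                        p = parent[x]
                        out[p].discard(x)
                        out[x].add(p)
                        x = p
                    return True
                stack.append((w, iter(out[w])))
                advanced = True
                break
            if not advanced:
                stack.pop()
        return False

    for u, v in edges:
        # Gather two pebbles on each endpoint; if that is impossible the edge
        # would violate (2,3)-sparsity and the whole test fails.
        while pebbles[u] < 2 or pebbles[v] < 2:
            if not ((pebbles[u] < 2 and bring_pebble(u, v)) or
                    (pebbles[v] < 2 and bring_pebble(v, u))):
                return False
        pebbles[u] -= 1                 # spend one pebble for the new edge
        out[u].add(v)                   # and orient it away from u
    return True
```

For example, `is_laman(6, [(0,3),(0,4),(0,5),(1,3),(1,4),(1,5),(2,3),(2,4),(2,5)])` accepts the utility graph K3,3 mentioned above, while any input containing K4 as a subgraph is rejected.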
However, faster algorithms are possible, based on testing whether doubling one edge of the given graph results in a multigraph that is (2,2)-tight (equivalently, whether it can be decomposed into two edge-disjoint spanning trees) and then using this decomposition to check whether the given graph is a Laman graph. Network flow techniques can be used to test whether a planar graph is a Laman graph more quickly.
Henneberg construction
Before Laman's and Geiringer's work, Lebrecht Henneberg characterized the two-dimensional minimally rigid graphs (that is, the Laman graphs) in a different way. Henneberg showed that the minimally rigid graphs on two or more vertices are exactly the graphs that can be obtained, starting from a single edge, by a sequence of operations of the following two types:
Add a new vertex to the graph, together with edges connecting it to two previously existing vertices.
Subdivide an edge of the graph, and add an edge connecting the newly formed vertex to a third previously existing vertex.
A sequence of these operations that forms a given graph is known as a Henneberg construction of the graph.
For instance, the complete bipartite graph K3,3 may be formed using the first operation to form a triangle and then applying the second operation to subdivide each edge of the triangle and connect each subdivision point with the opposite triangle vertex.
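The construction lends itself to a short program: the Python sketch below grows a random Laman graph by repeatedly applying the two Henneberg operations; the function name, vertex labelling and random choices are illustrative assumptions rather than part of Henneberg's original description.

```python
import random

def henneberg_random(n, seed=None):
    """Build a random Laman graph on n >= 2 vertices by Henneberg moves.
    Illustrative sketch; vertex labels and the random choices are arbitrary."""
    rng = random.Random(seed)
    edges = {(0, 1)}                  # start from a single edge
    for v in range(2, n):
        if v == 2 or rng.random() < 0.5:
            # Operation 1: connect the new vertex to two existing vertices
            a, b = rng.sample(range(v), 2)
            edges |= {(a, v), (b, v)}
        else:
            # Operation 2: subdivide an existing edge (a, b) with the new
            # vertex and connect it to a third existing vertex c
            a, b = rng.choice(sorted(edges))
            c = rng.choice([x for x in range(v) if x not in (a, b)])
            edges.discard((a, b))
            edges |= {(a, v), (b, v), (c, v)}
    return sorted(edges)

# Each move adds one vertex and a net of two edges, so |E| = 2n - 3 as required.
print(henneberg_random(6, seed=1))
```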
References
Graph families
Geometric graphs
Mathematics of rigidity | Laman graph | Physics | 1,153 |
66,357,207 | https://en.wikipedia.org/wiki/LY-88329 | LY-88329 is an opioid receptor ligand related to medicines such as pethidine. It has high affinity to the μ-opioid receptor, but unlike structurally related drugs such as 3-methylfentanyl and OPPPP, LY-88329 is a potent opioid antagonist. In animal studies it blocks the effects of morphine and has anorectic action.
See also
Alvimopan
References
Mu-opioid receptor antagonists
3-Hydroxyphenyl compounds
4-Phenylpiperidines | LY-88329 | Chemistry | 117 |
38,390 | https://en.wikipedia.org/wiki/Dementia | Dementia is a syndrome associated with many neurodegenerative diseases, characterized by a general decline in cognitive abilities that affects a person's ability to perform everyday activities. This typically involves problems with memory, thinking, behavior, and motor control. Aside from memory impairment and a disruption in thought patterns, the most common symptoms of dementia include emotional problems, difficulties with language, and decreased motivation. The symptoms may be described as occurring in a continuum over several stages. Dementia ultimately has a significant effect on the individual, their caregivers, and their social relationships in general. A diagnosis of dementia requires the observation of a change from a person's usual mental functioning and a greater cognitive decline than might be caused by the normal aging process.
Several diseases and injuries to the brain, such as a stroke, can give rise to dementia. However, the most common cause is Alzheimer's disease, a neurodegenerative disorder. The Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5), has re-described dementia as a mild or major neurocognitive disorder with varying degrees of severity and many causative subtypes. The International Classification of Diseases (ICD-11) also classifies dementia as a neurocognitive disorder (NCD) with many forms or subclasses. Dementia is listed as an acquired brain syndrome, marked by a decline in cognitive function, and is contrasted with neurodevelopmental disorders. It is also described as a spectrum of disorders with causative subtypes of dementia based on a known disorder, such as Parkinson's disease for Parkinson's disease dementia, Huntington's disease for Huntington's disease dementia, vascular disease for vascular dementia, HIV infection causing HIV dementia, frontotemporal lobar degeneration for frontotemporal dementia, Lewy body disease for dementia with Lewy bodies, and prion diseases. Subtypes of neurodegenerative dementias may also be based on the underlying pathology of misfolded proteins, such as synucleinopathies and tauopathies. The coexistence of more than one type of dementia is known as mixed dementia.
Many neurocognitive disorders may be caused by another medical condition or disorder, including brain tumours and subdural hematoma, endocrine disorders such as hypothyroidism and hypoglycemia, nutritional deficiencies including thiamine and niacin, infections, immune disorders, liver or kidney failure, metabolic disorders such as Kufs disease, some leukodystrophies, and neurological disorders such as epilepsy and multiple sclerosis. Some of the neurocognitive deficits may sometimes show improvement with treatment of the causative medical condition.
Diagnosis of dementia is usually based on history of the illness and cognitive testing with imaging. Blood tests may be taken to rule out other possible causes that may be reversible, such as hypothyroidism (an underactive thyroid), and to determine the dementia subtype. One commonly used cognitive test is the mini–mental state examination. Although the greatest risk factor for developing dementia is aging, dementia is not a normal part of the aging process; many people aged 90 and above show no signs of dementia. Several risk factors for dementia, such as smoking and obesity, are preventable by lifestyle changes. Screening the general older population for the disorder is not seen to affect the outcome.
Dementia is currently the seventh leading cause of death worldwide and has 10 million new cases reported every year (approximately one every three seconds). There is no known cure for dementia. Acetylcholinesterase inhibitors such as donepezil are often used and may be beneficial in mild to moderate disorder, but the overall benefit may be minor. There are many measures that can improve the quality of life of a person with dementia and their caregivers. Cognitive and behavioral interventions may be appropriate for treating the associated symptoms of depression.
Signs and symptoms
The signs and symptoms of dementia are termed as the neuropsychiatric symptoms—also known as the behavioral and psychological symptoms—of dementia.
The behavioral symptoms can include agitation, restlessness, inappropriate behavior, sexual disinhibition, and verbal or physical aggression. These symptoms may result from impairments in cognitive inhibition.
The psychological symptoms can include depression, hallucinations (most often visual), delusions, apathy, and anxiety. The most commonly affected areas of brain function include memory, language, attention, problem solving, and visuospatial function affecting perception and orientation. The symptoms progress at a continuous rate over several stages, and they vary across the dementia subtypes. Most types of dementia are slowly progressive with some deterioration of the brain well established before signs of the disorder become apparent. There are often other conditions present, such as high blood pressure or diabetes, and there can sometimes be as many as four of these comorbidities.
Signs of dementia include getting lost in a familiar neighborhood, using unusual words to refer to familiar objects, forgetting the name of a close family member or friend, forgetting old memories, and being unable to complete tasks independently. People with developing dementia often fall behind on bill payments, particularly mortgages and credit cards, and a crashing credit score can be an early indicator of the disease.
People with dementia are more likely to have problems with incontinence than those of a comparable age without dementia; they are three times more likely to have urinary incontinence and four times more likely to have fecal incontinence.
Stages
The course of dementia is often described in four stages – pre-dementia, early, middle, and late – that show a pattern of progressive cognitive and functional impairment. More detailed descriptions can be arrived at by the use of numeric scales. These scales include the Global Deterioration Scale (GDS or Reisberg Scale), the Functional Assessment Staging Tool (FAST), and the Clinical Dementia Rating (CDR). Using the GDS, which more accurately identifies each stage of the disease progression, a more detailed course is described in seven stages – two of which are broken down further into five and six degrees. Stage 7(f) is the final stage.
Pre-dementia
Pre-dementia includes pre-clinical and prodromal stages. The latter stage includes mild cognitive impairment (MCI), delirium-onset, and psychiatric-onset presentations.
Pre-clinical
Sensory dysfunction is claimed for the pre-clinical stage, which may precede the first clinical signs of dementia by up to ten years. Most notably the sense of smell is lost, associated with depression and a loss of appetite leading to poor nutrition. It is suggested that this dysfunction may come about because the olfactory epithelium is exposed to the environment, and the lack of blood–brain barrier protection allows toxic elements to enter and cause damage to the chemosensory networks.
Prodromal
Pre-dementia states considered as prodromal are mild cognitive impairment (MCI) and mild behavioral impairment (MBI). Signs and symptoms at the prodromal stage may be subtle, and the early signs often become apparent only in hindsight. Of those diagnosed with MCI, 70% later progress to dementia. In mild cognitive impairment, changes in the person's brain have been happening for a long time, but the symptoms are just beginning to appear. These problems, however, are not severe enough to affect daily function. If and when they do, the diagnosis becomes dementia. The person may have some memory problems and trouble finding words, but they can solve everyday problems and competently handle their life affairs. During this stage, it is ideal to ensure that advance care planning has occurred to protect the person's wishes. Advance directives exist that are specific to sufferers of dementia; these can be particularly helpful in addressing the decisions related to feeding which come with the progression of the illness. Mild cognitive impairment has been relisted in both DSM-5 and ICD-11 as "mild neurocognitive disorders", i.e. milder forms of the major neurocognitive disorder (dementia) subtypes.
Kynurenine is a metabolite of tryptophan that regulates microbiome signaling, immune cell response, and neuronal excitation. A disruption in the kynurenine pathway may be associated with the neuropsychiatric symptoms and cognitive prognosis in mild dementia.
Early
In the early stage of dementia, symptoms become noticeable to other people. In addition, the symptoms begin to interfere with daily activities, and will register a score on a mini–mental state examination (MMSE). MMSE scores are set at 24 to 30 for a normal cognitive rating and lower scores reflect severity of symptoms. The symptoms are dependent on the type of dementia. More complicated chores and tasks around the house or at work become more difficult. The person can usually still take care of themselves but may forget things like taking pills or doing laundry and may need prompting or reminders.
The symptoms of early dementia usually include memory difficulty, but can also include some word-finding problems, and problems with executive functions of planning and organization. Managing finances may prove difficult. Other signs might be getting lost in new places, repeating things, and personality changes.
In some types of dementia, such as dementia with Lewy bodies and frontotemporal dementia, personality changes and difficulty with organization and planning may be the first signs.
Middle
As dementia progresses, initial symptoms generally worsen. The rate of decline is different for each person. MMSE scores between 6 and 17 signal moderate dementia. For example, people with moderate Alzheimer's dementia lose almost all new information. People with dementia may be severely impaired in solving problems, and their social judgment is often impaired. They cannot usually function outside their own home, and generally should not be left alone. They may be able to do simple chores around the house but not much else, and begin to require assistance for personal care and hygiene beyond simple reminders. A lack of insight into having the condition will become evident.
Late
People with late-stage dementia typically turn increasingly inward and need assistance with most or all of their personal care. People with dementia in the late stages usually need 24-hour supervision to ensure their personal safety, and meeting of basic needs. If left unsupervised, they may wander or fall; may not recognize common dangers such as a hot stove; or may not realize that they need to use the bathroom and become incontinent. They may not want to get out of bed, or may need assistance doing so. Commonly, the person no longer recognizes familiar faces. They may have significant changes in sleeping habits or have trouble sleeping at all.
Changes in eating frequently occur. Cognitive awareness is needed for eating and swallowing and progressive cognitive decline results in eating and swallowing difficulties. This can cause food to be refused, or choked on, and help with feeding will often be required. For ease of feeding, food may be liquidized into a thick purée. They may also struggle to walk, particularly among those with Alzheimer's disease. In some cases, terminal lucidity, a form of paradoxical lucidity, occurs immediately before death; in this phenomenon, there is an unexpected recovery of mental clarity.
Causes
Many causes of dementia are neurodegenerative, and protein misfolding is a cardinal feature of these. Other common causes include vascular dementia, dementia with Lewy bodies, frontotemporal dementia, and mixed dementia (commonly Alzheimer's disease and vascular dementia). Less common causes include normal pressure hydrocephalus, Parkinson's disease dementia, syphilis, HIV, and Creutzfeldt–Jakob disease.
Alzheimer's disease
Alzheimer's disease accounts for 60–70% of cases of dementia worldwide. The most common symptoms of Alzheimer's disease are short-term memory loss and word-finding difficulties. Visuospatial functioning (often shown by getting lost), reasoning, judgment and insight also decline. Insight refers to whether or not the person realizes they have memory problems.
The part of the brain most affected by Alzheimer's is the hippocampus. Other parts that show atrophy (shrinking) include the temporal and parietal lobes. Although this pattern of brain shrinkage suggests Alzheimer's, it is variable and a brain scan is insufficient for a diagnosis.
Little is known about the events that occur during and that actually cause Alzheimer's disease. This is due to the fact that, historically, brain tissue from patients with the disease could only be studied after the person's death. Brain scans can now help diagnose and distinguish between different kinds of dementia and show severity. These include magnetic resonance imaging (MRI), computerized tomography (CT), and positron emission tomography (PET). However, it is known that one of the first aspects of Alzheimer's disease is overproduction of amyloid. Extracellular senile plaques (SPs), consisting of beta-amyloid (Aβ) peptides, and intracellular neurofibrillary tangles (NFTs) that are formed by hyperphosphorylated tau proteins, are two well-established pathological hallmarks of AD. Amyloid causes inflammation around the senile plaques of the brain, and too much buildup of this inflammation leads to changes in the brain that cannot be controlled, leading to the symptoms of Alzheimer's.
Several articles have been published on a possible relationship (as an either primary cause or exacerbation of Alzheimer's disease) between general anesthesia and Alzheimer's in specifically the elderly.
Vascular
Vascular dementia accounts for at least 20% of dementia cases, making it the second most common type. It is caused by disease or injury affecting the blood supply to the brain, typically involving a series of mini-strokes. The symptoms of this dementia depend on where in the brain the strokes occurred and whether the blood vessels affected were large or small. Repeated injury can cause progressive dementia over time, while a single injury located in an area critical for cognition such as the hippocampus, or thalamus, can lead to sudden cognitive decline. Elements of vascular dementia may be present in all other forms of dementia.
Brain scans may show evidence of multiple strokes of different sizes in various locations. People with vascular dementia tend to have risk factors for disease of the blood vessels, such as tobacco use, high blood pressure, atrial fibrillation, high cholesterol, diabetes, or other signs of vascular disease such as a previous heart attack or angina.
Lewy bodies
The prodromal symptoms of dementia with Lewy bodies (DLB) include mild cognitive impairment, and delirium onset.
The symptoms of DLB are more frequent, more severe, and earlier presenting than in the other dementia subtypes.
Dementia with Lewy bodies has the primary symptoms of fluctuating cognition, alertness or attention; REM sleep behavior disorder (RBD); one or more of the main features of parkinsonism, not due to medication or stroke; and repeated visual hallucinations. The visual hallucinations in DLB are generally vivid hallucinations of people or animals and they often occur when someone is about to fall asleep or wake up. Other prominent symptoms include problems with planning (executive function) and difficulty with visual-spatial function, and disruption in autonomic bodily functions. Abnormal sleep behaviors may begin before cognitive decline is observed and are a core feature of DLB. RBD is diagnosed either by sleep study recording or, when sleep studies cannot be performed, by medical history and validated questionnaires.
Parkinson's disease
Parkinson's disease is associated with Lewy body dementia that often progresses to Parkinson's disease dementia following a period of dementia-free Parkinson's disease.
Frontotemporal
Frontotemporal dementias (FTDs) are characterized by drastic personality changes and language difficulties. In all FTDs, the person has a relatively early social withdrawal and early lack of insight. Memory problems are not a main feature. There are six main types of FTD. The first has major symptoms in personality and behavior. This is called behavioral variant FTD (bv-FTD) and is the most common. The hallmark feature of bv-FTD is impulsive behavior, and this can be detected in pre-dementia states. In bv-FTD, the person shows a change in personal hygiene, becomes rigid in their thinking, and rarely acknowledges problems; they are socially withdrawn, and often have a drastic increase in appetite. They may become socially inappropriate. For example, they may make inappropriate sexual comments, or may begin using pornography openly. One of the most common signs is apathy, or not caring about anything. Apathy, however, is a common symptom in many dementias.
Two types of FTD feature aphasia (language problems) as the main symptom. One type is called semantic variant primary progressive aphasia (SV-PPA). The main feature of this is the loss of the meaning of words. It may begin with difficulty naming things. The person eventually may lose the meaning of objects as well. For example, a drawing of a bird, dog, and an airplane in someone with FTD may all appear almost the same. In a classic test for this, a patient is shown a picture of a pyramid and below it a picture of both a palm tree and a pine tree. The person is asked to say which one goes best with the pyramid. In SV-PPA the person cannot answer that question. The other type is called non-fluent agrammatic variant primary progressive aphasia (NFA-PPA). This is mainly a problem with producing speech. They have trouble finding the right words, but mostly they have a difficulty coordinating the muscles they need to speak. Eventually, someone with NFA-PPA only uses one-syllable words or may become totally mute.
A frontotemporal dementia associated with amyotrophic lateral sclerosis (ALS) known as (FTD-ALS) includes the symptoms of FTD (behavior, language and movement problems) co-occurring with amyotrophic lateral sclerosis (loss of motor neurons). Two FTD-related disorders are progressive supranuclear palsy (also classed as a Parkinson-plus syndrome), and corticobasal degeneration. These disorders are tau-associated.
Huntington's disease
Huntington's disease is a neurodegenerative disease caused by mutations in a single gene HTT, that encodes for huntingtin protein. Symptoms include cognitive impairment and this usually declines further into dementia.
The first main symptoms of Huntington's disease often include:
difficulty concentrating
memory lapses
depression - this can include low mood, lack of interest in things, or just abnormal feelings of hopelessness
stumbling and clumsiness that is out of the ordinary
mood swings, such as irritability or aggressive behavior to insignificant things
HIV
HIV-associated dementia results as a late stage from HIV infection, and mostly affects younger people. The essential features of HIV-associated dementia are disabling cognitive impairment accompanied by motor dysfunction, speech problems and behavioral change. Cognitive impairment is characterised by mental slowness, trouble with memory and poor concentration. Motor symptoms include a loss of fine motor control leading to clumsiness, poor balance and tremors. Behavioral changes may include apathy, lethargy and diminished emotional responses and spontaneity. Histopathologically, it is identified by the infiltration of monocytes and macrophages into the central nervous system (CNS), gliosis, pallor of myelin sheaths, abnormalities of dendritic processes and neuronal loss.
Creutzfeldt–Jakob disease
Creutzfeldt–Jakob disease is a rapidly progressive prion disease that typically causes dementia that worsens over weeks to months. Prions are disease-causing pathogens created from abnormal proteins.
Alcoholism
Alcohol-related dementia, also called alcohol-related brain damage, occurs as a result of excessive use of alcohol particularly as a substance abuse disorder. Different factors can be involved in this development including thiamine deficiency and age vulnerability. A degree of brain damage is seen in more than 70% of those with alcohol use disorder. Brain regions affected are similar to those that are affected by aging, and also by Alzheimer's disease. Regions showing loss of volume include the frontal, temporal, and parietal lobes, as well as the cerebellum, thalamus, and hippocampus. This loss can be more notable, with greater cognitive impairments seen in those aged 65 years and older.
Mixed dementia
More than one type of dementia, known as mixed dementia, may exist together in about 10% of dementia cases. The most common type of mixed dementia is Alzheimer's disease and vascular dementia. This particular type of mixed dementia's main onsets are a mixture of old age, high blood pressure, and damage to blood vessels in the brain.
Diagnosis of mixed dementia can be difficult, as often only one type will predominate. This makes the treatment of people with mixed dementia uncommon, with many people missing out on potentially helpful treatments. Mixed dementia can mean that symptoms onset earlier, and worsen more quickly since more parts of the brain will be affected.
Other
Chronic inflammatory conditions that may affect the brain and cognition include Behçet's disease, multiple sclerosis, sarcoidosis, Sjögren's syndrome, lupus, celiac disease, and non-celiac gluten sensitivity. These types of dementias can rapidly progress, but usually have a good response to early treatment. This consists of immunomodulators or steroid administration, or in certain cases, the elimination of the causative agent.
Celiac disease does not seem to raise the risk of dementia in general but it may increase the risk of vascular dementia. Both celiac disease or non-celiac gluten sensitivity might raise the risk of cognitive impairment which can be one of the early signs of subsequent dementia. A strict gluten-free diet started early may protect against dementia associated with gluten-related disorders.
Cases of easily reversible dementia include hypothyroidism, vitamin B12 deficiency, Lyme disease, and neurosyphilis. For Lyme disease and neurosyphilis, testing should be done if risk factors are present. Because risk factors are often difficult to determine, testing for neurosyphilis and Lyme disease, as well as other mentioned factors, may be undertaken as a matter of course where dementia is suspected.
Many other medical and neurological conditions include dementia only late in the illness. For example, a proportion of patients with Parkinson's disease develop dementia, though widely varying figures are quoted for this proportion. When dementia occurs in Parkinson's disease, the underlying cause may be dementia with Lewy bodies or Alzheimer's disease, or both. Cognitive impairment also occurs in the Parkinson-plus syndromes of progressive supranuclear palsy and corticobasal degeneration (and the same underlying pathology may cause the clinical syndromes of frontotemporal lobar degeneration). Although the acute porphyrias may cause episodes of confusion and psychiatric disturbance, dementia is a rare feature of these rare diseases. Limbic-predominant age-related TDP-43 encephalopathy (LATE) is a type of dementia that primarily affects people in their 80s or 90s and in which TDP-43 protein deposits in the limbic portion of the brain.
Hereditary disorders that can also cause dementia include: some metabolic disorders such as lysosomal storage disorders, leukodystrophies, and spinocerebellar ataxias.
Persistent loneliness may significantly increase the risk of dementia. Loneliness is associated with a 31% higher likelihood of developing any form of dementia, and can also raise the risk of cognitive impairment by 15%.
Diagnosis
Symptoms are similar across dementia types and it is difficult to diagnose by symptoms alone. Diagnosis may be aided by brain scanning techniques. In many cases, the diagnosis requires a brain biopsy to become final, but this is rarely recommended (though it can be performed at autopsy). In those who are getting older, general screening for cognitive impairment using cognitive testing or early diagnosis of dementia has not been shown to improve outcomes. However, screening exams are useful in 65+ persons with memory complaints.
Normally, symptoms must be present for at least six months to support a diagnosis. Cognitive dysfunction of shorter duration is called delirium. Delirium can be easily confused with dementia due to similar symptoms. Delirium is characterized by a sudden onset, fluctuating course, a short duration (often lasting from hours to weeks), and is primarily related to a somatic (or medical) disturbance. In comparison, dementia has typically a long, slow onset (except in the cases of a stroke or trauma), slow decline of mental functioning, as well as a longer trajectory (from months to years).
Some mental illnesses, including depression and psychosis, may produce symptoms that must be differentiated from both delirium and dementia. These are differently diagnosed as pseudodementias, and any dementia evaluation needs to include a depression screening such as the Neuropsychiatric Inventory or the Geriatric Depression Scale. Physicians used to think that people with memory complaints had depression and not dementia (because they thought that those with dementia are generally unaware of their memory problems). However, researchers have realized that many older people with memory complaints in fact have mild cognitive impairment, the earliest stage of dementia. Depression should always remain high on the list of possibilities, however, for an elderly person with memory trouble. Changes in thinking, hearing and vision are associated with normal ageing and can cause problems when diagnosing dementia due to the similarities. Given the challenging nature of predicting the onset of dementia and making a dementia diagnosis, clinical decision-making aids underpinned by machine learning and artificial intelligence have the potential to enhance clinical practice.
Cognitive testing
Various brief cognitive tests (5–15 minutes) have reasonable reliability to screen for dementia, but may be affected by factors such as age, education and ethnicity. Age and education have a significant influence on the diagnosis of dementia. For example, individuals with lower education are more likely to be diagnosed with dementia than their educated counterparts. While many tests have been studied, presently the mini–mental state examination (MMSE) is the best studied and most commonly used. The MMSE is a useful tool for helping to diagnose dementia if the results are interpreted along with an assessment of a person's personality, their ability to perform activities of daily living, and their behaviour. Other cognitive tests include the abbreviated mental test score (AMTS), the "modified mini–mental state examination" (3MS), the Cognitive Abilities Screening Instrument (CASI), the Trail-making test, and the clock drawing test. The MoCA (Montreal Cognitive Assessment) is a reliable screening test and is available online for free in 35 different languages. The MoCA has also been shown to be somewhat better at detecting mild cognitive impairment than the MMSE. People with hearing loss, which commonly occurs alongside dementia, score worse in the MoCA test, which could lead to a false diagnosis of dementia. Researchers have developed an adapted version of the MoCA test, which is accurate and reliable and avoids the need for people to listen and respond to questions. The AD-8 – a screening questionnaire used to assess changes in function related to cognitive decline – is potentially useful, but is not diagnostic, is variable, and has risk of bias. An integrated cognitive assessment (CognICA) is a five-minute test that is highly sensitive to the early stages of dementia, and uses an application deliverable to an iPad. Previously in use in the UK, in 2021 CognICA was given FDA approval for its commercial use as a medical device.
Another approach to screening for dementia is to ask an informant (relative or other supporter) to fill out a questionnaire about the person's everyday cognitive functioning. Informant questionnaires provide complementary information to brief cognitive tests. Probably the best known questionnaire of this sort is the Informant Questionnaire on Cognitive Decline in the Elderly (IQCODE). Evidence is insufficient to determine how accurate the IQCODE is for diagnosing or predicting dementia. The Alzheimer's Disease Caregiver Questionnaire is another tool. It is about 90% accurate for Alzheimer's when completed by a caregiver. The General Practitioner Assessment of Cognition combines both a patient assessment and an informant interview. It was specifically designed for use in the primary care setting.
Clinical neuropsychologists provide diagnostic consultation following administration of a full battery of cognitive testing, often lasting several hours, to determine functional patterns of decline associated with varying types of dementia. Tests of memory, executive function, processing speed, attention and language skills are relevant, as well as tests of emotional and psychological adjustment. These tests assist with ruling out other etiologies and determining relative cognitive decline over time or from estimates of prior cognitive abilities.
Laboratory tests
Routine blood tests are usually performed to rule out treatable causes. These include tests for vitamin B12, folic acid, thyroid-stimulating hormone (TSH), C-reactive protein, full blood count, electrolytes, calcium, renal function, and liver enzymes. Abnormalities may suggest vitamin deficiency, infection, or other problems that commonly cause confusion or disorientation in the elderly.
Imaging
A CT scan or MRI scan is commonly performed to possibly find either normal pressure hydrocephalus, a potentially reversible cause of dementia, or connected tumor. The scans can also yield information relevant to other types of dementia, such as infarction (stroke) that would point at a vascular type of dementia. These tests do not pick up diffuse metabolic changes associated with dementia in a person who shows no gross neurological problems (such as paralysis or weakness) on a neurological exam.
The functional neuroimaging modalities of SPECT and PET are more useful in assessing long-standing cognitive dysfunction, since they have shown similar ability to diagnose dementia as a clinical exam and cognitive testing. The ability of SPECT to differentiate vascular dementia from Alzheimer's disease, appears superior to differentiation by clinical exam.
The value of PiB-PET imaging using Pittsburgh compound B (PiB) as a radiotracer has been established in predictive diagnosis, particularly Alzheimer's disease.
Prevention
Risk factors
Risk factors for dementia include high blood pressure, high levels of LDL cholesterol, vision loss, hearing loss, smoking, obesity, depression, inactivity, diabetes, lower levels of education and low social contact. Over-indulgence in alcohol, lack of sleep, anemia, traumatic brain injury, and air pollution can also increase the chance of developing dementia. Many of these risk factors, including the lower level of education, smoking, physical inactivity and diabetes, are modifiable. Several of the group are known as vascular risk factors that may be possible to be reduced or eliminated. Managing these risk factors can reduce the risk of dementia in individuals in their late midlife or older age. A reduction in a number of these risk factors can give a positive outcome. The decreased risk achieved by adopting a healthy lifestyle is seen even in those with a high genetic risk.
In addition to the above risk factors, other psychological features, including certain personality traits (high neuroticism, and low conscientiousness), low purpose in life, and high loneliness, are risk factors for Alzheimer's disease and related dementias. For example, based on the English Longitudinal Study of Ageing (ELSA), research found that loneliness in older people can increase the risk of dementia by one-third. Not having a partner (being single, divorced, or widowed) can double the risk of dementia. However, having two or three closer relationships might reduce the risk by three-fifths.
The two most modifiable risk factors for dementia are physical inactivity and lack of cognitive stimulation. Physical activity, in particular aerobic exercise, is associated with a reduction in age-related brain tissue loss, and neurotoxic factors thereby preserving brain volume and neuronal integrity. Cognitive activity strengthens neural plasticity and together they help to support cognitive reserve. The neglect of these risk factors diminishes this reserve.
Sensory impairments of vision and hearing are modifiable risk factors for dementia. These impairments may precede the cognitive symptoms of Alzheimer's disease for example, by many years. Hearing loss may lead to social isolation which negatively affects cognition. Social isolation is also identified as a modifiable risk factor. Age-related hearing loss in midlife is linked to cognitive impairment in late life, and is seen as a risk factor for the development of Alzheimer's disease and dementia. Such hearing loss may be caused by a central auditory processing disorder that makes the understanding of speech against background noise difficult. Age-related hearing loss is characterised by slowed central processing of auditory information. Worldwide, mid-life hearing loss may account for around 9% of dementia cases.
Frailty may increase the risk of cognitive decline, and dementia, and the inverse also holds of cognitive impairment increasing the risk of frailty. Prevention of frailty may help to prevent cognitive decline.
There are no medications that can prevent cognitive decline and dementia. However blood pressure lowering medications might decrease the risk of dementia or cognitive problems by around 0.5%.
Economic disadvantage has been shown to have a strong link to higher dementia prevalence, which cannot yet be fully explained by other risk factors.
Dental health
Limited evidence links poor oral health to cognitive decline. However, failure to perform tooth brushing and gingival inflammation can be used as dementia risk predictors.
Oral bacteria
The link between Alzheimer's and gum disease is oral bacteria. In the oral cavity, bacterial species include P. gingivalis, F. nucleatum, P. intermedia, and T. forsythia. Six oral treponema spirochetes have been examined in the brains of Alzheimer's patients. Spirochetes are neurotropic in nature, meaning they act to destroy nerve tissue and create inflammation. Inflammatory pathogens are an indicator of Alzheimer's disease and bacteria related to gum disease have been found in the brains of patients with Alzheimer's disease. The bacteria invade nerve tissue in the brain, increasing the permeability of the blood–brain barrier and promoting the onset of Alzheimer's. Individuals with a plethora of tooth plaque risk cognitive decline. Poor oral hygiene can have an adverse effect on speech and nutrition, causing general and cognitive health decline.
Oral viruses
Herpes simplex virus (HSV) has been found in more than 70% of those aged over 50. HSV persists in the peripheral nervous system and can be triggered by stress, illness or fatigue. High proportions of viral-associated proteins in amyloid plaques or neurofibrillary tangles (NFTs) confirm the involvement of HSV-1 in Alzheimer's disease pathology. NFTs are known as the primary marker of Alzheimer's disease. HSV-1 produces the main components of NFTs.
Diet
Diet is seen as a modifiable risk factor for the development of dementia. Thiamine deficiency has been identified as increasing the risk of Alzheimer's disease in adults. The role of thiamine in brain physiology is unique and essential for the normal cognitive function of older people. Many dietary choices of the elderly population, including a higher intake of gluten-free products, compromise the intake of thiamine, as these products are not fortified with thiamine.
The Mediterranean and DASH diets are both associated with less cognitive decline. A different approach has been to incorporate elements of both of these diets into one known as the MIND diet. These diets are generally low in saturated fats while providing a good source of carbohydrates, mainly those that help stabilize blood sugar and insulin levels. Raised blood sugar levels over a long time can damage nerves and cause memory problems if they are not managed. Nutritional factors associated with the proposed diets for reducing dementia risk include unsaturated fatty acids, vitamin E, vitamin C, flavonoids, vitamin B, and vitamin D. A study conducted at the University of Exeter in the United Kingdom seems to have confirmed these findings, with fruits, vegetables, whole grains, and healthy fats forming an optimum diet that can help reduce the risk of dementia by roughly 25%.
The MIND diet may be more protective but further studies are needed. The Mediterranean diet seems to be more protective against Alzheimer's than DASH, but there are no consistent findings for dementia in general. The role of olive oil needs further study, as it may be one of the most important components in reducing the risk of cognitive decline and dementia.
In those with celiac disease or non-celiac gluten sensitivity, a strict gluten-free diet may relieve symptoms associated with mild cognitive impairment. Once dementia is advanced, no evidence suggests that a gluten-free diet is useful.
Omega-3 fatty acid supplements do not appear to benefit or harm people with mild to moderate symptoms. However, there is good evidence that omega-3 incorporation into the diet is of benefit in treating depression, a common symptom, and potentially modifiable risk factor for dementia.
Management
There are limited options for treating dementia, with most approaches focused on managing or reducing individual symptoms. There are no treatment options available to delay the onset of dementia. Acetylcholinesterase inhibitors are often used early in the disorder course; however, benefit is generally small. More than half of people with dementia may experience psychological or behavioral symptoms including agitation, sleep problems, aggression, and/or psychosis. Treatment for these symptoms is aimed at reducing the person's distress and keeping the person safe. Treatments other than medication appear to be better for agitation and aggression. Cognitive and behavioral interventions may be appropriate. Some evidence suggests that education and support for the person with dementia, as well as caregivers and family members, improves outcomes. Palliative care interventions may lead to improvements in comfort in dying, but the evidence is low. Exercise programs are beneficial with respect to activities of daily living, and may potentially improve outcomes.
The effect of therapies can be evaluated, for example, by assessing agitation with the Cohen-Mansfield Agitation Inventory (CMAI); by assessing mood and engagement with the Menorah Park Engagement Scale (MPES) and the Observed Emotion Rating Scale (OERS); or by assessing indicators of depression with the Cornell Scale for Depression in Dementia (CSDD) or a simplified version thereof.
Often overlooked in treating and managing dementia is the role of the caregiver and what is known about how they can support multiple interventions. A 2021 systematic review of the literature found that caregivers of people with dementia in nursing homes do not have sufficient tools or clinical guidance for managing behavioral and psychological symptoms of dementia (BPSD) or medication use. Simple measures like talking to people about their interests can improve the quality of life for care home residents living with dementia. One programme showed that such simple measures reduced residents' agitation and depression; residents also needed fewer GP visits and hospital admissions, which meant that the programme was also cost-saving.
Psychological and psychosocial therapies
Evidence for psychological therapies in dementia is mixed: there is some limited evidence for reminiscence therapy (namely, some positive effects in the areas of quality of life, cognition, communication and mood – the first three particularly in care home settings), some benefit from cognitive reframing for caretakers, unclear evidence for validation therapy, and tentative evidence for mental exercises, such as cognitive stimulation programs for people with mild to moderate dementia. Offering personally tailored activities may help reduce challenging behavior and may improve quality of life. It is not clear if personally tailored activities have an impact on affect or improve the quality of life of the caregiver.
Adult daycare centers as well as special care units in nursing homes often provide specialized care for dementia patients. Daycare centers offer supervision, recreation, meals, and limited health care to participants, as well as providing respite for caregivers. In addition, home care can provide one-to-one support and care in the home allowing for more individualized attention that is needed as the disorder progresses. Psychiatric nurses can make a distinctive contribution to people's mental health.
Since dementia impairs normal communication due to changes in receptive and expressive language, as well as the ability to plan and problem solve, agitated behavior is often a form of communication for the person with dementia. Actively searching for a potential cause, such as pain, physical illness, or overstimulation, can be helpful in reducing agitation. Additionally, using an "ABC analysis of behavior" can be a useful tool for understanding behavior in people with dementia. It involves looking at the antecedents (A), behavior (B), and consequences (C) associated with an event to help define the problem and prevent further incidents that may arise if the person's needs are misunderstood. The strongest evidence for non-pharmacological therapies for the management of changed behaviors in dementia is for using such approaches. Low quality evidence suggests that regular (at least five sessions of) music therapy may help institutionalized residents. It may reduce depressive symptoms and improve overall behaviors. It may also have a beneficial effect on emotional well-being and quality of life, as well as reduce anxiety. In 2003, The Alzheimer's Society established 'Singing for the Brain' (SftB), a project based on pilot studies which suggested that the activity encouraged participation and facilitated the learning of new songs. The sessions combine aspects of reminiscence therapy and music. Musical and interpersonal connectedness can underscore the value of the person and improve quality of life.
Some London hospitals found that using color, designs, pictures and lights helped people with dementia adjust to being at the hospital. These adjustments to the layout of the dementia wings at these hospitals helped patients by preventing confusion.
Life story work, as part of reminiscence therapy, and video biographies have been found to address the needs of clients and their caregivers in various ways, offering the client the opportunity to leave a legacy and enhance their personhood, and also benefitting youth who participate in such work. Such interventions can be more beneficial when undertaken at a relatively early stage of dementia. They may also be problematic in those who have difficulties in processing past experiences.
Animal-assisted therapy has been found to be helpful. Drawbacks may be that pets are not always welcomed in a communal space in the care setting. An animal may pose a risk to residents, or may be perceived to be dangerous. Certain animals may also be regarded as "unclean" or "dangerous" by some cultural groups.
Occupational therapy also addresses the psychological and psychosocial needs of patients with dementia by improving daily occupational performance and caregivers' competence. When compensatory intervention strategies are added to the daily routine, the level of performance is enhanced and the burden commonly placed on caregivers is reduced. Occupational therapists can also work with other disciplines to create a client-centered intervention. To manage cognitive disability and to cope with behavioral and psychological symptoms of dementia, combined occupational and behavioral therapies can support patients with dementia even further.
Cognitive training and rehabilitation
There is no strong evidence to suggest that cognitive training is beneficial for people with Parkinson's disease, dementia, or mild cognitive impairment. However, a 2023 review found that cognitive rehabilitation may be effective in helping individuals with mild to moderate dementia to manage their daily activities.
Personally tailored activities
Offering personally tailored activity sessions to people with dementia in long-term care homes may slightly reduce challenging behavior.
Medications
No medications have been shown to prevent or cure dementia. Medications may be used to treat the behavioral and cognitive symptoms, but have no effect on the underlying disease process.
Acetylcholinesterase inhibitors, such as donepezil, may be useful for Alzheimer's disease, Parkinson's disease dementia, DLB, or vascular dementia. The quality of the evidence is poor and the benefit is small. No difference has been shown between the agents in this family. In a minority of people side effects include a slow heart rate and fainting. Rivastigmine is recommended for treating symptoms in Parkinson's disease dementia.
Medications that have anticholinergic effects increase all-cause mortality in people with dementia, although the effect of these medications on cognitive function remains uncertain, according to a systematic review published in 2021.
Before prescribing antipsychotic medication in the elderly, an assessment for an underlying cause of the behavior is needed. Severe and life-threatening reactions to antipsychotics occur in almost half of people with DLB and can be fatal after a single dose. People with Lewy body dementias who take neuroleptics are at risk for neuroleptic malignant syndrome, a life-threatening illness. Extreme caution is required in the use of antipsychotic medication in people with DLB because of their sensitivity to these agents. Antipsychotic drugs are used to treat dementia only if non-drug therapies have not worked, and the person's actions threaten themselves or others. Aggressive behavior changes are sometimes the result of other solvable problems that could make treatment with antipsychotics unnecessary. Because people with dementia can be aggressive, resistant to their treatment, and otherwise disruptive, sometimes antipsychotic drugs are considered as a therapy in response. These drugs have risky adverse effects, including increasing the person's chance of stroke and death. Given these adverse events and small benefit, antipsychotics are avoided whenever possible. Generally, stopping antipsychotics for people with dementia does not cause problems, even in those who have been on them a long time.
N-methyl-D-aspartate (NMDA) receptor blockers such as memantine may be of benefit, but the evidence is less conclusive than for AChEIs. Due to their differing mechanisms of action, memantine and acetylcholinesterase inhibitors can be used in combination; however, the benefit is slight.
An extract of Ginkgo biloba known as EGb 761 has been widely used for treating mild to moderate dementia and other neuropsychiatric disorders. Its use is approved throughout Europe. The World Federation of Biological Psychiatry guidelines lists EGb 761 with the same weight of evidence (level B) given to acetylcholinesterase inhibitors, and memantine. EGb 761 is the only one that showed improvement of symptoms in both AD and vascular dementia. EGb 761 is seen as being able to play an important role either on its own or as an add-on particularly when other therapies prove ineffective. EGb 761 is seen to be neuroprotective; it is a free radical scavenger, improves mitochondrial function, and modulates serotonin and dopamine levels. Many studies of its use in mild to moderate dementia have shown it to significantly improve cognitive function, activities of daily living, neuropsychiatric symptoms, and quality of life. However, its use has not been shown to prevent the progression of dementia.
While depression is frequently associated with dementia, the use of antidepressants such as selective serotonin reuptake inhibitors (SSRIs) does not appear to affect outcomes. However, the SSRIs sertraline and citalopram have been demonstrated to reduce symptoms of agitation, compared to placebo.
No solid evidence indicates that folate or vitamin B12 improves outcomes in those with cognitive problems. Statins have no benefit in dementia. Medications for other health conditions may need to be managed differently for a person who has a dementia diagnosis. It is unclear whether blood pressure medication and dementia are linked. People may experience an increase in cardiovascular-related events if these medications are withdrawn.
The Medication Appropriateness Tool for Comorbid Health Conditions in Dementia (MATCH-D) criteria can help identify ways that a diagnosis of dementia changes medication management for other health conditions. These criteria were developed because people with dementia live with an average of five other chronic diseases, which are often managed with medications. The systematic review that informed the criteria was published subsequently in 2018 and updated in 2022.
Sleep disturbances
Over 40% of people with dementia report sleep problems. Approaches to treating these sleep problems include medications and non-pharmacological approaches. The use of medications to alleviate sleep disturbances that people with dementia often experience has not been well researched, even for medications that are commonly prescribed. In 2012 the American Geriatrics Society recommended that benzodiazepines such as diazepam, and non-benzodiazepine hypnotics, be avoided for people with dementia due to the risks of increased cognitive impairment and falls. Benzodiazepines are also known to promote delirium. Additionally, little evidence supports the effectiveness of benzodiazepines in this population. No clear evidence shows that melatonin or ramelteon improves sleep for people with dementia due to Alzheimer's, but melatonin is used to treat REM sleep behavior disorder in dementia with Lewy bodies. Limited evidence suggests that a low dose of trazodone may improve sleep; however, more research is needed.
Non-pharmacological approaches have been suggested for treating sleep problems for those with dementia, however, there is no strong evidence or firm conclusions on the effectiveness of different types of interventions, especially for those who are living in an institutionalized setting such as a nursing home or long-term care home.
Pain
As people age, they experience more health problems, and most health problems associated with aging carry a substantial burden of pain; therefore, between 25% and 50% of older adults experience persistent pain. Seniors with dementia experience the same prevalence of conditions likely to cause pain as seniors without dementia. Pain is often overlooked in older adults and, when screened for, is often poorly assessed, especially among those with dementia, since they become incapable of informing others of their pain. Beyond the issue of humane care, unrelieved pain has functional implications. Persistent pain can lead to decreased ambulation, depressed mood, sleep disturbances, impaired appetite, and exacerbation of cognitive impairment. Pain-related interference with activity is also a factor contributing to falls in the elderly.
Although persistent pain in people with dementia is difficult to communicate, diagnose, and treat, failure to address persistent pain has profound functional, psychosocial and quality of life implications for this vulnerable population. Health professionals often lack the skills and usually lack the time needed to recognize, accurately assess and adequately monitor pain in people with dementia. Family members and friends can make a valuable contribution to the care of a person with dementia by learning to recognize and assess their pain. Educational resources and observational assessment tools are available.
Eating difficulties
Persons with dementia may have difficulty eating. Whenever it is available as an option, the recommended response to eating problems is having a caretaker assist them. A secondary option for people who cannot swallow effectively is to consider gastrostomy feeding tube placement as a way to give nutrition. However, in bringing comfort and maintaining functional status while lowering risk of aspiration pneumonia and death, assistance with oral feeding is at least as good as tube feeding. Tube-feeding is associated with agitation, increased use of physical and chemical restraints and worsening pressure ulcers. Tube feedings may cause fluid overload, diarrhea, abdominal pain, local complications, less human interaction and may increase the risk of aspiration.
Benefits in those with advanced dementia have not been shown. The risks of using tube feeding include agitation, rejection by the person (pulling out the tube, or otherwise physical or chemical immobilization to prevent them from doing this), or developing pressure ulcers. The procedure is directly related to a 1% fatality rate with a 3% major complication rate. The percentage of people at end of life with dementia using feeding tubes in the US has dropped from 12% in 2000 to 6% as of 2014.
The immediate and long-term effects of modifying the thickness of fluids for swallowing difficulties in people with dementia are not well known. While thickening fluids may have an immediate positive effect on swallowing and improving oral intake, the long-term impact on the health of the person with dementia should also be considered.
Exercise
Exercise programs may improve the ability of people with dementia to perform daily activities, but the best type of exercise is still unclear. Getting more exercise can slow the development of cognitive problems such as dementia, with some evidence suggesting a reduction in the risk of Alzheimer's disease of about 50%. A balance of strength exercises, to help muscles pump blood to the brain, and balance exercises is recommended for aging people. Meeting the suggested weekly amount of exercise can reduce the risk of cognitive decline as well as other health risks such as falling.
Assistive technology
There is a lack of high-quality evidence to determine whether assistive technology effectively supports people with dementia to manage memory issues. Specific technologies in use include clocks, communication aids, monitoring of electrical appliance use, GPS location and tracking devices, home care robots, in-home cameras, and medication management aids, to name a few. Technology has the potential to be a valuable intervention for alleviating loneliness and promoting social connection, and this is supported by the available evidence.
Alternative medicine
Evidence of the therapeutic values of aromatherapy and massage is unclear. It is not clear if cannabinoids are harmful or effective for people with dementia.
Palliative care
Given the progressive and terminal nature of dementia, palliative care can be helpful to patients and their caregivers by helping people with the disorder and their caregivers understand what to expect, deal with loss of physical and mental abilities, support the person's wishes and goals including surrogate decision making, and discuss wishes for or against CPR and life support. Because the decline can be rapid, and because most people prefer to allow the person with dementia to make their own decisions, palliative care involvement before the late stages of dementia is recommended. Further research is required to determine the appropriate palliative care interventions and how well they help people with advanced dementia.
Person-centered care helps maintain the dignity of people with dementia.
Remotely delivered information for caregivers
Remotely delivered interventions including support, training and information may reduce the burden for the informal caregiver and improve their depressive symptoms. There is no certain evidence that they improve health-related quality of life.
In several localities in Japan, digital surveillance may be made available to family members, if a dementia patient is prone to wandering and going missing.
Epidemiology
The number of cases of dementia worldwide in 2021 was estimated at 55 million, with close to 10 million new cases each year. According to the World Health Organization, Alzheimer's disease and other forms of dementia ranked as the seventh leading cause of death in 2021, claiming 1.8 million lives. By 2050, the number of people living with dementia is estimated to be over 150 million globally. Around 7% of people over the age of 65 have dementia, with slightly higher rates (up to 10% of those over 65) in places with relatively high life expectancy. An estimated 58% of people with dementia are living in low and middle income countries. The prevalence of dementia differs in different world regions, ranging from 4.7% in Central Europe to 8.7% in North Africa/Middle East; the prevalence in other regions is estimated to be between 5.6 and 7.6%. The number of people living with dementia is estimated to double every 20 years. In 2016 dementia resulted in about 2.4 million deaths, up from 0.8 million in 1990. The genetic and environmental risk factors for dementia disorders vary by ethnicity. For instance, Alzheimer's disease among Hispanic/Latino and African American subjects exhibits lower risks associated with gene changes in the apolipoprotein E gene than among non-Hispanic white subjects.
The annual incidence of dementia diagnosis is nearly 10 million worldwide. Almost half of new dementia cases occur in Asia, followed by Europe (25%), the Americas (18%) and Africa (8%). The incidence of dementia increases exponentially with age, doubling with every 6.3-year increase in age. Dementia affects 5% of the population older than 65 and 20–40% of those older than 85. Rates are slightly higher in women than men at ages 65 and greater. The disease trajectory is varied and the median time from diagnosis to death depends strongly on age at diagnosis, from 6.7 years for people diagnosed aged 60–69 to 1.9 years for people diagnosed at 90 or older.
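As a rough arithmetic illustration of the doubling figure above (an illustrative back-of-the-envelope reading, not a separately sourced estimate), an incidence that doubles with every 6.3 years of age can be written as I(a) ≈ I(65) × 2^((a − 65)/6.3); at age 85, for example, this gives I(85) ≈ I(65) × 2^(20/6.3) ≈ 9 × I(65), i.e. roughly a ninefold higher rate of new diagnoses than at age 65.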
Dementia impacts not only individuals with dementia, but also their carers and the wider society. Among people aged 60 years and over, dementia is ranked the 9th most burdensome condition according to the 2010 Global Burden of Disease (GBD) estimates. The global cost of dementia was around US$818 billion in 2015, a 35.4% increase from US$604 billion in 2010.
A 2024 study found that deaths from dementia in the U.S. tripled over a 21-year period, rising from around 150,000 in 1999 to over 450,000 in 2020; the likelihood of dying from dementia increased across all demographic groups studied.
Affected ages
About 3% of people between the ages of 65–74 have dementia, 19% between 75 and 84, and nearly half of those over 85 years of age. As more people are living longer, dementia is becoming more common. For people of a specific age, however, it may be becoming less frequent in the developed world, due to a decrease in modifiable risk factors made possible by greater financial and educational resources. It is one of the most common causes of disability among the elderly but can develop before the age of 65 when it is known as early-onset dementia or presenile dementia. Less than 1% of those with Alzheimer's have gene mutations that cause a much earlier development of the disease, around the age of 45, known as early-onset Alzheimer's disease. More than 95% of people with Alzheimer's disease have the sporadic form (late onset, 80–90 years of age). Worldwide the cost of dementia in 2015 was put at US$818 billion. People with dementia are often physically or chemically restrained to a greater degree than necessary, raising issues of human rights. Social stigma is commonly perceived by those with the condition, and also by their caregivers.
History
Until the end of the 19th century, dementia was a much broader clinical concept. It included mental illness and any type of psychosocial incapacity, including reversible conditions. Dementia at this time simply referred to anyone who had lost the ability to reason, and was applied equally to psychosis, "organic" diseases like syphilis that destroy the brain, and to the dementia associated with old age, which was attributed to "hardening of the arteries".
Dementia has been referred to in medical texts since antiquity. One of the earliest known allusions to dementia is attributed to the 6th-century BC Greek philosopher Pythagoras, who divided the human lifespan into six distinct phases: 0–6 (infancy), 7–21 (adolescence), 22–49 (young adulthood), 50–62 (middle age), 63–79 (old age), and 80–death (advanced age). The last two he described as the "senium", a period of mental and physical decay, with the final phase being when "the scene of mortal existence closes after a great length of time that very fortunately, few of the human species arrive at, where the mind is reduced to the imbecility of the first epoch of infancy". In 550 BC, the Athenian statesman and poet Solon argued that the terms of a man's will might be invalidated if he exhibited loss of judgement due to advanced age. Chinese medical texts made allusions to the condition as well, and the characters for "dementia" translate literally to "foolish old person".
The Greek philosophers Aristotle and Plato discussed the mental decline that can come with old age and predicted that this affects everyone who becomes old and nothing can be done to stop this decline from taking place. Plato specifically talked about how the elderly should not be in positions that require responsibility because, "There is not much acumen of the mind that once carried them in their youth, those characteristics one would call judgement, imagination, power of reasoning, and memory. They see them gradually blunted by deterioration and can hardly fulfill their function."
For comparison, the Roman statesman Cicero held a view much more in line with modern-day medical wisdom that loss of mental function was not inevitable in the elderly and "affected only those old men who were weak-willed". He spoke of how those who remained mentally active and eager to learn new things could stave off dementia. However, Cicero's views on aging, although progressive, were largely ignored in a world that would be dominated for centuries by Aristotle's medical writings. Physicians during the Roman Empire, such as Galen and Celsus, simply repeated the beliefs of Aristotle while adding few new contributions to medical knowledge.
Byzantine physicians sometimes wrote of dementia. It is recorded that at least seven emperors whose lifespans exceeded 70 years displayed signs of cognitive decline. In Constantinople, special hospitals housed those diagnosed with dementia or insanity, but these did not apply to the emperors, who were above the law and whose health conditions could not be publicly acknowledged.
Otherwise, little is recorded about dementia in Western medical texts for nearly 1700 years. One of the few references was the 13th-century friar Roger Bacon, who viewed old age as divine punishment for original sin. Although he repeated existing Aristotelian beliefs that dementia was inevitable, he did make the progressive assertion that the brain was the center of memory and thought rather than the heart.
Poets, playwrights, and other writers made frequent allusions to the loss of mental function in old age. William Shakespeare notably mentions it in plays such as Hamlet and King Lear.
During the 19th century, doctors generally came to believe that elderly dementia was the result of cerebral atherosclerosis, although opinions fluctuated between the idea that it was due to blockage of the major arteries supplying the brain or small strokes within the vessels of the cerebral cortex.
In 1907, Bavarian psychiatrist Alois Alzheimer was the first to identify and describe the characteristics of progressive dementia in the brain of 51-year-old Auguste Deter. Deter had begun to behave uncharacteristically, including accusing her husband of adultery, neglecting household chores, exhibiting difficulties writing and engaging in conversations, heightened insomnia, and loss of directional sense. At one point, Deter was reported to have "dragged a bed sheet outside, wandered around wildly, and cried for hours at midnight." Alzheimer began treating Deter when she entered a Frankfurt mental hospital on November 25, 1901. During her ongoing treatment, Deter and her husband struggled to afford the cost of the medical care, and Alzheimer agreed to continue her treatment in exchange for Deter's medical records and donation of her brain upon death. Deter died on April 8, 1906, after succumbing to sepsis and pneumonia. Alzheimer examined Deter's brain post mortem using the Bielschowsky stain method, which was a new development at the time, and he observed senile plaques, neurofibrillary tangles, and atherosclerotic alteration. At the time, the consensus among medical doctors had been that senile plaques were generally found in older patients, and the occurrence of neurofibrillary tangles was an entirely new observation. Alzheimer presented his findings at the 37th psychiatry conference of southwestern Germany in Tübingen on November 3, 1906; however, the information was poorly received by his peers. By 1910, Alois Alzheimer's teacher, Emil Kraepelin, published a book in which he coined the term "Alzheimer's disease" in an attempt to acknowledge the importance of Alzheimer's discovery.
By the 1960s, the link between neurodegenerative diseases and age-related cognitive decline had become more established. By the 1970s, the medical community maintained that vascular dementia was rarer than previously thought and Alzheimer's disease caused the vast majority of old age mental impairments. More recently however, it is believed that dementia is often a mixture of conditions.
In 1976, neurologist Robert Katzmann suggested a link between senile dementia and Alzheimer's disease. Katzmann suggested that much of the senile dementia occurring (by definition) after the age of 65 was pathologically identical with Alzheimer's disease occurring in people under age 65 and therefore should not be treated differently. Katzmann thus suggested that Alzheimer's disease, if taken to occur over age 65, is actually common, not rare, and was the fourth- or fifth-leading cause of death, even though rarely reported on death certificates in 1976.
A helpful finding was that although the incidence of Alzheimer's disease increased with age (from 5–10% of 75-year-olds to as many as 40–50% of 90-year-olds), no age threshold was found beyond which all persons developed it. This is shown by documented supercentenarians (people living to 110 or more) who experienced no substantial cognitive impairment. Some evidence suggests that dementia is most likely to develop between ages 80 and 84, and individuals who pass that point without being affected have a lower chance of developing it. Women account for a larger percentage of dementia cases than men. This can be attributed in part to their longer overall lifespan and greater odds of attaining an age where the condition is likely to occur.
Much like other diseases associated with aging, dementia was comparatively rare before the 20th century, because few people lived past 80. Conversely, syphilitic dementia was widespread in the developed world until it was largely eradicated by the use of penicillin after World War II. With significant increases in life expectancy thereafter, the number of people over 65 started rapidly climbing. While elderly persons constituted an average of 3–5% of the population prior to 1945, by 2010 many countries reached 10–14% and in Germany and Japan, this figure exceeded 20%. Public awareness of Alzheimer's Disease greatly increased in 1994 when former US president Ronald Reagan announced that he had been diagnosed with the condition.
In the 21st century, other types of dementia were differentiated from Alzheimer's disease and vascular dementias (the most common types). This differentiation is on the basis of pathological examination of brain tissues, by symptomatology, and by different patterns of brain metabolic activity in nuclear medical imaging tests such as SPECT and PET scans of the brain. The various forms have differing prognoses and differing epidemiologic risk factors. The main cause for many diseases, including Alzheimer's disease, remains unclear.
Terminology
Dementia in the elderly was once called senile dementia or senility, and viewed as a normal and somewhat inevitable aspect of aging.
By 1913–20 the term dementia praecox was introduced to suggest the development of senile-type dementia at a younger age. Eventually the two terms fused, so that until 1952 physicians used the terms dementia praecox (precocious dementia) and schizophrenia interchangeably. Since then, science has determined that dementia and schizophrenia are two different disorders, though they share some similarities. The term precocious dementia for a mental illness suggested that a type of mental illness like schizophrenia (including paranoia and decreased cognitive capacity) could be expected to arrive normally in all persons with greater age (see paraphrenia). After about 1920, the beginning use of dementia for what is now understood as schizophrenia and senile dementia helped limit the word's meaning to "permanent, irreversible mental deterioration". This began the change to the later use of the term. In recent studies, researchers have seen a connection between those diagnosed with schizophrenia and patients who are diagnosed with dementia, finding a positive correlation between the two diseases.
The view that dementia must always be the result of a particular disease process led for a time to the proposed diagnosis of "senile dementia of the Alzheimer's type" (SDAT) in persons over the age of 65, with "Alzheimer's disease" diagnosed in persons younger than 65 who had the same pathology. Eventually, however, it was agreed that the age limit was artificial, and that Alzheimer's disease was the appropriate term for persons with that particular brain pathology, regardless of age.
After 1952, mental illnesses including schizophrenia were removed from the category of organic brain syndromes, and thus (by definition) removed from possible causes of "dementing illnesses" (dementias). At the same time, however, the traditional cause of senile dementia – "hardening of the arteries" – now returned as a set of dementias of vascular cause (small strokes). These were now termed multi-infarct dementias or vascular dementias.
Society and culture
The societal cost of dementia is high, especially for caregivers. According to a UK-based study, almost two out of three carers of people with dementia feel lonely. Most of the carers in the study were family members or friends.
According to one estimate, the annual cost per Alzheimer's patient in the United States was around $19,144.36, and the total cost for the nation was estimated at about $167.74 billion. By 2030, the annual socioeconomic cost is predicted to total about $507 billion, and by 2050 that number is expected to reach $1.89 trillion. This steady increase will be seen not just within the United States but globally. Global estimates for the costs of dementia were $957.56 billion in 2015, but by 2050 the estimated global cost is $9.12 trillion.
Many countries consider the care of people living with dementia a national priority and invest in resources and education to better inform health and social service workers, unpaid caregivers, relatives and members of the wider community. Several countries have authored national plans or strategies. These plans recognize that people can live reasonably with dementia for years, as long as the right support and timely access to a diagnosis are available. Former British Prime Minister David Cameron described dementia as a "national crisis", affecting 800,000 people in the United Kingdom. In fact, dementia has become the leading cause of death for women in England.
There, as with all mental disorders, if people with dementia could potentially be a danger to themselves or others, they can be detained under the Mental Health Act 1983 for assessment, care and treatment. This is a last resort, and is usually avoided for people with family or friends who can ensure care.
Some hospitals in Britain work to provide enriched and friendlier care. To make the hospital wards calmer and less overwhelming to residents, staff replaced the usual nurses' station with a collection of smaller desks, similar to a reception area. The incorporation of bright lighting helps increase positive mood and allow residents to see more easily.
Driving with dementia can lead to injury or death. Doctors should advise appropriate testing to determine when driving should stop. The United Kingdom DVLA (Driver & Vehicle Licensing Agency) states that people with dementia who specifically have poor short-term memory, disorientation, or lack of insight or judgment are not allowed to drive, and in these instances the DVLA must be informed so that the driving license can be revoked. They acknowledge that in low-severity cases and those with an early diagnosis, drivers may be permitted to continue driving.
Many support networks are available to people with dementia and their families and caregivers. Charitable organizations aim to raise awareness and campaign for the rights of people living with dementia. Support and guidance are available on assessing testamentary capacity in people with dementia.
In 2015, Atlantic Philanthropies announced a $177 million gift aimed at understanding and reducing dementia. The recipient was Global Brain Health Institute, a program co-led by the University of California, San Francisco and Trinity College Dublin. This donation is the largest non-capital grant Atlantic has ever made, and the biggest philanthropic donation in Irish history.
In October 2020, the Caretaker's last music release, Everywhere at the End of Time, was popularized by TikTok users for its depiction of the stages of dementia. Caregivers were in favor of this phenomenon; Leyland Kirby, the creator of the record, echoed this sentiment, explaining it could cause empathy among a younger public.
On November 2, 2020, Scottish billionaire Sir Tom Hunter donated £1 million to dementia charities, after watching a former music teacher with dementia, Paul Harvey, playing one of his own compositions on the piano in a viral video. The donation was announced to be split between the Alzheimer's Society and Music for Dementia.
Awareness
Celebrities have used their platforms to raise awareness of the different forms of dementia and the need for further support, including former First Lady of California Maria Shriver, Academy Award-winning actor Samuel L. Jackson, Editor-in-Chief of ELLE Magazine Nina Garcia, professional skateboarder Tony Hawk, and others.
Additional Alzheimer's awareness has been raised through the diagnoses of high-profile persons themselves, including
Actor Bruce Willis
Actor Robin Williams
Activist Rosa Parks
40th President of the United States, Ronald Reagan
Former Mrs. Colorado Springs Joanna Fix
TV host Wendy Williams
Musician Tony Bennett
Musician Maureen McGovern
Dancer and pin-up model Rita Hayworth
Notes
References
External links
Alzheimer's Association
National Institute on Aging – Alzheimer's disease
Aging-associated diseases
Cognitive disorders
Learning disabilities
Mental disorders due to brain damage
Wikipedia neurology articles ready to translate
Wikipedia medicine articles ready to translate | Dementia | Biology | 15,183 |
1,162,678 | https://en.wikipedia.org/wiki/Ferredoxin | Ferredoxins (from Latin ferrum: iron + redox, often abbreviated "fd") are iron–sulfur proteins that mediate electron transfer in a range of metabolic reactions. The term "ferredoxin" was coined by D.C. Wharton of the DuPont Co. and applied to the "iron protein" first purified in 1962 by Mortenson, Valentine, and Carnahan from the anaerobic bacterium Clostridium pasteurianum.
Another redox protein, isolated from spinach chloroplasts, was termed "chloroplast ferredoxin". The chloroplast ferredoxin is involved in both cyclic and non-cyclic photophosphorylation reactions of photosynthesis. In non-cyclic photophosphorylation, ferredoxin is the last electron acceptor, thus reducing the enzyme NADP+ reductase. It accepts electrons produced from sunlight-excited chlorophyll and transfers them to the enzyme ferredoxin–NADP+ oxidoreductase.
Ferredoxins are small proteins containing iron and sulfur atoms organized as iron–sulfur clusters. These biological "capacitors" can accept or discharge electrons, with the effect of a change in the oxidation state of the iron atoms between +2 and +3. In this way, ferredoxin acts as an electron transfer agent in biological redox reactions.
Other bioinorganic electron transport systems include rubredoxins, cytochromes, blue copper proteins, and the structurally related Rieske proteins.
Ferredoxins can be classified according to the nature of their iron–sulfur clusters and by sequence similarity.
Bioenergetics of ferredoxins
Ferredoxins typically carry out a single electron transfer.
Fd(ox) + e− <=> Fd(red)−
However a few bacterial ferredoxins (of the 2[4Fe4S] type) have two iron sulfur clusters and can carry out two electron transfer reactions. Depending on the sequence of the protein, the two transfers can have nearly identical reduction potentials or they may be significantly different.
Fd(ox) + e− <=> Fd(red)−
Fd(red)− + e− <=> Fd(red)2−
Ferredoxins are one of the most reducing biological electron carriers. They typically have a midpoint potential of −420 mV. The reduction potential of a substance in the cell will differ from its midpoint potential depending on the concentrations of its reduced and oxidized forms. For a one-electron reaction, the potential changes by around 60 mV for each power-of-ten change in the ratio of the concentrations. For example, if the ferredoxin pool is around 95% reduced, the reduction potential will be around −500 mV. In comparison, other biological reactions mostly have less reducing potentials: for example the primary biosynthetic reductant of the cell, NADPH, has a cellular redox potential of −370 mV (midpoint potential −320 mV).
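As an illustrative Nernst-type calculation consistent with the figures above (a standard estimate, not an independently sourced value), the cellular potential E of a one-electron couple can be written as E ≈ Em − (59 mV) × log10([reduced]/[oxidized]). For a ferredoxin pool with Em = −420 mV that is about 95% reduced, [reduced]/[oxidized] ≈ 19, so E ≈ −420 mV − (59 mV)(log10 19) ≈ −420 mV − 75 mV ≈ −495 mV, in line with the roughly −500 mV figure quoted above.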
Depending on the sequence of the supporting protein, ferredoxins have reduction potentials from around −500 mV to −340 mV. A single cell can have multiple types of ferredoxins, where each type is tuned to optimally carry out different reactions.
Reduction of ferredoxin
The highly reducing ferredoxins are reduced either by using another strong reducing agent, or by using some source of energy to "boost" electrons from less reducing sources to the ferredoxin.
Direct reduction
Reactions that reduce Fd include the oxidation of aldehydes to acids like the glyceraldehyde to glycerate reaction (-580 mV), the carbon monoxide dehydrogenase reaction (-520 mV), and the 2-oxoacid:Fd Oxidoreductase reactions (-500 mV) like the reaction carried out by pyruvate synthase.
Membrane potential coupled reduction
Ferredoxin can also be reduced by using NADH (−320 mV) or H2 (−414 mV), but these processes are coupled to the consumption of the membrane potential to power the "boosting" of electrons to the higher energy state. The Rnf complex is a widespread membrane protein in bacteria that reversibly transfers electrons between NADH and ferredoxin while pumping Na+ or H+ ions across the cell membrane. The chemiosmotic potential of the membrane is consumed to power the unfavorable reduction of ferredoxin by NADH. This reaction is an essential source of reduced ferredoxin in many autotrophic organisms. If the cell is growing on substrates that provide excess reduced ferredoxin, the Rnf complex can transfer these electrons to NAD+ and store the resultant energy in the membrane potential. The energy-converting hydrogenases (Ech) are a family of enzymes that reversibly couple the transfer of electrons between ferredoxin and H2 while pumping H+ ions across the membrane to balance the energy difference.
2 Fd(ox) + NADH <=> 2 Fd(red)− + NAD+ + H+ (Rnf, coupled to Na+ or H+ translocation across the membrane)
2 Fd(ox) + H2 <=> 2 Fd(red)− + 2 H+ (Ech, coupled to H+ translocation across the membrane)
Electron bifurcation
The unfavorable reduction of Fd from a less reducing electron donor can be coupled simultaneously with the favorable reduction of an oxidizing agent through an electron bifurcation reaction. An example of electron bifurcation is the generation of reduced ferredoxin for nitrogen fixation in certain aerobic diazotrophs. Typically, in oxidative phosphorylation the transfer of electrons from NADH to ubiquinone (Q) is coupled to charging the proton motive force. In Azotobacter the energy released by transferring one electron from NADH to Q is used to simultaneously boost the transfer of one electron from NADH to Fd.
Direct reduction of high potential ferredoxins
Some ferredoxins have a sufficiently high redox potential that they can be directly reduced by NADPH. One such ferredoxin is adrenodoxin (−274 mV), which takes part in the biosynthesis of many mammalian steroids. The ferredoxin Fd3 in the roots of plants that reduces nitrate and sulfite has a midpoint potential of −337 mV and is also reduced by NADPH.
Fe2S2 ferredoxins
Members of the 2Fe–2S ferredoxin superfamily have a general core structure consisting of beta(2)-alpha-beta(2), which includes putidaredoxin, terpredoxin, and adrenodoxin. They are proteins of around one hundred amino acids with four conserved cysteine residues to which the 2Fe–2S cluster is ligated. This conserved region is also found as a domain in various metabolic enzymes and in multidomain proteins, such as aldehyde oxidoreductase (N-terminal), xanthine oxidase (N-terminal), phthalate dioxygenase reductase (C-terminal), succinate dehydrogenase iron–sulphur protein (N-terminal), and methane monooxygenase reductase (N-terminal).
Plant-type ferredoxins
One group of ferredoxins, originally found in chloroplast membranes, has been termed "chloroplast-type" or "plant-type". Its active center is a [Fe2S2] cluster, where the iron atoms are tetrahedrally coordinated both by inorganic sulfur atoms and by sulfurs of four conserved cysteine (Cys) residues.
In chloroplasts, Fe2S2 ferredoxins function as electron carriers in the photosynthetic electron transport chain and as electron donors to various cellular proteins, such as glutamate synthase, nitrite reductase, sulfite reductase, and the cyclase of chlorophyll biosynthesis. Since the cyclase is a ferredoxin-dependent enzyme, this may provide a mechanism for coordination between photosynthesis and the chloroplast's need for chlorophyll, by linking chlorophyll biosynthesis to the photosynthetic electron transport chain. In hydroxylating bacterial dioxygenase systems, they serve as intermediate electron-transfer carriers between reductase flavoproteins and oxygenase.
Thioredoxin-like ferredoxins
The Fe2S2 ferredoxin from Clostridium pasteurianum (Cp2FeFd) has been recognized as a distinct protein family on the basis of its amino acid sequence, the spectroscopic properties of its iron–sulfur cluster, and the unique ligand-swapping ability of two cysteine ligands to the [Fe2S2] cluster. Although the physiological role of this ferredoxin remains unclear, a strong and specific interaction of Cp2FeFd with the molybdenum-iron protein of nitrogenase has been revealed. Homologous ferredoxins from Azotobacter vinelandii (Av2FeFdI) and Aquifex aeolicus (AaFd) have been characterized. The crystal structure of AaFd has been solved. AaFd exists as a dimer. The structure of the AaFd monomer is different from other Fe2S2 ferredoxins. The fold belongs to the α+β class, with the first four β-strands and two α-helices adopting a variant of the thioredoxin fold. UniProt categorizes these as the "2Fe2S Shethna-type ferredoxin" family.
Adrenodoxin-type ferredoxins
Adrenodoxin (adrenal ferredoxin), putidaredoxin, and terpredoxin make up a family of soluble Fe2S2 proteins that act as single electron carriers, mainly found in eukaryotic mitochondria and Pseudomonadota. The human variants of adrenodoxin are referred to as ferredoxin-1 and ferredoxin-2. In mitochondrial monooxygenase systems, adrenodoxin transfers an electron from NADPH:adrenodoxin reductase to membrane-bound cytochrome P450. In bacteria, putidaredoxin and terpredoxin transfer electrons between corresponding NADH-dependent ferredoxin reductases and soluble P450s. The exact functions of other members of this family are not known, although Escherichia coli Fdx is shown to be involved in biogenesis of Fe–S clusters. Despite low sequence similarity between adrenodoxin-type and plant-type ferredoxins, the two classes have a similar folding topology.
Ferredoxin-1 in humans participates in the synthesis of thyroid hormones. It also transfers electrons from adrenodoxin reductase to CYP11A1, a CYP450 enzyme responsible for cholesterol side chain cleavage. FDX-1 has the capability to bind to metals and proteins. Ferredoxin-2 participates in heme A and iron–sulphur protein synthesis.
Fe4S4 and Fe3S4 ferredoxins
The [Fe4S4] ferredoxins may be further subdivided into low-potential (bacterial-type) and high-potential (HiPIP) ferredoxins.
Low- and high-potential ferredoxins are related by the following redox scheme: [Fe4S4]3+ <=> [Fe4S4]2+ <=> [Fe4S4]+, in which high-potential (HiPIP) ferredoxins use the [Fe4S4]3+/2+ couple and low-potential (bacterial-type) ferredoxins use the [Fe4S4]2+/+ couple.
The formal oxidation numbers of the iron ions can be [2Fe3+, 2Fe2+] or [1Fe3+, 3Fe2+] in low-potential ferredoxins. The oxidation numbers of the iron ions in high-potential ferredoxins can be [3Fe3+, 1Fe2+] or [2Fe3+, 2Fe2+].
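As an illustration of how these formal oxidation numbers map onto overall cluster charges (counting the four inorganic sulfides as S2− and ignoring the cysteine thiolate ligands), the low-potential states correspond to [Fe4S4]2+ (2(+3) + 2(+2) − 8 = +2) and [Fe4S4]+ (1(+3) + 3(+2) − 8 = +1), while the high-potential states correspond to [Fe4S4]3+ (3(+3) + 1(+2) − 8 = +3) and [Fe4S4]2+, matching the redox scheme given above.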
Bacterial-type ferredoxins
A group of Fe4S4 ferredoxins, originally found in bacteria, has been termed "bacterial-type". Bacterial-type ferredoxins may in turn be subdivided into further groups, based on their sequence properties. Most contain at least one conserved domain, including four cysteine residues that bind to a [Fe4S4] cluster. In Pyrococcus furiosus Fe4S4 ferredoxin, one of the conserved Cys residues is substituted with aspartic acid.
During the evolution of bacterial-type ferredoxins, intrasequence gene duplication, transposition and fusion events occurred, resulting in the appearance of proteins with multiple iron–sulfur centers. In some bacterial ferredoxins, one of the duplicated domains has lost one or more of the four conserved Cys residues. These domains have either lost their iron–sulfur binding property or bind a [Fe3S4] cluster instead of a [Fe4S4] cluster, giving rise to monocluster- and dicluster-type ferredoxins.
3-D structures are known for a number of monocluster and dicluster bacterial-type ferredoxins. The fold belongs to the α+β class, with 2-7 α-helices and four β-strands forming a barrel-like structure, and an extruded loop containing three "proximal" Cys ligands of the iron–sulfur cluster.
High-potential iron–sulfur proteins
High potential iron–sulfur proteins (HiPIPs) form a unique family of Fe4S4 ferredoxins that function in anaerobic electron transport chains. Some HiPIPs have a redox potential higher than any other known iron–sulfur protein (e.g., HiPIP from Rhodopila globiformis has a redox potential of ca. +450 mV). Several HiPIPs have so far been characterized structurally, their folds belonging to the α+β class. As in other bacterial ferredoxins, the [Fe4S4] unit forms a cubane-type cluster and is ligated to the protein via four Cys residues.
Human proteins from ferredoxin family
2Fe–2S: AOX1; FDX1; FDX2; NDUFS1; SDHB; XDH;
4Fe–4S: ABCE1; DPYD; NDUFS8;
References
Further reading
External links
- 2Fe–2S ferredoxin subdomain
- Adrenodoxin
- 4Fe–4S ferredoxin, iron–sulfur binding
- High potential iron–sulfur protein
- X-ray structure of thioredoxin-like ferredoxin from Aquifex aeolicus (AaFd)
Iron–sulfur proteins
Photosynthesis
Steroid hormone biosynthesis | Ferredoxin | Chemistry,Biology | 3,016 |
59,997,499 | https://en.wikipedia.org/wiki/King%20of%20the%20Universe | King of the Universe (Sumerian: lugal ki-sár-ra or lugal kiš-ki, Akkadian: šarru kiššat māti, šar-kiššati or šar kiššatim), also interpreted as King of Everything, King of the Totality, King of All or King of the World, was a title of great prestige claiming domination of the universe used by powerful monarchs in ancient Mesopotamia. The title is sometimes applied to God in the Abrahamic tradition.
The etymology of the title derives from the ancient Sumerian city of Kish (Sumerian: kiš, Akkadian: kiššatu), the original meaning being King of Kish. Although the equation of šar kiššatim as literally meaning "King of the Universe" was made during the Akkadian period, the title of "King of Kish" is older and was already seen as particularly prestigious, as the city of Kish was seen as having primacy over all other Mesopotamian cities. In Sumerian legend, Kish was the location where the kingship was lowered to from heaven after the legendary Flood.
The first ruler to use the title of King of the Universe was the Akkadian Sargon of Akkad (reigned c. 2334–2284 BC) and it was used in a succession of later empires claiming symbolical descent from Sargon's Akkadian Empire. The title saw its final usage under the Seleucids, Antiochus I (reigned 281–261 BC) being the last known ruler to be referred to as "King of the Universe".
It is possible, at least among Assyrian rulers, that the title of King of the Universe was not inherited through normal means. As the title is not attested for all Neo-Assyrian kings and for some only attested several years into their reign it might have had to be earned by each king individually, possibly through completing seven successful military campaigns. The similar title of šar kibrāt erbetti ("King of the Four Corners of the World") may have required successful military campaigns in all four points of the compass. Some scholars believe that the titles of King of the Universe and King of the Four Corners of the World, with near identical meanings, differed in that King of the Universe referred to rule over the cosmological realm whereas King of the Four Corners of the World referred to dominion over the terrestrial. The verbatim translation of "King of the Universe" as a name exists in many languages; for example, in Hindi the translation would be Nikhil Shah and in Urdu, Shah Jahan.
History
Background (2900–2334 BC)
During the Early Dynastic Period in Mesopotamia (c. 2900–2350 BC), the rulers of the various city-states (the most prominent being Ur, Uruk, Lagash, Umma and Kish) in the region would often launch invasions into regions and cities far from their own, at most times with negligible consequences for themselves, in order to establish temporary and small empires to either gain or keep a superior position relative to the other city-states. This early empire-building was encouraged as the most powerful monarchs were often rewarded with the most prestigious titles, such as the title of lugal (literally "big man" but often interpreted as "king", probably with military connotations). Most of these early rulers had probably acquired these titles rather than inherited them.
Eventually this quest to be more prestigious and powerful than the other city-states resulted in a general ambition for universal rule. Since Mesopotamia was seen as corresponding to the entire world and Sumerian cities had been built far and wide (cities such as Susa, Mari and Assur were located near the perceived corners of the world), it seemed possible to reach the edges of the world (at this time thought to be the lower sea, the Persian Gulf, and the upper sea, the Mediterranean).
Rulers attempting to reach a position of universal rule became more common during the Early Dynastic IIIb period (c. 2450–2350 BC), during which two prominent examples are attested. The first, Lugalannemundu, king of Adab, is claimed by the Sumerian King List (though this is a much later inscription, making the extensive rule of Lugalannemundu somewhat doubtful) to have created a great empire covering the entirety of Mesopotamia, reaching from modern Syria to Iran, saying that he "subjugated the Four Corners". The second, Lugalzaggesi, king of Uruk, conquered the entirety of Lower Mesopotamia and claimed (despite this not being the case) that his domain extended from the upper to the lower sea. Lugalzaggesi was originally titled as simply "King of Uruk" and adopted the title "King of the Land" (Sumerian: lugal-kalam-ma) to lay claim to universal rule. This title had also been employed by some earlier Sumerian kings claiming control over all of Sumer, such as Enshakushanna of Uruk.
Sargon of Akkad and his successors (2334–2154 BC)
The earliest days of Mesopotamian empire-building were most often a struggle between the kings of the most prominent cities. In these early days, the title of "King of Kish" was already recognized as one of particular prestige, with the city being seen as having a sort of primacy over the other cities. By the time of Sargon of Akkad, "King of Kish" meant a divinely authorized ruler with the right to rule over all of Sumer, and it might have already somewhat referred to a universal ruler in the Early Dynastic IIIb period. Use of the title, which was not limited to kings actually in possession of the city itself, implied that the ruler was a builder of cities, victorious in war and a righteous judge. According to the Sumerian King List, the city of Kish was where the kingship was lowered to from heaven after the Flood, its rulers being the embodiment of human kingship.
Sargon began his political career as a cupbearer of Ur-Zababa, the ruler of the city of Kish. After somehow escaping assassination, Sargon became the ruler of Kish himself, adopting the title of šar kiššatim and eventually in 2334 BC founding the first great Mesopotamian empire, the Akkadian Empire (named after Sargon's second capital, Akkad). Sargon primarily used the title King of Akkad (šar māt Akkadi).
The title of šar kiššatim was prominently used by the successors of Sargon, including his grandson Naram-Sin (r. 2254–2218 BC), who also introduced the similar title of "King of the Four Corners of the World". The transition from šar kiššatim meaning just "King of Kish" to it meaning "King of the Universe" happened already during the Old Akkadian period. It is important to note that Sargon and his successors did not rule the city of Kish directly and did thus not claim kingship over it. Until the time of Naram-Sin, Kish was ruled by a semi-independent ruler with the title ensik. "King of Kish" would have been rendered as lugal kiš in Sumerian, whilst the Akkadian kings rendered their new title as lugal ki-sár-ra or lugal kiš-ki in Sumerian.
It is possible that šar kiššatim referred to the authority to govern the cosmological realm whilst "King of the Four Corners" referred to the authority to govern the terrestrial. Either way, the implication of these titles was that the Mesopotamian king was the king of the entire world.
Assyrian and Babylonian Kings of the Universe (1809–627 BC)
The title šar kiššatim was perhaps most prominently used by the kings of the Neo-Assyrian Empire, more than a thousand years after the fall of the Akkadian Empire. The Assyrians took it, as the Akkadians had intended, to mean "King of the Universe" and adopted it to lay claim to continuity from the old empire of Sargon of Akkad. The title had been used sporadically by previous Assyrian kings, such as Shamshi-Adad I (r. 1809–1776 BC) of the Old Assyrian Empire and Ashur-uballit I (r. 1353–1318 BC) of the Middle Assyrian Empire. Shamshi-Adad I was the first Assyrian king to adopt the title of "King of the Universe" and other Akkadian titles, possibly to challenge the claims of sovereignty made by neighboring kingdoms. In particular, the kings of Eshnunna, a city-state in central Mesopotamia, had used similar titles since the fall of the Neo-Sumerian Empire. From the reign of Ipiq-Adad I (1800s BC), the Eshnunnans had referred to their kings with the title of "Mighty King" (šarum dannum). The Eshnunnan kings Ipiq-Adad II and Dadusha even adopted the title šar kiššatim for themselves, signifying a struggle over the title with the Assyrians. The title was also claimed by some kings of Babylon and Mari.
The Neo-Assyrian Sargon II (r. 722–705 BC), namesake of Sargon of Akkad over a thousand years prior, had the full titulature of Great King, Mighty King, King of the Universe, King of Assyria, King of Babylon, King of Sumer and Akkad. Since the title is not attested for all Neo-Assyrian kings and for some only attested several years into their reigns, it is possible that the title of "King of the Universe" had to be earned by each king individually, but the process by which a king could acquire the title is unknown. The British historian Stephanie Dalley, a specialist in the Ancient Near East, proposed in 1998 that the title may have had to be earned through the king successfully completing seven military campaigns, the number seven being connected to totality in the eyes of the Assyrians. This is similar to the title of King of the Four Corners of the World, which might have required the king to successfully campaign in all four points of the compass. It thus would not have been possible for a king to claim to be "King of the Universe" before completing the required military campaigns. The title seems to have had similar requirements among Babylonian kings: the king Ayadaragalama (c. 1500 BC) was only able to claim the title late in his reign, his earliest campaigns that established control over cities such as Kish, Ur, Lagash and Akkad not being enough to justify its use. Both Ayadaragalama and the later Babylonian king Kurigalzu II only appear to have been able to claim to be King of the Universe after their realm extended as far as Bahrain.
Even in the Neo-Assyrian period when Assyria was the dominant kingdom in Mesopotamia, the Assyrian use of King of the Universe was challenged as the kings of Urartu from Sarduri I (r. 834–828 BC) onwards began using the title as well, claiming to be equal to the Assyrian kings and asserting wide territorial rights.
Later examples (626–261 BC)
The Neo-Assyrian Empire's domination over Mesopotamia ended with the establishment of the Neo-Babylonian Empire in 626 BC. With the sole exceptions of the first ruler of this empire, Nabopolassar, and the last, Nabonidus, the rulers of the Neo-Babylonian Empire abandoned most of the old Assyrian titles in their inscriptions. Nabopolassar used "mighty king" (šarru dannu) and Nabonidus utilized several of the Neo-Assyrian titles including "mighty king", "great king" (šarru rabu) and King of the Universe. Though not using it in royal inscriptions (i.e. not officially), both Nabopolassar and Nebuchadnezzar II used the title in economic documents.
The title was also among the many Mesopotamian titles assumed by Cyrus the Great of the Achaemenid Empire after his conquest of Babylon in 539 BC. In the text of the Cyrus Cylinder, Cyrus assumes several traditional Mesopotamian titles including those of "King of Babylon", "King of Sumer and Akkad" and "King of the Four Corners of the World". The title of King of the Universe was not used after the reign of Cyrus but his successors did adopt similar titles. The popular regnal title "King of Kings", used by monarchs of Iran until the modern age, was originally a title introduced by the Assyrian Tukulti-Ninurta I in the 13th century BC (rendered šar šarrāni in Akkadian). The title of "King of Lands", also used by Assyrian monarchs since at least Shalmaneser III, was also adopted by Cyrus the Great and his successors.
The title was last used in the Hellenistic Seleucid Empire, which controlled Babylon following the conquests of Alexander the Great and the resulting Wars of the Diadochi. The title appears on the Antiochus Cylinder of King Antiochus I (r. 281–261 BC), which describes how Antiochus rebuilt the Ezida Temple in the city of Borsippa. It is worth noting that the last known surviving example of an Akkadian-language royal inscription preceding the Antiochus cylinder is the Cyrus Cylinder created nearly 300 years prior, and as such it is possible that more Achaemenid and Seleucid rulers would have assumed the title when in Mesopotamia. The Antiochus Cylinder was likely inspired in its composition by earlier Mesopotamian royal inscriptions and bears many similarities with Assyrian and Babylonian royal inscriptions. Titles such as "King of Kings" and "Great King" (šarru rabu), ancient titles with the connotation of holding supreme power in the lands surrounding Babylon (in a similar way to how titles like Imperator were used in Western Europe following the fall of the Western Roman Empire to establish supremacy), would remain in use in Mesopotamia up until the Sassanid Empire in Persia of the 3rd to 7th centuries.
In religion
The title King of the Universe has sometimes been applied to deities since at least the Neo-Assyrian period, even though the title in those times was also used by contemporary monarchs. A 680 BC inscription by the Neo-Assyrian king Esarhaddon (who in the same inscription himself uses the title "King of the Universe," among other titles), in Babylon, refers to the goddess Sarpanit (Babylon's patron deity) as "Queen of the Universe."
In Judaism, the title King of the Universe came to be applied to God. To this day, Jewish liturgical blessings generally begin with the phrase "Barukh ata Adonai Eloheinu, melekh ha`olam..." (Blessed are you, Lord our God, King of the Universe...). Throughout scripture, it is made clear that the Abrahamic deity is not supposed to be simply the God of a small tribe in Israel, but the God of the entire world. In the Book of Psalms, God's universal kingship is repeatedly mentioned; for example, Psalms 47:2 refers to God as the "great King over all the earth."
In Christianity, the title is sometimes applied to Jesus. For example, Nikephoros I, Patriarch of Constantinople (c. 758–828), referred to Jesus' abandoning his terrestrial domain for a cosmic domain of infinite light and glory.
In Islam the equivalent term is "rabbil-'alamin" ("Lord of the Universe"), as found in the first chapter of the Quran.
Examples of rulers who used the title
Kings of the Universe in the Akkadian Empire:
Sargon (r. 2334–2279 BC) – not the first King of Kish, but the first ruler whose use of the title is identified with the connotation of King of the Universe.
Rimush (r. 2279–2270 BC)
Naram-Sin (r. 2254–2218 BC)
Kings of the Universe in Upper Mesopotamia:
Shamshi-Adad I (r. 1809–1776 BC)
Kings of the Universe in Eshnunna:
Dadusha (c. 1800–1779 BC)
Naram-Suen (c. 1800 BC)
Ipiq-Adad II (r. ~1700 BC)
Kings of the Universe in Mari:
Zimri-Lim (r. 1775–1761 BC)
Kings of the Universe in the Middle Assyrian Empire:
Ashur-uballit I (r. 1353–1318 BC)
Adad-nirari I (r. 1295–1264 BC)
Ashur-dan II (r. 934–912 BC)
Kings of the Universe in Babylonia:
Ayadaragalama (r. ~1500 BC)
Burna-Buriash II (r. 1359–1333 BC)
Kurigalzu II (r. 1332–1308 BC)
Nazi-Maruttash (r. 1307–1282 BC)
Ninurta-nadin-shumi (r. 1132–1126 BC)
Nebuchadnezzar I (r. 1126–1103 BC)
Enlil-nadin-apli (r. 1103–1099 BC)
Marduk-nadin-ahhe (r. 1099–1082 BC)
Marduk-shapik-zeri (r. 1082–1069 BC)
Adad-apla-iddina (r. 1069–1046 BC)
Nabu-shum-libur (r. 1033–1026 BC)
Eulmash-shakin-shumi (r. 1004–987 BC)
Mar-biti-apla-usur (r. 984–979 BC)
Kings of the Universe in the Neo-Assyrian Empire:
Adad-nirari II (r. 912–891 BC)
Tukulti-Ninurta II (r. 891–884 BC)
Adad-nirari III (r. 811–783 BC)
Tiglath-Pileser III (r. 745–727 BC)
Shalmaneser V (r. 727–722 BC)
Sargon II (r. 722–705 BC)
Sennacherib (r. 705–681 BC)
Esarhaddon (r. 681–669 BC)
Ashurbanipal (r. 669–631 BC)
Shamash-shum-ukin (Neo-Assyrian king of Babylon, r. 667–648 BC)
Ashur-etil-ilani (r. 631–627 BC)
Sinsharishkun (r. 627–612 BC)
Kings of the Universe in Urartu:
Sarduri I (r. 834–828 BC) and his successors.
Kings of the Universe of the Cimmerians:
Tugdamme (mid-7th century)
Kings of the Universe in the Neo-Babylonian Empire:
Nabopolassar (r. 626–605 BC) – in economic documents.
Nebuchadnezzar II (r. 605–562 BC) – in economic documents.
Nabonidus (r. 556–539 BC) – only Neo-Babylonian king to call himself King of the Universe in his royal inscriptions.
Kings of the Universe in the Achaemenid Empire:
Cyrus the Great (r. 559–530 BC) – claimed the title from 539 BC.
Kings of the Universe in the Seleucid Empire:
Antiochus I (r. 281–261 BC)
See also
Mesopotamian cosmology
References
Notes
Citations
Bibliography
Websites
24th-century BC establishments
3rd-century BC disestablishments
Ancient Mesopotamia
Sumer
Babylon
Akkadian Empire
Neo-Assyrian Empire
Royal titles
Space colonization
Outer space
Ancient astronomy | King of the Universe | Astronomy | 4,184 |
45,580,833 | https://en.wikipedia.org/wiki/Geometric%20morphometrics%20in%20anthropology | The study of geometric morphometrics in anthropology has made a major impact on the field of morphometrics by aiding in some of the technological and methodological advancements. Geometric morphometrics is an approach that studies shape using Cartesian landmark and semilandmark coordinates that are capable of capturing morphologically distinct shape variables. The landmarks can be analyzed using various statistical techniques separate from size, position, and orientation so that the only variables being observed are based on morphology. Geometric morphometrics is used to observe variation in numerous formats, especially those pertaining to evolutionary and biological processes, which can be used to help explore the answers to a lot of questions in physical anthropology. Geometric morphometrics is part of a larger subfield in anthropology, which has more recently been named virtual anthropology. Virtual anthropology looks at virtual morphology, the use of virtual copies of specimens to perform various quantitative analyses on shape (such as geometric morphometrics) and form...
Background
The field of geometric morphometrics grew out of the accumulation of improvements of methods and approaches over several decades beginning with Francis Galton (1822-1911). Galton was a polymath and the president of the Anthropological Institute of Great Britain. In 1907 he invented a way to quantify facial shapes using a base-line registration approach for shape comparisons. This was later adapted by Fred Bookstein and termed “two-point coordinates” or “Bookstein-shape coordinates”.
In the 1940s, D’Arcy Wentworth Thompson (biologist and mathematician, 1860-1948) looked at ways of quantifying biological shape that could be tied to developmental and evolutionary theories. This led to the first branch of multivariate morphometrics, which emphasized matrix manipulations involving variables. In the late 1970s and early 1980s, Fred Bookstein (currently a professor of Anthropology at the University of Vienna) began using Cartesian transformations and David George Kendall (statistician, 1918-2007) showed that figures that hold the same shape can be treated as separate points in a geometric space. Finally, in 1996, Leslie Marcus (paleontologist, 1930-2002) convinced colleagues to use morphometrics on the famous Ötzi skeleton, which helped expose the importance of the applications of these methods.
Traditional morphometrics
Traditional morphometrics is the study of morphological variations between or within groups using multivariate statistical tools. Shape is defined by collecting and analyzing length measurements, counts, ratios, and angles. The statistical tools are able to quantify the covariation within and between samples. Some of the typical statistical tools used for traditional morphometrics are: principal components, factor analysis, canonical variate, and discriminant function analysis. It is also possible to study allometry, which is the observed change in shape when there is change in size. However, there are problems pertaining to size correction since linear distance is highly correlated with size. There have been multiple methods put forth to correct for this correlation, but these methods disagree and can end up with different results using the same dataset. Another problem is that linear distances are not always defined by the same landmarks, making them difficult to use for comparative purposes. For shape analysis itself, which is the goal of morphometrics, the biggest downside to traditional morphometrics is that it does not capture the complete variation of shape in space, which is what the measurements are supposed to be based on. For example, if one tried to compare the length and width of an oval and a teardrop shape with the same dimensions, the two would be deemed the same using traditional morphometrics. Geometric morphometrics tries to correct these problems by capturing more variability in shape.
Steps in a geometric morphometric study
There is a basic structure to successfully performing and completing every geometric morphometric study:
Design Study: what is your objective/hypothesis? what morphology must you capture to explore this?
Collect Data: choose your landmark set and method of collection
Standardize Data: make your landmarks comparable across all specimens (superimposition)
Analyze Data: choose a statistical approach depending on your original question and how you designed the study
Interpret Results: take the outcome of your statistical analysis and reflect it back to the context of your original specimens
Data collection methods
Landmarks
The first step is to define your landmark set. Landmarks have to be anatomically recognizable and the same for all specimens in the study. Landmarks should be selected to properly capture the shape being studied and must be capable of being replicated. The sample size should be roughly three times the number of landmarks chosen, and the landmarks must be recorded in the same order for every specimen.
Semilandmarks
Semilandmarks, also called sliding landmarks, are used when the location of a landmark along a curvature might not be identifiable or repeatable. Semilandmarks were created in order to take landmark-based geometric morphometrics to the next step by capturing the shape of difficult areas such as smooth curves and surfaces. In order to obtain a semilandmark, the curvature still has to start and end on definable landmarks, capture observed morphology, remain homologous across specimens following the same steps seen above for regular landmarks, be equal in number, and be equally spaced. When this approach was first proposed, Bookstein suggested gaining semilandmarks by densely sampling landmarks along the surface in a mesh and slowly thinning out the landmarks until the desired curvature was obtained. Newer landmark programs aid in the process, but there are still some steps that must be taken in order for the semilandmarks to be the same across the whole sample. Semilandmarks are not placed on the actual curve or surface but on tangent vectors to the curve or tangent planes to the surface. The sliding of semilandmarks in new programs is performed by either selecting a specimen to be the model specimen for the rest of the specimens or using a computational sample mean from tangent vectors. Semilandmarks are automatically placed in most programs when the observer chooses a starting and ending point on definable landmarks and slides the semilandmarks between them until the shape is captured. The semilandmarks are then mapped onto the rest of the specimens in the sample. Since shape will differ between specimens, the observer has to manually go through and make sure the landmarks and semilandmarks are on the surface for the rest of the specimens. If not, they must be moved to touch the surface, a process that still maintains the correct location. There is still room for improvement in these methods, but this is the most consistent option at the moment. Once mapped on, these semilandmarks can be treated just like landmarks for statistical analysis.
Deformation grid
This is a different approach to data collection than using landmarks and semilandmarks. In this approach, deformation grids are used to capture the morphological shape differences and changes. The general idea is that shape variations can be recorded from one specimen to another based on the distortion of a grid. Bookstein proposed the use of a thin-plate spline (TPS) interpolation, which is a computed deformation grid that calculates a mapping function between two individuals that measures point differences. Basically, the TPS interpolation has a template computed grid that is applied to specimens and the differences in shape can be read from the different deformations of the template. The TPS can be used for both two- and three-dimensional data; although it has proved less effective for visualizing three-dimensional differences, it can easily be applied to the pixels of an image or to volumetric data from CT or MRI scans.
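As an illustration, a thin-plate spline mapping between two landmark configurations can be fitted with general-purpose scientific software. The sketch below is written in Python and assumes SciPy's RBFInterpolator with its thin-plate-spline kernel; the landmark coordinates are made up for the example, so it is an illustrative approximation of the approach rather than the implementation used in any particular morphometrics package.

import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical 2D example: five landmarks on a reference and a target specimen.
reference = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0], [0.5, 0.5]])
target = np.array([[0.0, 0.1], [1.1, 0.0], [0.9, 1.0], [0.0, 0.9], [0.6, 0.4]])

# Fit a thin-plate spline that maps reference coordinates onto target coordinates.
tps = RBFInterpolator(reference, target, kernel='thin_plate_spline')

# Deform a regular grid with the fitted mapping; the bending of the grid
# visualizes the shape change from the reference to the target.
xs, ys = np.meshgrid(np.linspace(0.0, 1.0, 5), np.linspace(0.0, 1.0, 5))
grid = np.column_stack([xs.ravel(), ys.ravel()])
warped_grid = tps(grid)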
Superimposition
Generalized Procrustes analysis (GPA)
Landmark and semilandmark coordinates can be recorded on each specimen, but size, orientation, and position can vary for each of those specimens, adding in variables that distract from the analysis of shape. This can be fixed by using superimposition, with generalized Procrustes analysis (GPA) being the most common application. GPA removes the variation of size, orientation, and position by superimposing the landmarks in a common coordinate system. The landmarks for all specimens are optimally translated, rotated, and scaled based on a least-squares estimation. The first step is translation and rotation to minimize the squared and summed differences (squared Procrustes distance) between landmarks on each specimen. Then the landmarks are individually scaled to the same unit centroid size. Centroid size is the square root of the sum of squared distances of the landmarks in a configuration from their mean location. The translation, rotation, and scaling bring the landmark configurations for all specimens into a common coordinate system so that the only differing variables are based on shape alone. The new superimposed landmarks can now be analyzed in multivariate statistical analyses.
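A minimal sketch of the superimposition step in Python is given below, assuming only NumPy. The function names, the fixed iteration count, and the array layout (specimens × landmarks × dimensions) are illustrative choices rather than a standard morphometrics API, and reflections are not explicitly excluded as they would be in a full GPA implementation.

import numpy as np

def center_and_scale(config):
    # Translate a landmark configuration to its centroid and scale it to unit centroid size.
    centered = config - config.mean(axis=0)
    centroid_size = np.sqrt((centered ** 2).sum())
    return centered / centroid_size

def rotate_onto(reference, config):
    # Ordinary Procrustes rotation: least-squares fit of config onto reference.
    u, _, vt = np.linalg.svd(config.T @ reference)
    return config @ (u @ vt)

def generalized_procrustes(configs, iterations=10):
    # configs: array of shape (n_specimens, n_landmarks, n_dimensions).
    aligned = np.array([center_and_scale(c) for c in configs])
    mean_shape = aligned[0]
    for _ in range(iterations):
        aligned = np.array([rotate_onto(mean_shape, c) for c in aligned])
        mean_shape = center_and_scale(aligned.mean(axis=0))
    return aligned, mean_shape

After this step the aligned configurations differ only in shape and can be passed to the statistical analyses described below.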
Statistical analysis
Principal components analysis (PCA)
In general, principal components analysis is used to construct overarching variables that take the place of multiple correlated variables in order to reveal the underlying structure of the dataset. This is helpful in geometric morphometrics where a large set of landmarks can create correlated relationships that might be difficult to differentiate without reducing them in order to look at the overall variability in the data. Reducing the number of variables is also necessary because the number of variables being observed and analyzed should not exceed sample size. Principal component scores are computed through an eigendecomposition of a sample’s covariance matrix; the analysis rotates the data so that Procrustes distances are preserved. In other words, a principal components analysis preserves the shape variables that were scaled, rotated, and translated during the generalized Procrustes analysis. The resulting principal component scores project the shape variables onto a low-dimensional space based on eigenvectors. The scores can be plotted in various ways to look at the shape variables, such as scatterplots. It is important to explore what shape variables are being observed to make sure the principal components being analyzed are pertinent to the questions being asked. Although the components might show shape variables not relevant to the question at hand, it is perfectly acceptable to leave those components out of any further analysis for a specific project.
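For illustration, the sketch below shows how superimposed coordinates might be flattened into shape variables and passed to a principal components analysis. It assumes scikit-learn is available and uses randomly generated stand-in data rather than real landmark configurations.

import numpy as np
from sklearn.decomposition import PCA

# Stand-in data: 30 specimens, 20 landmarks in 3D, assumed already superimposed.
rng = np.random.default_rng(0)
aligned = rng.normal(size=(30, 20, 3))

# Flatten each configuration into one row of shape variables per specimen.
shape_variables = aligned.reshape(30, -1)

pca = PCA(n_components=5)
scores = pca.fit_transform(shape_variables)     # principal component scores per specimen
explained = pca.explained_variance_ratio_       # proportion of shape variance per component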
Partial least squares (PLS)
Partial least squares is similar to principal components analysis in that it reduces the number of variables being observed so patterns are more easily observed in the data, but it uses a linear regression model. PLS is an approach that looks at two or more sets of variables measured on the same specimens and extracts the linear combinations that best represent the pattern of covariance across the sets. The linear combinations will optimally describe the covariances and provide a low-dimensional output to compare the different sets. With the highest shape variation covariance, mean shape, and the other shape covariances that exist among the sets, this approach is ideal for looking at the significance of group differences. PLS has been used extensively in studies that look at things such as sexual dimorphism, or other general morphological differences found at the population, subspecies, and species level. It has also been used to look at functional, environmental, or behavioral differences that could influence the observed shape covariance between sets.
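A short Python sketch of a two-block partial least squares analysis is shown below; it assumes scikit-learn's PLSCanonical and uses random stand-in blocks, so the variable names and block sizes are purely illustrative.

import numpy as np
from sklearn.cross_decomposition import PLSCanonical

# Stand-in blocks measured on the same 30 specimens:
# one block of flattened shape variables and one block of, e.g., environmental measurements.
rng = np.random.default_rng(1)
shape_block = rng.normal(size=(30, 60))
covariate_block = rng.normal(size=(30, 4))

pls = PLSCanonical(n_components=2)
shape_scores, covariate_scores = pls.fit_transform(shape_block, covariate_block)
# Each pair of score columns is a linear combination chosen to maximize
# the covariance between the two blocks.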
Multivariate regression
Multiple or multivariate regression is an approach to look at the relationship between several independent or predictor variables and a dependent or response variable. It is best used in geometric morphometrics when analyzing shape variables based on an external influence. For example, it can be used in studies with attached functional or environmental variables like age or the development over time in certain environments. The multivariate regression of shape based on the logarithm of centroid size (the square root of the sum of squared distances of the landmarks from their centroid) is ideal for allometric studies. Allometry is the analysis of shape based on the biological parameters of growth and size. This approach is not affected by the number of dependent shape variables or their covariance, so the resulting regression coefficients can be visualized as a deformation in shape.
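The following Python sketch illustrates a simple allometric regression of shape variables on log centroid size using ordinary least squares; the data are randomly generated placeholders and the array layout is an assumption made for the example.

import numpy as np

# Stand-in data: 30 specimens, 20 landmarks in 3D.
rng = np.random.default_rng(2)
landmarks = rng.normal(size=(30, 20, 3))

# Centroid size: square root of the summed squared distances of the landmarks from their centroid.
centroids = landmarks.mean(axis=1, keepdims=True)
centroid_size = np.sqrt(((landmarks - centroids) ** 2).sum(axis=(1, 2)))
log_cs = np.log(centroid_size)

# Regress every shape variable on log centroid size (intercept plus slope).
shape_variables = landmarks.reshape(30, -1)
predictors = np.column_stack([np.ones(30), log_cs])
coefficients, *_ = np.linalg.lstsq(predictors, shape_variables, rcond=None)
# The slope row of coefficients can be visualized as a deformation of the mean shape.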
Some applications in anthropology
Human evolution
The human brain
The human brain is distinguished from that of other species by the size of the visual cortex, temporal lobe, and parietal cortex, and by increased gyrification (folding of the brain). There have been many questions as to why these changes occurred and how they contributed to cognition and behavior, which are important questions in human evolution. Geometric morphometrics has been used to explore some of these questions using virtual endocasts (casts of the inside of the cranium) to gather information since brain tissue does not preserve in the fossil record. Geometric morphometrics can reveal small shape differences between brains, such as differences between modern humans and Neanderthals, whose brains were similar in size. Neubauer and colleagues looked at the endocasts of chimpanzees and modern humans to observe brain growth using 3D landmarks and semilandmarks. They found that there is an early "globularization phase" in human brain development that shows expansion of the parietal and cerebellar areas, which does not occur in chimpanzees. Gunz and colleagues extended the study further and found that the "globularization phase" does not occur in Neanderthals and that Neanderthal brain growth is instead more similar to that of chimpanzees. This difference could point to some important changes in the human brain that led to different organization and cognitive functions.
Pleistocene cranial morphology
There have been many debates on the relationships between Middle Pleistocene hominin crania from Eurasia and Africa because they display a mosaic of both primitive and derived traits. Studies on cranial morphology for these specimens have created arguments that Eurasian fossils from the Middle Pleistocene are a transition between Homo erectus and later hominins like Neanderthals and modern humans. However, there are two sides to the argument, with one side saying that the European and African fossils are from a single taxon while others say that the Neanderthal lineage should be included. Harvati and colleagues decided to attempt to quantify the craniofacial features of Neanderthals and European Middle Pleistocene fossils using 3D landmarks to try to add to the debate. They found that some features were more Neanderthal-like while others were primitive and likely from the Middle Pleistocene African hominins, so the argument could still go either way. Freidline and colleagues further added to the debate by looking at both adult and subadult crania of modern and Pleistocene hominins using 3D landmarks and semilandmarks. They found similarities in facial morphology between Middle Pleistocene fossils from Europe and Africa and a divide in facial morphology during the Pleistocene based on time period. The study also found that some characteristics separating Neanderthals from Middle Pleistocene hominins, like the size of the nasal aperture and degree of midfacial prognathism, might be due to allometric differences.
Modern human variation
Ancestry and sex estimation of crania
Crania can be used to classify ancestry and sex to aid in forensic contexts such as crime scenes and mass fatalities. In 2010, Ross and colleagues were provided federal funds by the U.S. Department of Justice to compile data for population-specific classification criteria using geometric morphometrics. Their aim was to create an extensive population database from 3D landmarks on human crania, to develop and validate population-specific procedures for classification of unknown individuals, and develop software to use in forensic identification. They recorded 75 craniofacial landmarks in 3D with a Microscribe digitizer on about 1,000 individuals from European, African, and Hispanic populations. The software they developed, called 3D-ID, can classify unknown individuals into probable sex and ancestry, and allows for fragmentary and damaged specimens to be used. A copy of the full manuscript can be found here: Geometric Morphometric Tools for the Classification of Human Skulls
Sex estimation of os coxae
Geometric morphometrics can also be used to capture the slight shape variations found in postcranial bones of the human body such as the os coxae. Bierry and colleagues used 3D CT reconstructions of modern adult pelvic bones for 104 individuals to look at the shape of the obturator foramen. After a normalization technique to take out the factor of size, they outlined the obturator foramen with landmarks and semilandmarks to capture its shape. They chose the obturator foramen because it tends to be oval in males and triangular in females. The results show a classification accuracy of 88.5% for males and 80.8% for females using a Discriminant Fourier Analysis. Another study done by Gonzalez and colleagues used geometric morphometrics to capture the complete shape of the ilium and ischiopubic ramus. They placed landmarks and semilandmarks on 2D photographic images of 121 left pelvic bones from a collection of undocumented skeletons at the Museu Anthropológico de Coimbra in Portugal. Since the pelvic bones were of unknown origin, they used a K-means Cluster Analysis to determine a sex category before performing a Discriminant Function analysis. The results had a classification accuracy of 90.9% for the greater sciatic notch and 90.1 to 93.4% for the ischiopubic ramus.
Shape variation of archaeological assemblages
In archaeology, geometric morphometrics is used to examine the shape variation or standardization of artifacts to answer questions about typological and technological changes. Most applications are to stone tools, measuring variations in morphology between different assemblage groups to understand their functions. Applications to pottery shape include identifying the level of standardization in order to explore ceramic production and its implications for social organization.
Standard books
The books listed below are the standard suggestions for anyone who wants to obtain a comprehensive understanding of morphometrics (referred to by colors):
-The Red Book: Bookstein, F. L., B. Chernoff, R. Elder, J. Humphries, G. Smith, and R. Strauss. 1985. Morphometrics in Evolutionary Biology
One of the first collections of papers introducing the importance of morphometrics
-The Blue Book: Rohlf, F. J. and F. L. Bookstein (eds.). 1990. Proceedings of the Michigan Morphometrics Workshop
A collection of papers that cover: data acquisition, multivariate methods, methods for outline data, methods for landmark data, and the problem of homology
-The Orange Book: Bookstein, F. L. 1991. Morphometric Tools for Landmark Data. Geometry and Biology
Widely cited collection of papers with an extensive background on morphometrics
-The Black Book: Marcus, L. F., E. Bello, A. García-Valdecasas (eds.). 1993. Contributions to Morphometrics
A collection of papers that covers the basics of morphometrics and data acquisition
-The Green Book: Zelditch, M. L., D. L. Swiderski, H. D. Sheets, and W. L. Fink. 2004. Geometric Morphometrics for biologists: A Primer
First full-length book on geometric morphometrics
Equipment
2D Equipment
High-quality digital cameras: collect 2D landmarks on photograph
Spreading and Sliding Calipers/Osteometric Board: linear measurements only (traditional morphometrics)
3D Equipment
Microscribe digitizer: manually collect 3D landmarks and measurements with robotic arm
Microscribe laser scanner: manually sweep surface of object with laser to obtain a scan for 3D landmarks
NextEngine laser scanner: automatically sweeps surface of object with laser to obtain scan for 3D landmarks
Computed Tomography Scan (CT scans): x-ray image slices combined to create surface for 3D landmarks
Useful links
Morphometrics at Stony Brook: This is a website run by F. James Rohlf in the Anthropology Department at Stony Brook University in Stony Brook, NY. The website provides a plethora of information and tools for people who study morphometrics. The context sections include: meetings/workshops/course information, software downloads, usable data, bibliography, glossary, people, hardware, and more.
The Morphometrics Website: This is a website run by Dennis E. Slice and provides services relating to shape analysis such as the MORPHMET mailing list/discussion group and links to other online resources for geometric morphometrics.
3D-ID, Geometric Morphometric Classification of Crania for Forensic Scientists: 3D-ID is a software developed by Ross, Slice, and Williams that contains 3D coordinate data collected on modern crania and can be used for forensic identification purposes.
Max Planck Institute for Evolutionary Anthropology: The Max Planck Institute for Evolutionary Anthropology is an institute housing a variety of scientists working on evolutionary genetics, human evolution, linguistics, primatology, and developmental/comparative psychology. The human evolution division houses palaeoanthropologists who study fossils with an emphasis on 3D imaging to analyze phylogenetics and brain development.
New York Consortium in Evolutionary Primatology (NYCEP): NYCEP is a consortium in physical anthropology run by the American Museum of Natural History and other associated institutions. A section of this program has staff and laboratories specifically for the study of human evolution with a strong emphasis on comparative morphology with morphometric, 3D scanning, and image analysis equipment.
References
Anthropology
Bioinformatics | Geometric morphometrics in anthropology | Engineering,Biology | 4,321 |
51,087,033 | https://en.wikipedia.org/wiki/Testosterone%20propionate/testosterone%20phenylpropionate/testosterone%20isocaproate/testosterone%20caproate | Testosterone propionate/testosterone phenylpropionate/testosterone isocaproate/testosterone caproate (TP/TPP/TiC/TCa), sold under the brand name Omnadren or Omnadren 250, is an injectable combination medication of four testosterone esters, all of which are androgens/anabolic steroids, which is no longer marketed. Its constituents included:
Testosterone propionate (30 mg)
Testosterone phenylpropionate (60 mg)
Testosterone isocaproate (60 mg)
Testosterone caproate (100 mg)
See also
Testosterone propionate/testosterone phenylpropionate/testosterone isocaproate
Testosterone propionate/testosterone phenylpropionate/testosterone isocaproate/testosterone decanoate
List of combined sex-hormonal preparations § Androgens
References
Abandoned drugs
Anabolic–androgenic steroids
Androstanes
Combined androgen formulations
Testosterone esters
Testosterone | Testosterone propionate/testosterone phenylpropionate/testosterone isocaproate/testosterone caproate | Chemistry | 205 |
7,174,467 | https://en.wikipedia.org/wiki/Connected-component%20labeling | Connected-component labeling (CCL), connected-component analysis (CCA), blob extraction, region labeling, blob discovery, or region extraction is an algorithmic application of graph theory, where subsets of connected components are uniquely labeled based on a given heuristic. Connected-component labeling is not to be confused with segmentation.
Connected-component labeling is used in computer vision to detect connected regions in binary digital images, although color images and data with higher dimensionality can also be processed. When integrated into an image recognition system or human-computer interaction interface, connected component labeling can operate on a variety of information. Blob extraction is generally performed on the resulting binary image from a thresholding step, but it can be applicable to gray-scale and color images as well. Blobs may be counted, filtered, and tracked.
Blob extraction is related to but distinct from blob detection.
Overview
A graph, containing vertices and connecting edges, is constructed from relevant input data. The vertices contain information required by the comparison heuristic, while the edges indicate connected 'neighbors'. An algorithm traverses the graph, labeling the vertices based on the connectivity and relative values of their neighbors. Connectivity is determined by the medium; image graphs, for example, can be 4-connected neighborhood or 8-connected neighborhood.
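For image graphs, the two usual connectivity choices correspond to fixed sets of neighbor offsets around each pixel, as in the short Python snippet below (the constant names are illustrative):

# Offsets (row, column) of the neighbors of a pixel for the two common pixel connectivities.
NEIGHBORS_4 = [(-1, 0), (1, 0), (0, -1), (0, 1)]
NEIGHBORS_8 = NEIGHBORS_4 + [(-1, -1), (-1, 1), (1, -1), (1, 1)]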
Following the labeling stage, the graph may be partitioned into subsets, after which the original information can be recovered and processed.
Definition
The usage of the term connected-components labeling (CCL) and its definition are quite consistent in the academic literature, whereas connected-components analysis (CCA) varies in terms of both terminology and problem definition.
Rosenfeld et al. define connected components labeling as the “[c]reation of a labeled image in which the positions associated with the same connected component of the binary input image have a unique label.” Shapiro et al. define CCL as an operator whose “input is a binary image and [...] output is a symbolic image in which the label assigned to each pixel is an integer uniquely identifying the connected component to which that pixel belongs.”
There is no consensus on the definition of CCA in the academic literature. It is often used interchangeably with CCL. A more extensive definition is given by Shapiro et al.: “Connected component analysis consists of connected component labeling of the black pixels followed by property measurement of the component regions and decision making.” The definition for connected-component analysis presented here is more general, taking the thoughts expressed in the cited literature into account.
Algorithms
The algorithms discussed can be generalized to arbitrary dimensions, albeit with increased time and space complexity.
One component at a time
This is a fast and very simple method to implement and understand. It is based on graph traversal methods in graph theory. In short, once the first pixel of a connected component is found, all the connected pixels of that connected component are labelled before going onto the next pixel in the image. This algorithm is part of Vincent and Soille's watershed segmentation algorithm, other implementations also exist.
In order to do that, a linked list is formed that will keep the indexes of the pixels that are connected to each other, as in steps (2) and (3) below. The way the linked list is maintained determines whether a depth-first or a breadth-first search is used. For this particular application, it makes no difference which strategy is used. The simplest kind of last-in-first-out queue, implemented as a singly linked list, results in a depth-first search strategy.
It is assumed that the input image is a binary image, with pixels being either background or foreground and that the connected components in the foreground pixels are desired. The algorithm steps can be written as:
Start from the first pixel in the image. Set current label to 1. Go to (2).
If this pixel is a foreground pixel and it is not already labelled, give it the current label and add it as the first element in a queue, then go to (3). If it is a background pixel or it was already labelled, then repeat (2) for the next pixel in the image.
Pop out an element from the queue, and look at its neighbours (based on any type of connectivity). If a neighbour is a foreground pixel and is not already labelled, give it the current label and add it to the queue. Repeat (3) until there are no more elements in the queue.
Go to (2) for the next pixel in the image and increment current label by 1.
Note that the pixels are labelled before being put into the queue. The queue will only keep a pixel to check its neighbours and add them to the queue if necessary. This algorithm only needs to check the neighbours of each foreground pixel once and doesn't check the neighbours of background pixels.
The pseudocode is:
algorithm OneComponentAtATime(data)
input : imageData[xDim][yDim]
initialization : label = 0, labelArray[xDim][yDim] = 0, statusArray[xDim][yDim] = false, queue1, queue2;
for i = 0 to xDim do
for j = 0 to yDim do
if imageData[i][j] has not been processed do
if imageData[i][j] is a foreground pixel do
check its four neighbors (north, south, east, west):
if neighbor is not processed do
if neighbor is a foreground pixel do
add it to the queue1
else
update its status as processed
end if
labelArray[i][j] = label (give label)
statusArray[i][j] = true (update status)
while queue1 is not empty do
for each pixel in the queue do:
check its four neighbors
if neighbor is not processed do
if neighbor is a foreground pixel do
add it to the queue2
else
update its status as processed
end if
give it the current label
update its status as processed
remove the current element from queue1
copy queue2 into queue1
end While
increase the label
end if
else
update its status as processed
end if
end if
end if
end for
end for
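A compact, runnable Python version of the same one-component-at-a-time idea is sketched below; it uses a breadth-first queue, assumes 4-connectivity and a binary image given as a list of 0/1 rows, and simplifies some of the bookkeeping in the pseudocode above.

from collections import deque

def label_components(image):
    # image: 2D list of 0 (background) and 1 (foreground) pixels.
    rows, cols = len(image), len(image[0])
    labels = [[0] * cols for _ in range(rows)]
    current_label = 0
    for i in range(rows):
        for j in range(cols):
            if image[i][j] == 1 and labels[i][j] == 0:
                # Found the first pixel of a new component: flood it with a fresh label.
                current_label += 1
                labels[i][j] = current_label
                queue = deque([(i, j)])
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):  # 4-connectivity
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and image[ny][nx] == 1 and labels[ny][nx] == 0):
                            labels[ny][nx] = current_label
                            queue.append((ny, nx))
    return labels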
Two-pass
Relatively simple to implement and understand, the two-pass algorithm (also known as the Hoshen–Kopelman algorithm) iterates through 2-dimensional binary data. The algorithm makes two passes over the image: the first pass to assign temporary labels and record equivalences, and the second pass to replace each temporary label by the smallest label of its equivalence class.
The input data can be modified in situ (which carries the risk of data corruption), or labeling information can be maintained in an additional data structure.
Connectivity checks are carried out by checking the labels of neighbor pixels (neighbor elements whose labels are not yet assigned are ignored), that is, the north-east, north, north-west and west neighbors of the current pixel (assuming 8-connectivity). 4-connectivity uses only the north and west neighbors of the current pixel. The following conditions are checked to determine the value of the label to be assigned to the current pixel (4-connectivity is assumed).
Conditions to check:
Does the pixel to the left (west) have the same value as the current pixel?
Yes – We are in the same region. Assign the same label to the current pixel
No – Check next condition
Do both pixels to the north and west of the current pixel have the same value as the current pixel but not the same label?
Yes – We know that the north and west pixels belong to the same region and must be merged. Assign the current pixel the minimum of the north and west labels, and record their equivalence relationship
No – Check next condition
Does the pixel to the left (west) have a different value and the one to the north the same value as the current pixel?
Yes – Assign the label of the north pixel to the current pixel
No – Check next condition
Do the pixel's north and west neighbors have different pixel values than current pixel?
Yes – Create a new label id and assign it to the current pixel
The algorithm continues this way, and creates new region labels whenever necessary. The key to a fast algorithm, however, is how this merging is done. This algorithm uses the union-find data structure which provides excellent performance for keeping track of equivalence relationships. Union-find essentially stores labels which correspond to the same blob in a disjoint-set data structure, making it easy to remember the equivalence of two labels by the use of an interface method E.g.: findSet(l). findSet(l) returns the minimum label value that is equivalent to the function argument 'l'.
Once the initial labeling and equivalence recording is completed, the second pass merely replaces each pixel label with its equivalent disjoint-set representative element.
A faster-scanning algorithm for connected-region extraction is presented below.
On the first pass:
Iterate through each element of the data by column, then by row (Raster Scanning)
If the element is not the background
Get the neighboring elements of the current element
If there are no neighbors, uniquely label the current element and continue
Otherwise, find the neighbor with the smallest label and assign it to the current element
Store the equivalence between neighboring labels
On the second pass:
Iterate through each element of the data by column, then by row
If the element is not the background
Relabel the element with the lowest equivalent label
Here, the background is a classification, specific to the data, used to distinguish salient elements from the foreground. If the background variable is omitted, then the two-pass algorithm will treat the background as another region.
Graphical example of two-pass algorithm
1. The array from which connected regions are to be extracted is given below (8-connectivity based).
We first assign different binary values to elements in the graph. The values "0~1" at the center of each of the elements in the following graph are the elements' values, whereas the "1,2,...,7" values in the next two graphs are the elements' labels. The two concepts should not be confused.
2. After the first pass, the following labels are generated:
A total of 7 labels are generated in accordance with the conditions highlighted above.
The label equivalence relationships generated are,
3. Array generated after the merging of labels is carried out. Here, the label value that was the smallest for a given region "floods" throughout the connected region and gives two distinct labels, and hence two distinct regions.
4. Final result in color to clearly see two different regions that have been found in the array.
The pseudocode is:
algorithm TwoPass(data) is
linked = []
labels = structure with dimensions of data, initialized with the value of Background
NextLabel = 0
First pass
for row in data do
for column in row do
if data[row][column] is not Background then
neighbors = connected elements with the current element's value
if neighbors is empty then
linked[NextLabel] = set containing NextLabel
labels[row][column] = NextLabel
NextLabel += 1
else
Find the smallest label
L = neighbors labels
labels[row][column] = min(L)
for label in L do
linked[label] = union(linked[label], L)
Second pass
for row in data do
for column in row do
if data[row][column] is not Background then
labels[row][column] = find(labels[row][column])
return labels
The find and union algorithms are implemented as described in union find.
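A minimal union–find (disjoint-set) sketch in Python is shown below to illustrate the role this structure plays between the two passes; the class and method names are illustrative, and the smaller label is kept as the representative, matching the "smallest label" convention above.

class UnionFind:
    def __init__(self):
        self.parent = {}

    def make_set(self, label):
        # Register a newly created provisional label.
        self.parent.setdefault(label, label)

    def find(self, label):
        # Follow parent links to the representative label, compressing the path as we go.
        while self.parent[label] != label:
            self.parent[label] = self.parent[self.parent[label]]
            label = self.parent[label]
        return label

    def union(self, a, b):
        # Record that two provisional labels belong to the same connected component.
        root_a, root_b = self.find(a), self.find(b)
        if root_a != root_b:
            if root_a < root_b:
                self.parent[root_b] = root_a
            else:
                self.parent[root_a] = root_b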
Sequential algorithm
Create a region counter
Scan the image (in the following example, it is assumed that scanning is done from left to right and from top to bottom):
For every pixel check the north and west pixel (when considering 4-connectivity) or the northeast, north, northwest, and west pixel for 8-connectivity for a given region criterion (i.e. intensity value of 1 in binary image, or similar intensity to connected pixels in gray-scale image).
If none of the neighbors fit the criterion then assign to region value of the region counter. Increment region counter.
If only one neighbor fits the criterion assign pixel to that region.
If multiple neighbors match and are all members of the same region, assign pixel to their region.
If multiple neighbors match and are members of different regions, assign pixel to one of the regions (it doesn't matter which one). Indicate that all of these regions are equivalent.
Scan image again, assigning all equivalent regions the same region value.
Others
Some of the steps present in the two-pass algorithm can be merged for efficiency, allowing for a single sweep through the image. Multi-pass algorithms also exist, some of which run in linear time relative to the number of image pixels.
In the early 1990s, there was considerable interest in parallelizing connected-component algorithms in image analysis applications, due to the bottleneck of sequentially processing each pixel.
Interest in parallel connected-component algorithms has risen again with the widespread use of CUDA.
Pseudocode for the one-component-at-a-time algorithm
Algorithm:
Connected-component matrix is initialized to size of image matrix.
A mark is initialized and incremented for every detected object in the image.
A counter is initialized to count the number of objects.
A row-major scan is started for the entire image.
If an object pixel is detected, then the following steps are repeated while (Index != 0):
Set the corresponding pixel to 0 in Image.
A vector (Index) is updated with all the neighboring pixels of the currently set pixels.
Unique pixels are retained and repeated pixels are removed.
Set the pixels indicated by Index to mark in the connected-component matrix.
Increment the marker for another object in the image.
One-Component-at-a-Time(image)
[M, N] := size(image)
connected := zeros(M, N)
mark := value
difference := increment
offsets := [-1; M; 1; -M]
index := []
no_of_objects := 0
for i: 1:M do
for j: 1:N do
if (image(i, j) == 1) then
no_of_objects := no_of_objects + 1
index := [((j-1) × M + i)]
connected(index) := mark
while ~isempty(index) do
image(index) := 0
neighbors := bsxfun(@plus, index, offsets)
neighbors := unique(neighbors(:))
index := neighbors(find(image(neighbors)))
connected(index) := mark
end while
mark := mark + difference
end if
end for
end for
The run time of the algorithm depends on the size of the image and the amount of foreground. The time complexity is comparable to that of the two-pass algorithm if the foreground covers a significant part of the image. Otherwise the time complexity is lower. However, memory access is less structured than for the two-pass algorithm, which tends to increase the run time in practice.
Performance evaluation
In the last two decades many novel approaches to connected-component labeling have been proposed, but almost none of them have been subjected to a comparative performance assessment using the same data set. YACCLAB (an acronym for Yet Another Connected Components Labeling Benchmark) is an example of a C++ open-source framework which collects, runs, and tests connected-component labeling algorithms.
Hardware architectures
The emergence of FPGAs with enough capacity to perform complex image processing tasks also led to high-performance architectures for connected-component labeling. Most of these architectures utilize the single pass variant of this algorithm, because of the limited memory resources available on an FPGA. These types of connected component labeling architectures can process several image pixels in parallel, thereby achieving high throughput and low processing latency.
See also
Feature extraction
Flood fill
References
General
External links
Implementation in C#
about Extracting objects from image and Direct Connected Component Labeling Algorithm
Computer vision | Connected-component labeling | Engineering | 3,311 |
28,431 | https://en.wikipedia.org/wiki/Space%20exploration | Space exploration is the use of astronomy and space technology to explore outer space. While the exploration of space is currently carried out mainly by astronomers with telescopes, its physical exploration is conducted both by uncrewed robotic space probes and human spaceflight. Space exploration, like its classical form astronomy, is one of the main sources for space science.
While the observation of objects in space, known as astronomy, predates reliable recorded history, it was the development of large and relatively efficient rockets during the mid-twentieth century that allowed physical space exploration to become a reality. Common rationales for exploring space include advancing scientific research, national prestige, uniting different nations, ensuring the future survival of humanity, and developing military and strategic advantages against other countries.
The early era of space exploration was driven by a "Space Race" between the Soviet Union and the United States. A driving force behind the start of space exploration was the Cold War: once nuclear weapons could be built, the defense/offense narrative moved beyond land, and the power to control the air, and eventually space, became the focus. Both the Soviet Union and the U.S. were racing to prove their superiority in technology through exploring space. In fact, NASA was created as a response to Sputnik 1.
The launch of the first human-made object to orbit Earth, the Soviet Union's Sputnik 1, on 4 October 1957, and the first Moon landing by the American Apollo 11 mission on 20 July 1969 are often taken as landmarks for this initial period. The Soviet space program achieved many of the first milestones, including the first living being in orbit in 1957, the first human spaceflight (Yuri Gagarin aboard Vostok 1) in 1961, the first spacewalk (by Alexei Leonov) on 18 March 1965, the first automatic landing on another celestial body in 1966, and the launch of the first space station (Salyut 1) in 1971. After the first 20 years of exploration, focus shifted from one-off flights to renewable hardware, such as the Space Shuttle program, and from competition to cooperation as with the International Space Station (ISS).
With the substantial completion of the ISS following STS-133 in March 2011, plans for space exploration by the U.S. remained in flux. The Constellation program aiming for a return to the Moon by 2020 was judged unrealistic by an expert review panel reporting in 2009. Constellation ultimately was replaced with the Artemis Program, of which the first mission occurred in 2022, with a planned crewed landing to occur with Artemis III. The rise of the private space industry also began in earnest in the 2010s with the development of private launch vehicles, space capsules and satellite manufacturing.
In the 2000s, China initiated a successful crewed spaceflight program while India launched the Chandrayaan programme, while the European Union and Japan have also planned future crewed space missions. The two primary global programs gaining traction in the 2020s are the Chinese-led International Lunar Research Station and the US-led Artemis Program, with its plan to build the Lunar Gateway and the Artemis Base Camp, each having its own set of international partners.
History of exploration
First telescopes
The first telescope is said to have been invented in 1608 in the Netherlands by an eyeglass maker named Hans Lippershey, but the first recorded use of a telescope in astronomy was by Galileo Galilei in 1609. In 1668 Isaac Newton built his own reflecting telescope, the first fully functional telescope of this kind, and a landmark for future developments due to its superior features over the previous Galilean telescope.
A string of discoveries in the Solar System (and beyond) followed, then and in the next centuries: the mountains of the Moon, the phases of Venus, the main satellites of Jupiter and Saturn, the rings of Saturn, many comets, the asteroids, the new planets Uranus and Neptune, and many more satellites.
The Orbiting Astronomical Observatory 2 was the first space telescope, launched in 1968, but the launch of the Hubble Space Telescope in 1990 set a milestone. As of 1 December 2022, 5,284 confirmed exoplanets had been discovered. The Milky Way is estimated to contain 100–400 billion stars and more than 100 billion planets. There are at least 2 trillion galaxies in the observable universe. HD1 is the most distant known object from Earth, reported as 33.4 billion light-years away.
First outer space flights
MW 18014 was a German V-2 rocket test launch that took place on 20 June 1944 at the Peenemünde Army Research Center. It was the first human-made object to reach outer space, attaining an apogee of 176 kilometers, well above the Kármán line. It was a vertical test launch; although the rocket reached space, it did not reach orbital velocity and therefore fell back to Earth in an impact, becoming the first sub-orbital spaceflight. In 1949, the Bumper-WAC reached an altitude of roughly 400 kilometers, which NASA counts as the first human-made object to enter space.
First object in orbit
The first successful orbital launch was of the Soviet uncrewed Sputnik 1 ("Satellite 1") mission on 4 October 1957. The satellite weighed about 83.6 kilograms and is believed to have orbited Earth at a height of about 250 kilometers. It had two radio transmitters (20 and 40 MHz), which emitted "beeps" that could be heard by radios around the globe. Analysis of the radio signals was used to gather information about the electron density of the ionosphere, while temperature and pressure data was encoded in the duration of radio beeps. The results indicated that the satellite was not punctured by a meteoroid. Sputnik 1 was launched by an R-7 rocket. It burned up upon re-entry on 3 January 1958.
First human outer space flight
The first successful human spaceflight was Vostok 1 ("East 1"), carrying the 27-year-old Russian cosmonaut, Yuri Gagarin, on 12 April 1961. The spacecraft completed one orbit around the globe, lasting about 1 hour and 48 minutes. Gagarin's flight resonated around the world; it was a demonstration of the advanced Soviet space program and it opened an entirely new era in space exploration: human spaceflight.
First astronomical body space explorations
The first artificial object to reach another celestial body was Luna 2 reaching the Moon in 1959. The first soft landing on another celestial body was performed by Luna 9 landing on the Moon on 3 February 1966. Luna 10 became the first artificial satellite of the Moon, entering in a lunar orbit on 3 April 1966.
The first crewed landing on another celestial body was performed by Apollo 11 on 20 July 1969, landing on the Moon. A total of six crewed spacecraft landed humans on the Moon, beginning in 1969 and ending with the last human landing in 1972.
The first interplanetary flyby was the 1961 Venera 1 flyby of Venus, though the 1962 Mariner 2 was the first flyby of Venus to return data (closest approach 34,773 kilometers). Pioneer 6 was the first satellite to orbit the Sun, launched on 16 December 1965. The other planets were first flown by in 1965 for Mars by Mariner 4, 1973 for Jupiter by Pioneer 10, 1974 for Mercury by Mariner 10, 1979 for Saturn by Pioneer 11, 1986 for Uranus by Voyager 2, 1989 for Neptune by Voyager 2. In 2015, the dwarf planets Ceres and Pluto were orbited by Dawn and passed by New Horizons, respectively. This accounts for flybys of each of the eight planets in the Solar System, the Sun, the Moon, and Ceres and Pluto (two of the five recognized dwarf planets).
The first interplanetary surface mission to return at least limited surface data from another planet was the 1970 landing of Venera 7, which returned data to Earth for 23 minutes from Venus. In 1975, Venera 9 was the first to return images from the surface of another planet, returning images from Venus. In 1971, the Mars 3 mission achieved the first soft landing on Mars, returning data for almost 20 seconds. Later, much longer duration surface missions were achieved, including over six years of Mars surface operation by the Viking 1 lander from 1976 to 1982 and over two hours of transmission from the surface of Venus by Venera 13 in 1982, the longest ever Soviet planetary surface mission. Venus and Mars are the two planets outside of Earth on which humans have conducted surface missions with uncrewed robotic spacecraft.
First space station
Salyut 1 was the first space station of any kind, launched into low Earth orbit by the Soviet Union on 19 April 1971. The International Space Station (ISS) is the larger and older of the two currently fully functional space stations, inhabited continuously since the year 2000. The other, the Tiangong space station built by China, is now fully crewed and operational.
First interstellar space flight
Voyager 1 became the first human-made object to leave the Solar System into interstellar space on 25 August 2012. The probe passed the heliopause at 121 AU to enter interstellar space.
Farthest from Earth
The Apollo 13 flight passed the far side of the Moon at an altitude of about 254 kilometers above the lunar surface and 400,171 km (248,655 mi) from Earth, setting in 1970 the record for the farthest humans have ever traveled from Earth.
Voyager 1 is the most distant human-made object from Earth, more than 160 astronomical units (about 24 billion kilometers) away.
Targets of exploration
Starting in the mid-20th century probes and then human missions were sent into Earth orbit, and then on to the Moon. Probes were also sent throughout the known Solar System and into orbit around the Sun. Uncrewed spacecraft had been sent into orbit around Saturn, Jupiter, Mars, Venus, and Mercury by the 21st century, and the most distant active spacecraft, Voyager 1 and Voyager 2, have traveled beyond 100 times the Earth–Sun distance. Their instruments remained functional long enough to indicate that both probes have left the Sun's heliosphere, a sort of bubble of particles blown into the galaxy by the Sun's solar wind.
The Sun
The Sun is a major focus of space exploration. Being above the atmosphere in particular and Earth's magnetic field gives access to the solar wind and infrared and ultraviolet radiations that cannot reach Earth's surface. The Sun generates most space weather, which can affect power generation and transmission systems on Earth and interfere with, and even damage, satellites and space probes. Numerous spacecraft dedicated to observing the Sun, beginning with the Apollo Telescope Mount, have been launched and still others have had solar observation as a secondary objective. Parker Solar Probe, launched in 2018, will approach the Sun to within 1/9th the orbit of Mercury.
Mercury
Mercury remains the least explored of the terrestrial planets. As of May 2013, the Mariner 10 and MESSENGER missions had been the only missions to make close observations of Mercury. MESSENGER entered orbit around Mercury in March 2011 to further investigate the observations made by Mariner 10 in 1975 (Munsell, 2006b). A third mission to Mercury, BepiColombo, a joint mission between Japan and the European Space Agency that includes two probes, is scheduled to arrive in 2025. MESSENGER and BepiColombo are intended to gather complementary data to help scientists understand many of the mysteries discovered by Mariner 10's flybys.
Flights to other planets within the Solar System are accomplished at a cost in energy, which is described by the net change in velocity of the spacecraft, or delta-v. Due to the relatively high delta-v to reach Mercury and its proximity to the Sun, it is difficult to explore and orbits around it are rather unstable.
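To make the idea of delta-v concrete, the sketch below estimates the two-impulse (Hohmann) transfer delta-v between idealized circular, coplanar heliocentric orbits. It ignores planetary gravity wells and the launch from Earth's surface, so the printed numbers only illustrate why Mercury is comparatively expensive to reach; the function name and the sample comparison are illustrative assumptions, not figures from any mission.

```python
import math

MU_SUN = 1.32712440018e20  # Sun's gravitational parameter, m^3/s^2
AU = 1.495978707e11        # astronomical unit, m

def hohmann_delta_v(r1, r2, mu=MU_SUN):
    """Total delta-v (m/s) for an idealized two-impulse Hohmann transfer
    between circular, coplanar orbits of radii r1 and r2."""
    v1 = math.sqrt(mu / r1)                      # circular speed at the departure orbit
    v2 = math.sqrt(mu / r2)                      # circular speed at the arrival orbit
    a_t = (r1 + r2) / 2                          # semi-major axis of the transfer ellipse
    v_dep = math.sqrt(mu * (2 / r1 - 1 / a_t))   # transfer-ellipse speed at r1
    v_arr = math.sqrt(mu * (2 / r2 - 1 / a_t))   # transfer-ellipse speed at r2
    return abs(v_dep - v1) + abs(v2 - v_arr)

# Illustrative comparison: Earth (1 AU) to Mercury (~0.387 AU) vs. Earth to Mars (~1.524 AU)
print(round(hohmann_delta_v(1.0 * AU, 0.387 * AU) / 1000, 1), "km/s to Mercury's orbit")
print(round(hohmann_delta_v(1.0 * AU, 1.524 * AU) / 1000, 1), "km/s to Mars's orbit")
```

Under these simplifying assumptions the heliocentric delta-v to Mercury's orbit comes out roughly three times that to Mars's orbit, which is the sense in which Mercury is "hard to reach" despite being a nearby planet.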
Venus
Venus was the first target of interplanetary flyby and lander missions and, despite one of the most hostile surface environments in the Solar System, has had more landers sent to it (nearly all from the Soviet Union) than any other planet in the Solar System. The first flyby was the 1961 Venera 1, though the 1962 Mariner 2 was the first flyby to successfully return data. Mariner 2 has been followed by several other flybys by multiple space agencies often as part of missions using a Venus flyby to provide a gravitational assist en route to other celestial bodies. In 1967, Venera 4 became the first probe to enter and directly examine the atmosphere of Venus. In 1970, Venera 7 became the first successful lander to reach the surface of Venus and by 1985 it had been followed by eight additional successful Soviet Venus landers which provided images and other direct surface data. Starting in 1975, with the Soviet orbiter Venera 9, some ten successful orbiter missions have been sent to Venus, including later missions which were able to map the surface of Venus using radar to pierce the obscuring atmosphere.
Earth
Space exploration has been used as a tool to understand Earth as a celestial object. Orbital missions can provide data for Earth that can be difficult or impossible to obtain from a purely ground-based point of reference.
For example, the existence of the Van Allen radiation belts was unknown until their discovery by the United States' first artificial satellite, Explorer 1. These belts contain radiation trapped by Earth's magnetic fields, which currently renders construction of habitable space stations above 1000 km impractical. Following this early unexpected discovery, a large number of Earth observation satellites have been deployed specifically to explore Earth from a space-based perspective. These satellites have significantly contributed to the understanding of a variety of Earth-based phenomena. For instance, the hole in the ozone layer was found by an artificial satellite that was exploring Earth's atmosphere, and satellites have allowed for the discovery of archeological sites or geological formations that were difficult or impossible to otherwise identify.
Moon
The Moon was the first celestial body to be the object of space exploration. It holds the distinctions of being the first remote celestial object to be flown by, orbited, and landed upon by spacecraft, and the only remote celestial object ever to be visited by humans.
In 1959, the Soviets obtained the first images of the far side of the Moon, never previously visible to humans. The U.S. exploration of the Moon began with the Ranger 4 impactor in 1962. Starting in 1966, the Soviets successfully deployed a number of landers to the Moon which were able to obtain data directly from the Moon's surface; just four months later, Surveyor 1 marked the debut of a successful series of U.S. landers. The Soviet uncrewed missions culminated in the Lunokhod program in the early 1970s, which included the first uncrewed rovers and also successfully brought lunar soil samples to Earth for study. This marked the first (and to date the only) automated return of extraterrestrial soil samples to Earth. Uncrewed exploration of the Moon continues with various nations periodically deploying lunar orbiters. China's Chang'e 4 in 2019 and Chang'e 6 in 2024 achieved the world's first landing and sample return on the far side of the Moon. India's Chandrayaan-3 in 2023 achieved the world's first landing on the lunar south pole region.
Crewed exploration of the Moon began in 1968 with the Apollo 8 mission that successfully orbited the Moon, the first time any extraterrestrial object was orbited by humans. In 1969, the Apollo 11 mission marked the first time humans set foot upon another world. Crewed exploration of the Moon did not continue for long; the Apollo 17 mission in 1972 marked the sixth landing and the most recent human visit. Artemis II is scheduled to complete a crewed flyby of the Moon in 2025, and Artemis III, scheduled for launch no earlier than 2026, will perform the first lunar landing since Apollo 17. Robotic missions are still pursued vigorously.
Mars
The exploration of Mars has been an important part of the space exploration programs of the Soviet Union (later Russia), the United States, Europe, Japan and India. Dozens of robotic spacecraft, including orbiters, landers, and rovers, have been launched toward Mars since the 1960s. These missions were aimed at gathering data about current conditions and answering questions about the history of Mars. The questions raised by the scientific community are expected to not only give a better appreciation of the Red Planet but also yield further insight into the past, and possible future, of Earth.
The exploration of Mars has come at a considerable financial cost, with roughly two-thirds of all spacecraft destined for Mars failing before completing their missions, and some failing before they even began. Such a high failure rate can be attributed to the complexity and large number of variables involved in an interplanetary journey, and has led researchers to jokingly speak of The Great Galactic Ghoul, which subsists on a diet of Mars probes. This phenomenon is also informally known as the "Mars Curse". In contrast to the overall high failure rate in the exploration of Mars, India became the first country to succeed on its maiden attempt. India's Mars Orbiter Mission (MOM) is one of the least expensive interplanetary missions ever undertaken, with an approximate total cost of ₹450 crore. The first mission to Mars by any Arab country was undertaken by the United Arab Emirates. Called the Emirates Mars Mission, it was launched on 19 July 2020 and went into orbit around Mars on 9 February 2021. The uncrewed exploratory probe was named the "Hope Probe" and was sent to Mars to study its atmosphere in detail.
Phobos
The Russian space mission Fobos-Grunt, launched on 9 November 2011, experienced a failure that left it stranded in low Earth orbit. It was intended to explore Phobos from Martian orbit and to study whether the moons of Mars, or at least Phobos, could serve as a "trans-shipment point" for spaceships traveling to Mars.
Asteroids
Until the advent of space travel, objects in the asteroid belt were merely pinpricks of light in even the largest telescopes, their shapes and terrain remaining a mystery. Several asteroids have now been visited by probes, the first of which was Galileo, which flew past two: 951 Gaspra in 1991, followed by 243 Ida in 1993. Both of these lay near enough to Galileo's planned trajectory to Jupiter that they could be visited at acceptable cost. The first landing on an asteroid was performed by the NEAR Shoemaker probe in 2000, following an orbital survey of the object, 433 Eros. The dwarf planet Ceres and the asteroid 4 Vesta, two of the three largest asteroids, were visited by NASA's Dawn spacecraft, launched in 2007.
Hayabusa was a robotic spacecraft developed by the Japan Aerospace Exploration Agency to return a sample of material from the small near-Earth asteroid 25143 Itokawa to Earth for further analysis. Hayabusa was launched on 9 May 2003 and rendezvoused with Itokawa in mid-September 2005. After arriving at Itokawa, Hayabusa studied the asteroid's shape, spin, topography, color, composition, density, and history. In November 2005, it landed on the asteroid twice to collect samples. The spacecraft returned to Earth on 13 June 2010.
Jupiter
The exploration of Jupiter has consisted solely of a number of automated NASA spacecraft visiting the planet since 1973. A large majority of the missions have been "flybys", in which detailed observations are taken without the probe landing or entering orbit; such as in Pioneer and Voyager programs. The Galileo and Juno spacecraft are the only spacecraft to have entered the planet's orbit. As Jupiter is believed to have only a relatively small rocky core and no real solid surface, a landing mission is precluded.
Reaching Jupiter from Earth requires a delta-v of 9.2 km/s, which is comparable to the 9.7 km/s delta-v needed to reach low Earth orbit. Fortunately, gravity assists through planetary flybys can be used to reduce the energy required at launch to reach Jupiter, albeit at the cost of a significantly longer flight duration.
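As a rough illustration of why a gravity assist helps, the snippet below computes the heliocentric speed gained in an idealized planar flyby, in which the spacecraft's velocity relative to the planet keeps its magnitude but is rotated by a turning angle. The geometry (approach antiparallel to the planet's motion), the planet speed and the turning angle are arbitrary illustrative values, not parameters of any real mission.

```python
import math

def flyby_speed_gain(v_rel, v_planet, turn_angle_deg):
    """Heliocentric speed gained (km/s) in an idealized planar gravity assist.

    v_rel    : spacecraft speed relative to the planet (km/s), assumed to arrive
               antiparallel to the planet's direction of motion
    v_planet : planet's heliocentric orbital speed (km/s)
    The encounter rotates the relative velocity by turn_angle_deg without
    changing its magnitude; the planet's own velocity is then added back.
    """
    theta = math.radians(turn_angle_deg)
    vx = -v_rel * math.cos(theta)          # outgoing relative velocity, along planet's motion
    vy = v_rel * math.sin(theta)           # outgoing relative velocity, perpendicular component
    v_out = math.hypot(v_planet + vx, vy)  # outgoing heliocentric speed
    v_in = abs(v_planet - v_rel)           # incoming heliocentric speed for this geometry
    return v_out - v_in

# Illustrative numbers only: 10 km/s relative approach, a Jupiter-like 13 km/s planet, 120 degree turn
print(round(flyby_speed_gain(10.0, 13.0, 120.0), 2), "km/s gained")
```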
Jupiter has 95 known moons, many of which have relatively little known information about them.
Saturn
Saturn has been explored only through uncrewed spacecraft launched by NASA, including one mission (Cassini–Huygens) planned and executed in cooperation with other space agencies. These missions consist of flybys in 1979 by Pioneer 11, in 1980 by Voyager 1, in 1982 by Voyager 2 and an orbital mission by the Cassini spacecraft, which lasted from 2004 until 2017.
Saturn has at least 62 known moons, although the exact number is debatable since Saturn's rings are made up of vast numbers of independently orbiting objects of varying sizes. The largest of the moons is Titan, which holds the distinction of being the only moon in the Solar System with an atmosphere denser and thicker than that of Earth. Titan holds the distinction of being the only object in the Outer Solar System that has been explored with a lander, the Huygens probe deployed by the Cassini spacecraft.
Uranus
The exploration of Uranus has been entirely through the Voyager 2 spacecraft, with no other visits currently planned. Given its axial tilt of 97.77°, with its polar regions exposed to sunlight or darkness for long periods, scientists were not sure what to expect at Uranus. The closest approach to Uranus occurred on 24 January 1986. Voyager 2 studied the planet's unique atmosphere and magnetosphere. Voyager 2 also examined its ring system and the moons of Uranus including all five of the previously known moons, while discovering an additional ten previously unknown moons.
Images of Uranus proved to have a uniform appearance, with no evidence of the dramatic storms or atmospheric banding evident on Jupiter and Saturn. Great effort was required to even identify a few clouds in the images of the planet. The magnetosphere of Uranus, however, proved to be unique, being profoundly affected by the planet's unusual axial tilt. In contrast to the bland appearance of Uranus itself, striking images were obtained of the Moons of Uranus, including evidence that Miranda had been unusually geologically active.
Neptune
The exploration of Neptune began with the 25 August 1989 Voyager 2 flyby, which remains the sole visit to the system. The possibility of a Neptune orbiter has been discussed, but no other missions have been given serious thought.
Although the extremely uniform appearance of Uranus during Voyager 2's visit in 1986 had led to expectations that Neptune would also have few visible atmospheric phenomena, the spacecraft found that Neptune had obvious banding, visible clouds, auroras, and even a conspicuous anticyclonic storm system rivaled in size only by Jupiter's Great Red Spot. Neptune also proved to have the fastest winds of any planet in the Solar System, measured as high as 2,100 km/h. Voyager 2 also examined Neptune's ring and moon system, finding faint complete rings as well as partial ring "arcs" around Neptune. In addition to examining Neptune's three previously known moons, Voyager 2 discovered five previously unknown moons, one of which, Proteus, proved to be the second-largest moon in the system. Data from Voyager 2 supported the view that Neptune's largest moon, Triton, is a captured Kuiper belt object.
Pluto
The dwarf planet Pluto presents significant challenges for spacecraft because of its great distance from Earth (requiring high velocity for reasonable trip times) and small mass (making capture into orbit difficult at present). Voyager 1 could have visited Pluto, but controllers opted instead for a close flyby of Saturn's moon Titan, resulting in a trajectory incompatible with a Pluto flyby. Voyager 2 never had a plausible trajectory for reaching Pluto.
After an intense political battle, a mission to Pluto dubbed New Horizons was granted funding from the United States government in 2003. New Horizons was launched successfully on 19 January 2006. In early 2007 the craft made use of a gravity assist from Jupiter. Its closest approach to Pluto was on 14 July 2015; scientific observations of Pluto began five months prior to closest approach and continued for 16 days after the encounter.
Kuiper Belt Objects
The New Horizons mission also performed a flyby of the small planetesimal Arrokoth, in the Kuiper belt, in 2019. This was its first extended mission.
Comets
Although many comets have been studied from Earth sometimes with centuries-worth of observations, only a few comets have been closely visited. In 1985, the International Cometary Explorer conducted the first comet fly-by (21P/Giacobini-Zinner) before joining the Halley Armada studying the famous comet. The Deep Impact probe smashed into 9P/Tempel to learn more about its structure and composition and the Stardust mission returned samples of another comet's tail. The Philae lander successfully landed on Comet Churyumov–Gerasimenko in 2014 as part of the broader Rosetta mission.
Deep space exploration
Deep space exploration is the branch of astronomy, astronautics and space technology that is involved with the exploration of distant regions of outer space. Physical exploration of space is conducted both by human spaceflights (deep-space astronautics) and by robotic spacecraft.
Some of the best candidates for future deep space engine technologies include antimatter, nuclear power and beamed propulsion. Beamed propulsion appears to be the best candidate for deep space exploration presently available, since it uses known physics and known technology that is being developed for other purposes.
Future of space exploration
Breakthrough Starshot
Breakthrough Starshot is a research and engineering project by the Breakthrough Initiatives to develop a proof-of-concept fleet of light sail spacecraft named StarChip, to be capable of making the journey to the Alpha Centauri star system 4.37 light-years away. It was founded in 2016 by Yuri Milner, Stephen Hawking, and Mark Zuckerberg.
Asteroids
An article in the science magazine Nature suggested the use of asteroids as a gateway for space exploration, with the ultimate destination being Mars. In order to make such an approach viable, three requirements need to be fulfilled: first, "a thorough asteroid survey to find thousands of nearby bodies suitable for astronauts to visit"; second, "extending flight duration and distance capability to ever-increasing ranges out to Mars"; and finally, "developing better robotic vehicles and tools to enable astronauts to explore an asteroid regardless of its size, shape or spin". Furthermore, using asteroids would provide astronauts with protection from galactic cosmic rays, with mission crews being able to land on them without great risk to radiation exposure.
Artemis program
The Artemis program is an ongoing crewed spaceflight program carried out by NASA, U.S. commercial spaceflight companies, and international partners such as ESA, with the goal of landing "the first woman and the next man" on the Moon, specifically at the lunar south pole region. Artemis would be the next step towards the long-term goal of establishing a sustainable presence on the Moon, laying the foundation for private companies to build a lunar economy, and eventually sending humans to Mars.
In 2017, the lunar campaign was authorized by Space Policy Directive 1, using various ongoing spacecraft programs such as Orion, the Lunar Gateway, Commercial Lunar Payload Services, and adding an undeveloped crewed lander. The Space Launch System will serve as the primary launch vehicle for Orion, while commercial launch vehicles are planned for use to launch other elements of the campaign. NASA requested $1.6 billion in additional funding for Artemis for fiscal year 2020, while the U.S. Senate Appropriations Committee requested from NASA a five-year budget profile, which is needed for evaluation and approval by the U.S. Congress. As of 2024, the first Artemis mission had been launched in 2022, with the second mission, a crewed lunar flyby, planned for 2025. Construction on the Lunar Gateway is underway, with initial capabilities set for the 2025–2027 timeframe. The first CLPS lander landed in 2024, marking the first US spacecraft to land on the Moon since Apollo 17.
Rationales
The research that is conducted by national space exploration agencies, such as NASA and Roscosmos, is one of the reasons supporters cite to justify government expenses. Economic analyses of the NASA programs often showed ongoing economic benefits (such as NASA spin-offs), generating many times the revenue of the cost of the program. It is also argued that space exploration would lead to the extraction of resources on other planets and especially asteroids, which contain billions of dollars worth of minerals and metals. Such expeditions could generate substantial revenue. In addition, it has been argued that space exploration programs help inspire youth to study in science and engineering. Space exploration also gives scientists the ability to perform experiments in other settings and expand humanity's knowledge.
Another claim is that space exploration is a necessity to humankind and that staying on Earth will eventually lead to extinction. Some of the reasons are lack of natural resources, comets, nuclear war, and worldwide epidemic. Stephen Hawking, renowned British theoretical physicist, said, "I don't think the human race will survive the next thousand years, unless we spread into space. There are too many accidents that can befall life on a single planet. But I'm an optimist. We will reach out to the stars." Author Arthur C. Clarke (1950) presented a summary of motivations for the human exploration of space in his non-fiction semi-technical monograph Interplanetary Flight. He argued that humanity's choice is essentially between expansion off Earth into space, versus cultural (and eventually biological) stagnation and death.
These motivations could be attributed to one of the first rocket scientists in NASA, Wernher von Braun, and his vision of humans moving beyond Earth. The basis of this plan was to:
Develop multi-stage rockets capable of placing satellites, animals, and humans in space.
Development of large, winged reusable spacecraft capable of carrying humans and equipment into Earth orbit in a way that made space access routine and cost-effective.
Construction of a large, permanently occupied space station to be used as a platform both to observe Earth and from which to launch deep space expeditions.
Launching the first human flights around the Moon, leading to the first landings of humans on the Moon, with the intent of exploring that body and establishing permanent lunar bases.
Assembly and fueling of spaceships in Earth orbit for the purpose of sending humans to Mars with the intent of eventually colonizing that planet.
Known as the Von Braun Paradigm, the plan was formulated to lead humans in the exploration of space. Von Braun's vision of human space exploration served as the model for efforts in space exploration well into the twenty-first century, with NASA incorporating this approach into the majority of its projects. The steps were followed out of order, as seen by the Apollo program reaching the Moon before the Space Shuttle program was started, which in turn was used to complete the International Space Station. Von Braun's paradigm formed NASA's drive for human exploration, in the hope that humans discover the far reaches of the universe.
NASA has produced a series of public service announcement videos supporting the concept of space exploration.
Overall, the U.S. public remains largely supportive of both crewed and uncrewed space exploration. According to an Associated Press Poll conducted in July 2003, 71% of U.S. citizens agreed with the statement that the space program is "a good investment", compared to 21% who did not.
Human nature
Space advocacy and space policy regularly invoke exploration as part of human nature.
Topics
Spaceflight
Spaceflight is the use of space technology to achieve the flight of spacecraft into and through outer space.
Spaceflight is used in space exploration, and also in commercial activities like space tourism and satellite telecommunications. Additional non-commercial uses of spaceflight include space observatories, reconnaissance satellites and other Earth observation satellites.
A spaceflight typically begins with a rocket launch, which provides the initial thrust to overcome the force of gravity and propels the spacecraft from the surface of Earth. Once in space, the motion of a spacecraft—both when unpropelled and when under propulsion—is covered by the area of study called astrodynamics. Some spacecraft remain in space indefinitely, some disintegrate during atmospheric reentry, and others reach a planetary or lunar surface for landing or impact.
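A minimal illustration of the kind of calculation astrodynamics covers: the vis-viva equation gives a spacecraft's speed at any point of an unpropelled orbit from its distance r to the central body and the orbit's semi-major axis a. The function and the sample altitudes below are illustrative, not tied to any specific mission.

```python
import math

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6          # mean Earth radius, m

def vis_viva_speed(r, a, mu=MU_EARTH):
    """Orbital speed (m/s) at radius r for an orbit of semi-major axis a (vis-viva equation)."""
    return math.sqrt(mu * (2.0 / r - 1.0 / a))

# Example: a circular orbit at 400 km altitude (roughly ISS-like, for illustration)
r_low = R_EARTH + 400e3
print(round(vis_viva_speed(r_low, r_low) / 1000, 2), "km/s circular orbital speed")

# Speed at perigee of an elliptical 400 km x 35,786 km orbit (a geostationary-transfer-type ellipse)
a_transfer = (2 * R_EARTH + 400e3 + 35786e3) / 2
print(round(vis_viva_speed(r_low, a_transfer) / 1000, 2), "km/s at perigee of the transfer ellipse")
```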
Satellites
Satellites are used for a large number of purposes. Common types include military (spy) and civilian Earth observation satellites, communication satellites, navigation satellites, weather satellites, and research satellites. Space stations and human spacecraft in orbit are also satellites.
Commercialization of space
The commercialization of space first started out with the launching of private satellites by NASA or other space agencies. Current examples of the commercial satellite use of space include satellite navigation systems, satellite television, satellite communications (such as internet services) and satellite radio. The next step of commercialization of space was seen as human spaceflight. Flying humans safely to and from space had become routine to NASA and Russia. Reusable spacecraft were an entirely new engineering challenge, something only seen in novels and films like Star Trek and War of the Worlds. Astronaut Buzz Aldrin supported the use of making a reusable vehicle like the space shuttle. Aldrin held that reusable spacecraft were the key in making space travel affordable, stating that the use of "passenger space travel is a huge potential market big enough to justify the creation of reusable launch vehicles". Space tourism is a next step in the use of reusable vehicles in the commercialization of space. The purpose of this form of space travel is personal pleasure.
Private spaceflight companies such as SpaceX and Blue Origin, and commercial space stations such as the Axiom Space and the Bigelow Commercial Space Station have changed the cost and overall landscape of space exploration, and are expected to continue to do so in the near future.
Alien life
Astrobiology is the interdisciplinary study of life in the universe, combining aspects of astronomy, biology and geology. It is focused primarily on the study of the origin, distribution and evolution of life. It is also known as exobiology (from Greek: έξω, exo, "outside"). The term "Xenobiology" has been used as well, but this is technically incorrect because its terminology means "biology of the foreigners". Astrobiologists must also consider the possibility of life that is chemically entirely distinct from any life found on Earth. In the Solar System, some of the prime locations for current or past astrobiology are on Enceladus, Europa, Mars, and Titan.
Human spaceflight and habitation
To date, the longest human occupation of space is the International Space Station, which has been in continuous use for more than two decades. Valeri Polyakov's record single spaceflight of almost 438 days aboard the Mir space station has not been surpassed. The health effects of space have been well documented through years of research conducted in the field of aerospace medicine. Analog environments similar to those experienced in space travel (like deep sea submarines) have been used in this research to further explore the relationship between isolation and extreme environments. It is imperative that the health of the crew be maintained, as any deviation from baseline may compromise the integrity of the mission as well as the safety of the crew; hence astronauts must endure rigorous medical screenings and tests prior to embarking on any missions. However, it does not take long for the environmental dynamics of spaceflight to begin taking a toll on the human body; for example, space motion sickness (SMS) – a condition which affects the neurovestibular system and culminates in mild to severe signs and symptoms such as vertigo, dizziness, fatigue, nausea, and disorientation – plagues almost all space travelers within their first few days in orbit. Space travel can also have an impact on the psyche of the crew members, as delineated in anecdotal writings composed after their retirement. Space travel can adversely affect the body's natural biological clock (circadian rhythm); sleep patterns, causing sleep deprivation and fatigue; and social interaction; consequently, residing in a low Earth orbit (LEO) environment for a prolonged amount of time can result in both mental and physical exhaustion. Long-term stays in space reveal issues with bone and muscle loss in low gravity, immune system suppression, problems with eyesight, and radiation exposure. The lack of gravity causes fluid to rise upward, which can cause pressure to build up in the eye, resulting in vision problems; the loss of bone minerals and densities; cardiovascular deconditioning; and decreased endurance and muscle mass.
Radiation is an insidious health hazard to space travelers as it is invisible and can cause cancer. When above the Earth's magnetic field, spacecraft are no longer protected from the sun's radiation; the danger of radiation is even more potent in deep space. The hazards of radiation can be ameliorated through protective shielding on the spacecraft, alerts, and dosimetry.
Fortunately, with new and rapidly evolving technological advancements, those in Mission Control are able to monitor the health of their astronauts more closely using telemedicine. One may not be able to completely evade the physiological effects of space flight, but those effects can be mitigated. For example, medical systems aboard space vessels such as the International Space Station (ISS) are well equipped and designed to counteract the effects of lack of gravity and weightlessness; on-board treadmills can help prevent muscle loss and reduce the risk of developing premature osteoporosis. Additionally, a crew medical officer is appointed for each ISS mission and a flight surgeon is available 24/7 via the ISS Mission Control Center located in Houston, Texas. Although the interactions are intended to take place in real time, communications between the space and terrestrial crew may become delayed – sometimes by as much as 20 minutes – as their distance from each other increases when the spacecraft moves further out of low Earth orbit; because of this the crew are trained and need to be prepared to respond to any medical emergencies that may arise on the vessel as the ground crew are hundreds of miles away.
Many past and current concepts for the continued exploration and colonization of space focus on a return to the Moon as a "steppingstone" to the other planets, especially Mars. At the end of 2006, NASA announced they were planning to build a permanent Moon base with continual presence by 2024.
Beyond the technical factors that could make living in space more widespread, it has been suggested that the lack of private property, and the inability or difficulty in establishing property rights in space, has been an impediment to the development of space for human habitation. Since the advent of space technology in the latter half of the twentieth century, the ownership of property in space has been murky, with strong arguments both for and against. In particular, the making of national territorial claims in outer space and on celestial bodies has been specifically proscribed by the Outer Space Treaty, which has been ratified by all spacefaring nations. Space colonization, also called space settlement and space humanization, would be the permanent autonomous (self-sufficient) human habitation of locations outside Earth, especially of natural satellites or planets such as the Moon or Mars, using significant amounts of in-situ resource utilization.
Human representation and participation
Participation in and representation of humanity in space has been an issue since the first phase of space exploration. Some rights of non-spacefaring countries have been secured through international space law, which declares space the "province of all mankind" and treats spaceflight as its resource, though the sharing of space with all humanity is still criticized as imperialist and lacking. In addition to international inclusion, the inclusion of women and people of colour has also been lacking. To make spaceflight more inclusive, some organizations like the Justspace Alliance and IAU featured Inclusive Astronomy have been formed in recent years.
Women
The first woman to go to space was Valentina Tereshkova. She flew in 1963, but it was not until the 1980s that another woman entered space. All astronauts at the time were required to be military test pilots, a career women could not enter, which is one reason for the delay in allowing women to join space crews. After the rule changed, Svetlana Savitskaya, also from the Soviet Union, became the second woman to go to space. Sally Ride became the next woman in space and the first woman to fly to space through the United States program.
Since then, eleven other countries have flown women astronauts. The first all-female spacewalk was carried out in 2019 by Christina Koch and Jessica Meir, both of whom had previously participated in spacewalks with NASA. The first woman to go to the Moon is planned for 2026.
Despite these developments, women are underrepresented among astronauts and especially cosmonauts. Issues that block potential applicants from the programs, and limit the space missions they are able to go on, include:
agencies limiting women to half as much time in space as men, arguing that there may be unresearched additional risks for cancer.
a lack of space suits sized appropriately for female astronauts.
Art
Artistry in and from space ranges from signals, capturing and arranging material like Yuri Gagarin's selfie in space or the image The Blue Marble, over drawings like the first one in space by cosmonaut and artist Alexei Leonov, music videos like Chris Hadfield's cover of Space Oddity on board the ISS, to permanent installations on celestial bodies like on the Moon.
See also
Discovery and exploration of the Solar System
Spacecraft propulsion
List of crewed spacecraft
List of missions to Mars
List of missions to the outer planets
List of landings on extraterrestrial bodies
List of spaceflight records
Robotic space exploration programs
Robotic spacecraft
Timeline of planetary exploration
Landings on other planets
Pioneer program
Luna program
Zond program
Venera program
Mars probe program
Ranger program
Mariner program
Surveyor program
Viking program
Voyager program
Vega program
Phobos program
Discovery program
Chandrayaan Program
Mangalyaan Program
Chang'e Program
Private Astrobotic Technology Program
Living in space
Interplanetary contamination
Animals in space
Monkeys in space
Russian space dogs
Humans in space
Astronauts
List of human spaceflights
List of human spaceflights by program
Vostok program
Mercury program
Voskhod program
Gemini program
Soyuz program
Apollo program
Salyut program
Skylab
Space Shuttle program
Mir
International Space Station
Vision for Space Exploration
Aurora Programme
Tier One
Effect of spaceflight on the human body
Space architecture
Space archaeology
flexible path destinations set
Recent and future developments
Commercial astronauts
Artemis program
Energy development
Exploration of Mars
Space tourism
Private spaceflight
Space colonization
Interstellar spaceflight
Deep space exploration
Human outpost
Mars to Stay
NewSpace
NASA lunar outpost concepts
Other
List of spaceflights
Timeline of Solar System exploration
List of artificial objects on extra-terrestrial surfaces
Space station
Space telescope
Sample return mission
Atmospheric reentry
Space and survival
List of spaceflight-related accidents and incidents
Religion in space
Militarisation of space
French space program
Russian explorers
U.S. space exploration history on U.S. stamps
Deep-sea exploration
Arctic exploration
Criticism of space exploration
References
Further reading
An overview of the history of space exploration and predictions for the future.
External links
Building a Spacefaring Civilization,
Chronology of space exploration, astrobiology, exoplanets and news.
Space related news
Space Exploration Network
NASA's website on human space travel
NASA's website on space exploration technology.
"America's Space Program: Exploring a New Frontier", a National Park Service Teaching with Historic Places (TwHP) lesson plan
The Soviet-Russian Spaceflight's History Photoarchive
The 21 Greatest Space Photos Ever – slideshow by Life Magazine
"From Stargazers to Starships", extensive educational web site and course covering spaceflight, astronomy and related physics
We Are The Explorers, NASA Promotional Video (Press Release. )
Recent Advancement in Space technology and satellite technology 2024.
Astropolitics
Exploration
Solar System | Space exploration | Astronomy | 8,929 |
11,945,645 | https://en.wikipedia.org/wiki/2%CF%80%20theorem | In mathematics, the 2π theorem of Gromov and Thurston states a sufficient condition for Dehn filling on a cusped hyperbolic 3-manifold to result in a negatively curved 3-manifold.
Let M be a cusped hyperbolic 3-manifold. Disjoint horoball neighborhoods of each cusp can be selected. The boundaries of these neighborhoods are quotients of horospheres and thus have Euclidean metrics. A slope, i.e. an unoriented isotopy class of simple closed curves on these boundaries, thus has a well-defined length, obtained by taking the minimal Euclidean length over all curves in the isotopy class. The theorem states: a Dehn filling of M with each filling slope of length greater than 2π results in a 3-manifold with a complete metric of negative sectional curvature. In fact, this metric can be selected to be identical to the original hyperbolic metric outside the horoball neighborhoods.
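For readers who prefer symbols, the filling criterion can be restated schematically as follows; the labels M, c_i and s_i and the length function ℓ are notation introduced here for illustration rather than taken from the original statement.

```latex
% Illustrative restatement; M, c_i, s_i and \ell are labels introduced for this sketch.
\text{Let } M \text{ be a cusped hyperbolic 3-manifold with cusps } c_1,\dots,c_k,
\text{ and let } s_i \text{ be a slope on the horospherical torus bounding the neighborhood of } c_i.
\text{If } \ell(s_i) > 2\pi \text{ for every } i, \text{ then the Dehn filling } M(s_1,\dots,s_k)
\text{ admits a complete metric of negative sectional curvature.}
```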
The basic idea of the proof is to explicitly construct a negatively curved metric inside each horoball neighborhood that matches the metric near the horospherical boundary. This construction, using cylindrical coordinates, works when the filling slope has length greater than 2π. See the references for complete details.
According to the geometrization conjecture, these negatively curved 3-manifolds must actually admit a complete hyperbolic metric. A horoball packing argument due to Thurston shows that there are at most 48 slopes to avoid on each cusp to get a hyperbolic 3-manifold. For one-cusped hyperbolic 3-manifolds, an improvement due to Colin Adams gives 24 exceptional slopes.
This result was later improved independently by Ian Agol and Marc Lackenby with the 6 theorem. The "6 theorem" states that Dehn filling along slopes of length greater than 6 results in a hyperbolike 3-manifold, i.e. an irreducible, atoroidal, non-Seifert-fibered 3-manifold with infinite word hyperbolic fundamental group. Again assuming the geometrization conjecture, these manifolds have a complete hyperbolic metric. An argument of Agol's shows that there are at most 12 exceptional slopes.
References
3-manifolds
Theorems in geometry | 2π theorem | Mathematics | 446 |
4,059,082 | https://en.wikipedia.org/wiki/Kadowaki%E2%80%93Woods%20ratio | The Kadowaki–Woods ratio is the ratio of A, the quadratic term of the resistivity and γ2, the square of the linear term of the specific heat. This ratio is found to be a constant for transition metals, and for heavy-fermion compounds, although at different values.
In 1968 M. J. Rice pointed out that the coefficient A should vary predominantly as the square of the linear electronic specific heat coefficient γ; in particular he showed that the ratio A/γ2 is material independent for the pure 3d, 4d and 5d transition metals. Heavy-fermion compounds are characterized by very large values of A and γ. Kadowaki and Woods showed that A/γ2 is material-independent within the heavy-fermion compounds, and that it is about 25 times larger than in aforementioned transition metals.
It was shown by K. Miyake, T. Matsuura and C.M. Varma that local Fermi liquids, quasiparticle mass and lifetime are linked consistent with the A/γ2 ratio. This suggest that the Kadowaki-Woods ratio reflects a relation between quasiparticle mass and lifetime renormalisation as a function of electron-electron interaction strength.
According to the theory of electron-electron scattering the ratio A/γ2 contains indeed several non-universal factors, including the square of the strength of the effective electron-electron interaction. Since in general the interactions differ in nature from one group of materials to another, the same values of A/γ2 are only expected within a particular group. In 2005 Hussey proposed a re-scaling of A/γ2 to account for unit cell volume, dimensionality, carrier density and multi-band effects. In 2009 Jacko, Fjaerestad, and Powell demonstrated fdx(n)A/γ2 to have the same value in transition metals, heavy fermions, organics and oxides with A varying over 10 orders of magnitude, where fdx(n) may be written in terms of the dimensionality of the system, the electron density and, in layered systems, the interlayer spacing or the interlayer hopping integral.
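As a simple numerical illustration of the ratio itself (not of the re-scalings discussed above), the snippet below computes A/γ² for hypothetical parameter values; the numbers are placeholders chosen only to show the units and rough orders of magnitude involved, not measured data for any compound.

```python
def kadowaki_woods_ratio(A, gamma):
    """Return A / gamma**2.

    A     : coefficient of the T^2 term of the resistivity (here taken in μΩ·cm/K²)
    gamma : linear specific-heat coefficient (here taken in mJ/(mol·K²))
    The result then carries units of μΩ·cm·mol²·K²/mJ², which is how the
    ratio is commonly quoted.
    """
    return A / gamma ** 2

# Hypothetical inputs for illustration only (not measured values):
A_large = 10.0        # μΩ·cm/K², a "large A" of the kind seen in heavy-fermion compounds
gamma_large = 1000.0  # mJ/(mol·K²), a correspondingly "large gamma"
print(f"{kadowaki_woods_ratio(A_large, gamma_large):.1e} μΩ·cm·mol²·K²/mJ²")
```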
See also
Wilson ratio
References
Correlated electrons
Condensed matter physics
Fermions | Kadowaki–Woods ratio | Physics,Chemistry,Materials_science,Engineering | 465 |
11,465,762 | https://en.wikipedia.org/wiki/Pileolaria%20terebinthi | Pileolaria terebinthi is a plant pathogen infecting pistachio trees including Pistacia vera, Pistacia atlantica, and Pistacia terebinthus.
References
External links
Index Fungorum
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Fruit tree diseases
Pucciniales
Fungus species | Pileolaria terebinthi | Biology | 68 |
36,644,224 | https://en.wikipedia.org/wiki/Cohen%20ring | In algebra, a Cohen ring is a field or a complete discrete valuation ring of mixed characteristic whose maximal ideal is generated by p. Cohen rings are used in the Cohen structure theorem for complete Noetherian local rings.
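A standard example, added here for illustration: for a perfect field k of characteristic p, the ring of Witt vectors W(k) is a Cohen ring with residue field k; in particular, the p-adic integers play this role for the prime field.

```latex
% Illustrative standard example of a Cohen ring.
W(\mathbb{F}_p) \;=\; \mathbb{Z}_p, \qquad
\mathfrak{m} = (p), \qquad
W(\mathbb{F}_p)/\mathfrak{m} \;\cong\; \mathbb{F}_p .
```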
See also
Norm field
References
Cohen's paper was written when "local ring" meant what is now called a "Noetherian local ring".
Commutative algebra | Cohen ring | Mathematics | 79 |
56,135,961 | https://en.wikipedia.org/wiki/Catalyst%20transfer%20polymerization | Catalyst transfer polymerization (CTP), or catalyst transfer polycondensation, is a type of living chain-growth polymerization that is used for synthesizing conjugated polymers. Benefits to using CTP over other methods are low polydispersity and control over number average molecular weight in the resulting polymers. Very few monomers have been demonstrated to undergo CTP.
History
The first reports of CTP came simultaneously from the labs of Yokozawa and McCullough in 2004, with the recognition that polythiophene can be synthesized with low dispersity and with control over molecular weight. This recognition sparked interest in the polymerization mechanism so that it could be expanded to other monomers. Few polymers can be synthesized via CTP, so most conjugated polymers are synthesized via step-growth polymerization using palladium-catalyzed cross-coupling reactions.
Characteristics
CTP is exclusively performed on arene monomers to give conjugated polymers. The polymers obtained from CTP are often low dispersity due to its living, chain growth nature. Mass spectrometry can be used to identify end-groups on the polymer to determine if the polymer was synthesized via chain growth.
Types
CTP utilizes cross coupling reactions (see Mechanism below) with monomers containing magnesium-, zinc-, boron-, and tin-based transmetallating groups, giving rise to Kumada CTP, Negishi CTP, Suzuki CTP, and Stille CTP reactions.
Mechanism
The mechanism of CTP has been debated. The living chain-growth nature of CTP can be explained by the existence of a π-complex (as described in this section) but can also be explained via polymer reactivity.
Initiation
Initiation from a metal(II) species (either Ni or Pd) involves two monomers transmetalating onto the metal center to form a complex that can undergo reductive elimination. The complex formed after reductive elimination is referred to as a π-complex because the catalyst is bound to the π system of the monomer. The catalyst can isomerize to other π-complexes via a process known as "ring-walking" to the π-bond adjacent to a C-X bond at the end of the chain, allowing oxidative addition to occur. The product of oxidative addition is an active polymer-metal(II)-halide, which can react with monomers in the propagation reaction.
Propagation
The propagation steps of CTP occurs through a cycle of transmetalation, reductive elimination, ring walking, and oxidative addition. The existence of a π-complex allows for the polymerization to be controlled as it ensures that the catalyst cannot dissociate from the polymer chain (and start new chains). This means that the number of polymer chains at the end of the polymerization should be equal to the number of catalysts in solution, and that the average degree of polymerization of the sample at the end of polymerization should be equal to the ratio of monomers to catalysts in solution.
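A back-of-the-envelope sketch of this bookkeeping (the function name and input values are illustrative, not taken from the cited studies): for an ideal living chain-growth polymerization in which every catalyst starts exactly one chain, the expected number-average degree of polymerization and molar mass follow directly from the monomer-to-catalyst ratio and the conversion.

```python
def predicted_mn(monomer_conc, catalyst_conc, conversion, repeat_unit_mass, end_group_mass=0.0):
    """Theoretical number-average molar mass (g/mol) for an ideal living
    chain-growth polymerization: one chain per catalyst, no chain transfer.

    monomer_conc, catalyst_conc : concentrations in the same units (e.g. mol/L)
    conversion                  : fraction of monomer consumed (0 to 1)
    repeat_unit_mass            : molar mass of the repeat unit (g/mol)
    """
    degree_of_polymerization = conversion * monomer_conc / catalyst_conc
    return degree_of_polymerization * repeat_unit_mass + end_group_mass

# Illustrative numbers only: 100:1 monomer:catalyst at full conversion,
# with a 3-hexylthiophene-like repeat unit of about 166 g/mol
print(predicted_mn(0.10, 0.001, 1.0, 166.3))  # ~16,600 g/mol expected Mn
```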
Termination
A characteristic of CTP is its living chain-growth character, meaning that the catalyst retains a reactive chain end for the entirety of the polymerization. Therefore, to terminate the polymerization, a quenching agent must be added, such as a strong acid to protonate the polymer or a nucleophile to end-cap the polymer.
If the π-complex is too weakly bound, termination of polymer chains can occur before a quenching agent is added, causing lower molecular weight polymers to form. Current research into CTP focuses on finding catalysts that form strong catalyst-polymer π-complexes such that the polymerization remains living.
Analysis
Success of CTP is often evaluated using gel permeation chromatography (GPC), matrix-assisted laser desorption/ionization (MALDI) mass spectrometry, and nuclear magnetic resonance (NMR) spectroscopy. GPC characterization enables determination of average molecular weight. MALDI and NMR allow for identification of the end groups of the polymer chain.
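For reference, here is a minimal sketch of how the number-average molar mass (Mn), weight-average molar mass (Mw) and dispersity (Đ) are computed from a measured distribution of chain masses and abundances; the toy data are invented purely to illustrate the narrow, low-dispersity distribution expected from a well-behaved CTP.

```python
def molecular_weight_averages(masses, counts):
    """Return (Mn, Mw, dispersity) from chain molar masses (g/mol) and their abundances."""
    n_total = sum(counts)
    mass_total = sum(m * n for m, n in zip(masses, counts))
    mn = mass_total / n_total                                         # number-average molar mass
    mw = sum(m * m * n for m, n in zip(masses, counts)) / mass_total  # weight-average molar mass
    return mn, mw, mw / mn

# Invented toy distribution: a narrow sample centered at 10,000 g/mol
masses = [8000, 9000, 10000, 11000, 12000]
counts = [5, 20, 50, 20, 5]
mn, mw, dispersity = molecular_weight_averages(masses, counts)
print(round(mn), round(mw), round(dispersity, 3))  # dispersity close to 1 indicates a controlled polymerization
```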
Polymer reactivity versus π-complex
The chain-growth nature of CTP can also be described without invoking a catalyst-polymer π-complex. If no π-complex formed and instead each monomer addition made the polymer chain more reactive, chain growth would still be observed, since the largest polymers in the reaction would be the most reactive and would react with monomers preferentially. These two possibilities, a reactivity effect and a π-complex-mediated mechanism, can be distinguished by studying the end groups of the polymers using mass spectrometry.
Polymers that can be synthesized by CTP
A non-exhaustive list of the polymers that can be synthesized using CTP:
Polythiophene
Polyphenylene
Polyselenophene
Polytellurophene
Polythiazole
Polybenzothiadiazole
Polypyrrole
Polyfluorene
References
Polymerization reactions | Catalyst transfer polymerization | Chemistry,Materials_science | 1,007 |
797,291 | https://en.wikipedia.org/wiki/Teabagging | Teabagging is a slang term for the sexual act involving placing the scrotum into the mouth of a sexual partner for sexual pleasure, or onto the face or head of another person, sometimes as a comedic device.
The name of the practice, when it is done in a repeated in-and-out motion, is derived from its passing resemblance to the dipping of a tea bag into a cup of hot water as a method of brewing tea. As a form of non-penetrative sex, it can be done for its own enjoyment or as foreplay.
Oral sex
Along with the penis, the scrotum is sensitive and considered to be an erogenous zone. This makes varying degrees of stimulation an integral part of oral sex. And while some may enjoy the stimulation, not everyone responds to it. Sex experts have suggested various techniques that the performer can use during fellatio to increase their partner's pleasure. These include gently sucking and tugging on the scrotum and use of lips to ensure minimal contact with their teeth. It has also been recommended as a form of foreplay or safer sex. It presents a low risk of transmission for many diseases, including HIV.
Its gain in prominence has been attributed to its depiction in the 1998 film Pecker, directed by John Waters. It has since become popular enough with couples to be discussed during an episode of the television series Sex and the City.
Sex and relationship experts have varying definitions on how the act is performed. According to columnist Dan Savage, the person whose scrotum is being stimulated is known as "the teabagger" and the one giving the stimulation is "the teabaggee": "A teabagger dips sack; a teabaggee receives dipped sack." Some consider the act to be as simple as fellatio involving the scrotum. Others consider the position to involve the man squatting over his reclined partner while the testicles are repeatedly raised and lowered into the mouth. Whether licking and fondling is considered teabagging was once debated on The Howard Stern Show.
In video games
Teabagging in video games involves a player character rapidly and repeatedly crouching over the corpse of another player-controlled character as a form of humiliation or to provoke the other player. The practice likely originated from multiplayer communities in games such as Quake or Counter-Strike, and it became more prominent in later first-person shooter games like Halo: Combat Evolved. The use of teabagging is now widespread in video game culture, although some gamers consider it to be an act of bad sportsmanship or harassment.
The act courted much controversy across June and July 2022 when two professional female Valorant players received suspensions by Riot Games for criticizing people who had compared the act to sexual assault. In addition to the suspensions, the players were also doxxed and faced real-world consequences. The suspensions caused outrage in much of the Valorant and wider internet community, with various commentators calling the comparison to real-world sexual violence as "out of control" and "absurd".
The player known as Dawn, who received a three-month suspension for voicing her opinion, said of the situation: "I have watched [sexual assault] happen in broad daylight. It is not something you can compare to crouching in a video game. I was visibly upset by this, as were hundreds of thousands of people, and replied under her thread expressing my frustrations and concerns."
Social ridicule and harassment
Teabagging is not always carried out consensually, such as when it is done as a practical joke, which, in some jurisdictions, is legally considered sexual assault or sexual battery. It has been practiced during hazing or bullying incidents, with reports including groups holding down victims while the perpetrator "shoved his testicles in [the victim's] face" or puts his "crotch to his head".
See also
Turkey slap
References
External links
Oral eroticism
Practical jokes
Sex- and gender-related slurs
Scrotum
Sexual acts
Sexual slang | Teabagging | Biology | 822 |
8,413,399 | https://en.wikipedia.org/wiki/Inorganic%20pyrophosphatase | Inorganic pyrophosphatase (or inorganic diphosphatase, PPase) is an enzyme () that catalyzes the conversion of one ion of pyrophosphate to two phosphate ions. This is a highly exergonic reaction, and therefore can be coupled to unfavorable biochemical transformations in order to drive these transformations to completion. The functionality of this enzyme plays a critical role in lipid metabolism (including lipid synthesis and degradation), calcium absorption and bone formation, and DNA synthesis, as well as other biochemical transformations.
Two types of inorganic diphosphatase, very different in terms of both amino acid sequence and structure, have been characterised to date: soluble and transmembrane proton-pumping pyrophosphatases (sPPases and H(+)-PPases, respectively). sPPases are ubiquitous proteins that hydrolyse pyrophosphate to release heat, whereas H+-PPases, so far unidentified in animal and fungal cells, couple the energy of PPi hydrolysis to proton movement across biological membranes.
Structure
Thermostable soluble pyrophosphatase had been isolated from the extremophile Thermococcus litoralis. The 3-dimensional structure was determined using x-ray crystallography, and was found to consist of two alpha-helices, as well as an antiparallel closed beta-sheet. The form of inorganic pyrophosphatase isolated from Thermococcus litoralis was found to contain a total of 174 amino acid residues and have a hexameric oligomeric organization (Image 1).
Humans possess two genes encoding pyrophosphatase, PPA1 and PPA2. PPA1 has been assigned to a gene locus on human chromosome 10, and PPA2 to chromosome 4.
Mechanism
Though the precise mechanism of catalysis via inorganic pyrophosphatase in most organisms remains uncertain, site-directed mutagenesis studies in Escherichia coli have allowed for analysis of the enzyme active site and identification of key amino acids. In particular, this analysis has revealed 17 residues that may be of functional importance in catalysis.
Further research suggests that the protonation state of Asp67 is responsible for modulating the reversibility of the reaction in Escherichia coli. The carboxylate functional group of this residue has been shown to perform a nucleophilic attack on the pyrophosphate substrate when four magnesium ions are present. Direct coordination with these four magnesium ions and hydrogen bonding interactions with Arg43, Lys29, and Lys142 (all positively charged residues) have been shown to anchor the substrate to the active site. The four magnesium ions are also suggested to be involved in the stabilization of the trigonal bipyramid transition state, which lowers the energetic barrier for the aforementioned nucleophilic attack.
Several studies have also identified additional substrates that can act as allosteric effectors. In particular, the binding of pyrophosphate (PPi) to the effector site of inorganic pyrophosphatase increases its rate of hydrolysis at the active site. ATP has also been shown to function as an allosteric activator in Escherichia coli, while fluoride has been shown to inhibit hydrolysis of pyrophosphate in yeast.
Biological function and significance
The hydrolysis of inorganic pyrophosphate (PPi) to two phosphate ions is utilized in many biochemical pathways to render reactions effectively irreversible. This process is highly exergonic (with a standard free-energy change of approximately −19 kJ/mol), and therefore greatly increases the energetic favorability of the reaction system when coupled with a typically less favorable reaction.
Inorganic pyrophosphatase catalyzes this hydrolysis reaction in the early steps of lipid degradation, a prominent example of this phenomenon. By promoting the rapid hydrolysis of pyrophosphate (PPi), inorganic pyrophosphatase provides the driving force for the activation of fatty acids destined for beta oxidation.
Before fatty acids can undergo degradation to fulfill the metabolic needs of an organism, they must first be activated via a thioester linkage to coenzyme A. This process is catalyzed by the enzyme acyl CoA synthetase, and occurs on the outer mitochondrial membrane. This activation is accomplished in two reactive steps: (1) the fatty acid reacts with a molecule of ATP to form an enzyme-bound acyl adenylate and pyrophosphate (PPi), and (2) the sulfhydryl group of CoA attacks the acyl adenylate, forming acyl CoA and a molecule of AMP. Each of these two steps is reversible under biological conditions, save for the additional hydrolysis of PPi by inorganic pyrophosphatase. This coupled hydrolysis provides the driving force for the overall forward activation reaction, and serves as a source of inorganic phosphate used in other biological processes.
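As a rough illustration of this coupling, using typical textbook standard free-energy values that are assumed here rather than taken from this article (the acyl-CoA-forming step is treated as approximately thermoneutral):

```latex
% Illustrative values only; the overall driving force comes from PPi hydrolysis.
\begin{aligned}
&\text{fatty acid} + \mathrm{CoA} + \mathrm{ATP} \rightleftharpoons \text{acyl-CoA} + \mathrm{AMP} + \mathrm{PP_i},
  &&\Delta G^{\circ\prime} \approx 0~\text{kJ/mol}\\
&\mathrm{PP_i} + \mathrm{H_2O} \longrightarrow 2\,\mathrm{P_i},
  &&\Delta G^{\circ\prime} \approx -19~\text{kJ/mol}\\
&\text{overall activation:}
  &&\Delta G^{\circ\prime} \approx -19~\text{kJ/mol}
\end{aligned}
```

Because the hydrolysis step is essentially irreversible under cellular conditions, the coupled activation reaction is pulled toward completion.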
Evolution
Examination of prokaryotic and eukaryotic forms of soluble inorganic pyrophosphatase (sPPase, EC 3.6.1.1) has shown that they differ significantly in amino acid sequence, number of residues, and oligomeric organization. Despite differing structural components, recent work has suggested a large degree of evolutionary conservation of active site structure as well as reaction mechanism, based on kinetic data. Analysis of approximately one million genetic sequences taken from organisms in the Sargasso Sea identified a 57-residue sequence within the regions coding for proton-pumping inorganic pyrophosphatase (H+-PPase) that appears to be highly conserved; this region primarily consisted of the four early amino acid residues Gly, Ala, Val and Asp, suggesting an evolutionarily ancient origin for the protein.
References
External links
Further reading
Protein families
EC 3.6.1
Metal enzymes
Enzymes of known structure | Inorganic pyrophosphatase | Biology | 1,228 |
19,809,804 | https://en.wikipedia.org/wiki/Massed%20negative%20practice | Massed negative practice is a proposed treatment for the tics of Tourette syndrome in which the individual with Tourette's "practices" tics continuously until a conditioned level of fatigue is reached. It is based upon the Hullian learning theory, which holds that tics are "maladaptive habits that are strengthened by repetition and can be replaced by the strengthening of more adaptive habits (i.e., not having tics)". There is little evidence supporting its efficacy in the treatment of tics.
References
Behavior therapy
Behavior modification | Massed negative practice | Biology | 111 |
56,873 | https://en.wikipedia.org/wiki/Lactose%20intolerance | Lactose intolerance is caused by a lessened ability or a complete inability to digest lactose, a sugar found in dairy products. Humans vary in the amount of lactose they can tolerate before symptoms develop. Symptoms may include abdominal pain, bloating, diarrhea, flatulence, and nausea. These symptoms typically start thirty minutes to two hours after eating or drinking something containing lactose, with the severity typically depending on the amount consumed. Lactose intolerance does not cause damage to the gastrointestinal tract.
Lactose intolerance is due to a lack of the enzyme lactase in the small intestine, which breaks lactose down into glucose and galactose. There are four types: primary, secondary, developmental, and congenital. Primary lactose intolerance occurs as the amount of lactase declines as people grow up. Secondary lactose intolerance is due to injury to the small intestine. Such injury could be the result of infection, celiac disease, inflammatory bowel disease, or other diseases. Developmental lactose intolerance may occur in premature babies and usually improves over a short period of time. Congenital lactose intolerance is an extremely rare genetic disorder in which little or no lactase is made from birth. The reduction in lactase production typically starts in late childhood or early adulthood, but prevalence increases with age.
Diagnosis may be confirmed if symptoms resolve following eliminating lactose from the diet. Other supporting tests include a hydrogen breath test and a stool acidity test. Other conditions that may produce similar symptoms include irritable bowel syndrome, celiac disease, and inflammatory bowel disease. Lactose intolerance is different from a milk allergy. Management is typically by decreasing the amount of lactose in the diet, taking lactase supplements, or treating the underlying disease. People are typically able to drink at least one cup of milk without developing symptoms, with greater amounts tolerated if drunk with a meal or throughout the day.
Worldwide, around 65% of adults are affected by lactose malabsorption. Other mammals usually lose the ability to digest lactose after weaning. Lactose intolerance is the ancestral state of all humans before the recent evolution of lactase persistence in some cultures, which extends lactose tolerance into adulthood. Lactase persistence evolved in several populations independently, probably as an adaptation to the domestication of dairy animals around 10,000 years ago. Today the prevalence of lactose tolerance varies widely between regions and ethnic groups. The ability to digest lactose is most common in people of Northern European descent, and to a lesser extent in some parts of the Middle East and Africa. Lactose intolerance is most common among people of East Asian descent (affecting about 90% of adults), people of Jewish descent, people in many African and Arab countries, and people of Southern European descent (notably Greeks and Italians). Traditional food cultures reflect local variations in tolerance and historically many societies have adapted to low levels of tolerance by making dairy products that contain less lactose than fresh milk. The medicalization of lactose intolerance as a disorder has been attributed to biases in research history, since most early studies were conducted amongst populations which are normally tolerant, as well as the cultural and economic importance and impact of milk in countries such as the United States.
Terminology
Lactose intolerance primarily refers to a syndrome with one or more symptoms upon the consumption of food substances containing lactose sugar. Individuals may be lactose intolerant to varying degrees, depending on the severity of these symptoms.
Hypolactasia is the term specifically for the small intestine producing little or no lactase enzyme. If a person with hypolactasia consumes lactose sugar, it results in lactose malabsorption. The digestive system is unable to process the lactose sugar, and the unprocessed sugars in the gut produce the symptoms of lactose intolerance.
Lactose intolerance is not an allergy, because it is not an immune response, but rather a sensitivity to dairy caused by a deficiency of the lactase enzyme. Milk allergy, occurring in about 2% of the population, is a separate condition, with distinct symptoms that occur when the presence of milk proteins triggers an immune reaction.
Signs and symptoms
The principal manifestation of lactose intolerance is an adverse reaction to products containing lactose (primarily milk), including abdominal bloating and cramps, flatulence, diarrhea, nausea, borborygmi, and vomiting (particularly in adolescents). These appear one-half to two hours after consumption. The severity of these signs and symptoms typically increases with the amount of lactose consumed; most lactose-intolerant people can tolerate a certain level of lactose in their diets without ill effects.
Because lactose intolerance is not an allergy, it does not produce allergy symptoms (such as itching, hives, or anaphylaxis).
Causes
Lactose intolerance is a consequence of lactase deficiency, which may be genetic (primary hypolactasia and primary congenital alactasia) or environmentally induced (secondary or acquired hypolactasia). In either case, symptoms are caused by insufficient levels of lactase in the lining of the duodenum. Lactose, a disaccharide molecule found in milk and dairy products, cannot be directly absorbed through the wall of the small intestine into the bloodstream, so, in the absence of lactase, passes intact into the colon. Bacteria in the colon can metabolise lactose, and the resulting fermentation produces copious amounts of gas (a mixture of hydrogen, carbon dioxide, and methane) that causes the various abdominal symptoms. The unabsorbed sugars and fermentation products also raise the osmotic pressure of the colon, causing an increased flow of water into the bowels (diarrhea).
Lactose intolerance in infants (congenital lactase deficiency) is caused by mutations in the LCT gene. The LCT gene provides the instructions for making lactase. Mutations are believed to interfere with the function of lactase, causing affected infants to have a severely impaired ability to digest lactose in breast milk or formula. Lactose intolerance in adulthood is a result of gradually decreasing activity (expression) of the LCT gene after infancy, which occurs in most humans. The specific DNA sequence in the MCM6 gene helps control whether the LCT gene is turned on or off. At least several thousand years ago, some humans developed a mutation in the MCM6 gene that keeps the LCT gene turned on even after breast feeding is stopped. Populations that are lactose intolerant lack this mutation. The LCT and MCM6 genes are both located on the long arm (q) of chromosome 2 in region 21. The locus can be expressed as 2q21. The lactase deficiency also could be linked to certain heritages and varies widely. A 2016 study of over 60,000 participants from 89 countries found regional prevalence of lactose malabsorption was "64% (54–74) in Asia (except Middle East), 47% (33–61) in eastern Europe, Russia, and former Soviet Republics, 38% (CI 18–57) in Latin America, 70% (57–83) in the Middle East, 66% (45–88) in northern Africa, 42% (13–71) in northern America, 45% (19–71) in Oceania, 63% (54–72) in sub-Saharan Africa, and 28% (19–37) in northern, southern and western Europe." According to Johns Hopkins Medicine, lactose intolerance is more common in Asian Americans, African Americans, Mexican Americans, and Native Americans. Analysis of the DNA of 94 ancient skeletons in Europe and Russia concluded that the mutation for lactose tolerance appeared about 4,300 years ago and spread throughout the European population.
Some human populations have developed lactase persistence, in which lactase production continues into adulthood probably as a response to the benefits of being able to digest milk from farm animals. Some have argued that this links intolerance to natural selection favoring lactase-persistent individuals, but it is also consistent with a physiological response to decrease lactase production when it is not needed in cultures in which dairy products are not an available food source. Although populations in Europe, India, Arabia, and Africa were first thought to have high rates of lactase persistence because of a single mutation, lactase persistence has been traced to a number of mutations that occurred independently. Different alleles for lactase persistence have developed at least three times in East African populations, with persistence extending from 26% in Tanzania to 88% in the Beja pastoralist population in Sudan.
The accumulation of epigenetic factors, primarily DNA methylation, in the extended LCT region, including the gene enhancer located in the MCM6 gene near C/T-13910 SNP, may also contribute to the onset of lactose intolerance in adults. Age-dependent expression of LCT in mice intestinal epithelium has been linked to DNA methylation in the gene enhancer.
Lactose intolerance is classified according to its causes as:
Primary hypolactasia
Primary hypolactasia, or primary lactase deficiency, is genetic, develops in childhood at various ages, and is caused by the absence of a lactase persistence allele. In individuals without the lactase persistence allele, less lactase is produced by the body over time, leading to hypolactasia in adulthood. The frequency of lactase persistence, which allows lactose tolerance, varies enormously worldwide, with the highest prevalence in Northwestern Europe, declines across southern Europe and the Middle East and is low in Asia and most of Africa, although it is common in pastoralist populations from Africa.
Secondary hypolactasia
Secondary hypolactasia or secondary lactase deficiency, also called acquired hypolactasia or acquired lactase deficiency, is caused by an injury to the small intestine. This form of lactose intolerance can occur in both infants and lactase persistent adults and is generally reversible. It may be caused by acute gastroenteritis, coeliac disease, Crohn's disease, ulcerative colitis, chemotherapy, intestinal parasites (such as giardia), or other environmental causes.
Primary congenital alactasia
Primary congenital alactasia, also called congenital lactase deficiency, is an extremely rare, autosomal recessive enzyme defect that prevents lactase expression from birth. People with congenital lactase deficiency cannot digest lactose from birth, so cannot digest breast milk. This genetic defect is characterized by a complete lack of lactase (alactasia). About 40 cases have been reported worldwide, mainly limited to Finland. Before the 20th century, babies born with congenital lactase deficiency often did not survive, but death rates decreased with soybean-derived infant formulas and manufactured lactose-free dairy products.
Diagnosis
In order to assess lactose intolerance, intestinal function is challenged by ingesting more dairy products than can be readily digested. Clinical symptoms typically appear within 30 minutes, but may take up to two hours, depending on other foods and activities. Substantial variability in response (symptoms of nausea, cramping, bloating, diarrhea, and flatulence) is to be expected, as the extent and severity of lactose intolerance varies among individuals.
The next step is to determine whether it is due to primary lactase deficiency or an underlying disease that causes secondary lactase deficiency. Physicians should investigate the presence of undiagnosed coeliac disease, Crohn's disease, or other enteropathies when secondary lactase deficiency is suspected and infectious gastroenteritis has been ruled out.
Lactose intolerance is distinct from milk allergy, an immune response to cow's milk proteins. They may be distinguished in diagnosis by giving lactose-free milk, producing no symptoms in the case of lactose intolerance, but the same reaction as to normal milk in the presence of a milk allergy. A person can have both conditions. If positive confirmation is necessary, four tests are available.
Hydrogen breath test
In a hydrogen breath test, the most accurate lactose intolerance test, after an overnight fast, 25 grams of lactose (in a solution with water) are swallowed. If the lactose cannot be digested, enteric bacteria metabolize it and produce hydrogen, which, along with methane, if produced, can be detected on the patient's breath by a clinical gas chromatograph or compact solid-state detector. The test takes about 2.5 hours to complete. If the hydrogen levels in the patient's breath are high, they may have lactose intolerance. This test is not usually done on babies and very young children, because it can cause severe diarrhea.
Lactose tolerance test
In conjunction, measuring blood glucose level every 10 to 15 minutes after ingestion will show a "flat curve" in individuals with lactose malabsorption, while the lactase persistent will have a significant "top", with a typical elevation of 50% to 100%, within one to two hours. However, due to the need for frequent blood sampling, this approach has been largely replaced by breath testing.
After an overnight fast, blood is drawn and then 50 grams of lactose (in aqueous solution) are swallowed. Blood is then drawn again at the 30-minute, 1-hour, 2-hour, and 3-hour marks. If the lactose cannot be digested, blood glucose levels will rise by less than 20 mg/dl.
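A toy encoding of the decision rule just described; the 20 mg/dl threshold is the one stated above, while the function name and sample readings are purely illustrative and none of this is clinical guidance:

```python
# Hypothetical helper illustrating the stated rule: after 50 g of lactose,
# a blood-glucose rise of less than 20 mg/dl over the fasting value suggests
# that the lactose was not digested (lactose malabsorption).
def lactose_tolerance_result(fasting_mg_dl, post_dose_mg_dl):
    """post_dose_mg_dl: readings taken at 30 minutes, 1 hour, 2 hours and 3 hours."""
    rise = max(post_dose_mg_dl) - fasting_mg_dl
    return "suggests lactose malabsorption" if rise < 20 else "consistent with normal digestion"

print(lactose_tolerance_result(85, [92, 95, 90, 88]))     # peak rise of 10 mg/dl
print(lactose_tolerance_result(85, [120, 140, 110, 95]))  # peak rise of 55 mg/dl
```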
Stool acidity test
This test can be used to diagnose lactose intolerance in infants, for whom other forms of testing are risky or impractical. The infant is given lactose to drink. If the individual is tolerant, the lactose is digested and absorbed in the small intestine; otherwise, it is not digested and absorbed, and it reaches the colon. The bacteria in the colon, mixed with the lactose, cause acidity in stools. Stools passed after the ingestion of the lactose are tested for level of acidity. If the stools are acidic, the infant is intolerant to lactose.
Stool pH in lactose intolerance is less than 5.5.
Intestinal biopsy
An intestinal biopsy can confirm lactase deficiency following the discovery of elevated hydrogen in the hydrogen breath test. Modern techniques have enabled a bedside test, identifying the presence of the lactase enzyme on upper gastrointestinal endoscopy instruments. However, for research applications such as mRNA measurements, a specialist laboratory is required.
Stool sugar chromatography
Chromatography can be used to separate and identify undigested sugars present in faeces. Although lactose may be detected in the faeces of people with lactose intolerance, this test is not considered reliable enough to conclusively diagnose or exclude lactose intolerance.
Genetic diagnostic
Genetic tests may be useful in assessing whether a person has primary lactose intolerance. Lactase activity persistence in adults is associated with two polymorphisms, C/T 13910 and G/A 22018, located in the MCM6 gene. These polymorphisms may be detected by molecular biology techniques applied to DNA extracted from blood or saliva samples; genetic kits specific for this diagnosis are available. The procedure consists of extracting and amplifying DNA from the sample, followed by a hybridization protocol on a strip. Colored bands are obtained as a result, and depending on the combination of bands, it is possible to determine whether the patient is lactose intolerant. This test allows a noninvasive, definitive diagnosis.
Management
When lactose intolerance is due to secondary lactase deficiency, treatment of the underlying disease may allow lactase activity to return to normal levels. In people with celiac disease, lactose intolerance normally reverts or improves several months after starting a gluten-free diet, but temporary dietary restriction of lactose may be needed.
People with primary lactase deficiency cannot modify their body's ability to produce lactase. In societies where lactose intolerance is the norm, it is not considered a condition that requires treatment. However, where dairy is a larger component of the normal diet, a number of efforts may be useful. There are four general principles in dealing with lactose intolerance: avoidance of dietary lactose, substitution to maintain nutrient intake, regulation of calcium intake, and use of enzyme substitute. Regular consumption of dairy food by lactase deficient individuals may also reduce symptoms of intolerance by promoting colonic bacteria adaptation.
Dietary avoidance
The primary way of managing the symptoms of lactose intolerance is to limit the intake of lactose to a level that can be tolerated. Lactase deficient individuals vary in the amount of lactose they can tolerate, and some report that their tolerance varies over time, depending on health status and pregnancy. However, as a rule of thumb, people with primary lactase deficiency and no small intestine injury are usually able to consume at least 12 grams of lactose per sitting without symptoms, or with only mild symptoms, with greater amounts tolerated if consumed with a meal or throughout the day.
Lactose is found primarily in dairy products, which vary in the amount of lactose they contain:
Milk – unprocessed cow's milk is about 4.7% lactose; goat's milk 4.7%; sheep's milk 4.7%; buffalo milk 4.86%; and yak milk 4.93%.
Sour cream and buttermilk – if made in the traditional way, this may be tolerable, but most modern brands add milk solids.
Yogurt – lactobacilli used in the production of yogurt metabolize lactose to varying degrees, depending on the type of yogurt. Some bacteria found in yogurt also produce their own lactase, which facilitates digestion in the intestines of lactose intolerant individuals.
Cheese – The curdling of cheese concentrates most of the lactose from milk into the whey: fresh cottage cheese contains 7% of the lactose found in an equivalent mass of milk. Further fermentation and aging converts the remaining lactose into lactic acid; traditionally made hard cheeses, which have a long ripening period, contain virtually no lactose: cheddar contains less than 1.5% of the lactose found in an equivalent mass of milk. However, manufactured cheeses may be produced using processes that do not have the same lactose-reducing properties.
There used to be a lack of standardization on how lactose is measured and reported in food. The different molecular weights of anhydrous lactose or lactose monohydrate result in up to 5% difference. One source recommends using the "carbohydrates" or "sugars" part of the nutritional label as surrogate for lactose content, but such "lactose by difference" values are not assured to correspond to real lactose content. The stated dairy content of a product also varies according to manufacturing processes and labelling practices, and commercial terminology varies between languages and regions. As a result, absolute figures for the amount of lactose consumed (by weight) may not be very reliable.
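As a rough check on the figure quoted above, using the standard molecular weights of lactose monohydrate (about 360.3 g/mol) and anhydrous lactose (about 342.3 g/mol) — values assumed here rather than stated in the article — the two reporting conventions differ by roughly

```latex
\frac{360.3 - 342.3}{360.3} \approx 0.05 = 5\%
```

which is why a stated lactose content can shift by a few percent depending on which form is meant.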
Kosher products labeled pareve or fleishig are free of milk. However, if a "D" (for "dairy") is present next to the circled "K", "U", or other hechsher, the food product likely contains milk solids, although it may also simply indicate the product was produced on equipment shared with other products containing milk derivatives.
Lactose is also a commercial food additive used for its texture, flavor, and adhesive qualities. It is found in additives labelled as casein, caseinate, whey, lactoserum, milk solids, modified milk ingredients, etc. As such, lactose is found in foods such as processed meats (sausages/hot dogs, sliced meats, pâtés), gravy stock powder, margarines, sliced breads, breakfast cereals, potato chips, processed foods, medications, prepared meals, meal replacements (powders and bars), protein supplements (powders and bars), and even beers in the milk stout style. Some barbecue sauces and liquid cheeses used in fast-food restaurants may also contain lactose. When dining out, carrying lactose intolerance cards that explain dietary restrictions in the local language can help communicate needs to restaurant staff. Lactose is often used as the primary filler (main ingredient) in most prescription and non-prescription solid pill-form medications, though product labeling seldom mentions the presence of 'lactose' or 'milk', and neither do product monographs provided to pharmacists; most pharmacists are unaware of the widespread use of lactose in such medications until they contact the supplier or manufacturer for verification.
Milk substitutes
Plant-based milks and derivatives such as soy milk, rice milk, almond milk, coconut milk, hazelnut milk, oat milk, hemp milk, macadamia nut milk, and peanut milk are inherently lactose-free. Low-lactose and lactose-free versions of foods are often available to replace dairy-based foods for those with lactose intolerance.
Lactase supplements
When lactose avoidance is not possible, or on occasions when a person chooses to consume such items, then enzymatic lactase supplements may be used.
Lactase enzymes similar to those produced in the small intestines of humans are produced industrially by fungi of the genus Aspergillus. The enzyme, β-galactosidase, is available in tablet form in a variety of doses, in many countries without a prescription. It functions well only in high-acid environments, such as that found in the human gut due to the addition of gastric juices from the stomach. Unfortunately, too much acid can denature it, so it should not be taken on an empty stomach. Also, the enzyme is ineffective if it does not reach the small intestine by the time the problematic food does. Lactose-sensitive individuals can experiment with both timing and dosage to fit their particular needs.
While essentially the same process as normal intestinal lactose digestion, direct treatment of milk employs a different variety of industrially produced lactase. This enzyme, produced by yeast from the genus Kluyveromyces, takes much longer to act, must be thoroughly mixed throughout the product, and is destroyed by even mildly acidic environments. Its main use is in producing the lactose-free or lactose-reduced dairy products sold in supermarkets.
Rehabituation to dairy products
Regular consumption of dairy foods containing lactose can promote a colonic bacteria adaptation, enhancing a favorable microbiome, which allows people with primary lactase deficiency to diminish their intolerance and to consume more dairy foods. The way to induce tolerance is based on progressive exposure, consuming smaller amounts frequently, distributed throughout the day. Lactose intolerance can also be managed by ingesting live yogurt cultures containing lactobacilli that are able to digest the lactose in other dairy products.
Epidemiology
Worldwide, about 65% of people experience some form of lactose intolerance as they age past infancy, but there are significant differences between populations and regions. As few as 5% of northern Europeans are lactose intolerant, while as many as 90% of adults in parts of Asia are lactose intolerant.
In northern European countries, early adoption of dairy farming conferred a selective evolutionary advantage on individuals who could tolerate lactose. This led to higher frequencies of lactose tolerance in these countries. For example, almost 100% of Irish people are predicted to be lactose tolerant. Conversely, regions of the south, such as Africa, did not adopt dairy farming as early, and tolerance from milk consumption did not develop the same way as in northern Europe. Lactose intolerance is common among people of Jewish descent, as well as people from West Africa, the Arab countries, Greece, and Italy. Different populations carry different gene variants, depending on the evolutionary and cultural history of their geographical region.
History
Greater lactose tolerance has come about in two ways. Some populations have developed genetic changes to allow the digestion of lactose: lactase persistence. Other populations developed cooking methods like milk fermentation.
Lactase persistence in humans evolved relatively recently (within the last 10,000 years) among some populations. Around 8,000 years ago in modern-day Turkey, humans became reliant on newly domesticated animals that could be milked, such as cows, sheep, and goats. This resulted in a higher frequency of lactase persistence. Lactase persistence became high in regions such as Europe, Scandinavia, the Middle East and Northwestern India. However, most people worldwide remain lactase non-persistent. Populations that raised animals not used for milk tend to have lactose intolerance rates of 90–100 percent. For this reason, lactase persistence is of some interest to the fields of anthropology, human genetics, and archaeology, which typically use the genetically derived persistence/non-persistence terminology.
Aside from genetic predisposition, the rise of dairying and the production of dairy products from cow's milk varies across different regions of the world. The process of turning milk into cheese dates back earlier than 5200 BC.
DNA analysis in February 2012 revealed that Ötzi was lactose intolerant, supporting the theory that lactose intolerance was still common at that time, despite the increasing spread of agriculture and dairying.
Genetic analysis shows lactase persistence has developed several times in different places independently in an example of convergent evolution.
History of research
It was not until relatively recently that medicine recognised the worldwide prevalence of lactose intolerance and its genetic causes. Its symptoms were described as early as Hippocrates (460–370 BC), but until the 1960s, the prevailing assumption was that tolerance was the norm. Intolerance was explained as the result of a milk allergy, intestinal pathogens, or as being psychosomatic – it being recognised that some cultures did not practice dairying, and people from those cultures often reacted badly to consuming milk. Two reasons have been given for this misconception. One was that early research was conducted solely on European-descended populations, which have an unusually low incidence of lactose intolerance and an extensive cultural history of dairying. As a result, researchers wrongly concluded that tolerance was the global norm. Another reason is that lactose intolerance tends to be under-reported: lactose intolerant individuals can tolerate at least some lactose before they show symptoms, and their symptoms differ in severity. The large majority of people are able to digest some quantity of milk, for example in tea or coffee, without developing any adverse effects. Fermented dairy products, such as cheese, also contain significantly less lactose than plain milk. Therefore, in societies where tolerance is the norm, many lactose intolerant people who consume only small amounts of dairy, or have only mild symptoms, may be unaware that they cannot digest lactose.
Eventually, in the 1960s, it was recognised that lactose intolerance was correlated with race in the United States. Subsequent research revealed that lactose intolerance was more common globally than tolerance, and that the variation was due to genetic differences, not an adaptation to cultural practices.
Other animals
Most mammals normally cease to produce lactase and become lactose intolerant after weaning. The downregulation of lactase expression in mice could be attributed to the accumulation of DNA methylation in the Lct gene and the adjacent Mcm6 gene.
See also
References
External links
Digestive system
Milk
Conditions diagnosed by stool test
Food sensitivity
Wikipedia medicine articles ready to translate
Wikipedia neurology articles ready to translate
Ötzi | Lactose intolerance | Biology | 5,808 |
63,775,376 | https://en.wikipedia.org/wiki/Terbium%28IV%29%20oxide | Terbium(IV) oxide is an inorganic compound with a chemical formula TbO2. It can be produced by oxidizing terbium(III) oxide by oxygen gas at 1000 atm and 300 °C.
Decomposition
Terbium(IV) oxide starts to decompose at 340 °C, producing Tb5O8 and oxygen:
5 TbO2 → Tb5O8 + O2
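A quick atom count confirms that the equation as written is balanced:

```latex
\underbrace{5\,\mathrm{TbO_2}}_{5\,\mathrm{Tb},\ 10\,\mathrm{O}}
\;\longrightarrow\;
\underbrace{\mathrm{Tb_5O_8}}_{5\,\mathrm{Tb},\ 8\,\mathrm{O}} + \underbrace{\mathrm{O_2}}_{2\,\mathrm{O}}
```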
See also
Terbium(III) oxide
Terbium(III,IV) oxide
References
Terbium compounds
Oxides | Terbium(IV) oxide | Chemistry | 108 |
1,118,498 | https://en.wikipedia.org/wiki/Terence%20Tao | Terence Chi-Shen Tao (; born 17 July 1975) is an Australian-American mathematician, Fields medalist, and professor of mathematics at the University of California, Los Angeles (UCLA), where he holds the James and Carol Collins Chair in the College of Letters and Sciences. His research includes topics in harmonic analysis, partial differential equations, algebraic combinatorics, arithmetic combinatorics, geometric combinatorics, probability theory, compressed sensing and analytic number theory.
Tao was born to Chinese immigrant parents and raised in Adelaide. He won the Fields Medal in 2006, received the Royal Medal and the Breakthrough Prize in Mathematics in 2014, and is a 2006 MacArthur Fellow. He has been the author or co-author of over three hundred research papers, and is widely regarded as one of the greatest living mathematicians.
Life and career
Family
Tao's parents are first generation immigrants from Hong Kong to Australia. Tao's father, Billy Tao, was a Chinese paediatrician who was born in Shanghai and earned his medical degree (MBBS) from the University of Hong Kong in 1969. Tao's mother, Grace Leong, was born in Hong Kong; she received a first-class honours degree in mathematics and physics at the University of Hong Kong. She was a secondary school teacher of mathematics and physics in Hong Kong. Billy and Grace met as students at the University of Hong Kong. They then emigrated from Hong Kong to Australia in 1972.
Tao also has two brothers, Trevor and Nigel, who are currently living in Australia. Both formerly represented Australia at the International Mathematical Olympiad. Furthermore, Trevor Tao has been representing Australia internationally in chess and holds the title of Chess International Master.
Tao speaks Cantonese but cannot write Chinese. Tao is married to Laura Tao, an electrical engineer at NASA's Jet Propulsion Laboratory. They live in Los Angeles, California, and have two children.
Childhood
A child prodigy, Terence Tao skipped 5 grades. Tao exhibited extraordinary mathematical abilities from an early age, attending university-level mathematics courses at the age of 9. He is one of only three children in the history of the Johns Hopkins Study of Exceptional Talent program to have achieved a score of 700 or greater on the SAT math section while just eight years old; Tao scored a 760. Julian Stanley, Director of the Study of Mathematically Precocious Youth, stated that Tao had the greatest mathematical reasoning ability he had found in years of intensive searching.
Tao was the youngest participant to date in the International Mathematical Olympiad, first competing at the age of ten; in 1986, 1987, and 1988, he won a bronze, silver, and gold medal, respectively. Tao remains the youngest winner of each of the three medals in the Olympiad's history, having won the gold medal at the age of 13 in 1988.
Career
At age 14, Tao attended the Research Science Institute, a summer program for secondary students. In 1991, he received his bachelor's and master's degrees at the age of 16 from Flinders University under the direction of Garth Gaudry. In 1992, he won a postgraduate Fulbright Scholarship to undertake research in mathematics at Princeton University in the United States. From 1992 to 1996, Tao was a graduate student at Princeton University under the direction of Elias Stein, receiving his PhD at the age of 21. In 1996, he joined the faculty of the University of California, Los Angeles. In 1999, when he was 24, he was promoted to full professor at UCLA and remains the youngest person ever appointed to that rank by the institution.
He is known for his collaborative mindset; by 2006, Tao had worked with over 30 others in his discoveries, reaching 68 co-authors by October 2015.
Tao has had a particularly extensive collaboration with British mathematician Ben J. Green; together they proved the Green–Tao theorem, which is well known among both amateur and professional mathematicians. This theorem states that there are arbitrarily long arithmetic progressions of prime numbers. The result was covered by The New York Times.
Many other results of Tao have received mainstream attention in the scientific press, including:
his establishment of finite time blowup for a modification of the Navier–Stokes existence and smoothness Millennium Problem
his 2015 resolution of the Erdős discrepancy problem, which used entropy estimates within analytic number theory
his 2019 progress on the Collatz conjecture, in which he proved the probabilistic claim that almost all Collatz orbits attain almost bounded values.
Tao has also resolved or made progress on a number of conjectures. In 2012, Green and Tao announced proofs of the conjectured "orchard-planting problem," which asks for the maximum number of lines through exactly 3 points in a set of n points in the plane, not all on a line. In 2018, with Brad Rodgers, Tao showed that the de Bruijn–Newman constant, the nonpositivity of which is equivalent to the Riemann hypothesis, is nonnegative. In 2020, Tao proved Sendov's conjecture, concerning the locations of the roots and critical points of a complex polynomial, in the special case of polynomials with sufficiently high degree.
Recognition
Tao has won numerous honours and awards over the years. He is a Fellow of the Royal Society, the Australian Academy of Science (Corresponding Member), the National Academy of Sciences (Foreign member), the American Academy of Arts and Sciences, the American Philosophical Society, and the American Mathematical Society. In 2006 he received the Fields Medal; he was the first Australian, the first UCLA faculty member, and one of the youngest mathematicians to receive the award. He was also awarded the MacArthur Fellowship. He has been featured in The New York Times, CNN, USA Today, Popular Science, and many other media outlets. In 2014, Tao received a CTY Distinguished Alumni Honor from the Johns Hopkins Center for Talented Youth, in front of 979 eighth- and ninth-grade attendees of the same program from which Tao graduated. In 2021, President Joe Biden announced Tao had been selected as one of 30 members of his President's Council of Advisors on Science and Technology, a body bringing together America's most distinguished leaders in science and technology. Also in 2021, Tao was honoured during the Riemann Prize Week as the recipient of the inaugural 2019 Riemann Prize, awarded by the Riemann International School of Mathematics at the University of Insubria. Tao was a finalist to become Australian of the Year in 2007.
As of 2022, Tao had published over three hundred articles, along with sixteen books. He has an Erdős number of 2. He is a highly cited researcher.
An article in New Scientist wrote admiringly of his ability, and British mathematician and Fields medalist Timothy Gowers has remarked on Tao's breadth of knowledge.
Research contributions
Dispersive partial differential equations
From 2001 to 2010, Tao was part of a collaboration with James Colliander, Markus Keel, Gigliola Staffilani, and Hideo Takaoka. They found a number of novel results, many to do with the well-posedness of weak solutions, for Schrödinger equations, KdV equations, and KdV-type equations. Michael Christ, Colliander, and Tao developed methods of Carlos Kenig, Gustavo Ponce, and Luis Vega to establish ill-posedness of certain Schrödinger and KdV equations for Sobolev data of sufficiently low exponents. In many cases these results were sharp enough to perfectly complement well-posedness results for sufficiently large exponents due to Bourgain, Colliander−Keel−Staffilani−Takaoka−Tao, and others. Further such notable results for Schrödinger equations were found by Tao in collaboration with Ioan Bejenaru.
A particularly notable result of the Colliander−Keel−Staffilani−Takaoka−Tao collaboration established the long-time existence and scattering theory of a power-law Schrödinger equation in three dimensions. Their methods, which made use of the scale-invariance of the simple power law, were extended by Tao in collaboration with Monica Vișan and Xiaoyi Zhang to deal with nonlinearities in which the scale-invariance is broken. Rowan Killip, Tao, and Vișan later made notable progress on the two-dimensional problem in radial symmetry.
An article by Tao in 2001 considered the wave maps equation with two-dimensional domain and spherical range. He built upon earlier innovations of Daniel Tataru, who considered wave maps valued in Minkowski space. Tao proved the global well-posedness of solutions with sufficiently small initial data. The fundamental difficulty is that Tao considers smallness relative to the critical Sobolev norm, which typically requires sophisticated techniques. Tao later adapted some of his work on wave maps to the setting of the Benjamin–Ono equation; Alexandru Ionescu and Kenig later obtained improved results with Tao's methods.
In 2016, Tao constructed a variant of the Navier–Stokes equations which possess solutions exhibiting irregular behavior in finite time. Due to structural similarities between Tao's system and the Navier–Stokes equations themselves, it follows that any positive resolution of the Navier–Stokes existence and smoothness problem must take into account the specific nonlinear structure of the equations. In particular, certain previously proposed resolutions of the problem could not be legitimate. Tao speculated that the Navier–Stokes equations might be able to simulate a Turing complete system, and that as a consequence it might be possible to (negatively) resolve the existence and smoothness problem using a modification of his results. However, such results remain (as of 2024) conjectural.
Harmonic analysis
Bent Fuglede introduced the Fuglede conjecture in the 1970s, positing a tile-based characterisation of those Euclidean domains for which a Fourier ensemble provides a basis of $L^2$. Tao resolved the conjecture in the negative for dimensions of at least 5, based upon the construction of an elementary counterexample to an analogous problem in the setting of finite groups.
With Camil Muscalu and Christoph Thiele, Tao considered certain multilinear singular integral operators with the multiplier allowed to degenerate on a hyperplane, identifying conditions which ensure operator continuity relative to $L^p$ spaces. This unified and extended earlier notable results of Ronald Coifman, Carlos Kenig, Michael Lacey, Yves Meyer, Elias Stein, and Thiele, among others. Similar problems were analysed by Tao in 2001 in the context of Bourgain spaces, rather than the usual $L^p$ spaces. Such estimates are used in establishing well-posedness results for dispersive partial differential equations, following famous earlier work of Jean Bourgain, Kenig, Gustavo Ponce, and Luis Vega, among others.
A number of Tao's results deal with "restriction" phenomena in Fourier analysis, which have been widely studied since the time of the articles of Charles Fefferman, Robert Strichartz, and Peter Tomas in the 1970s. Here one studies the operation which restricts input functions on Euclidean space to a submanifold and outputs the product of the Fourier transforms of the corresponding measures. It is of major interest to identify exponents such that this operation is continuous relative to $L^p$ spaces. Such multilinear problems originated in the 1990s, including in notable work of Jean Bourgain, Sergiu Klainerman, and Matei Machedon. In collaboration with Ana Vargas and Luis Vega, Tao made some foundational contributions to the study of the bilinear restriction problem, establishing new exponents and drawing connections to the linear restriction problem. They also found analogous results for the bilinear Kakeya problem which is based upon the X-ray transform instead of the Fourier transform. In 2003, Tao adapted ideas developed by Thomas Wolff for bilinear restriction to conical sets into the setting of restriction to quadratic hypersurfaces. The multilinear setting for these problems was further developed by Tao in collaboration with Jonathan Bennett and Anthony Carbery; their work was extensively used by Bourgain and Larry Guth in deriving estimates for general oscillatory integral operators.
Compressed sensing and statistics
In collaboration with Emmanuel Candes and Justin Romberg, Tao has made notable contributions to the field of compressed sensing. In mathematical terms, most of their results identify settings in which a convex optimisation problem correctly computes the solution of an optimisation problem which seems to lack a computationally tractable structure. These problems are of the nature of finding the solution of an underdetermined linear system with the minimal possible number of nonzero entries, referred to as "sparsity". Around the same time, David Donoho considered similar problems from the alternative perspective of high-dimensional geometry.
Motivated by striking numerical experiments, Candes, Romberg, and Tao first studied the case where the matrix is given by the discrete Fourier transform. Candes and Tao abstracted the problem and introduced the notion of a "restricted linear isometry," which is a matrix that is quantitatively close to an isometry when restricted to certain subspaces. They showed that it is sufficient for either exact or optimally approximate recovery of sufficiently sparse solutions. Their proofs, which involved the theory of convex duality, were markedly simplified in collaboration with Romberg, to use only linear algebra and elementary ideas of harmonic analysis. These ideas and results were later improved by Candes. Candes and Tao also considered relaxations of the sparsity condition, such as power-law decay of coefficients. They complemented these results by drawing on a large corpus of past results in random matrix theory to show that, according to the Gaussian ensemble, a large number of matrices satisfy the restricted isometry property.
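A minimal numerical sketch of the kind of recovery problem described above, assuming a random Gaussian measurement matrix and solving the l1 minimisation as a linear program with SciPy; the dimensions, sparsity level, and variable names are illustrative choices rather than anything taken from Candes, Romberg, and Tao's papers:

```python
# Sketch of sparse recovery by l1 minimisation ("basis pursuit"):
#   minimise ||x||_1  subject to  A x = y,
# recast as a linear program via the split x = u - v with u, v >= 0.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 200, 60, 8                       # signal length, measurements, sparsity (illustrative)
x_true = np.zeros(n)
support = rng.choice(n, k, replace=False)
x_true[support] = rng.normal(size=k)

A = rng.normal(size=(m, n)) / np.sqrt(m)   # Gaussian matrices satisfy a restricted isometry with high probability
y = A @ x_true                             # the few linear measurements we observe

c = np.ones(2 * n)                         # objective: sum(u) + sum(v) = ||x||_1
A_eq = np.hstack([A, -A])                  # equality constraint A(u - v) = y
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
x_hat = res.x[:n] - res.x[n:]

print("recovery error:", np.linalg.norm(x_hat - x_true))  # typically near zero in this regime
```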
In 2007, Candes and Tao introduced a novel statistical estimator for linear regression, which they called the "Dantzig selector." They proved a number of results on its success as an estimator and model selector, roughly in parallel to their earlier work on compressed sensing. A number of other authors have since studied the Dantzig selector, comparing it to similar objects such as the statistical lasso introduced in the 1990s. Trevor Hastie, Robert Tibshirani, and Jerome H. Friedman conclude that it is "somewhat unsatisfactory" in a number of cases. Nonetheless, it remains of significant interest in the statistical literature.
In 2009, Candes and Benjamin Recht considered an analogous problem for recovering a matrix from knowledge of only a few of its entries and the information that the matrix is of low rank. They formulated the problem in terms of convex optimisation, studying minimisation of the nuclear norm. Candes and Tao, in 2010, developed further results and techniques for the same problem. Improved results were later found by Recht. Similar problems and results have also been considered by a number of other authors.
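A schematic illustration of the matrix-completion problem in the same spirit; it uses an iterative singular-value soft-thresholding ("soft-impute") heuristic as a stand-in for the exact nuclear-norm programs analysed in these papers, and the matrix size, rank, sampling rate, and threshold are all arbitrary illustrative values:

```python
# Soft-impute sketch: repeatedly re-impose the observed entries, then shrink the
# singular values, driving the iterate toward a low-rank (small nuclear norm) completion.
import numpy as np

rng = np.random.default_rng(1)
n, r = 60, 3
M = rng.normal(size=(n, r)) @ rng.normal(size=(r, n))  # rank-r ground truth
mask = rng.random((n, n)) < 0.4                         # observe roughly 40% of the entries

def shrink_singular_values(X, threshold):
    """Soft-threshold the singular values of X."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - threshold, 0.0)) @ Vt

X = np.zeros((n, n))
for _ in range(300):
    X = shrink_singular_values(np.where(mask, M, X), threshold=1.0)

print("relative error:", np.linalg.norm(X - M) / np.linalg.norm(M))  # small when enough entries are seen
```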
Random matrices
In the 1950s, Eugene Wigner initiated the study of random matrices and their eigenvalues. Wigner studied the case of hermitian and symmetric matrices, proving a "semicircle law" for their eigenvalues. In 2010, Tao and Van Vu made a major contribution to the study of non-symmetric random matrices. They showed that if $n$ is large and the entries of an $n \times n$ matrix $M$ are selected randomly according to any fixed probability distribution of expectation 0 and standard deviation 1, then the eigenvalues of $M$ will tend to be uniformly scattered across the disk of radius $\sqrt{n}$ around the origin; this can be made precise using the language of measure theory. This gave a proof of the long-conjectured circular law, which had previously been proved in weaker formulations by many other authors. In Tao and Vu's formulation, the circular law becomes an immediate consequence of a "universality principle" stating that the distribution of the eigenvalues can depend only on the average and standard deviation of the given component-by-component probability distribution, thereby providing a reduction of the general circular law to a calculation for specially-chosen probability distributions.
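A quick numerical illustration of the circular law itself (not, of course, of Tao and Vu's proof); the matrix size and the particular uniform entry distribution are arbitrary choices, the point being that any entry distribution with mean 0 and variance 1 should produce the same limiting picture:

```python
# Eigenvalues of an n x n matrix with iid mean-0, variance-1 entries, rescaled by
# sqrt(n), fill out the unit disk roughly uniformly as n grows (the circular law).
import numpy as np

rng = np.random.default_rng(2)
n = 1000
entries = rng.uniform(-np.sqrt(3), np.sqrt(3), size=(n, n))  # uniform entries with variance 1
eigs = np.linalg.eigvals(entries) / np.sqrt(n)

print("largest |eigenvalue|:", np.abs(eigs).max())                        # close to 1
print("fraction with |eigenvalue| <= 0.5:", np.mean(np.abs(eigs) <= 0.5)) # close to 0.25 (area ratio)
```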
In 2011, Tao and Vu established a "four moment theorem", which applies to random hermitian matrices whose components are independently distributed, each with average 0 and standard deviation 1, and which are exponentially unlikely to be large (as for a Gaussian distribution). If one considers two such random matrices which agree on the average value of any quadratic polynomial in the diagonal entries and on the average value of any quartic polynomial in the off-diagonal entries, then Tao and Vu show that the expected value of a large number of functions of the eigenvalues will also coincide, up to an error which is uniformly controllable by the size of the matrix and which becomes arbitrarily small as the size of the matrix increases. Similar results were obtained around the same time by László Erdös, Horng-Tzer Yau, and Jun Yin.
Analytic number theory and arithmetic combinatorics
In 2004, Tao, together with Jean Bourgain and Nets Katz, studied the additive and multiplicative structure of subsets of finite fields of prime order. It is well known that there are no nontrivial subrings of such a field. Bourgain, Katz, and Tao provided a quantitative formulation of this fact, showing that for any subset of such a field, the number of sums and products of elements of the subset must be quantitatively large, as compared to the size of the field and the size of the subset itself. Improvements of their result were later given by Bourgain, Alexey Glibichuk, and Sergei Konyagin.
Tao and Ben Green proved the existence of arbitrarily long arithmetic progressions in the prime numbers; this result is generally referred to as the Green–Tao theorem, and is among Tao's most well-known results. The source of Green and Tao's arithmetic progressions is Endre Szemerédi's 1975 theorem on existence of arithmetic progressions in certain sets of integers. Green and Tao showed that one can use a "transference principle" to extend the validity of Szemerédi's theorem to further sets of integers. The Green–Tao theorem then arises as a special case, although it is not trivial to show that the prime numbers satisfy the conditions of Green and Tao's extension of the Szemerédi theorem.
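The Green–Tao theorem is a pure existence statement; the brute-force search below merely exhibits a small instance of the objects it concerns — an arithmetic progression of primes of a chosen length — with an arbitrary search bound and SymPy's primality test assumed available:

```python
# Find one arithmetic progression of `length` primes by brute force.
from sympy import isprime

def prime_arithmetic_progression(length, search_bound=1000):
    """Return the first progression found as a list, or None within the bound."""
    for start in range(2, search_bound):
        if not isprime(start):
            continue
        for step in range(1, search_bound):
            if all(isprime(start + i * step) for i in range(length)):
                return [start + i * step for i in range(length)]
    return None

print(prime_arithmetic_progression(6))  # e.g. [7, 37, 67, 97, 127, 157]
```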
In 2010, Green and Tao gave a multilinear extension of Dirichlet's celebrated theorem on arithmetic progressions. Given a $k \times n$ matrix $A$ and a $k \times 1$ matrix $b$ whose components are all integers, Green and Tao give conditions on when there exist infinitely many $n \times 1$ matrices $x$ such that all components of $Ax + b$ are prime numbers. The proof of Green and Tao was incomplete, as it was conditioned upon unproven conjectures. Those conjectures were proved in later work of Green, Tao, and Tamar Ziegler.
Notable awards
Terence Tao has won numerous awards for his work. He received the Fields Medal, widely regarded as the highest honour in mathematics, in 2006.
1999 – Packard Fellowship
2000 – Salem Prize for:
"his work in harmonic analysis and on related questions in geometric measure theory and partial differential equations."
2002 – Bôcher Memorial Prize for:
Global regularity of wave maps I. Small critical Sobolev norm in high dimensions. Internat. Math. Res. Notices (2001), no. 6, 299–328.
Global regularity of wave maps II. Small energy in two dimensions. Comm. Math. Phys. 224 (2001), no. 2, 443–544.
in addition to "his remarkable series of papers, written in collaboration with J. Colliander, M. Keel, G. Staffilani, and H. Takaoka, on global regularity in optimal Sobolev spaces for KdV and other equations, as well as his many deep contributions to Strichartz and bilinear estimates."
2003 – Clay Research Award for:
his restriction theorems in Fourier analysis, his work on wave maps, his global existence theorems for KdV-type equations, and for his solution with Allen Knutson of Horn's conjecture
2005 – Australian Mathematical Society Medal
2005 – Ostrowski Prize (with Ben Green) for:
"their exceptional achievements in the area of analytic and combinatorial number theory"
2005 – Levi L. Conant Prize (with Allen Knutson) for:
their expository article "Honeycombs and Sums of Hermitian Matrices" (Notices of the AMS. 48 (2001), 175–186.)
2006 – Fields Medal for:
"his contributions to partial differential equations, combinatorics, harmonic analysis and additive number theory"
2006 – MacArthur Award
2006 – SASTRA Ramanujan Prize
2006 – Sloan Fellowship
2007 – Fellow of the Royal Society
2008 – Alan T. Waterman Award for:
"his surprising and original contributions to many fields of mathematics, including number theory, differential equations, algebra, and harmonic analysis"
2008 – Onsager Medal for:
"his combination of mathematical depth, width and volume in a manner unprecedented in contemporary mathematics". His Lars Onsager lecture was entitled "Structure and randomness in the prime numbers" at NTNU, Norway.
2009 – Inducted into the American Academy of Arts and Sciences
2010 – King Faisal International Prize
2010 – Nemmers Prize in Mathematics
2010 – Polya Prize (with Emmanuel Candès)
2012 – Crafoord Prize
2012 – Simons Investigator
2014 – Breakthrough Prize in Mathematics
"For numerous breakthrough contributions to harmonic analysis, combinatorics, partial differential equations and analytic number theory."
2014 – Royal Medal
2015 – PROSE award in the category of "Mathematics" for:
"Hilbert's Fifth Problem and Related Topics"
2019 – Riemann Prize
2019 – The Carnegie Corporation of New York honored Tao with its 2019 Great Immigrants Award.
2020 – Princess of Asturias Award for Technical and Scientific Research, with Emmanuel Candès, for their work on compressed sensing
2020 – Bolyai Prize
2021 – IEEE Jack S. Kilby Signal Processing Medal
2022 – Global Australian of the Year (Advance Global Australians; Advance.org)
2022 – Grande Médaille
2023 – Alexanderson Award (with Kaisa Matomäki, Maksym Radziwiłł, Joni Teräväinen, and Tamar Ziegler) for:
Higher uniformity of bounded multiplicative functions in short intervals on average. Annals of Mathematics, Second Series (2023), 197(2): 739–857.
Major publications
Textbooks
Research articles
Notes
See also
Cramer conjecture
Erdős discrepancy problem
Goldbach's weak conjecture
Inscribed square problem
References
External links
Terence Tao's home page
Tao's research blog
Tao's MathOverflow page
Terence Tao's entry in the Numericana Hall of Fame
1975 births
21st-century American male writers
21st-century American mathematicians
21st-century Australian mathematicians
21st-century science writers
Additive combinatorialists
American male bloggers
American people of Chinese descent
American people of Hong Kong descent
American science writers
American textbook writers
Australian emigrants to the United States
Australian male bloggers
Australian people of Chinese descent
Australian people of Hong Kong descent
Australian science writers
Australian textbook writers
Clay Research Award recipients
Educators from California
Fellows of the American Academy of Arts and Sciences
Fellows of the American Mathematical Society
Fellows of the Australian Academy of Science
Fellows of the Royal Society
Fields Medalists
Flinders University alumni
Foreign associates of the National Academy of Sciences
Harmonic analysis
International Mathematical Olympiad participants
Living people
MacArthur Fellows
Mathematical analysts
Mathematicians from California
Number theorists
PDE theorists
Princeton University alumni
Recipients of the SASTRA Ramanujan Prize
Science bloggers
Scientists from Adelaide
Scientists from Los Angeles
Simons Investigator
Sloan Research Fellows
University of California, Los Angeles faculty
Writers from Los Angeles | Terence Tao | Mathematics | 4,794 |
33,691,189 | https://en.wikipedia.org/wiki/Fr%C3%A9chet%E2%80%93Kolmogorov%20theorem | In functional analysis, the Fréchet–Kolmogorov theorem (the names of Riesz or Weil are sometimes added as well) gives a necessary and sufficient condition for a set of functions to be relatively compact in an Lp space. It can be thought of as an Lp version of the Arzelà–Ascoli theorem, from which it can be deduced. The theorem is named after Maurice René Fréchet and Andrey Kolmogorov.
Statement
Let $B$ be a subset of $L^p(\mathbb{R}^n)$ with $p \in [1, \infty)$, and let $\tau_h f$ denote the translation of $f$ by $h$, that is, $\tau_h f(x) = f(x + h)$.
The subset $B$ is relatively compact if and only if the following properties hold:
(Equicontinuous) $\|\tau_h f - f\|_{L^p(\mathbb{R}^n)} \to 0$ as $|h| \to 0$, uniformly on $B$.
(Equitight) $\int_{|x| > r} |f(x)|^p \, dx \to 0$ as $r \to \infty$, uniformly on $B$.
The first property can be stated as follows: for every $\varepsilon > 0$ there exists $\delta > 0$ such that $\|\tau_h f - f\|_{L^p(\mathbb{R}^n)} < \varepsilon$ for every $f \in B$ and every $h$ with $|h| < \delta$.
Usually, the Fréchet–Kolmogorov theorem is formulated with the extra assumption that $B$ is bounded (i.e., $\|f\|_{L^p(\mathbb{R}^n)} \le M$ for some constant $M$ and all $f \in B$). However, it has been shown that equitightness and equicontinuity imply this property.
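The two conditions can also be combined into a single displayed criterion. The following LaTeX fragment is a sketch of a standard formulation — it restates the statement above rather than adding to it, and it assumes an amsthm theorem environment:

\begin{theorem}[Fréchet--Kolmogorov]
Let $1 \le p < \infty$ and $B \subset L^p(\mathbb{R}^n)$. Then $B$ is relatively compact in $L^p(\mathbb{R}^n)$ if and only if
\[
  \lim_{|h| \to 0} \, \sup_{f \in B} \, \lVert \tau_h f - f \rVert_{L^p(\mathbb{R}^n)} = 0
  \qquad \text{and} \qquad
  \lim_{r \to \infty} \, \sup_{f \in B} \int_{|x| > r} \lvert f(x) \rvert^p \, dx = 0 .
\]
\end{theorem}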
Special case
For a subset $B$ of $L^p(\Omega)$, where $\Omega$ is a bounded subset of $\mathbb{R}^n$, the condition of equitightness is not needed. Hence, a necessary and sufficient condition for $B$ to be relatively compact is that the property of equicontinuity holds. However, this property must be interpreted with care, as the example below shows.
Examples
Existence of solutions of a PDE
Let be a sequence of solutions of the viscous Burgers equation posed in :
with smooth enough. If the solutions enjoy the -contraction and -bound properties, we will show existence of solutions of the inviscid Burgers equation
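The two equations referred to above lost their markup in extraction; for reference, the standard forms of the viscous and inviscid Burgers equations are (with $\varepsilon > 0$ the viscosity parameter and $u_\varepsilon$, $u$ the respective solutions — names assumed here for illustration):

\[
  \partial_t u_\varepsilon + \partial_x\!\left(\tfrac{1}{2}\, u_\varepsilon^{2}\right) = \varepsilon\, \partial_{xx} u_\varepsilon ,
  \qquad
  \partial_t u + \partial_x\!\left(\tfrac{1}{2}\, u^{2}\right) = 0 .
\]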
The first property can be stated as follows: If are solutions of the Burgers equation with as initial data, then
The second property simply means that .
Now, let be any compact set, and define
where is on the set and 0 otherwise. Automatically, since
Equicontinuity is a consequence of the -contraction since is a solution of the Burgers equation with as initial data and since the -bound holds: We have that
We continue by considering
The first term on the right-hand side satisfies
by a change of variable and the -contraction. The second term satisfies
by a change of variable and the -bound. Moreover,
Both terms can be estimated as before when noticing that the time equicontinuity follows again by the -contraction. The continuity of the translation mapping in then gives equicontinuity uniformly on .
Equitightness holds by definition of by taking big enough.
Hence, is relatively compact in , and then there is a convergent subsequence of in . By a covering argument, the last convergence is in .
To conclude existence, it remains to check that the limit function, as , of a subsequence of satisfies
See also
Arzelà–Ascoli theorem
Helly's selection theorem
Rellich–Kondrachov theorem
References
Literature
Theorems in functional analysis
Compactness theorems | Fréchet–Kolmogorov theorem | Mathematics | 607 |
31,061,495 | https://en.wikipedia.org/wiki/Amino%20acid%20response | Amino acid response is the mechanism triggered in mammalian cells by amino acid starvation.
The amino acid response pathway is triggered by shortage of any essential amino acid, and results in an increase in activating transcription factor ATF4, which in turn affects many processes by sundry pathways to limit or increase the production of other proteins.
Essential amino acids are crucial for maintaining homeostasis within an organism. Diet plays an important role in the health of an organism, as evidence ranging from human epidemiological studies to experimental data in model organisms suggests that diet-dependent pathways affect a variety of adult stem cells.
Amino acid response pathway
Amino acid deficiency detection
At low amino acid concentrations, GCN2 is activated by the increased level of uncharged tRNA molecules. Uncharged tRNA activates GCN2 by displacing the protein kinase moiety from a bipartite tRNA-binding domain. Activated GCN2 phosphorylates itself and eIF2α, which triggers a transcriptional and translational response that restores amino acid homeostasis by affecting the utilization, acquisition, and mobilization of amino acids in the organism.
Increased synthesis of ATF4
In homeostasis, eIF2 combines with guanosine triphosphate (GTP) to initiate translation of mRNA, a process accompanied by hydrolysis of GTP to GDP; the GDP must then be exchanged back to GTP before the cycle can start again. During an essential amino acid shortage, however, eIF2α is phosphorylated (P-eIF2α) and binds tightly to eIF2B, preventing GDP from being exchanged back to GTP, so that fewer mRNAs are initiated and fewer proteins are synthesized. This response nevertheless increases translation of some mRNAs, including that of ATF4, which regulates the transcription of other genes.
Proteins increased by the amino acid response
Some of the proteins whose concentration is increased by the amino acid response include:
Membrane transporters
Transcription factors from the basic region/leucine zipper (bZIP) superfamily
Growth factors
Metabolic enzymes
Leucine starvation
Starvation induces the lysosomal retention of leucine, a process that requires RAG-GTPases and the lysosomal protein complex regulator. PCAF is recruited specifically to the CHOP amino acid response element, where it enhances the transcriptional activity of ATF4.
References
Transcription factors | Amino acid response | Chemistry,Biology | 465 |
533,867 | https://en.wikipedia.org/wiki/Backup | In information technology, a backup, or data backup is a copy of computer data taken and stored elsewhere so that it may be used to restore the original after a data loss event. The verb form, referring to the process of doing so, is "back up", whereas the noun and adjective form is "backup". Backups can be used to recover data after its loss from data deletion or corruption, or to recover data from an earlier time. Backups provide a simple form of IT disaster recovery; however not all backup systems are able to reconstitute a computer system or other complex configuration such as a computer cluster, active directory server, or database server.
A backup system contains at least one copy of all data considered worth saving. The data storage requirements can be large. An information repository model may be used to provide structure to this storage. There are different types of data storage devices used for copying backups of data that is already in secondary storage onto archive files. There are also different ways these devices can be arranged to provide geographic dispersion, data security, and portability.
Data is selected, extracted, and manipulated for storage. The process can include methods for dealing with live data, including open files, as well as compression, encryption, and de-duplication. Additional techniques apply to enterprise client-server backup. Backup schemes may include dry runs that validate the reliability of the data being backed up. There are limitations and human factors involved in any backup scheme.
Storage
A backup strategy requires an information repository, "a secondary storage space for data" that aggregates backups of data "sources". The repository could be as simple as a list of all backup media (DVDs, etc.) and the dates produced, or could include a computerized index, catalog, or relational database.
The backup data needs to be stored, requiring a backup rotation scheme, which is a system of backing up data to computer media that limits the number of backups of different dates retained separately, by appropriate re-use of the data storage media by overwriting of backups no longer needed. The scheme determines how and when each piece of removable storage is used for a backup operation and how long it is retained once it has backup data stored on it. The 3-2-1 rule can aid in the backup process. It states that there should be at least 3 copies of the data, stored on 2 different types of storage media, and one copy should be kept offsite, in a remote location (this can include cloud storage). Using 2 or more different types of media reduces the chance of losing all copies to the same kind of failure (for example, optical discs may tolerate being underwater while LTO tapes may not, and SSDs cannot fail due to head crashes or damaged spindle motors since they do not have any moving parts, unlike hard drives). An offsite copy protects against fire, theft of physical media (such as tapes or discs) and natural disasters like floods and earthquakes. Physically protected hard drives are an alternative to an offsite copy, but they have limitations such as only being able to resist fire for a limited period of time, so an offsite copy still remains the ideal choice.
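As an illustration of the 3-2-1 rule described above, the short Python sketch below checks a list of backup copies against the rule; the copy descriptions and field names are hypothetical and not part of any particular backup product.

from dataclasses import dataclass

@dataclass
class Copy:
    medium: str      # e.g. "hdd", "lto_tape", "optical", "cloud"
    offsite: bool    # stored away from the primary site?

def satisfies_3_2_1(copies: list[Copy]) -> bool:
    """Return True if the copies meet the 3-2-1 rule: at least 3 copies,
    on at least 2 distinct media types, with at least 1 copy kept offsite."""
    enough_copies = len(copies) >= 3
    enough_media = len({c.medium for c in copies}) >= 2
    has_offsite = any(c.offsite for c in copies)
    return enough_copies and enough_media and has_offsite

# Example: primary data plus a local disk copy and a cloud copy.
plan = [Copy("hdd", False), Copy("hdd", False), Copy("cloud", True)]
print(satisfies_3_2_1(plan))  # True: 3 copies, 2 media types, 1 offsite copy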
Because there is no perfect storage, many backup experts recommend maintaining a second copy on a local physical device, even if the data is also backed up offsite.
Backup methods
Unstructured
An unstructured repository may simply be a stack of tapes, DVD-Rs or external HDDs with minimal information about what was backed up and when. This method is the easiest to implement, but unlikely to achieve a high level of recoverability as it lacks automation.
Full only/System imaging
A repository using this backup method contains complete source data copies taken at one or more specific points in time. Copying system images, this method is frequently used by computer technicians to record known good configurations. However, imaging is generally more useful as a way of deploying a standard configuration to many systems rather than as a tool for making ongoing backups of diverse systems.
Incremental
An incremental backup stores data changed since a reference point in time. Duplicate copies of unchanged data are not copied. Typically a full backup of all files is made once or at infrequent intervals, serving as the reference point for an incremental repository. Subsequently, a number of incremental backups are made after successive time periods. Restores begin with the last full backup and then apply the incrementals.
Some backup systems can create a synthetic full backup from a series of incrementals, thus providing the equivalent of frequently doing a full backup. When done to modify a single archive file, this speeds restores of recent versions of files.
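A minimal sketch of how a restore (or a synthetic full backup) can be assembled from a full backup plus a chain of incrementals is shown below; the dictionary-based representation of backup sets is purely illustrative and not tied to any real backup format.

def synthesize_full(full: dict, incrementals: list) -> dict:
    """Apply incremental backups, oldest first, on top of the full backup.
    Each backup set maps file path -> file contents; a value of None in an
    incremental marks a file deleted since the previous backup."""
    state = dict(full)
    for inc in incrementals:
        for path, contents in inc.items():
            if contents is None:
                state.pop(path, None)   # file was deleted
            else:
                state[path] = contents  # file was added or changed
    return state

full = {"a.txt": "v1", "b.txt": "v1"}
incs = [{"a.txt": "v2"}, {"b.txt": None, "c.txt": "v1"}]
print(synthesize_full(full, incs))  # {'a.txt': 'v2', 'c.txt': 'v1'}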
Near-CDP
Continuous Data Protection (CDP) refers to a backup that instantly saves a copy of every change made to the data. This allows restoration of data to any point in time and is the most comprehensive and advanced data protection. Near-CDP backup applications—often marketed as "CDP"—automatically take incremental backups at a specific interval, for example every 15 minutes, one hour, or 24 hours. They can therefore only allow restores to an interval boundary. Near-CDP backup applications use journaling and are typically based on periodic "snapshots", read-only copies of the data frozen at a particular point in time.
Near-CDP (except for Apple Time Machine) intent-logs every change on the host system, often by saving byte or block-level differences rather than file-level differences. This backup method differs from simple disk mirroring in that it enables a roll-back of the log and thus a restoration of old images of data. Intent-logging allows precautions for the consistency of live data, protecting self-consistent files but requiring applications "be quiesced and made ready for backup."
Near-CDP is more practicable for ordinary personal backup applications, as opposed to true CDP, which must be run in conjunction with a virtual machine or equivalent and is therefore generally used in enterprise client-server backups.
Software may create copies of individual files such as written documents, multimedia projects, or user preferences, to prevent failed write events caused by power outages, operating system crashes, or exhausted disk space, from causing data loss. A common implementation is an appended ".bak" extension to the file name.
Reverse incremental
A Reverse incremental backup method stores a recent archive file "mirror" of the source data and a series of differences between the "mirror" in its current state and its previous states. A reverse incremental backup method starts with a non-image full backup. After the full backup is performed, the system periodically synchronizes the full backup with the live copy, while storing the data necessary to reconstruct older versions. This can either be done using hard links—as Apple Time Machine does, or using binary diffs.
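The hard-link technique mentioned above can be sketched in a few lines of Python: unchanged files in the new "mirror" are hard-linked to the previous one, so only changed files consume additional space. The paths and the change test (size plus modification time) are illustrative assumptions; a real tool would also handle subdirectories, metadata, and errors.

import os, shutil

def snapshot(source: str, prev_snap: str, new_snap: str) -> None:
    """Create new_snap as a mirror of source, hard-linking files that are
    unchanged relative to prev_snap and copying those that differ."""
    os.makedirs(new_snap, exist_ok=True)
    for name in os.listdir(source):
        src = os.path.join(source, name)
        if not os.path.isfile(src):
            continue  # subdirectories are omitted in this sketch
        old = os.path.join(prev_snap, name)
        dst = os.path.join(new_snap, name)
        unchanged = (
            os.path.isfile(old)
            and os.path.getsize(old) == os.path.getsize(src)
            and int(os.path.getmtime(old)) == int(os.path.getmtime(src))
        )
        if unchanged:
            os.link(old, dst)       # unchanged: share the existing blocks via a hard link
        else:
            shutil.copy2(src, dst)  # changed or new: store a fresh copy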
Differential
A differential backup saves only the data that has changed since the last full backup. This means a maximum of two backups from the repository are used to restore the data. However, as time from the last full backup (and thus the accumulated changes in data) increases, so does the time to perform the differential backup. Restoring an entire system requires starting from the most recent full backup and then applying just the last differential backup.
A differential backup copies files that have been created or changed since the last full backup, regardless of whether any other differential backups have been made since, whereas an incremental backup copies files that have been created or changed since the most recent backup of any type (full or incremental). Changes in files may be detected through a more recent date/time of last modification file attribute, and/or changes in file size. Other variations of incremental backup include multi-level incrementals and block-level incrementals that compare parts of files instead of just entire files.
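The selection logic described in this section — a differential backup copies everything changed since the last full backup, an incremental backup everything changed since the most recent backup of any type — can be expressed with a simple modification-time test. The sketch below is illustrative only; real products also use archive bits, file sizes, or block-level comparison.

import os

def changed_since(root: str, reference_time: float) -> list:
    """Return paths under root modified after reference_time.
    For a differential backup, pass the time of the last full backup;
    for an incremental backup, pass the time of the most recent backup
    of any type (full or incremental)."""
    selected = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) > reference_time:
                selected.append(path)
    return selected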
Storage media
Regardless of the repository model that is used, the data has to be copied onto an archive file data storage medium. The medium used is also referred to as the type of backup destination.
Magnetic tape
Magnetic tape was for a long time the most commonly used medium for bulk data storage, backup, archiving, and interchange. It was previously a less expensive option, but this is no longer the case for smaller amounts of data. Tape is a sequential access medium, so the rate of continuously writing or reading data can be very fast. While tape media itself has a low cost per space, tape drives are typically dozens of times as expensive as hard disk drives and optical drives.
Many tape formats have been proprietary or specific to certain markets like mainframes or a particular brand of personal computer. By 2014 LTO had become the primary tape technology. The other remaining viable "super" format is the IBM 3592 (also referred to as the TS11xx series). The Oracle StorageTek T10000 was discontinued in 2016.
Hard disk
The use of hard disk storage has increased over time as it has become progressively cheaper. Hard disks are usually easy to use, widely available, and can be accessed quickly. However, hard disk backups are close-tolerance mechanical devices and may be more easily damaged than tapes, especially while being transported. In the mid-2000s, several drive manufacturers began to produce portable drives employing ramp loading and accelerometer technology (sometimes termed a "shock sensor"), and by 2010 the industry average in drop tests for drives with that technology showed drives remaining intact and working after a 36-inch non-operating drop onto industrial carpeting. Some manufacturers also offer 'ruggedized' portable hard drives, which include a shock-absorbing case around the hard disk, and claim a range of higher drop specifications. Over a period of years, hard disk backups remain stable for a shorter time than tape backups.
External hard disks can be connected via local interfaces like SCSI, USB, FireWire, or eSATA, or via longer-distance technologies like Ethernet, iSCSI, or Fibre Channel. Some disk-based backup systems, via Virtual Tape Libraries or otherwise, support data deduplication, which can reduce the amount of disk storage capacity consumed by daily and weekly backup data.
Optical storage
Optical storage uses lasers to store and retrieve data. Recordable CDs, DVDs, and Blu-ray Discs are commonly used with personal computers and are generally cheap. The capacities and speeds of these discs have typically been lower than those of hard disks or tapes. Advances in optical media may shrink that gap in the future.
Potential future data losses caused by gradual media degradation can be predicted by measuring the rate of correctable minor data errors; too many consecutive errors increase the risk of uncorrectable sectors. Support for error scanning varies among optical drive vendors.
Many optical disc formats are WORM type, which makes them useful for archival purposes since the data cannot be changed in any way, whether by user error or by malware such as ransomware. Moreover, optical discs are not vulnerable to head crashes, magnetism, imminent water ingress or power surges, and a fault in the drive typically just halts the spinning.
Optical media are modular; the storage controller is not tied to the medium itself, as it is with hard drives or flash storage (see flash memory controller), so a disc can be removed and read in a different drive. However, recordable media may degrade earlier under long-term exposure to light.
Some optical storage systems allow for cataloged data backups without human contact with the discs, which helps preserve data integrity for longer. A French study in 2008 indicated that the lifespan of typically-sold CD-Rs was 2–10 years, but one manufacturer later estimated the longevity of its CD-Rs with a gold-sputtered layer to be as high as 100 years. As of 2016, Sony's proprietary Optical Disc Archive could reach a read rate of 250 MB/s.
Solid-state drive
Solid-state drives (SSDs) use integrated circuit assemblies to store data. Flash memory, thumb drives, USB flash drives, CompactFlash, SmartMedia, Memory Sticks, and Secure Digital card devices are relatively expensive for their low capacity, but convenient for backing up relatively low data volumes. A solid-state drive does not contain any movable parts, making it less susceptible to physical damage, and can have huge throughput of around 500 Mbit/s up to 6 Gbit/s. Available SSDs have become more capacious and cheaper. Flash memory backups are stable for fewer years than hard disk backups.
Remote backup service
Remote backup services or cloud backups involve service providers storing data offsite. This has been used to protect against events such as fires, floods, or earthquakes which could destroy locally stored backups. Cloud-based backup (through services such as Google Drive and Microsoft OneDrive) provides a layer of data protection. However, the users must trust the provider to maintain the privacy and integrity of their data, with confidentiality enhanced by the use of encryption. Because speed and availability are limited by a user's online connection, users with large amounts of data may need to use cloud seeding and large-scale recovery.
Management
Various methods can be used to manage backup media, striking a balance between accessibility, security and cost. These media management methods are not mutually exclusive and are frequently combined to meet the user's needs. Using on-line disks for staging data before it is sent to a near-line tape library is a common example.
Online
Online backup storage is typically the most accessible type of data storage, and can begin a restore in milliseconds. An internal hard disk or a disk array (possibly connected to a SAN) is an example of an online backup. This type of storage is convenient and speedy, but is vulnerable to being deleted or overwritten, either by accident, by malevolent action, or in the wake of a data-deleting virus payload.
Near-line
Nearline storage is typically less accessible and less expensive than online storage, but still useful for backup data storage. A mechanical device is usually used to move media units from storage into a drive where the data can be read or written. Generally it has safety properties similar to on-line storage. An example is a tape library with restore times ranging from seconds to a few minutes.
Off-line
Off-line storage requires some direct action to provide access to the storage media: for example, inserting a tape into a tape drive or plugging in a cable. Because the data are not accessible via any computer except during the limited periods in which they are written or read back, they are largely immune to on-line backup failure modes. Access time varies depending on whether the media are on-site or off-site.
Off-site data protection
Backup media may be sent to an off-site vault to protect against a disaster or other site-specific problem. The vault can be as simple as a system administrator's home office or as sophisticated as a disaster-hardened, temperature-controlled, high-security bunker with facilities for backup media storage. A data replica can be off-site but also on-line (e.g., an off-site RAID mirror).
Backup site
A backup site or disaster recovery center is used to store data that can enable computer systems and networks to be restored and properly configured in the event of a disaster. Some organisations have their own data recovery centres, while others contract this out to a third party. Due to high costs, backing up is rarely considered the preferred method of moving data to a DR site. A more typical way would be remote disk mirroring, which keeps the DR data as up to date as possible.
Selection and extraction of data
A backup operation starts with selecting and extracting coherent units of data. Most data on modern computer systems is stored in discrete units, known as files. These files are organized into filesystems. Deciding what to back up at any given time involves tradeoffs. By backing up too much redundant data, the information repository will fill up too quickly. Backing up an insufficient amount of data can eventually lead to the loss of critical information.
Files
Copying files: Making copies of files is the simplest and most common way to perform a backup. A means to perform this basic function is included in all backup software and all operating systems.
Partial file copying: A backup may include only the blocks or bytes within a file that have changed in a given period of time. This can substantially reduce needed storage space, but requires higher sophistication to reconstruct files in a restore situation. Some implementations require integration with the source file system; a block-level hashing sketch is given after this list.
Deleted files: To prevent the unintentional restoration of files that have been intentionally deleted, a record of the deletion must be kept.
Versioning of files: Most backup applications, other than those that do only full only/System imaging, also back up files that have been modified since the last backup. "That way, you can retrieve many different versions of a given file, and if you delete it on your hard disk, you can still find it in your [information repository] archive."
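A rough illustration of the partial (block-level) file copying idea from the list above: split a file into fixed-size blocks, hash each block, and store only the blocks whose hashes changed since the previous run. The block size and the in-memory index are arbitrary choices made for this sketch, not features of any particular backup product.

import hashlib

BLOCK = 64 * 1024  # 64 KiB blocks; the size is an arbitrary choice for illustration

def changed_blocks(path: str, previous_hashes: list) -> dict:
    """Return {block_index: block_bytes} for blocks whose SHA-256 digest
    differs from previous_hashes (the digests recorded on the last run)."""
    changed = {}
    with open(path, "rb") as f:
        index = 0
        while True:
            block = f.read(BLOCK)
            if not block:
                break
            digest = hashlib.sha256(block).hexdigest()
            if index >= len(previous_hashes) or previous_hashes[index] != digest:
                changed[index] = block  # new or modified block: keep it
            index += 1
    return changed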
Filesystems
Filesystem dump: A copy of the whole filesystem can be made at the block level. This is also known as a "raw partition backup" and is related to disk imaging. The process usually involves unmounting the filesystem and running a program like dd (Unix); a minimal invocation sketch is given after this list. Because the disk is read sequentially and with large buffers, this type of backup can be faster than reading every file normally, especially when the filesystem contains many small files, is highly fragmented, or is nearly full. But because this method also reads the free disk blocks that contain no useful data, it can also be slower than conventional reading, especially when the filesystem is nearly empty. Some filesystems, such as XFS, provide a "dump" utility that reads the disk sequentially for high performance while skipping unused sections. The corresponding restore utility can selectively restore individual files or the entire volume at the operator's choice.
Identification of changes: Some filesystems have an archive bit for each file that says it was recently changed. Some backup software looks at the date of the file and compares it with the last backup to determine whether the file was changed.
Versioning file system: A versioning filesystem tracks all changes to a file. The NILFS versioning filesystem for Linux is an example.
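For the "filesystem dump" item above, the usual tool on Unix-like systems is dd; the Python sketch below simply drives it with the standard subprocess module. The device and image paths are placeholders, GNU dd is assumed, and the filesystem should be unmounted (or frozen) first, as noted above.

import subprocess

def raw_partition_backup(device: str, image_path: str) -> None:
    """Copy an entire block device to an image file using dd.
    Hypothetical usage: raw_partition_backup("/dev/sda1", "/backup/sda1.img")."""
    subprocess.run(
        ["dd", f"if={device}", f"of={image_path}", "bs=4M"],
        check=True,  # raise an exception if dd exits with a non-zero status
    )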
Live data
Files that are actively being updated present a challenge to back up. One way to back up live data is to temporarily quiesce them (e.g., close all files), take a "snapshot", and then resume live operations. At this point the snapshot can be backed up through normal methods. A snapshot is an instantaneous function of some filesystems that presents a copy of the filesystem as if it were frozen at a specific point in time, often by a copy-on-write mechanism. Snapshotting a file while it is being changed results in a corrupted file that is unusable. This is also the case across interrelated files, as may be found in a conventional database or in applications such as Microsoft Exchange Server. The term fuzzy backup can be used to describe a backup of live data that looks like it ran correctly, but does not represent the state of the data at a single point in time.
Backup options for data files that cannot be or are not quiesced include:
Open file backup: Many backup software applications undertake to back up open files in an internally consistent state. Some applications simply check whether open files are in use and try again later. Other applications exclude open files that are updated very frequently. Some low-availability interactive applications can be backed up via natural/induced pausing.
Interrelated database files backup: Some interrelated database file systems offer a means to generate a "hot backup" of the database while it is online and usable. This may include a snapshot of the data files plus a snapshotted log of changes made while the backup is running. Upon a restore, the changes in the log files are applied to bring the copy of the database up to the point in time at which the initial backup ended. Other low-availability interactive applications can be backed up via coordinated snapshots. However, genuinely high-availability interactive applications can only be backed up via Continuous Data Protection.
Metadata
Not all information stored on the computer is stored in files. Accurately recovering a complete system from scratch requires keeping track of this non-file data too.
System description: System specifications are needed to procure an exact replacement after a disaster.
Boot sector: The boot sector can sometimes be recreated more easily than it can be saved. It usually isn't a normal file and the system won't boot without it.
Partition layout: The layout of the original disk, as well as partition tables and filesystem settings, is needed to properly recreate the original system.
File metadata: Each file's permissions, owner, group, ACLs, and any other metadata need to be backed up for a restore to properly recreate the original environment.
System metadata: Different operating systems have different ways of storing configuration information. Microsoft Windows keeps a registry of system information that is more difficult to restore than a typical file.
Manipulation of data and dataset optimization
It is frequently useful or required to manipulate the data being backed up to optimize the backup process. These manipulations can improve backup speed, restore speed, data security, media usage and/or reduced bandwidth requirements.
Automated data grooming
Out-of-date data can be automatically deleted, but for personal backup applications—as opposed to enterprise client-server backup applications where automated data "grooming" can be customized—the deletion can at most be globally delayed or be disabled.
Compression
Various schemes can be employed to shrink the size of the source data to be stored so that it uses less storage space. Compression is frequently a built-in feature of tape drive hardware.
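Compression is often handled by the backup tool or the tape drive itself; in software, Python's standard tarfile module can produce a gzip-compressed archive in a couple of lines. The paths below are placeholders, and this is only a minimal sketch of the idea.

import tarfile

def compressed_archive(source_dir: str, archive_path: str) -> None:
    """Write source_dir into a gzip-compressed tar archive."""
    with tarfile.open(archive_path, "w:gz") as tar:  # "w:gz" enables gzip compression
        tar.add(source_dir, arcname=".")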
Deduplication
Redundancy due to backing up similarly configured workstations can be reduced, thus storing just one copy. This technique can be applied at the file or raw block level. This potentially large reduction is called deduplication. It can occur on a server before any data moves to backup media, sometimes referred to as source/client side deduplication. This approach also reduces bandwidth required to send backup data to its target media. The process can also occur at the target storage device, sometimes referred to as inline or back-end deduplication.
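A content-addressed store is one common way to implement the deduplication described above: each unique chunk is stored once under its hash, and files are recorded as lists of chunk hashes. The sketch below is a toy in-memory version under those assumptions; production systems add persistence, variable-size chunking, and reference counting.

import hashlib

def dedup_store(files: dict) -> tuple:
    """files maps name -> bytes. Returns (chunk_store, manifests), where
    chunk_store maps SHA-256 digest -> chunk and manifests maps
    name -> list of digests. Identical chunks are stored only once."""
    chunk_size = 4096
    chunk_store = {}
    manifests = {}
    for name, data in files.items():
        digests = []
        for i in range(0, len(data), chunk_size):
            chunk = data[i:i + chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            chunk_store.setdefault(digest, chunk)  # store each unique chunk only once
            digests.append(digest)
        manifests[name] = digests
    return chunk_store, manifests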
Duplication
Sometimes backups are duplicated to a second set of storage media. This can be done to rearrange the archive files to optimize restore speed, or to have a second copy at a different location or on a different storage medium—as in the disk-to-disk-to-tape capability of Enterprise client-server backup.
Encryption
High-capacity removable storage media such as backup tapes present a data security risk if they are lost or stolen. Encrypting the data on these media can mitigate this problem, however encryption is a CPU intensive process that can slow down backup speeds, and the security of the encrypted backups is only as effective as the security of the key management policy.
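For the encryption step, a symmetric scheme is typical; the sketch below uses the third-party cryptography package's Fernet recipe — an assumption about tooling, not a requirement of any backup product. Key management (where the key lives and who can read it) remains the hard part, as noted above.

from cryptography.fernet import Fernet  # third-party "cryptography" package, assumed installed

def encrypt_backup(archive_bytes: bytes) -> tuple:
    """Encrypt an archive with a freshly generated symmetric key.
    Returns (key, ciphertext); the key must be stored securely and
    separately from the backup, or the data cannot be recovered."""
    key = Fernet.generate_key()
    ciphertext = Fernet(key).encrypt(archive_bytes)
    return key, ciphertext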
Multiplexing
When there are many more computers to be backed up than there are destination storage devices, the ability to use a single storage device for several simultaneous backups can be useful. However, cramming more backups into the scheduled backup window via "multiplexed backup" is used only for tape destinations.
Refactoring
The process of rearranging the sets of backups in an archive file is known as refactoring. For example, if a backup system uses a single tape each day to store the incremental backups for all the protected computers, restoring one of the computers could require many tapes. Refactoring could be used to consolidate all the backups for a single computer onto a single tape, creating a "synthetic full backup". This is especially useful for backup systems that do "incrementals forever"-style backups.
Staging
Sometimes backups are copied to a staging disk before being copied to tape. This process is sometimes referred to as D2D2T, an acronym for Disk-to-disk-to-tape. It can be useful if there is a problem matching the speed of the final destination device with the source device, as is frequently faced in network-based backup systems. It can also serve as a centralized location for applying other data manipulation techniques.
Objectives
Recovery point objective (RPO): The point in time that the restarted infrastructure will reflect, expressed as "the maximum targeted period in which data (transactions) might be lost from an IT service due to a major incident". Essentially, this is the roll-back that will be experienced as a result of the recovery. The most desirable RPO would be the point just prior to the data loss event. Making a more recent recovery point achievable requires increasing the frequency of synchronization between the source data and the backup repository.
Recovery time objective (RTO): The amount of time elapsed between disaster and restoration of business functions.
Data security: In addition to preserving access to data for its owners, data must be restricted from unauthorized access. Backups must be performed in a manner that does not compromise the original owner's undertaking. This can be achieved with data encryption and proper media handling policies.
Data retention period: Regulations and policy can lead to situations where backups are expected to be retained for a particular period, but not any further. Retaining backups after this period can lead to unwanted liability and sub-optimal use of storage media.
Checksum or hash function validation: Applications that back up to tape archive files need this option to verify that the data was accurately copied; a minimal hashing sketch is given after this list.
Backup process monitoring: Enterprise client-server backup applications need a user interface that allows administrators to monitor the backup process, and proves compliance to regulatory bodies outside the organization; for example, an insurance company in the USA might be required under HIPAA to demonstrate that its client data meet records retention requirements.
User-initiated backups and restores: To avoid or recover from minor disasters, such as inadvertently deleting or overwriting the "good" versions of one or more files, the computer user—rather than an administrator—may initiate backups and restores (from not necessarily the most-recent backup) of files or folders.
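The checksum validation mentioned in the objectives list can be as simple as comparing a stored digest with one recomputed from the archive after it is written; a minimal SHA-256 sketch follows (the paths and the stored digest are placeholders).

import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(1 << 20), b""):
            h.update(block)
    return h.hexdigest()

def verify(path: str, expected_digest: str) -> bool:
    """Return True if the archive at path still matches the recorded digest."""
    return sha256_of(path) == expected_digest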
See also
About backup
Backup software and services
List of backup software
Comparison of online backup services
Comparison of backup software
Glossary of backup terms
Virtual backup appliance
Related topics
Data consistency
Data degradation
Data portability
Data proliferation
Database dump
Digital preservation
Disaster recovery and business continuity auditing
World Backup Day
Notes
References
External links
Computer data
Data management
Data security
Records management | Backup | Technology,Engineering | 5,471 |
16,845,302 | https://en.wikipedia.org/wiki/PNRC1 | Proline-rich nuclear receptor coactivator 1 is a protein that, in humans, is encoded by the PNRC1 gene.
Function
PNRC1 functions as a coactivator for several nuclear receptors, including AR, ERα, ERRα, ERRγ, GR, SF1, PR, TR, RAR and RXR. The interaction between PNRC1 and nuclear receptors occurs through the SH3 domain of PNRC1.
References
Further reading
External links
Gene expression
Transcription coregulators | PNRC1 | Chemistry,Biology | 110 |
10,300,128 | https://en.wikipedia.org/wiki/Systematic%20Census%20of%20Australian%20Plants | The Systematic census of Australian plants, with chronologic, literary and geographic annotations, more commonly known as the Systematic Census of Australian Plants, also known by its standard botanic abbreviation Syst. Census Austral. Pl., is a survey of the vascular flora of Australia prepared by Government botanist for the state of Victoria Ferdinand von Mueller and published in 1882.
Von Mueller describes the development of the census in the preface of the volume as an extension of the seven volumes of the Flora Australiensis written by George Bentham. A new flora was necessary since as more areas of Australia were explored and settled, the flora of the island-continent became better collected and described. The first census increased the number of described species from the 8125 in Flora Australiensis to 8646. The book records all the known species indigenous to Australia and Norfolk Island; with records of species distribution.
Von Mueller noted that by 1882 it had become difficult to distinguish some introduced species from native ones:
The lines of demarkation between truly indigenous and more recently immigrated plants can no longer in all cases be drawn with precision; but whereas Alchemilla vulgaris and Veronica serpyllifolia were found along with several European Carices in untrodden parts of the Australian Alps during the author's earliest explorations, Alchemilla arvensis and Veronica peregrina were at first only noticed near settlements. The occurrence of Arabis glabra, Geum urbanum, Agrimonia eupatoria, Eupatorium cannabinum, Carpesium cernuum and some others may therefore readily be disputed as indigenous, and some questions concerning the nativity of various of our plants will probably remain for ever involved in doubts.
In 1889 an updated edition of the census was published, the Second Systematic Census increased the number of described species to 8839. Von Mueller dedicated both works to Joseph Dalton Hooker and Augustin Pyramus de Candolle.
The work is of historic significance as the first Australian flora written in Australia. Following its publication, research and writing on the flora of Australia has largely been carried out in Australia.
See also
Australian Bird Count (ABC)
Flora of Australia (series) (59-volume series)
References
External links
Full text of the Systematic Census of Australian Plants from the Internet Archive
1882 non-fiction books
Biological censuses
Books about Australian natural history
Florae (publication)
Botany in Australia | Systematic Census of Australian Plants | Biology | 492 |
4,191,785 | https://en.wikipedia.org/wiki/Aigo | Beijing Huaqi Information Digital Technology Co., Ltd, trading as Aigo (stylized as aigo), is a Chinese consumer electronics company. It is headquartered in the Ideal Plaza () in Haidian District, Beijing.
History
Beijing Huaqi Information Digital Technology Co Ltd (北京华旗资讯科技发展有限公司) is a consumer electronics manufacturer headquartered in Beijing. It was founded in 1993 by Féng Jūn, who remains the company's president. The company initially produced keyboards. aigo may be participating in a trend that sees Chinese nationals preferring to purchase locally produced durable goods.
Products
Aigo's products include MIDs, digital media players, computer cases, digital cameras, CPU cooling fans, computer peripherals, monitors and computer mice.
Subsidiaries
Aigo has 27 subsidiaries and several R&D facilities; some of them are listed below.
Aigo Music
Established in 1993 and located in Beijing, aigo Music operates a digital music service much like iTunes. The first of its kind in China, it is, as of 2009, the biggest portal for legal downloading of music in the country. Strategic partnerships with Warner Music, EMI and Sony allow a wide range of music to be offered at 0.99 yuan per song.
Beijing aifly Education and Technology Co Ltd
aigo set up this English as a Second Language brand with help from Crazy English founder Li Yang.
Beijing aigo Digital Animation Institution
An aigo subsidiary that specializes in 3D animated films.
Huaqi Information Technology (Singapore) Pte Ltd
Set up in October 2003, it operates two official aigo outlet stores in Singapore.
Shenzhen aigo R&D Co Ltd
Established in 2006, this Shenzhen-based research and development facility focuses on the development of mobile multimedia software.
Sponsorships
aigo is a sponsor of a number of sporting events, the majority involving automobile racing.
Motorsport
aigo was an official partner of the Vodafone McLaren Mercedes Formula One team.
As of 2008, aigo sponsored Chinese driver "Frankie" Cheng Congfu in A1GP racing.
aigo was an official partner of the 2007 Race of Champions, a racing competition that uses a variety of different vehicles.
aigo was one of the sponsors of Bryan Herta Autosport during the Indianapolis 500.
Football
aigo, as of 2009, had a global strategic cooperation effort with Manchester United.
Notes
References
External links
Aigo
aigo tagged posts @ gizmodo.com
aigo tagged posts @ engadget.com
Electronics companies of China
Chinese brands
Consumer electronics brands
Companies established in 1993
Computer hardware companies
Computer storage companies
Computer systems companies
Manufacturing companies based in Beijing
Privately held companies of China
1993 establishments in China
1993 in Beijing | Aigo | Technology | 550 |